The path to resilience starts with good-enough disaster risk modeling
We all agree on the need for disaster-resilient infrastructure. But what does it take to get there? If we hope to achieve resilience to disasters, we should start with robust and fit-for-purpose risk modeling.
What "robust" or "fit-for-purpose" means is debatable, so let me list some features that I think can help narrow down our modeling requirements:
· Multi-hazard. This, although obvious, is not as simple as it sounds. It poses the need to build distinct models for each hazard-exposure-vulnerability combination. In addition, we need to express risk in the same manner across hazards, making their impacts commensurable.
· Probabilistic. If we can hardly know the weather conditions two weeks from now, what makes us think we can have any kind of certainty on when, how, and where the next natural event will occur, and to what extent it will damage our infrastructure? A probabilistic model rationally incorporates uncertainty into the results.
· Stochastic. This is not the same as the previous feature: it means the model should rely on both the physics of the phenomena and the randomness of their occurrence. Many thousands of simulated hazard events are commonly required to build a reasonably exhaustive set of the possible consequences our infrastructure systems must face.
· Non-stationary. Disaster risk models are usually stationary. This works fine for hazards such as earthquakes. However, if we seek to model hazards that may be altered by background trends like climate change, we need to move to a non-stationary approach, which transforms all risk metrics into functions of time.
· Interdependency. Infrastructure elements belong to systems whose performance degrades when one or more of their components are damaged. We are now talking about how the service rendered by the system begins to decline. And this can be very tricky to assess, because natural phenomena can affect both the demand for and the supply of whatever commodity flows through the system.
· Handle deep uncertainty. Suppose we want to incorporate a variable for which we cannot confidently define a probability distribution or even bounds, which cannot be modeled without huge arbitrariness, and on whose behavior there is no settled consensus. In this case, we are facing deep uncertainty. But don't worry, the problem is far from intractable. Many uncertainty theories have been developed precisely to tackle such difficult problems: e.g. fuzzy set theory, Dempster-Shafer theory, random set theory, and fuzzy random set theory, among others. Any of these approaches ultimately allows risk metrics to be quantified in the form of imprecise probabilities.
· Oriented to decision-making. Models are built for a purpose. If we hope to engage stakeholders and provide them with comprehensive information, we need to understand their needs. The model must account for the size of expected impacts and their probability of occurrence, but more importantly, it must allow testing the effectiveness of risk management strategies.
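To make the probabilistic and stochastic features above concrete, here is a minimal sketch of an event-based Monte Carlo simulation. All parameters (the annual event rate, the lognormal loss distribution) are entirely hypothetical, chosen only for illustration; a real model would derive them from hazard physics and vulnerability functions. It produces two standard risk metrics: the average annual loss and a loss exceedance curve.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hazard model: damaging events arrive as a Poisson process
# (stationary here for simplicity; a non-stationary model would make the
# rate a function of time), and each event's loss is lognormal.
ANNUAL_RATE = 0.8     # mean number of damaging events per year (assumed)
LOSS_MEDIAN = 10.0    # median loss per event, arbitrary monetary units
LOSS_SIGMA = 1.0      # lognormal dispersion of per-event loss
N_YEARS = 50_000      # number of simulated years

# Simulate total loss in each year: draw the event count, then sum
# the losses of the individual events.
event_counts = rng.poisson(ANNUAL_RATE, size=N_YEARS)
annual_losses = np.array([
    rng.lognormal(np.log(LOSS_MEDIAN), LOSS_SIGMA, size=n).sum()
    for n in event_counts
])

# Average annual loss (AAL), and the loss exceedance curve: the annual
# probability that total losses exceed a given threshold.
aal = annual_losses.mean()
thresholds = np.array([1.0, 10.0, 50.0, 100.0])
exceedance_prob = [(annual_losses > t).mean() for t in thresholds]

print(f"Average annual loss: {aal:.2f}")
for t, p in zip(thresholds, exceedance_prob):
    print(f"P(annual loss > {t:5.1f}) = {p:.4f}")
```

The same loop structure extends naturally to the other features in the list: one simulation per hazard with losses expressed in common units gives multi-hazard commensurability, and comparing exceedance curves with and without a mitigation measure is one way to test the effectiveness of a risk management strategy.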
If all those features are met, we can have some peace of mind that our model should at least be reasonably robust. Don't forget that, as George Box put it, "all models are wrong, but some are useful".
By: Dr. Gabriel Bernal, Chief Scientific Officer, INGENIAR
The views and opinions expressed in this blog are those of the author and do not necessarily reflect those of the Coalition for Disaster Resilient Infrastructure (CDRI).