As humans, we are very poor at assessing risks. We find it really difficult to put risk in context and apply accurate measurements.
In the United States, figures from the National Safety Council show that we are approximately 100 times more likely to die while travelling in a car than when travelling in an airplane. Yet, perhaps because we rarely hear about the regular traffic accidents that consistently result in fatalities, the risk of travelling by car doesn’t normally enter our consciousness. In contrast, news sources tend to give significant coverage to stories of a single aircraft accident that results in multiple fatalities. This information becomes our source for calibrating the risks of travel: we hear about aircraft accidents; we don’t often hear about traffic accidents, so aircraft travel must be riskier. This type of information provides us with incorrect mental shortcuts.
In Homo Deus: A Brief History of Tomorrow, Yuval Noah Harari gives an example: in 2012, about 56 million people died throughout the world. Of these, 620,000 died from human violence (war killed 120,000 and crime 500,000), whereas 800,000 committed suicide and 1.5 million died from diabetes. And if you look at global figures from 2010, obesity and related illnesses killed about 3 million people, while terrorists killed 7,697. You are therefore almost 400 times more likely to die from overeating than from a terrorist attack, yet how many of us think about the risk of eating a tasty dessert after our meal, compared with our preoccupation with the threat of terrorism?
This leads into how we assess project risks. We are likely to take the same approach that we do in our everyday life when we assess the risks associated with our projects. We are more likely to apply a higher weighting to risks that we have encountered previously, or ones that we are familiar with, than to other less obvious risks.
It’s important for us to be able to assess risks as accurately as we can and use objective methods for measuring risk where possible.
Your model can be the basis for objectively assessing risk. At present we tend to focus on the risks that we are aware of, usually ones we have experienced first-hand. But, throughout our working life, we probably only see a handful of projects right through from the start of design to handover, which means our awareness of possible risks is limited.
We should use different and objective methods to ensure that all possible risks are assessed equally and without bias.
If risks are managed and integrated within our model, we can apply objective measurements to those risks. As a simple example, it would mean the assessment of an atrium that requires high-level access to maintain building services equipment could be based on actual data, such as the number of incidents that occur when accessing building services at high level.
We may find that maintenance access for a certain type of building service has a far higher risk than other types of building services. This will result in design decisions that focus on eliminating hazards that have the highest risk.
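The comparison described above can be sketched in a few lines. This is a minimal illustration only: the service types and incident figures are invented, and a real implementation would draw on recorded incident data from many projects.

```python
# Hypothetical incident records: (service type, recorded incidents, maintenance tasks).
# All figures are invented for illustration.
incident_data = [
    ("high-level ductwork", 14, 200),
    ("ground-level pumps", 3, 500),
    ("roof-mounted plant", 9, 150),
]

def rank_by_incident_rate(records):
    """Rank service types by incidents per maintenance task, highest risk first."""
    rates = [(name, incidents / tasks) for name, incidents, tasks in records]
    return sorted(rates, key=lambda item: item[1], reverse=True)

for name, rate in rank_by_incident_rate(incident_data):
    print(f"{name}: {rate:.3f} incidents per task")
```

Ranking by rate rather than raw incident counts matters: a service type with few incidents may still be the riskiest if it is rarely maintained.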
This type of data is not yet widely available and it has to become a long-term goal for the whole of the industry. Once the wider industry adopts multiple digital models containing this type of information and accurately records incidents, we will reach a critical mass of data that will result in informed and accurate risk reduction.
Assessment of risks should not stop at the risks we can identify. Instead, we should assess risks by analysing our digital models. We want our analysis of risk to inform us of where our project will fail before it fails. We want to be informed where programmes will overrun and where design will stall, so we can make plans to avoid the avoidable.
The assessments should be fully objective and without bias, to allow us to focus on the critical risks. The risk analysis should also take into account design progress and identify whether an area of design has fallen behind programme. This is especially important when delays in making specific design decisions could affect the completion of a project. It’s all too easy to think that delays during the design stages matter less than delays during construction, yet both can affect the completion date, and delays during design often reduce the time available for later stages.
We should be analysing our models using algorithms: series of logical instructions that can be followed by a computer. Algorithms can perform the task of analysing our models far better than we can. A computer checks everything you want to be checked; it won’t get bored, get distracted or take shortcuts. This enables us to manage projects using an analytical approach and to monitor trends: analytical project management rather than the all-too-common reactive and subjective project management.
We shouldn’t need to manually add a risk for accessing building services within an atrium. It should be added to the model automatically, as it is a known and recorded risk.
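Automatic tagging of this sort amounts to a rule applied to every element in the model. The sketch below is illustrative only: the element attributes, the 3 m threshold for “high-level” access and the risk label are all assumptions, not part of any real modelling standard.

```python
# Hypothetical model elements; attribute names and values are illustrative.
elements = [
    {"id": "AHU-01", "category": "building services", "height_m": 9.0, "zone": "atrium"},
    {"id": "PUMP-02", "category": "building services", "height_m": 1.2, "zone": "plant room"},
    {"id": "COL-17", "category": "structure", "height_m": 9.0, "zone": "atrium"},
]

HIGH_LEVEL_THRESHOLD_M = 3.0  # assumed cut-off for "high-level" access

def auto_tag_risks(model_elements):
    """Attach the known, recorded high-level access risk to any building
    services element installed above the threshold height."""
    return [
        {"element": el["id"], "risk": "high-level maintenance access"}
        for el in model_elements
        if el["category"] == "building services"
        and el["height_m"] > HIGH_LEVEL_THRESHOLD_M
    ]

print(auto_tag_risks(elements))
```

Because the rule runs over the whole model, no element that meets the criteria can be missed, regardless of whether a designer happened to think of it.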