
In an increasingly uncertain world, insurers can use behavioural economics to help them develop more accurate foresight, says Maura Feddersen
Talk of uncertainty has been on the rise since the beginning of the millennium, with the financial damage from withheld investments and misplaced bets running into the billions of dollars. One measure of global sentiment, the World Uncertainty Index (WUI), shows a 43% rise in uncertainty since the start of the 21st century (Figure 1). Spikes are associated with the European sovereign debt crisis, the US fiscal cliff in 2013 and COVID-19, and the index has trended upward overall.
Unprecedented uncertainty
There’s a hefty price to pay for economic uncertainty. The rise in uncertainty in the first quarter of 2022 alone, largely associated with the war in Ukraine, is estimated to reduce full-year global growth by up to 0.35 percentage points. That’s roughly equivalent to the size of the Finnish economy.
It’s no surprise that there has been growth in the prevalence of providers that help organisations to better anticipate trends, take action sooner and become more resilient. This ‘prediction industry’ taps into the expert judgment of crowds or top performers, and includes the forecasting competitions of Good Judgment Project and Metaculus, as well as prediction markets such as Kalshi.
(Re)insurers, too, are investing to professionalise the development and deployment of human judgment in high-stakes decisions.
Prediction errors
Off-the-mark predictions are truly costly for organisations that are sensitive to shifts in their environment, so it’s worth investing in employees’ judgment and decision-making capabilities. When predictions go wrong, they go wrong broadly in one of two ways.
The first of these is inconsistency – or noise. An underwriter who takes one view about a premium today may take a different view tomorrow. Furthermore, people differ more from their colleagues than they anticipate. Two underwriters randomly picked to assess the same costing case may expect to differ by only 10%, but then choose premia that differ by 55% – something that we have replicated in a study.
Such noise is far from trivial. Imagine a policy with an actuarially fair price of US$10,000, where one underwriter quotes US$12,750 and another US$7,250. At the higher price, a client may well walk away; at the lower price, the deal would be unprofitable. The insurer loses either way, so the damage from noise does not average out.
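To see why, here is a minimal sketch in Python (the 5% profit margin and the simple ‘client walks away above fair price’ rule are illustrative assumptions, not figures from the article):

```python
FAIR_PRICE = 10_000   # actuarially fair premium from the example above
MARGIN = 0.05         # assumed profit margin on a correctly priced deal

def loss_from_quote(quote: float) -> float:
    """Illustrative, asymmetric cost of a noisy quote.

    Overpricing: the client walks away and we forgo the expected margin.
    Underpricing: we win the deal but absorb the shortfall to fair price.
    """
    if quote > FAIR_PRICE:
        return FAIR_PRICE * MARGIN   # profit lost on a lost deal
    return FAIR_PRICE - quote        # unprofitable business written

quotes = [12_750, 7_250]             # the two underwriters above
losses = [loss_from_quote(q) for q in quotes]
print(losses, sum(losses) / len(losses))   # [500.0, 2750] 1625.0
```

Although the two quotes straddle the fair price symmetrically, the average loss is well above zero: the errors do not cancel.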
The other way predictions go wrong is through bias. Some bias is driven by cognitive patterns, which we can understand through a behavioural economics lens, and some is driven by incentives.
For example, we tend to be overly optimistic. In underwriting a flood risk, we might assume that a client’s planned flood defences will be operational, without confirming it. Or we might overlook a key risk: in one study, we found that 10% of underwriters missed the bushfire risk at an Australian winery estate, despite information about this risk being available to them. Incentives can also play a role where business priorities push towards reaching closure.
How can behavioural science help?
Behavioural and decision science offers a host of tried and tested approaches for sharpening our foresight. This is crucial in the face of rising uncertainty, where quantitative models, without expert judgment, tend to fall short. Two examples come to mind.
“Thinking in shades of grey is clearly against our inclination to come to a single point of view and reach closure”
Thinking in shades of grey
To fight confirmation bias – where we miss information that challenges our pre-held beliefs, leading to blind spots – we might consider forecasting not only a point estimate (for example, ‘annual inflation is forecast at 8%’), but also a meaningful range of possible outcomes (‘annual inflation could reasonably fall between 5% and 10%’). Doing this reminds us that forecasts carry uncertainty, and that we should perhaps seek further inputs.
Thinking in shades of grey clearly runs against our inclination to settle on a single point of view and reach closure. Imagine taking part in a ‘shock test’, where the goal is to estimate a range such that you would be shocked if the result fell outside it.
In repeated testing, we found that people tend to set ranges half the size they should be. Asked to produce a range that would contain the result in 90% of cases, participants shared ranges that contained the correct answer only 45% of the time. Grappling effectively with the uncertainty inherent in forecasting helps us avoid costly misses that could easily lie just outside our field of vision.
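A calibration check of this kind can be run on any set of recorded forecasts. The sketch below uses hypothetical interval data (the numbers are invented for illustration) to compute the realised coverage of ranges that forecasters intended as 90% intervals:

```python
# Hypothetical records: (lower bound, upper bound, actual outcome) for
# ranges each forecaster intended to contain the truth 90% of the time.
intervals = [
    (5.0, 10.0, 8.1),
    (2.0, 4.0, 5.3),
    (1.0, 9.0, 6.4),
    (3.0, 6.0, 7.7),
]

hits = sum(low <= actual <= high for low, high, actual in intervals)
coverage = hits / len(intervals)
print(f"Stated confidence: 90% | realised coverage: {coverage:.0%}")  # 50%
```

Realised coverage far below the stated confidence – the study above found 45% against a stated 90% – signals overconfidence: the ranges are systematically too narrow.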
‘What if we’re wrong?’
The great English economist John Maynard Keynes is often credited with the remark: “When the facts change, I change my mind. What do you do?” This is easier said than done, as we often become wedded to our views – especially once we have expressed them to others and don’t want to appear ‘fickle’. ‘Active open-mindedness’ – a willingness to adopt different viewpoints and integrate them meaningfully into a more nuanced perspective – has been found to be one of the most reliable predictors of forecasting outperformance.
To prompt ourselves to look beyond the tips of our noses, we can ask a ‘pre-mortem’ question: assume the forecast will turn out to be wrong, then ask why. This is a powerful way to consider different viewpoints in a non-confrontational way. Beyond thinking in ranges or constantly playing devil’s advocate, forecasters can adopt several additional mindset shifts and techniques to better navigate growing uncertainty in their operating environments. But how can these best practices be scaled across an organisation?
Accessible and scalable best practice
The following may be helpful components of a strategy for scaling the best practices that sharpen an organisation’s foresight.
1 Start small to illustrate impact
An early pilot with concrete results can be a foundation for further work and buy-in from other areas of an organisation. In a pilot at Swiss Re, a team of underwriters and reserving actuaries achieved a 5% improvement in accuracy when forecasting key parameters across the EMEA region. This laid the foundation for global implementation across the property and casualty business.
2 Focus on people and capabilities
Optimising expert judgment is first and foremost about people and their capabilities and mindsets. Many of the best practices can be learned so that they become second nature. We integrate judgment and decision-making best practices into a framework of skills and experience required of underwriters at differing levels of authority. We offer training, e-learning, guided workshops, and test and feedback opportunities, as well as a forecasting competition for those who want to flex their forecasting muscles.
3 Enable with technology and data
The next step on from actuaries’ and underwriters’ use of tailored checklists is a custom-built digital tool that guides Swiss Re forecasters through a step-by-step process designed to minimise noise and bias. It involves ‘individual thinking’, in which forecasters develop their own outlook, and a ‘group exchange’, in which they benefit from viewpoint diversity. Capturing information throughout this process means that, in many cases, it’s possible to learn not only how robust the approach was, but also how much it boosted forecasting accuracy. This opens up the possibility of continual, evidence-driven process tweaks.
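The article does not describe the tool’s internals, so the following is only a generic sketch of the ‘individual thinking, then group exchange’ pattern it names; the `aggregate` helper and the numbers are hypothetical:

```python
from statistics import median

def aggregate(individual_forecasts: list[float]) -> float:
    """Pool independently formed views before discussion begins.

    A median damps the influence of any single outlier while preserving
    the diversity of views -- one common way to reduce noise in
    judgment-based forecasts.
    """
    return median(individual_forecasts)

# Step 1: 'individual thinking' -- each forecaster records a view alone,
# before seeing anyone else's, to avoid anchoring on early speakers.
views = [0.042, 0.055, 0.048, 0.090, 0.051]

# Step 2: 'group exchange' -- discussion starts from the pooled estimate.
print(f"Pooled starting estimate: {aggregate(views):.3f}")   # 0.051
```

Recording the individual views before the exchange is also what makes later accuracy measurement possible: each stage’s contribution can be evaluated separately.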
Leveraging the science of human judgment
When forecasters fall prey to over-optimism and confirmation bias, it is likely to cloud their organisation’s view of the future.
For insurers, poor judgment means poor actual-versus-expected claims performance, slow reaction to emerging issues (such as the impact of COVID-19 and inflation) and missed business opportunities (such as in climate-risk solutions).
Good judgment, on the other hand, is invaluable. In exceptionally uncertain times, predictions that rely on quantitative models alone can underperform, because data may be scarce and key factors may not be captured in the model. Combining robust expert judgment with quantitative model outputs, however, is a recipe for enhanced prediction accuracy.
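As one hedged illustration of that recipe, the sketch below blends a model output with an expert view using a simple linear opinion pool; the function, the 30% expert weight and the inflation figures are illustrative assumptions, not anything specified in the article:

```python
def combined_forecast(model_output: float,
                      expert_view: float,
                      expert_weight: float = 0.3) -> float:
    """Linear opinion pool: weighted blend of model and expert.

    The weight on the expert view might be raised in regimes the model
    was never trained on (for example, a pandemic). The 0.3 default is
    an arbitrary placeholder, not a recommendation.
    """
    return (1 - expert_weight) * model_output + expert_weight * expert_view

# Model extrapolates 4% claims inflation; the expert, aware of disruption
# the model cannot see, judges 7% more plausible.
print(f"{combined_forecast(0.04, 0.07):.3f}")   # 0.049
```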
There’s a caveat: to generate such robust judgment, we ought to understand how humans form the most accurate view of the future. Our key allies in sharpening our vision will be behavioural economics insights and digital tools that weave best practice into our day-to-day work.
As the growth of the prediction industry and applications of best practice in large organisations show, the race to professionalise judgment has started. Insurers, which are reliant on accurate forecasting, cannot afford to be left behind.
Cautionary note on forward-looking statements
Certain statements and illustrations contained herein are forward looking. These statements (including as to plans, objectives, targets and trends) and illustrations provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to a historical fact or current fact. Further information on forward-looking statements can be found in the Terms of Use section on Swiss Re’s website.
Maura Feddersen is vice president at Swiss Re