Actuaries are well placed to help humanity prevent many of the gravest risks it faces, argues Sanjay Joshi
In early 2020, my partner and I were trying for a baby; then COVID-19 struck. My savings tumbled precipitously, global death rates soared, and I started to wonder whether I really wanted to bring a child into this world.
All being well, my now-pregnant wife will have given birth by the time this article has gone to press. However, my daughter will be born into a world still rife with risks from this pandemic, future pandemics, climate change, global conflict and emerging technologies.
Actuaries are excellent risk managers, and we can use quantitative risk management skills to make financial institutions safer. However, we can make the wider world safer, too – and perhaps help to create a world that we are happier to bring children into. An approach that helps us to achieve this in an insurance context might be:
- Leverage the recently growing field of existential risks (x-risks) to gain more insight into tail risks
- Incorporate those insights into explicit models, linking to capital requirements
- Take bold action to reduce those risks in the real world, leading to a lower capital burden for the insurer and a safer world.
In the case of pandemics, it might mean more modelling of how pandemics could arise, leading to capital requirements that capture those risks in a more granular way. Work done by insurers to reduce those risks, making pandemics less likely, could lead directly to a release of capital, giving insurers an extra incentive to make the world safer for everyone.
Any extra incentive to make the world safer is a good thing. It also encourages decision-making based on models of how bad those risks are, and model-based decision-making is one of the great ways the actuarial profession can contribute to the world.
Interpolating vs extrapolating: what can x-risks tell us?
Models have traditionally tended to rely heavily on historical data. Inevitably, historical data is dominated by ‘normal’ events (those that occur towards the middle of the probability distribution); the data provides scant insight about what happens at the tails.
It is hard to understand the tail of a distribution based mostly on its middle – we have to extrapolate and use judgment. If there was something at the extreme tail that we could quantify, our task of understanding the tail would be much easier – more like interpolating than extrapolating. Cue the recently growing field of x-risks: the study of risks that could pose a huge threat to humanity, such as human extinction. Until recently, this field received relatively little attention in academia, although this is starting to change with the emergence of groups such as the Centre for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at the University of Oxford.
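To make the interpolation idea concrete, here is a toy sketch. All numbers are hypothetical, chosen only to show the mechanics: we assume a power-law tail for annual pandemic severity, anchor it at one ‘moderate’ tail point estimated from historical data and one extreme x-risk point, and interpolate a 1-in-200-year severity between them rather than extrapolating beyond the data.

```python
import math

# Toy illustration of interpolating a tail rather than extrapolating it.
# All numbers are hypothetical. Severity x = fraction of population
# affected; assumed power-law tail:
#   P(annual loss exceeds x) = p1 * (x / x1) ** (-alpha)

# Anchor 1: a 'moderate' tail event estimated from historical data
x1, p1 = 0.001, 0.02      # 0.1% of population; ~1-in-50-year event
# Anchor 2: an x-risk estimate at the extreme end of the distribution
x2, p2 = 1.0, 3.4e-4      # extinction-level severity; ~1-in-3,000-year event

# Fit the tail index from the two anchors (the log-log slope)
alpha = math.log(p1 / p2) / math.log(x2 / x1)

# Interpolate the 1-in-200-year severity between the two anchors
p_target = 1 / 200
x_200 = x1 * (p1 / p_target) ** (1 / alpha)
print(f"tail index alpha = {alpha:.3f}")
print(f"1-in-200-year severity ~ {x_200:.1%} of population")
```

With an extreme anchor, the 1-in-200 point sits between two estimates instead of beyond both of them; the resulting figure is no less uncertain, but the shape of the tail is now constrained at both ends.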
Let’s take the example of pandemics. Pandemics have generally involved ‘naturally’ occurring pathogens, such as zoonotic diseases – diseases that have moved from animals to humans. To understand tail risks for naturally occurring pathogens, a model of the likelihood of such a pathogen emerging might explore:
- At what rate deforestation is occurring, because this disturbs animals – especially bats – which could, in turn, lead to emerging zoonotic disease
- At what rate animal agriculture is growing, and to what extent it encompasses the intensive farming methods that could lead to antimicrobial resistance.
(For brevity, I haven’t covered other factors, such as how much the pathogen would spread or how well prepared humanity is for a pandemic.)
We can gather data about these factors and extrapolate from it. Under the interpolation approach, on the other hand, we also consider risks that could lead to outcomes as extreme as human extinction. At first glance, it might not be clear why this would help. If it’s hard to quantify moderately bad tail risks, surely it’s harder to quantify more extreme tail risks? The answer is that the more extreme risks can tell us something qualitative, even if the quantification is challenging.
For example, according to the book The Precipice, written by Toby Ord from the Future of Humanity Institute, the probability of human extinction from a pandemic during this century is around one in 30. (Note: I would not suggest you trust this number blindly.) This seems surprisingly high, given that humanity has already survived for substantially more than 30 centuries. The main reason for this apparent discrepancy is the risk from engineered pathogens – pathogens made in a lab.
In our actuarial capacity, we probably aren’t intrinsically worried about human extinction events – in that scenario, the last thing we would care about is whether an insurer was going to honour its obligations! So let’s bring this back to the actuarial domain and apply it to, say, a one-in-200-year event, or a risk assessment more suitable for Solvency II Pillar 2. Should engineered pathogens be an important component of our model? Unless we totally disbelieve the one-in-30-per-century estimate, the answer may well be yes. In this case, several other questions would become important:
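As a rough sanity check on that reasoning, we can convert the per-century estimate to an annual probability and compare it with a 1-in-200-year threshold. This assumes, simplistically, a constant hazard rate over the century:

```python
import math

# Convert a ~1-in-30 per-century estimate to an annual probability,
# assuming (simplistically) a constant hazard rate over the century.
p_century = 1 / 30
p_annual = 1 - (1 - p_century) ** (1 / 100)

# A Solvency II-style 1-in-200-year event has annual probability 0.5%
p_200 = 1 / 200

print(f"annual extinction-level probability ~ 1 in {1 / p_annual:,.0f}")
print(f"share of the 1-in-200 annual probability: {p_annual / p_200:.1%}")
```

The extinction-level scenario alone accounts for a non-trivial slice of the 1-in-200 annual probability – and pandemics severe enough to matter to an insurer, while falling well short of extinction, would be considerably more likely still, which is why engineered pathogens plausibly belong in a 1-in-200 model.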
- What types of research have been/are occurring, and how much of that research involves ‘dual-use’ research in which potentially dangerous pathogens may be created? For example, in 2011, some researchers took H5N1 flu – which has an estimated 50%–60% fatality rate – and modified it to be more transmissible. A pathogen that is far more fatal than COVID-19 while also being highly transmissible would be devastating.
- How safe are labs? How likely is it that a potentially dangerous pathogen created for research purposes could leak from a lab? In June 2021, Dr Filippa Lentzos of King’s College London published an article arguing that, of the 59 labs that handle the most dangerous pathogens – the so-called ‘biosafety level 4’ labs – only a quarter score highly on safety.
- How many states are furthering illegal bioweapons programmes? We know that some, including Russia, have bioweapons programmes – contrary to international law.
- There are regimes to enable international collaboration to stop bioweapons from being developed and used, but how strong are they? The existing 1975 Biological Weapons Convention is profoundly underfunded, having a smaller budget than a typical McDonald’s restaurant.
- How likely is bioterrorism? How much easier is it (now and in the coming years) for a lone scientist to create a highly pathogenic virus? How much is happening on the part of vendors of biotech and regulators to control those risks?
Can we quantify these extreme risks?
Let’s say we want to explicitly incorporate these considerations in a model of how pandemic risk could affect a life or non-life insurer. Is it even possible to quantify these extreme risks? And is there any benefit in quantifying those risks?
Quantifying these risks may seem fiendishly difficult, but the actuarial profession has surmounted similar challenges before. For example, longevity event risks also refer to events that have not happened before and whose emergence depends on unknown future circumstances; nonetheless, those risks can be quantified in a Solvency II Pillar 1 context. Techniques that can be used when data is scarce include expert surveys and Bayesian methods. Communicating the uncertainty or the error bars becomes important when modelling without lots of historic data, but it can be done. Furthermore, risks do not necessarily need to be modelled in a Solvency II Pillar 1 way in order to be understood or acted upon.
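As a minimal sketch of the Bayesian approach with scarce data, consider estimating an annual lab-leak probability from an expert prior plus a short observation window. Every number here is invented for illustration; the point is the conjugate Beta-Binomial update, not the figures:

```python
# Minimal Bayesian sketch for scarce data: estimating an annual
# lab-leak probability. All numbers are hypothetical.

# Expert prior: Beta(a, b) with mean a / (a + b) = 2%
a, b = 1.0, 49.0

# Hypothetical observations: 2 leak-years in 40 years of records
leaks, years = 2, 40

# Conjugate Beta-Binomial update of the prior with the observations
a_post, b_post = a + leaks, b + (years - leaks)
posterior_mean = a_post / (a_post + b_post)
print(f"posterior annual leak probability ~ {posterior_mean:.1%}")
```

The weakly informative prior keeps the estimate stable despite only 40 data points, while the data still moves it; reporting the full posterior, rather than a point estimate, is one way to communicate the error bars mentioned above.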
The models would also need to explore what happens when a severe pandemic strikes – in other words, how it would affect the insurer’s balance sheet. One example of this is asset risk: early 2020 saw severe asset volatility. Furthermore, future pandemics could incapacitate enough people that critical infrastructure no longer functions, which could lead to deeper and more persistent asset shocks.
Similarly, on the liability side, not only would mortality and morbidity exposures be significant, but COVID-19 has highlighted that non-life insurers can also be affected by pandemics. Worryingly, engineered pandemics could operate in several hard-to-anticipate ways, which could cause them to interact with other non-life insurance liabilities beyond those most badly hit by COVID-19. But even if we can quantify those risks, is it worth it? Does it yield an insight that is actually decision-relevant?
Let’s say a life insurance company is choosing how much longevity, mortality and asset risk to have on its balance sheet. To make this decision, it models how much capital it believes it should hold for each risk. It might then use this capital model to guide business decisions to reposition its risk profile.
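A stylised version of that capital calculation, with invented figures, shows the mechanics: standalone capitals per risk are combined through a correlation matrix using the usual square-root aggregation formula, and it is this diversified figure that business decisions would be tested against.

```python
import math

# Stylised capital aggregation with invented figures: standalone
# capitals for three risks, combined via a correlation matrix using
# the square-root formula  C = sqrt(c' R c).
risks = ["longevity", "mortality", "asset"]
c = [100.0, 80.0, 150.0]            # standalone capital per risk
R = [[1.00, -0.25, 0.25],           # longevity partly hedges mortality
     [-0.25, 1.00, 0.25],
     [0.25, 0.25, 1.00]]

diversified = math.sqrt(sum(c[i] * c[j] * R[i][j]
                            for i in range(3) for j in range(3)))
print(f"sum of standalone capitals: {sum(c):.0f}")
print(f"diversified capital:        {diversified:.0f}")
```

A systemic risk such as a pandemic could, in principle, enter the same calculation as an extra row and column – but only if its correlations with the other risks can be meaningfully estimated, which is exactly where the difficulty described next arises.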
This example does not include any modelling of systemic risks (for example, what if there is a climate catastrophe, or a pandemic worse than COVID-19?). If the systemic shock is going to harm all assets and liabilities indiscriminately, modelling the risk won’t tell you how to reposition your risk profile. You might argue that as long as the company holds enough capital – for example, by holding a buffer in excess of the solvency capital requirement – then the insurer’s actuaries have done their job.
The above example focused on decisions an insurance company might make to influence its own operations. However, insurers can make other decisions, including decisions that have a positive influence on the wider world. As asset owners, insurers hold trillions of dollars of assets, and could have substantial influence over the real economy. They may find that directly funding things that make the world better could be a net positive for them, after accounting for the cost of doing so and the potential benefit to their balance sheets. Insurers also have a substantial voice and can influence society; similar comments could be made about the actuarial profession.
The actuarial community has been very active during the COVID-19 pandemic, and has much to be proud of. My hope is that our modelling and risk management capabilities will help us prevent the next one.
All views are the author’s own, not necessarily those of his employer or any other organisation he is associated with.
Sanjay Joshi specialises in ESG at Hymans Robertson