Has your ERM framework missed a fundamental risk type? Graham Fulcher and Matthew Edwards explore the issue

Enterprise risk management (ERM) in insurance firms has concentrated on such tangible risks as mortality, reserving, financial, catastrophe and operational risk. In addition, insurers, particularly life insurers, have increasingly considered the behavioural traits of their policyholders. But insurers should pay particular attention to the behaviour of their own risk stakeholders, because that behaviour is itself a considerable source of risk. Indeed, it is a risk as important as model risk, if not more so, sitting 'above' many other sources of risk.
To put this idea in context, it is instructive to consider how risk management has evolved over the last 10-15 years.
The focus on sources of variation (that is, risk) was initially all about parameter risk: by how much might equities move? Attention moved on to consider, in particular, how these different parameter risks might interact; and then, with such aspects 'solved', less tangible but very important forms of risk such as basis risk and model risk came into view. But even if a firm has all of these supposedly under control, there is still enormous scope for damage from the behavioural characteristics of its risk stakeholders.
Behavioural economics
Over the last 30 or so years, the term 'behavioural economics' has become common currency; indeed, the FCA's first 'occasional paper' was entirely devoted to the subject (Applying Behavioural Economics at the Financial Conduct Authority, April 2013).
The FCA defines behavioural economics as an area that "uses insights from psychology to explain why people behave the way they do. People do not always make choices in a rational and calculated way. In fact, most human decision-making uses thought processes that are intuitive and automatic rather than deliberative and controlled."
In our view, chief risk officers (CROs) of insurance companies and others involved in ERM have much to gain from an understanding of behavioural economics. One of the important roles of ERM is to help firms to make appropriate decisions in the face of risk and uncertainty. It is essential for a CRO to understand the common flaws in decision-making, to help individuals to overcome them, and to understand the implications for the firm's risk management framework.
Thinking, Fast and Slow
One of the leading researchers in this field is the psychologist Daniel Kahneman, who won the 2002 Nobel Prize for Economics for his work (principally with the late Amos Tversky) on heuristics, biases and prospect theory.
Heuristics are experience-based techniques for problem solving, such as rules of thumb. Prospect theory is a generalisation of the classical utility approach, which allows for the biases that people exhibit when faced with uncertainty.
Kahneman has pulled together and amplified his work in this field over the last 40 years in his recent book Thinking, Fast and Slow, a work that has met with widespread acclaim. As well as identifying the various biases to which we are subject in the face of risk and uncertainty, Kahneman develops a vocabulary that people and firms can use to acknowledge and discuss these biases, and suggests ways in which the biases can be taken into account in decision-making.
We consider some of these biases and discuss applications to the role of risk management in insurance companies.

Anchor bias
Anchor bias is one of the best-known findings of experimental psychology. This bias occurs when individuals are asked to estimate an unknown quantity. If, before estimating, the individuals are presented with a particular value for that quantity, their estimates stay systematically closer to that prior value than they otherwise would.
The effect is typically illustrated by asking a question in two parts. For example, subjects are asked:
• Was the Peace of Westphalia signed before or after 1815?
• What is your best estimate of when the Peace of Westphalia was signed?
This typically produces answers to the second question that are, on average, significantly later than those given by a group asked the same questions but with the 1815 anchor changed to 1515 (in some cases on average as much as 300 years later).
Astonishingly, the same bias is produced even when the individuals 'know' (or would if they were acting and thinking rationally) that the anchor in the first question cannot have any influence on the second question - such as when they generate the anchor themselves, for example from the last three digits of their own telephone number.
In a purely ERM context, anchor bias is often exhibited by insurers in their choice of parameters when building internal models. This bias is often encouraged by some of the main 'hurdles' in the insurance sector - regulators and auditors - who expect firms' parameters to lie close to some market benchmark or standard regulatory formula. Anchoring can also apply in a qualitative sense: insurers can be anchored in their model design to market-standard approaches or to models developed for a different purpose.
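As a minimal sketch of how such an anchor operates mechanically (our illustration, with invented figures; the article does not prescribe any particular weighting), a parameter estimate that is in effect a blend of the firm's own analysis and an external benchmark is pulled toward the benchmark in proportion to the weight the benchmark carries:

```python
# Hypothetical illustration (invented figures): a benchmark acting as
# an anchor on a parameter estimate via an implicit credibility-style
# weighting between the firm's own analysis and the external benchmark.

def blended_estimate(own_estimate, benchmark, benchmark_weight):
    """Weighted blend; a high benchmark_weight acts, in effect, as an anchor."""
    return benchmark_weight * benchmark + (1.0 - benchmark_weight) * own_estimate

own_equity_stress = 0.55   # what the firm's own data and analysis suggest
benchmark_stress = 0.39    # e.g. a standard-formula-style equity stress

for w in (0.0, 0.5, 0.8):
    stress = blended_estimate(own_equity_stress, benchmark_stress, w)
    print(f"benchmark weight {w:.0%}: calibrated stress = {stress:.1%}")
# The more weight the external 'hurdle' carries, the closer the firm
# lies to the benchmark, regardless of its own experience.
```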
Anchor bias can also be important for finance and actuarial teams in insurers when setting reserves for new lines of business, especially where these are long-tailed. In this case it is often the business plan of the new underwriting team (in some cases a business plan that formed part of an acquisition or interview process) that can unwittingly act as an anchor. Furthermore, if (as is common) the business plan is used to set prior loss ratios, the standard Bornhuetter-Ferguson reserving technique can mathematically embed the plan as an anchor on reported results for many years, as the sketch below illustrates.
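A hedged sketch of this mechanism (all figures are hypothetical; the formula is the standard Bornhuetter-Ferguson blend, not any firm-specific model): because the BF ultimate mixes emerged experience with the prior expectation, an optimistic plan loss ratio dominates the booked result until the business is well developed.

```python
# Illustrative sketch: how a plan loss ratio, used as the
# Bornhuetter-Ferguson prior, anchors booked ultimates.
# All figures below are hypothetical.

def bf_ultimate(paid_to_date, premium, prior_loss_ratio, pct_developed):
    """Bornhuetter-Ferguson: ultimate = paid to date + expected unreported losses."""
    expected_losses = premium * prior_loss_ratio
    return paid_to_date + expected_losses * (1.0 - pct_developed)

premium = 100.0
plan_lr = 0.65   # optimistic business-plan loss ratio (the anchor)
true_lr = 0.85   # how the book is actually running

# Early in development, almost all of the ultimate is the prior:
for pct_developed in (0.10, 0.30, 0.60, 0.90):
    paid = premium * true_lr * pct_developed   # actual experience emerging
    ult = bf_ultimate(paid, premium, plan_lr, pct_developed)
    print(f"{pct_developed:.0%} developed: BF ultimate loss ratio = {ult / premium:.1%}")

# The booked loss ratio drifts from 67% towards the true 85% only as
# the tail runs off - on a long-tailed line, the plan anchors the
# reported result for years.
```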
Availability heuristic - risk identification
The availability heuristic is a shortcut that people take when trying to estimate the probability of events, in which their probability estimate is biased by how 'front of mind' the event is (in other words, by the 'availability' of the event to their thinking).
Consider a well-known example: public surveys show that high-profile causes of death (for example, tornadoes, accidents, lightning strikes) are estimated as being much more frequent than they actually are, whereas the opposite is the case for 'lower-profile' causes, such as diabetes or asthma.
Even though risk evaluation is the core function of the insurance industry, insurers are not immune from this type of bias - as can be seen by reviewing surveys of which risks most concern insurance practitioners.
For example, consider the Centre for the Study of Financial Innovation's biennial Insurance Banana Skins survey, which asks respondents to rank the risks that most concern them. In the 2009 survey, the top four ranked risks - investment performance, equity markets, capital risks and macro-economic trends - were all clearly related to the financial crisis. Only two years previously, these had been ranked 11th, 13th, 26th and unranked respectively.
For a CRO (or other executive) whose role is to identify, assess and rank the risks facing a company, a clear understanding of this bias is key in the risk identification process. CROs can adopt a two-stage strategy here, splitting risk identification into working risk identification and tail risk identification.
Working risk identification focuses on risks with, say, a one-in-10-year return period (or similar order of magnitude). For these risks, availability bias can, if anything, be a positive influence, as the focus is rightly on recent history.
Tail risks (for instance, one-in-200-year risks) are where the impact of availability bias is greater. Strategies a CRO can adopt in tail risk identification to minimise this bias include:
• Consulting as widely as possible within the organisation
• Reading as widely as possible across industries, and looking at historical crises and events, to expand the number of risks 'available'
• Looking back at past years' lists of major risks and consciously ensuring that the risk ranking does not vary too much from year to year in the light of topical events
• Encouraging people in risk workshops to reduce their focus on recent events, perhaps by posing such questions as: "Imagine you had not read a newspaper for the last five years; what risks would you see as facing our firm?"
Planning fallacy and related biases
Another key bias that Kahneman and Tversky identified was the planning fallacy, in which plans (for example, business plans or project plans) are unrealistically close to best-case scenarios and significantly underestimate the likelihood or potential scale of failure.
Again, this is an important consideration for a CRO, whose key role often includes identifying the risks inherent in a plan - be that a major project or the insurer's financial business plan over, for instance, a one- or three-year time horizon.
A key remedy that Kahneman identifies to counter the planning fallacy is 'reference class forecasting' - that is, accessing as wide as possible a source of distributional information about the outcomes of similar projects or plans, especially information sourced from outside the enterprise doing the planning.
For an insurer, this typically involves making extensive use of market and external benchmarks and external advice.
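As an illustrative sketch of this outside view (the reference class figures below are invented, not market data), reference class forecasting amounts to positioning the plan within the observed distribution of outcomes for comparable projects, rather than building the estimate up from the plan's own components:

```python
# Hedged sketch of reference class forecasting; the overrun ratios
# below are made up for illustration, not sourced benchmarks.

# Actual cost / planned cost for a reference class of similar projects:
reference_overruns = [0.95, 1.00, 1.05, 1.10, 1.15, 1.20, 1.30, 1.45, 1.70, 2.10]

def percentile(data, q):
    """Empirical q-th percentile (nearest-rank, no interpolation)."""
    ordered = sorted(data)
    idx = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[idx]

plan_cost = 10.0  # the 'inside view' estimate from the project team

# Outside-view budget at chosen prudence levels:
for q in (0.50, 0.75, 0.90):
    budget = plan_cost * percentile(reference_overruns, q)
    print(f"{q:.0%} confidence budget: {budget:.1f}")
# The outside view replaces "what could go wrong with our plan?" with
# "what actually happened to plans like ours?"
```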
There are many related biases that can both cause and aggravate the planning fallacy:
• Anchor bias (as discussed above), so that an initially optimistic plan becomes an anchor when considering risks
• The illusion of control and over-confidence: both in explaining the past and in considering the future, individuals are prone to dismiss poor performance or outcomes as one-off bad luck but to attribute good performance to skill. These illusions are manifestations of a broader bias - the optimism bias.
In the context of setting business plans, both these biases are readily observed:
• We have already seen that business plans can often anchor initial financial results, and even reserves, over a period of time
• Likely future results are often assessed using an 'as-if' version of historical results, which explicitly identifies incidences of past poor performance as being due to one-off, non-repeatable causes; these are then in effect removed from the historical data used to set assumptions, as the sketch below illustrates.
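A hypothetical sketch of that 'as-if' adjustment (the loss ratios are invented): excluding past bad years as non-repeatable one-offs materially flatters the assumption carried into the plan.

```python
# Hypothetical sketch of the 'as-if' optimism bias (figures invented):
# stripping out bad years as 'one-offs' flatters the planning assumption.
from statistics import mean

historical_loss_ratios = [0.62, 0.95, 0.58, 0.64, 1.40, 0.61, 0.66, 0.59, 1.10, 0.63]
one_off_years = {0.95, 1.40, 1.10}   # bad years explained away as non-repeatable

as_if_view = [lr for lr in historical_loss_ratios if lr not in one_off_years]

print(f"Mean loss ratio, full history:  {mean(historical_loss_ratios):.1%}")
print(f"Mean loss ratio, 'as-if' view:  {mean(as_if_view):.1%}")
# If large losses in fact recur roughly three years in ten, the 'as-if'
# view (61.9% versus 77.8%) materially understates the assumption.
```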
Managing behavioural risk
Risk culture is at the heart of an enterprise risk framework and we have seen great value in firms commissioning an external risk culture survey. However, as we have discussed above, even insurance risk professionals may demonstrate various biases in their decision-making.
One starting point to counter this problem is to include a behavioural assessment in such a risk culture survey. Another is to introduce an expert judgement policy and accompanying documentation process that seek to nullify these biases.
When developing a capital model, one of the most important and most often neglected risks is model risk. Model risk can be considered a 'meta' risk, arising largely from qualitative factors: re-using an inappropriate old model; misinterpreting results; or failing to communicate the results of the model effectively. Insurers that are most advanced in capital modelling understand and mitigate model risk alongside other risks.
Behavioural risk is another meta risk: the risk that key stakeholders exhibit biases or behaviours which mean that a firm's whole ERM framework will not function as it is supposed to.
Firms that want to develop truly effective ERM frameworks need to manage and mitigate behavioural risk.