Wednesday 26th November 2014 (updated 5.13pm, Wednesday 29th April 2020)
The IFoA's Model Risk Working Party reflects on the cultural aspects of model risk

The practice of modelling can enlighten and frustrate us in equal measure. We are pleased by our insights and technical advances. But the more we use and reflect on our models, the more we become aware of their limitations - the roughness of approximations to complex problems, the sensitivity of results to assumptions that cannot be validated, and the reliance on past data when our concern is the future.

A stock response to such frustration is: "Well, a model is just a model; you cannot expect it to always be right!" The Federal Reserve's guidance on model risk management warns against inappropriate use of models and emphasises the need to understand limitations and assumptions. The authors of The Dog And The Frisbee paper, presented by Bank of England executive director Andrew Haldane in 2012, went further, arguing that the complexity of financial risk requires simple and robust metrics, rather than elaborate models.
Such reasoning, while justified, does not settle the argument. It implicitly assumes model outputs drive decisions in a straightforward manner, such that errors in assumptions directly translate to errors in decisions. But this is not necessarily how, in our experience, decisions happen. As for restricting the use of models to those applications where likely errors are insubstantial: that should reasonably exclude the calculation of '1 in 200 year' losses (the 99.5th percentile of the one-year loss distribution) - an actual requirement under Solvency II - from the scope of internal models. And that is not something we would bet on happening.
In fact, a variety of responses to models and uncertainty can be observed. Figure 1 shows the mean and 99.5th percentile of an annuity value distribution for seven different models, each fitted to the same data. It is clear that the sensitivity to model choice dominates the variability reflected within any given distribution.
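As a rough illustration of how such a comparison might be assembled (the seven models behind Figure 1 are actual longevity models fitted to the same data; the Gompertz parameterisations, parameter values and standard errors below are purely hypothetical), one can simulate the annuity value distribution implied by each candidate model's own parameter uncertainty and compare means and 99.5th percentiles across models:

import numpy as np

rng = np.random.default_rng(seed=1)

AGE, MAX_AGE, RATE = 70, 110, 0.03

def annuity_value(b, c):
    # Whole-life annuity-due of 1 p.a. for a life aged AGE, discounted at RATE,
    # under a Gompertz force of mortality mu(x) = b * c**x, using a crude
    # one-year discretisation of the integrated hazard.
    ages = np.arange(AGE, MAX_AGE)
    cumulative_hazard = np.cumsum(b * c ** ages)
    survival = np.concatenate(([1.0], np.exp(-cumulative_hazard)))
    t = np.arange(survival.size)
    return float(np.sum(survival * (1 + RATE) ** -t))

# Hypothetical stand-ins for the fitted models: each "model" is a Gompertz
# parameterisation plus a standard error representing its own parameter
# uncertainty (values invented for illustration only).
models = {
    "Model A": dict(b=3.0e-5, c=1.105, b_se=0.2e-5),
    "Model B": dict(b=3.5e-5, c=1.100, b_se=0.2e-5),
    "Model C": dict(b=2.5e-5, c=1.110, b_se=0.2e-5),
}

for name, m in models.items():
    draws = rng.normal(m["b"], m["b_se"], size=10_000)   # within-model uncertainty
    values = np.array([annuity_value(b, m["c"]) for b in draws])
    print(f"{name}: mean = {values.mean():.2f}, "
          f"99.5th percentile = {np.percentile(values, 99.5):.2f}")

The pattern described in Figure 1 would show up here as differences in means across models that are large relative to the gap between each model's own mean and 99.5th percentile.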
Faced with such uncertainty, different responses are plausible, each rational in its own way. Business cannot just stop; some will say "let's pick the likeliest model and run with it". Others may find that one particular model corroborates their intuition and legitimises their plans. There will also be those who solemnly declare that "more research is needed on this important topic". Meanwhile, the sensitivity to model choice will confirm the beliefs of some, that "longevity is just too hard to model" and that "we should not have taken on such risk in the first place". They all have a point and they will never agree.
Alternative perceptions
Different ways in which modelling and its use in decision making are perceived within an organisation are categorised in the image above. The horizontal axis reflects the perceived legitimacy of modelling: in the right half-plane, models should be used in decision making; in the left half-plane, they should not. The vertical axis reflects concern with uncertainty: stakeholders in the top half-plane are confident in their processes leading to good decisions; in the bottom half-plane they are not so sure.
Using, or indeed not using, models in a way consistent with each quadrant generates different sorts of risks. At top-right, Confident Model Users are keen to optimise decisions, a process that can and should be driven by modelling. But such agents are likely to ignore for too long evidence discordant with their models. In this quadrant, the main risk consists of model inaccuracies driving wrong decisions. This is indeed the kind of model risk that the Federal Reserve's guidance seeks to address.
At bottom-right, we find Conscientious Modellers, for whom technical expertise and professionalism are of paramount importance. Model uncertainty can be quantified and the fitness for purpose of models clearly defined. But such agents can be obstructive in putting models to business use, by delaying model releases or limiting the scope of their applications. Furthermore, the overall scientific paradigm they use - no matter how necessary for their deliberations - may be flawed.
Uncertainty Avoiders populate the bottom-left corner. In their view, all risks that matter are ever-changing and interconnected. Paradigms constantly shift and modellers are like the proverbial "general fighting the last war". Decisions should be robust to model uncertainty; but such decisions will usually be highly suboptimal. The position of Uncertainty Avoiders can become difficult in an organisation focused on delivering profit, but they may find a friendly home in, say, an emerging risks committee.
Lastly, Intuitive Decision Makers don't see why models should be used in the first place. Gut instinct and market knowledge will always trump mathematical abstraction. Whether a model is correct or not is for them an issue of no relevance. The immediate risk with such an attitude arises from ignoring the information and insight that a model can bring; human intuition cannot always cope with the full complexity of the problems that we often have to tackle.
Intuitive Decision Makers may, nevertheless, feel compelled to demonstrate the use of a model (by perceptions of best practice or by regulatory stricture). Then they will show strong preference towards a model whose results align with their views and corroborate their position. Modellers may be given incentives to generate the 'right results'. The major risk here is loss of accountability: if intuition fails, will it be recognised as such or will models, and modellers, take the blame?
We believe that all four perspectives need to be represented in - and responded to by - model governance, with each required to challenge and respond to the others. Hegemony of a single perspective is self-defeating. Conscientious Modellers, possibly to their chagrin, need the operational focus of Confident Model Users (to attract investment in the model), the scenarios imagined by Uncertainty Avoiders (to challenge long-held wisdoms) and the survival instincts of Intuitive Decision Makers (to ensure model use does not lead to commercial disadvantage).
At the same time, Conscientious Modellers can use the model to challenge Intuitive Decision Makers, by demonstrating "what you have to believe" for the model to be consistent with intuition. Such a challenge reveals management's implicit assumptions and enhances accountability.
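A minimal sketch of such a "what you have to believe" challenge, continuing the hypothetical Gompertz example above (the target figure and parameter ranges are invented, and annuity_value is reused from the earlier sketch): given an annuity value asserted on intuition, solve for the mortality level the model would need in order to reproduce it, and compare that implied level with the fitted one.

def implied_b(target_value, c=1.105, lo=1e-6, hi=1e-4, tol=1e-12):
    # Bisection for the Gompertz level b at which the modelled annuity value
    # equals the asserted target; the value falls as b rises, i.e. as
    # mortality gets heavier.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if annuity_value(mid, c) > target_value:
            lo = mid        # value still too high: need heavier mortality
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Suppose management "knows" the annuity is worth 10.5 per unit of income:
b_star = implied_b(10.5)
print(f"implied Gompertz level b = {b_star:.2e} "
      f"(vs a fitted level of 3.0e-05 in Model A above)")

If the implied level looks implausible against the data, the conversation can move from "the model is wrong" to "this is what you have to believe", which is precisely the accountability point made above.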
In our analysis, multiple perspectives on risk and modelling are considered legitimate, including views that market participants may not find easy to express in public.
For instance, would the following statement ever be acceptable in the context of regulatory model review? "Assumptions are consistent with empirical evidence and best modelling practice. Model uncertainty remains high. The precise model calibration is such that standard outputs are also consistent with senior management's perspective of a commercially reasonable capital requirement." For many, that may be too much candour to stomach. But when uncertainties are deep and stakes are high, risk can no longer be the exclusive domain of technical experts.
By pretending that modelling is a purely scientific exercise, we create conditions of self-censorship and unaccountability. Good governance, as well as good science, requires transparency. For this we need more, not less, politics, with all the noise and uneasy compromises that engaged dialogue involves.
Model Risk Working Party
The working party's members include Andreas Tsanakas, M Bruce Beck, Tim Ford, Michael Thompson and Ivy Ye
Figure 1: Mean and 99.5th percentile of annuity value for a 70-year-old male, discounted at 3% p.a., for seven different models
