Former Governor of the Bank of England Mervyn King talks to Dan Georgescu about over-reliance on black box models, and the impossibility of quantifying all risks
‘Risk’ has two meanings: the common definition, ‘an unpleasant outcome’, and the financial theory definition, ‘measurable volatility’. The two are not the same – a man on death row who knows he will be executed tomorrow does not have to worry about volatility, even though his life is at risk. However, while risk-as-volatility is here to stay, Mervyn King and John Kay’s new book Radical Uncertainty argues that we must distinguish between unknown outcomes that can be described probabilistically (such as losing at roulette) and those that cannot (such as interest rates 20 years from now). The latter is an ‘uncertainty’ that is not measurable, and the best we can say is that we simply do not know. Good risk management should mean we are prepared for risks that threaten survival, whether or not we can attach a probability to them.
Black box models
The problem arises when uncertainties are confused with calculable risks. For instance, the Value at Risk (VaR) models used for capital adequacy calculations under Solvency II and other regimes assume the past is a guide to the future – even though we know some of their assumptions, such as the stationarity of investment-return distributions, do not hold in all cases.
We live in a non-stationary world, so using data to construct probability distributions can be dangerous. “It is extremely dangerous to believe that you can say, ‘this is the precise amount that’s at risk’,” says King. “The idea that, for a tail of a distribution, we know enough about what’s going on to give a precise number in the tail is weird, frankly. It’s another example of regulation being overly precise, and as a result people make up numbers. Regulators accept them because they want to have a number, and people go on fooling themselves.”
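King’s warning about stationarity can be made concrete with a small sketch (all numbers hypothetical): a 99% one-day VaR estimated by historical simulation from a calm period says little about losses once the regime shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: ten years of daily returns from a calm,
# low-volatility regime (normally distributed, made-up parameters).
calm = rng.normal(0.0, 0.01, 2500)
var_99 = -np.quantile(calm, 0.01)      # 99% one-day historical VaR

# A regime shift: one year at triple the volatility. The "distribution"
# estimated from history is no longer the one generating the data.
stressed = rng.normal(0.0, 0.03, 250)
breaches = int(np.sum(stressed < -var_99))

# Under stationarity we would expect roughly 1% of days (2-3 breaches
# in 250); the regime shift produces far more.
print(f"99% VaR from calm history: {var_99:.4f}")
print(f"VaR breaches in stressed year: {breaches} of 250")
```

The point is not that the arithmetic is wrong, but that the precision of the output is an artefact of an assumption the world need not honour.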
To illustrate this point, the book discusses ‘the Viniar problem’ – the mistake of believing we have more knowledge about the real world because we have built a black box model. It is named after David Viniar, CFO of Goldman Sachs during the 2007 financial crisis, who stated at that time that the financial world was experiencing 25-standard-deviation events several days in a row. “The risk is you stop thinking about what the risks are,” says King. “It is nonsense to say that we were facing 25-standard-deviation events several days in a row.”
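The arithmetic behind King’s remark is easy to check: under a normal distribution, the probability of even one 25-standard-deviation daily move is so small that the implied waiting time dwarfs the age of the universe.

```python
import math

# Tail probability of a single 25-standard-deviation move under a
# normal distribution: P(Z > 25) = erfc(25 / sqrt(2)) / 2.
p = 0.5 * math.erfc(25 / math.sqrt(2))

# Compare the implied waiting time for one such daily event with the
# age of the universe (~13.8bn years).
universe_age_days = 13.8e9 * 365.25
print(f"P(25-sigma daily move) ~ {p:.2e}")   # roughly 3e-138
print(f"expected wait, in universe-ages: {1 / p / universe_age_days:.2e}")
```

If a model reports seeing such events on consecutive days, the sensible conclusion is not that the universe misbehaved but that the model’s distributional assumptions were wrong.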
The real problem was that the models used were hopeless at capturing the risks faced. “If you’re going to insist on calculating lots of numbers from the past, you need to make sure you don’t think there is a risk of something happening in the future that hasn’t happened in the period from which you are using data,” says King. “There had been plenty of banking crises, and it should not have been beyond the wit of man to realise that there could be another, and that the models were not capturing that. People delegated responsibility by putting faith in black box models that they didn’t really understand but that allowed them to say, ‘We’re using the best-practice techniques here.’”
“People delegated responsibility by putting faith in black box models that they didn’t really understand”
What was the probability of COVID-19?
Radical Uncertainty was written in 2019, and somewhat prophetically states that “we must expect to be hit by an epidemic of an infectious disease resulting from a virus which does not yet exist”. King brings this up not to demonstrate his great foresight, but rather to distinguish black swan ‘unknown unknown’ events from the sorts of uncertainties we could have predicted. Both are types of uncertainty, and neither can be quantified probabilistically.
The aim of good risk management, King says, should be to create resilience and robustness so that the risk at hand can be withstood – whether that be within our financial services, economy or health service. However, he continues, “although we must expect this, that does not enable you to attach a number to an event such as ‘a virus emerges from Wuhan in China in December 2019’. To say the probability of that was 8% or 37% would make absolutely no sense. There is no basis on which you can make that kind of statement.”
King is clear about the nature any future learning should take. “There’ll be an opportunity later on to look back on all of this and ask, ‘How should we be prepared in future?’ That’s the kind of question that you can ask – but what you do not do is say, ‘The probability of that happening is ‘X’ per cent; therefore, we should be prepared to spend ‘Y’ pounds.’ That really is not a judgment that makes any sense.”
What is going on here?
Where does this leave the project of using tails of distributions to quantify uncertainty for capital purposes? I ask King about operational risk models, which aim to quantify once-in-a-lifetime events for which there is never enough data to identify the appropriate statistical distribution in the tail. “In that situation, it makes no sense to pretend that we have some kind of quantitative representation of the risk,” he says. Rather than relying on bureaucratic risk management processes, King maintains that it would be “better to ask questions like: What could go really badly wrong? What would be a disaster here?”

“There’s no point being as lean and efficient as possible in normal times if, when some unexpected event comes along, you fail to survive it,” he says. “Survival is a very important characteristic of a company that wants to have a long life.”
In this context, King warns against the potentially spurious accuracy of results derived from models. “I think it is important not to get fooled by the apparently precise nature of black box models, which are usually designed by someone else with fancy distributions but made-up numbers. If the numbers are made up, it doesn’t matter what distribution you have. These numbers can be very dangerous and misleading.”
Indeed, the main lesson one takes from Radical Uncertainty is that there is no substitute for asking ‘What is going on here?’

“We are not against using models,” King clarifies. “We are in favour of using models to throw insight onto problems, rather than to forecast the future. We think there is really no substitute for boards of directors, decision-makers of all kinds, actuaries in their day-to-day business, asking ‘What is really going on here?’ as a means of getting a grip on the problem they are confronting and understanding why they are calculating numbers. What are these numbers about? What are they for? What decision depends on them? Thinking those things through more deeply is worth a thousand numbers coming out of a black box model.”
“There is really no substitute for decision-makers asking ‘What is really going on here?’”
To emphasise that models can be useful, he cites the epidemiological models concerning the spread of a disease resulting from a virus we haven’t seen before. Those models, King says, are helpful for understanding the general path of an epidemic: “Why it spreads slowly to begin with – and why it’s hard, therefore, to come to an understanding of how important and serious it is – before suddenly it accelerates away from you and peaks, and eventually comes down again.
“What these models are not good at is predicting the future path of the disease,” he continues. “That’s because the parameters that have to be fed into the model are things we know nothing about. We didn’t know the true nature of the virus, we didn’t know the fatality rate or the rate at which it would spread, and we certainly didn’t know what people’s response would be to measures announced by the government, whether that’s a lockdown or indeed measures to ease the lockdown. These parameters tend to be made up. That gives the impression that we are following the science, but we’re not. The science doesn’t tell you this.”
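The behaviour King describes – a model that illuminates the general shape of an epidemic while being unable to predict its path – can be seen in a minimal SIR sketch (parameter values entirely hypothetical, not those of any real model): a modest change in one assumed input moves both the timing and the height of the peak.

```python
# Minimal SIR sketch (illustrative only). beta is the transmission rate,
# gamma the recovery rate; both are exactly the kind of parameter that,
# early in an epidemic, has to be made up.
def sir_peak(beta, gamma=0.1, days=365):
    """Daily-step SIR; returns (day of peak, peak infected fraction)."""
    s, i = 0.999, 0.001          # susceptible and infected fractions
    peak_day, peak_i = 0, i
    for day in range(days):
        new_infections = beta * s * i
        s, i = s - new_infections, i + new_infections - gamma * i
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i

low = sir_peak(beta=0.25)    # implied R0 = beta/gamma = 2.5
high = sir_peak(beta=0.30)   # R0 = 3.0: a 20% change in one guessed input
print(f"R0=2.5: peak on day {low[0]}, {low[1]:.0%} infected at peak")
print(f"R0=3.0: peak on day {high[0]}, {high[1]:.0%} infected at peak")
```

The qualitative story – slow start, sudden acceleration, peak, decline – is robust; the quantitative forecast is hostage to numbers nobody knows.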
What can actuaries do differently?
What does this mean for actuaries? “Part of the education should be to get actuaries to explain to stakeholders that radical uncertainty means you cannot quantify certain kinds of uncertainty,” says King. “It may be better to think hard about having fewer numbers that are crucial to the decision. Get people to ask what a decision is really dependent on, and what really matters for this decision. Then try to focus on producing helpful numbers for that decision. Don’t believe there is a framework, like a spreadsheet or formula, that you can apply to every situation. There isn’t a model that you can just carry around with you and apply to every customer, every client.”