Last autumn, Paul Wilmott argued in these pages *(1)* that actuaries should become more involved in quantitative finance. He wrote, “We need models people can understand and a greater respect for risk... what high finance needs now are precisely the skills that actuaries have, a deep understanding of statistics, an historical perspective, and a willingness to work with data.” One way in which actuaries demonstrate a “respect for risk” is the regularity with which they invoke the concept of model risk: the possibility that one’s model does not adequately reflect the salient characteristics of the system being modeled. For example, loss reserving actuaries apply multiple models to non-trivial loss reserving problems because they are keenly aware that any given loss reserving technique or model is an incomplete representation of the underlying reality.

*[To view this article in its original print format, click here and go to page 24]*

An awareness of model risk serves as a useful corrective to the sort of error that members of the quantitative finance community have recently been accused of making: conflating the indications of mathematically elegant models with the messy, inelegant reality being modeled. Such accusations are made most forcefully in Nassim Nicholas Taleb’s book *The Black Swan* (Allen Lane, 2007)*(2)*. Value at Risk (VaR) models and the Gaussian copula models used to value collateralized debt obligations (CDOs) have suffered particular criticism. Setting aside debates about the intrinsic merits of these models, it appears that model risk was not adequately recognised and managed in the years leading up to the financial crisis when these models were used to make important decisions.

It is therefore an opportune time to deepen our ‘respect for risk’ by taking stock of our received wisdom about the various forms of risk that arise in actuarial and financial modeling. In particular, it is instructive to consider model risk in conjunction with Taleb’s ‘black swan’ discussion, as well as its antecedents in the philosophical and economic literature: David Hume’s problem of induction and Frank Knight’s conceptions of risk and uncertainty.

**The three faces of risk**

For the purpose of this discussion, let us use the word ‘risk’ to refer to the possibility that the actual outcome will deviate from the outcome forecasted by one’s model. It is common in the actuarial literature to distinguish between three types of risk:

**Process risk:**

This refers to the stochastic nature of the process being modelled and is reflected in the ‘error terms’ of one’s model. The ‘diversifiability’ of process risk is a fundamental principle of insurance: the variance of the average loss from *n* identical and independent risks is proportional to 1/*n*, and therefore shrinks as *n* grows.

**Parameter risk:**

The parameters of any statistical model must be estimated from a finite amount of data and therefore cannot be known with certainty. Parameter risk corresponds to the non-zero standard error associated with each of a model’s parameters. The more data available to estimate the model’s parameters, the smaller this standard error becomes; but in general it never vanishes entirely. Explicitly modeling parameter risk is a hallmark of both actuarial credibility theory and Bayesian statistics.
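This shrinking-but-never-vanishing standard error is easy to demonstrate numerically. The sketch below is a minimal illustration, not from the article: it estimates the standard error of a sample mean from a simulated unit-variance process, with the sample sizes chosen arbitrarily for the example.

```python
import random
import statistics

random.seed(1)

def standard_error_of_mean(n):
    """Estimate the standard error of the sample mean from n draws
    of a unit-variance process (simulated here as standard normal)."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.stdev(sample) / n ** 0.5

se_small = standard_error_of_mean(100)     # roughly 0.1
se_large = standard_error_of_mean(10_000)  # roughly 0.01: smaller, but never zero
```

A hundredfold increase in data cuts the standard error by a factor of about ten, reflecting the familiar 1/√n rate; no finite sample drives it to zero.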

**Model risk:**

Here much can be said, but a picture quickly illustrates a few of the things that can go wrong. In the picture below, a regression line was fit to each of four data sets (constructed by the statistician F. J. Anscombe) *(3)*.

The regression models fit to the four data sets are statistically indistinguishable: the estimated parameters and measures of variance are virtually identical in all four cases. But only in the first scenario does the model seem appropriate to the data. In the second scenario, the assumption of linearity fails to capture the non-linear nature of the data being modeled. Other familiar forms of model risk include the omission of important variables and incorrect distributional assumptions (for example, Poisson where Negative Binomial would be more appropriate, or the kurtosis risk in financial models discussed by Benoît Mandelbrot *(4)* as well as Taleb).

The third scenario points to a more insidious form of model risk. Here, the model is obviously flawed because its parameters have been unduly leveraged by a single ‘outlier’. One’s first impulse might be simply to discard the outlier as a rogue bit of dirty data; but it could equally well be a ‘black swan’ indicating a more complex reality that we ignore at our peril. Similarly with the fourth scenario: perhaps the outlier is merely bad data and the variable x4 is unrelated to y4. On the other hand, perhaps there is an interesting relationship between x4 and y4 involving extreme values for which we have little data.

Indeed, model risk is present even in the first scenario, in which the regression model appears to fit the data well. For example, perhaps the 11 data points used to fit the model are a biased sample of a process that is in fact highly non-linear.
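The numerical claim behind these plots — four visually different data sets yielding essentially identical fits — can be verified directly. The sketch below (a minimal implementation written for this article, using Anscombe's published values and the standard least-squares formulas) recovers the same slope of roughly 0.50 and intercept of roughly 3.00 for all four data sets.

```python
# Anscombe's quartet: four data sets with virtually identical regression fits.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

def least_squares(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

fits = [least_squares(xs, ys) for xs, ys in quartet]
# Every data set yields approximately slope 0.50 and intercept 3.00,
# even though the scatter plots look nothing alike.
```

The identical summary statistics are precisely why diagnostic plots, and not fitted parameters alone, are needed to detect the misspecifications the four scenarios illustrate.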

Even more insidiously, the relationship between the variables could simply change between the time the data was generated and the time the model is applied to make predictions. As Adam Smith’s friend David Hume observed many years ago, there is never a guarantee that a relationship observed in the past will continue into the future. This fundamental fact is known as Hume’s problem of induction. Taleb has famously illustrated Hume’s problem with the parable of the black swan: in 17th century Europe, it was assumed that all swans were white. At the time, no non-white swan had ever been observed. However, black swans were in fact discovered in Western Australia *(5)*. A single observation invalidated a perfect statistical regularity, and no analysis of historical data could have suggested the possibility of non-white swans.

A more recent illustration comes from some of the models used by ratings agencies to rate mortgage-backed securities. Michael Lewis reported that at least one agency used a model for home price increases that could not accept negative numbers *(6)*. It is easy to poke fun at such models in hindsight: one need not have read Hume’s *A Treatise of Human Nature* or Taleb’s *The Black Swan* to entertain the possibility that housing prices can fall as well as rise, regardless of what the data at one’s disposal indicate. But an important practical lesson should not be overlooked: the fact that highly trained professionals employed by prominent financial institutions built such models and used them to make important decisions suggests that these fairly elementary observations about model risk need to be more widely appreciated and acted upon.

**What we talk about when we talk about model risk**

There are two varieties of ‘model risk’ implicit in the above discussion. On the one hand, there are the various types of specification errors (omitted variables, outliers, inappropriate functional forms, inappropriate distributional assumptions, and so on) that can be illustrated in diagnostic plots like the ones above. This type of model risk is somewhat tractable in that it can be diagnosed through such exercises as simulation studies and applying models to holdout samples of data. On the other hand, there is the more fundamental form of model risk arising ultimately from Hume’s problem of induction. We can analyze historical patterns in the data as much as we’d like, but the data themselves cannot tell us whether these patterns will continue into the future.

Another theorist whose work bears on the concept of model risk is Frank Knight. Knight began his academic career studying philosophy at Cornell, but at the encouragement of his advisors he switched to economics. He went on to become one of the founders of the University of Chicago school of economics. His students included such legendary Nobel laureates as George Stigler, Milton Friedman, and Paul Samuelson. In his 1921 book *Risk, Uncertainty and Profit*, Knight distinguished between what he called ‘risk’ and ‘uncertainty’. This distinction bears on our discussion of model risk.

"Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated. The term ’risk,’ as loosely used in everyday speech and in economic discussion, really covers two things which... are categorically different... ’Risk’ means in some cases a quantity susceptible of measurement... A measurable uncertainty, or ’risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We shall accordingly restrict the term ’uncertainty’ to cases of the non-quantitative type. It is this ’true’ uncertainty, and not risk, as has been argued, which forms the basis of a valid theory of profit and accounts for the divergence between actual and theoretical competition." *(7)*

Properly interpreting Knight’s remarks would demand an essay much longer than this one, but it should be clear that the distinction between ‘Knightian risk’ and ‘Knightian uncertainty’ bears on the actuarial community’s familiar distinction between process, parameter, and model risk. Because it can be surprisingly difficult to think clearly about fundamental concepts, philosophical discussions tend to dwell on simple examples. In that spirit, let us continue by considering the humble coin toss. Suppose we are told that 53 out of 100 flips landed heads, and we want to forecast the proportion of heads in subsequent tosses.

Process risk arises because of the inherently stochastic nature of the coin toss. But process risk is also diversifiable: we would be more confident predicting that somewhere between 400 and 600 of the next 1000 flips will land heads than predicting that between 4 and 6 of the next 10 flips will land heads. Parameter risk arises because we used a finite amount of data (100 flips) to estimate the unknown “true” probability of heads. If we had used 10,000 flips, the parameter uncertainty would have been reduced but not eliminated. Bayesian statistics could be used to reflect parameter risk in our model. Explicitly, we would replace the binomial distribution:

$$\Pr(k \text{ heads in } n \text{ tosses}) \;=\; \binom{n}{k}\,(0.53)^{k}\,(0.47)^{n-k}$$

with a Bayesian posterior distribution of the form:

$$\Pr(k \text{ heads in } n \text{ tosses}) \;=\; \int_0^1 \binom{n}{k}\,p^{k}\,(1-p)^{n-k}\,\mu(dp)$$

The ‘mixing measure’ µ would be more or less sharply peaked around the sample average of heads, depending on how many flips were used to form that average *(8)*.
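The diversifiability claim above can be checked exactly rather than by intuition. The sketch below (a small calculation written for this discussion) uses the point estimate of 53% and the two intervals quoted earlier, 4–6 heads in 10 flips versus 400–600 heads in 1000 flips.

```python
from math import comb

def prob_between(n, p, lo, hi):
    """Exact P(lo <= heads <= hi) when the number of heads is Binomial(n, p)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(lo, hi + 1))

p_hat = 0.53  # point estimate from 53 heads in 100 flips

p_10 = prob_between(10, p_hat, 4, 6)          # between 4 and 6 heads in 10 flips
p_1000 = prob_between(1000, p_hat, 400, 600)  # between 400 and 600 heads in 1000 flips
# p_1000 is essentially 1, while p_10 is only around two-thirds:
# the same proportional band becomes near-certain as n grows.
```

This is process risk diversifying away: the band 40%–60% covers the outcome almost surely at n = 1000 but only about two times in three at n = 10.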

Does model risk pertain to a scenario as rudimentary as coin tossing? Yes: for example, some magicians are able to toss an ordinary coin so that it lands heads more than 50% of the time. Suppose the tosses were in fact generated by such a magician who elects to invoke his special ability only when the previous toss landed tails. In this case, the assumption of exchangeability implicit in the above model would be violated, and a more appropriate model would involve transition probabilities. Furthermore, the information that 53 out of 100 flips landed heads would not be a ‘sufficient statistic’ in this scenario: we would also need to know the outcome of the most recent flip. This example is of a piece with the types of model risk illustrated in the above diagnostic plots: closer inspection of the data, simulation, out-of-sample testing, and so on can often be used to detect model misspecifications such as the one illustrated here.
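A simulation makes the magician scenario concrete. The transition probabilities below are invented for illustration: the magician nudges the coin towards heads only after a tail. The overall frequency of heads looks like an ordinary, slightly biased coin, but the conditional frequencies betray the failure of exchangeability.

```python
import random

random.seed(3)

# Assumed behaviour: heads is more likely after a tail (the magician's nudge),
# and the coin is fair after a head. These numbers are illustrative only.
P_HEADS_AFTER_TAILS = 0.6
P_HEADS_AFTER_HEADS = 0.5

def next_toss(prev):
    p = P_HEADS_AFTER_TAILS if prev == "T" else P_HEADS_AFTER_HEADS
    return "H" if random.random() < p else "T"

tosses, prev = [], "H"
for _ in range(100_000):
    prev = next_toss(prev)
    tosses.append(prev)

overall = tosses.count("H") / len(tosses)
pairs = list(zip(tosses, tosses[1:]))
after_t = [b for a, b in pairs if a == "T"]
after_h = [b for a, b in pairs if a == "H"]
freq_after_t = after_t.count("H") / len(after_t)
freq_after_h = after_h.count("H") / len(after_h)
# overall is around 0.55, but heads occurs noticeably more often
# after tails than after heads: the order of the tosses matters.
```

A modeler who recorded only the total count of heads would never see the dependence; tabulating transition frequencies, as here, is exactly the kind of closer inspection that detects it.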

For a more Knightian form of model risk, suppose our sample of 53 heads in 100 tosses was generated by a magician who was being lazy and did not use his special ability at all. Suppose also that after we have built our model using the historical data, someone offers to pay the magician £1 for each toss that lands heads. Our Bayesian model accounts for both process and parameter risk, but does not account for the possibility that our coin-flipper is a magician who is suddenly influenced by an unexpected monetary incentive. This latter example of model risk arguably illustrates Knightian uncertainty: many are not aware that some people are able to influence the probability of heads at will. Recall Knight’s characterization of “true uncertainty” as risk that is not capable of being quantified. We cannot quantify risks that we are not aware of *(9)*.

**Black swans and red herrings**

Actuarial discussions of ‘model risk’ tend to focus on the types of model specification issues that modelers can test and improve as part of an iterative modeling process. In principle, this sort of model risk can be reflected by assigning Bayesian probabilities to combine a number of models into a single ‘model of models’ *(10)*. The above discussion of Knightian uncertainty suggests that another form of ‘model risk’ arises from our inability to form a complete sample space of contingent events whose probability can realistically be measured.

But even this is not the last word on ‘model risk’. In his 1996 note ‘Model Risk’ *(11)*, the physicist-turned-financier Emanuel Derman enumerates a variety of model risks that arise from practical as well as theoretical considerations. In addition to discussing the types of model misspecification mentioned above, Derman points out that the end-to-end process of designing, developing, and using real-world models is highly interdisciplinary in nature. A variety of model risks can therefore emerge at various points in the process. Domain knowledge, IT and programming skills, and sound business implementation strategies are no less crucial than statistical modeling abilities for avoiding the full panoply of model risks.

Most, if not all, of the risks Derman discusses in the context of financial modeling are risks that I and my colleagues regularly face when developing pricing and underwriting models for insurers. For example, even well-designed models can be incorrectly solved, either through carelessness or for more subtle reasons; modern-day models are ultimately pieces of software susceptible to such risks as programming mistakes, logical errors, and rounding discrepancies; and even correct models can be poorly implemented or otherwise used inappropriately *(12)*. This latter source of model risk relates to Wilmott’s plea for “models people can understand”: simplicity, transparency, documentation, education, and collaboration with end users are crucial in order to ensure that even well specified models don’t lead users astray. Model risk arises from red herrings as well as black swans.

In short, there are many sources of model risk, ranging from the mechanical/practical (software and implementation errors, and poor communication), to the technical/statistical (model misspecification), to the fundamental/philosophical (black swans and Knightian uncertainty). It is worth keeping all of these in mind so that we do not implicitly define away important issues in the way we collectively use the helpful expression ‘model risk’.


________________________________________________________________

(1) Paul Wilmott: *Actuaries versus Quants*, *The Actuary*, October 1, 2008.

*http://www.the-actuary.org.uk/815707*

(2) See the August 2007 issue of *The Actuary* for a positive review of *The Black Swan*. *http://www.the-actuary.org.uk/pdfs/07_08_11.pdf*

(3) These datasets initially appeared in Francis J. Anscombe’s 1973 *Graphs in statistical analysis* (*American Statistician 27*). In 1983 Edward Tufte used Anscombe’s datasets in the opening pages of his classic *The Visual Display of Quantitative Information* to illustrate the importance of data visualization.

(4) See for example Mandelbrot’s 1999 Scientific American article, *How Fractals Can Explain What’s Wrong with Wall Street*.

*http://www.scientificamerican.com/article.cfm?id=multifractals-explain-wall-street&offset=3*

(5) Taleb is the latest in a long line of writers who have used black swans to illustrate the problem of induction; the list also includes John Stuart Mill, Karl Popper, Hans Reichenbach, and Rudolf Carnap. Note also that Taleb’s book covers many important themes other than the philosophical problem of induction. Taleb’s focus is on ‘black swan events’ with outsized effects, such as the 1987 stock market crash, the 1998 Long Term Capital Management collapse, the 2000 bursting of the tech bubble, and the 2007 subprime crisis. Taleb points out that such events are often rationalized *ex post*, and dubs this phenomenon ‘the narrative fallacy’. In addition, Taleb’s own trading strategies exploit an important theme from behavioral economics: people tend to underestimate the probabilities of unfamiliar and rare events.

For another of Taleb’s forays into behavioral economics, see his recent essay, *We don’t quite know what we are talking about when we talk about volatility*, coauthored with Daniel Goldstein. *http://papers.ssrn.com/sol3/papers.cfm?abstract_id=970480*

(6) *The End* by Michael Lewis, *Condé Nast Portfolio.com*, December 2008

*http://www.portfolio.com/news-markets/national-news/portfolio/2008/11/11/The-End-of-Wall-Streets-Boom*

(7) See pp. 19-20 of the 2006 Dover reprint of Knight’s 1921 book.

(8) A fundamental result in Bayesian statistics, known as de Finetti’s Representation Theorem, is relevant here. In classical statistics it is typically assumed that quantities are independent and identically distributed (iid). Bayesian models typically make the weaker assumption of *exchangeability*: the order of a finite set of random variables does not affect the joint probability. In the coin toss example, a sequence X1, X2, X3, ... of {0,1}-valued random variables is said to be exchangeable if all finite sequences of the same length containing the same number of ones are equally likely. More formally, for all positive integers *n* and permutations π of {1, 2, ..., *n*},

$$\Pr(X_1 = e_1, \dots, X_n = e_n) \;=\; \Pr(X_{\pi(1)} = e_1, \dots, X_{\pi(n)} = e_n)$$

where each *e_i* denotes either 0 or 1. Note that the assumption of exchangeability can be interpreted as a formalization of the notion that the future is predictable based on past experience.

Let S_n = X_1 + X_2 + ... + X_n. de Finetti’s representation theorem states that the limiting relative frequency of this exchangeable sequence, Z = lim_{n→∞} S_n/n, exists with probability 1 and

$$\Pr(X_1 = e_1, \dots, X_n = e_n) \;=\; \int_0^1 p^{\sum_i e_i}\,(1-p)^{\,n - \sum_i e_i}\,\mu(dp)$$

where µ is the probability distribution of Z: µ(A) = Pr(Z ∈ A). In short, de Finetti’s theorem tells us that an exchangeable sequence can be represented as a mixture of iid sequences.

An interesting historical note is that de Finetti published many of his seminal papers while working as an actuary at the Assicurazioni Generali insurance company in Trieste. He later held chaired professorships in mathematical finance at the Universities of Trieste and Rome. de Finetti’s work was largely unknown in the Anglo-American world until L. J. Savage introduced it in the 1950s. (Savage wrote seminal papers in Bayesian decision theory with his University of Chicago colleague Milton Friedman, and the 1954 publication of his book *The Foundations of Statistics* was a watershed event in the modern renaissance of Bayesian statistics.) For reflections on exchangeability and de Finetti-type representation theorems, see Sandy Zabell’s exemplary *Symmetry and its Discontents* (Cambridge University Press, 2005).

(9) It is worth noting that the FSA’s report on lessons learned from the banking crisis, written by Adair Turner, states that Knightian uncertainty suggests that “there may be extreme circumstances in which the backup of risk socialization (e.g. the sort of government intervention now being put in place) is the optimal and the only defense against system failure.” (p.45) *http://www.fsa.gov.uk/pubs/other/turner_review.pdf*

In addition, the Nobel laureate Edmund Phelps linked the banking crisis to Knightian uncertainty in a recent *Financial Times* commentary (April 15, 2009). *http://www.ft.com/cms/s/0/41f536ee-2954-11de-bc5e-00144feabdc0.html?nclick_check=1*

“But why did big shareholders not move to stop over-leveraging before it reached dangerous levels? Why did legislators not demand regulatory intervention? The answer, I believe, is that they had no sense of the existing Knightian uncertainty. So they had no sense of the possibility of a huge break in housing prices and no sense of the fundamental inapplicability of the risk management models used in the banks. "Risk" came to mean volatility over some recent past. The volatility of the price as it vibrates around some path was considered but not the uncertainty of the path itself: the risk that it would shift down. The banks’ chief executives, too, had little grasp of uncertainty. Some had the instinct to buy insurance but did not see the uncertainty of the insurer’s solvency.”

(10) See for example Cairns, A.J.G., (2000) “A Discussion of Parameter and Model Uncertainty in Insurance”, *Insurance: Mathematics and Economics*, 27: 313-330.

(11) Emanuel Derman, “Model Risk”,* Goldman Sachs Quantitative Strategies Research Notes*, 1996. *http://www.ederman.com/new/docs/gs-model_risk.pdf*

(12) Derman’s discussion of the various sources of model risk is helpful when evaluating debates about the usefulness of VaR. At one extreme of the debate is Taleb, who calls VaR “a fraud”. At the other, Joe Nocera’s *New York Times* article *Risk Mismanagement* (January 4, 2009) suggests that Goldman Sachs – Emanuel Derman’s former employer – was able to avoid excessive losses on mortgage-backed securities in the summer of 2007 by having paid close attention to clues offered by a number of risk models, including VaR. The implication is that independently of whatever merits and flaws are intrinsic to VaR, an organization’s vulnerability to model risk hinges partly on the way it uses such models in its decision-making processes. In other words, business implementation is part of the subject of model risk.

_____________________________________________________________

This publication contains general information only and is based on the experiences and research of Deloitte practitioners. Deloitte is not, by means of this publication, rendering business, financial, investment, or other professional advice or services.

This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte, its affiliates, and related entities shall not be responsible for any loss sustained by any person who relies on this publication.

As used in this document, ’Deloitte’ means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.Deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries.

Copyright © 2009 Deloitte Development LLC, All rights reserved.