Dr Andreas Tsanakas examines the use of internal models in insurance, the associated pitfalls, and how solvency regulation should approach them

In an insurance world preoccupied with Solvency II, internal models have increased in both complexity and business significance. Nonetheless, actuaries are painfully aware of models' limitations in representing the economic world, not least because of their reliance on often arbitrary assumptions. It is commonplace to say that "all models are wrong but some are useful".1 But in what sense might an internal model be wrong? If a model is wrong, how can it be useful? And, significantly, what incentives does regulation produce for model development and use?
Solvency capital requirements involve calculating the probability of extreme events, as well as the probability that a confluence of such or less extreme events produces a high financial loss. The focus on rare events makes statistical estimates intrinsically unreliable, as they are obtained from analysing limited, sometimes non-existent, data. Moreover, the complexity of models, mirroring that of insurance enterprises, exacerbates potential errors by increasing the sensitivity of model outputs to assumptions that cannot be supported by empirical evidence. For example, for highly 'granular' internal models, changes in correlation parameters that are easily dominated by statistical error lead to swingeing movements in estimated portfolio value at risk (VaR).
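As a rough illustration of this sensitivity, the sketch below (with invented distributions and parameters, not drawn from any actual internal model) compares the estimated 99.5% VaR of a simple two-line portfolio under two correlation assumptions that a short loss history could not statistically tell apart.

```python
import numpy as np

# Hypothetical illustration: a shift in an assumed correlation that sits well
# within the estimation error of a short loss history can move the estimated
# 99.5% portfolio VaR noticeably. All figures here are invented.

N_SIMS = 1_000_000

def portfolio_var(rho, quantile=0.995, seed=1):
    """99.5% VaR of two lognormal loss lines joined by a Gaussian copula."""
    rng = np.random.default_rng(seed)              # fixed seed for reproducibility
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=N_SIMS)
    losses = np.exp(1.0 + 0.8 * z[:, 0]) + np.exp(1.0 + 0.8 * z[:, 1])
    return np.quantile(losses, quantile)

for rho in (0.20, 0.35):   # a difference easily swamped by statistical error
    print(f"correlation {rho:.2f}: 99.5% VaR ~ {portfolio_var(rho):.1f}")
```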
Still, such problems do not deter us from quantitatively modelling risks. When technically valid estimates are hard to come by, we are happy to make do with estimates that are socially valid - they are 'shared by others, are stable, and are believed in with confidence'.2 We may ask: "How much capital should be reasonably allocated to operational risk?" The expert judgment used to answer it does not involve a mental calculus of probabilities; instead it considers the social expectations of stakeholders. If '12% of the total capital' happens to be the answer, it is reasonable only because we agree that it is.
When such issues are acknowledged by insurance practitioners, it can be with resignation. But despairing at models' lack of technical validity is to misunderstand their function. Whether models are importantly wrong depends on the application. While accurately estimating a 1-in-200-years loss is illusory, models may help answer other questions satisfactorily, such as the probabilities of less extreme scenarios or the relative impacts of exposure changes on the total risk profile. More generally, the usefulness of a model is not reducible to the accuracy of its outputs. A model is a metaphor.3 Models can be presented with different inputs and their outputs studied; such interrogations help us make sense of the aspect of reality that is being modelled.4
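A minimal sketch of that second use, again with invented figures: even if the absolute 1-in-200 estimate is unreliable, the model can still rank where additional exposure hurts most, by comparing the change in VaR from growing each line of business.

```python
import numpy as np

# Hypothetical sketch: using a model to compare the *relative* impact of
# exposure changes, rather than trusting the absolute tail estimate.

N_SIMS = 1_000_000
rng = np.random.default_rng(2)
z = rng.multivariate_normal(np.zeros(2), [[1.0, 0.3], [0.3, 1.0]], N_SIMS)
line_a = np.exp(1.0 + 0.5 * z[:, 0])    # thinner-tailed line of business
line_b = np.exp(0.5 + 1.0 * z[:, 1])    # heavier-tailed line of business

def var_995(total_losses):
    return np.quantile(total_losses, 0.995)

base = var_995(line_a + line_b)
impact_a = var_995(1.10 * line_a + line_b) - base   # grow line A by 10%
impact_b = var_995(line_a + 1.10 * line_b) - base   # grow line B by 10%
print(f"Change in 99.5% VaR: +10% on A -> {impact_a:.2f}, +10% on B -> {impact_b:.2f}")
```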
Educational process
In particular, internal models can be used to educate management in aspects of risk, by illustrating concepts, analysing scenario impacts, studying sensitivities, and showing the range of possible outcomes. We learn from modelling itself, not from summaries of model outputs. Moreover, models are tools for communicating risk across organisations and informing commercial transactions. For example, model output is often used to demonstrate the value of a reinsurance product to a potential buyer. As the subprime credit crisis has shown, models can also be used to convince investors in complex products that they are not taking on much risk. But the resulting criticism of models often misses the point: the problem was not that models were wrong, but that enough people were willing to believe otherwise.
So what is the role of regulation? Regulation and rating agency requirements have done much to move the focus of risk modelling to quantities that cannot be reliably quantified, such as extreme percentiles. Regulators are naturally aware of the substantial potential for model error. Such awareness must, at least in part, be behind the increased emphasis that Solvency II places on model validation and documentation, and on embedding models into decision-making. This is sensible but not unproblematic. First, the focus on extreme events of low probability makes internal model output not only potentially inaccurate, but also hard to validate. After several years, it is possible to judge whether a long-tail liability portfolio was under-priced, but we may never know whether the portfolio had been capitalised consistently with the regulatory standard. Second, embedding the internal model into decision-making processes is seen as evidence of management's confidence in the model. It would be wrong to let such confidence count as evidence of technical validity.
Non-conformity cost
A different sort of problem arises from regulation establishing a causal link between a company's available assets (an economically driven figure) and internal model outputs (a statistical construct). Under Solvency II, internal model approval is often perceived to confer an economic advantage, by lowering the capital requirement in comparison with the standard formula. Consequently, the substantial investment in internal models may reflect the perceived cost of non-conformity, rather than management's own desire to be educated in the statistical aspects of risk. If insurance firms perceive that openness about the limitations of their modelling puts model approval at risk, they may try to conceal such limitations. But this has a corrosive effect, as 'confidence in the model' rather than 'learning from modelling' becomes a key story within the organisation.
As long as regulatory approval of the internal model is business-critical, modellers, along with other professionals, are required to make approval happen. Experienced modellers are a scarce resource, paid to deliver confidence, not doubt. Some may not risk undermining their role and status within the organisation by being fully open with management about uncertainties. A deeply embedded risk culture is needed to avoid such perverse incentives. Solvency II does not merely require strong corporate governance; it depends on it.
The regulator's role is no less challenging. Suppose a firm decides to be candid about the potential for model error, and shows the regulator the sensitivity of capital requirements to unverifiable statistical assumptions. While the honesty will be appreciated, once the potential inaccuracy of model output is on the record, it cannot be ignored. If any internal model is ever to be approved, regulators need to be tough in the overall supervisory review, but tactful in their explicit enquiries about the accuracy of model outputs.
So what should we do? Breaking the nexus between regulatory capital requirements and statistical risk modelling is not a realistic choice. Problems such as those described here are the price we pay for principles-based regulation. But a thorough validation process primarily provides evidence of the quality of a firm's reasoning about risk; assurance over the accuracy of model outputs can only be of secondary importance. In that context, the model-approval process is a platform for having informative conversations about risk. Pretending that model outputs at the 1-in-200-years level can be meaningful may be the premise of such conversations. Even if the pretence is somehow useful, we should question whether it is necessary.
References
1. Box (1976), 'Science and statistics', Journal of the American Statistical Association 71 (356), 791-799.
2. March (1994), A primer on decision-making: how decisions happen, New York: Free Press.
3. For models as representations and much more, see Edwards and Hoosain (2012), The philosophy of modelling, www.sias.org.uk/diary/view_meeting?id=SIASMeetingJune2012
4. Morgan (2001), 'Models, stories, and the economic world,' Journal of Economic Methodology 8 (3), 361-384.