Open-access content
Tuesday 10th April 2012 (updated 5.13pm, Wednesday 29th April 2020)

Imagine if, at a meeting, you offered your colleague a cup of tea, and he asked for it to be made with seven spoons of sugar and a negative drop of milk, because that is what his model determined to be the optimal combination.

That would be strange. Of course, in this situation it would be clear to all that his model had severe limitations, what those limitations were, and how to screen its output for sensible answers. Things aren't so simple, unfortunately, when trying to use financial models containing a range of explicit and implicit assumptions to determine an optimal set of management actions in respect of a complex product mix.

Actuaries are often accused of building models that are more complex than their intended use requires, and in some cases these accusations are valid. At the same time, there are situations in which models are too simplistic for the reality they are trying to represent, or where model output is misused, leading to inappropriate decision-making. Granted, we do not need a baseball bat to kill a spider; on the other hand, we can't paint a house with a toothbrush, and if we try, we should be aware of the consequences.

Understanding and communicating model risk is at the heart of effective risk management, and of particular importance in an environment in which the use of models to manage risks is increasing. In order to communicate these limitations to decision-makers, it is crucial that we, as actuaries, have a complete grasp of them first.

**Embedding and Solvency II**

Over the last ten years, regulators have increasingly expected the insurance and banking industries to 'embed' their internal capital models within their businesses. This encompasses pricing, performance management, strategy and so on.

Solvency II requires a 'Use Test' to be passed before an internal model is approved for regulatory capital reporting. If this test is failed, then the firm has to revert to the use of 'Standard Formulae' for regulatory reporting, which may in many cases be more onerous, or felt to be inappropriate. This 'Use Test' requirement is considered by many to be a big step forward in risk management, but if the embedding process is not managed properly there could be unintended consequences of the not-so-good variety.

So, why are we required to embed these models in our business decision-making in order to use them for calculating regulatory capital? Surely the correctness or otherwise of a capital model does not depend on whether it is being used in decision-making?

One possible reason for the Use Test could be that, if an organisation is prepared to use the results of a model in its decision-making, then this gives greater confidence that the organisation believes its own model, thereby increasing the likelihood that it is correct! The more obvious reason though is that, given the increasing recognition by regulators of the importance of good risk management - not just risk measurement - regulators want to provide a direct incentive to firms to take risk management seriously.

All models are, by definition, simplifications of reality and, therefore, all models have their limitations. A non-actuary once told me:

"All business decisions should be driven by common sense, not by models."

And while this is a good maxim, things aren't so simple. Models can be used to inform questions like: "How much of this action should I take?" and are good for identifying potential pitfalls that haven't been spotted by our common sense. So models do have a use, but let's use them sensibly.

**Regulatory arbitrage - don't do it!**

How many times has your finance director or a member of the Board approached you and asked: "What do we need to do to maximise our profits?" or "What's the best way to reduce our capital requirements?" Hopefully many! Profit maximisation and capital reduction are positive things that add value to the business. But have you ever had to consider an action that, while it optimises this or that financial metric, might actually be flawed? Conversely, have you ever been in a position where you've felt that a proposed action is a good action to take but it risks being rejected because its benefits are not visible in the existing set of financial metrics used by the firm?

Regulatory arbitrage occurs where organisations take actions which optimise their regulatory (capital) position but without properly considering the effect on the full distribution of financial outcomes. A variation on regulatory arbitrage involves optimising a set of performance metrics visible to the external world. Every time a new set of reporting metrics is introduced, there is a risk that the business is over-managed to optimise these metrics. These situations could arise as a result of superficial performance objectives being set within an organisation.

Here is an example of regulatory arbitrage leading to negative consequences:

**Scenario - Good regulatory outcome but bad decision**

Suppose the capital that I hold for equity risk is determined by a fixed level of market stress, prescribed in the regulations. Reducing my regulatory equity capital to zero is a straightforward task even where no perfect economic hedging assets for my risk exist - I simply put in place the exact equity hedge that neutralises the prescribed stress. Job done!

What I would be missing, of course, is the fact that, for most 'normal' scenarios, I have worsened my prospects. In technical terms, the delta that I am hedging exceeds my instantaneous delta, and the more extreme the stress we target, the further from economic hedging we go. A successful economic hedging and monitoring strategy depends not only on the instantaneous delta at a point in time, but also on the rate at which that delta changes with respect to the underlying risks.
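
To make the over-hedging concrete, here is a minimal sketch with entirely hypothetical figures, using a European put as a stand-in for a convex guarantee liability: the hedge sized to neutralise a prescribed -40% equity stress comes out noticeably larger in magnitude than the instantaneous delta, so the book is over-hedged in normal conditions.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Black-Scholes price of a European put: a simple convex liability."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, K, r, sigma, T = 100.0, 100.0, 0.02, 0.25, 5.0
stress = -0.40  # prescribed regulatory equity stress (hypothetical)

# Hedge sized to exactly neutralise the prescribed stress:
# the liability change over the stress, divided by the equity move.
dL_stress = bs_put(S * (1 + stress), K, r, sigma, T) - bs_put(S, K, r, sigma, T)
stress_delta = dL_stress / (S * stress)

# Instantaneous (economic) delta via a small central difference.
h = 0.01
inst_delta = (bs_put(S + h, K, r, sigma, T) - bs_put(S - h, K, r, sigma, T)) / (2 * h)

print(f"stress-neutralising hedge delta: {stress_delta:.3f}")
print(f"instantaneous delta:             {inst_delta:.3f}")
# Because the liability is convex in S, the stress-based hedge is larger
# in magnitude than the instantaneous delta: it over-hedges small moves.
```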

In this example, I have optimised my regulatory position but, in so doing, I have altered the underlying probability distribution of my capital requirements in a way that is sub-optimal, in some sense.

It may be tempting to look at this example and conclude that, while this issue may exist for standard stresses, it is solved by the use of a model which derives a capital distribution. Yes, this particular problem (of ignoring the underlying distribution) goes away, but it is replaced by a more subtle one - we are only picking off one point in a distribution, and paying no attention to the rest of it. This is the main shortcoming of the VaR measure.
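
The point can be shown numerically. In this sketch (hypothetical figures throughout), two books are scaled to report exactly the same 99.5% VaR, yet one is far more dangerous beyond that point:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Two hypothetical loss distributions: thin-tailed vs heavy-tailed.
losses_a = rng.normal(0.0, 1.0, n)
losses_b = rng.standard_t(df=3, size=n)

# Rescale B so both books report the same 99.5% VaR.
q = 0.995
var_a = np.quantile(losses_a, q)
losses_b *= var_a / np.quantile(losses_b, q)
var_b = np.quantile(losses_b, q)

# Identical headline capital...
print(f"99.5% VaR: A = {var_a:.3f}, B = {var_b:.3f}")

# ...but very different behaviour beyond that point (expected shortfall).
es_a = losses_a[losses_a >= var_a].mean()
es_b = losses_b[losses_b >= var_b].mean()
print(f"expected shortfall past VaR: A = {es_a:.3f}, B = {es_b:.3f}")
```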

Conversely, there are countless examples of actions which do not appear to significantly improve the extreme tails of a distribution but may in fact be decent actions to take when considering the overall capital distribution. This is often seen, for example, in situations where a hedging strategy introduces a large interaction between market and lapse risk, and the scenarios in which these items interact most are seen in the extreme tails.

**Getting the model right**

In 1973, Black and Scholes published their famous paper showing that, under certain conditions, the price of a financial option could be determined. This was the breakthrough achievement in modern finance, one worthy of a Nobel Prize, and the mathematics that flowed from it was beautiful. Of crucial importance, though, is that the limitations of the model were clearly defined and, since the paper's publication, much research in the field of mathematical finance has been aimed at addressing those limitations.
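
For reference, the Black-Scholes call-price formula is short enough to state in full; its famous limitations live entirely in the assumptions noted in the comment (inputs below are illustrative).

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes (1973) price of a European call, under the model's
    assumptions: constant volatility and interest rate, lognormal prices,
    and continuous, frictionless hedging."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(f"{price:.4f}")  # ~10.45 for these inputs
```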

Stochastic volatility, stochastic interest rates, jump diffusions and a whole suite of modelling tools have been built to enhance the basic theory and, at the same time, the derivatives market has evolved from the original set of vanilla products to highly exotic ones involving path dependency, complex underlying asset structures and so on.

It feels as though the insurance industry is now at its 'Black-Scholes moment', with stochastic techniques from mathematical finance finally replacing the simple deterministic formulae of the past. In the insurance case, though, the products have not evolved in line with the models. The models are all new, but the products are highly exotic - consider, for example, the complex path-dependence of payouts on with-profits contracts. Several concerns still need to be addressed, and here are some examples of limitations of our existing assumption setting:

- Do our statistical distributions adequately capture extreme events?

- Is the focus of our calibration of marginal distributions and copulae on the right part of the distribution in situations where one or more of the risks is being hedged?

- Is it appropriate to calibrate market models to vanilla option prices and then to use these models to value highly exotic benefit structures?

The list goes on, and to some extent we can claim that we are in a better position than ten years ago. But do we understand our model limitations in the same way that Black and Scholes did? And, if not, how can we expect senior management to understand these limitations and manage their risks effectively?

And for those who think that these concerns are secondary, and should be dealt with in the small print, it should be noted that some academics have blamed the recent banking crisis at least in part on the over-use of Gaussian copulae in modelling dependency. Search for "Banking crisis Gaussian copula" on your favourite search engine to see a list of examples.

**Missing links**

One mistake to avoid is to see 'risk modelling' as chronologically preceding 'risk management': a tool is developed, and then the results are used to inform decision-making. This 'siloing' takes us away from a unified theory of risk measurement and management, as the following examples illustrate.

*Formula-fitting*

The first example concerns formula-fitting of changes in insurance liabilities. This is a complex process but a necessary part of the toolkit for Internal Model actuaries with finite computing capability. Some see it as merely one component of the modelling capability, with risk management being something that happens after the model is fully developed and results produced. Wrong! Formula-fitting is not just a useful modelling tool; it gives us the key to effective risk management. Through an understanding of these formulae (and their limitations, of course), and of their first and second partial derivatives with respect to the underlying risk variables, the mysteries of risk and how to hedge it are unlocked.
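
A minimal sketch of the idea, with made-up liability values standing in for full stochastic model runs: fit a low-order proxy formula to the heavy model's output, then read the hedge ratio and the convexity straight off the proxy's derivatives.

```python
import numpy as np

# Hypothetical 'heavy model' liability values at a handful of equity
# stress points (in practice these come from full stochastic runs).
stresses = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])
liabilities = np.array([152.0, 128.0, 110.0, 97.0, 88.0])

# Fit a quadratic proxy formula L(x) ~ a*x^2 + b*x + c.
proxy = np.polynomial.Polynomial.fit(stresses, liabilities, deg=2)

# The proxy's derivatives are the risk-management quantities:
# first derivative  -> sensitivity (what to hedge),
# second derivative -> convexity (how fast the hedge drifts).
d1 = proxy.deriv(1)(0.0)
d2 = proxy.deriv(2)(0.0)
print(f"sensitivity at base: {d1:.1f}, convexity: {d2:.1f}")
```

Here the negative first derivative says the liability falls as equities rise, and the positive second derivative warns that a static hedge will drift as markets move.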

*Risk appetite*

A second example is the siloing of setting internal risk appetites and modelling of risk. Many economic models in development today ignore the need to dynamically manage the investment strategy in each modelled scenario in order to ensure compliance with a firm's own risk appetite, and the associated costs of rebalancing. This can lead to inappropriate investment strategies being set up front.

**Regulatory over-reliance**

It is almost certain that there will be firms that see good risk management as being somehow connected to a regulatory regime. These firms tend to focus exclusively on 'Solvency II' risk, namely the uncertainty around the final text and the resulting uncertainty around its regulatory capital requirements. Of course this is a major risk to many firms but, at the same time, it is important that firms develop their own in-house views on economic capital, and manage risk because risk management is a good thing, not because it is a regulatory requirement. Surely any business in any industry, even in the absence of regulation, should have a model of its risks and of its own value - a model that it believes, and uses to aid decision-making?

**Model risk - what can we do?**

As good actuaries, we will be aware of the risk around the models and assumptions that we use, and will communicate these to decision-makers. Any mathematical model used to describe reality is a simplification of that reality, and this is true across all industries and academic pursuits. When we say 'the 99.5% loss is X', what we really mean is 'the 99.5% loss, conditional on the model and assumptions being correct, is X'.

The question is: how do we communicate model risk to decision-makers? No senior executive wants to receive a report that looks like this:

------------------------------------------------------------

Numbers*

Numbers*

Numbers*

Numbers*

** For heaven's sake, do not use these numbers.*

------------------------------------------------------------

If we had an infinite amount of time and processing capability on our hands, we could develop an N-dimensional model of capital, where N is the number of modelling decisions or parameters that we are uncertain of, and each of these in turn is a random variable with an associated probability distribution (with uncertain parameters). Each realisation of these N variables would produce its own capital distribution, and we would read off our 99.5% capital from this N-dimensional capital distribution.

Clearly, this is not something that any sane person would consider implementing, and is merely an amusing thought experiment. In reality, the best that we can do is to look at the sensitivity of our capital results to different methodologies and assumption sets, and to communicate the key sensitivities to decision-makers.
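
That sensitivity analysis can be as simple as recomputing the headline figure under a handful of alternative calibrations and reporting the spread. A sketch, with every calibration below purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical alternative calibrations of one risk driver: same centre,
# different tail shapes -- each a defensible modelling choice.
assumption_sets = {
    "normal, sigma=1.0": lambda: rng.normal(0.0, 1.0, n),
    "normal, sigma=1.2": lambda: rng.normal(0.0, 1.2, n),
    "student-t, df=4":   lambda: rng.standard_t(4, n),
}

# One 99.5% capital figure per assumption set: the spread across the sets
# is itself a crude measure of model risk, worth reporting alongside any
# single number.
capitals = {name: np.quantile(draw(), 0.995) for name, draw in assumption_sets.items()}
for name, c in capitals.items():
    print(f"{name}: 99.5% capital = {c:.2f}")
```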

In the real world, recommendations need to be made, and decisions need to be taken. The key thing for us, as actuaries, is not to be wedded too closely to any one model, to understand and explain the limitations of our model, and to emphasise the impacts of proposed actions on the entire distribution of outcomes. Common sense, really!

*Richard Schneider is a life insurance actuary hoping to make the move from industry into research.*
