**Mark Chaplin examines some of the effective risk-management options available to companies and regulators worldwide.**

In the past, risk has usually been allowed for by taking prudent margins over best-estimate assumptions. These prudent margins are frequently set by one individual, often an actuary, and are based on a little historical data and a lot of judgement. In its most basic form, risk evaluation is little more than this, with perhaps a slightly more formal identification of the degree of prudence being targeted. There is increasing pressure, however, for more quantitative risk assessment using established techniques, for example in deriving market-value risk margins under new international accounting standards and in internal capital assessment in the UK.

As an illustration of the techniques available, in this article I want to show how a life insurer might quantify the risk of increases in mortality. For this example, we assume that the risk sensitivity is a 99.5% confidence level over a one-year time horizon; in other words, we are considering '1-in-200-years' events. However, the approaches outlined can easily be applied to different risks, time periods, and confidence levels.

Expert opinion

Expert opinion is an extremely useful tool in risk assessment and is often overlooked as a separate technique in the quantitative actuarial world. It is particularly useful where relevant data are scarce, for example where conditions have changed materially (reducing the usefulness of past experience), or where the risks are very company-specific, as would often be the case for lapse rates. In essence, the traditional prudent assumption setter was providing one expert opinion on the risk. However, it will often be appropriate to seek input from a range of experts across different disciplines.

One common approach to gathering expert opinion is to set up risk-management workshops for senior managers within a firm to discuss the relevant risks. This can be quite effective, particularly if well facilitated, but there are potential problems:

- Small groups or single experts can suffer from significant bias.

- Results can be distorted by office politics.

- There is a tendency within a group to 'follow the leader', either the most respected or, worse still, the most dominant individual in the group.

- There will generally be a reluctance to abandon previously stated views.

The Delphi method was developed by the RAND Corporation to address these possible shortcomings, and came in response to a US military request to prepare a forecast of future technological capabilities. However, the forecasting techniques developed have since been applied in a much wider range of areas. The basic approach is to:

- select a panel of experts;

- develop a first-round questionnaire on the risks to be considered;

- test the questionnaire for problems such as ambiguity and bias, send the questionnaire to the panellists, then gather and analyse the responses;

- provide a statistical summary of the panel's responses back to the panel; and

- prepare a second-round questionnaire; and so on until the results converge.
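The feedback step in the process above, summarising the panel's responses statistically before the next round, can be sketched as follows. The panel figures here are hypothetical, chosen only to illustrate the calculation:

```python
from statistics import quantiles

def summarise_round(responses):
    """Summarise one Delphi round: the median and interquartile
    range are fed back to the panel before the next questionnaire."""
    q1, q2, q3 = quantiles(responses, n=4)  # quartiles of the responses
    return {"median": q2, "iqr": (q1, q3)}

# Hypothetical first-round estimates of a 1-in-200-year
# mortality increase from a panel of seven experts
panel = [0.10, 0.15, 0.20, 0.20, 0.25, 0.30, 0.50]
summary = summarise_round(panel)
```

Feeding back only the summary statistics, rather than named individual answers, is what lets panellists revise their views without the 'follow the leader' pressure described above.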

The Delphi method still has weaknesses that are a function of human behaviour and are common to many of the other techniques available for collecting 'unbiased' expert opinion; for instance, the tendency to give greater prominence to more recent conditions or events.

As part of our research into mortality, we asked a number of medical experts and demographers for an indication of a possible 1-in-200-year deviation from expected mortality. We did not complete the full iterative process of the Delphi method, but even on the initial poll there was some agreement around a 20% variation in death rates for the year.

Historical simulation

A fairly straightforward approach to risk quantification is simply to gather as much past data as possible and use this history as a simulation of the future. For example, we can gather the daily price changes for the last 1,000 days for the shares we are currently holding in our portfolio. This generates 1,000 different scenarios for the performance of our portfolio over the coming day. If we take the 5th-worst performance of the portfolio, we will have an estimate of the one-day loss that will not be exceeded with 99.5% confidence.
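This order-statistic calculation can be sketched in a few lines. The function below assumes we already hold a list of observed portfolio changes:

```python
def historical_percentile(changes, confidence=0.995):
    """Historical simulation: sort the observed changes and take the
    order statistic in the worst tail that corresponds to the chosen
    confidence level (the 5th worst of 1,000 observations for 99.5%)."""
    ordered = sorted(changes)                        # worst outcomes first
    k = max(round(len(ordered) * (1 - confidence)), 1)
    return ordered[k - 1]                            # k-th worst observation
```

With 1,000 daily changes and a 99.5% confidence level, `k` is 5, so the function returns the 5th-worst observed outcome, exactly as described above.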

Taking the mortality example, figure 1 shows the year-on-year movement in mortality for the UK population aged 20 to 59 between 1911 and 1995. As can be seen, the largest increase in mortality was a 50% jump in 1918 as a result of the Spanish influenza epidemic. This epidemic might prove a useful guide to a possible 1-in-200-year event. However, it might equally be argued that conditions have changed, and that state-led controls for reducing the impact of epidemics are more effective now.

Lack of data is often an additional constraint on carrying out historical simulation, as a time horizon of one year limits the number of independent observations that can be made from history. This problem is exacerbated when looking at the tail of a distribution. Reliance on historical simulation also introduces a further problem, sometimes known as pro-cyclicality, whereby the occurrence of a rare, significant risk event has a double impact. First, the capital available will be depleted by the adverse impact of the risk event; second, the required capital may increase because of the larger number of significant adverse events now included in the past data set.

Normal distribution assumption

Another way of exploiting past data is simply to observe the mean and standard deviation of a particular factor, for instance equity market returns, and assume that the factor is normally distributed. The basic properties of the normal distribution then allow us to generate the chosen confidence interval around the mean by taking particular multiples of the standard deviation. This approach generally gives far less weight to the outliers in the data than would a historical simulation.

Returning to the data given in figure 1, we find that the standard deviation of annual mortality rate changes from 1911 to 1995 was around 9%. With a normal distribution, we should be 99.5% confident that the observed value will not be more than 2.58 standard deviations above the mean. This would suggest that a 1-in-200-year event would be a 23% increase in mortality above our expected change. This result is similar to that provided by our 'expert opinion'.
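The calculation can be reproduced directly from the standard normal quantile. The 9% standard deviation is the figure quoted above; a zero expected change is assumed for simplicity:

```python
from statistics import NormalDist

def normal_stress(sd, confidence=0.995, mean=0.0):
    """One-sided stress under a normality assumption: with the given
    confidence, the factor is assumed not to exceed mean + z * sd,
    where z is the standard normal quantile (about 2.58 for 99.5%)."""
    z = NormalDist().inv_cdf(confidence)
    return mean + z * sd

# Roughly a 23% increase, matching the figure quoted in the text
stress = normal_stress(0.09)
```

Note that this uses the one-sided 99.5% quantile (2.58 standard deviations); a two-sided 99.5% interval would use a smaller multiple (about 2.81 / 2 tails gives 2.81 at 99.75% each side), so the choice of tail convention matters when comparing figures between firms.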

Naturally, more complicated distributions could be assumed and the confidence intervals derived in a similar, if more mathematically complex, way. There are numerous statistical techniques for helping us to decide which distribution might best describe the risk factor being considered, and then to fit the past observed values of the risk factor to that distribution.

Extreme value theory

Under extreme value theory (EVT), events below a particular threshold are excluded from the distribution-fitting process. In effect, this exclusion assumes that small variations in the risk factor are no help when trying to predict the occurrence of very large changes, and focuses the effort on replicating the observed large changes. The positive aspect of this approach is that attention is concentrated on the part of the distribution in which we are most interested. The generalised Pareto distribution (GPD) is commonly used for modelling events above the threshold. As can be seen from our example below, the problem with the EVT approach is that the answers produced can be very sensitive to the choice of threshold. It also suffers from the problem afflicting historical simulation: the occurrence of an extreme event can have a very significant effect on the estimated risk.

In our mortality data we have only four year-on-year mortality increases in excess of 10% and only two increases greater than 25%. The sensitivity of the results (fitted using a GPD) to the choice of threshold is apparent from table 1 (right).
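The threshold sensitivity is easy to see by refitting the GPD over different thresholds. The sketch below uses a simple method-of-moments estimator and illustrative mortality-change figures; it is not the fitting procedure or the data behind table 1:

```python
from statistics import fmean, pvariance

def fit_gpd_moments(data, threshold):
    """Method-of-moments fit of a generalised Pareto distribution
    to the exceedances over the threshold: from the mean m and
    variance v of the excesses, shape = (1 - m^2/v)/2 and
    scale = m*(m^2/v + 1)/2.  Returns (shape, scale)."""
    excess = [x - threshold for x in data if x > threshold]
    m, v = fmean(excess), pvariance(excess)
    shape = 0.5 * (1.0 - m * m / v)        # xi
    scale = 0.5 * m * (m * m / v + 1.0)    # sigma
    return shape, scale

# Hypothetical year-on-year mortality increases (as fractions)
increases = [0.11, 0.12, 0.15, 0.18, 0.27, 0.50]
low_fit = fit_gpd_moments(increases, 0.10)   # six exceedances
high_fit = fit_gpd_moments(increases, 0.14)  # four exceedances
```

Even on this toy data set, moving the threshold changes both the number of exceedances and the fitted parameters, which is the instability the article describes: with only four increases above 10% in the real data, each threshold choice reweights a handful of points.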

Monte Carlo simulation

Monte Carlo simulation is a statistical sampling technique originally developed for solving complex differential equations. Basically, we assume that the evolution of a particular item of interest can be described by a probability density function; the Monte Carlo simulation is then carried out by sampling from this probability density function and tallying the results. This is a powerful technique, although it may not strictly be required where 'closed form' (that is, formula-based) solutions exist. However, Monte Carlo simulation is an approach frequently used in asset-liability modelling, and gives the user more flexibility in modelling the codependencies between multiple risk factors.

A danger that often arises when deriving the distribution of the risk factor is adding in too many parameters, and so 'over-fitting' the distribution formula to the past data. This would lead to the distribution formula explaining the past particularly well, giving very small observed 'error' terms. As a consequence, little uncertainty is projected, and the variability of the risk factor may be understated. This is an example of model risk, and can be difficult to quantify.

Taking our mortality example further, we constructed a simple stochastic mortality model and used the historical data from figure 1 to parameterise it. The model chosen was a simple ARMA (autoregressive moving average) process. Running 1,000 simulations and taking the 99.5th percentile gave a mortality increase of 31%.
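The simulation step can be sketched as follows. The ARMA(1, 1) parameters below are illustrative assumptions, not the values fitted to the figure 1 data, so the resulting percentile will not match the 31% quoted above:

```python
import random

def simulate_arma(n_sims, phi=0.3, theta=0.2, sigma=0.09, seed=1):
    """Monte Carlo simulation of year-on-year mortality changes under
    an ARMA(1,1) process: x_t = phi*x_{t-1} + e_t + theta*e_{t-1},
    with normal innovations e_t.  Each simulation runs the process
    for a few years and records the final year's change."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        x, e_prev = 0.0, 0.0
        for _ in range(5):                  # short burn-in of the process
            e = rng.gauss(0.0, sigma)
            x = phi * x + e + theta * e_prev
            e_prev = e
        results.append(x)
    return results

sims = simulate_arma(1000)
worst = sorted(sims)[994]   # 99.5th percentile of 1,000 simulations
```

Taking the 995th of 1,000 sorted outcomes mirrors the historical-simulation order statistic; in practice many more simulations (and several runs with different seeds) would be used to stabilise a tail estimate this far out.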

Reasonable methods

As can be seen from the summary in table 2, a wide range of results can be derived using a selection of reasonable methods, all (bar the expert opinion) based on the same data. This presents a significant problem for the risk modeller.

The most robust approach is to employ as many methods as possible to help understand the risk better and to provide a reasonableness check on the results from the other methods. This range of methods will also give an indication of the model risk involved. From the options available, the most appropriate method should be chosen for use in quantifying risk. This selection itself requires no little judgement.