The Actuary, the magazine of the Institute & Faculty of Actuaries

How extreme is your scenario?

Scenario testing continues to be one of the main methods of ascertaining an ICA, whether or not more sophisticated stochastic modelling techniques are used. This article sets out some issues with selecting the various parameters used in defining a scenario to a particular level of extremity, in particular a trap that could result in a systematic underestimation of the capital values produced.
I assume that the reader has a reasonable background in the use of scenario-testing techniques and hence do not expand on the capital ramifications or particular methodologies in any detail. I concentrate on the (relatively simple) mathematical concepts and results showing the potential trap, which should be easily understood by all.
I also expect that similar issues will arise in combining any distributions using non-simulation methodology. In particular, this may become an issue in the estimation of reserve uncertainty.
I am aware of at least two methods of calculating the combined effects of multiple risks for capital scenarios: weighted sum and conditional probability, each of which I address briefly below. I then use an example to show that the actual percentile of a conditional probability calculation is generally less extreme than that implied by the combination of probabilities relating to the selected scenario.

Weighted sum method
Some practitioners use a weighted sum method when defining the effect of combining risks, taking a proportion of the capital allocated to each and summing the result to give an indication of the likely impact of a particular scenario. The overall capital is defined as the maximum value calculated from a series of such scenarios each assuming a consistent level of risk tolerance. This method is generally used for risks where distributions are highly uncertain, such as operational risks.
Other ways of approaching this calculation, such as considering explicitly the events that drive the scenario, and calculating the capital that corresponds to such a series of events, are equivalent.
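The mechanics of the weighted sum method can be sketched in a few lines. The risk names, standalone capital amounts, weights and number of scenarios below are all hypothetical, chosen only to show the shape of the calculation, not taken from any real assessment.

```python
# Weighted sum method (sketch): each scenario applies a proportion to the
# standalone capital allocated to each risk; the overall capital requirement
# is the maximum over the scenarios considered.
# All names and figures are illustrative only.

standalone_capital = {"op_risk_A": 100.0, "op_risk_B": 60.0, "op_risk_C": 40.0}

# Each scenario: risk -> proportion of standalone capital assumed to bite,
# chosen to represent a consistent level of risk tolerance.
scenarios = [
    {"op_risk_A": 1.0, "op_risk_B": 0.25, "op_risk_C": 0.25},
    {"op_risk_A": 0.5, "op_risk_B": 1.0, "op_risk_C": 0.5},
]

def scenario_capital(weights):
    """Weighted sum of standalone capital amounts for one scenario."""
    return sum(standalone_capital[risk] * w for risk, w in weights.items())

overall = max(scenario_capital(s) for s in scenarios)
```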
Owing to the lack of any concrete information about the implied distribution for these risks (let alone dependence effects), this method, relying more on thought experiments and ripple effects than mathematical theories, is not necessarily directly affected by the issues I expand on in this article. However, in some instances it may be necessary to consider whether similar effects occur.

Conditional probability
If full distributions for various risks have been estimated, then an alternative method is to combine sets of these variables such that the probability of achieving a more extreme result for any risk considered (ignoring correlation effects) is set at the required probability of adequacy. It is this method that I wish to investigate further.

What severity is correct?
Purely mathematically, the correct way to determine a percentile of a combined distribution of two or more random variables is to convolve the distributions. This can be done analytically (rarely), or otherwise computationally using a number of algorithms, eg Monte Carlo simulation, Fourier transforms, etc.
In fact, most stochastic models do indeed use Monte Carlo techniques to combine the multitude of distributions used in much ICA work. It is outside these models, where there may be less experience of the critical aspects of convolution, that care is needed when combining distributions to derive capital amounts.
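As a simple illustration of Monte Carlo convolution, the sketch below estimates a percentile of the sum of two independent risks. The two ten-point discrete distributions are hypothetical stand-ins for the kind of distributions discussed here, not the values used in the article's tables.

```python
import random

# Two hypothetical discrete loss distributions: the values 1..10,
# each equally likely (10% per value). Illustrative only.
xs = list(range(1, 11))
ys = list(range(1, 11))

def mc_percentile(p, n=200_000, seed=1):
    """Estimate the p-th percentile of x + y by Monte Carlo convolution:
    sample each distribution independently, sum, and read off the
    empirical quantile of the sorted totals."""
    rng = random.Random(seed)
    totals = sorted(rng.choice(xs) + rng.choice(ys) for _ in range(n))
    return totals[int(p * n)]

q90 = mc_percentile(0.90)
```

For these distributions the exact 90th percentile of the sum is 16 (exactly 10% of the 100 equally likely outcomes exceed it), so the simulated value sits on that boundary.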
As an approximation, it is common to take samples from one distribution at particular percentiles, and then sample the other distribution such that the probability of sampling either distribution at a more extreme level is equivalent to the risk tolerance required. The results of these calculations are then compared, with the largest value taken as the required capital amount. This is therefore the calculation analogous to the weighted sum method above, but used where enough information is available to derive distributions of the effects of each risk.
However, this will consistently underestimate the value of the appropriate percentile of the combined distribution, even when the risks are uncorrelated. Equivalently, it overstates the percentile that the combined result actually represents. To illustrate this I have set out a simple example.
Consider two discrete distributions; each has a 10% chance of any value stated occurring (see table 1). Combining the distributions as shown in table 1 provides all possible results, with each result having the same probability of occurrence. We can therefore work out the ‘probability of exceedance’ for each cell, ie the estimate of the percentile given by the approximate method. However, given that we have perfect information, we can also calculate the actual percentiles of the combined distribution.
In the table, the 90th percentile has been used as an example, with the approximate calculations in yellow, and the actual 90th percentile in red. Note that, owing to the discrete nature of the distributions, the indications of the approximate method do not all have the same probability of exceedance; they have been highlighted to indicate the general area of the results of such a method.
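The values in table 1 are not reproduced here, but the effect is easy to recreate with any pair of ten-point distributions. The sketch below uses two hypothetical distributions of the values 1 to 10 and compares the scenario produced by the approximate method with the true percentile of the sum.

```python
from itertools import product

# Two hypothetical discrete distributions: values 1..10, each with
# probability 10%. The 100 combined cells each carry probability 1%.
values = list(range(1, 11))
cells = [(x, y) for x, y in product(values, values)]

def prob_exceed(z):
    """Actual probability that x + y exceeds z, counting grid cells."""
    return sum(1 for x, y in cells if x + y > z) / len(cells)

# Actual 90th percentile of the sum: smallest z with P(x+y > z) <= 10%.
actual_90 = min(z for z in range(2, 21) if prob_exceed(z) <= 0.10)

# Approximate method: pick X and Y so that the product of the tail
# probabilities P(x >= X) * P(y >= Y) equals the 10% tolerance.
# With tail counts kx, ky (number of values >= X, >= Y) this means
# kx * ky = 10, and the scenario value is X + Y = (11 - kx) + (11 - ky).
approx_90 = max(
    (11 - kx) + (11 - ky)
    for kx in range(1, 11) for ky in range(1, 11)
    if kx * ky == 10
)
```

With these illustrative figures the approximate method's best scenario is a total of 15, yet 15% of the grid exceeds 15, so the scenario is really only the 85th percentile; the true 90th percentile of the sum is 16.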
The disparity between the approximation and the actual percentile arises because the method allows imperfectly for the entire distribution when selecting the severity of the second factor. In effect, the percentages shown are the probabilities that the sampling of either distribution is more extreme than the cell chosen, not that the total value of the sum of these distributions exceeds that value.
Hence the method ignores combinations of the distributions that produce a higher total result, but sample from either distribution at a smaller value. This effect occurs for both identical and non-identical pairs of distributions.
Mathematically, this can be described in terms of conditional probability as follows:
The probability of a combined event x=X and y=Y is:
P(x=X|y=Y) P(y=Y)
Note that if x and y are independent (as in this example) the conditional part of the expression drops out. However, I have not removed the condition here to act as a reminder of the method used in the approximation.
Hence in the conditional probability method, the value of X used in estimating percentile p, given that y=Y is selected from the other distribution, is defined by:
P(x>=X | y=Y) P(y>=Y) - P(x=X, y=Y) = 1 - p
The second term on the left-hand side of the equation tends to zero for continuous distributions, and we recover the more familiar ‘rectangular’ result.
This is shown graphically by the blue figures in table 2 for the case x=6. Here the approximate method places the approximation of the 90th percentile at y=27 (a probability of exceedance of 9%), with this result shaded green.
We define the combined distribution z as
z = x + y
Hence from the relation above, the approximate method gives
P(x>=Z-Y | y=Y) P(y>=Y) - P(x=Z-Y, y=Y) = 1 - p
However, the actual value of the percentile p of the combined distribution z, where z=Z, is defined as:
Σ_Y P(x>Z-Y | y=Y) P(y=Y) = 1 - p
ie for each row in table 2, count the squares that have a total larger than Z (33 in this case). In the example this gives the blue figures as defined above, but also the red figures, giving a probability of exceedance of 13%.
Note that as the distributions are discrete, this formula does not necessarily uniquely define Z for any given p, but for continuous distributions the relationship holds identically (replacing the sum with an integral). This revised formula selects the cells identified by the approximate method together with the red cells, giving the correct percentile for the value highlighted green.
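The row-by-row count translates directly into code: for each row y=Y, count the cells whose total exceeds Z, weight by P(y=Y), and sum. The sketch below again uses a hypothetical pair of uniform ten-point distributions rather than the values of table 2, and exact fractions to avoid rounding.

```python
from fractions import Fraction

# Hypothetical discrete distributions: values 1..10, probability 1/10 each.
values = list(range(1, 11))
p_y = Fraction(1, 10)  # P(y = Y) for every Y

def exceedance_prob(z):
    """Sum over Y of P(x > z - Y | y = Y) * P(y = Y), x and y independent.
    For each row y = Y this counts the cells whose total exceeds z."""
    total = Fraction(0)
    for Y in values:
        tail = sum(1 for x in values if x > z - Y)  # cells in row Y beyond z
        total += Fraction(tail, len(values)) * p_y
    return total
```

For these illustrative distributions the formula gives an exceedance probability of exactly 10% at a combined value of 16, confirming 16 as the true 90th percentile of the sum.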
This shows that even if P(y) is constant (as in this example) and x and y are independent, the value of the percentile depends on all values of y rather than just those greater than a single selected value. The disparity between the approximate and actual percentiles increases dramatically as the number of distributions combined increases. This is due to the ‘space’ being investigated becoming more limited. For example, with three distributions, pictorially represented as a cuboid, the method would only look at a small element of the total possible results, rather than all relevant combinations.
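The growth of the disparity can be checked by brute force. The sketch below extends the earlier hypothetical example to three independent ten-point distributions (values 1 to 10, so each cell of the cuboid carries probability 0.1%):

```python
from itertools import product

# Three hypothetical discrete distributions: values 1..10, equally likely.
values = list(range(1, 11))
cube = list(product(values, values, values))  # 1,000 equally likely cells

def prob_exceed(z):
    """Actual probability that the three-way sum exceeds z."""
    return sum(1 for cell in cube if sum(cell) > z) / len(cube)

# Actual 90th percentile of the three-way sum.
actual_90 = min(z for z in range(3, 31) if prob_exceed(z) <= 0.10)

# Approximate method in three dimensions: tail counts kx * ky * kz = 100,
# ie the product of the three tail probabilities equals the 10% tolerance.
approx_90 = max(
    (11 - kx) + (11 - ky) + (11 - kz)
    for kx, ky, kz in product(range(1, 11), repeat=3)
    if kx * ky * kz == 100
)
```

Here the best approximate scenario totals 19, which 28.3% of the cuboid exceeds, so it is really only around the 72nd percentile, against a true 90th percentile of 23. The understatement is markedly worse than in the two-risk case.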
Hence simply working on a combination of probabilities to derive a capital value based on the probability of a particular combination of factors will understate the overall capital.
Simple tables, such as the example here, can be used to give a better approximation of the combined result. In particular, the table can be limited to concentrate on the top (or bottom) of the distributions, limiting the calculations required and improving the approximation where the distributions are continuous. Note that care should be taken when looking at partial distributions to ensure that enough of the distribution is captured to give the correct percentile.
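Limiting the table to the tails can be sketched as follows, again with the hypothetical uniform ten-point distributions used earlier; the figures also show why the final caveat matters, since too short a tail silently drops cells that should count.

```python
from itertools import product

# Hypothetical discrete distributions: values 1..10, equally likely.
values = list(range(1, 11))
full_cells = len(values) ** 2  # 100 cells, 1% each, in the full grid

def tail_exceed(z, k):
    """Probability that x + y > z, counting only cells where both
    components lie in the top k values of their distributions.
    The divisor stays at the full grid size, so a k that is too small
    understates the true exceedance probability."""
    tail = values[-k:]  # top k values of each distribution
    return sum(1 for x, y in product(tail, tail) if x + y > z) / full_cells
```

With the top four values of each distribution, the full 10% exceedance probability at a combined value of 16 is recovered; truncating to the top three drops the answer to 8%, because cells such as (7, 10) fall outside the partial table.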

Potential error
This article explains a potential error in a method for estimating percentiles of a combined distribution of risk events. Although defining any such set of single distributions alone is generally fraught with uncertainty, using a simple calculation similar to that shown above may help to better identify the appropriate scenarios to be used when calculating capital values, making the most of the limited information available.