
Solvency II: Boxing clever with the one-year test

Solvency II, the incoming regulatory regime for insurers, is based around measuring the risk of an insurer becoming technically insolvent over the following year. That is, it is interested in the assessment of the assets and liabilities in a year’s time. This is referred to as the one-year test, or a balance sheet-to-balance sheet test.

The balance sheet at time 1 will contain an estimate of the ultimate claims cost based only on information available at that time. This is very different from the way many actuaries have traditionally thought about and communicated risk, which is to think about the actual ultimate claims cost straight away.

There are two common approaches to implementing this one-year test: the ‘actuary in a box’ approach and the ‘recognition pattern’ approach, also known as ‘proportional emergence’. This article gives an introduction to the two approaches, in particular noting that each actually covers a number of different models rather than being a uniquely defined method. It then sets out why I think that ‘actuary in a box’ is the better approach, both theoretically and practically, when it comes to parameterisation, back-testing and the use test.

The two approaches
The ‘actuary in a box’ approach (AIAB) seeks to model the reserves one year in the future explicitly and directly by attempting to do what the actuary would do in practice. It does this by simulating the future evolution of the sources of information used in the valuation process and specifying an algorithm that describes how the actuary would convert this information into reserves.

The most common approach I have seen uses only the claim payments. For example, when looking at reserving risk, the approach is to simulate the next diagonal of the paid triangle using a standard bootstrap. A frequency-severity model is used to generate the lower-left entry of this extended paid triangle (the first payment for the new origin year). A chain ladder is then applied to this extended paid triangle to give the best estimate reserves in that simulation. Doing this many times builds up a distribution of the best estimate reserves in one year’s time.
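As a rough illustration of this paid-only variant, the sketch below (in Python) simulates a noisy next diagonal and then re-applies a chain ladder to the extended triangle to obtain that simulation’s best estimate at time 1. The triangle values, the lognormal noise used in place of a full bootstrap, and function names such as simulate_time_one_reserve are illustrative assumptions rather than a prescribed implementation; the new origin year that a frequency-severity model would add is only noted in a comment.

```python
# A minimal sketch of the paid-only 'actuary in a box' idea, not a prescribed
# model: simulate one more diagonal of the cumulative paid triangle, then
# re-apply a chain ladder to the extended triangle to get that simulation's
# best estimate at time 1. All figures are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Cumulative paid triangle: origin years in rows, development periods in columns.
triangle = [
    [100.0, 180.0, 220.0, 240.0],
    [110.0, 200.0, 245.0],
    [120.0, 210.0],
    [130.0],
]

def chain_ladder_factors(tri):
    """Volume-weighted development factors from column j to column j + 1."""
    max_dev = max(len(row) for row in tri)
    factors = []
    for j in range(max_dev - 1):
        num = sum(row[j + 1] for row in tri if len(row) > j + 1)
        den = sum(row[j] for row in tri if len(row) > j + 1)
        factors.append(num / den)
    return factors

def chain_ladder_reserve(tri):
    """Project each origin year to ultimate and return the total reserve."""
    factors = chain_ladder_factors(tri)
    reserve = 0.0
    for row in tri:
        ultimate = row[-1]
        for f in factors[len(row) - 1:]:
            ultimate *= f
        reserve += ultimate - row[-1]
    return reserve

def simulate_time_one_reserve(tri, cv=0.05):
    """One simulation: add a noisy next diagonal, then re-reserve at time 1.
    Lognormal noise around the chain-ladder factor stands in for a proper
    bootstrap of the paid triangle."""
    factors = chain_ladder_factors(tri)
    extended = [row[:] for row in tri]
    for row in extended:
        j = len(row) - 1
        if j < len(factors):
            row.append(row[-1] * factors[j] * rng.lognormal(mean=0.0, sigma=cv))
    # The lower-left entry for the new origin year (from a frequency-severity
    # model) is omitted to keep the sketch short.
    return chain_ladder_reserve(extended)

sims = np.array([simulate_time_one_reserve(triangle) for _ in range(5000)])
print(f"Mean best estimate at time 1:  {sims.mean():.1f}")
print(f"99.5th percentile at time 1:   {np.percentile(sims, 99.5):.1f}")
```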

However, especially in the London market, incurred (ie. paid plus outstanding) claims generally form the basis of reserving, so restricting AIAB to use only paid data is unrealistic.

Better models simulate the timing of the reporting of claims, as well as their payment times. So the frequency-severity model would generate the incurred claims figure in a year’s time, as well as the paid claims. We can then apply a Bornhuetter-Ferguson algorithm to this incurred claims figure, which is often what would be done in practice.
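To show how the Bornhuetter-Ferguson step might then slot in, the fragment below converts a simulated incurred figure one year ahead into a best-estimate ultimate. The premium, prior loss ratio and expected reported proportion are invented inputs, and bornhuetter_ferguson is a hypothetical helper rather than part of any standard library.

```python
def bornhuetter_ferguson(incurred_at_1, premium, prior_loss_ratio, pct_reported_at_1):
    """BF ultimate = claims reported to date + expected unreported claims."""
    expected_ultimate = premium * prior_loss_ratio
    return incurred_at_1 + expected_ultimate * (1.0 - pct_reported_at_1)

# Illustrative only: a simulated incurred of 240 one year ahead, premium of 400,
# a 70% prior loss ratio and 80% of claims expected to be reported by time 1.
ultimate_at_1 = bornhuetter_ferguson(240.0, 400.0, 0.70, 0.80)
print(f"BF best estimate ultimate at time 1: {ultimate_at_1:.1f}")  # 296.0
```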

It is true that this approach takes slightly more effort to implement and does increase the number of parameters. However, in my opinion the extra effort is justified by more useful results.

The recognition pattern approach first models the ultimate claims cost and then models the reserves after one year based on this ultimate cost. It could be argued there is a degree of clairvoyance in the approach. Its popularity in practice arises at least in part for historical reasons — UK insurers have created models to project ultimate costs as part of their ICA processes and there is a natural desire to try to convert this into a one-year model through the use of a recognition pattern.

There are many different variants in practice. The pattern can be fixed or stochastic, and it can be applied to the total ultimate claims cost or just to the ‘actual versus expected’ claims cost.
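As a hedged sketch of one such variant, the fragment below applies a fixed recognition pattern to the ‘actual versus expected’ movement: a set proportion of the simulated movement in the ultimate is recognised in the time 1 estimate. The prior ultimate, the 60% recognition proportion and the lognormal stand-in for the insurer’s existing ultimate model are all assumptions for illustration.

```python
# One recognition-pattern variant, illustrative numbers throughout: recognise a
# fixed share of the simulated 'actual versus expected' movement at time 1.
import numpy as np

rng = np.random.default_rng(2)

prior_ultimate = 300.0   # best estimate of the ultimate claims cost at time 0
recognition = 0.60       # proportion of the movement recognised after one year

# Step 1: simulate the ultimate claims cost (a simple lognormal stand-in for
# the insurer's existing ultimate/ICA-style model).
simulated_ultimate = prior_ultimate * rng.lognormal(mean=0.0, sigma=0.10, size=10_000)

# Step 2: recognise a fixed proportion of the actual-versus-expected movement.
estimate_at_1 = prior_ultimate + recognition * (simulated_ultimate - prior_ultimate)

print(f"99.5th percentile of the time 1 estimate: {np.percentile(estimate_at_1, 99.5):.1f}")
```

A stochastic variant would replace the fixed proportion with a simulated one, and the pattern could equally be applied to the total ultimate rather than to the movement.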

A formalisation
To set out the differences between the two approaches more precisely, I use the following notation:

F(n,t) represents the filtration over time generated by the first n sources of information. F(∞,t) represents all available information at time t. One should not worry too much about how different information sources are indexed — all that is necessary is that if n<m then F(n,t) contains less information than F(m,t).

U is the ultimate claims. U is only known at time T, the time of ultimate run-off.

Under the one-year test we are seeking to calculate E^A[U | F(∞,1)], ie. the expectation at time 1 of the ultimate claims cost based on all available information at that time. The superscript ‘A’ denotes that, in theory, this is under the actuary’s risk measure. That is, the ‘best estimate’ depends on the actuary and is not a unique number.

I prefer to write this as follows: we seek A(F(∞,1)), the application of algorithm A, which requires information contained within F(∞,1). This highlights that we need to model the algorithm as well as the information.

The AIAB method can then be described as specifying an algorithm A’(F(n,1)) based on the limited information set we model.

This leads to the identity A(F(∞,1)) = A’(F(n,1)) + [A(F(∞,1)) – A’(F(n,1))].

This highlights that it is insufficient to simply implement an algorithm. One should also have a stochastic error term to reflect the uncertainty in whether the algorithm is the true one and the uncertainty in whether sufficient information has been modelled. This is a point that I have not seen raised elsewhere.
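One way of making this explicit, using the article’s notation and introducing an error term ε purely for illustration, is to split the bracketed difference into an information component and an algorithm component (this assumes the ‘true’ algorithm A can formally be applied to the limited information set F(n,1)):

```latex
% Illustrative split of the one-year error term (requires amsmath for \text):
\[
  A\bigl(F(\infty,1)\bigr) = A'\bigl(F(n,1)\bigr) + \varepsilon ,
\]
\[
  \varepsilon =
  \underbrace{\Bigl[A\bigl(F(\infty,1)\bigr) - A\bigl(F(n,1)\bigr)\Bigr]}_{\text{unmodelled information}}
  + \underbrace{\Bigl[A\bigl(F(n,1)\bigr) - A'\bigl(F(n,1)\bigr)\Bigr]}_{\text{mis-specified algorithm}}
\]
```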

The recognition pattern can be thought of as some function of the ultimate claims, ie. f(U | F(m,T)), where the lower-case f is used to avoid confusion with the filtration F.

Graphically, the problem and the two approaches can be represented as shown in Figure 1.

Figure 1

The information available when setting the actual time 1 provisions and the simulated information used by each of the methods are displayed as different shaded areas. The arrows highlight what difference in information each method has to overcome.

Figure 1 highlights one of the strengths of the AIAB approach: it seeks to model the reserving process itself. The insights this gives could potentially improve that process. As well as helping to satisfy use-test requirements, this makes for a better model. To quote the statistician George Box: “All models are wrong but some models are useful.”

Parameterisation and back-testing
One criticism of AIAB is that the algorithms used do not reflect reality. The answer is not to throw away AIAB but to model more of the information that is used in a real reserving process, for example, incurred claims as well as paid.

There is also a significant amount of information to help with parameterisation. We can look at how the reserves have been set historically, adjusting these to a Solvency II technical provisions basis, and use this to come up with a possible algorithm.

Applying this algorithm to the data available at each historic time period, we can create a triangle of estimates to compare with the triangle of actual reserves. This means we can look at the differences to identify systematic errors in our algorithms, which would suggest we need to include some other information. We could also use these differences to parameterise a stochastic error term in our model.
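As a sketch of what this might look like in practice, the fragment below compares what a candidate algorithm would have produced at each past valuation date with the reserves actually booked (restated to a Solvency II technical provisions basis), and uses the residuals to parameterise the error term. The booked figures and algorithm outputs are invented for illustration.

```python
# Hedged sketch: residuals between the candidate algorithm's output at each
# historic valuation date and the reserves actually booked. All data invented.
import numpy as np

# Reserves the candidate algorithm produces when run on the data that was
# available at each historic valuation date.
algorithm_estimates = np.array([310.0, 295.0, 330.0, 305.0, 340.0])

# Reserves actually booked at those dates, restated onto a Solvency II
# technical provisions basis.
booked_reserves = np.array([320.0, 300.0, 318.0, 312.0, 355.0])

residuals = booked_reserves - algorithm_estimates

# A persistent sign in the residuals suggests the algorithm is missing an
# information source; their spread gives a first cut of the error term.
print(f"Mean residual (systematic error):        {residuals.mean():.1f}")
print(f"Residual standard deviation (error sd):  {residuals.std(ddof=1):.1f}")
```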

I have yet to see a credible suggestion for parameterising a recognition pattern.

In a year’s time, we will know all information at that point and the actual reserves set. We also know the algorithm we use in our AIAB model. We can apply this algorithm to the actual data and compare the output to the actual reserves. That is, in a year’s time we can usefully back-test our model.

Again, I do not see how this could be achieved with a recognition pattern.

The future?
My prediction is that many firms will go down the recognition pattern route at first, as it is often the path of least resistance in model building. However, as the validation and back-testing processes become more defined, I believe there will be more of a move to an ‘actuary in a box’ process. This is even more likely if the difficulties around the recognition pattern approach lead to conservative assumptions, which increase the capital requirements.

The question of whether the human brain is entirely algorithmic is an interesting one. 
My personal view is that it is not. 
The Emperor’s New Mind by Roger Penrose is a good place to start if you are interested.


The views expressed in this article are those of the author and not necessarily those of LCP as a firm. The firm is regulated by the Institute and Faculty of Actuaries in respect of a range of investment business activities.

---------------------------------------------------------------------------------------------

Andrew Cox

 

Andrew Cox is a partner in LCP’s insurance team working on Solvency II projects, mainly in the London market.