
Uncertainty?

In a non-life insurance company’s external balance sheet and internal management information pack, one of the most scrutinised figures is the company’s reserves. By their nature, the reserves are not an exact figure: they are an estimate of what the company believes it will pay out on claims in the future.
More attention is now being paid to the variability of the outcome around the reserves. This is being driven by:
– Capital adequacy: both regulation (the ICAS requirements) and internal business planning require an understanding of the uncertainty within the reserves and within the business.
– Possible future regulatory requirements: Solvency II will almost certainly mean that companies will need to formally assess the uncertainty in their reserve estimates.
– Management: a better understanding of reserve volatility enables management to make better strategic pricing and investment decisions. It can also lead insurers to optimise their reinsurance programme design.

Definitions of reserve variability
Before attempting to estimate the variability in reserves, you need to know what range of variability you are trying to measure. Actuaries, as mentioned in the GRIT report, commonly use three different measures:
– Range of reasonable best estimates: the actuary’s view of the range of best estimate reserves that a ‘reasonable actuary’ could determine based on the available information.
– Range of probable outcomes: the entire range of possible outcomes, excluding those events that are considered extremely unlikely and could be ignored for all practical purposes. This could mean between the 10th and 90th percentiles of the distribution of the reserve movement.
– Range of possible outcomes: description of the entire distribution function of the possible ultimate claims costs relating to a block of policies. Theoretically this ranges from zero to infinity.
The range of reasonable best estimates is most commonly used when forming a view as to whether the proposed reserves are appropriate for inclusion in the financial accounts. The range of probable outcomes is a useful measure for financial planning purposes, such as reserving with a specified risk margin, for example reserving at the 75th percentile.
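As a simple illustration, the sketch below (in Python, with entirely hypothetical figures) shows how such a percentile-based reserve can be read off a simulated distribution of ultimate claims.

```python
import numpy as np

# Minimal illustration with hypothetical figures: a simulated distribution
# of ultimate claims for a block of policies, from which a reserve held at
# the 75th percentile can be read off.
rng = np.random.default_rng(42)
simulated_ultimates = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=50_000)

best_estimate = simulated_ultimates.mean()
reserve_75th = np.percentile(simulated_ultimates, 75)

print(f"best estimate:        {best_estimate:.1f}")
print(f"75th percentile hold: {reserve_75th:.1f}")
print(f"implied risk margin:  {reserve_75th - best_estimate:+.1f}")
```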
Bootstrapping and Mack
Actuaries are increasingly turning to stochastic reserving techniques, such as Bootstrapping or Mack, to help them measure the uncertainty in the outcome. These techniques, which are continually evolving, produce a distribution of outcomes rather than the single point estimate produced by more traditional actuarial methods.
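To make this concrete, the sketch below is a deliberately simplified illustration: it resamples observed link ratios from a hypothetical triangle rather than implementing the full England-Verrall bootstrap or Mack’s analytic formulae, but it shows how resampling turns a chain-ladder projection into a distribution of reserves from which percentiles can be read.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cumulative paid triangle: rows = accident years,
# columns = development periods; np.nan marks unobserved cells.
tri = np.array([
    [1000., 1800., 2200., 2400.],
    [1100., 2000., 2500., np.nan],
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])
n_dev = tri.shape[1]

# Observed individual link ratios for each development period.
links = [tri[:, j + 1] / tri[:, j] for j in range(n_dev - 1)]
links = [l[~np.isnan(l)] for l in links]

def total_reserve(factors):
    """Project each accident year to ultimate with the given development factors."""
    reserve = 0.0
    for row in tri:
        last = int(np.where(~np.isnan(row))[0].max())
        ultimate = row[last]
        for j in range(last, n_dev - 1):
            ultimate *= factors[j]
        reserve += ultimate - row[last]
    return reserve

# Resample the link ratios with replacement to build a distribution of reserves.
sims = np.array([
    total_reserve([rng.choice(l, size=l.size, replace=True).mean() for l in links])
    for _ in range(10_000)
])

print(f"mean reserve: {sims.mean():,.0f}")
print(f"5th / 95th percentiles: {np.percentile(sims, 5):,.0f} / {np.percentile(sims, 95):,.0f}")
```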
However, stochastic reserving techniques by themselves are not a panacea for determining the variability in reserves. Here are some reasons for this.
1 Stochastic reserving methods are only as good as the data fed into them. If the data are of poor quality, incomplete or sparse, the methods will not produce credible output.
2 The methods will only project forward the uncertainty that has occurred in the data in the past. For example, the Courts Act 2003 allowed courts to force general insurance companies to make annual payments to severely injured claimants rather than just paying them a lump sum. Stochastic reserving methods applied to data before this ruling would not capture the uncertainty in reserves introduced by the ruling. Other potential sources of uncertainty which may not be inherent in the data include:
– changes in the economy such as moving from a stable inflation environment to an unstable one;
– claims process changes, for example an organisation deciding to set case reserves on a ‘worst estimate’ basis rather than on a ‘best estimate’ basis; and
– a different type of claim emerging, for example changes in social attitudes towards claiming, or an underwriting oversight that failed to capture a necessary policy exclusion.
3 If the data violate certain assumptions (for example, some stochastic reserving methods rely on the chain-ladder assumptions being met, or on the normality of residuals), the methods will not be appropriate (a simple check of the residuals is sketched after this list). Another example is that current stochastic methods may not cope with the uncertainty around individual large losses (for example, Hurricane Katrina) or latent claims (for example asbestos, pollution and health-hazard claims).
4 The correlations and dependency structures between classes of business or underwriting years that these methods replicate are the historical ones, so they do not take into account any changes in the sources of risk underlying those correlations.
5 The risk in using the models themselves: different stochastic models will provide different ranges of uncertainty even though the underlying book of business, and the risks therein, are the same. Which model is ‘more right’?
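As an illustration of the kind of residual check mentioned in point 3, the sketch below pools standardised link-ratio residuals from a hypothetical triangle and applies a Shapiro-Wilk normality test. It is only one of many possible diagnostics, and the triangle and procedure are illustrative assumptions rather than a recommended standard.

```python
import numpy as np
from scipy import stats

# Hypothetical cumulative triangle (rows = accident years, cols = dev periods).
tri = np.array([
    [1000., 1800., 2200., 2400., 2450.],
    [1100., 2000., 2500., 2600., np.nan],
    [ 900., 1700., 2100., np.nan, np.nan],
    [1200., 2300., np.nan, np.nan, np.nan],
    [1050., np.nan, np.nan, np.nan, np.nan],
])

# Pool standardised link-ratio residuals across development periods and test
# them for approximate normality, one of the assumptions behind some
# analytic stochastic reserving methods.
residuals = []
for k in range(tri.shape[1] - 1):
    obs = ~np.isnan(tri[:, k]) & ~np.isnan(tri[:, k + 1])
    if obs.sum() < 2:                      # need at least two ratios to standardise
        continue
    ratios = tri[obs, k + 1] / tri[obs, k]
    residuals.extend((ratios - ratios.mean()) / ratios.std(ddof=1))

stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")
if p_value < 0.05:
    print("residuals look non-normal: a method relying on normality may be inappropriate")
else:
    print("no strong evidence against normality in this (small) sample")
```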

An example
Figure 1, which is constructed from the FSA returns of one of the largest insurers in the UK for an apparently simple household account, demonstrates some of the potential pitfalls of using stochastic models.
We conducted a Bootstrapping and Mack estimation using this company’s paid data. The 95th and 5th percentile results are illustrated for both methods as dashed lines on the chart.
The full lines represent the change in the expected ultimate loss relative to the company’s initial estimate of ultimate loss. For example, the 1996 line at development period 2 represents approximately a 90% deterioration in the estimate of ultimate loss relative to the company’s initial estimate of the ultimate loss for that accident year. This highlights how volatile reserve estimates could be.
We conducted the analysis using ten accident years. However, we only plotted five of these years in order to maintain the clarity of the graph.
We note several striking findings from the chart:
1 The Bootstrapping and Mack methods give significantly different reserve estimates at the 95th percentile: Bootstrapping puts the 95th percentile at 134% of the mean, while Mack puts it at 152% of the mean. So which method should you choose to determine your 95th percentile?
2 Neither method appears to capture fully the deterioration of the 1996 accident year. As we had ten years of data, the worst year (ie the 1-in-10-year event) should have fallen within the 5% to 95% range (a range representing a 1-in-20-year sufficiency level). In fact, the 1996 accident year lies approximately 40 percentage points outside the range, which is worrying in itself. We would need to understand what is causing the large deterioration in this year; possible reasons include subsidence, data errors, or a big storm just at the end of the year. The repeatability of the event may determine whether we keep this accident year in the data set used for our Bootstrapping or Mack analysis. (A simple check of where an observed year falls within a simulated range is sketched at the end of this section.)
3 All the other years (including those not plotted) show only small levels of volatility, yet the ranges indicated by the stochastic models are, relatively speaking, quite wide.
Hence, in this case, blindly using stochastic methods for the analysis of this household account appears not to produce a sensible measure of reserve uncertainty.
The reason for this is that the data do not fit the method. The 1996 accident year is causing the ranges to be wide relative to all the other accident years. So if the 1996 accident year is the result of, say, a one-off data error, should it be allowed to widen the ranges in this way?
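To illustrate the check behind finding 2, the sketch below uses entirely made-up numbers (not the company’s data) to locate an observed deterioration within a simulated distribution and ask whether it falls inside the 5% to 95% band.

```python
import numpy as np

# Entirely hypothetical figures: simulated deteriorations in ultimate loss
# (as a proportion of the initial estimate) from some stochastic model, and
# one observed accident year.
rng = np.random.default_rng(1)
simulated = rng.normal(loc=0.0, scale=0.25, size=10_000)
observed_year = 0.90                       # ~90% deterioration read from a chart

lower, upper = np.percentile(simulated, [5, 95])
rank = (simulated < observed_year).mean()

print(f"5% to 95% simulated range: {lower:+.0%} to {upper:+.0%}")
print(f"observed year sits at roughly the {rank:.2%} percentile")
print("inside the range" if lower <= observed_year <= upper
      else "outside the range -- investigate the cause")
```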

The best way?
What is required is a framework that helps you understand both the general business and the lines of business you have actually written, and that captures the causes of uncertainty within those lines. It should help you to understand the processes by which case estimates and reserves are set, and to harness the power of the statistical methods available where required. The organisation’s reserving experts, underwriters and claims managers should combine all these quantitative and qualitative factors within the framework to derive the uncertainty.
The historical data need to be adjusted to fit the methods used. There are various diagnostic tests that could be applied to identify distorting trends or outliers in the data. These can be excluded from, or adjusted for within, the stochastic models, so as to avoid violation of the assumptions, and then brought back as explicit adjustments if they are deemed to be repeatable.
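By way of example, the sketch below applies one such simple diagnostic to a hypothetical triangle: each accident year’s link ratio is compared with the median ratio for its development period, and large deviations are flagged for investigation before the data are fed into a stochastic model. The 25% threshold is purely a judgement call.

```python
import numpy as np

# Hypothetical cumulative triangle with a deliberately distorted accident year.
tri = np.array([
    [1000., 1800., 2200., 2400.],
    [1100., 3800., 4300., np.nan],   # e.g. a one-off data error or storm
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])

# Flag link ratios that deviate materially from the median for their
# development period; flagged cells are candidates for exclusion from the
# stochastic model, to be brought back as explicit adjustments if the
# underlying cause is judged to be repeatable.
THRESHOLD = 0.25   # 25% relative deviation, purely a judgement call
for k in range(tri.shape[1] - 1):
    obs = ~np.isnan(tri[:, k]) & ~np.isnan(tri[:, k + 1])
    ratios = tri[obs, k + 1] / tri[obs, k]
    benchmark = np.median(ratios)
    for ay, ratio in zip(np.where(obs)[0], ratios):
        deviation = ratio / benchmark - 1.0
        if abs(deviation) > THRESHOLD:
            print(f"accident year index {ay}, dev {k}->{k + 1}: "
                  f"link {ratio:.2f} vs median {benchmark:.2f} "
                  f"({deviation:+.0%}) -- investigate before modelling")
```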
The framework should also consider doomsday scenarios: events that could radically alter the reserve position, that may not be present in past data and that would not be captured by any model. Appropriate allowances should be made for them.
This framework should be flexible enough to apply different methods to different situations. Subject to the quality of the data, and the materiality of the risk to the insurer, a more or less complex combination of models and subjective adjustments should be applied.
These ‘class of business level’ reserve range estimates should then be combined into an ‘entity level’ reserve range estimate using appropriately derived correlations. These correlations should be based on consideration of the commonalities in the causes of risk between lines of business.
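A minimal sketch of this aggregation step is shown below, using hypothetical class-level figures and a judgement-based correlation matrix; it combines the class-level standard deviations through the correlation matrix and compares the result with the independent and fully correlated extremes.

```python
import numpy as np

# Hypothetical class-of-business reserve ranges, summarised here by a best
# estimate and a standard deviation per class (figures are illustrative only).
best_estimate = np.array([120.0, 80.0, 45.0])   # e.g. GBPm
std_dev       = np.array([ 18.0, 14.0, 12.0])

# Judgement-based correlation matrix reflecting shared causes of risk
# (inflation, weather events, claims process changes and so on).
corr = np.array([
    [1.00, 0.50, 0.25],
    [0.50, 1.00, 0.25],
    [0.25, 0.25, 1.00],
])

# Entity-level standard deviation: sqrt(s' R s), bracketed by the
# independent and fully correlated cases for comparison.
entity_sd = float(np.sqrt(std_dev @ corr @ std_dev))

print(f"entity best estimate:         {best_estimate.sum():.0f}")
print(f"entity sd (correlated):       {entity_sd:.1f}")
print(f"entity sd (independent):      {np.sqrt((std_dev ** 2).sum()):.1f}")
print(f"entity sd (fully correlated): {std_dev.sum():.1f}")
```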

An actuary’s toolkit
Estimating reserve variability is, and will continue to be, an important part of a general insurance actuary’s toolkit. We must ensure that we do not just rely on the output of black-box statistical methods but instead use a more holistic framework that enables us to capture and estimate the causes of uncertainty within our organisations.
