The Actuary The magazine of the Institute & Faculty of Actuaries

General insurance: Micro-level stochastic loss reserving

With the introduction of Solvency II (in 2012) and IFRS 4 Phase II (in 2013), insurers face major challenges. The measurement of future cash flows and their uncertainty becomes more important and also gives rise to the question of whether current techniques can be improved. In this article, we introduce a new methodology for stochastic loss reserving for general insurance and apply it to an existing insurance portfolio.

Current techniques
For an overview of current techniques, see England and Verrall (2002). These techniques can be applied to so-called run-off triangles containing either paid losses or incurred losses (for example, the sum of paid losses and case reserves). In a run-off triangle, observable variables are summarised by arrival (or origin) year and development year combination. An arrival year is the year in which the claim occurred, while the development year refers to the delay in payment relative to the arrival year.

The most popular approach is the ‘chain ladder’ approach, largely because of its practicality. However, the use of aggregate data in combination with (stochastic variants of) the chain ladder approach (or similar techniques) gives rise to several issues. A whole canon of literature has evolved to solve these issues, which are (in random order):

1) Different results between projections based on paid losses or incurred losses, addressed by Quarg and Mack (2008)
2) Lack of robustness and the treatment of outliers, see Verdonck et al (2009)
3) The existence of the chain ladder bias, see Halliwell (2007) and Taylor (2003)
4) Instability in ultimate claims for recent arrival years, see Bornhuetter and Ferguson (1972)
5) Modelling negative or zero cells in a stochastic setting, see Kunkler (2004)
6) The inclusion of calendar year effects, see Verbeek (1972) and Zehnwirth (1994)
7) The different treatment of small and large claims, see Alai and Wüthrich (2009)
8) The need for a tail factor, see, for example, Mack (1999)
9) Over parameterisation of the chain ladder method, see Wright (1990) and Renshaw (1994)
10) Separate assessment of ‘incurred but not reported’ (IBNR) and ‘reported but not settled’ (RBNS) claims, see Schnieper (1991) and Liu and Verrall (2009)
11) The realism of the Poisson distribution underlying the chain ladder method
12) Not using lots of useful information about the individual claims data, as noted by England and Verrall (2002) and Taylor and Campbell (2002).

Most references above represent useful additions to the chain ladder method, but these cannot all be applied simultaneously. More importantly, the existence of these issues and the substantial literature about them indicate that the use of aggregate data in combination with (stochastic variants of) the chain ladder approach (or similar techniques) is not fully adequate for capturing the complexities of stochastic reserving for general insurance.

Micro-level stochastic loss reserving
The run-off process of an individual general insurance claim is shown in Figure 1. The interval [t1, t2] represents the reporting delay. In this interval the claim is not yet known to the insurer, so it is IBNR. The interval [t2, t6] is often referred to as the settlement delay; within this interval the claim is RBNS. Typically, databases within general insurers contain detailed information about the run-off process of historical and current claims. The question arises as to why this large collection of data is not used in the reserving process, by modelling at the level of individual claims (micro-level). We have therefore developed a stochastic model at micro-level for stochastic reserving, in the spirit of Norberg (1993, 1999) and Haastrup and Arjas (1996).

The quality of reserves and their uncertainty can be improved by using more detailed claims data. A micro-level approach allows much closer modelling of the claims process. Many of the issues mentioned above do not arise with a micro-level approach, because of the wealth of available data and the flexibility in modelling the future claims process. For example, specific information (deductibles, policy limits, calendar year) can be included in the projection of the cash flows when claims are modelled at an individual level.

The use of lots of (individual) data avoids robustness problems and over-parameterisation. Also, the problems with negative or zero cells and setting the tail factor are circumvented, and small and large claims can be handled simultaneously. Furthermore, individual claim modelling can provide a natural solution for the dilemma within traditional literature as to whether to use triangles with paid claims or incurred claims. Also, the case reserve can be used as a covariate in the projection process of future cash flows.

In the remainder of this article, the methodology is summarised and results are shown for an example based on a general liability portfolio. The model consists of four building blocks:

>> Reporting delay
>> Number of IBNR claims
>> Development process
>> Payments.

The distributions of the building blocks above can be fitted based on the available individual data.

Reporting delay
The reporting delay is a one-time event of a single type that can be modelled using standard distributions from survival analysis, such as the Exponential, Gompertz or Weibull distribution. Because the majority of claims are reported within the first few days, we have used a mixture of a Weibull distribution and nine degenerate distributions; the latter fit the claims reported in the first few days more closely.
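As an illustration, such a mixture can be sampled as follows. All parameters below (the point masses for days 0 to 8 and the Weibull shape and scale) are hypothetical; in practice they would be fitted to the portfolio's individual claims data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture weights: point masses for a report on day 0..8
# (the degenerate components), with the remaining mass on a Weibull tail.
p_day = np.array([0.20, 0.12, 0.08, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01])
p_weibull = 1.0 - p_day.sum()
shape, scale = 0.9, 30.0  # illustrative Weibull parameters (delay in days)

def sample_reporting_delay(n):
    """Draw n reporting delays from the degenerate/Weibull mixture."""
    comp = rng.choice(10, size=n, p=np.append(p_day, p_weibull))
    delays = comp.astype(float)          # degenerate components: delay = day 0..8
    tail = comp == 9                     # component 9 is the Weibull tail
    delays[tail] = rng.weibull(shape, tail.sum()) * scale
    return delays

delays = sample_reporting_delay(10_000)
```

In a fitted model the point masses and Weibull parameters would be estimated jointly, for example by maximum likelihood on the observed delays.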

Number of IBNR claims
We have used a piecewise-constant specification (on a monthly basis) for the occurrence rate of a claim. Combining the reporting delay distribution and this occurrence process, one can distinguish between IBNR and RBNS claims and simulate the number of IBNR claims when projecting future cash flows.
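A sketch of this simulation step, using hypothetical monthly occurrence rates and an assumed Weibull reporting delay (in practice both would come from the fitted building blocks): a claim is IBNR at the valuation date if it occurred before that date but its occurrence time plus reporting delay falls after it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical piecewise-constant occurrence rates (expected claims per month).
monthly_rate = np.array([52, 48, 50, 55, 60, 58, 47, 45, 50, 53, 49, 51])
valuation = 12.0  # valuation date, in months from the start of the year

def simulate_ibnr_count():
    """One simulated draw of the number of IBNR claims at the valuation date."""
    ibnr = 0
    for month, lam in enumerate(monthly_rate):
        n = rng.poisson(lam)                 # claims occurring in this month
        occ = month + rng.uniform(0, 1, n)   # occurrence times within the month
        delay = rng.weibull(0.9, n) * 1.0    # assumed reporting delays (months)
        ibnr += int(np.sum(occ + delay > valuation))  # still unreported
    return ibnr

counts = [simulate_ibnr_count() for _ in range(1000)]
```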

Development process
The development process is modelled using the statistical framework of recurrent events. The different events that are specified are:
>> Type 1: settlement without payment
>> Type 2: settlement with a payment (at the same time)
>> Type 3: payment without settlement.

This process is modelled through a piecewise-constant specification for the hazard rate of an event. A good alternative could be to use Weibull hazard rates.
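The competing-hazards simulation can be sketched as follows. For simplicity the sketch uses constant rather than piecewise-constant hazard rates, with purely illustrative values; settlement events (types 1 and 2) terminate the claim, while type 3 payments recur:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative constant hazard rates (per year) for the three event types.
hazards = {"settle_no_pay": 0.3, "settle_with_pay": 0.5, "pay_no_settle": 0.8}

def simulate_development(max_time=10.0):
    """Simulate one claim's event history as competing exponential hazards:
    the earliest of three independent exponential draws decides the next
    event; settlement events end the history, payments without settlement
    recur."""
    t, history = 0.0, []
    while t < max_time:
        times = {k: rng.exponential(1.0 / h) for k, h in hazards.items()}
        event = min(times, key=times.get)   # earliest event wins
        t += times[event]
        if t >= max_time:
            break
        history.append((round(t, 3), event))
        if event.startswith("settle"):
            break
    return history

history = simulate_development()
```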

Events of type 2 and type 3 come with a payment. Several distributions have been fitted to the data, such as the Lognormal, Burr and Gamma distributions. The Lognormal distribution fits the data of the example portfolio best. This is further refined by including the development year and the initial reserve category as explanatory variables. The case reserves are categorised in a few classes, reflecting the empirical finding that the probability of a high (low) payment is greater for claims with high (low) case reserves. Based on the building blocks above, the future cash flows can be simulated. Results of this exercise are as follows.
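A minimal sketch of sampling payments from such a Lognormal model, with hypothetical coefficients for the development-year and reserve-category effects on the log scale:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical regression coefficients on the log scale: a base level plus
# effects for development year and initial case-reserve category.
mu_base, sigma = 7.5, 1.1
beta_dev = 0.10                              # effect per development year
beta_res = {"low": -0.5, "mid": 0.0, "high": 0.8}

def sample_payment(dev_year, reserve_cat, n=1):
    """Draw payment amounts from a Lognormal whose location depends on
    development year and case-reserve category."""
    mu = mu_base + beta_dev * dev_year + beta_res[reserve_cat]
    return rng.lognormal(mu, sigma, n)

low_mean = sample_payment(1, "low", 10_000).mean()
high_mean = sample_payment(1, "high", 10_000).mean()
```

With these illustrative coefficients, claims in the high-reserve category produce markedly larger payments on average, mirroring the empirical finding described above.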

We compare the results of the micro-level stochastic loss reserving model with results of traditional actuarial techniques applied to run-off triangles. This is done with an out-of-sample exercise, where the reserve at 01/01/2005 is calculated based on data from 1997-2004. Given that the results for 2005-2009 are known already, the results of the models can be confronted with the realisations.

Figure 2 shows the distributions at 01/01/2005 for bodily injury claims of a general liability portfolio (for private individuals), based on 10,000 simulations. Furthermore, the actual realisation (the dashed vertical black line) is given. The results are compared with two standard actuarial models developed for aggregate data: a stochastic version of the chain ladder model (based on an overdispersed Poisson distribution) and a Lognormal model. Both of these models are implemented in a Bayesian framework.

Figure 2 shows that both the overdispersed Poisson model and the Lognormal model overstate the reserve for this case study; the actual observed amount is in the left tail of the distribution. The resulting distribution of the micro-level model seems closer to reality (see full working paper at http://ssrn.com/abstract=1620446 for tables with numerical results). Similar conclusions were drawn for separate calendar years and for another case study using material claims of the same general liability portfolio.
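To illustrate how such a comparison can be quantified, one can compute the percentile at which the realisation falls within the simulated predictive distribution. The numbers below are placeholders, not the case-study results (those are in the working paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder stand-ins for the 10,000 simulated reserves and the realisation;
# here the "actual" outcome is deliberately placed in the left tail.
simulated_reserves = rng.lognormal(mean=15.0, sigma=0.2, size=10_000)
actual_outcome = np.quantile(simulated_reserves, 0.10)

# Percentile of the realisation in the predictive distribution: values far
# below 0.5 indicate the model overstates the reserve.
percentile = (simulated_reserves < actual_outcome).mean()
```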

We have introduced a new model for stochastic loss reserving for general insurance, based on modelling at micro-level. This model makes better use of the large collection of available data and circumvents the issues that exist with models based on aggregate data. An out-of-sample exercise shows that, for our case study, the proposed model is preferable to traditional actuarial techniques.


Katrien Antonio and Richard Plat are members of the actuarial science research programme at the University of Amsterdam