Insurance: A new perspective on GLMs

As a result of the new regulatory regime in the UK, most large insurers now find themselves needing to set up and maintain complex stochastic asset-liability management (ALM) models. This is also increasingly true overseas, owing to the trend towards market-consistent embedded-value (EV) reporting and economic capital measurement.

These models are often built from a number of complex components, with economic scenario generators, statutory reserve calculations and dynamic asset-liability management decision rules combining to provide the projected financial results. The complexity of these models often poses a number of problems:

  • How can I obtain an independent check of the validity of the model?
  • How can I get round the speed limitations introduced by nested loops?
  • For complex capital calculations, such as the Individual Capital Assessment (ICA), how can I decide on a consistent set of economic and other inputs for the tail point of interest?

These are all challenging problems, especially so for with-profits insurers. Surprisingly, generalised linear models (GLMs) can provide a quick and relatively simple solution.

Background to GLMs
As most readers will be aware, GLMs have become the norm for general insurance personal lines pricing, allowing the influence of possible risk factors, such as driver age and vehicle group, on claim frequency and claim severity to be quantified in order to produce a claim cost model.

More recently, they have gained ground in the life insurance sector, where their ability to take automatic account of correlations in the data makes them particularly useful for mortality analyses.

Users can simultaneously analyse age and year of birth, and can also obtain a proper fix on the influence of annuity amount — a key driver in annuity portfolio cash flow projections — all in a ‘multiyear’ context that allows more value to be derived from many years of data.

How, then, can they be used to assist with the problems noted above? Without simplifying the mathematics too much, we can think of GLMs as involving the following multiplicative structure:

Modelled quantity = Base level for observed population × Factor 1 × Factor 2 × Factor 3

For instance, we might have the following ‘not so generalised’ form:

Modelled quantity (e.g. end year one economic capital) = Base level × Factor 1 (based on equity returns) × Factor 2 (based on AAA credit spread) × Factor 3 (based on bond curve parameter)
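
As a purely numerical illustration of that structure (every figure below is invented for the purpose), the fitted model reduces to a base level multiplied by one coefficient per factor:

```python
# Purely illustrative numbers: a fitted log-link GLM reduces to a base level
# multiplied by one coefficient per factor (here, per band the factor falls into).
base_level = 250.0  # hypothetical base economic capital, in £m

factor_multipliers = {
    "equity_return": 1.30,  # e.g. the band containing a 25% equity fall
    "credit_spread": 1.10,  # e.g. the band containing a 150bp AAA spread
    "curve_param": 0.95,    # e.g. the band containing a modest move in the bond curve parameter
}

modelled_capital = base_level
for multiplier in factor_multipliers.values():
    modelled_capital *= multiplier

print(round(modelled_capital, 1))  # 250.0 x 1.30 x 1.10 x 0.95 = 339.6
```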

Instead of a company’s motor portfolio, we take as our ‘population’ the sundry thousands of runs produced by our complex ALM model. What we wish to take as the modelled amount obviously depends on our purpose — for example, we might wish to model statutory reserves at various points in the future, model a measure of surplus capital at some point or model the present value of future profits (PVFP).

While we are on the mathematics, a quick aside on linearity is worth making, since many people naturally assume that GLMs require linear relationships. Fortunately, for our purposes, the ‘linear’ term is something of a misnomer: it merely requires that all factors be combined on the same ‘log’ scale, so that we could not combine, for instance, one factor with the logarithm of another factor.

There is, in fact, no practical linearity constraint on our model. The factors themselves can be described by any type of curve, and the factors can also be combined in the model via an interaction to represent the effect of one factor varying according to the value of another.
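
In the model-formula notation used in the sketches later in this article (column names are illustrative), a non-linear factor shape is handled simply by banding the variable, and an interaction is written directly into the formula:

```python
# Illustrative only: banded (hence freely shaped) equity and credit factors, plus an
# interaction term allowing the equity effect to vary with the level of credit spreads.
formula_with_interaction = (
    "capital ~ C(equity_return_band) + C(credit_spread_band) "
    "+ C(equity_return_band):C(credit_spread_band)"
)
```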

Case study: ICA assistance
Consider how this might work in practice, helping with the ICA for a large UK with-profits insurer. What should we do, for instance, if we wish to have an independent check of the very complex model, and also help to derive a set of ‘consistently bad’ economic variables to give us the 1/200 tail event?

In this instance, the life model — after much groaning — will provide us with end year one capital results in respect of 20,000 simulations. We simply set up the GLM to model this capital amount as a function of the economic variables present, that is, the economic scenario generator (ESG) outputs. In other instances, we could also include the demographic parameters if relevant.
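
As a sketch of that set-up (the names below are assumptions: a pandas DataFrame sims holding the 20,000 results, a capital column for the end year one amount, one column per ESG output, 15 bands per factor and a Gamma error with log link), the fitting step might look as follows:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# sims: one row per simulation (20,000 rows), holding the modelled amount and the ESG outputs.
# Band each economic variable so that each factor contributes one multiplier per band.
for col in ["equity_return", "credit_spread", "curve_param"]:
    sims[col + "_band"] = pd.qcut(sims[col], q=15)

# The log link gives the multiplicative structure described earlier; the Gamma error
# assumption (which requires a positive response) is one common choice, not the only one.
glm_fit = smf.glm(
    "capital ~ C(equity_return_band) + C(credit_spread_band) + C(curve_param_band)",
    data=sims,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

print(glm_fit.summary())
```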

For each factor, the model will provide a series of multiplicative coefficients, quantifying the effect on the modelled amount (end year one capital) of the values taken by the factor in the 20,000 simulations. For example, if we split the equity return factor into 15 bands for ease of modelling, the model factor results will be a multiplicative coefficient for each of those bands.
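
Continuing the sketch above, those band multipliers are simply the exponentiated fitted parameters, with the omitted reference band implicitly at 1.0:

```python
import numpy as np

# The multiplicative coefficients are the exponentiated fitted parameters;
# the omitted reference band carries an implicit multiplier of 1.0.
multipliers = np.exp(glm_fit.params)
equity_multipliers = multipliers[multipliers.index.str.startswith("C(equity_return_band)")]
print(equity_multipliers)
```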

We usually show these results graphically, rather than as a table of numbers, to better appreciate what the shape of the results might be. As an example, Figure 1 reveals specimen property return results. The straightforward nature of this graph, with a shape that can be explained intuitively, does not seem to indicate any severe abnormalities.
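
A short plotting sketch in the same vein, continuing with the equity-return multipliers extracted above (the same chart would be drawn for the property factor shown in Figure 1):

```python
import matplotlib.pyplot as plt

# Plot the band multipliers so that the shape of the factor can be judged by eye.
fig, ax = plt.subplots()
ax.plot(range(1, len(equity_multipliers) + 1), equity_multipliers.values, marker="o")
ax.axhline(1.0, linestyle="--")  # the reference band sits at 1.0
ax.set_xlabel("Equity return band (low to high, reference band omitted)")
ax.set_ylabel("Multiplicative coefficient")
plt.show()
```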

However, what might happen with other factors? In one analysis, we found that two factor graphs contained a large number of kinks, such as those shown in Figure 2. These indicated that something was going wrong in the underlying life model and, on rechecking the model’s innards, errors were discovered.

The other useful output at this stage is a sense of the ‘explanatory power’ of the factors. We can quantify this as the total multiplicative ‘distance’ for each factor in moving from one extreme band to the other. Comparing the explanatory power of the factors provides another check: can we explain, at least approximately, why some factors have greater power than others? It may also usefully indicate that some factors do not need to be varied stochastically, and so can be left out of the model altogether. This explanatory power needs to be treated with some caution, however, as it will depend on the choice of bandings used for the factors.
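
One crude way to put a number on this, continuing the earlier sketch (and, as noted, sensitive to the bandings chosen):

```python
import numpy as np

# A crude 'explanatory power' measure: the multiplicative distance between a
# factor's weakest and strongest bands (the reference band counts as 1.0).
multipliers = np.exp(glm_fit.params)
for factor in ["equity_return_band", "credit_spread_band", "curve_param_band"]:
    bands = multipliers[multipliers.index.str.contains(factor)]
    low, high = min(bands.min(), 1.0), max(bands.max(), 1.0)
    print(f"{factor}: x{high / low:.2f} from one extreme band to the other")
```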

Assuming we have repaired any problematic factors, satisfied ourselves regarding factor correlations — for example, seeing high and low correlations where we expect to — and removed any insignificant factors from the model, the GLM will now provide a simple closed-form solution for our end year ICA or economic capital. How can we use this?

In this example, we wish to find a ‘consistently bad’ set of ESG outputs as parameters underlying our 1/200th tail event. We can go back to our 20,000 simulations, take the 100th worst (our 0.5% tail) and use our simple closed-form expression to calculate — with a bit of trial and error — what choices of equivalently bad economic variables will give precisely that 0.5% tail amount.
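
A sketch of that trial-and-error step is shown below. It assumes the GLM has been refitted on the raw, unbanded ESG outputs (here called glm_continuous_fit) so that it can be evaluated at arbitrary values, and it uses a hypothetical worse_when_high dictionary to record the adverse direction of each variable; a root-finder then searches for the single ‘severity’ level at which the closed-form capital matches the 0.5% tail amount.

```python
import numpy as np
import pandas as pd
from scipy.optimize import brentq

econ_cols = ["equity_return", "credit_spread", "curve_param"]
worse_when_high = {"equity_return": False, "credit_spread": True, "curve_param": True}  # hypothetical

# Target: the 100th-worst capital amount of the 20,000 simulations (the 0.5% tail);
# here 'worse' is taken to mean a higher capital requirement.
target = np.sort(sims["capital"].values)[-100]

def consistent_scenario(severity):
    # Set every economic variable to its own severity-quantile in the adverse
    # direction, so the combination is 'consistently bad' rather than a mixed draw.
    return pd.DataFrame([{
        col: np.quantile(sims[col], severity if worse_when_high[col] else 1.0 - severity)
        for col in econ_cols
    }])

def gap(severity):
    # glm_continuous_fit: the GLM refitted on the raw (unbanded) ESG outputs, as described above.
    predicted = glm_continuous_fit.predict(consistent_scenario(severity))
    return float(np.asarray(predicted)[0]) - target

# The bracket [0.5, 0.999] must straddle the target for brentq to converge.
severity_star = brentq(gap, 0.50, 0.999)
ica_scenario = consistent_scenario(severity_star)
```

The same closed-form predict call is also what makes the quick ‘out of sample’ and approximate monthly estimates mentioned below so cheap to produce.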

This gives us an ICA scenario that corresponds to a rational combination of ESG outputs, and can now be used as the basis for further investigations. As a further bonus, the closed-form results obtained may also be useful in deriving quick approximate monthly ICA results, estimating results for other ‘out of sample’ scenarios that management may wish to run, or even studying ICAs at other confidence levels that better reflect the firm’s risk appetite.

This may all sound useful, whether in ICA or other contexts, but is the benefit worth the cost in time and resources? On this score, too, the GLM approach does well. It would typically take only a few days to carry out the GLM work outlined above, compared with the many hundreds, or thousands, of work days likely to have been involved in building, updating and running the underlying life model.

Related applications
It should be clear by now that the GLM structure is ideal for the production of top-down, closed-form solutions. This may be of particular value to insurers that are intending to make step changes to their models with the implementation either of nested stochastic loops or, more likely, proxies such as bottom-up, closed-form solutions. By putting the new output through the sort of analysis outlined above, users may be able to detect otherwise hard-to-trace errors.

For those users whose models require substantially compressed model points to run in reasonable time, GLM techniques can also help to inform efficient model point selection by providing information on the explanatory power of policyholder attributes. This allows the ‘compression’ to be applied more to the less important attributes.
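
By way of a sketch (pvfp_fit here is a hypothetical GLM of per-policy value on banded policyholder attributes, and the attribute names are invented), the same ‘multiplicative range’ measure used earlier can rank attributes for compression:

```python
import numpy as np

def band_range(fit, attribute):
    # Multiplicative range across an attribute's fitted bands (reference band = 1.0).
    m = np.exp(fit.params)
    bands = m[m.index.str.contains(attribute)]
    return max(bands.max(), 1.0) / min(bands.min(), 1.0)

# pvfp_fit: a hypothetical GLM of per-policy PVFP on banded policyholder attributes.
attributes = ["product_band", "premium_band", "duration_band", "age_band"]  # hypothetical names
power = {a: band_range(pvfp_fit, a) for a in attributes}
# Compress hardest on the attributes that explain least of the variation in value.
compress_first = sorted(power, key=power.get)
```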

Moving back to the ‘frequency analysis’ domain, which may be more familiar to some, GLMs also have the potential to analyse policyholder behaviour, such as option take-up rates. This allows users to quantify the influence of, for instance, interest margins, policy duration or time to maturity on such behaviour.
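
As a final sketch, assuming a policy-level DataFrame policies with a 0/1 took_up column and the (invented) covariate names below, a logit-link binomial GLM quantifies those influences directly:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# policies: one row per policy, with a 0/1 take-up indicator and the covariates of interest.
# Model the probability of option take-up as a function of the economic incentive and
# policy characteristics; the Binomial family's default link is the logit.
takeup_fit = smf.glm(
    "took_up ~ interest_margin + policy_duration + time_to_maturity",
    data=policies,
    family=sm.families.Binomial(),
).fit()

print(np.exp(takeup_fit.params))  # odds multipliers per unit change in each covariate
```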

To our knowledge, few life actuaries practise such alchemy — is this because there are many hidden snags? Not that we know of. There seem to be two main reasons why GLMs are little used in this context. First, the field of GLMs seems rather strange to most life actuaries, few of whom realise what the benefits may be. Secondly, the GLM approach can be less useful over longer time horizons, depending on the sensitivity of the results to the time path of the economic variables.

Life GLMs: take another look
GLMs can be a quick and useful tool to better understand complex life ALM results, providing a form of independent check as well as — inter alia — closed-form solutions that can be useful for many purposes. If you have thought of GLMs up to now only as something used by non-life pricing actuaries, it’s time to take another look.

Matthew Edwards is a senior consultant in the insurance and financial services practice of Watson Wyatt. He has a particular interest in wider applications of generalised linear models.