Life insurance: Life in the fast lane

The life insurance industry is often accused, and rightly so in many instances, of a lack of consumer focus. The finger is pointed at products that are not innovative and fail to meet the needs of the modern consumer. At the same time, compared to other industries ranging from retail to general insurance, we are woefully underdeveloped in our use of consumer data to improve the proposition and target customers effectively.

In recent years, predictive underwriting has emerged as a technique that, while still only whispered about, is increasingly seen in the UK and the US as a potential answer to both these charges. In countries like these, with sizeable and distinct protection markets, predictive underwriting can connect with parts of the population that have not so far engaged with protection insurance, and thereby make inroads into closing the protection gap.

While still in its infancy as far as life risks are concerned, the technique is being developed to support a brand’s desire to improve its insurance offering. This is against a backdrop of increasing acquisition and medical evidence costs, along with long elapsed times between order and fulfilment.

A definition of predictive underwriting — sometimes referred to as lifestyle underwriting — would include:
• Making life insurance quick and easy to buy. The process allows for most of the traditional underwriting to be bypassed for those people most likely to end up with standard terms, had they gone through the long and expensive underwriting process.
• A means of targeting customers with an offer. By leveraging information already known about customers, organisations with strong data can make an asset of this and offer consumers additional convenience.
• The use of statistical models. These can predict likely underwriting or claims outcomes.
• A controlled buying experience. Similar to simplified issue, consumers can access cover at a price considered to be good value for those in good health.

Developments in the UK have so far focused on reducing the amount of traditional underwriting needed for those consumers offered a product through predictive modelling. In the US, life industry talk is more of using predictive techniques to triage the underwriting process and avoid expensive medical tests for healthy people. In general insurance, similar techniques are being used to predict which applicants will have the lowest claims frequency, the lowest rates of fraud and display the most loyalty.

Interesting insights
The best models are derived from the widest-ranging depersonalised data sets. From the broad range of possible factors, statistical techniques are used to find combinations of predictors that are most correlated with mortality or with an underwriter’s view of relative mortality. In other words, while the purpose of a model is to predict an individual’s likelihood of claiming, the model can be built on underwriting decisions rather than claims data, because claims data are harder to come by. Several projects have been undertaken in the field of bancassurance, using banking data as the predictor. One or two projects have also been explored using supermarket, general insurance or publicly available commercial data as the predictor.
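To make this concrete, the sketch below shows how a statistical model of this kind might be fitted: a logistic regression that predicts the underwriting decision (standard terms versus loaded or declined) from a handful of depersonalised predictor fields. This is purely an illustration, not the modelling used in the projects described; the field names, the simulated data and the use of scikit-learn are all assumptions for the example.

```python
# Illustrative only: a logistic regression that predicts the underwriting
# decision from depersonalised predictors. All fields and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical depersonalised predictors: age, an affluence score and a
# transactional field such as monthly ATM visits.
age = rng.uniform(20, 65, n)
affluence = rng.normal(0.0, 1.0, n)
atm_visits = rng.poisson(6, n)

# Hypothetical target: 1 = standard terms (good risk), 0 = loaded or declined.
logit = 2.5 - 0.03 * (age - 40) + 0.4 * affluence - 0.05 * atm_visits
good_risk = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([age, affluence, atm_visits])
model = LogisticRegression(max_iter=1000).fit(X, good_risk)

# Score a new applicant: the predicted probability of standard terms.
new_applicant = np.array([[45.0, 0.8, 4]])   # age, affluence score, ATM visits
print(model.predict_proba(new_applicant)[0, 1])
```

In practice the candidate predictor list would be far wider and the model choice (GLM, gradient boosting or otherwise) a matter for the individual project; the point of the sketch is only that the target is the underwriting decision rather than an observed claim.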

Over the past eight years or so, our work in this field has focused on building models based on depersonalised data held by high-street banks. This is then linked to traditionally determined underwriting decisions for sales made face to face, in branch or in a customer’s home. There is therefore a selective effect at work in the data used to build the model, in that the very ill tend not to apply for life insurance through these methods. As a consequence, to mirror this effect, even the very best predictive models tend to require a degree of filtering through the inclusion of a much-reduced number of traditional underwriting questions.

Whether looking at bank-specific or publicly available data, the strongest predictors of underwriting outcome are age, socio-economic factors (for example, community or neighbourhood, affluence, occupation and credit) and account activity or transactional fields.

Where data points exist that sit very close to health status, they too emerge as predictive of the underwriting decision. As an example of account activity, we have found, on more than one occasion, that frequency of ATM visits proves to be one of the predictors of UK underwriters’ decisions. This surprising discovery highlights the need not to prejudge which variables will be predictive. Nobody would have suggested it in advance, yet it is likely to reflect basic social and occupational activity that other studies have linked to well-being and mortality.

Model effectiveness
Assume we take a large sample of recent underwriting decisions for life insurance policies. Say 80% of these were offered standard rates, or a small loading, and the remaining 20% were significantly loaded or declined. If we then took a random sample of 50% of these decisions, one would still expect an 80:20 split.

If a model is derived that aims to predict underwriting outcome and we then take the best 50% of lives according to that model, the model is predictive if significantly more than 80% (the non-predictive average) of the lives selected are deemed to be good risks. The higher this figure, the better. If it is 95%, that still leaves a 5% type II model error (failing to reject the null hypothesis when it is false) among the customers in the best 50% of the model, assuming they all buy the product: the model predicts that they are a good risk when in fact they are not (Figure 1).

Figure 1
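As a rough numerical check on this argument, the simulation below draws a population that is 80% good risks, gives good risks somewhat higher model scores on average, takes the best 50% of lives by score and reports how many of them are good risks; the shortfall from 100% is the type II model error described above. The separation of the score distributions is an arbitrary illustrative choice, not a figure from any real portfolio.

```python
# Illustration of the selection arithmetic: 80% good risks overall, then
# take the best 50% of lives according to a moderately informative model.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
good = rng.random(n) < 0.80                    # 80% would get standard terms

# Good risks score higher on average; the overlap is what makes the model
# imperfect. The 0.75 separation is an arbitrary illustrative choice.
score = np.where(good, rng.normal(0.75, 1.0, n), rng.normal(0.0, 1.0, n))

best_half = score >= np.quantile(score, 0.5)   # best 50% of lives by score
share_good = good[best_half].mean()
print(f"good risks in best 50%: {share_good:.1%}")      # well above the 80% baseline
print(f"type II model error:    {1 - share_good:.1%}")  # poor risks the model lets through
```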

Traditional underwriting has to be used to remove the worst cases of type II error and any additional anti-selection encouraged by the easy-to-buy proposition. Underwriting or pricing can be used to allow for moderately impaired lives that form part of this model error. The statistic used to measure the effectiveness of the model is the coefficient of concordance and anything over 50% is better than random, with the best models seen in this field achieving up to 70%.
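Read as the usual c-statistic, the coefficient of concordance is the proportion of (good risk, poor risk) pairs in which the good risk receives the higher model score, so 50% corresponds to a coin toss. The snippet below computes it by brute force on a handful of made-up scores; interpreting the article’s statistic this way is an assumption, and the numbers are purely illustrative.

```python
# Concordance by brute force: the share of (good, poor) pairs in which the
# good risk outscores the poor risk (ties count a half). Scores are made up.
import itertools

good_scores = [0.91, 0.74, 0.77, 0.70, 0.62]   # lives offered standard terms
poor_scores = [0.80, 0.66, 0.55]               # lives heavily loaded or declined

pairs = list(itertools.product(good_scores, poor_scores))
concordant = sum(1.0 if g > p else 0.5 if g == p else 0.0 for g, p in pairs)
print(f"coefficient of concordance: {concordant / len(pairs):.0%}")  # 50% = random
```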

Product characteristics
Clearly these techniques have the potential to be used across a range of products and distribution channels. Wherever possible, the product, brand and channel combination for any proposition should be derived from a model built on data from the same mix. For example, it would not be appropriate to target independent financial advisers with a proposition derived from a model built on tied sales.

In all cases the product presented to the customer should be very similar to standard products sold using traditional underwriting. It must be kept as simple as possible, in terms of added features and benefits, with limits on maximum sum assured, age at inception and term.

A key assumption, when pricing insurance contracts marketed at consumers selected using predictive techniques, is take-up rate by health status. Understanding consumer behaviour is therefore an essential part of the product development process.

Indeed, some of the type II error lives will not have been able to buy insurance through any other route, so they can be expected to show a higher take-up rate than the healthiest lives. This effect is exaggerated further for those not even represented in the model because of their very poor health.
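The effect of health-dependent take-up on the pool can be seen with a small back-of-envelope calculation; the shares, relative mortalities and take-up rates below are invented for the illustration.

```python
# Hypothetical sketch of how take-up that varies by health status shifts the
# average mortality of the lives who actually buy. All numbers are made up.
offered = {                      # lives offered cover via the predictive model
    "healthy":  {"share": 0.95, "relative_mortality": 1.0},
    "impaired": {"share": 0.05, "relative_mortality": 3.0},   # type II error lives
}

def pool_mortality(take_up):
    """Average relative mortality of those who take up the offer."""
    weights = {k: v["share"] * take_up[k] for k, v in offered.items()}
    total = sum(weights.values())
    return sum(w * offered[k]["relative_mortality"] for k, w in weights.items()) / total

print(pool_mortality({"healthy": 0.10, "impaired": 0.10}))  # uniform take-up -> 1.10
print(pool_mortality({"healthy": 0.10, "impaired": 0.30}))  # impaired keener -> ~1.27
```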

A way to counteract this anti-selective effect would be to have the insurance linked to the sale of another product or packaged in such a way that all potential customers buy the product, for example, linked to employment or included in a premium banking package.

Users of predictive techniques
These methods are most appropriate for those who hold a lot of data on their customers. Ideally, they will have access to both predictor data and actual mortality or underwriting experience on some of those same customers. Bancassurers clearly meet these criteria, but so too do consumer-centric, data-rich companies with links to a life insurer. Examples include supermarkets (especially those that already sell financial products), non-life insurers and asset managers.

Predictive techniques are common in other industries, and life offices and banks already use them to model propensity to buy. Of course, since premiums for life insurance are derived from a very low underlying claim rate, it does not take many more claims than expected to turn a mildly profitable line into one making big losses. We should therefore avoid increasing the attractiveness of the product to the few with a high propensity to claim, and hence increasing the price unfairly for the wider pool. Indeed, when carefully developed, these techniques have huge potential, as they can remove the sales barriers that dissuade many healthy people from buying insurance.
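A back-of-envelope calculation makes the point about thin margins; the premium, sum assured and claim rate below are invented, and expenses are ignored.

```python
# Made-up numbers, expenses ignored: with a claim rate of 1 in 1,000 a year,
# a modest rise in claims erases the margin on the whole book.
policies = 100_000
sum_assured = 100_000           # per policy
expected_claim_rate = 0.001     # one death claim per 1,000 lives per year
premium_per_policy = 120        # a 20% margin over expected claims of 100

premiums = policies * premium_per_policy
for extra in (0.0, 0.10, 0.30):                     # claims relative to expected
    claims = policies * expected_claim_rate * (1 + extra) * sum_assured
    print(f"{extra:>4.0%} extra claims -> profit {premiums - claims:>12,.0f}")
```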

Paul Hately is head of accelerated underwriting and data insights at Swiss Re Life and Health