Stephen Carlin, Dan Pringle and Thomas Kernreiter consider how data science could improve life insurance repricing optimisation outcomes for customers
When it comes to cracking difficult insurance problems, modern data science methods are increasingly valuable. Applied to life insurance repricing optimisation, they give insurers opportunities to improve products and prices so that they better reflect their ultimate priority: the customer.
Ethical considerations and customer treatment requirements are key components of wider industry discussions, particularly around the use of predictive algorithms. While life insurers operating within markets that have fewer consumer protections may gear price optimisation towards maximising shareholder value, this isn’t the case in the UK. We’ve found these discussions immediately raise questions regarding the principles of treating customers fairly (TCF), which have been fundamental to UK life insurers for over a decade.
Here, we outline the modelling techniques we use in price optimisation exercises across different jurisdictions, and how incorporating customer metrics improves outcomes for insurance customers.
Price optimisation overview
The simplified goal of price optimisation is finding the set of premium rates with an optimal trade-off between sales volumes and profitability of contracts sold. What ‘optimal’ means varies depending on a company’s specific objectives and constraints.
While robust optimisation is notoriously challenging in the complex life insurance business, advances in data science and computing power are changing this. Let’s look at three layers of data science methods in price optimisation: price-to-sales sensitivity, neural network proxy models, and price optimisation with constraints.
Price-to-sales sensitivity model
In most cases we expect lower sales at higher prices, and vice versa. But how do we quantify these sales fluctuations and isolate price effects from other factors? Setting prices without this understanding is flying blind, and price optimisation requires it.
It’s necessary to account for a wide variety of factors that influence sales levels and price sensitivity, including:
- Sales channels – sales through comparison websites may be more sensitive than sales via a tied salesforce
- Company brand and customer service reputation
- Sensitivity to specific competitor pricing
- Customer needs and policy attributes (such as age, sum assured, policy term).
Price-to-sales models are generally built using generalised linear models (GLMs). The overall process requires careful data preparation and handling of data limitations (including the comparatively low sales volumes typical of life insurance), identification of appropriate model variables, and model selection.
Sales data can link to market prices by grouping sales into ‘segments’ such as age, gender, term, sum assured, smoker status and other attributes that affect price. A careful balance between very granular pricing resolution and segment aggregation into ‘buckets’ is necessary to ensure sufficient sales volumes for robust statistics.
Market position’s effect on sales can be assessed by comparing how the relative sales level of each bucket and each month varies in relation to market price metrics, such as rank or the percentage difference from the median price.
Different market price metrics may be investigated depending on available market pricing and expected effects. Both can vary by jurisdiction. For example, UK consumer protection rules require advisers to recommend best-value products, which might mean a significant sales effect when moving in or out of the top-ranked tier.
Figure 1 illustrates a simplified price-to-sales model. The price position of any one segment (eg 35 years old, 10-year term, non-smoker with sum assured £200,000, etc) can change over time as the insurer and its competitors reprice. Whatever the absolute price position (and with all other things being equal), we'd expect higher segment sales at times of lower relative price. Price position might be measured here by ‘price relative to lowest-priced main competitor’, ‘percent relative to median price’ or some other metric reflecting market dynamics.
Relative competitiveness (RC) measures variations in this metric over time. For a given segment and month, an RC of +10% means a price position 10% higher than that segment's average price position over all months. (This might not mean a higher price, or higher profitability, as the change in price position might be due to a competitor price change.) Sales from all segments and months can be grouped, or bucketed into RC intervals, as in the histogram in Figure 1.
For each RC interval, we calculate relative sales (RS). If a segment's sales one month are twice its average monthly sales, it has RS = 2. Taking the sales-weighted average over all segments and months in an RC interval gives its overall RS value. This is what the black step-curve in Figure 1 shows. Finally, a model of relative sales to relative competitiveness is fit to return a sales sensitivity to the chosen price metric (the red line).
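The RC/RS construction above can be sketched in a few lines. Everything here is synthetic and simplified — the segment count, the assumed sensitivity driving the fake sales, and the log-linear fit are illustrative choices, not the model behind Figure 1.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic segment-month data: 20 segments observed over 24 months, each
# with a price position (e.g. price relative to the median market price).
df = pd.DataFrame({
    "segment": np.repeat(np.arange(20), 24),
    "month": np.tile(np.arange(24), 20),
    "price_position": rng.normal(1.0, 0.08, 480),
})
# Assumed behaviour for illustration: lower relative price -> more sales.
seg_base = rng.uniform(20, 80, 20)[df["segment"]]
df["sales"] = rng.poisson(seg_base * np.exp(-3 * (df["price_position"] - 1)))

# Relative competitiveness: price position vs the segment's average position.
seg_mean = df.groupby("segment")["price_position"].transform("mean")
df["rc"] = df["price_position"] / seg_mean - 1
# Relative sales: sales vs the segment's average monthly sales.
df["weight"] = df.groupby("segment")["sales"].transform("mean")
df["rs"] = df["sales"] / df["weight"]

# Bucket into RC intervals; sales-weighted average RS per bucket gives the
# step-curve of Figure 1.
df["rc_bucket"] = pd.cut(df["rc"], bins=np.arange(-0.15, 0.16, 0.05))
df["wrs"] = df["rs"] * df["weight"]
grp = df.groupby("rc_bucket", observed=True)
step = grp["wrs"].sum() / grp["weight"].sum()

# Fit a log-linear sensitivity model, log RS ~ a + b * RC (the red line).
b, a = np.polyfit(df["rc"], np.log(df["rs"].clip(lower=1e-6)), 1)
print(f"fitted sensitivity slope: {b:.2f}")
```

The fitted slope is the sales sensitivity to the chosen price metric: a negative value confirms that a higher relative price depresses relative sales.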
Neural network proxy model
Once we understand how price affects sales, we need a better way to determine how full actuarial model outputs depend on price. Repricing optimisation requires repeated evaluation of key financial metrics for thousands of small changes in premium series and associated sales volumes. This simply takes too long using full, complex actuarial models. Proxy models that accurately approximate the actuarial model output are a much faster alternative.
Using proxy models for actuarial models has been established practice in the UK and Europe for the past decade. They are used to calculate capital requirements for complex life insurance liabilities, such as contracts including options and guarantees.
In our case, repeated ‘sensitivity runs’ of the actuarial model generate outputs for the financial metrics of interest across a range of pricing inputs, and a proxy model is fitted to the resulting inputs and outputs. In terms of TCF, this provides an opportunity to ensure training data is fully unbiased – an issue many industry experts have cited as a primary concern with machine learning. As the proxy model itself doesn’t need to be integrated with aggregation systems or scenario generators, we can explore different proxy model approaches. One promising approach is based on neural networks, a machine learning method well suited to finding complex underlying models.
Figure 2 illustrates the key concepts of neural networks. The neural network model learns the impact of price changes (input layer) on financial metrics (output layer), such as embedded value or present value (PV) of premiums. The hidden layers (two in this example) and the output layer include trainable weights that are learned in the model fitting process, such as minimising a loss function (eg mean absolute error) using gradient descent.
In addition to enabling the price optimisation process, proxy models may provide additional quantitative insights through feature importance analysis, where the algorithm determines which pricing factors have the most impact on the financial metrics. This is useful as it allows the optimiser to focus on a narrower set of variables. An advantage of neural networks is that they can learn complex relationships in the data without significant engineering effort. This makes the approach flexible in the pricing use case, where pricing factors can change.
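A minimal sketch of such a proxy, assuming synthetic ‘sensitivity runs’ in place of real actuarial model output. The two hidden layers echo the Figure 2 sketch; the target function, the factor count and the permutation-based importance measure are all illustrative assumptions, not the production setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Synthetic 'sensitivity runs': each row perturbs premium multipliers for
# 8 hypothetical pricing factors; the target mimics an actuarial model's
# PV-of-premiums output (a higher price raises premium but cuts volume).
n_runs, n_factors = 2000, 8
X = rng.uniform(0.8, 1.2, size=(n_runs, n_factors))
weights = np.linspace(20, 200, n_factors)       # later factors matter more
y = (X * weights * np.exp(-2 * (X - 1))).sum(axis=1)
y = (y - y.mean()) / y.std()                    # standardise the target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers, as in the Figure 2 sketch; inputs are standardised
# before the network.
proxy = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
)
proxy.fit(X_train, y_train)
r2 = proxy.score(X_test, y_test)

# Feature importance: how much held-out accuracy drops when each pricing
# factor is shuffled.
imp = permutation_importance(proxy, X_test, y_test, n_repeats=5,
                             random_state=0).importances_mean
print(f"proxy R^2 = {r2:.3f}; most influential factor index: {imp.argmax()}")
```

Each proxy evaluation takes milliseconds rather than the minutes or hours of a full actuarial run, and the importance scores recover which factors drive the metric — here, by construction, the heavily weighted later factors.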
Optimisation with constraints
A repricing exercise using the proxy model may require thousands of iterations to explore the parameter space and find the best solution across the large number of pricing points and factors. With proxy model evaluation times of only a few milliseconds, this becomes feasible, allowing for scenario analysis and comparison of various pricing strategies in a way traditional tooling can’t support.
There’s still one key input for the optimisation process that we haven’t discussed: constraints. The total number of constraints can be in the tens of thousands, including but not limited to:
- Output metrics: payout ratio, profit margin, competitive positioning on a portfolio or segment level, etc
- Premium consistency constraints: premiums increasing for higher ages at inception, increased premiums for smokers, premiums increasing for higher sums assured, etc
- Marginal profitability constraints at the segment level, which limit cross-subsidy between segments.
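The constrained optimisation step can be sketched with a generic solver. The toy objective, the five age bands, the margin floor and the sensitivity function below are hypothetical stand-ins for proxy-model evaluations, chosen only to show sales being maximised subject to margin and premium-consistency constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setting: choose premium multipliers for 5 age bands. Sales fall as
# prices rise; margin rises with price. Both functions stand in for calls
# to the proxy model.
n = 5
base_sales = np.array([100.0, 90.0, 80.0, 70.0, 60.0])

def sales(x):
    return base_sales * np.exp(-3 * (x - 1))   # price-to-sales sensitivity

def margin(x):
    return (x - 0.9) * sales(x)                # per-band profit proxy

def neg_total_sales(x):
    return -sales(x).sum()                     # objective: maximise sales

cons = [
    # Portfolio margin must not fall below a floor (hypothetical level).
    {"type": "ineq", "fun": lambda x: margin(x).sum() - 55.0},
    # Premium consistency: multipliers non-decreasing with age band.
    {"type": "ineq", "fun": lambda x: np.diff(x)},
]
bounds = [(0.85, 1.15)] * n

res = minimize(neg_total_sales, x0=np.ones(n), bounds=bounds,
               constraints=cons, method="SLSQP")
print("optimal multipliers:", np.round(res.x, 3))
```

At the optimum the margin floor binds: the solver raises prices just enough to hit the required portfolio margin while keeping sales as high as possible, and the monotonicity constraint keeps the premium series consistent across age bands.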
Proxy models’ flexibility allows pricing teams to explore multiple pricing strategies with different combinations of constraints and levels to understand how constraints affect business performance. This enables executives to understand the impact of different pricing strategies and make informed decisions about shareholder and policyholder value. Increased clarity on how different pricing strategies can positively impact policyholders is another opportunity to incorporate TCF principles into pricing optimisation.
Figure 3 illustrates an example output, with change in gross margin plotted against change in PV premiums for a range of optimisation scenarios with varying constraints.
- The red scenario is an unconstrained optimisation to maximise sales. It shows strong sales growth but a significant reduction in profitability.
- Blue scenarios show that highly restrictive constraints on premium consistency and marginal profitability (0%, 0.5% and 1.5%) allow little opportunity to grow sales or margin.
- Green scenarios with moderate premium consistency and marginal profitability constraints show a useful sales uplift without sacrificing profitability.
- Yellow scenarios show additional sales and margin uplift compared to green, but at the expense of premium consistency.
Beyond being a significant lever for growth and a chance to align pricing strategy with business strategy, pricing optimisation can improve insurers’ flexibility in the market and responsiveness to customers.
The life insurance pricing optimisation process outlined above involves the combination of three layers of data science applications and multiple techniques. It can naturally accommodate consumer-focused metrics and TCF considerations, such as constraints on premium rates or financial outputs at both the portfolio and segment level. This results in a powerful tool that life insurers can use to their strategic advantage while simultaneously improving policyholder value.
Stephen Carlin is customer success director and product owner at Montoux.
Dan Pringle is a data scientist at Montoux.
Thomas Kernreiter is a data scientist at Montoux.