Rachael Armitage and Simone Bohnenberger-Rich discuss how to automate the identification of asset characteristics for the Solvency II matching adjustment
Since its emergence in the 1500s, the UK life insurance industry has been a keen adopter of technologies. The earliest life insurance policy is believed to be dated 1583 and covers the life of a William Gibbons. At that time, technology included the abacus – one of the earliest calculating devices.
The 1700s saw the introduction of logarithms and mortality tables, while the 1800s brought machines and devices to assist calculation. Electrically operated machines began to appear in actuarial departments in the 1930s, but it wasn’t until the early 1960s that we saw the first electronic desk calculators in actuarial offices.
Less than 60 years later, a life insurance office without technology would be unrecognisable. Life insurers are now looking to embrace artificial intelligence (AI) and machine learning (ML), to the extent that they can benefit from them. Since life insurers are required to process text in addition to numerical data, documentation stands out as being ripe for the application of these smart technologies.
Solvency II matching adjustment and documentation
The Solvency II matching adjustment (MA) allows firms to adjust the risk-free interest rate used to calculate the best estimate of a portfolio of eligible insurance liabilities. The allowable adjustment is based on assets that meet the MA criteria set by the regulator.
Identifying such assets requires extracting asset characteristics directly from product prospectuses and associated publications, such as pricing supplements. These documents are not always in electronically readable formats, and can be lengthy.
A technology-focused and forward-thinking asset manager wanted to explore the use of smart technology to determine whether certain bonds are eligible for MA. The pilot was an assessment of 1,000 bonds. We believe this is one of the first adoptions of smart technology in the life insurance industry and in Solvency II.
AI and ML have come a long way over the last couple of years, and can now help with nuanced tasks as well as repetitive ones. ML is focused on developing systems that can ‘learn’ patterns from data and use those patterns to make predictions when presented with new data. In this example, ML has been applied to the document review needed to determine MA eligibility.
Natural Language Processing (NLP) applies this predictive ability to human language. In this example, users can show an NLP platform examples of bond prospectuses and direct the platform to the information they want to surface from those prospectuses, such as maturity date. The platform then builds patterns from these examples, which allows it to predict and produce information when shown new bond prospectuses.
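As a rough sketch of the underlying idea (not the platform's actual method), a toy model can learn which tokens in hand-labelled example sentences signal a target field, such as maturity date, and then score sentences from unseen prospectuses:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def train(labelled):
    """Count token frequencies in sentences that do / do not
    contain the target field (e.g. maturity date)."""
    pos, neg = Counter(), Counter()
    for sentence, has_field in labelled:
        (pos if has_field else neg).update(tokenize(sentence))
    return pos, neg

def score(model, sentence):
    """Higher score = more likely to contain the target field."""
    pos, neg = model
    return sum(pos[t] - neg[t] for t in tokenize(sentence))

# Hand-labelled training examples (illustrative only):
examples = [
    ("The Notes will mature on 15 June 2035.", True),
    ("The maturity date of the Notes is 1 March 2040.", True),
    ("Interest is payable annually in arrear.", False),
    ("The Issuer is incorporated in England.", False),
]
model = train(examples)
```

A real NLP platform uses far richer language models than token counts, but the workflow is the same: learn patterns from labelled examples, then predict on new documents.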
Training the platform
Figure 1 illustrates the simplified workflow of this project. As a first step, we worked to define what information is required from bond prospectuses to assess whether a bond is eligible for MA. Determining MA eligibility is nuanced, and an assessment needs to consider multiple data points. A provision or characteristic that disrupts, or calls into question, certainty of the cashflow profile of the bond needs to be understood. Example data points include maturity date, coupon timing or call option applicability.
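To make the idea concrete, such data points could be gathered into a simple record with a crude first-pass screen for cashflow certainty. The field set and screening rule here are simplified assumptions for illustration; the real eligibility assessment is far more nuanced:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BondDataPoints:
    """Simplified, hypothetical field set -- the real assessment
    considers many more data points than these."""
    maturity_date: Optional[date]
    coupon_timing_fixed: bool
    has_issuer_call_option: bool

def cashflows_look_certain(bond: BondDataPoints) -> bool:
    # Crude first-pass screen: any provision that calls the
    # certainty of the cashflow profile into question rules
    # the bond out here.
    return (bond.maturity_date is not None
            and bond.coupon_timing_fixed
            and not bond.has_issuer_call_option)
```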
Once these data points were identified, the NLP platform was trained on a sample of bonds, teaching it how to find the information required to determine eligibility (2).
Next, the platform created patterns based on the training set (3) so that it could predict the required data for additional bonds. Once training was complete (4), around 3,000 documents were uploaded for the platform to analyse (5), from which it extracted the required information bond by bond, as shown in Figure 2.
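Once the platform surfaces the relevant passage, each data point still has to be parsed into a structured value. A minimal sketch for one field, assuming a hypothetical `extract_maturity_date` helper and a single date format (real prospectuses use many):

```python
import re
from datetime import datetime

MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")

def extract_maturity_date(text):
    """Parse a date like '15 June 2035' from surfaced prospectus
    text. Illustrative only -- a hypothetical helper, not the
    platform's parsing logic."""
    m = re.search(rf"(\d{{1,2}}) ({MONTHS}) (\d{{4}})", text)
    if m is None:
        return None
    return datetime.strptime(m.group(0), "%d %B %Y").date()
```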
Finalising the results
At this stage, human intervention is critical. The NLP platform flags data points about which it is uncertain (6) and these should be reviewed, checked and, if necessary, corrected by a human. In addition, data points that the actuarial analysis team recognised as unlikely to be correct were reviewed and, if necessary, corrected. ‘Flagging’ of low-confidence answers brings together the best of both worlds: human and platform together produce more accurate results than human alone or platform alone.
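A minimal sketch of this triage step, assuming the platform attaches a confidence value to each extracted data point (the field names and threshold here are illustrative, not the platform's actual output):

```python
def triage(extractions, threshold=0.90):
    """Split platform output into auto-accepted answers and
    low-confidence answers queued for human review."""
    accepted, for_review = [], []
    for item in extractions:
        (accepted if item["confidence"] >= threshold
         else for_review).append(item)
    return accepted, for_review

# Example platform output for one bond (values hypothetical):
results = [
    {"field": "maturity_date", "value": "2035-06-15", "confidence": 0.97},
    {"field": "call_option",   "value": "yes",        "confidence": 0.55},
]
accepted, for_review = triage(results)
```

Only the low-confidence items reach a human reviewer, which is what makes the combined human-plus-platform process faster than a full manual review.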
ML is not a silver bullet. It does not produce results with 100% accuracy – but nor do humans. Applied to MA review, NLP can save time and promote accuracy. Data gathered from large-scale exercises, where the platform has reviewed hundreds of thousands of documents and been benchmarked against human performance, shows that human accuracy is around 70%–80%. The human brain is not designed for repetitive tasks, which is why human accuracy falls below 100%.
There is enormous power in combining the strengths of ML with human review to get to a near-perfect data set. In particular, this project has demonstrated that NLP, together with actuarial and ML expertise, can drive efficiency and enable repetitive tasks associated with MA eligibility to be outsourced, in part, to a machine.
Rachael and Simone will be presenting their webinar ‘AI in an actuarial world: training a machine to assess matching adjustment eligibility’ at this year’s Life Conference. Visit www.actuaries.org.uk/Life2020
Rachael Armitage is a director in Deloitte's Actuarial Life Insurance practice
Simone Bohnenberger-Rich is head of product portfolio at Eigen Technologies