New game-changing technology could become the 'new normal' for the insurance industry, according to Osmo Jauri and Timo Penttilä.
Traditionally, contract level Monte Carlo simulation has been considered computationally too slow for large portfolios. However, it offers great advantages:
- No simplifications: all products work as in reality
- No replicating portfolios: all is done on contract level cash flows
- No option theory needed: embedded options are automatically priced
Monte Carlo simulation offers more complete information, since all variables carry full probability distributions. Results are more reliable due to reduced model risk. When we incorporate asset-side valuations and hedging strategies, one model can serve many tasks: financial planning, actuarial modelling, product development and asset liability management/risk management. It can offer significant economic benefits, particularly when building an internal model.
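As an illustration of what full probability distributions mean in practice, the sketch below simulates a single, heavily simplified savings contract at contract level. All parameters (fund dynamics, lapse rate, discount factor) are hypothetical and not taken from the cases discussed in this article.

```python
# Illustrative sketch only (not Model IT's software): contract-level Monte
# Carlo for one simplified savings contract, assuming lognormal fund
# returns and a constant annual lapse probability. Parameters hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)
n_scenarios, n_years = 10_000, 10
premium, lapse_p, discount = 1_000.0, 0.05, 0.98

# Simulate fund growth paths: one row per scenario, one column per year.
returns = rng.lognormal(mean=0.03, sigma=0.10, size=(n_scenarios, n_years))
fund = premium * np.cumprod(returns, axis=1)

# Simulate lapse: the contract pays out the fund value in the lapse year
# (capped at the projection horizon for simplicity).
lapse_year = rng.geometric(lapse_p, size=n_scenarios).clip(max=n_years)
payout = fund[np.arange(n_scenarios), lapse_year - 1]
pv = payout * discount ** lapse_year

# A full distribution of the liability value, not just a point estimate.
print(f"mean PV: {pv.mean():.0f}, 95th percentile: {np.quantile(pv, 0.95):.0f}")
```

Because every scenario produces a complete cash-flow path, any quantile or tail measure can be read directly off the simulated distribution.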
To achieve these benefits, many have been using shortcuts, e.g. grouping of contracts and using a mixture of formulaic and simulation-based valuation techniques. However, Solvency II has a clear message: the less you assume, the less you have to explain. For example, grouping of contracts with embedded options requires significant work to demonstrate that nothing gets 'lost in the grouping'. In this article we study an approach where no compromises are made, and all valuations are based on future cash flow simulations created at contract level. But is it too heavy to run?
High speed simulation is created by combining two different technologies: distributed computing and advanced software design.
Distributed computing provides high computational capacity by dividing tasks among several cores, central processing units (CPUs) and workstations, whether inside a single workstation, across a local area network or in a cloud service. Cloud services offer high-performance computer cluster power at potentially lower cost, because the user defines the number of CPU cores to be used and pays for core-hours spent rather than for idle time. The expense is the same for taking one core for 100 hours or 100 cores for one hour.
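The pricing point can be made concrete with a trivial sketch; the per-core-hour rate below is a hypothetical figure, not any provider's actual price.

```python
# Minimal sketch of the pay-per-core-hour pricing model described above.
# The rate is a hypothetical illustration, not an actual cloud price.
def cloud_cost(cores: int, hours: float, rate_per_core_hour: float) -> float:
    """Total cost depends only on core-hours consumed."""
    return cores * hours * rate_per_core_hour

# 1 core for 100 hours costs the same as 100 cores for 1 hour,
# but the second option finishes 100x sooner in calendar time.
print(cloud_cost(1, 100, 0.25))   # 25.0
print(cloud_cost(100, 1, 0.25))   # 25.0
```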
Modern tools offer several concepts that ease model-building and help to speed up computations. Modelling work can be simplified by using rule-based, object-oriented modelling. When we create our model from rules and objects, it becomes easy to use, document and audit, in contrast to algorithm- and procedure-based models. Simulations can be executed in vector form instead of for-loops, and intelligent algorithms can minimise unnecessary work by detecting differences between products.
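The difference between for-loop and vector-form execution can be sketched as follows; the contract data here are randomly generated for illustration only, and both versions produce identical results.

```python
# Sketch of the vectorisation idea: accumulating fund values for many
# contracts at once instead of looping contract by contract.
# All figures are illustrative.
import numpy as np

rng = np.random.default_rng(seed=7)
n_contracts, n_steps = 1_000, 120
premiums = rng.uniform(100, 1_000, size=n_contracts)
step_returns = rng.normal(1.003, 0.01, size=(n_contracts, n_steps))

# For-loop version: one contract, one time step at a time.
loop_values = np.empty(n_contracts)
for i in range(n_contracts):
    v = premiums[i]
    for r in step_returns[i]:
        v *= r
    loop_values[i] = v

# Vectorised version: all contracts in one array expression.
vec_values = premiums * np.prod(step_returns, axis=1)

assert np.allclose(loop_values, vec_values)
```

The vectorised form replaces the nested Python loops with a single array operation, which is where most of the speed-up in interpreted environments such as MATLAB or NumPy comes from.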
How does it work in practice?
We present two cases where we apply contract level Monte Carlo to real-life insurance portfolios. We run the models both locally on workstations and, for comparison, in a cloud service. We receive full probability distributions for all variables at all time steps. Market Consistent Embedded Value, life Solvency Capital Requirement and Own Risk and Solvency Assessment applications can all be built on those outputs.
Case A - Life Insurance
Case A is a life insurance company with 60,000 contracts, including traditional risk policies and savings-based products. The model was set up to run for 60 years with changing time steps, starting with one-month steps and later moving to 12-month steps. All products and cash flows were defined realistically. Customer behaviour was specified through stochastic lapses and premiums. Policyholders were subject to simulated disability and death risks. Economic scenarios were read from a separate source.
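A hedged sketch of the kind of contract-level decrement simulation described above, with hypothetical monthly lapse, death and disability probabilities (these are not Case A's actual assumptions):

```python
# Sketch of per-policy decrement simulation over monthly time steps.
# Probabilities are invented for illustration, not Case A's calibration.
import numpy as np

rng = np.random.default_rng(seed=3)
n_policies, n_steps = 60_000, 12
p_lapse, p_death, p_disab = 0.004, 0.0002, 0.0005

# State per policy: 0 = in force, 1 = lapsed, 2 = dead, 3 = disabled.
state = np.zeros(n_policies, dtype=np.int8)
for t in range(n_steps):
    in_force = state == 0
    u = rng.random(n_policies)
    # One uniform draw per policy; disjoint intervals pick the decrement.
    state[in_force & (u < p_lapse)] = 1
    state[in_force & (u >= p_lapse) & (u < p_lapse + p_death)] = 2
    state[in_force & (u >= p_lapse + p_death)
          & (u < p_lapse + p_death + p_disab)] = 3

counts = np.bincount(state, minlength=4)
print(dict(zip(["in force", "lapsed", "dead", "disabled"], counts)))
```

Because the loop runs over time steps rather than policies, each step is a vectorised operation across all 60,000 contracts at once.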
Case B - Motor Insurance

Case B is a P&C company whose comprehensive motor vehicle insurance portfolio was analysed. The model covered seven different types of claims with a correlation structure, all with stochastic claim size and three with a stochastic time frame (see Figure 2).
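A minimal sketch of correlated claim types, assuming Gaussian dependence between lognormal annual claim totals; the seven types, the scale parameters and the correlation value are invented for illustration and are not the Case B calibration.

```python
# Sketch: seven claim types with a correlation structure, each with a
# stochastic (lognormal) annual claim total. All figures hypothetical.
import numpy as np

rng = np.random.default_rng(seed=11)
n_sims, n_types = 10_000, 7

# Mild positive dependence between all claim types (illustrative value).
corr = np.full((n_types, n_types), 0.3) + 0.7 * np.eye(n_types)
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n_sims, n_types)) @ L.T   # correlated normals

# Type-specific lognormal annual claim totals (hypothetical scales).
mu = np.log([50.0, 30.0, 20.0, 10.0, 8.0, 5.0, 2.0])
claims = np.exp(mu + 0.5 * z)
total = claims.sum(axis=1)

print(f"mean total claims: {total.mean():.1f}")
```

The Cholesky factor turns independent normal draws into correlated ones, so the dependence between claim types is preserved in every simulated year.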
A summary of the model structures and execution times is in Table 1 below.
Does it run in a Cloud?
In theory, when we increase the number of cores (n), the price of computation remains the same while the calendar time spent shortens in proportion to 1/n. In practice this does not hold: there are increased data transfers between cores, and there are parts of the software code that cannot be distributed among cores. Our testing has shown that this loss of efficiency, whilst small for many tasks, can grow when using thousands of cores; performance depends on the products and model structure. Through testing, the user can easily determine the optimal core usage.
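The efficiency loss from non-distributable code is the classic Amdahl's-law effect: if a fraction s of the run time is inherently sequential, the speedup on n cores is 1/(s + (1-s)/n). A small sketch, assuming a hypothetical 2% serial fraction:

```python
# Amdahl's law: speedup saturates when part of the work cannot be
# distributed. The 2% serial fraction is a hypothetical illustration.
def speedup(n_cores: int, serial_fraction: float) -> float:
    """Theoretical speedup on n_cores when `serial_fraction` of the
    run time is inherently sequential (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# With 2% serial work, 100 cores give ~34x, and 1,000 cores only ~48x,
# which is why adding thousands of cores yields diminishing returns.
for n in (10, 100, 1_000):
    print(n, round(speedup(n, 0.02), 1))
```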
Increasing the number of simulation rounds does not multiply computation time by an equal factor. Our experience shows that increasing simulation rounds from 1,000 to 10,000 increases computation time only by a factor of 2-4, depending on the product mix, on both cloud services and local workstations.
How were the models built?
In both cases we defined all relevant product and technical specifications in the models, which required the following definitions:
1. Product terms. Products were defined, using an object-oriented modelling language, as agreements to exchange cash flows when given conditions are met.
2. Random variables and processes. This includes customer behaviour and claim processes, and was done partly by using object-oriented modelling and partly by writing rules as MATLAB statements.
3. Company balance sheet formulae and technical specifications. In our examples we used mark-to-market valuations.
4. Decision-making rules. These affect the course of a simulation by making path-dependent decisions, e.g. on dividends and benefits.
In addition, we imported economic scenarios and contract details policy by policy.
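Step 1 above can be sketched in code: a product is an agreement to exchange cash flows when given conditions are met, expressed as rules attached to an object. The class, rule names and amounts below are invented for illustration; the actual models were built in a MATLAB-based environment, as noted in step 2.

```python
# Hedged sketch of rule-based, object-oriented product definition.
# All names and amounts are hypothetical, not the actual model's terms.
from dataclasses import dataclass, field
from typing import Callable

# A rule maps the simulated state of a contract at one step to a cash flow.
Rule = Callable[[dict], float]

@dataclass
class Product:
    name: str
    rules: list[Rule] = field(default_factory=list)

    def cash_flow(self, state: dict) -> float:
        """Total cash flow produced by all rules for one simulated state."""
        return sum(rule(state) for rule in self.rules)

# Example rules for a simple term-life product (hypothetical terms),
# signed from the insurer's point of view.
def premium_rule(s: dict) -> float:
    return 100.0 if s["in_force"] else 0.0      # premium received

def death_benefit(s: dict) -> float:
    return -50_000.0 if s["died"] else 0.0      # benefit paid out

term_life = Product("term life", [premium_rule, death_benefit])
print(term_life.cash_flow({"in_force": True, "died": False}))  # 100.0
```

Defining products as rules in this way keeps each contract term visible and individually auditable, which is the documentation advantage claimed for rule-based modelling earlier in the article.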
In this article we demonstrated that contract level Monte Carlo is applicable to real life modelling. By using distributed computing and cloud services, desired performance can be reached in a cost-efficient manner. Perhaps the new 'normal' for the insurance industry is to expect powerful results quickly and at low cost.
Osmo Jauri, Dr.Tech. M.Sc. (Econ.), and Timo Penttilä, DBA (Finance), work for Model IT Ltd. Model IT specialises in insurance and asset management solutions.