
Practical stochastic modelling! (Part 2)

In our article in the June issue of The Actuary we identified the main stages in developing a stochastic model and covered the first three of these. Just to recap, the stages were:
1 Scope, timetable, and team
2 Systems, data, and support
3 Specification and decision rules
4 Stochastic assumptions
5 Adapting to unexpected problems
6 Testing and reasonableness
7 Interpretation, explanation, and reporting
In this article we look at stages 4 to 7 and draw some conclusions.

Stochastic assumptions
It is the scenario generator that will transform the dynamic corporate model into a stochastic model. The model is likely to be used to produce some form of ruin probability or the fair value of liabilities (or both). For either use, an economic scenario generator (ESG) is required.
For ruin probability calculations, a model that accurately reflects extreme market events and the correlations between asset classes in those scenarios is necessary. The expected rate of return, volatility of returns, and correlation between asset classes will typically be based on past experience and judgement.
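By way of illustration, the sketch below estimates a ruin probability by Monte Carlo for a simple two-asset portfolio run against liabilities growing at a fixed rate. Everything in it (the asset mix, the return and volatility assumptions, the correlation, and the use of a plain multivariate normal rather than a fatter-tailed model) is hypothetical; a production model would pay far closer attention to the extreme scenarios discussed above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_sims, n_years = 100_000, 10
mu = np.array([0.07, 0.04])          # expected returns: equities, bonds (illustrative)
vol = np.array([0.18, 0.06])         # volatilities (illustrative)
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])        # correlation set by judgement
cov = np.outer(vol, vol) * corr

weights = np.array([0.6, 0.4])       # asset mix
assets0, liabs0, liab_growth = 110.0, 100.0, 0.05

# Correlated annual returns for every simulation and year: shape (n_sims, n_years, 2).
rets = rng.multivariate_normal(mu, cov, size=(n_sims, n_years))
port_rets = rets @ weights           # portfolio return per simulation-year

assets = assets0 * np.cumprod(1.0 + port_rets, axis=1)
liabs = liabs0 * (1.0 + liab_growth) ** np.arange(1, n_years + 1)

# Ruin = assets fall below liabilities at any projection year.
ruined = (assets < liabs).any(axis=1)
print(f"Estimated ruin probability: {ruined.mean():.3%}")
```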
For fair value calculations, market-consistent models are important, and have the advantage that less significance is attached to the extreme scenarios (which have a low probability weight applied). One simple way to achieve this is by using the types of model used by banks. For a fair value model, the expected rate of return on each asset class does not affect the results, because as the expected rate of return increases, so too does the discounting effect of the deflators.
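The drift-invariance point can be demonstrated directly. The sketch below values a simple put-style maturity guarantee with state-price deflators under a lognormal asset model, trying three different assumed expected returns; all parameters are illustrative, and lam is the market price of risk, (mu - r)/sigma. Each run should print the same figure (the corresponding Black-Scholes put value) up to sampling error, because the deflator exactly offsets the change in drift.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

s0, strike, r, sigma, T = 100.0, 100.0, 0.04, 0.20, 5.0
n_sims = 1_000_000
w = rng.standard_normal(n_sims) * np.sqrt(T)    # terminal Brownian motion value

for mu in (0.04, 0.07, 0.10):                   # three trial expected returns
    lam = (mu - r) / sigma                      # market price of risk
    s_T = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * w)
    deflator = np.exp(-r * T) * np.exp(-lam * w - 0.5 * lam**2 * T)
    guarantee = np.maximum(strike - s_T, 0.0)   # put-style maturity guarantee
    print(f"mu = {mu:.2f}: deflated value = {np.mean(deflator * guarantee):.3f}")
```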
Volatilities should be based on the volatilities of derivative instruments as observed in the market. The correlations between asset classes cannot be observed directly, because instruments whose prices depend on them are not widely traded; past experience and judgement will be needed instead.
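As a minimal sketch of taking volatilities from the market, the code below backs out the volatility implied by a quoted option price using the Black-Scholes formula and a root-finder; the quoted price is hypothetical. An ESG calibrated to this volatility will reproduce that option price.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

s0, k, r, t = 100.0, 100.0, 0.04, 5.0
market_price = 25.0                              # hypothetical observed quote

# Solve bs_call(sigma) = market_price for sigma, bracketing the root.
implied_vol = brentq(lambda v: bs_call(s0, k, r, v, t) - market_price, 1e-4, 2.0)
print(f"Implied volatility: {implied_vol:.4f}")
```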
While certain ESGs may support both fair value and ruin probability calculations, most will start from a less powerful base and improve over time. Your model should therefore be designed with sufficient flexibility to accommodate changes to existing ESGs and the adoption of new ones. This flexibility should include support for both risk-neutral valuation and state-price deflator methods.
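One hypothetical way of building in that flexibility is to have every ESG deliver its scenarios together with per-scenario deflators, so the corporate model's valuation code is indifferent to the method: a risk-neutral ESG supplies ordinary discount factors as its deflators, while a real-world ESG supplies state-price deflators. The interface below is purely illustrative.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ScenarioSet:
    """Output of an ESG, whatever its internal method (names illustrative)."""
    returns: np.ndarray     # shape (n_sims, n_years, n_assets)
    deflators: np.ndarray   # shape (n_sims, n_years)

def fair_value(cashflows: np.ndarray, scenarios: ScenarioSet) -> float:
    """Deflator-weighted average of projected cash flows, shape (n_sims, n_years)."""
    return float((cashflows * scenarios.deflators).sum(axis=1).mean())
```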
Aside from the ESG, it may be appropriate to make allowance for non-financial assumptions and their interaction with projected economic circumstances. Incorporating the stochastic nature of the assumptions into the model should be relatively straightforward, and will be particularly relevant for persistency assumptions where some form of financial guarantee exists. Deriving credible statistics on policyholder behaviour is likely to be the most significant problem.
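As an illustration of the kind of interaction meant here, the sketch below applies a simple dynamic lapse rule in which policyholders are assumed less likely to lapse when their guarantee is in the money. The base rate, sensitivity, floor, and cap are all hypothetical; as noted above, setting them credibly is the hard part.

```python
import numpy as np

def dynamic_lapse(base_rate, guarantee, fund_value, sensitivity=0.5,
                  floor=0.01, cap=0.25):
    """Lapse rate dampened when the guarantee bites (all figures hypothetical)."""
    moneyness = guarantee / fund_value            # > 1 means the guarantee is in the money
    rate = base_rate * moneyness ** (-sensitivity)
    return np.clip(rate, floor, cap)

fund_values = np.array([80.0, 100.0, 125.0])
print(dynamic_lapse(0.08, 100.0, fund_values))
# Lapses fall when the fund is at 80 (guarantee valuable) and rise at 125.
```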

Adapting to unexpected problems
The successful management of unexpected problems will be critical to the success of the project. It is likely that you will need to revise your plans in order to achieve the goals, and a degree of pragmatism is required. ‘Workarounds’ can be effective, and are sometimes more appropriate than the originally intended solution. Having regular access to experienced individuals along with a good project manager can significantly improve progress.
Delivering a 95% solution on time and within budget, with known, well-documented weaknesses, is likely to be preferable to significantly overrunning and overspending on a 100% solution. You can always address known problems in the next phase of development.

Testing and reasonableness
‘How do I know it’s right?’ will be the most frequently asked question once your model is producing output. Testing should be structured to answer this question. There are several techniques available for the testing and audit of a dynamic corporate model, and these can be structured to meet your needs. Formulating a formal testing plan before testing starts will ensure that the tests cover the required areas and that overlap is minimised. Setting sign-off criteria in advance can be useful, but such criteria need to be applied with a degree of pragmatism.
Testing is bound to be an iterative process, so it is vital that all test results are stored and documented; they can then be re-examined when later changes are made. If not, you will find yourself repeatedly going over the same areas.
Although this article cannot hope to cover all the testing techniques available, at a high level we can say that a ‘bottom-up’ approach is likely to be used. Individual asset classes and liability products are tested (normally against any established internal systems using an appropriate deterministic basis), and then ‘corporate-level’ testing is carried out. For most insurers, the corporate-level testing will be the most challenging, and will include testing for ‘internal leakage’ and sensible decision outcomes over a wide range of projected scenarios. Investment, bonuses, and new business mix are typically the key areas to be considered.
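To make the leakage idea concrete, the sketch below shows one form such a corporate-level check might take: in every simulation and year, the movement in assets should be fully explained by the modelled cash flows. The arrays here are fabricated purely to show the mechanics; in practice they would come from the corporate model's stored output.

```python
import numpy as np

def unexplained_movement(opening, premiums, inv_return, claims,
                         expenses, closing):
    """Movement in assets not explained by the modelled cash flows."""
    return closing - (opening + premiums + inv_return - claims - expenses)

# Fabricated model output, shape (n_sims, n_years); real arrays would be
# read from the corporate model rather than generated here.
rng = np.random.default_rng(seed=3)
shape = (1000, 10)
opening = rng.uniform(90, 110, shape)
premiums = rng.uniform(5, 10, shape)
inv_return = rng.uniform(-10, 15, shape)
claims = rng.uniform(3, 8, shape)
expenses = rng.uniform(1, 2, shape)
closing = opening + premiums + inv_return - claims - expenses

gap = unexplained_movement(opening, premiums, inv_return,
                           claims, expenses, closing)
assert np.abs(gap).max() < 1e-8, "internal leakage detected"
print("no internal leakage found")
```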
Separate from the model testing, you will also need to assess the appropriateness of any ESG assumed. Standard market-consistent tests, if appropriate, can be extended to validate the effect of smoothing on guarantees on any with-profits business. For some approaches, elapsed time will be a critical factor in this stage of testing, owing to the number of simulations likely to be required.
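The most common market-consistent check is the martingale (or ‘1 = 1’) test: the average deflated value of each asset class at every horizon should reproduce its initial price, within Monte Carlo error. A minimal sketch follows, using risk-neutral lognormal scenarios in place of a real ESG and illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
n_sims, n_years, s0, r, sigma = 200_000, 10, 1.0, 0.03, 0.15

# Risk-neutral lognormal scenarios standing in for a real ESG (annual steps).
dw = rng.standard_normal((n_sims, n_years))
prices = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) + sigma * dw, axis=1))
discount = np.exp(-r * np.arange(1, n_years + 1))

# Mean deflated price at each horizon should equal s0, within 3 standard errors.
deflated = prices * discount
means = deflated.mean(axis=0)
std_errs = deflated.std(axis=0) / np.sqrt(n_sims)
for year, (m, se) in enumerate(zip(means, std_errs), start=1):
    flag = "ok" if abs(m - s0) < 3 * se else "FAIL"
    print(f"year {year:2d}: mean deflated price = {m:.4f} ({flag})")
```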
Overall, the test plan would be expected to include the following tasks:
– Test asset and liability products for individual model points against deterministic assumptions.
– Test asset and liability products for all data against deterministic assumptions.
– Test ESG calibration.
– Test asset and liability products against stochastic assumptions.
– Test the fit of model points to all data against deterministic assumptions.
– Test the corporate model against deterministic assumptions, varying scenarios (sometimes to extremes) to test the decision rules.
– Test the corporate model against stochastic assumptions.
– Test the fit of subsets of model points to subsets of all data against stochastic assumptions.
The testing structure can be designed to cover many of these steps in parallel, dramatically reducing the time required for the testing phase.
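As a flavour of how the first task in the plan might be automated, the sketch below projects a single hypothetical model point on a deterministic basis and compares it with an independently derived closed-form value. Keeping tests in this executable form also supports the re-running argued for earlier.

```python
def project_fund(premium, years, growth, charge):
    """Accumulate level annual premiums with a flat annual charge (illustrative product)."""
    fund = 0.0
    for _ in range(years):
        fund = (fund + premium) * (1.0 + growth) * (1.0 - charge)
    return fund

def test_single_model_point():
    # Independent closed-form check: with k = (1 + growth) * (1 - charge),
    # the fund after n years is premium * k * (k**n - 1) / (k - 1).
    k = 1.05 * 0.99
    expected = 1000.0 * k * (k**3 - 1.0) / (k - 1.0)
    actual = project_fund(premium=1000.0, years=3, growth=0.05, charge=0.01)
    assert abs(actual - expected) < 1e-9, f"{actual} != {expected}"

test_single_model_point()
print("single model point test passed")
```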

Interpretation, explanation, and reporting
For many actuaries, interpreting the results of stochastic models will be a new experience. There is obviously an overlap with the testing phase, as a lot of familiarity with the model will have been gained during testing. Interpreting the model will be much easier if the guarantees within products and the decision rules are broken down into their components, and the effect of each is valued step by step. Working this way, and interrogating the output of the economic model, makes it easier to build up a picture of what is happening and to assess the reasonableness of the results.
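A minimal sketch of this step-by-step approach, using illustrative risk-neutral lognormal scenarios: value the unguaranteed fund first, then add a maturity guarantee, so the incremental cost of the guarantee is isolated before further features (smoothing, decision rules) are layered on.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
s0, guarantee, r, sigma, T, n_sims = 100.0, 95.0, 0.03, 0.15, 10.0, 500_000

z = rng.standard_normal(n_sims)
s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
disc = np.exp(-r * T)

# Step 1: the fund alone; step 2: the fund with a maturity guarantee added.
value_fund = disc * s_T.mean()                           # should be close to s0
value_with_g = disc * np.maximum(s_T, guarantee).mean()
print(f"fund alone:        {value_fund:8.3f}")
print(f"with guarantee:    {value_with_g:8.3f}")
print(f"cost of guarantee: {value_with_g - value_fund:8.3f}")
```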
Explaining the results of the model to senior management, especially non-actuaries, will be difficult. Stochastic models are complex, but if the investment in developing them is to be worthwhile, and directors are to fulfil their increasingly demanding obligations, senior management must understand the results. In much the same way as your own knowledge will have been built from the bottom up, your explanations should take the same approach. In particular, the credibility of and sensitivity to the chosen economic model is likely to require considerable justification and explanation.
For most standard modelling packages there will be almost no limit to the amount and format of information available. The problem is deciding what you really want to see, understanding how to access it, and choosing the format in which to present it. These choices also determine how much information is stored from each run, which in turn affects run times and disc capacity requirements. All of this can be daunting to internal users who are still new to the system.
If you have access to experienced system users, or the system provider, the standard outputs should be relatively easy to produce. Figure 1 provides an example of standard output.
More company-specific statistics are likely to be developed over time, perhaps with some initial investment from experienced users. A popular solution is to start with relatively simple statistics, which can be developed and extended over time as management learns to interpret and to use the information. As with the projection model, the format and level of detail of your output are likely to be continually refined, so reducing the need for the full solution to be provided straight away.

Don’t reinvent wheels
Building these systems is a major investment. It could take a couple of years to develop the required capabilities, produce understandable answers, and ensure that the systems are reliable and auditable. While each company will be different, most of the problems are likely to have already been encountered and resolved by someone else. Seek out this expertise from experienced people within your organisation, contacts in other offices, consultants, or auditors. Be as pragmatic as you can.
And good luck!
