The prospect of changing your modelling platform is seen as a significant task by most capital modellers. Andrew Cox and Cameron Heath show how to have a smooth ride through the process
Be it as a result of a merger, acquisition, the software no longer being supported or simply a result of sheer frustration with your current solution, change is sometimes necessary. The thought of what's involved can send even the most seasoned actuary into the kind of spin Strictly's Len Goodman would be purring over. Changing your capital model platform doesn't have to be as daunting as it seems. Let us explain why.
All is not lost
The first point to remember is that changing model doesn't mean you're starting again from scratch. A great deal of the experience, understanding and insight that you gained while building the original model will be retained and used with the new model.
To demonstrate this point, it is worth distinguishing between the model - the mathematical representation of the entity being modelled - and the implementation of that model. The latter is the piece of software used to crunch the numbers. Often, changing one of these does not involve changing the other.
An example might be turning a spreadsheet into code. Often, though, the change of implementation is made for reasons of flexibility, clarity or speed of modelling. In such cases it is likely that the switch of modelling platform will involve some changes to the underlying mathematical model. In our experience, this is often the case with dependency modelling, which is approached differently by different pieces of software. This should not be seen as a bad thing - it forces you to revisit underlying assumptions that may have been made rather hastily a long time ago.
Indeed, changing model is also an opportunity to put right all those quick fixes you've learnt to live with but that secretly bug the living daylights out of you. This can also be seen as improving the governance of the model, which helps to satisfy Solvency II requirements.
Whatever the reason for changing the platform, the parameters that you used are still as valid in your new model as they were in the old one. So all the analysis that you did, all those discussions with underwriters, all those expert-judgment picks that you made where you didn't have enough data can all be used again. As an aside, changing model is a great opportunity to review all those parameters and assumptions to make sure they are still valid (see figure 1).
Likewise, you'll know what the new model should look like - how it should be structured, which classes (and sub-classes) to split the business into, how the reinsurance programme functions, the format of the output reports needed and so on.
A journey of a thousand miles begins with a single step
When we - Barnett Waddingham and Guy Carpenter - were asked to look at the feasibility of switching models for a London Market insurer, we started by carrying out a proof of concept on one typical class of business. Our first task was to understand the current model (hopefully, but not certainly, a straightforward task if it's your own). Then we had to choose the class; we picked one that contained attritional, large and cat losses (from a vendor model), which had stand-alone reinsurance and old years. This way we could check underwriting and reserve risk at gross and net levels.
Having built the basic structure in the new platform, we then selected the existing distributions and input the parameters. It's worth remembering that many distributions have variants, or different parameterisations, and it's just as well to check you're using the same one as your old model - or at least something that is going to give a comparable answer. Likewise, if there are quick fixes or peculiarities in how your old model deals with different situations, you'll either need to replicate them or have an understanding of how you expect the results to change. For example, conversion from underwriting to accident years can be done before or after the losses have been generated; if the parameters are split before modelling, the aggregate variability will be lower unless there is 100% correlation between accident years.
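To illustrate the parameterisation point, here is a minimal sketch using hypothetical numbers: a lognormal severity assumption quoted as a mean and standard deviation of the losses themselves, converted into the (mu, sigma) of the underlying normal distribution that another platform might expect.

```python
import math

# Hypothetical severity assumption: mean and standard deviation of the
# losses themselves (one common way a lognormal is quoted).
mean, sd = 100_000.0, 50_000.0

# Another platform may instead expect the (mu, sigma) of the underlying
# normal distribution. Standard conversion between the two forms:
sigma2 = math.log(1.0 + (sd / mean) ** 2)
sigma = math.sqrt(sigma2)
mu = math.log(mean) - sigma2 / 2.0

# Round-trip check: recover the original mean and sd from (mu, sigma),
# confirming the two parameterisations describe the same distribution.
mean_check = math.exp(mu + sigma2 / 2.0)
sd_check = mean_check * math.sqrt(math.exp(sigma2) - 1.0)
```

Feeding the wrong pair into the new platform would silently shift the whole severity curve, so a round-trip check of this kind is cheap insurance.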
Talking of variability, you'll need to decide which metrics you're interested in comparing, as we did. Clearly, you'll start with the mean, and you'll no doubt be interested in the 99.5%, but you may also want to focus on other points on the curve for internal reporting purposes - we also looked at the 1-in-5 and 1-in-10 levels.
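As a sketch of how such a comparison table might be built - the lognormal parameters and sample size here are illustrative, not from any real model - the metrics above can be read off a sorted simulation output:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical simulated annual aggregate losses from one model run.
losses = sorted(random.lognormvariate(11.0, 0.8) for _ in range(10_000))

def percentile(sorted_xs, p):
    """Nearest-rank empirical percentile of an already-sorted sample."""
    k = max(0, min(len(sorted_xs) - 1, math.ceil(p * len(sorted_xs)) - 1))
    return sorted_xs[k]

# The comparison metrics discussed in the text: the mean plus the
# 1-in-5 (80%), 1-in-10 (90%) and 1-in-200 (99.5%) return periods.
metrics = {
    "mean": statistics.mean(losses),
    "1-in-5": percentile(losses, 0.80),
    "1-in-10": percentile(losses, 0.90),
    "1-in-200": percentile(losses, 0.995),
}
```

Producing the same small table from both platforms, on the same classes, gives a like-for-like basis for the reconciliation that follows.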
Having run our new model and made sure the results were in the same ballpark (made sure we hadn't fundamentally messed something up), we reconciled the differences and understood what was driving them. The differences can arise for any number of reasons - the way the parameters are used, the distribution variant chosen (many have numerous parameterisations), different approaches to inflation and the like. All models will come up with different results, but the key is whether the difference is significant. If you have taken the opportunity to change the mathematical model, you will be expecting there to be differences. Indeed, it may be necessary to replicate some of your quick fixes, in order to show that it's these changes - driven by the maths - and not the software that is driving the difference. Whatever you do, it is important to keep in mind that results do not need to be exactly the same to be considered statistically equivalent - all outputs of models are estimators with an associated error.
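One simple way to put a number on that estimation error - sketched here with an illustrative toy model, not any particular platform - is to repeat the simulation with different seeds and look at the spread of the resulting 99.5% estimates:

```python
import random
import statistics

def simulate_var995(seed, n=20_000):
    """One run of a toy model: the 99.5th percentile of simulated
    annual losses (illustrative lognormal assumptions only)."""
    rng = random.Random(seed)
    xs = sorted(rng.lognormvariate(11.0, 0.8) for _ in range(n))
    return xs[int(0.995 * n) - 1]

# Repeated independent runs reveal the simulation error in the estimator.
estimates = [simulate_var995(seed) for seed in range(20)]
mean_est = statistics.mean(estimates)
se = statistics.stdev(estimates)
```

If the two platforms' 99.5% figures differ by less than a couple of these standard errors, the gap is arguably simulation noise rather than a genuine modelling difference - which is exactly the "statistically equivalent" point made above.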
Buy one get one free
The proof of concept process is similar to another trend we are observing: companies supporting multiple platforms. This can be to leverage the relative strengths of different pieces of software. For example, using one or more platform(s) to analyse catastrophe risk, the results of which are fed into another where other risks are modelled; alternatively, using a second model as a validation of the primary one. Clearly, this is only part of the full validation process, but it can certainly increase users' confidence in the model.
This validation approach is especially useful when it covers areas that the two platforms approach differently - as mentioned above, dependency modelling is a common example of this. Different platforms approach interactions in different ways - some using correlation matrices and others using underlying drivers.
So, by reconciling the results of multiple classes separately and together, you can also better understand the impact of the hidden assumptions forced on you by your choice of software platform - assumptions that are impossible to stress from within that modelling paradigm.
A final thought
As you can see, there's plenty to do in order to change your modelling platform, but the challenge is not insurmountable and there are additional benefits of going through the process.
We couldn't talk about capital models without mentioning Solvency II. It may have taken a backseat of late, but it will soon be back, 1 January 2016 to be precise. If you are not ready by then, it will be more frightening than a Halloween episode of The Simpsons, repeated on Friday 13th. So it's worth thinking about this thorny issue sooner rather than later.