Model development and upgrades cost time and money. Errors in insurance companies' models can be costly to remediate, in both money and time, and can result in harmful effects such as under- or over-reserving, financial losses, reporting errors or missed opportunities.
How well built is your model?
A number of tests can be employed to understand how well your model is built against the development requirements (functional and non-functional), which should be clearly defined at the outset, including:
- Path testing to check that the model correctly performs the calculations across the various routes through the code.
- Diagnostic tests to ensure that the code is efficient and does not cause undue bottlenecks when the model is run.
- Integration tests where components are combined and tested as a group to ensure integrity of the code.
- Performance tests to evaluate the performance of the model against specified performance requirements e.g. runtime, CPU efficiency, RAM usage, disk usage.
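Path testing can be sketched with a toy example. Here the function, its reserving rule and the 20% margin are all illustrative assumptions, not taken from any real model; the point is that each route through the code gets its own test case.

```python
# Path testing sketch: one test case per route through a
# hypothetical claim-reserve calculation. All names and
# thresholds are illustrative.

def claim_reserve(claim_amount: float, is_reopened: bool) -> float:
    """Toy reserving rule with three code paths."""
    if claim_amount < 0:
        raise ValueError("claim amount cannot be negative")
    if is_reopened:
        # Assumed rule: reopened claims carry a 20% prudence margin.
        return claim_amount * 1.20
    return claim_amount

# Standard path.
assert claim_reserve(1000.0, is_reopened=False) == 1000.0
# Reopened path.
assert claim_reserve(1000.0, is_reopened=True) == 1200.0
# Error path: negative amounts must be rejected.
try:
    claim_reserve(-5.0, is_reopened=False)
except ValueError:
    pass
else:
    raise AssertionError("negative amounts should be rejected")
```

A coverage tool can then confirm that the test suite actually exercises every branch rather than just the happy path.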
How robust is your model?
As well as testing that your model performs well against the requirements it is important that a model is built in a robust manner. There are a number of tests that can be conducted in this area, including:
- Error-handling tests to check that the model handles run failures correctly, e.g. by running through erroneous inputs that are expected to break the model.
- Stress tests to test that the model can handle extreme values (small and large).
- Destructive testing to deliberately try and break the model. For example, checking that User Access Control (UAC) is sufficient to prevent unauthorised users from making changes to the model or accessing parts they should not.
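The error-handling and stress tests above can be combined in a short harness. The `loss_ratio` function and its guards are a hypothetical stand-in for a model calculation, used only to show the pattern: deliberately feed in broken inputs and expect a controlled failure, then feed in extreme but valid inputs and expect a sensible answer.

```python
import math

def loss_ratio(claims: float, premiums: float) -> float:
    """Toy calculation with explicit input guards (illustrative only)."""
    if premiums <= 0:
        raise ValueError("premiums must be positive")
    if not (math.isfinite(claims) and math.isfinite(premiums)):
        raise ValueError("inputs must be finite")
    return claims / premiums

# Error-handling: inputs expected to break an unguarded model
# should instead raise a controlled, informative error.
for bad in [(100.0, 0.0), (float("nan"), 50.0), (float("inf"), 50.0)]:
    try:
        loss_ratio(*bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad} should have been rejected")

# Stress testing: extreme but valid values (very small and very
# large) should still produce finite, correct results.
assert loss_ratio(1e-12, 1e12) == 1e-24
assert math.isfinite(loss_ratio(1e12, 1e-12))
```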
How accurate is your model?
When developing a model, it is important that it performs the calculations accurately, as set out in the model specifications. A number of tests can be employed in this area, including:
- Unit testing in which individual units of code are tested using sample data to determine if they are fit for use.
- Regression testing, which provides a thorough comparison of model output against previously signed-off results, giving users confidence that developments have been implemented correctly and have an impact on results only where expected.
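A minimal regression check might look like the following. The baseline figures, lines of business and `run_model` stub are all invented for illustration; in practice the baseline would come from the last signed-off production run.

```python
# Regression testing sketch: compare current model output against
# previously signed-off results, line of business by line of business.
# All figures are illustrative.

SIGNED_OFF_BASELINE = {"motor": 1250.0, "property": 980.0, "liability": 2100.0}

def run_model() -> dict:
    """Stand-in for the real model run (returns reserves by line)."""
    return {"motor": 1250.0, "property": 980.0, "liability": 2100.0}

def regression_check(current: dict, baseline: dict,
                     tolerance: float = 1e-9) -> dict:
    """Return (baseline, current) pairs that moved beyond tolerance."""
    return {
        lob: (baseline[lob], current[lob])
        for lob in baseline
        if abs(current[lob] - baseline[lob]) > tolerance
    }

unexpected = regression_check(run_model(), SIGNED_OFF_BASELINE)
assert unexpected == {}, f"unexplained movements: {unexpected}"
```

Any movement flagged by the check should either be explained by the development being tested or investigated as a potential defect before sign-off.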
Does your model meet requirements of the end users?
Models should be tested to ensure that they meet the requirements of their end users. There are a number of ways this can be tested, including:
- Scope testing to determine if a model upgrade or development embraces everything set out during the scoping/design phase.
- Compatibility tests to ensure that the development functions properly within production processes and hardware environments.
- Usability tests to ensure that end users can easily operate the model, prepare its inputs and interpret its outputs.
Best practice guidelines for tests
There are a number of useful guidelines that can be followed for testing. These practices should be documented to ensure consistency across model developments. Example guidelines include:
- Tests should be peer reviewed and signed off by the appropriate stakeholders.
- The test team should be separate from the design team.
- Test suites should be developed independently of the development team.
- Clear procedures and processes for when a test fails, and for how failures feed back into the development cycle.
- Audit trails on testing conducted e.g. audit reports, compliance with any frameworks and standards that are in place.
- Clear acceptance and sign-off processes for the development to move models from development to production.
Aditi Parekh is a qualified actuary working as a senior consultant at Deloitte. Andy Crichton is a senior consultant in Deloitte's Actuarial Modelling Centre (AMC). Marc Fakkel is a partner at Deloitte and leads the AMC.