In the second of their articles on improving the Internal Model Approval Process (IMAP), Ashish Kwatra and Jennifer Khaleghy explain the implementation of successful IMAP programmes, focusing on independent validation, a key area for the regulator

Day one IMAP firms (that is, firms that received model approval from their regulator by the Solvency II implementation date of 1 January 2016) largely used external consultants to satisfy the independent validation requirement. More recently, companies have been looking to transition to in-house validation. This article offers insight into the practical implementation of the independent validation process, and discusses key areas to ensure the process is both effective and regulatory-compliant. This includes consideration of key items on the regulator's 'checklist', as well as how to make validation less resource-intensive in a business-as-usual environment.
The independent validation process and its results will be scrutinised by the regulator, with the following key items on their 'checklist':
- Independence from the capital modelling team
- Implementation of, and adherence to, a validation policy
- A complete validation, covering all aspects of the internal model
- Validation test plans with clear pre-defined pass/fail criteria
- Skilled and knowledgeable validators
- Effective challenge, reporting, escalation, remediation and monitoring
- Board's role in validation.
Ensure your validation process is independent
Demonstrating independence can be difficult when implementing an internal validation programme, particularly for small firms, where reporting lines are less distinct. Areas of good practice include:
- Ensure no reporting overlap between the capital modelling team (Line 1) and the validation team (Line 2). An example of a commonly adopted approach is to have Line 1 reporting to the CFO and Line 2 reporting to the CRO, or vice versa.
- The validation team should not be involved in any aspect of Line 1 activities (that is, model design, model development, expert judgments). For example, Line 2 can be an observer in an expert judgment panel or governance process in order to validate the effectiveness of these processes. However, active participation in these processes would undermine their independence.
- Validators should be rotated regularly: Using the same validator for the same risk area for more than two to three consecutive years may undermine independence.
- Line 1 often carries out its own testing. Line 2 can review and challenge this as part of its independent validation. In order to demonstrate independence, it would also need to set out its own independent validation tests with pre-defined pass/fail criteria.
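To illustrate what a pre-defined pass/fail criterion might look like in practice, consider the following sketch. It is purely hypothetical: the test name, benchmark value and tolerance are illustrative assumptions, not regulatory prescriptions. The point is that the criterion is fixed before the model output is inspected, so the result cannot be rationalised after the fact.

```python
# Hypothetical sketch of a Line 2 validation test with a
# pre-defined pass/fail criterion, agreed before results are seen.
# The benchmark and tolerance values are illustrative assumptions.

def reserve_risk_cov_test(model_cov: float,
                          benchmark_cov: float = 0.12,
                          tolerance: float = 0.02) -> dict:
    """Pass if the model's reserve-risk coefficient of variation
    lies within a pre-agreed tolerance of an external benchmark."""
    deviation = abs(model_cov - benchmark_cov)
    return {
        "test": "reserve_risk_cov_vs_benchmark",
        "criterion": f"|model CoV - {benchmark_cov}| <= {tolerance}",
        "deviation": round(deviation, 4),
        "result": "PASS" if deviation <= tolerance else "FAIL",
    }

print(reserve_risk_cov_test(model_cov=0.135)["result"])  # -> PASS
```

Recording the criterion string alongside the result also gives the regulator direct evidence that the threshold was set in advance.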
All internal model aspects need to be validated
- Quantitative areas are typically aligned to the structure of the internal model, broken down into different risk categories (such as reserve risk, dependencies). Aspects to be validated for these areas should include risk coverage, design, data, methodology, parameterisation, assumptions or expert judgments, model outputs and documentation.
- Qualitative areas are typically broken down into areas such as model governance, model use, model change or development, IT and systems, data and documentation. The level of effort required to validate qualitative aspects can often be underestimated. As an example, the validation of data should cover data governance (such as data policies and standards), data process and controls (data flow and transformation), data dictionary, data quality (data completeness, accuracy and appropriateness) and expert judgments related to data.
Careful readers may have spotted that data and documentation appear in both categories, making the split less clear cut. For example, documentation related to a particular risk category should be validated as part of that 'quantitative' area, to ensure a sufficient level of knowledge and better efficiency (an approach recommended by the regulator), whereas overarching aspects of documentation (for example, the documentation governance process) should be validated as a separate 'qualitative' area.
Choosing skilled and knowledgeable validators
There are three operating models for performing the validation.
External validation team: Outsourcing the validation process to an external consultancy was common practice for day one IMAP firms. Consultants have a good understanding of Solvency II requirements, industry common/best practice, industry benchmarks, and the latest regulatory focus. Importantly, one of the big advantages is that, in most cases, independence can automatically be assumed.
The obvious disadvantage is the cost. Also, consultants usually do not have daily face-to-face interaction with the firm, making the challenge or Q&A process long and inefficient.
One common misconception with this solution is that ownership and accountability of the validation function also get outsourced. This is not the case - only the validation activity is outsourced. In fact, the onus is on the company (or CRO) to drive the validation process. They should work with consultants to determine how an area should be validated, what tests should be performed, and what should go in the validation report.
Overall, external consultants can be a good solution for pre-model approval but may be costly in the long run.
Internal validation team: In addition to cost and Q&A process efficiency, internal resources have a deeper knowledge of the firm and can tailor the validation to the firm's internal model, risk profile and needs.
This solution is more viable for the longer term. However, it requires careful planning to ensure independence and the successful recruitment of skilled resources.
Combination: Having a combination of external and internal resources is becoming increasingly common. This draws on the benefits of both approaches, and facilitates the transition to internal validation. Under this approach, some risk areas are validated by consultants, while others are handled in-house; the allocation can be rotated periodically to ensure independence is not compromised over time. One issue we have noticed is that consistency in the process and reporting can be difficult to maintain.
Our personal view is that it is useful to complement an internally driven validation process with external resources (be it external consultants or experienced contractors). This brings a fresh outlook as your model evolves over time, ensuring independence and adding value.
Whichever operating model is used, the effectiveness of the validation process ultimately depends on the skills and knowledge of the validators. Appropriately skilled validators will ensure the process adds value, rather than becoming a box-ticking audit with hundreds of tests that merely confirm the existence of the required evidence instead of appraising its quality.
Do not underestimate the board's role in the validation process
The board is the ultimate owner of the validation process and results. The role of the board involves top-down validation (effective review and challenge of the internal model, including key assumptions, drivers, limitations and outputs), challenging the validation process and its results, as well as tracking remediation progress.
How many high/medium/low limitations can a firm have going into an IMAP submission?
We have seen firms going into IMAP with varying numbers of limitations. A perfectly clean validation report with no limitations is not expected. However, an attempt should be made to eliminate all high limitations and to minimise the number of medium limitations. Low limitations should not be ignored either; in particular, a view should be taken on whether they could aggregate into a medium or high finding. However many limitations you have, a live remediation plan should be in place.
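As a purely hypothetical illustration of the aggregation point, a simple tally of limitations by severity, with an escalation trigger when low limitations cluster in one area, could look like the sketch below. The threshold and the example limitations are illustrative assumptions, not a prescribed rule.

```python
# Hypothetical sketch: tally validation limitations by severity and
# flag areas where low limitations cluster enough to warrant
# escalation to a medium finding. Threshold is an assumption.
from collections import Counter

LOW_ESCALATION_THRESHOLD = 5  # illustrative, firm-specific judgment

limitations = [
    ("reserve risk", "low"), ("reserve risk", "low"),
    ("reserve risk", "low"), ("reserve risk", "low"),
    ("reserve risk", "low"), ("reserve risk", "low"),
    ("dependencies", "medium"),
]

# Count low limitations per validation area.
low_by_area = Counter(area for area, sev in limitations if sev == "low")

# Areas whose low limitations may aggregate into a medium finding.
escalate = [a for a, n in low_by_area.items()
            if n >= LOW_ESCALATION_THRESHOLD]
print(escalate)  # -> ['reserve risk']
```

Whatever form such a tally takes, the output feeds naturally into the live remediation plan mentioned above.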
How should you make the validation exercise less resource intensive?
For an IMAP submission, the regulator would normally expect to see a fully validated internal model. In a business-as-usual environment, here are some tips to make a validation exercise less resource-intensive.
- Not all the validation tests need to be performed every single year. It is up to the firm to decide and justify how often to execute a particular test or validation area. The test frequency should be clearly set out in the test plan and/or validation policy, with triggers for additional ad-hoc validation defined.
- Validation can be focused on changes made to the internal model since the previous validation. If there has been no change and the previous validation passed without limitation, there is an argument for skipping it this time around.
- Not all the validation tests need to be performed at once. For example, different validation areas can be validated in different quarters of the year to spread the work over a period of time.
- Firms with generous IT support may consider automating the validation process, so that all challenges from the validators and the corresponding responses are automatically recorded in the system. Such a tool can generate efficiencies, provide clear evidence of challenge, and support progress monitoring and reporting.
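To make the automation idea concrete, here is a hypothetical sketch of the kind of record such a tool might keep. The class names, fields and workflow states are illustrative assumptions, not a description of any real system; the point is that each challenge, its response and its status are captured as evidence.

```python
# Hypothetical sketch of a validation challenge log: each challenge
# raised by Line 2 and the Line 1 response are recorded with a
# status, giving evidence of challenge and a basis for reporting.
from dataclasses import dataclass
from datetime import date

@dataclass
class Challenge:
    area: str
    raised_by: str
    description: str
    raised_on: date
    response: str = ""
    status: str = "open"  # open -> responded -> closed

class ChallengeLog:
    def __init__(self):
        self._items = []

    def raise_challenge(self, area, raised_by, description):
        c = Challenge(area, raised_by, description, date.today())
        self._items.append(c)
        return c

    def respond(self, challenge, response):
        challenge.response = response
        challenge.status = "responded"

    def close(self, challenge):
        challenge.status = "closed"

    def open_count(self):
        # A challenge counts as outstanding until formally closed.
        return sum(1 for c in self._items if c.status != "closed")

log = ChallengeLog()
c = log.raise_challenge("reserve risk", "validator A",
                        "Justify the tail correlation assumption")
log.respond(c, "Benchmarked against an external industry study")
print(log.open_count())  # -> 1 (responded, but not yet closed)
```

Requiring an explicit close step, separate from the response, is what turns the log into auditable evidence that each challenge was resolved rather than merely answered.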
Ashish Kwatra and Jennifer Khaleghy are experienced capital actuaries, specialising in Solvency II and the implementation of IMAP programmes