Insurers continue to experience pressure to produce audit-quality financial statements to ever-tighter timescales, and Solvency II reporting will not be immune to this. Gabi Baumgartner and Tejas Nandurdikar share some advice to speed up the process
As there is reasonable clarity on the reporting requirements under Solvency II, some firms have prepared by investing in technology, particularly parallel or cloud computing. This means that machine runtime can be reduced to a small proportion of the usual elapsed time of a reporting process.
The more challenging improvements in process involve people, particularly when judgment or problem-solving are involved. These steps can be hard to predict, and this is compounded when the calculations and subject matter are complex, such as in the use of economic scenario generators to value guarantees embedded in insurance contracts.
However, a number of approaches exist that can mitigate the human bottlenecks in financial and solvency capital reporting.
Automate mechanical tasks
Solvency II reporting processes, although in their relative infancy, may grow organically, with many manual adjustments and workarounds accumulating over the years. Automating these processes - from data collection to results production, including test outcomes - removes process bottlenecks and reduces the scope for transcription errors.
In other cases, reliance on the timing of key inputs can be reduced through automation. For example, it is worthwhile implementing the Smith-Wilson algorithm to derive the Solvency II yield curves directly from source market data, reducing the dependency on EIOPA's release of the curves.
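To illustrate, here is a minimal sketch of a Smith-Wilson fit to zero-coupon bond prices, using the standard parameterisation of the method. The market data, ultimate forward rate (UFR) and convergence speed alpha shown are purely illustrative, not prescribed values.

```python
import math

def smith_wilson_fit(maturities, prices, ufr=0.036, alpha=0.128):
    """Fit Smith-Wilson to observed zero-coupon bond prices.
    Returns a function price(t) giving the fitted discount factor."""
    omega = math.log(1.0 + ufr)

    def W(t, u):
        # Smith-Wilson kernel: exp(-omega(t+u)) * (alpha*min - exp(-alpha*max)*sinh(alpha*min))
        lo, hi = min(t, u), max(t, u)
        return math.exp(-omega * (t + u)) * (
            alpha * lo - 0.5 * math.exp(-alpha * hi)
            * (math.exp(alpha * lo) - math.exp(-alpha * lo)))

    n = len(maturities)
    # Solve the linear system W zeta = m - mu by Gaussian elimination.
    A = [[W(maturities[i], maturities[j]) for j in range(n)] for i in range(n)]
    b = [prices[i] - math.exp(-omega * maturities[i]) for i in range(n)]
    for k in range(n):                      # forward elimination with partial pivoting
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    zeta = [0.0] * n
    for k in range(n - 1, -1, -1):          # back substitution
        zeta[k] = (b[k] - sum(A[k][c] * zeta[c]
                              for c in range(k + 1, n))) / A[k][k]

    def price(t):
        return math.exp(-omega * t) + sum(
            zeta[j] * W(t, maturities[j]) for j in range(n))
    return price

# Illustrative market data: maturities (years) and zero-coupon bond prices.
mats = [1.0, 2.0, 3.0, 5.0, 10.0]
px = [0.99, 0.975, 0.955, 0.915, 0.80]
curve = smith_wilson_fit(mats, px)
spot_20y = -math.log(curve(20.0)) / 20.0    # extrapolated 20-year spot rate
```

By construction the fitted curve reprices the input instruments exactly, and beyond the last liquid point it extrapolates towards the UFR at a speed governed by alpha.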
Care is needed, however, as automation can introduce new errors. Good practice is to incorporate informal checks - even if they are only on the 'copy-pasting' of data - and to capture these intermediate checks in the automated process, too. More computing power can deliver a tremendous gain in processing time, as long as enough checks are in place.
Be clear about tolerances
Tolerance bands are commonly used to monitor the outcome of modelled variables against a specified range. For example, output yield curves and option prices should replicate market data within a given tolerance, and martingale tests for market consistency should not come out significantly different from unity. Tolerance bands can be derived in two different ways: as statistical tolerances or as accounting tolerances.
The statistical approach measures test results relative to the sampling error expected from a certain number of scenarios. If you run more scenarios, then the tolerance bands narrow.
If you run too few scenarios then the bands are wide and the test lacks power, so that even if the model is wrong you may not detect it.
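The narrowing of statistical tolerance bands with scenario count can be seen in a simple martingale test: under a risk-neutral model, discounted asset values should average to the starting value, within a band driven by sampling error. The sketch below is illustrative only - the lognormal model, volatility and rate are hypothetical, not a production test.

```python
import math
import random

def martingale_band(n_scenarios, sigma=0.2, rate=0.02, horizon=1.0, seed=42):
    """Simulate discounted terminal asset values under a risk-neutral
    lognormal model (initial value 1.0). Returns the sample mean and the
    half-width of a 95% confidence band around it."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_scenarios):
        z = rng.gauss(0.0, 1.0)
        s_t = math.exp((rate - 0.5 * sigma ** 2) * horizon
                       + sigma * math.sqrt(horizon) * z)
        vals.append(math.exp(-rate * horizon) * s_t)   # discounted payoff
    mean = sum(vals) / n_scenarios
    var = sum((v - mean) ** 2 for v in vals) / (n_scenarios - 1)
    half_width = 1.96 * math.sqrt(var / n_scenarios)   # shrinks like 1/sqrt(N)
    return mean, half_width

mean_1k, band_1k = martingale_band(1_000)
mean_10k, band_10k = martingale_band(10_000)
```

Running ten times as many scenarios narrows the band by a factor of roughly the square root of ten - which is exactly why a band calibrated to one scenario count is misleading at another.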
In contrast, accounting tolerances relate to the size of any error relative to what is being measured, and the impact of any error on decisions. The accounting materiality threshold will include many possible sources of error besides statistical sampling. Acceptable tolerances are not affected by the number of scenarios, although the ability to comply with the tolerances should improve as the scenario count increases.
It is good practice to derive tolerances based on the test purpose and the reporting framework, and to be clear as to why each tolerance is needed. Market movements will often affect the volatility of the variables being modelled. With more volatile market conditions, more scenarios need to be run in order to maintain the same level of tolerance as under benign market conditions. The relevant tolerance - statistical or accounting - may vary according to the number of scenarios run, as Figure 1 (below) shows.
Using inappropriate tolerance levels will often lead to false red flags being raised after a model run, which adds to processing time as models are re-run.
To minimise these delays, tolerances must be set with care: rather than blindly reusing last period's levels, or those used by peers, they should be linked to accounting materiality and objective sampling error.
It is worth noting that some fails are to be expected from statistical tests. This is due to the construction of the tolerance bands, referred to as confidence intervals in statistical textbooks and typically stated at the 95% confidence level. At that level, the true answer falls within the interval 95% of the time if the calculation is repeated with fresh samples - so around one test in 20 will fail by chance alone. It is therefore important to consider whether a fail is likely to be the result of sampling error or a genuine problem with the model itself.
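The expected number of chance fails is easy to quantify. A short sketch, assuming for simplicity that the tests are independent:

```python
def expected_chance_fails(n_tests, confidence=0.95):
    """For independent tests with bands at the given confidence level,
    return the expected number of fails and the probability of at least
    one fail, even when the model is correct."""
    p_fail = 1.0 - confidence
    expected = n_tests * p_fail            # mean of a Binomial(n, p_fail)
    p_at_least_one = 1.0 - confidence ** n_tests
    return expected, p_at_least_one

exp_fails, p_any = expected_chance_fails(20)
# With 20 tests at the 95% level, one chance fail is expected on average,
# and the probability of at least one fail is about 64%.
```

A run with a handful of marginal fails out of dozens of tests may therefore be entirely consistent with a correct model; a pattern of fails concentrated in one area is the stronger signal.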
Use reliable algorithms
Complex algorithms are used to automate and industrialise the production of solvency capital numbers and financial reports, and these algorithms are likely to have weak points that need to be fully understood. The most difficult algorithms involve optimisation or the solution of simultaneous equations. For example, economic scenarios used for the best-estimate liabilities under Solvency II may be calibrated to replicate interest rate and equity derivative prices. Solutions may or may not exist, and may not be unique.
Algorithms may fail even when a solution exists, or may report false solutions. This is more of a problem with complicated models, and it is often best to split a model into pieces and only try to solve equations in one or two variables at a time. Running legacy algorithms year on year without appropriate testing increases the chance of the algorithm failing.
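The advantage of solving for one variable at a time can be illustrated with a deliberately simplified calibration: recovering a single Black-Scholes volatility from one option price by bisection. The model, inputs and parameter values are hypothetical; the point is that a one-dimensional, bracketed solve cannot report a false solution, provided the target lies inside the bracket.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, expiry):
    """Black-Scholes price of a European call."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry) \
         / (vol * math.sqrt(expiry))
    d2 = d1 - vol * math.sqrt(expiry)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * expiry) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, expiry, lo=1e-4, hi=2.0, tol=1e-8):
    """Bisection in one variable: the call price is monotone increasing in
    vol, so the bracket [lo, hi] shrinks reliably towards the unique root."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, mid, expiry) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

target = bs_call(100.0, 105.0, 0.02, 0.25, 1.0)
vol = implied_vol(target, 100.0, 105.0, 0.02, 1.0)   # recovers ~0.25
```

A multi-parameter calibration attempted in one joint optimisation offers no such guarantee; decomposing it into a sequence of bracketed one-variable solves, where the model structure allows, trades some elegance for robustness.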
A robust testing environment has to be in place before the model is run. Investing in making the algorithms reliable and fit for purpose before the main production process will go a long way towards reducing delays when the model is run.
Anticipate social and commercial constraints
There are often social and commercial constraints associated with reporting financial information. Such constraints need to be recognised and legitimised within the working environment. Computer code can be sped up, but it is a different matter to accelerate human judgments and negotiations. Reviewers will consider not only technical matters but also whether a particular result is likely to be acceptable from commercial or regulatory perspectives, sometimes with an eye on anticipated peer behaviour. This is a sensitive area that needs to be addressed in order to make the reporting process more transparent and, as a result, faster.
There will always be constraints that cannot be automated. Appropriate tolerance levels and triggers may help, however, in reducing human intervention. The aim is to direct human effort towards areas where expert judgment is required and to automate other areas of the process.
If it still goes wrong
Good preparation reduces the chance of something going awry. Some risk remains, however, and it is important to have a plan B. This must consider remote contingencies and practical and reasonable workarounds. Having a contingency framework agreed by management will help with reducing delays in responding if something goes wrong, recognising that in some, albeit extreme, circumstances delaying the publication of results may be the 'least-bad' option.
Overall, more investment is required upfront to set up systems capable of producing the accurate and timely reports the business requires.
It is through such investment that problems may be anticipated early, and it may be possible to automate solutions for these. It should be clear where human judgment is needed, and rigorous testing should be carried out to make other areas of the process as independent as possible.
A realistic plan B can help reduce costs and delays in the event of something going wrong. Raising and promptly addressing some specific and difficult questions such as the following will be helpful.
- Is the business comfortable with the way tolerances are set?
- When was the last time that tolerances were refreshed?
- Is the business driven by statistical or accounting considerations?
- Where does the business feel controls are too weak, or, indeed, too strong?
- Where are the known problems that everyone is postponing grasping?
- If the business had the budget, what is the first process improvement it would make?
These could certainly be a starting point towards an even faster Solvency II close.
Gabi Baumgartner is a senior manager at Deloitte. She runs the team responsible for Deloitte's economic scenario generator software.
Tejas Nandurdikar is a senior consultant in Deloitte's actuarial and advanced analytics practice.