Ralph Baxter looks at the understated risks inherent in spreadsheets, and demonstrates how they might be tackled.

For many years the big stick of Solvency II has been waved at insurance firms. After spending many hundreds of millions of pounds and euros, the reaction from many participants has been something like: 'What has Solvency II done for us?'
Guideline 48 of EIOPA's final report on consultation 14/017 on guidelines on systems of governance states that "Where relevant, the actuarial function should provide recommendations on internal procedures to improve data quality".
In this context, the reference to 'data quality' is particularly relevant for data held in spreadsheets. As with EIOPA, the narrative from other regulators worldwide is increasingly focused on spreadsheets as primary carriers of data - and therefore fundamental to the debate about data quality.
For instance, the new definition of model risk emerging from the US, together with work by the PRA in the UK, seeks to include data risk within model risk (see SR 11-7, the Fed/OCC guidance on model risk management). Models are now being regulated and governed as assemblies of multiple components: data, processing and reporting. Under previous guidance only the processing kernel was the object of focus; that is no longer the case.
This is what we want Solvency II to do for us
Given Guideline 48, there is now a regulator-specified opportunity for actuaries to say 'this is what we want Solvency II to do for us'. So what might be on the wish list?
The key things to address are the day-to-day issues that absorb a lot of manual and administrative effort - and you can't get closer to home than the actuarial workhorse: the Excel spreadsheet. Research by ClusterSeven has shown that about half of actuaries and financial professionals working at insurers use spreadsheets more than any other software application.
Excel is a brilliant tool, but it involves a lot of housekeeping to keep it in line. In terms of EIOPA's reference under Guideline 48, could the systemic use of Excel be one of the more significant issues for actuaries to provide recommendations on?
From discussions with actuaries, it is clear that there are a number of commonplace issues and challenges with Excel, particularly the drain on the time and resources required to check that data is correct and valid. Seven particularly common concerns are:
• Making sure that all your data feeds have been updated as expected, e.g. that no stale or out-of-tolerance data values have come through to your file;
• More specifically, that all the source spreadsheets on which yours depends have been updated as expected, without you having to chase around and ask the other owners;
• Ensuring that no one else has changed your spreadsheet while you were away - or, if they did, that there is a reliable record of what they have done. This becomes particularly relevant given that the same research showed that nearly half of respondents said that between four and 10+ people could change critical spreadsheets;
• Making sure that there are no embarrassing error cells tucked away in some of the tables you are about to send out (a simple automated scan, sketched below, can help here);
• Ensuring that changes made by modelling teams to your data are easy to reconcile, rather than the subject of endless conversations;
• Finding easier ways to extract the numbers you need from input spreadsheets, e.g. from the finance department, without hunting through 10MB of irrelevant cells;
• Making it easier for IT to extract the numbers they need from some of your spreadsheet submissions, without them complaining that you've broken one of their data extraction processes.
The above are just some of the regular 'pain points' encountered by many actuaries in the course of a normal working day.
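Several of these checks lend themselves to simple automation. As a minimal sketch of the idea, the Python fragment below uses the openpyxl library to scan a workbook for Excel error values and for cells outside stated tolerance bounds before the file goes out. The file name, sheet reference and bounds are hypothetical, and this is an illustration of the principle rather than any particular product's implementation.

```python
# Minimal sketch: pre-send scan for error cells and out-of-tolerance values.
from openpyxl import load_workbook

EXCEL_ERRORS = {"#DIV/0!", "#N/A", "#NAME?", "#NULL!", "#NUM!", "#REF!", "#VALUE!"}

def scan_workbook(path, tolerance_checks=None):
    """Report error cells and out-of-tolerance values in a workbook.

    tolerance_checks maps 'Sheet!Cell' references to (low, high) bounds;
    cached values outside their bounds are flagged as suspect or stale.
    """
    wb = load_workbook(path, data_only=True)  # data_only reads last-saved values
    findings = []
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if isinstance(cell.value, str) and cell.value in EXCEL_ERRORS:
                    findings.append((f"{ws.title}!{cell.coordinate}", "error", cell.value))
    for ref, (low, high) in (tolerance_checks or {}).items():
        sheet, coord = ref.split("!")
        value = wb[sheet][coord].value
        if not isinstance(value, (int, float)) or not (low <= value <= high):
            findings.append((ref, "out-of-tolerance", value))
    return findings

# Hypothetical usage: flag anything odd in this month's reserves file.
for ref, kind, value in scan_workbook("reserves_month_end.xlsx",
                                      {"Summary!B12": (0, 5_000_000)}):
    print(f"{ref}: {kind} ({value})")
```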
Thinking beyond checking
But let's think beyond just checking. What sorts of things are current-generation spreadsheets really bad at? High on the list is the handling of time series: each new version of a spreadsheet is like a still picture of a dynamic process, whether you are changing the input parameters to a series of model runs or looking back over previous month-end submissions.
Recent versions of Excel have incorporated the sparkline, a miniature chart in a cell that illustrates how a value changes with differing inputs. The most valuable changing input in business is time (e.g. reserves yesterday compared with reserves today). The problem is that Excel has no knowledge of yesterday's value unless the spreadsheet author has deliberately retained it in the spreadsheet. Imagine if all the past values of spreadsheet cells were automatically available on demand, without the spreadsheet owner specifically capturing them. What if you could join spreadsheets together to create the 'movie' version of all those changing cell values, in the same way that still frames become cinema? You could fast-forward or rewind through data values and compare these trends with others from the same spreadsheet, or even a completely different sequence of spreadsheets. One could certainly spot anomalous behaviour much faster, and gain further insight into the data.
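To make the 'movie' idea concrete, here is a minimal sketch that reconstructs the history of a single cell across a sequence of saved month-end workbooks and flags unusually large moves. The folder layout, file-naming convention, cell reference and 10% threshold are all hypothetical.

```python
# Minimal sketch: replay one cell's value across saved workbook versions.
from pathlib import Path
from openpyxl import load_workbook

def cell_history(folder, pattern, sheet, coord):
    """Collect (filename, value) pairs for one cell across workbook versions,
    sorted by filename (assumed to encode the period, e.g. reserves_2015_06.xlsx)."""
    history = []
    for path in sorted(Path(folder).glob(pattern)):
        wb = load_workbook(path, data_only=True)  # cached values, not formulas
        history.append((path.name, wb[sheet][coord].value))
    return history

def flag_jumps(history, threshold=0.10):
    """Flag period-on-period moves larger than `threshold` - a crude stand-in
    for spotting anomalous behaviour when 'rewinding' the time series."""
    flags = []
    for (_, prev), (name, curr) in zip(history, history[1:]):
        if isinstance(prev, (int, float)) and isinstance(curr, (int, float)) and prev:
            change = (curr - prev) / abs(prev)
            if abs(change) > threshold:
                flags.append((name, prev, curr, change))
    return flags

# Hypothetical usage: replay one reserves cell across the stored month ends.
history = cell_history("month_ends", "reserves_*.xlsx", "Summary", "B12")
for name, prev, curr, change in flag_jumps(history):
    print(f"{name}: {prev} -> {curr} ({change:+.1%})")
```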
A second example: a common piece of actuarial workflow is the completion of actuarial 'triangles' that position forecast and actual claims against time periods. The preparation of each new triangle requires that changes from the previous triangle can be explained, and that anomalous changes, e.g. due to stale data, can be rectified. Being able to roll a triangle forward and backward in time, with capabilities such as heat-map colour coding for large value changes, would give actuaries a much clearer view of how the triangle was developing than opening two triangles side by side and playing 'spot the difference'.
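A minimal sketch of that comparison, using pandas: the two triangles below are hypothetical claims triangles (origin year by development period) from consecutive month ends, and the banding thresholds standing in for heat-map colours are illustrative only.

```python
# Minimal sketch: diff two versions of a claims triangle and band the moves.
import pandas as pd

def triangle_diff(prev: pd.DataFrame, curr: pd.DataFrame) -> pd.DataFrame:
    """Percentage change per cell between two versions of the same triangle."""
    return (curr - prev) / prev.abs()

def heat_band(change):
    """Crude text 'heat map': the bigger the move, the louder the marker."""
    if pd.isna(change):
        return "   .  "            # empty or newly populated cell
    if abs(change) > 0.10:
        return f"!{change:+.0%}"   # large move - investigate (stale data?)
    if abs(change) > 0.02:
        return f"~{change:+.0%}"   # moderate move - worth a glance
    return f" {change:+.0%}"       # small move - expected development

# Hypothetical triangles: rows are origin years, columns development periods.
prev = pd.DataFrame([[100, 150, 170], [110, 160, None], [120, None, None]],
                    index=[2013, 2014, 2015], columns=[1, 2, 3])
curr = pd.DataFrame([[100, 150, 171], [110, 185, None], [121, None, None]],
                    index=[2013, 2014, 2015], columns=[1, 2, 3])

print(triangle_diff(prev, curr).applymap(heat_band))
```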
Taking charge of your own destiny
Spreadsheets remain fundamental to the actuarial function, and therefore central to the regulatory focus on 'data quality'. The good news is that EIOPA wants your ideas, as actuaries, in the quest to improve data quality. Hopefully this article provides some food for thought.
If the ideas above ring true, then the further good news is that all this capability is available right now. It was originally built for quants in investment banking who faced many of the same issues: it was used to see which values had been updated across input ranges covering tens of thousands of cells, where a manual 'spot the difference' was nearly impossible. These capabilities would keep the auditors and regulators happy, but the real question is: what can they do for you?
Ralph Baxter is CEO of ClusterSeven, an international firm specialising in spreadsheet and database governance and audit. Prior to this, he spent 20 years in the energy sector. He is a former committee member of ISSIG, the information security section of the Institute of Internal Auditors.