
Internal risk models now and in the future

Actuaries claim modelling as one of their core skills. Within the past two or three years in the UK there has been significant investment by the life and non-life insurance industries in stochastic ‘actuarial’ models to support realistic liability calculation (for most life companies) and assessment of capital adequacy. This is a welcome development for actuaries, consultants, and software vendors, and probably for UK firms. This article draws out some of the questions and challenges that are likely to influence model development in future.

How much capital or how much risk?
General Eisenhower famously said ‘a plan is nothing, but planning is everything’. Crucial to sound planning is matching resources to ambitions, or as Paul Sharma of the FSA once put it, ‘you would not open a sweetshop without thinking whether you were likely to have enough money’. It is therefore perhaps surprising that until relatively recently many financial firms implicitly relied on regulatory capital adequacy measures. Or perhaps it is not so surprising, in that both banking and insurance were very heavily regulated until late in the last century.
The abandonment of price tariffs, pressures to improve transparency, and the growing power of the capital markets have meant that financial firms have generally had to address the concept of economic capital: broadly, the amount of capital required to sustain a given level of risk with a given level of confidence over a specified time horizon. Often this is more usefully expressed as ‘what are the limits we should have on the risks we take given the capital resources available to us?’
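As a minimal numerical sketch of that definition (the lognormal loss distribution, its parameters, and the 99.5% confidence level below are all invented for illustration, not drawn from any firm's model), economic capital can be read off a simulated one-year loss distribution as the gap between a high quantile and the expected loss:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical one-year aggregate loss distribution; the lognormal shape
# and its parameters are illustrative assumptions, not calibrated values.
losses = rng.lognormal(mean=10.0, sigma=1.2, size=100_000)

confidence = 0.995  # e.g. a 1-in-200-year standard over a one-year horizon
value_at_risk = np.quantile(losses, confidence)

# Economic capital as the buffer above the expected loss.
economic_capital = value_at_risk - losses.mean()

print(f"99.5% VaR:        {value_at_risk:,.0f}")
print(f"Economic capital: {economic_capital:,.0f}")
```

Turning the question round, as the article suggests, means fixing the available capital and solving for the risk limits instead, but the quantile machinery is the same.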
Leading investment and commercial banks, followed increasingly rapidly by reinsurers and the larger and multinational direct insurers, have established and are continually refining models which guide them towards answers to these questions. I emphasise ‘guide towards’ because, in the words of Donald Rumsfeld, ‘We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns: the ones we don’t know we don’t know’.

Dialogue with supervisors
Banking supervisors working in the context of the Basel process have taken the view that they should define capital adequacy for purposes of consumer protection and public confidence by reference to something similar to economic capital assessment. Specifically they have defined advanced modelling processes for each of market, credit, and operational risks. Interestingly, the US supervisors will positively require that the largest internationally operating banks make use of the most advanced approaches possible.
Our own Financial Services Authority (FSA) espouses a similar approach in its requirements for individual capital assessment, in that it regards capital assessment as the responsibility of the firm, while reserving the right, as it does for UK banks, to require additional amounts to cover risks such as management and/or control deficiencies. The FSA, together with some other European supervisors, is strongly committed to seeing internal modelling of risk and capital requirements as an option for firms under the Solvency II regime expected to take effect from approximately 2012 onwards.

Risk and uncertainty: limitations of modelling
Enthusiasm for risk modelling can blind users (not least actuaries) to built-in limitations. Many of the underlying factors in risk models are representations of human behaviour, and Robert Shiller has demonstrated the essential unpredictability of behaviour and attitudes in books including Irrational Exuberance.
Even where one is dealing with more natural phenomena, there are pitfalls for the unwary. For example, climate change has unknowable implications for natural hazard risks. There are very many catastrophic phenomena for which there are no data: television has hypothesised a supervolcano eruption, and the risk of a tsunami in the Atlantic is unfortunately not zero.
These are reasons to interpret confidence thresholds with approximation and caution: it is better to calibrate by reference to ‘where the data are’, including data on past ‘irrational’ shocks, than to seek to identify a mythical 1-in-200-year scenario.

Differences between banking and insurance
The limitations of modelling loom larger in the insurance context, partly because insurance is often concerned with longer-term influences and operates against a less stable ‘cultural’ background than banking.
For example, most market risk in the banking context arises from very short-term influences, typically measured over days. This means there are plenty of data, although it is still possible to be misled, LTCM-style.
Similarly, while credit default is frowned on to some degree in most jurisdictions, the same stigma does not attach to insurance claimants. The US tort system continues to stimulate unpredictable growth in claim numbers.
A recent OECD report argued that plausible claim amounts arising out of possible terrorist incidents could bankrupt the global private insurance sector.
These are not arguments that modelling should not be part of risk management, only that it is a starting point rather than a conclusion. All models are wrong, but nonetheless some models are useful!

Data and model-pointing
Even if we know the future will be different, the starting point of effective modelling is to understand past history and present exposures.
Thanks to steady falls in the cost of storing data, this is much less of a limitation than it once was. It usually makes sense to store data at the greatest level of granularity, with the option of aggregating it for the purposes of modelling calculations. Data storage should be independent of calculation logic, since the latter is likely to be subject to change.
Model-pointing, the grouping of similar policies into representative records, can be a challenge in the modern context, and tests will have to be carried out to confirm that the grouped data are fairly representative of the disaggregated portfolio for the particular purpose of the model.
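A minimal sketch of the grouping step and a first representativeness check, assuming hypothetical rating factors and policy records held in a pandas data frame (none of these specifics come from the article):

```python
import pandas as pd

# Illustrative policy-level records; in practice these live in the granular
# data store, kept separate from any calculation logic.
policies = pd.DataFrame({
    "age_band":    ["30-39", "30-39", "40-49", "40-49", "40-49"],
    "product":     ["term", "term", "term", "annuity", "annuity"],
    "sum_assured": [100_000, 120_000, 90_000, 50_000, 60_000],
})

# Model-pointing: collapse policies sharing the same rating factors into
# one representative record per group.
model_points = (
    policies.groupby(["age_band", "product"], as_index=False)
    .agg(total_sum_assured=("sum_assured", "sum"),
         policy_count=("sum_assured", "size"))
)

# A first representativeness check: total exposure must be preserved.
# Fuller tests would compare model outputs on grouped vs ungrouped data.
assert model_points["total_sum_assured"].sum() == policies["sum_assured"].sum()
print(model_points)
```

Checking that aggregates reconcile is necessary but not sufficient; whether the grouping is ‘fair’ depends on the particular purpose of the model, as the text notes.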

Best practice model management
The model for this purpose is the overarching model of the firm, which will typically be made up of a series of constituent models, usually corresponding to either particular risk drivers or particular lines of business or both (see figure 1). Typically these constituent models will vary in sophistication, subject to fitness for purpose.
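As a hedged sketch of how constituent results might be combined into the overarching figure (the risk categories, standalone amounts, and correlation assumptions below are all invented for illustration), a simple variance-covariance aggregation looks like this:

```python
import numpy as np

# Standalone capital from each constituent model (illustrative figures).
risks = ["market", "credit", "insurance", "operational"]
standalone = np.array([120.0, 80.0, 150.0, 40.0])

# Assumed correlations between risk drivers (symmetric, unit diagonal).
corr = np.array([
    [1.00, 0.50, 0.25, 0.25],
    [0.50, 1.00, 0.25, 0.25],
    [0.25, 0.25, 1.00, 0.25],
    [0.25, 0.25, 0.25, 1.00],
])

for name, cap in zip(risks, standalone):
    print(f"{name:12s} standalone capital: {cap:6.0f}")

# Diversified capital: sqrt(c' R c); never more than the simple sum.
diversified = float(np.sqrt(standalone @ corr @ standalone))
print(f"Sum of standalone capital: {standalone.sum():.0f}")
print(f"Diversified total:         {diversified:.0f}")
```

More sophisticated firms replace this closed-form step with joint simulation of the constituent models, but the correlation-matrix approach illustrates how varying levels of sophistication can coexist, subject to fitness for purpose.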
The requirements are documentation, control, and independent review, applied separately to the inputs to, the outputs from, and the calculation processes of, the model. It is to be expected that models will change continually with developments in technology, in data availability, and in the business environment.
Backtesting of models in the insurance context is necessarily less easy to define than in the context of, say, banks’ risk models, although some forms of testing can and should be carried out for certain lines and types of risk.
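One simple form such testing can take, sketched here with entirely hypothetical numbers, is an exceedance count: compare how often realised outcomes breach a modelled percentile with how often the model says they should.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical history: 20 years of realised annual losses, and a 90th
# percentile threshold taken from the internal model (both invented here).
realised = rng.gamma(shape=2.0, scale=50.0, size=20)
model_p90 = 180.0

exceedances = int((realised > model_p90).sum())

# Under a correct model, breaches are Binomial(n=20, p=0.10); a two-sided
# binomial test flags models whose breach rate is implausible.
result = stats.binomtest(exceedances, n=len(realised), p=0.10)
print(f"Exceedances: {exceedances}/20, p-value: {result.pvalue:.3f}")
```

With only 20 annual observations such a test has little power, which is precisely why backtesting is less easy to define for insurers than for banks working with daily data.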

Future developments
In the banking sector, risk management processes including modelling are seen as an ingredient of overall competitive positioning. The best risk managers have advantages in risk selection and portfolio diversification and enjoy a lower cost of capital. This is the future for the insurance business and is likely to stimulate model development along the lines set out here.
– Data mining
With lower costs of data storage and cheap computing power, there is growing interest, particularly on the part of larger firms, in ‘mining’ their data for unsuspected patterns that can be used to compete more effectively.
– Data pooling/exchange
Smaller firms realise their potential competitive disadvantage in the internal model world, and this seems likely to create interest in pooling or exchanging data. The establishment by the ABI of a pooled operational risk database is a case in point, given the relative dearth of operational risk loss event data.
– Scenario creation
To this author it seems likely that we will see some de-emphasis of theoretical confidence probabilities and a growing emphasis on creative thinking about integrated disaster scenarios at the edge of plausibility. Supervisors will encourage this development, including encouraging firms to develop diverse ‘made-to-measure’ scenarios in the interests of reducing risk to the system.
– Retreat from Monte Carlo
There is a growing realisation that classic stochastic simulation techniques can involve a considerable waste of energy, in that a high proportion of simulations count only towards making up the numbers. Techniques are evolving which allow more focused exploration of the intersection of plausibility and insolvency, and it is likely that there will be growing emphasis on these, including probably making more granular use of data.
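Importance sampling is one such focused technique; this sketch (a standard normal loss model with an assumed mean shift, both chosen purely for illustration) spends its simulations in the tail and reweights them back:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 10_000
threshold = 3.5  # a tail event of roughly 1-in-4,000 probability

# Naive Monte Carlo: nearly every draw lands far from the tail and
# counts 'only towards making up the numbers'.
naive = rng.standard_normal(n)
p_naive = (naive > threshold).mean()

# Importance sampling: draw from N(threshold, 1) so the tail is well
# explored, then reweight by the likelihood ratio N(0,1)/N(threshold,1).
shifted = rng.standard_normal(n) + threshold
weights = np.exp(-threshold * shifted + 0.5 * threshold**2)
p_is = np.mean((shifted > threshold) * weights)

print(f"Naive estimate:              {p_naive:.2e}")
print(f"Importance-sampled estimate: {p_is:.2e}")  # true value ~2.3e-4
```

With the same budget of simulations, the naive estimator often sees no tail events at all, while the reweighted estimator concentrates its effort exactly where plausibility and insolvency intersect.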
– Agent-based modelling
This is a technique that may offer the possibility of representing some of the apparent ‘irrationality’ described above. It is becoming possible to represent third parties, such as competitors or investors, as agents operating according to certain behavioural rules within an artificial environment.
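A toy sketch of the idea, with behavioural rules and parameters invented purely for illustration: each investor sells once enough of the market is already selling, and a large enough shock cascades.

```python
import random

random.seed(0)

class Investor:
    """An agent that sells once the fraction of sellers it observes
    exceeds its personal panic threshold -- a simple behavioural rule."""
    def __init__(self):
        self.panic_threshold = random.uniform(0.1, 0.9)
        self.selling = False

    def update(self, fraction_selling):
        self.selling = fraction_selling > self.panic_threshold

agents = [Investor() for _ in range(1_000)]
fraction_selling = 0.55  # initial shock: just over half the market sells

# Iterating the environment shows herding amplify the shock step by step.
for step in range(12):
    for agent in agents:
        agent.update(fraction_selling)
    fraction_selling = sum(a.selling for a in agents) / len(agents)
    print(f"step {step}: {fraction_selling:.0%} selling")
```

With these uniform thresholds the toy model has a tipping point near 50%: smaller shocks die out, larger ones cascade into a full sell-off, which is exactly the kind of ‘irrational’ dynamic that is hard to capture in a purely distributional model.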
These developments can be very exciting for actuaries with the requisite technical and communication skills. Equally, actuaries are not the only individuals with such skills: a new financial risk management profession has developed within the global banking sector over little more than the last 20 years. It is up to actuaries individually and collectively to establish their place in this new ‘internal model’ world.
