Indecent exposures

Every year, the PwC London Market Survey takes the pulse of the London insurance market and publishes its findings. One response generally grabs the headlines: 'the most important issue on CEOs' agendas'. This year, for the first time, CEOs cited exposure aggregation management (EAM) as their number one priority.
This is hardly surprising after two record-breaking hurricane seasons, and the post-Katrina realisation that many (re)insurers did not have a good handle on their catastrophe exposures. As the chief risk officer of a large insurer recently put it to us: 'If I get my pricing wrong by 5–10%, I won't be as profitable as I expected. But if I get my aggregations wrong by 5–10%, this could mean the difference between solvency and insolvency after a large event.'

Under pressure
Under pressure from various stakeholders, we have seen, and continue to see, a flurry of activity in this arena, aimed at developing more efficient and more robust EAM frameworks. A whopping 86% of the London Market Survey respondents were looking to enhance their monitoring in peak zones such as Florida, California, or the Gulf of Mexico before the year-end.
One of the main catalysts for change is the investment community. Last year’s hurricanes have enabled them to gauge the impact of good and bad EAM:
– (re)insurers who demonstrated they had a good handle on their risks, by reporting losses in line with expectations, were rewarded, easily accessing new capital (see figure 1);
– (re)insurers who had losses in excess of expectations, and had to repeatedly restate their losses, were penalised and saw their stock prices plummet (see figure 2).
They now demand assurance that the risks assumed are fully understood by management.
Rating agencies have also voiced concerns over the data submitted by (re)insurers, worrying that the data do not always fully reflect actual exposures and may not be a completely reliable basis for assessing financial strength. All major rating agencies have taken some measures to improve the quality of the exposure information they receive, and some have even published fairly prescriptive requirements for submissions.
Finally, regulators have an obvious interest, as a (re)insurer's probability of ruin can be very sensitive to the quality of its EAM. It is therefore no surprise that the new Lloyd's Underwriting Management Standards contain a comprehensive list of requirements in this area. We also expect this to be a focus of the FSA internal capital assessment reviews.

Principles and practice
Reflecting on current practices, (re)insurers often find that, while the principles of sound EAM may appear reasonably straightforward, implementing them within a robust and effective framework requires dedicated attention and specialist expertise. The first task for an organisation is therefore to ensure that its current operational framework works well, before seeking to expand what it can do.
An EAM framework can be described in four steps:
1 Defining the risk appetite of the company. This consists of a metric to measure the risks assumed, together with a quantum determining the amount of risk acceptable to the company (eg 'exposing the company to losing no more than 30% of its capital with a 1% chance over a 12-month period'); see the sketch after this list.
2 Capturing all the exposures written around the world.
3 Converting these exposures into the risk appetite metric.
4 Managing these exposures, through underwriting, pricing, reinsurance purchasing, or capitalisation strategies.
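To make steps 1 and 3 concrete, here is a minimal Python sketch (not drawn from any particular insurer; the loss distribution and every figure are invented for illustration) that simulates aggregate annual catastrophe losses and checks the 1-in-100 outcome against an appetite of 30% of capital:

import numpy as np

rng = np.random.default_rng(seed=1)

CAPITAL = 1_000.0        # capital base, GBP millions (illustrative)
APPETITE_SHARE = 0.30    # lose no more than 30% of capital...
TAIL_PROB = 0.01         # ...with no more than a 1% chance over 12 months

# Stand-in for catastrophe model output: Poisson event counts with
# lognormal severities, summed into aggregate annual losses.
n_years = 100_000
event_counts = rng.poisson(lam=0.8, size=n_years)
annual_losses = np.array([
    rng.lognormal(mean=4.5, sigma=1.2, size=n).sum() for n in event_counts
])

# The 1-in-100 annual loss, compared with the appetite of 30% of capital.
var_99 = np.quantile(annual_losses, 1 - TAIL_PROB)
print(f"1-in-100 annual loss: {var_99:,.0f}m ({var_99 / CAPITAL:.0%} of capital)")
print("Within risk appetite" if var_99 <= APPETITE_SHARE * CAPITAL
      else "Risk appetite breached")

In practice the simulated losses would come from running the catastrophe models over the full captured portfolio, which is precisely why steps 2 and 3 matter as much as the definition of the appetite itself.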
Pitfalls within this framework occur at various levels. We have observed that most issues tend to arise from exposure leakage, misuse of catastrophe modelling, or difficulties in cascading the risk appetite down to daily underwriting decisions.

Exposure leakage
Exposure leakage occurs where some exposures are not captured within the EAM system.
It is not uncommon to see policies either not captured or only partially captured, in particular when risks are written through delegated underwriting authorities or when the quality of the exposure data is poor. As pointed out above, even a 5–10% exposure leakage could have a dramatic impact in the event of a major catastrophe and correspond to a large portion, if not all, of the (re)insurer's capital base.
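A back-of-envelope sketch, with purely hypothetical figures (a 20bn sum insured in a peak zone, a 10% damage ratio in a major event and a 1bn capital base), shows why leakage of this size matters:

# Illustration of the impact of exposure leakage (hypothetical figures).
gross_exposure = 20_000.0   # total sum insured in a peak zone, GBP millions
damage_ratio = 0.10         # loss as a share of sums insured in the event
capital = 1_000.0           # capital base, GBP millions

for leakage in (0.05, 0.10):
    unexpected_loss = gross_exposure * leakage * damage_ratio
    print(f"{leakage:.0%} leakage -> unmodelled loss of "
          f"{unexpected_loss:,.0f}m ({unexpected_loss / capital:.0%} of capital)")

Under these assumptions, exposure that never reaches the EAM system produces a loss of 10–20% of capital that management never saw coming.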

Catastrophe modelling
Appropriate use of catastrophe models is necessary but often difficult, and over-reliance on 'black-box' models sometimes undermines sound EAM principles.
The most prominent pitfalls lie in the handling of poor data quality or of modelling assumptions when relying on third-party modelling (eg by brokers).
They also extend to classes of business where commercial models are either unavailable or not sufficiently sophisticated to capture the subtleties of the business underwritten. A prime example is offshore energy, where most industry experts would agree that current catastrophe models are not well adapted to, for instance, operators' extra expenses (OEE) or pipeline exposures. In these instances, (re)insurers often rely on less sophisticated models based on deterministic scenarios, realistic disaster scenarios, or probable maximum loss (PML) factors.
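As a sketch of what such a PML-factor approach might look like (the zones and factors below are hypothetical, not market benchmarks), one could simply apply assumed loss factors to aggregated sums insured:

# Deterministic PML-factor sketch for a class the commercial models
# handle poorly (all zones and factors are hypothetical).
sums_insured = {            # aggregated sums insured by zone, GBP millions
    "Gulf of Mexico - platforms": 3_500.0,
    "Gulf of Mexico - pipelines": 1_200.0,
    "North Sea - platforms": 2_000.0,
}
pml_factors = {             # assumed share of sum insured lost in the scenario
    "Gulf of Mexico - platforms": 0.25,
    "Gulf of Mexico - pipelines": 0.15,
    "North Sea - platforms": 0.10,
}

scenario_pml = sum(sums_insured[z] * pml_factors[z] for z in sums_insured)
print(f"Scenario PML: {scenario_pml:,.0f}m")

The weakness, of course, is that everything hinges on the judgement behind those factors.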

Cascading risk appetite
Recent events have caused many risk carriers either to define or to recalibrate their risk appetites; 64% of the London Market Survey respondents were looking to do so. By far the greater challenge has been embedding the risk appetite in daily underwriting operations, or in other words, making a difference before transactions are written.
In particular, EAM takes a company-wide view of the organisation and looks at how risks interact (geographically or otherwise) with each other regardless of the class of business to which they belong. For instance, a drilling platform in the Gulf of Mexico may ‘clash’ with a hotel resort in New Orleans, even though one is an offshore energy risk and the other a property risk.
This can create complications with underwriters, who traditionally work within their own 'class of business silo'. They may be reluctant to decline a risk when the only reason is that the capacity to take on more exposure has been used up by a different class of business.
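The mechanics of the company-wide view are simple in principle: exposures are rolled up by geographic zone regardless of class. The sketch below, using three hypothetical policy records, shows how the Gulf of Mexico clash only becomes visible once classes are aggregated together:

from collections import defaultdict

# Hypothetical policy records: (class of business, zone, modelled loss in GBPm)
policies = [
    ("Offshore energy", "Gulf of Mexico", 40.0),   # drilling platform
    ("Property",        "Gulf of Mexico", 25.0),   # New Orleans hotel resort
    ("Property",        "Florida",        30.0),
]

# Aggregate across classes of business, by zone: the company-wide view.
by_zone = defaultdict(float)
for cob, zone, loss in policies:
    by_zone[zone] += loss

for zone, total in by_zone.items():
    print(f"{zone}: {total:,.0f}m aggregate modelled loss across all classes")

Viewed class by class, neither the energy nor the property book looks remarkable; viewed by zone, the Gulf of Mexico accumulation is the largest single concentration.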

A speedometer for underwriting
The quest for better EAM does not stop with fixing current practical problems. Some organisations want to get more value out of their EAM. For these companies, the holy grail of EAM seems to be akin to a speedometer for underwriting: it tells management whether it is writing beyond its risk appetite ('the speed limit') or whether it has room to increase its exposures and earn additional premium income. As with any speedometer, one would expect it to be reliable, to produce real-time information, and to work in any environment (ie for all types of risks, beyond the traditional property catastrophe classes).
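A minimal sketch of such a reading, with illustrative figures (in practice the modelled loss would come from near-real-time aggregation of the in-force book), might look like this:

# A 'speedometer' reading: how much of the risk appetite is currently used.
appetite_limit = 300.0         # 30% of a 1,000m capital base, GBP millions
current_1_in_100_loss = 255.0  # modelled 1-in-100 loss of the in-force book

utilisation = current_1_in_100_loss / appetite_limit
headroom = appetite_limit - current_1_in_100_loss
print(f"Capacity utilisation: {utilisation:.0%} "
      f"(headroom of {headroom:,.0f}m before the 'speed limit')")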
The London Market Survey highlighted the focus on wider risks, with 86% of respondents expecting to analyse exposure aggregations for terrorism (and about half for avian flu, corporate scandals and stock market crashes).
Some (re)insurers are also developing 'macro-models' to speed up their reporting times. Like micro-economics, which models the behaviour of individual economic agents, current approaches seek to model the details of individual risks. By contrast, macro-models rely on a handful of aggregate variables to encapsulate most of the information on catastrophe exposures (in much the same way that macro-economists forecast GDP from variables like inflation or the unemployment rate, rather than attempting to model the behaviour of every economic agent).
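As an illustrative sketch of the idea only (the variables, data and fitted relationship below are all invented), a simple macro-model might regress past full-model results on a few portfolio-level variables and use the fit for quick estimates between full model runs:

import numpy as np

# Invented history of portfolio-level 'macro' variables and the
# corresponding full catastrophe-model output (1-in-100 loss, GBPm).
rng = np.random.default_rng(seed=2)
peak_zone_tsi = rng.uniform(10_000, 30_000, size=50)   # sums insured in peak zones
delegated_share = rng.uniform(0.1, 0.4, size=50)       # share written via delegated authorities
full_model_loss = 0.012 * peak_zone_tsi + 150 * delegated_share + rng.normal(0, 10, 50)

# Fit the macro-model by ordinary least squares.
X = np.column_stack([np.ones_like(peak_zone_tsi), peak_zone_tsi, delegated_share])
coefs, *_ = np.linalg.lstsq(X, full_model_loss, rcond=None)

# Quick estimate for today's portfolio, without a full model run.
estimate = coefs @ np.array([1.0, 22_000.0, 0.25])
print(f"Macro-model estimate of the 1-in-100 loss: {estimate:,.0f}m")

The trade-off is the familiar one: speed is gained at the cost of the detail that the full, risk-by-risk models provide.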

New frameworks
Many (re)insurers are striving to report more accurately and quickly on all the types of risks to which they are exposed. The pressure comes from investors, rating agencies, and regulators. Not surprisingly, new frameworks have emerged that are similar to the investment banking risk management frameworks these stakeholders are used to.
