Simon Willes explains why investment de-risking a pension scheme without regard to employer covenant may not lead to optimal member outcomes
UK parliamentary scrutiny of defined benefit (DB) pension scheme debacles has focused on underfunding by employers as the bête noire. This can undoubtedly be a big temptation for finance directors, and deserves due criticism where proven.
If you dig deeper, the need for extra employer funding often reflects the failure of trustees and employers to adequately hedge the major economic risk exposures of interest rates and inflation, leading to deficits that are too high - and expensive to repair.
The relentless trend to reduce investment risk is partly driven by legislation that means surplus generation is potentially of no value to either members or employers. Combine this with risk-averse trustees and regulatory supervision, and de-risking becomes the key item on an investment committee's agenda.
Additionally, a liability-driven investment (LDI) approach to de-risking based on index-linked government bonds (ILGs) reduces the need to look for capital markets solutions to hedge exposure to inflation and interest rates. Thus presented, LDI has obvious appeal, and this naturally draws pension assets into ILGs - which have become over-bought and currently deliver negative returns.
This results in the need for more employer contributions in order to replace the investment returns lost through LDI - although leveraged LDI partially reduces the impact. Unfortunately for most schemes and employers, a quick check on the relative proportions of annual investment returns to contributions reveals a fallacy: investment returns are doing all the driving forward, while contributions are merely repairing what has gone wrong, as illustrated in Figure 1.
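The scale of that imbalance is easy to check. The sketch below uses hypothetical round numbers (not figures from the article or Figure 1) purely to illustrate the kind of back-of-envelope comparison meant here:

```python
# Illustrative only: hypothetical figures for a mature DB scheme,
# not taken from Figure 1 or any real scheme.
assets = 1_000.0               # scheme assets, GBP m
expected_return = 0.04         # assumed 4% p.a. expected investment return
deficit_contributions = 12.0   # assumed annual deficit-repair contributions, GBP m

annual_return = assets * expected_return
ratio = annual_return / deficit_contributions

print(f"Expected investment returns: £{annual_return:.0f}m p.a.")
print(f"Deficit contributions:       £{deficit_contributions:.0f}m p.a.")
print(f"Returns are {ratio:.1f}x contributions")
```

On assumptions like these, expected returns run at several times the contribution rate, which is the sense in which returns "do the driving" while contributions only repair shortfalls.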
Payment risk on contributions
Investing in LDI assumes you can readily replace investment returns with employer contributions, but limited quantitative analysis is currently used in asset-liability modelling (ALM) to understand the increase in payment risk associated with such reliance, and the consequent impact on benefit security if the contributions are not forthcoming. Failure to analyse this transfer of investment risk to payment risk is a serious deficiency in an integrated risk management (IRM) world.
The relationship between benefit security and investment de-risking must be properly understood and will reflect, among other things, the combination of scheme funding strength and employer support.
It is probable that only very financially strong employers with relatively light DB scheme burdens are 'efficient' at replacing lower investment returns with contributions, and can be relied on to do so. As the DB scheme burden increases relative to the strength of the employer, the 'conversion rate' of replacing returns with contributions gets worse; payment risk on contributions rises until the employer simply can't afford de-risking. At the extreme, it is worth considering the fallacy of embarking on a sharp de-risking response to corporate financial stress events - you are making a bad situation considerably worse!
To illustrate and quantify how inefficiently all but the strongest employers replace investment returns with contributions, we set out below some simplified integrated relationships between:
- Two metrics describing overall member benefit security (the probability of reaching self-sufficiency, and expected benefit losses), and
- Investment policy simplified as the % of growth assets in a static portfolio.
We used scheme liabilities representative of a maturing closed scheme and deployed four different sponsor cases representing the UK Pensions Regulator's four covenant grades, from strong through to weak (CG1-4). We ran our IRM model for 30 years for each sponsor to derive a 'curve' describing the relationship between benefit security and choice of investment policy, as shown in Figures 2-5.
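The mechanics of such a model can be sketched very crudely. The toy Monte Carlo simulation below is not Gazelle's IRM model: all parameters (returns, volatilities, the affordability cap on contributions, insolvency probabilities) are invented for illustration. It only shows how covenant-dependent payment risk enters the probability of reaching self-sufficiency:

```python
import random

# Toy Monte Carlo sketch of the IRM idea: a funding path with a blended
# growth/matching portfolio, deficit-repair contributions capped by an
# assumed employer affordability, and a chance each year that the
# covenant fails and contributions stop. All numbers are hypothetical.

def prob_self_sufficiency(growth_pct, annual_affordable, insolvency_p,
                          n_sims=2000, years=30, seed=1):
    random.seed(seed)
    successes = 0
    for _ in range(n_sims):
        assets, target = 80.0, 100.0   # start 80% funded vs self-sufficiency
        solvent = True
        for _ in range(years):
            target *= 1.025            # liabilities roll up (assumed 2.5% p.a.)
            # Blended return: risky growth assets vs matching bonds
            r = (growth_pct * random.gauss(0.055, 0.16)
                 + (1 - growth_pct) * random.gauss(0.025, 0.02))
            assets *= 1 + r
            if solvent and random.random() < insolvency_p:
                solvent = False        # covenant fails: no more contributions
            if solvent and assets < target:
                # Employer repairs the deficit, capped by affordability
                assets += min(annual_affordable, target - assets)
            if assets >= target:
                successes += 1
                break
    return successes / n_sims

# Strong covenant (high affordability, low insolvency risk) vs weak:
for label, afford, ins_p in [("CG1", 5.0, 0.002), ("CG4", 0.5, 0.03)]:
    for g in (0.2, 0.6):
        p = prob_self_sufficiency(g, afford, ins_p)
        print(f"{label} growth={g:.0%}: P(self-sufficiency) ~ {p:.0%}")
```

Even in this crude form, the strong sponsor's outcome is driven by contributions and is robust to a low-growth portfolio, while the weak sponsor's outcome depends far more on the investment return it retains.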
In live cases, we build in a lot of scheme, investment and employer-specific information to give a detailed representation of the scheme and employer relationship. Broadly, the analysis should hold for DB schemes across different jurisdictions, although the quantum of the benefit security metrics would need to reflect different terminal financial outcomes for members.
Benefit security curves for CG1-CG4
The axes show (a) the % of simulations reaching self-sufficiency (in blue), and (b) expected benefit losses as a % of solvency liabilities (in red).
CG1: This employer is efficient at replacing returns with contributions and we observe a bias towards de-risking from a benefit security viewpoint, tempered by a 'step-down' in simulations reaching self-sufficiency at high levels of de-risking.
CG2: With a slightly weaker employer, a similar overall relationship is observed but with lower levels of benefit security. A further difference is observed in that de-risking to a high level now has a negative impact on benefit security as the probability of reaching self-sufficiency is sharply reduced.
CG3: With a weaker employer the benefit security curve twists upwards as growth assets fall below 50% and also begins to twist upwards at high levels of growth assets, as the employer can no longer be relied on to repair higher variability of investment returns. The optimal point is around 60% growth assets.
CG4: With weak employers, the benefit security relationship is the mirror image of the CG1 employer, with a bias towards re-risking observed where driving funding progress towards self-sufficiency is a key requisite to improve the chances of members receiving full benefits.
Actuarial science based on ALM alone comes close to assuming that the relationship between benefit security and investment policy is always of the CG1 type, instilling a bias towards de-risking because it assumes efficient conversion of returns into employer contributions without material payment risk. Most discussions between trustees and employers, therefore, are continually about further de-risking. The above graphs suggest this may generate the wrong answer for CG3 and CG4 schemes, and even in some CG2 cases: de-risking advice offered to CG3 and CG4 type clients on an unintegrated basis is not necessarily correct once payment risk is taken into account.
There are two opposing views held about DB pension risk in the UK. One viewpoint, largely based on ALM alone, is that by 2030 most schemes will have had their deficits repaired and remaining liabilities will be reducing fast. The other IRM view recognises that there is considerable payment risk embedded with weaker CG3 and CG4 employers (comprising around 40% of DB schemes in deficit), which will result in a pool of structural funding deficits that are very difficult to repair and may well ultimately transfer over time to the Pension Protection Fund.
You need to understand the IRM 'landscape' before deciding where to position yourself in it. Visually, each scheme presents its own particular IRM 'footprint', and you need to understand its contours before setting investment policy. Figure 6 illustrates one such 'footprint': a 3D 'heat-map' measuring overall benefit security for combinations of covenant strength and investment policy, showing that weak covenant strength and low investment risk make a poor combination.
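The shape of such a landscape can be gestured at with a made-up scoring function. The proxy below is not Gazelle's model; its terms are invented solely so that payment risk penalises low-growth portfolios under weak covenants, and return volatility penalises high-growth portfolios under strong ones:

```python
# Hypothetical proxy for overall benefit security as a function of
# covenant strength (1 = CG1 strong, 0 = CG4 weak) and growth allocation.
# The coefficients are arbitrary illustrations, not calibrated values.
def security_score(covenant, growth):
    expected_return = 0.01 + 0.05 * growth            # returns drive funding
    payment_risk = (1 - covenant) * (1 - growth)      # weak sponsor can't replace lost returns
    volatility_cost = (0.10 * covenant + 0.04 * (1 - covenant)) * growth ** 2
    return expected_return - 0.05 * payment_risk - volatility_cost

best = {}
for label, cov in [("CG1", 1.0), ("CG2", 0.66), ("CG3", 0.33), ("CG4", 0.0)]:
    scores = [(security_score(cov, g / 10), g / 10) for g in range(11)]
    best[label] = max(scores)[1]
    print(f"{label}: security-maximising growth allocation ~ {best[label]:.0%}")
```

Under these invented coefficients the security-maximising growth allocation rises as the covenant weakens, which is the qualitative contour the heat-map in Figure 6 describes.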
Regulatory focus in the UK has been based on the view that schemes carry too much investment risk relative to the employer's ability to repair deficits, with Value at Risk used for analytical purposes. Using IRM, regulators would be able to consider where on the 'benefit security curve' a scheme sits.
LDI and investment de-risking can also result in another risk transfer, from pensioners to younger scheme members. Modelling the impact of underfunding risk on active and deferred members shows that today's decisions about investment policy can have severe consequences for them. The risk of longer-term reliance on employer support lies with younger members, but this is not currently where the focus of pension risk management is directed.
Analysis of member outcomes split by pensioners and deferred members may well indicate that, unless a level of return potential is maintained, there will be a much-reduced chance of a happy ending for younger members. It is difficult to see how to ensure intergenerational fairness, or at least demonstrate consideration of it, without quantifying the probability of full payment of benefits, and expected losses where employers fail to support schemes, for different categories of membership. IRM could answer these questions.
Astute employers, trustees and regulators should seek to ensure that investment de-risking decisions are subjected to additional tests and analysis:
- Does a decision to further de-risk a scheme improve overall benefit security once risk transfers are taken into account?
- If it does, what is the price in terms of impact on employers, shareholders and debt providers?
- What is the impact on the most exposed younger categories of members - does de-risking increase the chances that they will be left high and dry?
Let's think harder before we jump at further de-risking, and make sure we undertake investment de-risking because we can justify that it improves overall member benefit security without putting extra risk on younger members. IRM allows us to do this, and to make judgments and decisions supported by fully integrated analysis.
Simon Willes is executive chairman of Gazelle Corporate Finance