The Actuary: The magazine of the Institute and Faculty of Actuaries
Bayesian methods: mind your step

Open-access content Wednesday 5th February 2020 — updated 1.42pm, Friday 1st May 2020
Author: Andrew Smith

Andrew Smith asks: can Bayesian methods give better long-term interest forecasts?


The idea of a long-term interest rate is embedded in actuarial thought and practice. While market interest rates fluctuate, we think about long-run averages driven by economic fundamentals. Tasks ranging from budgeting for pension contributions to the ultimate forward rate in Solvency II require assessments of long-run average returns.

Estimation of long-run returns involves a mix of judgment and, sometimes, intricate quantitative models. Bayesian statistics gives us a framework for combining these elements: the judgment corresponds to a prior distribution of parameters, while the forecast is based on a posterior parameter distribution given some data.  

However, that is not how judgment is typically applied in actuarial problems. The most conspicuous use of judgment comes in the selection of parameters, by choosing (or not) to modify parameters emerging from a mechanical model-fitting exercise. Less visibly, judgment is also involved in the selection of data and models.

Since the 1990s, sampling-based methods such as the Metropolis-Hastings algorithm (MHA) have transformed Bayesian statistics in fields as diverse as agriculture, medicine and statistical physics. 

If we wanted to, we could now apply Bayesian methods to long-term return estimation. Do we want to? Why are so few actuaries doing this?

The interest rate modelling problem: classical solution

The Bank of England publishes a data set of yield curves, based on gilts, with month-end nominal curves available from January 1970. Figure 1 shows the history of five-, 10- and 15-year spot interest rates.

Statistically, a long-term average arises from a stationary stochastic process, such as an AR(1) model. However, the path of interest rates in many countries since the 1970s looks more like a random walk with a steady downward drift, which is not stationary and has no long-term average. To cover both cases - the stationary and random walk models - we propose a model for the 10-year yield y_t in month t using a discrete Pearson process:

y_{t+1} = y_t + a + b·y_t + σ·√(s² + (y_t − θ)²)·e_{t+1}

where a, b, σ, θ and s are parameters to be estimated. This model class allows for mean reversion, leading to stationary models, but also includes random walks with drift. The volatility term allows for constant conditional variance, but can also allow conditional variance to fall as yields fall - behaviour associated with some popular stochastic interest rate models such as Cox-Ingersoll-Ross or Black-Karasinski. The errors e_t are independent standard normal random variables.
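As a concrete sketch, dynamics of this kind can be simulated directly. The exact parametrisation of the drift and the quadratic volatility term, and all parameter values below, are illustrative assumptions, not the fitted values reported in Table 1:

```python
import numpy as np

def simulate_pearson(y0, a, b, sigma, theta, s, n_months, n_paths, rng):
    """Simulate paths of a discrete Pearson process
    y[t+1] = y[t] + a + b*y[t] + sigma*sqrt(s^2 + (y[t]-theta)^2) * e[t+1],
    with e[t] i.i.d. standard normal (an assumed parametrisation)."""
    y = np.empty((n_paths, n_months + 1))
    y[:, 0] = y0
    for t in range(n_months):
        vol = sigma * np.sqrt(s**2 + (y[:, t] - theta)**2)
        y[:, t + 1] = y[:, t] + a + b * y[:, t] + vol * rng.standard_normal(n_paths)
    return y

# 20 illustrative 10-year paths of a monthly yield starting at 1%
rng = np.random.default_rng(0)
paths = simulate_pearson(y0=0.01, a=0.0, b=-0.01, sigma=0.05, theta=0.02,
                         s=0.005, n_months=120, n_paths=20, rng=rng)
```

With b negative the drift pulls yields back towards a long-run level, while b = 0 recovers a random walk with drift a.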

Fit by conditional maximum likelihood

A maximum likelihood optimiser applied to the discrete Pearson process, based on monthly data, gives the parameters in Table 1. We show 20 paths from the model in Figure 2.

Table 1
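A minimal sketch of a conditional maximum likelihood fit, assuming the parametrisation above. The yield series here is made up for illustration; in practice one would use the Bank of England data. Log transforms keep σ and s positive during the search:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, y):
    """Conditional negative log-likelihood of the discrete Pearson process,
    treating the first observation as fixed."""
    a, b, log_sigma, theta, log_s = params
    sigma, s = np.exp(log_sigma), np.exp(log_s)
    mean = y[:-1] + a + b * y[:-1]
    var = sigma**2 * (s**2 + (y[:-1] - theta)**2)
    resid = y[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Hypothetical monthly yield series standing in for the real data
rng = np.random.default_rng(1)
y = 0.05 + 0.002 * np.cumsum(rng.standard_normal(600))

x0 = [0.0, 0.0, np.log(0.02), 0.05, np.log(0.01)]
fit = minimize(neg_log_likelihood, x0=x0, args=(y,), method="Nelder-Mead")
a_hat, b_hat = fit.x[0], fit.x[1]
sigma_hat, theta_hat, s_hat = np.exp(fit.x[2]), fit.x[3], np.exp(fit.x[4])
```

As the article goes on to explain, an optimiser like this reports a local maximum: it gives no warning that the likelihood surface has pathologies elsewhere.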

Statistically, these are plausible extrapolations of the history we have. A striking feature of the fitted model is the high frequency of large negative interest rates - much more frequent than some would consider acceptable from an economic scenario generator. Furthermore, thanks to the quadratic form in the Pearson process definition, rates become more volatile as they drift into large negative territory. This is a mirror image of what happened in the 1970s and 1980s. 

The question is: are these statistically plausible paths economically credible? Even before we apply MHA, the Pearson scenarios are wilder than those produced by commercial scenario generators. We might ask if it is legitimate to use judgment to tame those negative scenarios.

All is not as it seems with the maximum likelihood fit, either. The optimiser has found a local maximum of the likelihood, but not a global maximum. Indeed, globally, the likelihood is unbounded, which you can see by setting θ equal to one of the observations and letting s become small. The optimiser has failed to solve the problem posed, yet we have no warning, because the algorithm has wrongly converged to a suboptimal but plausible set of parameters. The difficulty of unbounded estimation is common in financial models with non-constant volatility, and is not special to the Pearson model we have fitted here.
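The unbounded likelihood can be exhibited numerically. This is an illustrative sketch assuming the parametrisation above, on a tiny made-up series; besides setting θ to an observation, the drift a is pinned so that the residual for the chosen transition vanishes, which makes the blow-up explicit:

```python
import numpy as np

def log_likelihood(y, a, b, sigma, theta, s):
    """Conditional log-likelihood of the discrete Pearson process."""
    mean = y[:-1] + a + b * y[:-1]
    var = sigma**2 * (s**2 + (y[:-1] - theta)**2)
    resid = y[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# A tiny invented yield series, just to exhibit the effect
y = np.array([0.030, 0.050, 0.045, 0.060, 0.040])

b = 0.0
theta = y[1]                 # centre the variance term on one observation...
a = y[2] - y[1] - b * y[1]   # ...and zero that transition's residual

# As s shrinks, that one transition's density spikes and the
# log-likelihood grows without bound
lls = [log_likelihood(y, a, b, 0.02, theta, s) for s in (5e-6, 5e-9, 5e-12)]
assert lls[0] < lls[1] < lls[2]
```

The other transitions stay bounded because their conditional variances are bounded away from zero, so nothing offsets the divergence.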

Figure 1

Prior parameters and Metropolis-Hastings 

The 1990s saw the widespread use of Markov chain Monte Carlo sampling-based methods, of which MHA is one of the most popular. The idea of MHA is to build a random walk through parameter values. However, each step of the random walk is subject to an acceptance test: if a proposed new parameter value is much less plausible than the previous value (as measured by a combination of data likelihood and prior distribution), the step is cancelled. The result is a constrained random walk that spends more time in more plausible parameter regions. We are free to choose the standard deviation of the random step size, but not the direction, which has to be governed by the acceptance rule. There is a theorem saying that, over many steps, the distribution of random walk observations converges to the Bayesian posterior distribution.
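A minimal random-walk Metropolis sampler is only a few lines. The target below is a toy standard normal rather than the interest rate posterior, so that the acceptance mechanics are visible in isolation:

```python
import numpy as np

def metropolis_hastings(log_post, x0, step_sd, n_steps, rng):
    """Random-walk Metropolis: propose x' = x + N(0, step_sd^2) and accept
    with probability min(1, exp(log_post(x') - log_post(x)))."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    lp = log_post(x)
    for i in range(n_steps):
        prop = x + step_sd * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # acceptance test
            x, lp = prop, lp_prop
        samples[i] = x                           # rejected steps repeat x
    return samples

# Toy target: a standard normal log-density, so after burn-in the draws
# should have mean near 0 and standard deviation near 1
rng = np.random.default_rng(3)
draws = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), x0=[2.0],
                            step_sd=1.0, n_steps=20000, rng=rng)
```

Note that a rejected proposal still produces an output sample (the chain stays put); dropping those repeats would bias the results.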

There are some practical obstacles in the case of interest rates time series. Our likelihood function has a long ridge around the local maximum, which complicates how we set the step size in the random walk in MHA. In theory, we are free to choose whatever step size we want. Set the step too large and you keep falling off the ridge (and the step is rejected, meaning you are back where you started). If the steps are too small, you hardly move along the ridge at all. With many steps, the MHA mathematically converges to a limiting distribution, but that convergence may be so slow as to be useless in practice. We can fix the slow convergence in this case by mapping out the ridge around the maximised likelihood. Then we adjust MHA to make large steps parallel to the ridge but small steps orthogonal to it. 
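The ridge problem can be illustrated with a toy two-dimensional target that is tight across one direction and wide along another. The target and the proposal scales here are invented for illustration, not taken from the interest rate model:

```python
import numpy as np

def log_post(p):
    """A narrow ridge along the direction (1, 1): sd 1.0 along, 0.01 across."""
    along = (p[0] + p[1]) / np.sqrt(2)
    across = (p[0] - p[1]) / np.sqrt(2)
    return -0.5 * (along**2 / 1.0**2 + across**2 / 0.01**2)

def run_chain(step_chol, n_steps=5000, seed=4):
    """Random-walk Metropolis with proposal step step_chol @ N(0, I);
    returns the acceptance rate."""
    rng = np.random.default_rng(seed)
    x, lp, accepts = np.zeros(2), log_post(np.zeros(2)), 0
    for _ in range(n_steps):
        prop = x + step_chol @ rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp, accepts = prop, lp_prop, accepts + 1
    return accepts / n_steps

u = np.array([1.0, 1.0]) / np.sqrt(2)   # along the ridge
v = np.array([1.0, -1.0]) / np.sqrt(2)  # across the ridge
aligned = np.column_stack([0.5 * u, 0.005 * v])  # big along, small across
isotropic = 0.5 * np.eye(2)                      # same big step everywhere

rate_aligned = run_chain(aligned)
rate_isotropic = run_chain(isotropic)
```

The isotropic proposal keeps "falling off the ridge" and is rejected most of the time, while the ridge-aligned proposal accepts far more steps for the same along-ridge mobility - the same remedy the text describes for the interest rate likelihood.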

Once we got the MHA working, we found in the context of interest rates that allowing for parameter error, and imposing a judgment that yields should be stationary, had an imperceptible or only modest impact on interest rate percentiles over horizons up to 10 years.



 


The 'uncertainty onion'

Uncertainty matters in modelling long-term returns. If we say we know the long-term return, we are kidding ourselves and misrepresenting long-term investment risks. We should consider different paths with different long-term returns, rather than assume all future paths are driven by the same parameters. Algorithms such as MHA can help us incorporate parameter uncertainty.

There are several layers of uncertainty, like unpeeling an onion. The outer layer is the stochastic uncertainty, as described by a model whose parameters we know. This model might be fitted mechanically to available data. What we usually call stochastic modelling does not penetrate all the layers of uncertainty, just the outer one. 

The next layer is parameter uncertainty, in which we vary our model choices, testing alternative plausible parameters. In this layer, we still assume a single set of parameters for each future path and, more significantly, we assume the same set of parameters held throughout the historic data. The constant historic parameters are important because this is how we judge what constitutes plausible future values. We can use Bayesian methods to penetrate the second layer. The advent of algorithms such as MHA has made it more feasible to unpeel this second layer, and this is common practice in other areas of statistics.
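In simulation terms, unpeeling the second layer means each future path carries its own parameter draw. In the sketch below the "posterior" cloud for (a, b) is faked with a normal distribution for illustration; in practice it would be the output of the MHA chain, and for brevity a constant-volatility model replaces the full Pearson dynamics:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior draws for the drift parameters (a, b) - a stand-in
# for MHA output, sampled here from an assumed normal cloud
posterior = rng.multivariate_normal([0.0, -0.01], np.diag([1e-6, 1e-5]),
                                    size=1000)

def simulate_path(a, b, sigma, y0, n_months, rng):
    """One yield path under fixed parameters (constant-volatility version)."""
    y = np.empty(n_months + 1)
    y[0] = y0
    for t in range(n_months):
        y[t + 1] = y[t] + a + b * y[t] + sigma * rng.standard_normal()
    return y

# One parameter draw per path: the fan of outcomes now reflects
# parameter error as well as month-to-month randomness
picks = posterior[rng.integers(0, 1000, size=50)]
paths = np.array([simulate_path(a, b, 0.002, 0.01, 120, rng)
                  for a, b in picks])
```

Contrast this with the outer layer of the onion, where all 50 paths would share the single fitted (a, b).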

There are further layers of uncertainty to explore. We should consider whether markets in the 1970s could have operated with different parameters to today, or whether data may have been cherry-picked or wilful blindness employed to direct us away from painful scenarios. 

Compared to many other statistical applications, actuarial problems can have a thin second layer, due to large amounts of data that may not be relevant if parameters were not constant. It would be wrong to claim that the Bayesian methods we have applied penetrate the inner layers of the uncertainty onion, and it's not clear that actuarial judgment does, either.

Andrew Smith is an assistant professor in the School of Mathematics and Statistics at University College Dublin

 
