Modelling: Mastering the correlation matrix


Phil Joubert and Stephen Langdell explore the challenges of setting valid correlation matrices for risk modelling


9 SEPTEMBER 2013 | PHIL JOUBERT AND STEPHEN LANGDELL



A correlation matrix is used by actuaries in a variety of settings, for example in insurance capital modelling. It is central to risk calculations, as it specifies correlations between all pairs of risk factors being modelled. Correlation matrices can be used directly to combine stand-alone risk capital requirements, for example in the Solvency II Standard Formula calculation, or in specifying copulae in more complex capital models. Sometimes a matrix 'looks like' a correlation matrix but isn't one mathematically, and we need to be able to 'fix' these matrices.

Correlations between variables vary between -1 (perfect negative correlation) and +1 (perfect positive correlation). The correlation of a variable with itself is +1, and the correlation between variables X and Y is the same as between Y and X. We deduce that for a square matrix to be a correlation matrix, it must be symmetric, have elements in the range [-1, 1] off the main diagonal and 1s on the main diagonal.

There is a fourth, more subtle requirement: the matrix has to be 'internally consistent'. If we know the relationship between variables X and Y, and between Y and Z, we should have an idea of the relationship between X and Z. This is where supposed correlation matrices often break down.


Why a matrix might be broken

Correlation matrices in some applications (e.g. portfolio risk) are calculated from historic data, but rarely in a consistent way. Data might be missing because a particular stock didn't trade on a given day, a particular market was closed, or the company didn't exist until five years ago. Instead, correlations are calculated pairwise and then put into the form of a symmetric matrix. There is no guarantee that this matrix satisfies the consistency requirement.
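As a minimal sketch of how this happens in practice (assuming Python with numpy and pandas, which the article does not name), pandas estimates each correlation from the pairwise-complete observations, so each entry of the matrix can be based on a different subset of the data:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    returns = pd.DataFrame(rng.standard_normal((250, 4)),
                           columns=["stock_a", "stock_b", "stock_c", "stock_d"])

    # Simulate inconsistent histories: one stock listed late, another
    # market closed for part of the period.
    returns.iloc[:100, 1] = np.nan
    returns.iloc[150:, 2] = np.nan

    # Each entry is estimated from the pairwise-complete rows, so the
    # entries need not be mutually consistent; the result is not
    # guaranteed to be positive semi-definite.
    corr = returns.corr()
    print(np.linalg.eigvalsh(corr.to_numpy()))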

In insurance, lack of data means that a firm's correlation matrix is frequently set, at least partly, by 'expert judgement'. Managers might be asked to set correlations between risk factors as {high, medium, low, none, low negative, medium negative, high negative} = {0.75, 0.5, 0.25, 0, -0.25, -0.5, -0.75}. Even the technically inclined will struggle to keep the result consistent for a matrix with more than a few elements; for a typical case with hundreds of risk factors it is almost impossible.

Whatever the origin of the problem, we are often presented with a broken correlation matrix and asked to fix it. We'll use a simple example to illustrate. The offending matrix is:

         X     Y     Z
    X  1.00  0.95  0.00
    Y  0.95  1.00  0.95
    Z  0.00  0.95  1.00

In this case it is clear that there is no consistency - how can X be 95% correlated to Y, Y 95% correlated to Z, and X and Z be uncorrelated? This is not sensible.
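A quick way to confirm this numerically (a minimal sketch in Python/numpy; the article itself names no particular package) is to test the requirements directly:

    import numpy as np

    def is_valid_correlation_matrix(C, tol=1e-12):
        """Check symmetry, unit diagonal, entry range and eigenvalues."""
        return (np.allclose(C, C.T)
                and np.allclose(np.diag(C), 1.0)
                and np.all(np.abs(C) <= 1.0 + tol)
                and np.linalg.eigvalsh(C).min() >= -tol)

    C = np.array([[1.0, 0.95, 0.0],
                  [0.95, 1.0, 0.95],
                  [0.0, 0.95, 1.0]])
    print(is_valid_correlation_matrix(C))  # False: it fails the eigenvalue test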

The theory

Before we attempt to fix this broken matrix, a brief mathematical interlude is needed. Recall that an eigenvector of a square matrix C is a non-zero vector, v, which satisfies the equation:

Cv = av

for some scalar a, known as the eigenvalue of C corresponding to v.

Mathematically, the consistency requirement implies that the correlation matrix must be 'positive semi-definite', which is equivalent to requiring that all of its eigenvalues be non-negative. (These concepts are linked by the existence of the so-called Cholesky decomposition of the matrix - but that is one for another time.) Now a symmetric matrix C can be decomposed as


C = QΛQᵀ

where Q is the matrix whose columns are the eigenvectors of C, Λ is the diagonal matrix with the corresponding eigenvalues on the diagonal and zeros elsewhere, and Qᵀ is the transpose of Q. This is called the 'eigen-decomposition' of the matrix, and any decent statistical or numerical package should be able to do it.
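In Python/numpy, for instance, this is a one-liner (a sketch, reusing the offending matrix from above):

    import numpy as np

    C = np.array([[1.0, 0.95, 0.0],
                  [0.95, 1.0, 0.95],
                  [0.0, 0.95, 1.0]])

    # eigh is the routine for symmetric matrices; eigenvalues come back
    # in ascending order, eigenvectors as the columns of Q.
    lam, Q = np.linalg.eigh(C)
    print(lam)                                     # approx [-0.3435, 1.0, 2.3435]
    print(np.allclose(C, Q @ np.diag(lam) @ Q.T))  # True: C = QΛQᵀ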


Patch it up

A quick and dirty method of patching up our broken matrix exploits this characterisation of positive semi-definiteness. We can attempt to fix the matrix by calculating its eigen-decomposition, setting any negative eigenvalues to zero, and then reconstructing. So we form QΛ'Qᵀ, where Λ' is the same as Λ but with the negative entries replaced by zero, and scale the result to give a matrix C' with ones on the diagonal.

For our matrix the eigenvalues are {-0.3435, 1.0000, 2.3435}. Setting the negative entry to zero and calculating C' as above results in:

[Matrix: the repaired correlation matrix C', reproduced by the computation sketched below]
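A minimal sketch of this patch-up in Python/numpy (the rounded output in the comments is our own computation, not copied from the article's figure):

    import numpy as np

    C = np.array([[1.0, 0.95, 0.0],
                  [0.95, 1.0, 0.95],
                  [0.0, 0.95, 1.0]])

    lam, Q = np.linalg.eigh(C)
    lam_clipped = np.clip(lam, 0.0, None)   # Λ': negative entries set to zero
    B = Q @ np.diag(lam_clipped) @ Q.T      # QΛ'Qᵀ: PSD, but diagonal no longer 1
    d = 1.0 / np.sqrt(np.diag(B))
    C_fixed = B * np.outer(d, d)            # rescale to unit diagonal
    print(np.round(C_fixed, 4))
    # approximately [[1.      0.7345  0.0791]
    #                [0.7345  1.      0.7345]
    #                [0.0791  0.7345  1.    ]]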

Find the closest correlation matrix

While the above method works, it is a bit arbitrary - we don't really understand its relationship to the original matrix. On the other hand, the closest correlation matrix method (Higham, 2002) explicitly minimises the distance between the original matrix and the 'fixed' matrix. In this case we define the 'distance' between two matrices as the square root of the sum of the squared differences of their elements:

||X - Y|| = √( Σ_i Σ_j (x_ij - y_ij)² )


This is called the Frobenius distance. Finding the matrix that minimises this distance while preserving positive definiteness can be done numerically. This results in the matrix below:

[Matrix: the nearest correlation matrix to the original in the Frobenius distance]

which is similar to the matrix we found in the previous section. A quick computation of the Frobenius distance using the formula above will show that this matrix is indeed 'closer' to the original.
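A minimal sketch of Higham's method, in its alternating-projections form (in practice one would reach for a library implementation, e.g. corr_nearest in statsmodels or the NAG nearest-correlation-matrix routines, rather than rolling one's own):

    import numpy as np

    def nearest_correlation(A, max_iter=200, tol=1e-8):
        """Alternate between projecting onto the positive semi-definite
        matrices and the unit-diagonal matrices, with Dykstra's correction."""
        Y = A.copy()
        dS = np.zeros_like(A)
        for _ in range(max_iter):
            R = Y - dS                       # Dykstra's correction step
            lam, Q = np.linalg.eigh(R)
            X = Q @ np.diag(np.clip(lam, 0.0, None)) @ Q.T  # PSD projection
            dS = X - R
            Y_next = X.copy()
            np.fill_diagonal(Y_next, 1.0)    # unit-diagonal projection
            if np.linalg.norm(Y_next - Y, 'fro') < tol:
                return Y_next
            Y = Y_next
        return Y

    C = np.array([[1.0, 0.95, 0.0],
                  [0.95, 1.0, 0.95],
                  [0.0, 0.95, 1.0]])
    C_near = nearest_correlation(C)
    print(np.round(C_near, 4))
    # Frobenius distance to the original: no larger than for the quick fix.
    print(np.linalg.norm(C - C_near, 'fro'))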

Having an analytic framework to hang our method on means that we can introduce more flexibility. Suppose the manager setting the correlations miscommunicated his or her intentions, and that the 0 in the matrix should have been interpreted as "I don't care". In this case we can add a weighting to the algorithm, so that the important correlations are disturbed as little as possible. Assigning appropriate weights, we now get:

[Matrix: the weighted nearest correlation matrix, with the heavily weighted entries left almost unchanged]

Complete the matrix

Our final scenario occurs often in practice, especially when dealing with large matrices. Matrices might be set in 'blocks' by different business units, and need to be combined, or managers may be agnostic about the values of certain correlations, but certain about others. These are known as matrix completion problems.

Given the matrix:

[Matrix: a correlation matrix with one entry missing]

We can set the missing entry to the product of the known correlations, giving:

[Matrix: the completed matrix, with the missing entry equal to that product]

A proof that this does result in a positive definite matrix is given in Kahl & Günther (2005), which confirms the intuition that it should work. This method can be used to combine block matrices. It also suggests an alternative method of correlation matrix construction, whereby business units are asked to provide pairwise correlations with one central random variable (say GDP growth or something similar), with all other correlations being inferred.
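A minimal sketch of the completion rule (the correlations 0.8 and 0.6 are illustrative values of our own, since the article's figures are not reproduced):

    import numpy as np

    r_xy, r_yz = 0.8, 0.6          # the two known correlations (illustrative)
    r_xz = r_xy * r_yz             # complete with their product: 0.48

    C = np.array([[1.0,  r_xy, r_xz],
                  [r_xy, 1.0,  r_yz],
                  [r_xz, r_yz, 1.0]])

    # All eigenvalues are non-negative, as the Kahl & Günther result promises.
    print(np.linalg.eigvalsh(C))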


Conclusion

Correlation matrices are a key input to risk systems, and a major source of headaches. Technical actuaries and quants don't understand why business actuaries can't check that the matrices they provide are positive definite, and the business actuaries don't understand why the quants refuse to use a perfectly reasonable looking matrix. We've demonstrated a few ways out of this bind: either by finding a matrix which resembles the given matrix but which is positive definite, or by changing the method of specifying correlation matrices in the first place. These techniques are currently in use in a number of financial institutions.

This article appeared in our September 2013 issue of The Actuary.