Hens Steehouwer discusses how the frequency domain sheds light on correlations over different time periods
Actuaries have to deal with time-series processes: how, for example, yield curves, investment returns, mortality rates, lapses or insurance claims develop over time. They have a toolkit of stochastic models at their disposal to analyse and model these processes for the applications at hand. Typically, these models look at time-series processes from a ‘time domain’ angle. However, it is also possible to look at time-series processes from a ‘frequency domain’ angle.
Actuaries are not typically familiar with this other toolkit and the additional insights and modelling benefits it can bring. All frequency domain techniques are founded on the Fourier transform. With the Fourier transform, any time-series {xt, t = 0,…,T-1} can be written as a sum of cosine functions:

xt = Σj Rj cos(ωjt + ϕj), j = 0,…,T-1, with ωj = 2πj/T
The parameters {Rj, ωj and ϕj, j = 0,…,T-1} represent the amplitudes, frequencies and phases of T cosine functions that together offer an alternative representation of the time-series xt. An important property of this frequency domain representation is

var(xt) = Σj Rj²/2
If we assume the time-series xt to have an average value of zero, then this relation tells us that the frequency domain representation decomposes the total variance of the time-series into the squared amplitudes of the set of cosine functions. The higher the Rj for a certain ωj, the more this frequency contributes to the total variance of the time-series. This is visible in the so-called periodogram of a time-series, which plots the variance contribution of each frequency and thereby shows the relative importance of different frequencies for the total variance.
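As a minimal illustration of these two relations, the following Python sketch (using only NumPy; all names are our own) computes a periodogram via the discrete Fourier transform and verifies that the variance contributions per frequency add up to the sample variance of the series:

```python
import numpy as np

def periodogram(x):
    """Variance of x attributed to each Fourier frequency.

    Returns the frequencies (in cycles per observation) and the variance
    contribution Rj^2/2 of each cosine component, which together sum to
    the sample variance of x.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    x = x - x.mean()                      # work with a zero-mean series
    X = np.fft.rfft(x)                    # DFT at frequencies j/T, j = 0..T//2
    amp = 2.0 * np.abs(X) / T             # amplitudes Rj of the cosine terms
    contrib = amp**2 / 2.0                # variance contribution per frequency
    contrib[0] = (np.abs(X[0]) / T)**2    # the j = 0 term is not doubled
    if T % 2 == 0:
        contrib[-1] = (np.abs(X[-1]) / T)**2   # nor is the Nyquist term
    freqs = np.arange(len(X)) / T
    return freqs, contrib

# Example: a noisy annual cycle in 20 years of monthly data
rng = np.random.default_rng(0)
t = np.arange(240)
x = np.cos(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(240)
freqs, pg = periodogram(x)
print("sample variance:     ", x.var())
print("sum over frequencies:", pg.sum())            # the same number
print("dominant frequency:  ", freqs[pg.argmax()])  # ~1/12 cycles per month
```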
If you were to calculate the periodogram for different samples from a stochastic time-series process, you would obtain periodograms with different shapes. Doing this for a great number of samples of sufficient length, and averaging the periodograms over all these samples, results in the spectral density, or auto-spectrum, of the stochastic process. A spectral density describes the expected distribution of the variance of the process over periodic fluctuations with a continuous range of frequencies. The word ‘spectrum’ comes from the analogy of decomposing white light into colours with different wavelengths. The word ‘density’ comes from the analogy with a probability density function.
A probability density function describes the distribution of a probability mass of one over some domain, while a spectral density describes the distribution of a variance mass over a range of frequencies. The concept of spectral densities generalises to a multivariate setting. In the form of coherence and phase spectra, conventional correlations can be decomposed across frequencies into a phase shift (a move forwards or backwards in time) and the maximum attainable correlation after such a phase shift.
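To make the averaging construction concrete, the sketch below (our own illustration, using NumPy and SciPy; the process and sample sizes are arbitrary choices) averages periodograms over many simulated samples of a first-order autoregressive process and compares the result with the theoretical spectral density of that process:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
phi, T, n_samples, burn = 0.8, 512, 500, 100

avg = 0.0
for _ in range(n_samples):
    eps = rng.standard_normal(T + burn)
    # AR(1) recursion x[t] = phi * x[t-1] + eps[t], burn-in discarded
    x = signal.lfilter([1.0], [1.0, -phi], eps)[burn:]
    f, pxx = signal.periodogram(x)       # one-sided, 'density' scaling
    avg = avg + pxx
avg /= n_samples

# Theoretical one-sided AR(1) spectral density in the same scaling
theory = 2.0 / (1.0 - 2.0 * phi * np.cos(2 * np.pi * f) + phi**2)
for i in (1, 64, 256):
    print(f"freq {f[i]:.4f}: average periodogram {avg[i]:7.2f}, "
          f"theory {theory[i]:7.2f}")
```

The more samples are averaged, the closer the average periodogram settles on the smooth theoretical spectrum.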
A spectrum in the frequency domain and the auto-covariances in the time domain contain the same information about the dynamics of a stochastic process. Neither can give information that cannot be derived from the other. The difference is how the information is presented. Nevertheless, a key benefit of a frequency domain perspective is that its tools are very powerful for understanding the dynamic behaviour in historical time-series data, as well as for analysing the dynamic properties of stochastic models. If there is one thing we know about the behaviour of time-series that are relevant to actuaries, it is that they move up and down all the time. Frequency domain techniques are the most natural way of analysing how they move up and down. What types of fluctuations dominate the behaviour of a variable, and what are the correlations and lead/lag relations with other variables at the various speeds of fluctuations?
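This equivalence is easy to verify numerically: the periodogram of a sample equals the Fourier transform of its (circular) sample autocovariances. A minimal check, on arbitrary synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x = rng.standard_normal(T).cumsum()       # an arbitrary time-series
x = x - x.mean()

# Route 1: the periodogram, straight from the data
I1 = np.abs(np.fft.fft(x))**2 / T

# Route 2: the Fourier transform of the circular sample autocovariances
acov = np.array([np.dot(x, np.roll(x, k)) / T for k in range(T)])
I2 = np.fft.fft(acov).real

print(np.allclose(I1, I2))                # True: same information, two forms
```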
“A frequency domain perspective is very powerful for understanding the dynamic behaviour in historical time-series data”
The leakage effect
Frequency domain techniques are not in widespread use within actuarial science. One reason is that conventional frequency domain techniques require large amounts of data to work well, and in most non-experimental sciences, such volumes of data are simply not available. If these conventional techniques are applied to time-series of limited sample size, the spectral density estimates will be disturbed and less informative. Such disturbances are caused by the leakage effect.
The leakage effect can best be understood by thinking of the Fourier transform of a perfect cosine function of some frequency. Obviously, in the periodogram of this cosine function, 100% of the variance should be located at the specific frequency of the cosine function. However, if one only has a limited sample of the cosine function available, this turns out not to be the case. Instead, the periodogram will show that a part of the variance at the specific frequency has ‘leaked away’ to surrounding frequencies. As the sample size increases, the leakage effect decreases and the periodogram gets better and better at revealing the identity of the time-series by locating a larger and larger portion of the variance at the specific frequency of the cosine function. Fortunately, if one looks carefully, there are special (parametric) versions of frequency domain techniques that are especially adapted to (also) work well on short sample time-series data. Key to these techniques is their use of smart algorithms that avoid the disturbing leakage effect.
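The effect is easy to reproduce. In the sketch below (our own illustration; the frequency and the band width are arbitrary choices), the periodogram of a finite cosine sample is computed for increasing sample sizes, with the cosine frequency deliberately placed between two Fourier grid frequencies, and the share of the variance located close to that frequency is reported; the share grows towards 100% as the sample lengthens, which is the leakage effect shrinking:

```python
import numpy as np
from scipy import signal

target = 0.1                                  # cycles per observation
for T in (64, 512, 4096):
    # place the frequency half a bin off the Fourier grid (worst case)
    freq = (np.round(target * T) + 0.5) / T
    x = np.cos(2 * np.pi * freq * np.arange(T))
    f, pxx = signal.periodogram(x)
    near = np.abs(f - freq) < 0.01            # a fixed narrow band
    share = pxx[near].sum() / pxx.sum()
    print(f"T = {T:4d}: {100 * share:5.1f}% of the variance within "
          f"0.01 of the cosine frequency")
```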
An economic scenario generator example
There are potential benefits to using frequency domain techniques in the context of economic scenario generators. For example, for economic capital, asset liability management or own risk and solvency assessment purposes, it is essential that scenario generator models can capture correlation structures between economic and financial market variables in a realistic and robust way. Unfortunately, correlations are very complex because they have many dimensions. As a result, calibrating scenario generator models to realistic correlation structures can be difficult and time-consuming, especially as the dimensions of the models (for example the number of economies and asset classes) increase.
One dimension of correlations is the horizon dimension: correlations can be different depending on the investment horizon. We typically see that correlations tend to increase as the investment horizon extends. More generally, we see a so-called ‘term structure of risk and return’, which means that not only correlations, but also expected returns, volatilities and distributional shapes can vary with the investment horizon. Scenario generator models are typically calibrated for a specific application and a specific investment horizon. If risk at multiple horizons matters across applications, as it does in many organisations, practitioners often have to resort to multiple model calibrations, each one targeted at the correlations at a particular investment horizon.
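A stylised numerical example of this horizon dimension (our own, with arbitrary parameters): two return series that share a small but persistent common component are nearly uncorrelated at a monthly horizon, yet clearly correlated at multi-year horizons:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
T = 120_000                                   # 'monthly' observations

# Persistent common component (AR(1)) plus independent monthly noise
common = signal.lfilter([0.002], [1.0, -0.98], rng.standard_normal(T))
ret_a = common + 0.05 * rng.standard_normal(T)
ret_b = common + 0.05 * rng.standard_normal(T)

for h in (1, 12, 60):                         # horizon in months
    n = T // h
    # non-overlapping h-month returns
    a = ret_a[: n * h].reshape(n, h).sum(axis=1)
    b = ret_b[: n * h].reshape(n, h).sum(axis=1)
    print(f"horizon {h:3d} months: correlation {np.corrcoef(a, b)[0, 1]:.2f}")
```

The monthly noise dominates short-horizon returns, while the persistent common component accumulates and dominates long-horizon returns, so the measured correlation rises with the horizon.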
Although understandable, such a partial approach to correlation modelling is inefficient and inconsistent. It also increases the risk of inconsistent decisions being taken throughout an organisation. If multiple models are used, each calibrated to different correlation targets, there is no unifying underlying model that aggregates these correlation structures in a consistent way.
Trend-cycle decompositions
To see how frequency domain techniques can help solve this problem, we have to introduce the concept of trend-cycle decompositions into a scenario generator modelling framework. An important application of the frequency domain toolkit is to perform such decompositions by filtering time-series into components that correspond to non-overlapping frequency bands in the spectral densities. A simple and well-known filter is the first-order differencing filter, which transforms a time-series xt into a time-series yt = xt – xt-1. To see what such a filter does to a spectral density, one can look at the frequency response function of the filter. This shows that a first-order differencing filter suppresses the variance of low-frequency fluctuations in a time-series (ie removes the trend) and strongly enhances the variance of high-frequency fluctuations. The first-order differencing filter also shifts these fluctuations back in time (see the sketch below).

With more sophisticated frequency domain-based filters, it is possible to decompose time-series into a trend, a cyclical and an irregular component that split up the total variance more accurately and without inducing shifts in time. Figure 1 shows such a decomposition, splitting a time-series of monthly observations of a long-term UK government bond yield since 1900 into three components: trend, cycle and irregular. An advantage of such decompositions in terms of stochastic modelling is that the (orthogonal) components as shown in Figure 1 are, by construction of the filter, uncorrelated.
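For the first-order differencing filter, the frequency response can be inspected directly. A minimal sketch (using scipy.signal.freqz; the chosen frequencies are arbitrary) confirms that slow fluctuations are suppressed, fast fluctuations amplified, and a phase shift induced:

```python
import numpy as np
from scipy import signal

# Frequency response of y_t = x_t - x_{t-1}: filter coefficients b = (1, -1)
omegas = np.array([0.05, 0.5, np.pi / 2, np.pi])   # radians per observation
w, h = signal.freqz([1.0, -1.0], [1.0], worN=omegas)

for wi, hi in zip(w, h):
    print(f"period {2 * np.pi / wi:6.1f} obs: variance factor "
          f"{np.abs(hi)**2:6.3f}, phase shift {np.angle(hi):+5.2f} rad")
```

The variance factor far below one at long periods is the trend removal; the factor of four at the two-observation period is the strong enhancement of high-frequency fluctuations.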
Such trend, cycle and irregular components of interest rates can be shown to approximately generate the returns of corresponding bond indices at different frequencies. More specifically, the returns generated by these components can approximately be interpreted as the decade (trend), annual (cycle) and monthly (irregular) returns of a corresponding bond index. By performing the same decomposition for all relevant economic and financial market variables, these components allow a scenario generator model to ‘anchor’ on the correlations for returns of different frequencies and horizons. To do so, for each variable, three time-series (rather than one) feed into a scenario generator model: a trend, a cycle and an irregular time-series. This robust way of capturing the ‘term structure of risk and return’ supports the use of a single multi-horizon calibration of a scenario generator model and thereby adds to the efficiency and consistency of enterprise-wide risk management.
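The specific filters behind Figure 1 are not reproduced here, but the principle of splitting a series into uncorrelated components on non-overlapping frequency bands can be sketched with a simple FFT-based band split (our own illustration; the band edges and the synthetic input series are arbitrary choices):

```python
import numpy as np

def band_split(x, edges=(1 / 120, 1 / 24)):
    """Split x into three components whose variance sits below, between
    and above the two frequency edges (in cycles per observation)."""
    T = len(x)
    X = np.fft.rfft(x - x.mean())
    f = np.arange(len(X)) / T                 # cycles per observation
    bands = [(0.0, edges[0]), (edges[0], edges[1]), (edges[1], 0.51)]
    return [np.fft.irfft(np.where((f >= lo) & (f < hi), X, 0.0), n=T)
            for lo, hi in bands]

rng = np.random.default_rng(4)
x = rng.standard_normal(600).cumsum()         # stand-in for a monthly yield series
trend, cycle, irregular = band_split(x)

print("components add up:", np.allclose(trend + cycle + irregular, x - x.mean()))
corr = np.corrcoef([trend, cycle, irregular])
print(f"largest |correlation| between components: "
      f"{np.abs(corr - np.eye(3)).max():.1e}")
```

Because the three components live on disjoint frequency bands, their sample correlations are zero up to floating-point error, mirroring the orthogonality of the components in Figure 1.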
“There are special versions of frequency domain techniques that are especially adapted to work well on short sample time-series data”
Combining time and frequency domain techniques
Frequency domain techniques can also be combined with more conventional time-series modelling techniques to produce a powerful combined toolkit. The correlations between components as in Figure 1 are zero – but per component, the correlations between economic and financial market variables are not zero. And in high-dimensional cases of multi-economy, multi-asset class scenario generator models, it is challenging to capture these correlations and the corresponding dynamics without overfitting. A classical time-series modelling solution to this problem is to use dynamic factor models (DFMs). DFMs are conventionally applied to ordinary time-series, but they can be applied equally well per component or frequency band.
Figure 2 shows the first principal component analysis (PCA) factors from standardised trend, cycle and irregular components of hundreds of economic and financial market variables. In this two-step approach, one first filters every individual time-series into a trend, cycle and irregular component and then, as a second step, calculates PCA factors on the hundreds of trend, cycle and irregular component time-series separately (a stylised sketch of this two-step approach is given below). In these PCA factors it is, for example, easy to recognise the dating of familiar business cycle troughs, as well as more short-term market crashes. These and higher-order (orthogonal) PCA factors can power dedicated DFM specifications per frequency band to produce a robust, efficient and consistent scenario generator modelling framework across investment horizons, economies and asset classes combined. More information on frequency domain approaches can be found at bit.ly/37W3wWC
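To close, here is a stylised sketch of the two-step approach (our own illustration on synthetic data, reusing the simple FFT band split from earlier and scikit-learn's PCA; it is not the implementation behind Figure 2):

```python
import numpy as np
from sklearn.decomposition import PCA

def band_split(x, edges=(1 / 120, 1 / 24)):
    """Split x into trend, cycle and irregular frequency-band components."""
    T = len(x)
    X = np.fft.rfft(x - x.mean())
    f = np.arange(len(X)) / T
    bands = [(0.0, edges[0]), (edges[0], edges[1]), (edges[1], 0.51)]
    return [np.fft.irfft(np.where((f >= lo) & (f < hi), X, 0.0), n=T)
            for lo, hi in bands]

rng = np.random.default_rng(5)
T, n_series = 600, 50
panel = rng.standard_normal((T, n_series)).cumsum(axis=0)  # synthetic variables

# Step 1: decompose every series; collect one panel per component
names = ("trend", "cycle", "irregular")
components = {name: np.empty((T, n_series)) for name in names}
for i in range(n_series):
    for name, comp in zip(names, band_split(panel[:, i])):
        components[name][:, i] = comp

# Step 2: a separate PCA per component on the standardised panels
for name, data in components.items():
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    pca = PCA(n_components=1).fit(z)
    print(f"{name:9s}: first PCA factor explains "
          f"{100 * pca.explained_variance_ratio_[0]:.0f}% of variance")
```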
Hens Steehouwer is head of research at Ortec Finance.