Matthew Edwards and Matt Fletcher report on behalf of the COVID-19 Actuaries Response Group on the data and model risks in trying to make sense of the pandemic
The volume of research on COVID-19 is huge; the pre-print server medRxiv (medrxiv.org), which publishes papers before peer review, hosts around 8,000 papers, while Google shows almost 6bn related web pages. Rather than try to summarise everything, we thought it would help actuaries tackle future problems (and second waves?) if we consider what we have learned about data and model risks.
To model the pandemic, its likely outcomes and mitigation strategies, it is essential to have a reliable and consistent source of data, the two most important items being the numbers of COVID-19 cases and deaths.
Defining COVID-19 deaths has been a surprisingly thorny issue. In the UK, early in the pandemic, the data reported by the Department of Health and Social Care (DHSC) included only deaths in hospital for those who tested positive for the disease. Then, in part due to the extent of deaths in care homes, all deaths of people who’d tested positive were included. This approach worked early on, with low testing volumes and short times from test to death, but was later recognised as problematic – for example, somebody testing positive in April and run over by the proverbial bus in July would have been included as a COVID-19 death. So, in August the DHSC started to publish three numbers, based on different lengths of time since a positive test.
In contrast, the Office for National Statistics (ONS) publishes data on all deaths in England and Wales where COVID-19 was mentioned on the death certificate, regardless of where the death occurred or whether the individual had tested positive. Early on, this figure was much higher than the DHSC figure, as the coverage was greater. More recently, the ‘all deaths’ figure from DHSC has been slightly higher than the ONS equivalent.
A third result can be derived from calculating excess deaths – that is, all-cause deaths ‘now’ compared with what would have been expected. This sidesteps some of the problems noted above, but adds other uncertainties (for example, variations in deaths from completely unrelated causes), quite apart from some subjectivity in defining expected deaths.
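The arithmetic is simple enough to sketch. The figures below are entirely hypothetical, and the five-year-average baseline is just one possible choice of ‘expected’ deaths:

```python
# Illustrative excess-deaths calculation (hypothetical figures, not real data).
# The baseline here is a simple five-year average for the same weeks; in
# practice the choice of baseline is itself a source of subjectivity.

weekly_deaths = [12500, 16000, 18500]   # observed all-cause deaths, by week
baseline_5yr = [10200, 10400, 10300]    # same weeks, prior five-year average

excess = [obs - exp for obs, exp in zip(weekly_deaths, baseline_5yr)]
print(excess)       # per-week excess deaths: [2300, 5600, 8200]
print(sum(excess))  # cumulative excess over the period: 16100
```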
This highlights the importance of understanding exactly what any given figure represents, and how data definitions may have changed over time, before making use of a data source.
Getting a firm idea of the number of cases poses different, and greater, challenges than those for deaths. The most important statistics – the proportion of the population infected now, and the proportion who have ever been infected (and may therefore have immunity) – are fundamentally unknowable because we cannot feasibly test the whole population.
Cases can be estimated by surveillance testing of a representative sample (as in the UK by the COVID-19 Infection Survey). This gives useful results – particularly regarding new outbreaks – but they are indicative, with wide confidence intervals.
We can also track the number of positive cases from testing, but care is needed in interpreting and using the figures. Firstly, the number of cases will be a function of the number of tests, which has increased over time. The setting in which tests are carried out also affects the results: early in the pandemic, the majority were carried out in hospitals, so those with less severe symptoms were not tested. Consequently, the true extent of the outbreak was much greater than the reported numbers suggested.
Secondly, there have been a number of changes over time to the way the numbers of cases have been reported. In the UK, for example, there are four types of test considered (known as ‘pillars’): pillar 1 comprises tests in NHS and Public Health England laboratories; pillar 2, commercial tests; pillar 3, antibody tests; and pillar 4, surveillance tests. This led to some double-counting early on (between pillars 1 and 2), and also reduced comparability across the four nations of the UK.
These step-changes and idiosyncrasies in the figures all need to be understood and allowed for.
The reproductive rate (‘R’) of the SARS-CoV-2 virus is fundamental to its spread – if R is above 1, the infectious population grows as each infected individual passes the infection to more than one other person. If it is below 1 then the infectious population falls and the virus will eventually die out. Monitoring how R varies over time and by region, and projecting it into the near future, has also helped inform the response to outbreaks both regionally and nationally.
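The threshold behaviour can be seen in a toy generation-by-generation calculation. The figures are hypothetical, and the sketch ignores the depletion of susceptibles that eventually slows any real outbreak:

```python
# Toy illustration of why R = 1 is the critical threshold: each generation
# of infections is R times the previous one. Hypothetical figures only;
# depletion of susceptibles is ignored.

def generations(r, initial=1000, n=6):
    """Sizes of successive infection generations for a fixed R."""
    cases = [float(initial)]
    for _ in range(n):
        cases.append(cases[-1] * r)
    return [round(c) for c in cases]

print(generations(1.3))  # R > 1: each generation larger than the last
print(generations(0.7))  # R < 1: the outbreak shrinks towards extinction
```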
R can be estimated from the rate of change in numbers of cases or deaths (as well as from other indicators such as hospital admissions or emergency calls). When large numbers of deaths were occurring daily, the deaths data was the most reliable for estimating R. This gave a stable estimate, but the lag between infection and death meant it was not fully up to date. As the number of COVID-19 deaths has fallen and testing has become more widespread, the R value obtained from deaths has been more volatile, while that obtained from cases has become more stable (and more ‘recent’).
A key point is to ensure conclusions are not drawn that are not supported by the data. Models that estimate R based on growth rates, whether of deaths or cases, give robust estimates only when R is broadly stable – they cannot reliably identify dates on which significant changes occurred, primarily because of the variation in the time from infection to death. So, for example, while we can infer from the deaths data that R fell materially between early March and early April, it is not possible to pinpoint a specific ‘step-change’ date.
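In the stable regime, the link between the observed growth rate and R can be sketched as follows. The five-day mean generation time and the case figures are purely illustrative, and R ≈ exp(r·T) is only one of several approximations in use:

```python
import math

# Minimal sketch: estimate R from the exponential growth rate of a case
# series, assuming steady growth and a fixed mean generation time.
# All figures are hypothetical.

def growth_rate(cases):
    """Average daily exponential growth rate over the series."""
    return math.log(cases[-1] / cases[0]) / (len(cases) - 1)

def estimate_r(cases, generation_time=5.0):
    """Approximate R via R ~ exp(r * T) for growth rate r, generation time T."""
    return math.exp(growth_rate(cases) * generation_time)

daily_cases = [200, 230, 265, 305, 350, 402]  # roughly 15% daily growth
print(round(estimate_r(daily_cases), 2))
```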
There are many facets of model risk, but the two that have been most relevant in the pandemic are parameter risk and the risk of stakeholder misunderstanding. Here we look at parameter risk.
We outlined earlier some of the problems with estimating the numbers of COVID-19 cases and deaths. A crucial parameter in almost any pandemic model is the fatality rate, typically expressed relative to all infections: the infection fatality rate (IFR), i.e. deaths divided by infections.
In the article ‘Quantifying coronavirus’ (bit.ly/QuantifyingCovid) we noted the obvious problems around calculating a mortality rate where deaths are uncertain and infections even more so. Since then, as data collection has matured and we have better understood some of the other elements (for instance, the time from infection to death), we have reached a consensus view that the IFR is around 1%, noting recent published estimates such as those in Table 1.
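The sensitivity is easy to see in a back-of-envelope calculation: holding deaths fixed, the IFR scales inversely with the far more uncertain estimate of total infections. The figures below are hypothetical:

```python
# Back-of-envelope IFR calculation with hypothetical figures. Deaths are
# held fixed while the infections estimate varies, showing how the IFR
# inherits the uncertainty in the infection count.

deaths = 50_000
infection_estimates = {"low": 3_000_000, "central": 5_000_000, "high": 8_000_000}

for label, infections in infection_estimates.items():
    print(f"{label}: IFR = {deaths / infections:.2%}")
```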
But the ‘one figure’ rate is a potentially misleading average, because it conceals an extraordinary degree of heterogeneity. The most material factor is age, but it is now clear that many other material factors are involved.
The most comprehensive UK multivariate analysis is the OpenSafely study by Williamson et al (go.nature.com/31ZHFdu), which looked at an ‘exposed to risk’ of 17.3 million UK adults via their electronic health records and analysed which factors had contributed to the 10,926 COVID-19 deaths in that group. The hazard ratios from this study (for example, for the Index of Multiple Deprivation [IMD], the mortality effect of being in a particular IMD group compared with the ‘reference’ group) combine both morbidity effects (the likelihood of contracting the disease) and the mortality effect post-infection. Table 2 shows some of the most material factors (in addition to age).
Heterogeneity in infectiousness also has a material impact on the results of typical SIR/SEIR pandemic models. If we assume that R is constant for everyone in a particular state at any point in time, we will produce substantially misleading results if the reality is that infectiousness varies materially across individuals, whether due to different ages or different degrees of activity within age bands.
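A minimal discrete-time SIR sketch makes the homogeneity assumption explicit: every individual shares the same R0, which is precisely the simplification the caveat above is about. All parameters are hypothetical:

```python
# Minimal discrete-time SIR model with uniform mixing: a single R0 applies
# to everyone, which is the homogeneity assumption discussed above.
# Parameters are hypothetical.

def sir_attack_rate(r0=2.5, recovery=0.2, population=1_000_000,
                    infected=100, days=365):
    beta = r0 * recovery                     # daily transmission rate
    s, i, r = population - infected, infected, 0.0
    for _ in range(days):
        new_inf = beta * s * i / population  # new infections today
        new_rec = recovery * i               # recoveries today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / population                    # proportion ever infected

print(f"{sir_attack_rate():.0%} ever infected under uniform mixing")
```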
The most evident ‘miscalculation’ is the proportion of the population that needs to be infected to confer herd immunity. A recent paper by Britton et al (bit.ly/3jOYPk6) considered variations in infectiousness by age and within age bands, and concluded that realistic herd immunity percentages were around two-thirds of those corresponding to uniform infectiousness.
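As a crude numerical comparison (treating the roughly-two-thirds finding as a simple scaling factor, which oversimplifies the paper’s results):

```python
# Herd-immunity thresholds: the classical homogeneous-mixing formula
# 1 - 1/R0, alongside a crude two-thirds scaling to mimic the effect of
# heterogeneous infectiousness reported by Britton et al. The scaling
# factor is applied here purely for illustration.

def hit_homogeneous(r0):
    """Herd-immunity threshold under uniform mixing."""
    return 1 - 1 / r0

for r0 in (2.0, 2.5, 3.0):
    uniform = hit_homogeneous(r0)
    heterogeneous = uniform * 2 / 3  # illustrative scaling only
    print(f"R0 = {r0}: uniform {uniform:.0%}, heterogeneous ~{heterogeneous:.0%}")
```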
While the above has drawn out the main learnings regarding data and modelling in a mortality context, the biggest lessons go well beyond mortality. The economic shock has had far more impact on insurers’ balance sheets than the mortality shock, while operational disruption has hit staff and customers hard. Other effects are still to emerge – for instance, the mental health consequences. Many actuaries are trying to evaluate what these long-term impacts may be, on top of the direct 2020 effects.
Matthew Edwards is an actuary and director at Willis Towers Watson, co-lead of the COVID-19 Actuaries Response Group and chair of the CMI
Matt Fletcher is a senior consultant in Aon’s Demographic Horizons team and a member of the COVID-19 Actuaries Response Group