Tim Harford speaks to Ruolin Wang about why it’s so important to slow down and question things, from emotive headlines to the numbers and algorithms we use in our work
Tim Harford is, in his words, a “professional nerd communicator”. He can be found on the radio, in the Financial Times or in the bookshop. A common thread throughout his work is helping us, nerd or not, to understand the world – and his latest book, How to Make the World Add Up, offers 10 rules that aim to do just that.
Rule one: Search your feelings
Users of social media will be no strangers to the role that emotions can play in processing information. “Certainly, my tweets that get the most engagement and the most retweets are things that annoy people,” Harford admits. “Either I annoy people or, more likely, I have written about something that annoys people. And if I just write something that’s super calm and measured, it doesn’t get as much traction.”
The good news is that the very recognition of our emotions can be enough to break their spell and help us discern information more objectively. One indication comes from research by David Rand and Gordon Pennycook. “Just asking people to first evaluate the truth of a headline, before giving them some other task involving retweeting, makes a huge difference,” Harford explains. “Their behaviour becomes more focused on truth for the next couple of days.”
For Harford, this is cause for optimism and signals that good statistical thinking doesn’t have to be heavily technical. “If you get some basics right, get the right kind of context and ask the right kind of questions, and, in particular, if you calm down, there’s never been a better time to understand the world through numbers.
“So I’ll just keep writing in a calm, measured way as much as possible. In the long run, you can’t corrupt your communication in order to chase the headlines and chase the retweets. There is a tremendous value in trying to be impartial, objective and calm, and giving people the information they need.”
Rule three: Avoid premature enumeration
Actuaries might be well-trained to keep a cool head when it comes to numbers, but we can be susceptible to other pitfalls.
“Premature enumeration means charging into the analysis before you’ve stopped to ask: where does the number come from? What does it mean?” Harford explains. “It’s very common to see people furiously arguing over a number without having explored what the definition of the number is. This is something that affects more quantitatively minded people.”
For example, what should we make of the claim that ‘a new study shows children who play violent video games are more likely to be violent in reality’? Harford dissects the claim to demonstrate that its meaning is not at all clear.
“What is ‘playing video games’? Is it playing them once? Or playing them a lot? What is a violent video game? Pac-Man eats living beings – well, ghosts. Is that a violent video game? And violent behaviour, how is that being measured? In a lab? A questionnaire? Criminal records?
“When we see a claim like ‘playing violent video games increases violent behaviour by 25%, say scientists’, we can argue about the 25% all we like – but if I don’t know what a violent video game is and I don’t know what violent behaviour is, who cares about the 25%?”
One of the many drivers behind the 2007–08 financial crisis, Harford argues, was premature enumeration. “There’s a risk, and we’re trading the risk, measuring the risk and repackaging the risk. And we are slicing and dicing the risk into different tranches, and then trading those risks. And then maybe repackaging again.
“Amid all of this, what’s lost in many cases is the sense of: what actually is risk? What are we really measuring? Where did that number come from? How stable is it? Is it fit for the purpose that we’re using it for?
“And the answer is: we don’t know where it came from. It’s not stable. It’s not fit for the purpose we’re using it for, but we are deep into our very sophisticated modelling and it’s all built on a concept that we haven’t stopped to understand.”
Harford believes that the solution is not to stop modelling, but to avoid chasing numbers in a blinkered fashion. “Measurements are always imperfect. The moment the measure is also a target, the measure is going to start to degrade. In economics, this is called Goodhart’s law. Psychologists call it Campbell’s law. Everyone knows it but we just keep doing it.”
Rule seven: Demand transparency when the computer says no
One topic at the centre of actuarial discourse is the rise of complex algorithms, and how actuaries can harness their power responsibly.
“What we should be doing is judging the algorithms by results,” Harford says. “What do we want them to achieve? Are they doing what we want? Do we have proper oversight? Do we have reasonable transparency? Do we have proof of efficacy?”
He highlights that better outcomes won’t necessarily come from ever more powerful algorithms; they also depend on proper use of those algorithms. “There is one example, in the 1980s, of a diagnostic tool to help doctors diagnose nonspecific lower abdominal pain. Nonspecific lower abdominal pain could be anything from appendicitis to pregnancy. Sir David Spiegelhalter was involved in evaluating this, and he said two interesting things about this algorithm.
“First, it’s rubbish. It doesn’t provide good diagnosis. Second, in a randomised controlled trial, the doctors who used the algorithm treated their patients best. These patients had better outcomes than patients of the doctors without the algorithm.”
Why did a “rubbish” algorithm lead to better results? “Because the algorithm structured the process of asking questions,” Harford explains. “The doctors didn’t miss any questions, and the algorithm got the doctors to think about things that they might have missed.”
Conversely, a good algorithm can lead to negative outcomes if used inappropriately. “There was a predictive policing algorithm in Chicago designed to help the police understand who was likely to be shot next after somebody had been shot – particularly in networks of gangs. The algorithm was apparently very good. But what the police did was to use it as a list of people they could arrest. They thought, if somebody’s at risk of getting shot, that means they are involved in the gangs and could be arrested.
“In the first case, you have a bad algorithm leading to better medical care. In the second case, a good algorithm – in the sense that it was accurate – was enabling abusive policing.”
Referencing the American mathematician Cathy O’Neil, Harford suggests that professionalising data science may promote more responsibility in this area. “As part of a profession, there would be a support group outside of your employer,” he says. “There would be people you could talk to who understand the technical side of what you’re doing and might be able to offer support or moral advice.”
Growing actuarial presence
During the past century, economists have moved beyond studying economies and ventured into such diverse fields as quantitative finance, politics and psychology – a phenomenon sometimes described as ‘economics imperialism’. What wisdom might Harford, an economist with wide-ranging interests, offer to actuaries who are looking to branch out from the traditional domains of pensions and insurance?
“It’s important to engage in a productive way with people who are already in those fields,” he says. “Not so that you absorb their groupthink, but so that you understand what they’ve done and what they haven’t done.”
He illustrates this caution with an example. “About 15 years ago, some economists started getting into epidemiology. They designed their models and proved their results. The study was published in an economics journal. But anybody – certainly the epidemiologists – could do the maths. They were saying: ‘We’ve known this since 1970. What’s useful about this?’
“I asked the editor of the economics journal: ‘Why didn’t you get an epidemiologist to peer review it?’ And he said: ‘Why would an epidemiologist volunteer to peer review an economics journal? There’s no incentive for them to do that.’ For me, this gives really interesting insight into how difficult it can be to collaborate when you’re entering a new field.”
The bottom line? “Go for it!” he says. “When you bring one set of tools and skills into a different field, that’s when the excitement happens.”