Measure for measure

Wednesday 30th July 2014

A number of common misconceptions prevent the optimal use of data, argues Douglas Hubbard


It's been confirmed many times in many fields: even simple statistical models outperform human experts in a variety of forecasts and decisions. 

In a meta-study of 150 studies comparing expert judgment with statistical models, the models clearly outperformed the experts in 144 studies (Meehl, 1975). More recently, Philip Tetlock undertook a giant study to track more than 82,000 forecasts of 284 experts over a 20-year period. From this, Tetlock could confidently state: "It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones." 

Many other researchers have studied the intuitions that even fairly sophisticated managers have about interpreting data, and they have concluded that "these intuitions are wrong in fundamental respects" (Tversky & Kahneman, 1971). And far more research than I can cite here concludes that we are simply better off 'doing the math'.

But there are unfortunate misconceptions that keep many managers from using powerful quantitative methods on key problems. Perhaps first and foremost among them is the belief that some things simply aren't measurable and that, therefore, we can't use a quantitative model to improve our judgment. For most of my 25 years in consulting, I've worked on seemingly intractable measurement problems. In 2007, I wrote my first book, How to Measure Anything, to address the common misconceptions I encountered. Among my clients in several industries, I find that even highly educated professionals are susceptible to at least some of these erroneous beliefs.

I once met a vice-president of IT at a large insurance company who told me: "Doug, the problem with IT is that it's risky… and there is no way to measure risk." I responded: "What do you mean there is no way to measure risk? You work for an insurance company." I believe I observed a man having an epiphany at that very moment. We happened to be in a part of the building surrounded by actuaries, and he must have only then realised the irony of an opinion he may have held for quite some time.

This is not an isolated story, and risk is not the only thing that seems, to some people, to defy measurement. Even actuaries succumb to the belief that some important things they might like to know are nevertheless immeasurable.

I believe that the insistence that some things cannot be measured, especially some of the most important issues, is sand in the gears of the entire economy, as well as a detriment to decisions in government, medicine, justice, military operations, the environment and many other areas of our lives. Big decisions are not as well informed as they could be, because values like quality, brand, collaboration, innovation and even risk are routinely dismissed as immeasurable. You have probably accepted more risk of decision error yourself because you incorrectly believed something was immeasurable, and you have doubtless been affected by others' failures to measure in business and government.

The fact is that I haven't found a real 'immeasurable' yet. In the past several years, my staff and I have developed measures of the risk of a mine flooding, drought resilience in the Horn of Africa, the market for new laboratory devices, the risks of cyber attacks and the value of industry standards, to name a few. In each case, something was perceived to be virtually impossible to measure and yet we showed that practical observations could be used to inform decisions. There are recurring reasons why things are perceived to be immeasurable, and all of them are mistaken. Here are a few.


Measurement doesn't mean what you think it does

Measurement is often perceived to be some 'exact' point value, but scientists effectively treat it as observations that reduce uncertainty about a quantity. This is the definition I use, and it is the most relevant use of the term in decision-making. Suppose you have a wide range of possible values for a quantity: say, the adoption rate of a new technology, the percentage of people with a certain medical condition, or the awareness of your brand in China. Then suppose you take a few observations, such as survey samples or controlled experiments, followed by some (usually trivial) maths, and now your range is narrower. This constitutes a measurement even though an 'exact' value is never reached.
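As a minimal sketch of what that 'trivial maths' can look like (an illustration with invented survey numbers and an assumed beta-binomial model, not a specific method described in this article), a small sample visibly narrows a wide prior range for an adoption rate:

```python
# Illustrative only: a wide prior range for a technology adoption rate
# narrows after a small survey, under an assumed beta-binomial model.
from scipy.stats import beta

prior = beta(2, 2)  # weak prior: the rate could plausibly be almost anywhere
print(f"Prior 90% range: {prior.ppf(0.05):.2f} to {prior.ppf(0.95):.2f}")

adopters, surveyed = 6, 20  # invented survey result
posterior = beta(2 + adopters, 2 + surveyed - adopters)
print(f"Posterior 90% range: {posterior.ppf(0.05):.2f} to {posterior.ppf(0.95):.2f}")
```

The range does not collapse to a point, but it is appreciably narrower than before the survey, which is all a measurement needs to do.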

We use these new ranges to populate Monte Carlo simulations, to compute how much the remaining uncertainty affects risk, and to determine whether further measurements are justified. 
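A hedged sketch of that kind of simulation, with invented variable names and 90% ranges standing in for real expert estimates rather than any actual client model:

```python
# Minimal Monte Carlo sketch: propagate 90% ranges for a few uncertain
# inputs into a distribution of outcomes and read off the risk of a loss.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def from_90_range(low, high):
    """Normal draws whose 5th and 95th percentiles match a stated 90% range."""
    mean = (low + high) / 2
    sd = (high - low) / 3.29  # a 90% interval spans about 3.29 standard deviations
    return rng.normal(mean, sd, N)

units_sold = from_90_range(1_000, 5_000)      # invented expert ranges
margin_per_unit = from_90_range(20, 60)
fixed_cost = from_90_range(80_000, 120_000)

profit = units_sold * margin_per_unit - fixed_cost
print(f"Probability of a loss: {np.mean(profit < 0):.1%}")
print("90% range for profit:", np.percentile(profit, [5, 95]).round(0))
```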


Define exactly what you are talking about

When clients ask me to measure image, collaboration, productivity, or the value of an improved office environment, I first ask them what, specifically, they mean. I ask: "What do you see when you see more of this thing?" After some prompting, they start to identify a few examples they have observed. Perhaps when they imagine improved 'collaboration', they imagine that people from different areas of the firm talk more. That's at least observable.

Perhaps they also mean that product development teams reach goals faster. Good, that's observable too. Then I ask: "Why do you care?" Measuring collaboration, for example, may help to identify the teams that are more likely to succeed in projects, so that managers can intervene when necessary. Being less ambiguous is a big help in measurement.


You need less data than you think you do

A manager might say: "It would be great if we could measure this, but we just don't have enough data." This claim is usually unsupported by any actual calculation, and managers often seriously underestimate how much uncertainty reduction they get from a small amount of data. We use Bayesian methods to update prior uncertainties with new data and, when these methods are used, managers are often surprised that even paltry data has some impact on their decisions.

Many managers believe that when we have a lot of uncertainty, we need a lot of data to measure it but, mathematically speaking, just the opposite is true. When you have a lot of uncertainty, you get a large uncertainty reduction from the first few observations. 

I often say that when you know almost nothing, almost anything will tell you something.
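A rough back-of-the-envelope illustration of why (a standard statistical fact rather than a calculation from the article): the width of a confidence interval for a mean shrinks roughly in proportion to 1/sqrt(n), so the first handful of observations removes far more uncertainty than the next hundred.

```python
import math

# Relative width of a confidence interval for a mean, which shrinks roughly
# in proportion to 1/sqrt(n): early observations do most of the work.
for n in [1, 2, 5, 10, 30, 100, 1000]:
    print(f"n = {n:4d}   relative interval width = {1 / math.sqrt(n):.2f}")
```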


You have more data than you think you do

Managers also greatly underestimate how much informative data they have access to. This sometimes comes in the form of the 'uniqueness fallacy': the belief that only similar or even nearly identical examples are informative. Managers may claim that they can't make estimates about implementing a new technology because it is so unique, even if they have a long history of implementing new technologies.

Using that same logic, an insurance company couldn't compute a life insurance premium because a given person is unique and because he/she hasn't died yet. In fact, insurance actuaries know how to extrapolate from larger, more heterogeneous populations. Big data, social media, mobile phones and personal measurement devices are making the "we don't have enough data" excuse much harder to justify.


Information has computable value, and results can surprise

In our decision models, we routinely compute the value of information using some decades-old methods from decision theory. This usually generates surprising results for a client. The high-payoff measurements are not what they would have otherwise measured. The things they would have spent more time measuring are those that are less likely to improve decisions. 

This is a pervasive phenomenon we've observed in many industries, and we call it the 'measurement inversion'. It is not just that people measure the wrong things; they measure almost exactly the wrong things.

A list of measurement categories sorted by how much attention they get historically would not just be different from a list sorted by information values. It would be nearly exactly inverted. I honestly can't imagine how this would not affect the profit of any company or the GDP of any country. 

This simple set of calculations also shows that we need to measure relatively few things even in a big decision model with over a hundred variables (common in our business).
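As a hedged sketch of the kind of decision-theory calculation involved (the decision and the payoff distribution are invented for illustration, and this is not the firm's actual model), the classic expected value of perfect information (EVPI) for a simple go/no-go decision can be estimated by Monte Carlo:

```python
# Sketch of a value-of-information calculation: expected value of perfect
# information (EVPI) for a go/no-go decision, estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
payoff = rng.normal(50_000, 120_000, 1_000_000)  # uncertain net payoff of 'go'

# Best we can do now: commit to 'go' if the expected payoff is positive,
# otherwise 'no-go' (worth zero).
value_with_current_info = max(payoff.mean(), 0.0)

# With perfect information we would only go ahead when the payoff is positive.
value_with_perfect_info = np.maximum(payoff, 0.0).mean()

evpi = value_with_perfect_info - value_with_current_info
print(f"Expected value of perfect information: about {evpi:,.0f}")
# Variables whose information value is near zero are not worth measuring
# further, which is why so few variables in a large model need attention.
```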

You can, in fact, measure anything if you get past some common and deeply entrenched misconceptions. Corporations, governments and even the average voter and consumer will be better off once this is understood.


Douglas Hubbard is the author of How to Measure Anything: Finding the Value of Intangibles in Business

This article appeared in our August 2014 issue of The Actuary.