A number of common misconceptions prevent the optimal use of data, argues Douglas Hubbard

It's been confirmed many times in many fields: even simple statistical models outperform human experts in a variety of forecasts and decisions.
In a meta-study of 150 studies comparing expert judgment with statistical models, the models clearly outperformed the experts in 144 studies (Meehl, 1975). More recently, Philip Tetlock undertook a giant study to track more than 82,000 forecasts of 284 experts over a 20-year period. From this, Tetlock could confidently state: "It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones."
Many other researchers have studied the intuitions that even fairly sophisticated managers have about interpreting data, and they have concluded that "these intuitions are wrong in fundamental respects" (Tversky and Kahneman, 1971). And far more research than I can cite here concludes that we are simply better off 'doing the math'.
But unfortunate misconceptions keep many managers from using powerful quantitative methods on key problems. Perhaps first and foremost among these is the belief that some things simply aren't measurable and that, therefore, we can't use a quantitative model to improve our judgment. For most of my 25 years in consulting, I've worked on seemingly intractable measurement problems. In 2007, I wrote my first book, How to Measure Anything, to address the common misconceptions I encountered. Among my many clients in several industries, I find that even educated professionals are susceptible to at least some of these erroneous beliefs.
I once met a vice-president in IT at a large insurance company who told me: "Doug, the problem with IT is that it's risky and there is no way to measure risk." I responded: "What do you mean there is no way to measure risk? You work for an insurance company." I believe I observed a man having an epiphany at that very moment. We happened to be in a part of the building where we were surrounded by actuaries, and he must only then have realised the irony of an opinion he may have held for quite some time.
This is not an isolated story and risk is not the only thing that seems to defy measurement to some people. Even actuaries succumb to the belief that some important things they might like to know are nevertheless immeasurable.
I believe that the insistence that some things cannot be measured - especially some of the most important issues - is sand in the gears of the entire economy, as well as a detriment to decisions in government, medicine, justice, military operations, the environment and many other areas of our lives. Big decisions are not as informed as they could be, because values like quality, brand, collaboration, innovation and even risk are routinely dismissed as immeasurable. You yourself have probably accepted more risk of decision error because you believed, incorrectly, that something was immeasurable, and you have doubtless been affected by others' failures to measure in business and government.
The fact is that I haven't found a real 'immeasurable' yet. In the past several years, my staff and I have developed measures of the risk of a mine flooding, drought resilience in the Horn of Africa, the market for new laboratory devices, the risks of cyber attacks and the value of industry standards, to name a few. In each of these cases, something was perceived to be virtually impossible to measure and, yet, we showed that informative observations could be made and used to support decisions. There are recurring reasons why things are perceived to be immeasurable, and all of those reasons are mistaken. Here are a few of them.
Measurement doesn't mean what you think it does
Measurement is often perceived to be some 'exact' point value. But scientists effectively treat it as observations that reduce uncertainty about a quantity. This is the definition I use, and it is the most relevant use of the term in decision-making. Suppose you have a wide range of possible values for a quantity - say, the adoption rate of a new technology, the percentage of people with a certain medical condition, or the awareness of your brand in China. Then suppose you take a few observations, such as survey samples or controlled experiments, followed by some (usually trivial) maths, and now your range is narrower. This constitutes a measurement even though an 'exact' value is never reached.
We use these new ranges to populate Monte Carlo simulations, to compute how much the remaining uncertainty affects risk, and to determine whether further measurements are justified.
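The idea can be sketched in a few lines of code. The numbers here are entirely hypothetical: a wide prior range for a technology's adoption rate is narrowed by a small survey, using simple rejection sampling to approximate the Bayesian update - a minimal illustration of 'observations that reduce uncertainty', not the author's actual toolkit.

```python
import math
import random

random.seed(42)

# Hypothetical prior: adoption rate is somewhere between 5% and 60%.
# Hypothetical observation: 9 of 30 surveyed customers say they would adopt.
successes, n = 9, 30

def binom_pmf(k, n, p):
    """Probability of seeing k adopters in n respondents if the true rate is p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Rejection sampling: keep prior draws in proportion to how well
# they explain the observed survey result.
peak = binom_pmf(successes, n, successes / n)   # likelihood maximum, for scaling
posterior = []
while len(posterior) < 10_000:
    p = random.uniform(0.05, 0.60)              # draw from the prior range
    if random.random() < binom_pmf(successes, n, p) / peak:
        posterior.append(p)

posterior.sort()
lo, hi = posterior[500], posterior[9500]        # central 90% of the posterior
print("prior 90% range:     5.0% - 60.0%")
print(f"posterior 90% range: {lo:.1%} - {hi:.1%}")
```

Thirty survey responses do not pin down an 'exact' value, but they shrink a 55-percentage-point range to roughly half that width - which is a measurement by the definition above.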
Define exactly what you are talking about
When clients ask me to measure image, collaboration, productivity, or the value of an improved office environment, I first ask them what they mean, specifically. I ask: "What do you see when you see more of this thing?" After prompting, they start to identify a few anecdotes they've seen. Perhaps when they imagine improved 'collaboration', they imagine that people from different areas of the firm talk more. That's at least observable.
Perhaps they also mean product development teams reach goals faster. Good, that's observable too. Then I ask: "Why do you care?" Measuring collaboration, for example, may help to identify the teams that are more likely to be successful in projects, so that managers can intervene when necessary. Being less ambiguous is a big help in measurement.
You need less data than you think you do
A manager might say: "It would be great if we could measure this, but we just don't have enough data." This claim is usually unsupported by actual calculations, and managers often seriously underestimate how much uncertainty reduction they get from a small amount of data. We use Bayesian methods to update prior uncertainties with new data and, when these methods are applied, managers are often surprised at how even paltry data has some impact on their decisions.
Many managers believe that when we have a lot of uncertainty, we need a lot of data to measure it but, mathematically speaking, just the opposite is true. When you have a lot of uncertainty, you get a large uncertainty reduction from the first few observations.
I often say when you know almost nothing, almost anything will tell you something.
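A quick simulation makes the point concrete. The population below is invented for illustration, and the claim being checked is a standard order-statistics result rather than anything specific to this article: for any population, the chance that the median lies between the smallest and largest of just five random samples is 1 - 2(1/2)^5 = 93.75%.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: commuting times (minutes) for 10,000 employees,
# deliberately skewed so no normality assumption is doing the work.
population = [random.lognormvariate(3.0, 0.6) for _ in range(10_000)]
true_median = statistics.median(population)

# How often does the population median fall between the smallest and
# largest of only five random samples?
trials = 20_000
hits = 0
for _ in range(trials):
    sample = [random.choice(population) for _ in range(5)]
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"{hits / trials:.1%}")   # theory: 1 - 2*(1/2)**5 = 93.75%
```

Five observations drawn from near-total ignorance already bound the median with better than 93% confidence - which is what "almost anything will tell you something" means in practice.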
You have more data than you think you do
Managers also greatly underestimate how much informative data they have access to. This sometimes comes in the form of the 'uniqueness fallacy' - the belief that only similar or even nearly identical examples are informative. Managers may claim that they can't make estimates about implementing a new technology because it is so unique - even if they have a long history of doing so.
Using that same logic, an insurance company couldn't compute a life insurance premium, because a given person is unique and hasn't died yet. In fact, insurance actuaries know how to extrapolate from larger, more heterogeneous populations. Big data, social media, mobile phones and personal measurement devices are making the "we don't have enough data" excuse much harder to justify.
Information has computable value, and results can surprise
In our decision models, we routinely compute the value of information using some decades-old methods from decision theory. This usually generates surprising results for a client. The high-payoff measurements are not what they would have otherwise measured. The things they would have spent more time measuring are those that are less likely to improve decisions.
This is a pervasive phenomenon we've observed in many industries and we call it the 'measurement inversion'. It is not just that people measure the wrong things - they measure almost exactly the wrong things.
A list of measurement categories sorted by how much attention they get historically would not just be different from a list sorted by information values. It would be nearly exactly inverted. I honestly can't imagine how this would not affect the profit of any company or the GDP of any country.
This simple set of calculations also shows that we need to measure relatively few things even in a big decision model with over a hundred variables (common in our business).
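The decades-old calculation referred to here is, in its simplest form, the expected value of perfect information (EVPI). The figures below are hypothetical, and real models weigh many uncertain variables at once; this sketch only shows the core arithmetic: compare the best you can do now with the best you could do if the uncertainty were removed.

```python
# Hypothetical decision: a project costs 1.0m and pays 3.0m if it succeeds.
cost, payoff = 1.0, 3.0
p_success = 0.3                                # current (uncertain) belief

# Expected value of each action under current uncertainty:
ev_invest = p_success * payoff - cost          # 0.3 * 3.0 - 1.0 = -0.1
ev_skip = 0.0
best_now = max(ev_invest, ev_skip)             # best action now: skip, worth 0.0

# With perfect information, we invest only in the worlds where it succeeds:
ev_with_info = p_success * (payoff - cost) + (1 - p_success) * 0.0

evpi = ev_with_info - best_now                 # 0.6m
print(f"EVPI = {evpi:.1f}m")
```

Measuring the success probability is worth up to 0.6m here; a variable whose EVPI is near zero is not worth measuring at all, however much attention it traditionally receives - which is precisely the measurement inversion.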
You can, in fact, measure anything if you get past some common and deeply entrenched misconceptions. Corporations, governments and even the average voter and consumer will be better off once this is understood.
Douglas Hubbard is the author of How to Measure Anything: Finding the Value of Intangibles in Business