Managers are better equipped than ever to make good decisions. They are more aware that human judgment is fallible. They have oodles of data about their customers and products. They can use artificial intelligence (AI) to analyse, summarise and synthesise information with unprecedented speed. But as the pendulum swings inexorably away from gut instinct and towards data-based decisions, firms need to be alive to a different set of dangers.
In a recent paper Linda Chang of the Toyota Research Institute and her co-authors identify a cognitive bias that they call “quantification fixation”. The risk of depending on data alone to make decisions is familiar: it is sometimes called the McNamara fallacy, after Robert McNamara, the American secretary of defence who leant on misleading quantitative measures in assessing the Vietnam war. But Ms Chang and her co-authors help explain why people put disproportionate weight on numbers.
The reason seems to be that data are particularly suited to making comparisons. In one experiment, participants were asked to imagine choosing between two software engineers for a promotion. One engineer had been assessed as more likely to climb the ladder but less likely to stay at the firm; the other had a higher probability of retention but a lower chance of advancement. The researchers varied how this information was presented. Participants were more likely to choose on the basis of promotion prospects when only that criterion was quantified, and on the basis of retention when only that probability came with a number attached.
One answer to this bias is to quantify everything. But, as the authors point out, some things are mushier than others. For job-seekers, a firm’s culture is harder to express as a number than its salary levels; data can tell an early-stage investor more about a startup’s financials than about a founder’s resilience. Numbers allow for easy comparisons. The problem is that they do not always tell the whole story.
There are other risks, too. Humans bring the same cognitive biases to their analysis of numbers as they do to other decisions. Take confirmation bias, the propensity to interpret information as support for your point of view. In another experiment Itai Yanai of New York University and Martin Lercher of Heinrich Heine University asked computer-science undergraduates to say what general correlation they expected between wealth and happiness, before showing them a fictitious dataset of the relationship between these two variables for 1,000 individuals. Faced with an identical graph, students who expected a positive correlation were much more likely to see one in the data. Beliefs influenced interpretation.
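How weak does a relationship have to be before believers and sceptics see different things in the same graph? A minimal sketch, assuming a mild positive relationship (the slope, noise level and seed are invented for illustration, not taken from the study), shows how such a fictitious dataset for 1,000 individuals might be generated and its correlation computed:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Fictitious wealth-happiness data for 1,000 individuals.
# The 0.15 slope and unit noise are illustrative assumptions,
# not parameters from the Yanai-Lercher experiment.
n = 1_000
wealth = rng.normal(size=n)
happiness = 0.15 * wealth + rng.normal(size=n)

# Pearson's r: the relationship the students were, in effect,
# judging by eye from a scatter plot of the same data.
r = np.corrcoef(wealth, happiness)[0, 1]
print(f"Pearson r = {r:.2f}")
```

With a true slope this shallow, r comes out at around 0.15: weak enough, plausibly, for prior beliefs to do the seeing.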
Plenty of people struggle with basic data literacy: consumers are less likely to enter competitions with more contestants, even when the odds of winning a prize are exactly the same (the toy calculation below makes the point). In a world giddy with excitement over AI models, relying on algorithms may seem the sensible remedy. In one more experiment, Hossein Nikpayam and Mirko Kremer of the Frankfurt School of Finance and Management and Francis de Véricourt of ESMT Berlin found that managers took a dim view of decision-makers who ignored machine-led recommendations and exercised their own judgment: they blamed such people when the outcome was bad and gave them no credit when it was good. People used to say that nobody ever got fired for buying IBM. It is not hard to imagine “nobody gets fired for following the algorithm” becoming the modern-day equivalent.
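The arithmetic behind that lapse is trivial, which is rather the point. A toy calculation (prize and entrant counts invented for illustration) shows why a draw with more contestants can offer exactly the same odds as a smaller one:

```python
# Two hypothetical prize draws; the counts are invented for illustration.
draws = {
    "small field": {"prizes": 10, "entrants": 100},
    "large field": {"prizes": 100, "entrants": 1_000},
}

# Odds of winning = prizes / entrants, identical in both cases.
for name, d in draws.items():
    odds = d["prizes"] / d["entrants"]
    print(f"{name}: {odds:.0%} chance of winning")

# Both lines print a 10% chance, yet people tend to
# shy away from the draw with more contestants.
```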
But there are times when humans have an advantage. Datasets, for example, reflect the world as it is, not the world as it might be, which makes radically new ideas hard to evaluate by looking at existing patterns. In the early days of HBO, a pioneering TV channel, executives relied on a mixture of instinct and contrarianism to commission programmes that broke the mould: profane comedy specials, a prison drama that killed off a main character in the first episode. Other networks turned down the idea of a violent mobster in therapy; HBO did not. Relying on data might have produced decisions that were easier to justify; it would also have produced safer ones.
None of this is to say that instinct trumps data, or to claim that humans make better decisions than machines. Far from it. But it is a warning. Numbers promise rigour, certainty and objectivity. They have flaws, too.
© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com