If ever there was someone who took behavioural science to an art form, it was Prof. Daniel Kahneman, a Nobel laureate in economic sciences renowned for his groundbreaking work in psychology and behavioural economics. Kahneman, who passed away on March 27, 2024, did pioneering work on heuristics (mental shortcuts) and biases that culminated in the 2011 bestseller "Thinking, Fast and Slow," a fascinating look at how the mind works. In an exclusive interview with Fortune India, Olivier Sibony, an affiliate professor of strategy at HEC Paris, revisits the origins of his collaboration with the late Kahneman and how it laid the groundwork for "Noise: A Flaw in Human Judgment," also co-authored by Cass Sunstein of Harvard Law School. The collaboration, as Olivier describes it, was a blend of Kahneman's productive pessimism and Olivier's optimism, a dynamic that propelled them through moments of doubt. Olivier also shares his views on the importance of decision hygiene in mitigating biases and noise, and on why generative AI is not without its flaws.
How did Prof. Kahneman's earlier work on heuristics and biases influence the conceptual framework for "Noise: A Flaw in Human Judgment"?
The basic idea behind "heuristics and biases," the school of thought Kahneman and Tversky pioneered in the 1970s, is that people resort to "shortcuts," called heuristics, which are generally useful, but sometimes result in predictable errors. That is what we now mean when we hear "bias": an error that is predictable, because most people, most of the time, err in the same direction. A classic example is the planning fallacy: if you are planning a large project, you are much more likely to underestimate how long it will take and how much it will cost than to make the opposite error.
But not all errors are biases! Not all errors are predictable, and not everyone makes precisely the same errors. The reason for writing Noise was to draw attention to the "other" flaw in human judgment: the variability of judgments that should be identical. Error is noise plus bias; and if we focus only on bias, we miss part of the problem.
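The decomposition described here can be made concrete. In "Noise," the authors show that overall error, measured as mean squared error, splits exactly into a bias component (the shared, systematic error) and a noise component (the variability across judges). The sketch below illustrates this identity with invented numbers; the six "judgments" and the true value are purely hypothetical.

```python
import statistics

# Hypothetical illustration of the error decomposition discussed in "Noise":
# mean squared error = bias^2 + noise^2, where bias is the average error and
# noise is the (population) standard deviation of errors across judges.
# All numbers below are made up for illustration.

true_value = 100.0
judgments = [112, 95, 108, 120, 90, 105]  # six judges assessing the same case

errors = [j - true_value for j in judgments]
bias = statistics.mean(errors)                  # systematic, shared error
noise = statistics.pstdev(errors)               # variability across judges
mse = statistics.mean(e ** 2 for e in errors)   # overall error

# The decomposition holds exactly: MSE = bias^2 + noise^2
assert abs(mse - (bias ** 2 + noise ** 2)) < 1e-9
print(f"bias={bias:.2f}, noise={noise:.2f}, MSE={mse:.2f}")
```

Note that the identity holds regardless of the numbers chosen: squaring errors is what makes noise count toward total error even when individual errors point in opposite directions.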
Can you describe the moment or discussion that led to your collaborative endeavour with Kahneman on the topic of noise, and what initially sparked your interest in this specific area of study?
Danny shared with me his experience of working with an insurance company in which experts were making wildly different judgments about the same technical problems. This was an observation I had often made at the clients I was serving, but I had failed to see its significance. Talking to Danny, I realised that these errors matter just as much as the predictable ones. But we knew little about their magnitude or their causes. So, we got to work.
Working with someone as esteemed as Kahneman must have been incredibly enriching...
Incredibly stimulating, indeed; and certainly, the most exciting intellectual experience of my career. Initially, Danny, Cass Sunstein and I drafted separate chapters on topics closer to our areas of expertise. But it soon became apparent that all of us – and especially Danny – had something to say about what the others were writing... When I read the finished product today, with few exceptions, I cannot remember who wrote what. It really was a team effort, under Danny's leadership, of course.
Given that Kahneman was of the view that if you're a pessimist, life never disappoints you, how would you define your own philosophy? Were there points of view on which you disagreed with him, and how did you bridge those differing viewpoints during your work together?
Danny did not put it that way, but he was a very productive pessimist. His pessimism was not the sort where you give up. It was the sort where you believe that anything you cannot control will probably go wrong, so you have to work incredibly hard on everything you can control. The beauty of writing a book is that unlike, for instance, launching a business, it is something that you can control almost completely. So, in my role as the team’s official optimist, all I had to do was remind him that we had the process under control. Whenever Danny thought we would never get anywhere, I pointed out how far we'd come.
Could you share a memorable anecdote or a behind-the-scenes moment from your time working with Kahneman that reflects his personality or approach to work?
Danny’s way of pressure-testing ideas was to argue the other side, very energetically and very convincingly. He did this with other people’s ideas (which requires some humility if you work with him). But he also did it, even more mercilessly, with his own ideas. This meant he would sometimes argue the opposite of what he had believed the day before, thus reverting to the very same thing he had believed two days before. One day I got fed up with this and told him, “Danny, we are going around in circles.” To which he replied, “Yes – but we are spiraling up.” That’s a nice summary of how his ideas would progress!
How has the work you did together influenced your own thinking or approach to decision-making?
One observation I had made was that businesspeople who have read about biases tend to see biases everywhere, sometimes even to "weaponise" their knowledge of biases: when they disagree with someone, they accuse that person of some bias or other. The work we did on Noise helped me see more clearly what is going on here. Simply put, there are many psychological biases, and even though their effects in a lab are predictable, their real-life interplay can produce unpredictable consequences: noise. This has a very important consequence for businesspeople: knowing about biases, in itself, does little good. You need to put in place processes that protect your decision making against biases, and against other possible sources of error. That’s what we call decision hygiene.
What are the most effective strategies or tools that organisations can use to reduce noise in their decision-making processes?
We describe several decision hygiene principles. One of the easiest to apply is to get multiple, independent judgments about the same question. The key word here is “independent”: this is not about having a discussion in a meeting, but about making sure that multiple people separately produce assessments before they get together.
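The statistical logic behind this principle is that averaging n independent judgments shrinks noise roughly by a factor of the square root of n — a benefit that evaporates when judgments influence one another. The simulation below is a hypothetical sketch of that effect; the true value and noise level are invented for illustration.

```python
import random
import statistics

# Hypothetical simulation: averaging n independent judgments reduces noise
# roughly by a factor of sqrt(n). The noise level and true value are invented.
random.seed(0)

TRUE_VALUE = 100.0
NOISE_SD = 10.0  # judge-to-judge variability in a single judgment

def judgment():
    # One judge's independent assessment, modelled as true value plus noise
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgment(n):
    # Average of n judgments made separately, before any group discussion
    return statistics.mean(judgment() for _ in range(n))

for n in (1, 4, 16):
    estimates = [averaged_judgment(n) for _ in range(10_000)]
    spread = statistics.pstdev(estimates)
    print(f"n={n:2d}: spread of averaged judgment = {spread:.2f}")
```

The spread falls from about 10 with a single judge toward about 2.5 with sixteen — but only because each judgment is drawn independently, which is exactly what the "independent" in decision hygiene demands.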
Will generative AI’s power to analyse vast datasets with complex variables, surpassing human capabilities, lead to better insights and decisions?
One of the points Danny made in Thinking, Fast and Slow, which we expand on in Noise, is that it is not that hard to surpass human capabilities. We have known since the 1960s that even very simple formulas can outperform humans. Yet, humans hate to trust machines: by and large, they prefer to make more mistakes, but to remain in control. What is new about AI is that, on many decision problems, it can reach a level of accuracy that is good enough to be acceptable as a substitute for human decisions, not as a complement.
Can generative AI be trained to identify and correct for biases in data and decision-making processes, to ensure more objective decision outcomes?
Much is being written about the biases of AI. It is indeed a very serious problem. But we should not lose sight of the fact that if AI is biased, it is because the data on which it is trained is biased. And if the data is biased, it means human decisions were biased! So, the solution cannot be to say, as many people unfortunately do, “the AI is too biased, let’s revert to human decision making.” This means algorithms must be designed to control the biases that we want to correct. It is entirely possible, but it raises a very complicated question: in a given algorithm, what biases, exactly, do you intend to correct (even though they were present in the data), and what biases are you prepared to tolerate? That is partly a technical problem, because defining a bias in algorithmic decision making is not as simple as people think. But, mostly, it is a strategic and political question, and often a legal one, too.
In the context of investing, where 'price is what you pay and value is what you get,' how can investors differentiate between market noise and genuine value signals to make informed decisions? How can investors ensure that short-term volatility does not detract from the long-term assessment of an asset's true value?
We did a few experiments in the world of investing, and we were surprised to see how much variability there is in the technical judgments of investment professionals. However, one thing investors have that many other professions do not is that they make a lot of decisions, and they get feedback on whether they were right or wrong. This may be judged over the long term or quite quickly, depending on the type of investment you’re making, but you always get feedback. Feedback is not perfect: some people who are lucky for a long time may start to seem uniquely talented (including to their own eyes). But, in principle, people who have really bad judgment should be weeded out. I can think of some professions where there is no reason to believe that happens.
Given Warren Buffett's scepticism predicting a bad ending for cryptocurrencies and Charlie Munger's stark analogy of 'trading in turds,' juxtaposed with Wall Street's recent entry into Bitcoin, are investors in a position to effectively cut through the noise around cryptos to make informed decisions?
Crypto is a complete puzzle to me. I, honestly, don’t understand how this can work out in the long run. But there are lots of things in life I don’t understand, and people have made money on some of them!
Kahneman has observed that individuals, especially professionals, often hold overly confident opinions of their own judgments. In the context of a CEO's role in capital allocation—specifically when deciding to venture into a sunrise sector such as renewables, wherein the company lacks prior experience, and the decision is predicated on anticipated fair returns—could cognitive biases, particularly stemming from past successes in traditional business areas, influence their decision-making?
You are describing overconfidence in market entry decisions. It is one of the most clearly documented biases in corporate strategy. Whether you are talking about renewables today, the Internet thirty years ago, or railroads in the 19th century, the problem is the same. Of course, this does not mean that no one should invest in new, exciting businesses. It just means that the average direction of error in forecasts is easy to predict.
What is the most common misconception about noise in decision-making, and how can individuals and CEOs get better at it?
The most common misconception about noise is that it does not matter, because random errors cancel out. But that’s like saying that two shots that miss the target in opposite directions are the same as two shots in the bullseye... If you are making deals, overpaying half of the time and underbidding the other half of the time are not the same thing as having the right price every time.
Finally, can you share that one memorable quote or insight from your interaction with Kahneman that truly encapsulates his intellectual brilliance?
There are so many... One of my favorites is an illustration of his productive pessimism, and how funny he could be. This was towards the beginning of our work on Noise, and he was skeptical at the time that there would ever be a book. One day, we had an especially productive session. Ideas were flowing, I could see the project taking shape. So, at the end of the day, I asked Danny how he felt about it. He answered: "We have failed again – we have failed to confirm my doubts."