Our hunger for certainty.
This book is very good; why isn’t it more famous?
Because it is saying precisely what we don’t want to hear.
This book and Everything is Obvious* overlap heavily in theme but have almost zero overlap in content. They are complements rather than competitors.
Four quotes from the book introduce the key problems with predictions:
With natural science increasingly aware of the limits of prediction, and with prediction even more difficult when people are involved, it would seem obvious that social science — the study of people — would follow the lead of natural science and accept that much of what we would like to predict will forever be unpredictable. But that has not happened, at least not to the extent that it should.
Experts are smart, informed people. Treating their predictions as gospel seems perfectly rational, at least in a superficial sense, and when there is a psychological craving to be satisfied, a superficial appearance of rationality will do.
We always feel the uncertainty and change we are experiencing now is greater than ever. That feeling makes us hunger for predictions about the future.
The future will forever be shrouded in darkness. Only if we accept and embrace this fundamental fact can we hope to be prepared for the inevitable surprises that lie ahead.
This is the first description of Philip Tetlock’s study that I’ve come across with which I’ve felt satisfied. Of course, reading Tetlock’s book would be the real cure.
Tetlock did a large, long study on expert prediction. Some of the experts were quite ideological, but:
Experts who did better than the average of the group — and better than random guessing — thought very differently. They had no template. Instead, they drew information and ideas from multiple sources and sought to synthesize it. They were self-critical, always questioning whether what they believed really was true. And when they were shown that they had made mistakes, they didn’t try to minimize, hedge, or evade. … Most of all, these experts were comfortable seeing the world as complex and uncertain — so comfortable that they tended to doubt the ability of anyone to predict the future.
This paragraph has a humorous and quite profound conclusion:
The experts who were more accurate than others tended to be much less confident that they were right.
Lao-tzu would not be surprised:
Those who say don’t know.
Those who know don’t say.
The way our brains operate is problematic both for how we crave predictions and for how we evaluate them.
For one thing, probability is not an intuitive operation for us.
Blame evolution. In the Stone Age environment in which our brains evolved, there were no casinos, no lotteries, and no iPods. A cave man with a good intuitive grasp of randomness couldn’t have used it to get rich and marry the best-looking woman in the tribe. It wouldn’t have made him healthier or longer-lived, and it wouldn’t have increased his chances of having children. In evolutionary terms, it would be a dud.
(There’s an amusing reason why iPods are mentioned.)
On the other hand:
Pattern recognition was literally a matter of life and death, so natural selection got involved and it became a hard-wired feature of the human brain.
Someone who does a dance, gets rained on, and concludes that dancing causes rain has made a serious mistake but he won’t increase his chances of an early death if he dances when he wants rain.
There was a big difference between confabulating a pattern that wasn’t there and missing a pattern (a lion, for instance) that was there. We’re now using that same brain in situations where false positives can have big consequences.
Suppose you are given the task of predicting if a green light or a red light will come on. After a while you notice that red comes on about 80% of the time, but otherwise it seems random. What is your strategy for guessing?
The typical human strategy is to guess red about 80% of the time, randomly mixed with green. That is, we try to mimic the process.
What is the optimal strategy? Always guess red. Pigeons and rats get this test right.
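The gap between the two strategies can be checked with a quick simulation (a toy sketch; the 80% figure is from the experiment described above, everything else is illustrative). Probability matching is right only when your guess happens to agree with the light, so its accuracy is about 0.8 × 0.8 + 0.2 × 0.2 = 0.68, while always guessing red scores about 0.80:

```python
import random

random.seed(0)
p_red = 0.8          # red comes on about 80% of the time
trials = 100_000
lights = [random.random() < p_red for _ in range(trials)]  # True = red

# Strategy 1: probability matching -- guess red ~80% of the time at random
matching = sum((random.random() < p_red) == light for light in lights) / trials

# Strategy 2: maximizing -- always guess red
maximizing = sum(lights) / trials

print(f"probability matching: {matching:.2f}")   # ~0.68 = 0.8*0.8 + 0.2*0.2
print(f"always guess red:     {maximizing:.2f}") # ~0.80
```

The pigeons and rats, in effect, run strategy 2.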
The book describes some other tests by Michael Gazzaniga that are fascinating, and I think worth the price of admission by themselves.
Television is probably not the best place to seek wisdom:
Using Google hits as a simple way to measure the fame of each of his 284 experts, Tetlock found that the more famous the expert, the worse he did. … Surely experts who consistently deliver lousy results will be weeded out … The cream should rise, and yet, it doesn’t.
How is this possible? Very simply, it’s what people want. Or to put it in economic terms, it’s supply and demand: We demand it; they supply.
The main commodity that the talking heads are delivering is certainty. We are willing to give up a lot for the anti-anxiety effect of the illusion of certainty.
Markets are machines for creating uncertainty, so finance is very problematic for prediction.
But we were talking about the price of oil. It’s not complicated. It’s the product of two factors: supply and demand. There’s nothing else to it. Surely it can be determined by one of those linear equations scientists use to predict the movements of the planets, eclipses, and the tides?
How to improve
Experts who cannot learn from their own stumbles are beyond help but it is possible for the rest of us — experts and laypeople alike — to adopt a considered, reasonable skepticism about predictions.
The author presents three steps to improve our predictions:
Aggregation is about using multiple sources and multiple points of view.
Metacognition is thinking about thinking. How did you arrive at your prediction? What biases might you have in making this prediction?
A basic method for overcoming confirmation bias, for example, is to draft a list of the reasons why your belief may be wrong, but the analysts “are reluctant to use even that extremely simple tool.”
Humility includes not even attempting predictions far into the future.
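The aggregation step can be illustrated with a toy simulation (all of the numbers and the noise model here are hypothetical, not from the book): if many forecasters are independently wrong in different directions, the average of their forecasts tends to land much closer to the truth than a typical individual forecast does.

```python
import random
import statistics

random.seed(1)
truth = 100.0
# Hypothetical: 50 forecasters, each off by independent Gaussian noise
forecasts = [truth + random.gauss(0, 20) for _ in range(50)]

mean_individual_error = statistics.mean(abs(f - truth) for f in forecasts)
aggregate_error = abs(statistics.mean(forecasts) - truth)

print(f"mean individual error: {mean_individual_error:.1f}")
print(f"error of the average:  {aggregate_error:.1f}")
```

The benefit depends on the errors being at least partly independent, which is why the book stresses multiple sources and multiple points of view rather than many copies of the same view.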
There is a forecasting tournament now in progress. Tetlock is on a team that runs the Good Judgement Project blog. However, there hasn’t been much action on the blog lately.
This interview with the author is about 14 minutes long.