After the spectacular failure of financial experts everywhere to predict the 2008 crash, the whole business of prediction has come under scrutiny. The consensus is that prediction is difficult, especially about the future, as everyone from Niels Bohr to Yogi Berra is supposed to have said, and so the scrutiny extends to the closely related topic of how to act in the face of a future we cannot foresee.
I've just read three books on these topics, by a Canadian, a Briton, and an American, and I'll do a post on each. Today, it's Dan Gardner's Future Babble. Next up is Tim Harford's Adapt, and I'll finish with Duncan Watts' Everything is Obvious: Once You Know the Answer. Time-saver: on a scale of one to five, Gardner gets 2, Harford 1.5, and Watts 4.
So, Future Babble. It's a straightforward journalistic book built on the work of psychologist Philip Tetlock, who also figures prominently in Adapt and makes an appearance in Everything is Obvious. Tetlock (home page) is famous for an extended experiment in which he assembled "284 experts – political scientists, economists, and journalists – whose jobs involve commenting on or giving advice on political or economic trends" (p25). Several years and over 28,000 predictions later, he assessed their results and concluded that, on average, experts did only a little better than "a dart-throwing chimpanzee", and by some measures no better at all.
Not all experts did equally badly, though, and Tetlock was able to identify the traits that made for more and less successful punditry. Those who did particularly badly "were not comfortable with complexity and uncertainty [and] sought to 'reduce the problem to some core theoretical scheme'… and they used that theme over and over, like a template, to stamp out predictions. These experts were also more confident than others that their predictions were accurate" (p26). Those who did well "drew information and ideas from multiple sources and sought to synthesize it. They were self-critical, always questioning whether what they believed to be true really was… Most of all, these experts were comfortable seeing the world as complex and uncertain – so comfortable that they tended to doubt the ability of anyone to predict the future."
In other words, "The experts who were more accurate than others tended to be much less confident that they were right" (p27).
Tetlock calls his less unsuccessful experts "foxes" (those who know many things) and the even more unsuccessful ones "hedgehogs" (those who know one big thing), after an essay by Isaiah Berlin.
Future Babble chronicles many failed prophets and their off-base predictions, and shows how hedgehogs hold on to their beliefs even in the light of continued failure. Gardner's stories are weighted towards prophets of doom (Paul Ehrlich gets particularly harsh treatment, but Y2K, Peak Oil, Arnold Toynbee's theory of history, the inexorable rise of Japan, and many others get a mention), although some pollyannas are included too (Dow 36,000, for example). The impression I was left with is that Gardner sees unorthodox, cultish predictions as particularly likely to be false.
The bad news does not stop there, Gardner tells us. Not only are experts unsuccessful at prediction, and not only are "hedgehog" experts even worse than others, but the experts most in demand as TV pundits, keynote speakers, and corporate consultants are overwhelmingly those spiky, one-big-idea types. It is not reassuring that companies not entirely unlike the one I work for look to exactly this kind of expert to guide their strategy. Why do hedgehogs do so well? As the book's subtitle tells us, Gardner spends much of his book exploring "why expert predictions fail", but he does also explore "why we believe them anyway", mainly in Chapter 6.
The roots of our love for hedgehogs, despite their objectively bad success rates, are, he argues, psychological. Gardner leans heavily on the work of Kahneman and Tversky on the psychology of decision-making and behavioural economics – priming, the availability heuristic, and so on – material that has appeared many times in popular books over recent years, and backs this up with several other hedgehog-loving traits: our tendency to follow authority (Milgram yet again1), our love of "simple, clean, confident" messages delivered in easily digestible story form, and the media's focus on successful predictions and its sieve-like memory for unsuccessful ones – tendencies we share ourselves. To my mind, Gardner understates the social and political sources of demand for expert prediction in favour of the psychological. The spread of an idea depends on how easily it can diffuse through a network of people, and individual psychology is only one factor governing that diffusion. Some ideas are easy to communicate from one person to another, others difficult. Some messages inherently generate the new connections they need to spread ("communication is good for you!"), whereas others reshape the network in ways that hinder their own spread ("silence is golden").
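To make that last point concrete, here is a minimal toy simulation – my own sketch, found in none of the three books, with every number an illustrative assumption: the same message, with the same per-contact persuasiveness, reaches very different fractions of two populations that differ only in how densely their members are connected.

```python
import random

# Toy model (my sketch, not Gardner's): identical message, identical
# per-contact adoption probability (the "psychology"), two networks
# that differ only in density (the "network"). All numbers illustrative.

def spread(neighbours: dict[int, list[int]], p_adopt: float, seed: int = 0) -> int:
    """Breadth-first diffusion: each informed person passes the message on
    to each acquaintance independently with probability p_adopt."""
    informed = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for person in frontier:
            for friend in neighbours[person]:
                if friend not in informed and random.random() < p_adopt:
                    informed.add(friend)
                    nxt.append(friend)
        frontier = nxt
    return len(informed)

def random_network(n: int, k: int) -> dict[int, list[int]]:
    """Give each of n people k randomly chosen acquaintances
    (links are one-way here, purely to keep the sketch short)."""
    return {i: random.sample([j for j in range(n) if j != i], k)
            for i in range(n)}

random.seed(42)
sparse = random_network(1000, 2)  # few contacts per person: the idea fizzles
dense = random_network(1000, 8)   # many contacts: the same idea takes off
print("sparse network:", spread(sparse, 0.3), "of 1000 reached")
print("dense network: ", spread(dense, 0.3), "of 1000 reached")
```

The point of the sketch is only that whether an idea takes off can hinge on network structure as much as on the psychology of any individual listener.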
What's in the book is interesting, and entertaining enough. The problem is, Future Babble stops too soon and leaves many questions unanswered. Skewering the failed predictions of the past is, after all, an easy game. What we need to know is how to distinguish reliable predictions from unreliable ones, and how to proceed in the face of unpredictability.
The obstacles to prediction are non-linear, chaotic systems; the unpredictability of people; and the fact that interactions among people often make the future more, rather than less, tricky to predict. But not all predictions are hopeless: the weather in some parts of the world is unpredictable, but it's easy to predict that Arizona in August will be hot and dry. It is difficult to foresee the future shape of our digital world, but Moore's Law has been with us for nearly five decades, and I Hereby Predict that the computer chips of the future will be smaller and faster than those of today. Chaotic systems are ubiquitous, but not everything is chaotic. Distinguishing one from the other would be helpful, and although prediction is the subject of the book, Gardner does little to spell out exactly what kind of predictions he is talking about. He focuses on big economic, ecological, and political predictions, but is not clear about how broad a net he is casting. And while he spends much of his time skewering hedgehogs, it seems to me that plenty of foxes failed to see the financial crisis coming too.
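To make the predictable/chaotic distinction concrete, here is a second small sketch (again mine, not Gardner's) using the logistic map, a textbook non-linear system: at one parameter value it settles down as reliably as Arizona in August, while at another a one-in-a-million change in starting conditions destroys any long-range forecast.

```python
# Toy illustration (mine, not from the book): the logistic map is a
# one-line non-linear system that is tamely predictable at one parameter
# value and chaotic at another.

def logistic_trajectory(x0: float, r: float, steps: int) -> float:
    """Iterate x -> r * x * (1 - x) from x0 and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# r = 2.9: the system settles to a fixed point regardless of where it
# starts - as predictable as August in Arizona.
print(logistic_trajectory(0.200000, 2.9, 100))  # ~0.655172
print(logistic_trajectory(0.200001, 2.9, 100))  # ~0.655172 (same)

# r = 4.0: the same equation is chaotic - two starting points differing
# by one part in a million end up nowhere near each other.
print(logistic_trajectory(0.200000, 4.0, 100))
print(logistic_trajectory(0.200001, 4.0, 100))
```

This is the distinction I wanted Gardner to draw: before trusting or dismissing a forecast, ask which of these two regimes the system being forecast more closely resembles.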
And there is a contradiction at the heart of the book. Dan Gardner has written a simple, clean and confident argument to warn us against simple, clean and confident arguments. He tells us stories to warn us of the dangers of placing too much faith in stories. He gives us a book with one big idea ("Why Expert Predictions Fail and Why We Believe Them Anyway"), which is that big ideas are the most likely to be wrong. Perhaps the book needs to be written this way – part of his message, after all, is that everyone loves hedgehogs – but surely the contradictions deserve to be addressed?
While some predictions are disinterested forecasts of the future, many are made because we want to take actions that affect that future, and want to choose the right action: if we do this, then we can bring about that. Gardner quotes Kenneth Arrow at the beginning of his final chapter: "The Commanding General is well aware the forecasts are no good. However, he needs them for planning purposes" (p237). But he does not really engage with the question of how to make decisions in the absence of reliable forecasts, beyond general exhortations to caution and humility. These are fine so far as they go, but they do not take us very far. What are we supposed to do about the continued success of unreliable "hedgehogs"?
Take my job. I work for a software company as something called a product manager, and one of the things product managers do is argue for new products, new product directions, and new features. So I'm trying to influence decisions the company makes, and using predictions to do it, while others argue for alternative courses. And many of those predictions, mine and theirs, are Future Babble. So what do I do? Do I take the tactical route of arguing over-confidently, hedgehog-like, for my position – essentially lying about my confidence in my predictions? Or do I act like a fox and look on as the decisions inevitably follow the suggestions of more charismatic presenters? If what I want is for my ideas to be taken on board, then I guess I should put my scruples to one side and become a hedgehog. But that's not really what I want: what I really want is for the right decision to be taken, and sometimes that won't be the one I am arguing for. I was hoping for inspiration and insight into these concrete issues around my daily work, but found nothing.
I enjoyed much of Future Babble, but in the end found it too limited to warrant a recommendation. So I went on to the next book – Tim Harford's Adapt – in search of answers. Next post, I'll tell you whether I found them.
—–
1 Is it just me, or is the standard interpretation of the classic Milgram electric shock experiment all wrong? Participants followed instructions to administer "shocks", judging that the authority figure in the white coat would not tell them to do something harmful without good reason, even though it looked like the subject was feeling pain. And… the participants were 100% right to trust their judgement. The subject was not feeling pain, and the authority figure was not instructing the participant to do something terrible. Why is this anything other than a story of good judgement on the part of the participants?
—–
> After the spectacular failure of financial experts everywhere to predict the 2008 crash …
The media system does not select for accuracy. It selects for popularity. This does not mean accuracy is impossible. It means it isn’t VALUED. More complicated: it also isn’t very useful (to some extent it is, but far less than one might think).
Quite a few things are predictable, but it’s much better to say afterwards “No one could have predicted …” rather than “We all knew it was going to blow up eventually, but we were on the gravy train and we got while the getting was good”.
Regarding Milgram – I think you slipped in some question-begging and equivocation right here: “judging that the authority figure in the white coat would not tell them to do something harmful without good reason”. That is, “good reason” covers a lot of ground – e.g. torture. And as far as they were led to believe, the subject was feeling intense pain.
> The media system does not select for accuracy. It selects for popularity. This does not mean accuracy is impossible. It means it isn’t VALUED.
A good distinction, and one I overlooked. And predictions of crashes are only useful if they come with pretty good timing too, of course. So yes, I think you’re onto something: the “who could have known?” response is in part a big shrug of the shoulders from someone with their hand caught in the cookie jar. More on this in the next post…
Re Milgram: “as far as they were led to believe, the subject was feeling intense pain” – yes, this is the standard interpretation. But that’s asserting that the participants believed the lie that the experimenter was telling them, and I don’t see how that step is justified.
When it comes to product management, I suspect that you are placing too much weight on the value of “predictions” compared to the knowledge you already possess. If you were tasked with laying out the plan for a new product at Procter & Gamble, I think you would spend the majority of your time acquiring the existing knowledge that P&G product planners have.
> But that’s asserting that the participants believed the lie that the experimenter was telling them, and I don’t see how that step is justified.
Have you ever seen the videos which were recorded? Their belief is very clear. It’s quite informative.
There’s a sort of presentism in projecting back decades to the participants thinking “This isn’t a real experiment, those aren’t real shocks, it’s all a set-up to test if I’d really kill someone”.
It’s not too much of a reach to see the fake experiment replicated for real in interrogation with torture, and the memoirs of torture victims (and even torturers) match pretty well with what’s shown, in terms of the psychology.
Fair enough. I have seen them, but it was a long time ago and I had forgotten. I concede (good job it was just a footnote).
Thanks. I think it almost makes Internet history when someone concedes a point due to blog comments :-).
FYI, on predicting the financial crash, see this post from a good financial blog:
http://www.ritholtz.com/blog/2011/02/the-truth-about-the-financial-crisis-part-iii/
Myth 7: Nobody saw it coming.
Reality 7: No. Plenty of people saw it coming and said something. The problem wasn’t seeing, it was listening.
I’m a bit late to the party, but in defence of the suggested interpretation of Milgram, I think there is a range of possible belief states between “it’s a set-up” and “it’s fully, devastatingly real”.
I’m assuming things like this were going through their heads on some level: “This can’t be happening. I don’t have to deal with it because it just can’t be real.” That isn’t exactly thinking that it was all a set-up, but it is not exactly fully believing in it either. And that (possibly semi-conscious) intuition was right – it wasn’t real.
The importance of this is just that such a state of shocked disbelief might fade quite rapidly, as the subject orientated themselves again.