A review of Tetlock’s ‘Superforecasting’ (2015)

Spectator Review, October 2015

Forecasts have been fundamental to mankind’s journey from a small tribe on the African savannah to a species that can sling objects across the solar system with extreme precision. In physics, we developed models that are extremely accurate across vastly different scales from the sub-atomic to the visible universe. In politics we bumbled along making the same sort of errors repeatedly.

Until the 20th century, medicine was more like politics than physics. Its forecasts were often bogus and its record grim. In the 1920s, statisticians invaded medicine and devised randomised controlled trials. Doctors, hating the challenge to their prestige, resisted but lost. Evidence-based medicine became routine and saved millions of lives. A similar battle has begun in politics. The result could be more dramatic.

In 1984, Philip Tetlock, a political scientist, did something new – he considered how to assess the accuracy of political forecasts in a scientific way. In politics, it is usually impossible to make progress because forecasts are so vague as to be useless. People don’t do what is normal in physics – use precise measurements – so nobody can later make a scientific judgement about whether, say, George Osborne or Ed Balls was ‘right’.

Tetlock established a precise measurement system to track political forecasts made by experts to gauge their accuracy. After twenty years he published the results. The average expert was no more accurate than the proverbial dart-throwing chimp on many questions. Few could beat simple rules like ‘always predict no change’.

Tetlock also found that a small fraction did significantly better than average. Why? The worst forecasters were those with great self-confidence who stuck to their big ideas (‘hedgehogs’). They were often worse than the dart-throwing chimp. The most successful were those who were cautious, humble, numerate, actively open-minded, looked at many points of view, and updated their predictions (‘foxes’). TV programmes recruit hedgehogs, so the more likely an expert was to appear on TV, the less accurate he was. Tetlock dug further: how much could training improve performance?

In the aftermath of disastrous intelligence forecasts about Iraq’s WMD, an obscure American intelligence agency explored Tetlock’s ideas. They created an online tournament in which thousands of volunteers would make many predictions. They framed specific questions with specific timescales, required forecasts using numerical probability scales, and created a robust statistical scoring system. Tetlock created a team – the Good Judgement Project (GJP) – to compete in the tournament.
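The ‘robust statistical scoring system’ the tournament used is the Brier score: the squared error between a probability forecast and what actually happened, averaged over many questions. A minimal sketch in Python (the function names are my own, and this is the simple binary-outcome form rather than the exact variant the tournament used):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0.0-1.0) and the
    actual 0/1 outcome. 0.0 is a perfect forecast; 0.25 is what you get
    from always saying 50%; lower is better."""
    return (forecast - outcome) ** 2

def mean_brier(forecasts: list[float], outcomes: list[int]) -> float:
    """A forecaster's accuracy over many questions: the mean Brier score."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

The key property is that the score rewards well-calibrated confidence: saying 90% on things that happen nine times out of ten beats both timid 55% forecasts and overconfident 100% ones.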

The results? GJP beat the official control group by 60% in year 1 and by 78% in year 2. GJP beat all competitors so easily the tournament was shut down early.

How did they do it? GJP recruited a team of hundreds, aggregated the forecasts, gave extra weight to the most successful, and applied a simple statistical rule. A few hundred ordinary people and simple maths outperformed a bureaucracy costing tens of billions.
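The ‘simple statistical rule’ was a form of extremizing: take the weighted average of the individual forecasts, then push the result towards 0 or 1 to correct for the shared caution of individual forecasters. A minimal Python sketch, in which the weights and the exponent are illustrative assumptions, not GJP’s actual parameters:

```python
def aggregate_extremized(forecasts: list[float], weights: list[float],
                         a: float = 2.5) -> float:
    """Weighted average of individual probability forecasts, then
    'extremized' towards 0 or 1. The exponent a (here 2.5) is an
    illustrative choice; a = 1 would leave the average unchanged."""
    total = sum(weights)
    p = sum(w * f for w, f in zip(weights, forecasts)) / total
    # Push p away from 0.5: p^a / (p^a + (1-p)^a) keeps 0.5 fixed
    # but amplifies any shared lean of the crowd.
    return p ** a / (p ** a + (1 - p) ** a)
```

For example, if three forecasters (equally weighted) all say 70%, the aggregate comes out close to 90% – the logic being that if several independent, cautious people all lean the same way, the evidence is stronger than any one of them dared to say.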

Tetlock also found ‘superforecasters’. These individuals outperformed others by 60% and also, despite a lack of subject-specific knowledge, comfortably beat the average of professional intelligence analysts using classified data (the size of the difference is secret, but it was significant).

Superforecasting explores the nature of these unusual individuals. Crucially, Tetlock has shown that training programmes can yield big improvements: even a mere sixty-minute tutorial on some basics of statistics improves performance by 10%. The cost-benefit ratio of training forecasting is huge.

It would be natural to assume that this work must be the focus of intense thought and funding in Whitehall. Wrong. Whitehall has ignored this entire research programme. Whitehall experiences repeated predictable failure while simultaneously seeing no alternative to its antiquated methods, like 1950s doctors resisting randomised controlled trials that threatened their prestige.

This may change. Early adopters could use Tetlock’s techniques to improve performance. Success sparks mimicry. Everybody reading this could do one simple thing: ask their MP whether they have done Tetlock’s training programme. A website could track candidates’ answers before the next election. News programmes could require quantifiable predictions from their pundits and record their accuracy.

We now expect that every medicine is tested before it is used. We ought to expect that everybody who aspires to high office is trained to understand why they are so likely to make mistakes forecasting complex events. The cost is tiny. The potential benefits run to trillions of pounds and millions of lives. Politics is harder than physics but Tetlock has shown that it doesn’t have to be like astrology.

Superforecasting: the art and science of prediction, by Philip Tetlock (Random House, 352 pages)

PS. When I wrote this (August/September 2015) I was assembling the team to fight the referendum. One of the things I did was hire people with very high quantitative skills, as I describe in this blog HERE.