Thursday, December 2, 2010

Putting Predictions to the Test: Pundits Flunk

There is a psychology professor named Philip Tetlock who spent twenty years evaluating people who make their living issuing predictions in the fields of politics and economics. He studied 284 people who, between them, made 82,361 predictions over that period. He summarized his findings in a book, Expert Political Judgment: How Good Is It? How Can We Know?, published in 2005. I stumbled across a reference to Tetlock’s work just yesterday, and his conclusions are fascinating. There is an excellent review of the book by Louis Menand in the New Yorker.

“It is the somewhat gratifying lesson of Philip Tetlock’s new book... that people who make prediction their business—people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables—are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.”
Tetlock set up a system whereby he could perform statistical analyses by requiring that questions be posed in a three-outcome format: the possibilities might be “stay the same,” “increase,” or “decrease”; or “stay the same,” “get better,” or “get worse.”
“...the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.”
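To make the dart-throwing-monkey comparison concrete, here is a minimal sketch in Python of the kind of probability scoring (a Brier-style score, the sort of measure Tetlock’s study used) that makes such a comparison possible. The forecasts and outcomes below are invented for illustration; none of this is Tetlock’s actual data or code.

# Brier-style scoring: mean squared error between the forecast
# probabilities and what actually happened. Lower is better; 0 is perfect.
def brier_score(forecast, outcome):
    actual = [1.0 if i == outcome else 0.0 for i in range(len(forecast))]
    return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(forecast)

# Hypothetical questions, outcomes coded 0 = stay the same,
# 1 = get better, 2 = get worse. The "expert" makes confident calls;
# the "monkey" assigns every outcome a one-third chance.
expert_forecasts = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
uniform = [1/3, 1/3, 1/3]
actual_outcomes = [2, 1, 2]  # the expert is right only on the last question

expert_avg = sum(brier_score(f, o)
                 for f, o in zip(expert_forecasts, actual_outcomes)) / 3
monkey_avg = sum(brier_score(uniform, o) for o in actual_outcomes) / 3

print(f"expert average score:  {expert_avg:.3f}")   # ~0.296
print(f"uniform average score: {monkey_avg:.3f}")   # ~0.222 (lower is better)

The arithmetic explains the result: a confident miss (putting 0.8 on the wrong outcome) is penalized far more heavily than the monkey’s steady one-third hedging, so an expert who is boldly wrong even part of the time ends up scoring worse than chance.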
Now that Tetlock has the knife firmly implanted, he proceeds to twist it.
“Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. ‘We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,’ he reports. ‘In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in “reading” emerging situations.’”
Most might find this surprising, but psychologists would not; they spend their lives observing the biases, prejudices, and emotional attachments that cause supposedly rational humans to make irrational decisions.


Tetlock was interviewed by CNN in 2009 about the financial crisis and the associated punditry. He offered this advice on assessing a pundit’s credibility.
“The most important factor was not how much education or experience the experts had but how they thought. You know the famous line that [philosopher] Isaiah Berlin borrowed from a Greek poet, ‘The fox knows many things, but the hedgehog knows one big thing’? The better forecasters were like Berlin’s foxes: self-critical, eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability. The less successful forecasters were like hedgehogs: They tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything. The media often love hedgehogs.”
Given the inaccuracy of “experts,” why do people still cling to their predictions? Tetlock had a very revealing answer.
“We need to believe we live in a predictable, controllable world, so we turn to authoritative-sounding people who promise to satisfy that need. That's why part of the responsibility for experts' poor record falls on us. We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.”
Interesting stuff!
