Friday, December 14, 2012

"How To Win At Forecasting" (Philip Tetlock and the Intelligence Advanced Research Projects Agency)

We've linked to Edge a few times. The Observer called it "The World's Smartest Website" but sometimes they're a bit too precious for my taste. This isn't one of those times.

"IARPA: It's like DARPA but for spies!" 
 ----------------
"IARPA's mission [is] to invest in high-risk/high-payoff research programs that have the potential to provide the United States with an overwhelming intelligence advantage over future adversaries."
FBI National Press Release, 2009


From Edge:
A Conversation with Philip Tetlock [12.6.12]
Introduction by Daniel Kahneman

The question becomes, is it possible to set up a system for learning from history that's not simply programmed to avoid the most recent mistake in a very simple, mechanistic fashion? Is it possible to set up a system for learning from history that actually learns in our sophisticated way that manages to bring down both false positives and false negatives to some degree? That's a big question mark.
Nobody has really systematically addressed that question until IARPA, the Intelligence Advanced Research Projects Activity, sponsored this particular project, which is very, very ambitious in scale. It's an attempt to address the question of whether you can push political forecasting closer to what philosophers might call an optimal forecasting frontier. An optimal forecasting frontier is a frontier along which you just can't get any better.

PHILIP E. TETLOCK is Annenberg University Professor at the University of Pennsylvania (School of Arts and Sciences and Wharton School). He is author of Expert Political Judgment: How Good Is It? How Can We Know? 

INTRODUCTION
by Daniel Kahneman

Philip Tetlock’s 2005 book Expert Political Judgment: How Good Is It? How Can We Know? demonstrated that accurate long-term political forecasting is, to a good approximation, impossible. The work was a landmark in social science, and its importance was quickly recognized and rewarded in two academic disciplines—political science and psychology. Perhaps more significantly, the work was recognized in the intelligence community, which accepted the challenge of investing significant resources in a search for improved accuracy. The work is ongoing, important discoveries are being made, and Tetlock gives us a chance to peek at what is happening. 
Tetlock’s current message is far more positive than his earlier dismantling of long-term political forecasting. He focuses on the near term, where accurate prediction is possible to some degree, and he takes on the task of making political predictions as accurate as they can be. He has successes to report. As he points out in his comments, these successes will be destabilizing to many institutions, in ways both multiple and profound. With some confidence, we can predict that another landmark of applied social science will soon be reached.
Daniel Kahneman, recipient of the Nobel Prize in Economics, 2002, is the Eugene Higgins Professor of Psychology Emeritus at Princeton University and author of Thinking, Fast and Slow.

HOW TO WIN AT FORECASTING
A Conversation with Philip Tetlock 
There's a question that I've been asking myself for nearly three decades now and trying to get a research handle on, and that is why is the quality of public debate so low and why is it that the quality often seems to deteriorate the more important the stakes get?

About 30 years ago I started my work on expert political judgment. It was the height of the Cold War. There was a ferocious debate about how to deal with the Soviet Union. There was a liberal view; there was a conservative view. Each position led to certain predictions about how the Soviets would be likely to react to various policy initiatives.

One thing that became very clear, especially after Gorbachev came to power and confounded the predictions of both liberals and conservatives, was that even though nobody predicted the direction that Gorbachev was taking the Soviet Union, virtually everybody after the fact had a compelling explanation for it. We seemed to be working in what one psychologist called an "outcome irrelevant learning situation." People drew whatever lessons they wanted from history.

There is quite a bit of skepticism about political punditry, but there's also a huge appetite for it. I was struck 30 years ago and I'm struck now by how little interest there is in holding political pundits who wield great influence accountable for predictions they make on important matters of public policy.

The presidential election of 2012, of course, brought about the Nate Silver controversy and a lot of people, mostly Democrats, took great satisfaction out of Silver being more accurate than leading Republican pundits. It's undeniably true that he was more accurate. He was using more rigorous techniques in analyzing and aggregating data than his competitors and debunkers were.

But it’s not something uniquely closed-minded about conservatives that caused them to dislike Silver. When you go back to presidential elections that Republicans won, it's easy to find commentaries in which liberals disputed the polls and complained the polls were biased. That was true even in a blow-out political election like 1972, the McGovern-Nixon election. There were some liberals who had convinced themselves that the polls were profoundly inaccurate. It's easy for partisans to believe what they want to believe and political pundits are often more in the business of bolstering the prejudices of their audience than they are in trying to generate accurate predictions of the future.

Thirty years ago we started running some very simple forecasting tournaments and they gradually expanded. We were interested in answering a very simple question, and that is what, if anything, distinguishes political analysts who are more accurate from those who are less accurate on various categories of issues? We looked hard for correlates of accuracy. We were also interested in the prior question of whether political analysts can do appreciably better than chance.

We found two things. One, it's very hard for political analysts to do appreciably better than chance when you move beyond about one year. Second, political analysts think they know a lot more about the future than they actually do. When they say they're 80 or 90 percent confident they're often right only 60 or 70 percent of the time.

There was systematic overconfidence. Moreover, political analysts were disinclined to change their minds when they got it wrong. When they made strong predictions that something was going to happen and it didn’t, they were inclined to argue something along the lines of, "Well, I predicted that the Soviet Union would continue and it would have if the coup plotters against Gorbachev had been more organized," or "I predicted that Canada would disintegrate or Nigeria would disintegrate and it's still there, but it's just a matter of time before it disappears," or "I predicted that the Dow would be at 36,000 by the year 2000 and it's going to get there eventually, but it will just take a bit longer."

So, we found three basic things: many pundits were hard-pressed to do better than chance, were overconfident, and were reluctant to change their minds in response to new evidence. That combination doesn't exactly make for a flattering portrait of the punditocracy.

We did a book in 2005 and it's been quite widely discussed. Perhaps the most important consequence of publishing the book is that it encouraged some people within the US intelligence community to start thinking seriously about the challenge of creating accuracy metrics and monitoring how accurate analysts are–which has led to the major project that we're involved in now, sponsored by the Intelligence Advanced Research Projects Activity (IARPA). It extends from 2011 to 2015, and involves thousands of forecasters making predictions on hundreds of questions over time and tracking their accuracy....MUCH MORE
Here's the Director of National Intelligence's IARPA page.
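
A quick aside from us: Tetlock's overconfidence finding above ("80 or 90 percent confident" but right only 60 or 70 percent of the time) and the "accuracy metrics" IARPA is paying for are both easy to make concrete. The interview doesn't spell out the scoring rule, but the standard tools for this kind of work are a calibration table and the Brier score (mean squared error between the stated probability and what actually happened). A minimal Python sketch, using made-up forecasts purely for illustration:

from collections import defaultdict

# Hypothetical forecasts, for illustration only: (stated probability, outcome),
# where outcome is 1 if the predicted event happened and 0 if it did not.
forecasts = [
    (0.9, 1), (0.9, 0), (0.9, 0), (0.8, 1), (0.8, 0),
    (0.8, 1), (0.7, 1), (0.7, 0), (0.6, 1), (0.6, 0),
]

# Calibration table: for each stated confidence level, how often did the event occur?
# Overconfidence shows up as observed rates sitting well below stated confidence.
buckets = defaultdict(list)
for prob, outcome in forecasts:
    buckets[prob].append(outcome)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%} -> happened {rate:.0%} of the time ({len(outcomes)} forecasts)")

# Brier score: mean squared error between stated probability and outcome.
# 0.0 is perfect; always saying 50/50 scores 0.25.
brier = sum((prob - outcome) ** 2 for prob, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

If a pundit's 90-percent calls only come true 60 percent of the time, the table shows it immediately, and the Brier score punishes confident misses far more heavily than timid ones.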

Some of our Edge links:
"The Man Who Runs The World's Smartest Website"

 J. CRAIG VENTER: THE BIOLOGICAL-DIGITAL CONVERTER, OR, BIOLOGY AT THE SPEED OF LIGHT @ THE EDGE DINNER IN TURIN