Monday, July 18, 2011

“What should one do: predict specifics, or forecast broad trends that necessarily miss specifics?”

The headline, from the post below, comes from Falkenblog and was used by FT Alphaville in their 'Further reading' link.
This is one of the most important questions that modelers have to ask themselves, along with "How much of my own bias am I introducing into the model?"

We first met prognostication psychologist Philip Tetlock in 2008's "Dan Gardner: I Predict Your Prediction Is Wrong" and again in 2009's "Why Pundits Get Things Wrong". Here's the latest via Cato Unbound:
Overcoming Our Aversion to Acknowledging Our Ignorance
Each December, The Economist forecasts the coming year in a special issue called The World in Whatever-The-Next-Year-Is. It’s avidly read around the world. But then, like most forecasts, it’s forgotten.
The editors may regret that short shelf-life some years, but surely not this one. Even now, only halfway through the year, The World in 2011 bears little resemblance to the world in 2011. Of the political turmoil in the Middle East—the revolutionary movements in Tunisia, Egypt, Libya, Yemen, Bahrain, and Syria—we find no hint in The Economist‘s forecast. Nor do we find a word about the earthquake/tsunami and consequent disasters in Japan or the spillover effects on the viability of nuclear power around the world. Or the killing of Osama bin Laden and the spillover effects for al Qaeda and Pakistani and Afghan politics. So each of the top three global events of the first half of 2011 was as unforeseen by The Economist as the next great asteroid strike.

This is not to mock The Economist, which has an unusually deep bench of well-connected observers and analytical talent. A vast array of other individuals and organizations issued forecasts for 2011 and none, to the best of our knowledge, correctly predicted the top three global events of the first half of the year. None predicted two of the events. Or even one. No doubt, there are sporadic exceptions of which we’re unaware. So many pundits make so many predictions that a few are bound to be bull’s eyes. But it is a fact that almost all the best and brightest—in governments, universities, corporations, and intelligence agencies—were taken by surprise. Repeatedly.

That is all too typical. Despite massive investments of money, effort, and ingenuity, our ability to predict human affairs is impressive only in its mediocrity. With metronomic regularity, what is expected does not come to pass, while what isn’t, does....MORE
Falkenblog says, in his post 'Predicting Tsunamis':
...Killing Osama, revolution in Tunisia, and the tsunami in Japan are all pretty specific outcomes that were unforeseen. But I would say that rather than this year being really crazy, it is rather similar to last year: US mired in Iraq and Afghanistan, deficits as far as the eye can see, no shovel-ready government spending, inner city America pathetic, Harvard outdoing Mississippi State academically, etc. Predicting broad trends is eminently doable but we often just take those for granted. Indeed, later they point out that experts tend to do worse than simple extrapolation models, which just highlights the futility of predicting outliers. So what should one do: predict specifics, or forecast broad trends that necessarily miss specifics?

When I worked as an economist I remember that the statistically optimal forecast was generally an exponential curve from where we are to the long-run historical average. But that's pretty boring, so we would add some little wiggles at the end based on some theory (A causes B which causes C in 18 months), because that kind of reasoning really resonated with the audience; they wanted to learn some new story to apply to their understanding of the world, something more novel than "the future will be a lot like the past." Similarly, Tetlock appears to want tsunami forecasts, surely fun to have....MORE
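The "exponential curve to the long-run average" that Falkenbach describes is just mean reversion with exponential decay. A minimal sketch (the function name, half-life parameterization, and all numbers below are illustrative assumptions, not from the original post):

```python
import math

def mean_reversion_forecast(current, long_run_mean, half_life, horizons):
    """Forecast that decays exponentially from the current value toward
    the long-run historical average.

    half_life: number of periods for half the gap to the mean to close.
    horizons:  iterable of forecast horizons (in the same period units).
    """
    rate = math.log(2) / half_life
    return [long_run_mean + (current - long_run_mean) * math.exp(-rate * h)
            for h in horizons]

# Hypothetical example: growth currently at 1.0%, long-run average 3.0%,
# half the gap closes every 4 quarters; forecast quarters 1 through 8.
path = mean_reversion_forecast(1.0, 3.0, 4, range(1, 9))
# The path rises smoothly toward 3.0 with no "wiggles" -- exactly the
# boring statistically-optimal shape the quote is complaining about.
```

Adding the theory-driven wiggles would amount to layering a story-specific adjustment on top of this baseline, which is where forecaster bias creeps back in.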
We have so many posts on models and modeling that it is easiest to just give you the Google search results: models
Here is a taste of some of the topics we've looked at:
How Models Caused the Credit Crisis
Quants Lose that Old Black (Box) Magic
Finance: "Blame the models"
Climate Models Overheat Antarctica, New Study Finds
Climate modeling to require new breed of supercomputer
Computer Models: Climate scientists call for their own 'Manhattan Project'
Computer Models: "Misuse of Models" and "No model for policymaking"
Climate prediction: No model for success
Climate Models and Modeling
Based on Our Proprietary "What's on T.V." Timing Model...
How many Nobel Laureates Does it Take to Make Change...And: End of the Universe Puts
The New Math (Quant Funds)
Modeling*: The Map is Not the Territory
Inside Wall Street's Black Hole
and many, many more. One that is definitely worth a deeper dive:
Computer Models: "Misuse of Models" and "No model for policymaking"
I'm going to bring out the big guns.

I read a book last year, Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, that, while a bit light on the 'whys', packs more understanding of computer modeling into 230 pages than you are likely to find anywhere else.

The author, Orrin H. Pilkey, is Emeritus Professor of Geology at Duke.
The first review I'm going to link to appeared in American Scientist. The reviewer is Carl Wunsch, Carl and Ida Green Professor of Physical Oceanography in the Department of Earth, Atmospheric and Planetary Sciences at the Massachusetts Institute of Technology....