Saturday, March 25, 2017

"It’s very likely we don’t understand probabilities"

From the WaPo's Joel Achenbach:

[Photo caption: Pit Manager Nicole Mavromatis uses a level to check the balance of a roulette wheel at Maryland Live Casino in 2013. (Linda Davidson / The Washington Post)]
If there’s one thing I know absolutely, irrefutably, 100 percent for certain, it’s that people don’t understand probabilities.

This is on my mind because of March Madness, and this new feature at 538 where they not only tell you who is most likely to win but also update the probabilities as the game goes on. While watching the game on TV, you can follow the changing odds on your computer screen while you simultaneously live-tweet the event and text your friends on your smartphone. Ideally, you will do this while switching channels between CBS and TBS, except when the networks show both games on a split screen. Also you should make calls to your bookie. And your psychiatrist.

According to 538, my Gators have a 54 percent win probability Friday night. But Neil Greenberg’s fancy stats column gives the Gators a 62 percent chance of winning. Is that a contradiction? No: just two different estimates of something innately uncertain and involving multiple metrics of imprecise significance.

That’s my guess, at least.

This shifting-probabilities gimmick at 538 reminds us that probabilities aren’t the same thing as predictions. We don’t know how the probability cloud will collapse into a singular reality (sorry to go all quantum physics on you). We live in a world that at both micro and macro levels is chaotic, fluid, and fundamentally — if I may use another highly technical term — squirrelly.

Unfortunately, it’s pretty much impossible to live a normal, emotionally stable life without finding various perches of certainty, belief, faith, conviction, etc. You can’t go around in a probabilistic daze. Evolution rewards snap judgments. Sometimes you just have to take off running. But we make mental errors all the time. For example, we typically fail to see how low-probability outcomes will become far more likely, if not a certainty, given enough opportunities. We also overestimate the extent to which our direct experience predicts future probabilities. Anecdotes mislead. So do statistical studies with very small data sets. (Here in the science pod we keep on the lookout for studies that turn out to be based on the thoughts of three guys on bar stools.)
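To put numbers on that first error: a 1 percent chance sounds safely remote, but give it enough chances and the long shot becomes the expected outcome. A back-of-the-envelope sketch in Python (the 1 percent figure is mine, purely for illustration):

    # Chance of at least one hit for a 1-in-100 event over n tries:
    # P(at least one) = 1 - (1 - p)**n
    p = 0.01
    for n in (1, 10, 100, 500):
        print(f"{n:4d} tries -> {1 - (1 - p) ** n:6.1%}")
    # prints: 1.0%, 9.6%, 63.4%, 99.3%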

My friend Michael Lewis has published a book, “The Undoing Project,” that explores the long collaboration of Amos Tversky and Daniel Kahneman. The Tversky-Kahneman research showed that people are not rational when it comes to probabilities. Consider the “Linda problem.” (Wikipedia has an article on this, titled the Conjunction fallacy.) Tversky and Kahneman ran an experiment in which students were given the characteristics and background of someone named Linda (majored in philosophy, concerned about justice) and then were asked to identify which sentence most likely describes her. “Linda is a bank teller and is active in the feminist movement” was considered by a majority of students to be more probable than “Linda is a bank teller” — even though you can clearly see that the first has to be a subset of, and thus less probable than, the second.
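The arithmetic behind the fallacy fits in a few lines: no matter how plausible the feminist detail makes the description feel, a conjunction can never be more probable than either of its parts. A toy Python calculation, with numbers I invented for the example:

    # Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A),
    # since P(B given A) can never exceed 1.
    p_teller = 0.05              # P(Linda is a bank teller) -- made up
    p_feminist_if_teller = 0.90  # P(feminist | teller) -- made up, generously high
    p_both = p_teller * p_feminist_if_teller
    print(p_both, p_teller, p_both <= p_teller)  # 0.045 0.05 True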

We struggle with probabilities embedded in a low-confidence framework — such as a snow forecast. Earlier this month we prepared for a big snowstorm here on the East Coast. Early computer modeling showed it might be historic — with one model showing 20 inches for the District. Our ace weather bloggers at the Capital Weather Gang wrote a series of posts in which they clearly explained that there were many uncertainties. Then the storm hit, and the heaviest snow fell out in the middle of nowhere, not in the big cities along the East Coast; sure enough, some people complained bitterly that the forecast was wrong. My colleagues acknowledged that it wasn’t a perfect forecast but that it was pretty darn good, and in fact I think they did a bang-up job, as always.

Marshall Shepherd published a blog post this week defending the forecast community in general:
Hurricane track forecasts by NOAA’s National Hurricane Center (see below) have significantly improved in the last several decades, and tornado warning lead-times are on the order of 13 minutes. Even with such positive metrics, forecasts will never be perfect. There will be challenges with uncertainty, probabilistic forecasts, inadequate data, coarse model resolution, and non-linearities associated with trying to predict how a fluid on a rotating body changes in time....
...MORE

Because uncertainty multiplies over time, we end up with track forecasts that look like this:

https://upload.wikimedia.org/wikipedia/commons/4/42/09L_2011_5day.gif

And this is called the Cone of Uncertainty.
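A crude way to see why the cone widens: treat each day of forecast lead time as adding a small random error, and let the errors pile up. A toy random-walk sketch in Python (real hurricane models are nothing like this simple; it only illustrates how small errors compound):

    import random
    import statistics

    random.seed(1)
    runs = 2000
    for day in range(1, 6):
        # each simulated track picks up an independent error every day
        endpoints = [sum(random.gauss(0, 1) for _ in range(day)) for _ in range(runs)]
        print(f"day {day}: spread of possible positions (std dev) ~ {statistics.stdev(endpoints):.2f}")

The spread grows roughly like the square root of the lead time, producing a steadily widening envelope of possible tracks: the cone.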