Tuesday, March 27, 2018

Questions the Bank of England Is Asking: "Can central bankers become Superforecasters?"

From Bank Underground:
Tetlock and Gardner’s acclaimed work on Superforecasting provides a compelling case for seeing forecasting as a skill that can be improved, and one that is related to the behavioural traits of the forecaster. These so-called Superforecasters have in recent years been pitted against experts ranging from U.S. intelligence analysts to participants in the World Economic Forum, and have performed on par with or better than them in accurately predicting the outcomes of a broad range of questions. Sounds like music to a central banker’s ears? In this post, we examine the traits of these individuals, compare them with economic forecasting and draw some related lessons. We conclude that considering the principles and applications of Superforecasting can enhance the work of central bank forecasting.


Setting the scene
It is helpful to begin by considering the purpose of forecasting in central banks, and how the process works in practice. This speech by Gertjan Vlieghe explains how forecasting is an important tool that helps policymakers diagnose the state of the economy and its outlook, and in turn assess – and communicate – the implications for current and future policy. So achieving accuracy is not always the sole aim of the forecast. However, forecasts are also a means of providing public accountability for central bank actions, and persistent or significant forecast errors may damage the credibility of the policymaking institution among key stakeholders (individuals, governments, and financial and capital markets).

A typical forecast set-up at a central bank (see here) rests on two pillars: i) statistical frameworks underpinned by specific economic concepts (for example, New Keynesian general equilibrium), supported by tools that process a range of economic and financial data; and ii) monetary policymakers’ judgements, which overlay these strictly model-based forecasts as part of the deliberation process.

The accuracy of such forecasts has come under much scrutiny since the financial crisis (see here), resulting in a great deal of effort to improve their performance. Several reviews and studies (see Stockton (2012), BoE IEO (2015), FRBNY Staff Report (2014) and ECB WP 1635 (2014)) have evaluated forecast performance across many major central banks and suggested improvements: better calibrating economic models (e.g. to reduce bias), challenging prior conventions more, and learning more from other central banks and economic forecasters. The BoE’s MPC, for example, has also started commenting on its own ‘key judgements’ in its quarterly Inflation Report.

This is all welcome progress. But this iterative process from inside the central banking community leaves us with the impression that forecast performance could benefit from further considering the successes of forecasting in other fields (similar to taking an “outside view” when forecasting, as described by Kahneman). We may then move forward from this process of gradual evolution… to a potential revolution.

Enter Superforecasting
Superforecasters have been described as “unusually thoughtful humans on a wide spectrum of problems”. They are drawn from diverse backgrounds, and include both amateurs and experts in a given field. They compete in tournaments that test their judgements on a range of questions about economic or geopolitical events, and through making these predictions they are expected to hone a range of forecasting skills. They are judged on several measures (including a daily average ‘Brier score’ – a measure of forecast accuracy originally proposed for assessing weather forecasts), and they earn their title by consistently ranking among the top percentile of their peers.
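For readers unfamiliar with the metric, the following is a minimal sketch of the original Brier (1950) score as used in forecasting tournaments: for each question, a forecaster assigns a probability to each possible outcome, and the score is the sum of squared differences between those probabilities and what actually happened, averaged over questions. It ranges from 0 (perfect) to 2 (maximally wrong), so lower is better. The example data here are illustrative, not from the Good Judgement Project.

```python
def brier_score(forecasts, outcomes):
    """Original Brier (1950) score: mean over questions of the summed
    squared differences between forecast probabilities and outcomes.
    Ranges from 0 (perfect) to 2 (maximally wrong); lower is better.

    forecasts: list of tuples of probabilities, one per question
    outcomes:  list of tuples of 0/1 indicators (1 = outcome occurred)
    """
    total = 0.0
    for probs, actual in zip(forecasts, outcomes):
        total += sum((p - o) ** 2 for p, o in zip(probs, actual))
    return total / len(forecasts)

# Two binary questions, probabilities given as (P(yes), P(no)).
forecasts = [(0.8, 0.2), (0.3, 0.7)]   # forecaster's stated probabilities
outcomes = [(1, 0), (1, 0)]            # "yes" occurred in both cases
print(brier_score(forecasts, outcomes))  # → 0.53
```

The first, confident-and-right forecast contributes only 0.08; the second, confident-and-wrong one contributes 0.98 — the score rewards well-calibrated confidence, which is why tournaments use it to separate Superforecasters from the pack.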

Superforecasters were first identified through The Good Judgement Project (now a private enterprise), which was part of a US intelligence agency programme launched in 2011. The GJP’s testing team included renowned advisors from psychology, statistics and economics. Their work used personality-trait tests and training methods to reduce cognitive biases and improve the forecasting abilities of their volunteer forecasters, and then identified the individuals who consistently outperformed their peers. Subsequent studies of this experiment found that when these top forecasters were placed in teams with other such forecasters (described here as a ‘group of average citizens doing Google searches in their suburban town homes’), they performed around 30% better than the average for intelligence community analysts who had access to confidential intercepts and other relevant data. Pretty Super-ising results, one might say!

Can central banks become this ‘super’?
The story so far could imply that the answer simply lies in replacing central bank forecasters with these Superforecasters and leaving them to it. However, central bank forecasting is as much about forming a coherent economic narrative (the preserve of economists) as it is about numerical accuracy (where the traits that make these individuals outperform the ‘experts’ matter). Central bankers may have a comparative advantage in the former, but their forecasting can be enhanced by considering key behavioural traits of those responsible for forecasting....MUCH MORE
We have been following Tetlock since the CIA days, here's the results page from a search of the blog, if interested.