Monday, March 18, 2024

"Elite-Only Financial Markets"

From Professor Hanson's mini-bio at George Mason University: 

Robin Hanson is associate professor of economics at George Mason University, and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from California Institute of Technology, master's degrees in physics and philosophy from the University of Chicago, and nine years' experience as a research programmer, at Lockheed and NASA.

Professor Hanson has 5555 citations, a citation h-index of 35, and over ninety academic publications, including in Algorithmica, Applied Optics, Astrophysical Journal, Communications of the ACM, Economics Letters, Economica, Econometrica, Economics of Governance, Foundations of Physics, IEEE Intelligent Systems, Information Systems Frontiers, Innovations, International Joint Conference on Artificial Intelligence, Journal of Economic Behavior and Organization, Journal of Evolution and Technology, Journal of Law Economics and Policy, Journal of Political Philosophy, Journal of Prediction Markets, Journal of Public Economics, Maximum Entropy and Bayesian Methods, Medical Hypotheses, Proceedings of the Royal Society, Public Choice, Science, Social Epistemology, Social Philosophy and Policy, and Theory and Decision....

....and on and on. I hate him.

From his Overcoming Bias substack (you may remember it as a stand-alone blog), January 31, 2024: 

Prediction markets are financial markets, but compared to typical financial markets they are intended more to aggregate info than to hedge risks. Thus we can use our general understanding of financial markets to understand prediction markets, and can also try to apply whatever we learn about prediction markets to financial markets more generally.

With this in mind, consider the newly published paper Crowd prediction systems: Markets, polls, and elite forecasters, by Atanasov, Witkowski, Mellers, and Tetlock.

They use data from 1300 forecasters on 147 questions over four years from the Good Judgment Project’s entry in the IARPA ACE tournament, 2011-2015. (I was part of another team in that tournament.) They judge accuracy with quadratic Brier scores, averaged over the time each question was open and then over questions.
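
For concreteness, here is a minimal Python sketch of that kind of scoring: a two-outcome quadratic Brier score (0 is perfect, 2 is worst), averaged first over the days a question is open and then over questions. The data layout and names are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of mean-daily-Brier scoring for binary questions.
# Names, data layout, and the two-outcome Brier form are illustrative,
# not taken from the paper.

def brier(prob_yes: float, outcome_yes: bool) -> float:
    """Two-outcome quadratic Brier score (0 = perfect, 2 = worst)."""
    o = 1.0 if outcome_yes else 0.0
    return (prob_yes - o) ** 2 + ((1.0 - prob_yes) - (1.0 - o)) ** 2

def mean_daily_brier(daily_probs: list[float], outcome_yes: bool) -> float:
    """Average the Brier score over every day the question was open."""
    return sum(brier(p, outcome_yes) for p in daily_probs) / len(daily_probs)

def score_system(questions: dict[str, tuple[list[float], bool]]) -> float:
    """Average the per-question daily scores over all resolved questions."""
    per_question = [mean_daily_brier(probs, y) for probs, y in questions.values()]
    return sum(per_question) / len(per_question)

# Example: one system's daily probabilities on two resolved questions.
example = {
    "Q1": ([0.6, 0.7, 0.9], True),   # resolved YES
    "Q2": ([0.3, 0.2], False),       # resolved NO
}
print(round(score_system(example), 3))  # lower is better
```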

They find that:

  1. Participants who used my logarithmic market scoring rule (LMSR) mechanism (sketched in code after this list) did better than those using a continuous double auction market (mainly when there were few traders), and did about as well as those using a complex poll aggregation mechanism, which in turn did better than simpler polling aggregation methods. 

  2. One element of the complex polling mechanism, an “extremization” power-law transformation of probabilities (included in the sketch after this list), also makes market prices more accurate. 

  3. Participants who were put together into teams did better than those who were not. 

  4. Accuracy is much (~14-18%) better if you take the 2% of participants who were most accurate in a year and then use only these “elites” in future years. It didn’t matter which mechanism was used when selecting that 2% elite. 
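
For readers who want the mechanics behind items 1 and 2, here is a minimal sketch of the standard LMSR cost and price formulas and of a power-law extremization transform. The liquidity parameter b and the exponent a are illustrative choices, not the values used in the study.

```python
import math

# Minimal sketch of the logarithmic market scoring rule (LMSR) and a
# power-law "extremization" transform. Parameter values are illustrative.

def lmsr_cost(quantities: list[float], b: float) -> float:
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities: list[float], b: float) -> list[float]:
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b); they sum to 1."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(quantities: list[float], buy: list[float], b: float) -> float:
    """A trader pays C(q + delta) - C(q) to change the outstanding shares."""
    after = [q + d for q, d in zip(quantities, buy)]
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

def extremize(p: float, a: float) -> float:
    """Power-law transform p^a / (p^a + (1-p)^a); a > 1 pushes p away from 0.5."""
    return p ** a / (p ** a + (1.0 - p) ** a)

# Example: a two-outcome market with liquidity b = 100.
q = [0.0, 0.0]                             # shares outstanding for YES and NO
print(lmsr_prices(q, b=100))               # [0.5, 0.5] before any trades
print(trade_cost(q, [20.0, 0.0], b=100))   # cost of buying 20 YES shares
print(extremize(0.7, a=2.0))               # 0.7 -> ~0.84
```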

The authors see this last result as their most important:

The practical question we set to address focused on a manager who seeks to maximize forecasting performance in a crowdsourcing environment through her choices about forecasting systems and crowds. Our investigation points to specific recommendations. …Our results offer a clear recommendation for improving accuracy: employ smaller, elite crowds. These findings are relevant to corporate forecasting tournaments as well as to the growing research literature on public forecasting tournaments. Whether the prediction system is an LMSR market or prediction polls, managers could improve performance by selecting a smaller, elite crowd based on prior performance in the competition. Small, elite forecaster crowds may yield benefits beyond accuracy. For example, when forecasts use proprietary data or relate to confidential outcomes, employing a smaller group of forecasters may help minimize information leakage. 
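
As a rough illustration of that recommendation (not the authors' actual procedure), selecting an elite crowd can be as simple as ranking forecasters by prior-year Brier score, keeping the best few percent, and aggregating only their current forecasts. The simple-mean aggregation and 2% cutoff below are assumptions for illustration.

```python
# Illustrative sketch of "smaller, elite crowd" selection by prior performance.

def select_elite(prior_scores: dict[str, float], top_frac: float = 0.02) -> set[str]:
    """Keep the forecasters with the lowest (best) prior-year mean Brier scores."""
    ranked = sorted(prior_scores, key=prior_scores.get)
    k = max(1, int(len(ranked) * top_frac))
    return set(ranked[:k])

def elite_forecast(current_probs: dict[str, float], elite: set[str]) -> float:
    """Aggregate (here: simple mean) only the elite forecasters' probabilities."""
    kept = [p for name, p in current_probs.items() if name in elite]
    return sum(kept) / len(kept)

# Example with three forecasters' prior-year mean Brier scores.
elite = select_elite({"ann": 0.12, "bob": 0.25, "cal": 0.40}, top_frac=0.34)
print(elite)                                                         # {'ann'}
print(elite_forecast({"ann": 0.8, "bob": 0.5, "cal": 0.6}, elite))   # 0.8
```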

This makes sense for a manager who plans to ask roughly 1300 or more participants roughly 150 or more questions over four or more years, and who trusts some subordinate to judge exactly how to select this elite, and how to set the complex polling parameters if a polling mechanism is used. But I’ve been mainly interested in using prediction markets as public institutions for cases where there’s a lot of distrust re motives and rationality, such as in law, governance, policy, academia, and science. And in such contexts, I worry a lot more about the discretionary powers required to implement an elite selection system.

To see my concern, consider stock markets, whose main social function is to channel investment into the most valuable opportunities. More accurate stock prices better achieve this function, and the above results suggest that we’d get much more accurate stock prices by greatly limiting who can speculate in stock markets....

....MUCH MORE