Pseudo-mathematics and financial charlatanism
Your financial advisor calls you up to suggest a new investment scheme. Drawing on 20 years of data, he has set his computer to work on this question: If you had invested according to this scheme in the past, which portfolio would have been the best? His computer assembled thousands of such simulated portfolios and calculated for each one an industry-standard measure of return on risk. Out of this gargantuan calculation, your advisor has chosen the optimal portfolio. After briefly reminding you of the oft-repeated slogan that "past performance is not an indicator of future results", the advisor enthusiastically recommends the portfolio, noting that it is based on sound mathematical methods. Should you invest?
The somewhat surprising answer is, probably not. Examining a huge number of sample past portfolios---known as "backtesting"---might seem like a good way to zero in on the best future portfolio. But if the number of portfolios in the backtest is so large as to be out of balance with the number of years of data in the backtest, the portfolios that look best are actually just those that target extremes in the dataset. When an investment strategy "overfits" a backtest in this way, the strategy is not capitalizing on any general financial structure but is simply highlighting vagaries in the data.
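The effect is easy to reproduce. Here is a minimal sketch (not from the paper; all numbers are hypothetical) in which thousands of candidate strategies have returns that are pure noise, yet the best backtest still looks impressive---and then evaporates on fresh data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10,000 candidate strategies whose daily returns
# are pure noise (zero true edge), backtested over ~5 years of data.
n_strategies, n_days = 10_000, 1_260
in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

def annualized_sharpe(returns):
    # Sharpe ratio: mean return over volatility, annualized (252 trading days).
    return returns.mean(axis=-1) / returns.std(axis=-1) * np.sqrt(252)

sharpes = annualized_sharpe(in_sample)
best = sharpes.argmax()
print(f"best in-sample Sharpe: {sharpes[best]:.2f}")   # looks impressive

# Out-of-sample test: fresh data for the chosen strategy.
# With no true edge, its Sharpe reverts toward zero.
out_of_sample = rng.normal(0.0, 0.01, size=n_days)
print(f"same strategy out of sample: {annualized_sharpe(out_of_sample):.2f}")
```

The "winning" strategy was selected precisely because it rode the extremes of one particular dataset, which is the overfitting mechanism the authors describe.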
The perils of backtest overfitting are dissected in the article "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance", which will appear in the May 2014 issue of the NOTICES OF THE AMERICAN MATHEMATICAL SOCIETY. The authors are David H. Bailey, Jonathan M. Borwein, Marcos Lopez de Prado, and Qiji Jim Zhu.
"Recent computational advances allow investment managers to methodically search through thousands or even millions of potential options for a profitable investment strategy," the authors write. "In many instances, that search involves a pseudo-mathematical argument which is spuriously validated through a backtest."
Unfortunately, the overfitting of backtests is commonplace not only in the offerings of financial advisors but also in research papers in mathematical finance. One way to lessen the problems of backtest overfitting is to test how well the investment strategy performs on data outside of the original dataset on which the strategy is based; this is called "out-of-sample" testing. However, few investment companies and researchers do out-of-sample testing.
The design of an investment strategy usually starts with identifying a pattern that one believes will help to predict the future value of a financial variable. The next step is to construct a mathematical model of how that variable could change over time. The number of ways of configuring the model is enormous, and the aim is to identify the model configuration that maximizes the performance of the investment strategy. To do this, practitioners often backtest the model using historical data on the financial variable in question. They also rely on measures such as the "Sharpe ratio", which evaluates the performance of a strategy on the basis of a sample of past returns...
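For readers unfamiliar with the measure, here is a short sketch of the standard Sharpe ratio calculation (the return figures are made up for illustration):

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from a sample of periodic returns.

    Standard form: mean excess return divided by the sample standard
    deviation of returns, scaled by sqrt(periods per year).
    """
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods_per_year)

# Illustrative daily returns (hypothetical numbers):
daily = [0.001, -0.002, 0.003, 0.0005, 0.002]
print(f"annualized Sharpe: {sharpe_ratio(daily):.2f}")
```

Because the ratio is estimated from a finite sample of past returns, it is itself a noisy statistic, which is exactly why selecting the maximum over thousands of backtests inflates it.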
HT: Ritholtz@Bloomberg
Previously on the Mountebank channel:
UPDATED--Are You a Recent Graduate Who Hasn't Found a Job? Consider Becoming a Charlatan
Follow-up: Choosing the Charlatan Career Path
Re-post: Peak Oil Stalwart to Shutter Forum/News Site, Pursue Career as Astrologer
See also:
Technical analysis
Fundamental analysis
Divination for Dummies
Pitfalls in Prognostication: Fortune Magazine's August 2000 "Ten Stocks to Last the Decade"