
A bit off topic today, but we think those who work for a living, and hope one day not to have to, may find this of interest in the pursuit of sound retirement investment strategies.

We are gradually learning these past few years that when it comes to making money by investing in the stock market, the lowly “exchange traded fund” (or ETF) that tracks a broad basket of stocks across many companies or sectors (S&P 500 index funds being among the more obvious examples) generally beats the returns of funds managed by human money managers.  In other words, the overall market, over time, will exceed what most of even the best money managers can do for you.

This of course is disconcerting to those who earn their living picking those stocks, or selling their funds.  Nonetheless, it’s proving true.  According to several sources we’ve reviewed, something like 86% of managed funds do NOT beat the market, or even the so-called “benchmarks” they are measured against.

The core of the problem is that it is very hard to beat the market.  Obviously.  But in this era of rocket-scientist, algorithm-driven, quant-based, big-data stock picking… that doesn’t mean folks aren’t trying. And that may be the problem.

With today’s computing power and increasing wealth of raw data, it is possible to test thousands, even millions of data sets, ideas and trading philosophies.  The standard method of doing this involves something called “backtesting,” in which someone comes up with a market hypothesis, and then looks back over, say, twenty years, to see how their strategy would have performed against real markets, with their unpredictable ups and downs, over that time.  The results are then validated against “out-of-sample” data: market history that was not used to create the original technique.
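For the curious, the mechanics fit in a few lines of code.  The sketch below is purely our own illustration, not anyone’s real trading model: a toy moving-average strategy is tuned on the first stretch of a made-up price history (the in-sample period), then re-run on a held-out stretch it never saw (the out-of-sample period).

```python
import random

random.seed(42)

# Synthetic daily "prices": a random walk standing in for years of market data.
prices = [100.0]
for _ in range(2500):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

def backtest(prices, window):
    """Toy strategy: stay in the market only on days when the price sits
    above its `window`-day moving average; return total compounded gain."""
    wealth = 1.0
    for t in range(window, len(prices) - 1):
        moving_avg = sum(prices[t - window:t]) / window
        if prices[t] > moving_avg:          # signal says "hold"
            wealth *= prices[t + 1] / prices[t]
    return wealth - 1.0

# Split the history: tune on the first 80%, validate on the held-out 20%.
split = int(len(prices) * 0.8)
in_sample, out_of_sample = prices[:split], prices[split:]

# Pick whichever moving-average window looked best historically...
best_window = max(range(5, 100, 5), key=lambda w: backtest(in_sample, w))
in_ret = backtest(in_sample, best_window)
# ...then run the same knob setting against data it never saw.
out_ret = backtest(out_of_sample, best_window)

print("best window:", best_window)
print("in-sample return:     %+.1f%%" % (100 * in_ret))
print("out-of-sample return: %+.1f%%" % (100 * out_ret))
```

Done honestly, the out-of-sample step is the whole point: it is the only part of the exercise the tuning process could not have peeked at.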

But in the wrong hands, things can go, well… wrong.  Among researchers, analysts and economists, there is a powerful temptation to get published in finance journals.  Too often, this leads to “torturing the data,” as a recent Bloomberg Businessweek article pointed out (April 2017).  This in turn has led to some exchange-traded funds using flawed statistical techniques, according to a couple of experts at Duke University, implying that “half the financial products promising outperformance that companies are selling to clients are false.”

For example, a batch of research involving United Nations data once found that the best (backtested) predictor of stock performance in the S&P 500 was butter production in Bangladesh.  That is to say, out of millions of data sets, tested backwards in time, the one that came closest to tracking the S&P 500’s actual market performance over time was the output of butter production in a third world nation half-way around the world.

Researchers can “twist the knobs” on their assumptions in search of a prized “anomaly” that they can write about – or sell.  They can vary, say, the “time period covered, or the set of securities under consideration, or even the statistical method,” according to Bloomberg’s Peter Coy.  Negative findings get round-filed; positive findings get published — or made into an ETF whose performance we may be relying upon for our retirement.  With enough tests, notes Coy, “eventually by chance even your safety check will show the effect you want.”
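Coy’s point is easy to reproduce at home.  The little experiment below (again, our own hypothetical sketch) generates a thousand streams of pure noise, cherry-picks the one most correlated with an equally random “market” — our stand-in for butter production in Bangladesh — and then runs the safety check on fresh data, where the apparent fit typically evaporates.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Two independent stretches of random "market returns".
market_in = [random.gauss(0, 1) for _ in range(250)]
market_out = [random.gauss(0, 1) for _ in range(250)]

# A thousand meaningless "predictors" -- stand-ins for butter output,
# sunspots, hemlines... -- each 500 days long, split the same way.
signals = [[random.gauss(0, 1) for _ in range(500)] for _ in range(1000)]

# "Twist the knobs": keep whichever predictor fit the past best.
best = max(signals, key=lambda s: abs(correlation(s[:250], market_in)))

fit_in = abs(correlation(best[:250], market_in))    # the best of 1,000 tries
fit_out = abs(correlation(best[250:], market_out))  # checked on unseen data

print("best in-sample fit:      %.2f" % fit_in)
print("same signal, fresh data: %.2f" % fit_out)
```

The winning signal is pure chance — it contains no information at all — yet after a thousand tries it will look like a genuine market predictor until it is confronted with data it was never fitted to.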

So next time you read about some great new investment strategy vetted by a Wall Street hedge fund’s top “quants” (the math wizards who come up with stuff)… take a deep breath, turn the page, and leave your long-term funds in a plain old vanilla stock market index fund from Vanguard, Fidelity or American Funds.

It will help to ensure that when it’s time to retire, your money will be ready too.
