Swedroe: Problems With The Factor Zoo

May 19, 2017

Since the mid-1990s, factor-based exchange-traded funds have experienced spectacular growth. By mid-2016, these funds had about $1.35 trillion under management, accounting for about 10% of the market capitalization of U.S.-traded securities.

At its most basic level, factor-based investing is simply about defining, and then systematically following, a set of rules that produce diversified portfolios. An example of factor-based investing is a value strategy: buying cheap (low valuation) assets and selling expensive (high valuation) assets.

A problem with factor-based investing is that smart people with even smarter computers can find factors that have worked in the past but are not real—they are the product of randomness and selection bias (referred to as data snooping, or data mining).

The problem of data mining is compounded when researchers snoop without first having a theory to explain the finding they expect—or hope—to find. Without a logical explanation for an outcome, one should not have confidence in its predictive ability.

The Problem Of P-Hacking
“P-hacking” refers to the practice of reanalyzing data in many different ways until you get a desired result. For most studies, statistical significance is defined as a “p-value” less than 0.05—meaning a difference as large as the one observed would arise by chance alone less than 1 time in 20. That may seem like a high hurdle to clear to prove that a difference is real. However, what if 20 comparisons are done and only the one that “looks” significant is presented?
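The multiple-comparisons arithmetic behind that question can be sketched with a short simulation (illustrative only; the 20-test count and 0.05 threshold come from the paragraph above, and the function name is mine):

```python
import random

random.seed(0)

def family_false_positive_rate(n_tests, alpha=0.05, trials=10_000):
    """Fraction of trials in which at least one of n_tests independent
    null comparisons comes out 'significant' purely by chance."""
    hits = 0
    for _ in range(trials):
        # under the null, each test clears the p < alpha bar with prob. alpha
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

# One test keeps false positives near 5%, but running 20 tests and
# reporting only the "significant" one inflates the rate toward
# 1 - 0.95**20, roughly 64%.
print(family_false_positive_rate(1))
print(family_false_positive_rate(20))
```

In other words, a researcher who quietly tries 20 factor definitions has better-than-even odds of finding a "significant" one in pure noise.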

The problem of data mining, or p-hacking, is so acute that Professor John Cochrane famously said that financial academics and practitioners have created a “zoo of factors.” Indeed, a May 11, 2017, article in the Wall Street Journal states: “Most of the supposed market anomalies academics have identified don’t exist, or are too small to matter.”

In their 2014 paper “Long-Term Capital Budgeting,” authors Yaron Levi and Ivo Welch examined 600 factors from both the academic and practitioner literature. And authors Campbell Harvey (past editor of The Journal of Finance), Yan Liu and Heqing Zhu, in their paper “…and the Cross-Section of Expected Returns,” which was published in the January 2016 issue of the Review of Financial Studies, reported that 59 new factors were discovered between 2010 and 2012 alone.

Kewei Hou, Chen Xue and Lu Zhang contribute to the literature on anomalies and market efficiency with their May 2017 paper “Replicating Anomalies.” They conducted the largest replication of the entire anomalies literature, compiling a data library with 447 anomaly variables.

The list includes 57, 68, 38, 79, 103 and 102 variables from the momentum, value-versus-growth, investment, profitability, intangibles and trading frictions categories, respectively. To control for microcaps that are smaller than the 20th percentile of market equity for New York Stock Exchange (NYSE) stocks, they formed testing deciles with NYSE breakpoints and value-weighted returns. They treated an anomaly as a replication success if the average return of its high-minus-low decile is significant at the 5% level (t ≥ 1.96).
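The paper's replication criterion—a high-minus-low decile whose average return is significant at the 5% level—can be illustrated with a minimal t-statistic calculation. The return series below is hypothetical, invented for illustration; it is not data from the study:

```python
import math

def t_statistic(returns):
    """t-stat of a series' mean: mean / (sample std / sqrt(n))."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical monthly high-minus-low decile returns, in percent
hml_returns = [0.8, -0.3, 1.1, 0.4, -0.6, 0.9, 0.2, 1.3, -0.1, 0.5,
               0.7, -0.4, 1.0, 0.3, 0.6, -0.2, 0.8, 0.1, 0.9, 0.4]

t = t_statistic(hml_returns)
# Replication "success" under the paper's criterion: t >= 1.96
print(t >= 1.96)
```

An anomaly whose long-short spread fails this bar—as most do once microcaps are down-weighted via NYSE breakpoints and value weighting—counts as a failed replication.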

Following is a summary of their findings:

