As the director of research for The BAM Alliance—a community of about 140 like-minded RIA firms that believe in providing a fiduciary standard of care using an evidence-based investment strategy—I often get requests from other advisors for help answering questions from clients about articles they’ve read in the financial media.
As such, I thought I’d share my thoughts on a January article from Pension Partners LLC, an investment advisor and manager of the ATAC Rotation mutual funds, titled: “Do Small Caps Really Outperform Over Time?”
The article makes the case that while small-caps have outperformed over the full history for which we have data (beginning in 1926), they have underperformed since 1979. The article supports this contention by comparing the return of the Russell 2000 Index (R2K) to that of the S&P 500 Index over the 37-year period from 1979 (when the R2K was introduced) through the end of 2015.
During this period, the article states, the R2K underperformed the S&P 500 by 1.4 percentage points (11.7% for the S&P 500 versus 10.3% for the R2K). I went and checked the data, and while the R2K did underperform, it actually returned 11.4%, not 10.3%. Thus, its underperformance was just 0.3 percentage points.
It’s interesting to note Pension Partners’ use of the R2K to represent small-cap funds, because it’s well-known that there are problems with that index—problems that have allowed active managers to front-run the indexers. This led to poor returns on the index, and eventually most index funds (such as Vanguard’s small-cap fund) abandoned the use of that benchmark.
A Better Small-Cap Benchmark
A superior small-cap index is the CRSP (Center for Research in Securities Prices at the University of Chicago) 6-10 Index. For the period January 1979 through November 2015, the CRSP 6-10 returned 13.0%, outperforming the S&P 500 Index’s return of 11.8% by 1.2 percentage points per annum.
During the same period, the R2K returned 11.6%, underperforming the very similar CRSP 6-10 Index by 1.4 percentage points. Thus, simply by using a more appropriate index, the supposed underperformance of small-caps disappears; in fact, they outperformed.
I would also note that the 1.2 percentage point outperformance of the CRSP 6-10 Index for the period January 1979 through November 2015 isn’t that far off from the performance differential in the period prior to 1979. From 1926 through 1978, the CRSP 6-10 Index returned 10.5% per year, outperforming the S&P 500 Index’s return of 8.9% per year by 1.6 percentage points per year.
I’d add that a smaller small-cap premium in more recent decades shouldn’t really come as a surprise. Small-caps are more expensive to trade, especially in tough times, so investors should demand a liquidity premium for bearing that risk. And liquidity costs have dropped sharply as bid/offer spreads have fallen since the advent of decimalization and with the impact of high-frequency traders.
Further, thanks to financial innovation, investors can now access less-liquid, small-cap equities indirectly through low-cost index funds and ETFs. These regime changes also help explain why the equity risk premium is now lower as well. However, we should not be surprised that there has been a small-cap premium. And we should continue to expect one in the future.
Explaining The Small-Cap Premium
What makes small-caps riskier than large-caps? Another way of asking this question is: What is the source of the small-cap premium? There are numerous intuitively logical explanations, all of which are well-known. Relative to large companies, small companies typically are characterized by:
- Greater leverage.
- A smaller capital base, reducing their ability to deal with economic adversity.
- Fewer and more expensive alternatives in terms of access to capital. Their lower levels of collateral leave them more susceptible to tight credit conditions that exist during recessions.
- Greater volatility of earnings.
- Lower levels of profitability.
Other explanations might include:
- A less-proven, or even unproven, track record for the business model.
- Less depth of management.
Furthermore, small-caps are more volatile than large-caps. The standard deviation of small-caps has been about 27% versus about 19% for large-caps. That’s a relative difference of approximately 42%. Additionally, small-caps tend to perform relatively poorly in bad times—and assets that do poorly in bad times should require a risk premium.
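A quick check of that arithmetic, using the round standard-deviation figures above (the exact values depend on the sample period, so treat this as illustrative):

```python
# Approximate historical annualized standard deviations cited above
small_cap_sd = 0.27  # small-caps
large_cap_sd = 0.19  # large-caps

# Relative difference: how much more volatile small-caps have been than large-caps
relative_diff = (small_cap_sd - large_cap_sd) / large_cap_sd
print(f"Small-caps have been roughly {relative_diff:.0%} more volatile")  # roughly 42%
```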
Also, an interesting 2002 study, “Monetary Policy and the Cross-Section of Expected Returns” by Gerald Jensen and Jeffrey Mercer, examined the relationship between economic-cycle risk and the size effect. They found that when size is isolated, there is a significant small-firm premium only in periods of expansionary monetary policy. In restrictive periods, the size effect is not statistically significant.
The authors concluded that monetary policy has a significant impact on the size effect. Good economic times generally occur when the Federal Reserve is either expansionist in its policy or simply “leaning against the wind,” and bad times occur when the Fed is being restrictive in its policy.
Jensen and Mercer also observed that since small firms are typically highly leveraged, they are more negatively impacted in their ability to access capital during periods of restrictive monetary policy. Thus, small and value firms are more susceptible to distress in times of restrictive monetary policy (a weak economy).
These relationships have all been well-documented and are well-known. An important tenet of financial theory is that one must differentiate between information and value-relevant information.
In other words, if the market already knows the information, it should already be embedded in prices, making it unlikely that investors can exploit it. The relationship between small-cap stocks and economic-cycle risk is clearly very well known, so it seems unlikely that investors could benefit from it.
Torturing The Data
There’s a wonderful new book out by economics professor Gary Smith: “Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie with Statistics.” In it, Smith shows through dozens of examples how data is frequently manipulated, ransacked or “tortured until it confesses.” The R2K example demonstrates how easy it is to manipulate data to tell the story you want to convey. This brings me to another concerning issue with the article.
While Pension Partners stated—in my opinion incorrectly—that small-cap stocks had underperformed large-cap stocks since 1979, they did note that there were periods in which small-caps had outperformed. They then described a metric, the relative performance of lumber versus gold, that allows them to tactically allocate—swing between small-caps and large-caps—and exploit those shifts.
The article cites a paper written by Pension Partners that shows the evidence. I’ve learned it is important to be skeptical when presented with metrics that have high correlation; the reason is that “correlation” doesn’t mean “causation.” And in the era of big data, it’s easy to torture the data to uncover some correlation that appears to explain returns.
What we don’t know is whether the researchers at Pension Partners actually had the idea to use this metric, the relative performance of lumber versus gold, or whether they tortured the data until they came up with some metric that appeared to explain the relationship.
With high-speed computers, it’s possible to test thousands of relationships until you find the one that “works.” If you do that, you’ll almost certainly find some metric with a high correlation, just as you can find that the winner of the Super Bowl has a high correlation with the stock market’s return that year. But unless you come up with the theory first, the data may not have any value.
As Gary Smith explains, because of the problems with big data, it’s important that any research start not with data, but with a theory. The data should be used only to find evidence that supports the theory, or doesn’t. Unfortunately, too many people work in reverse, mining the data and only then coming up with (concocting) a theory to support the data.
Smith writes: “We shouldn’t be persuaded by anything less than overwhelming evidence, and even then be skeptical.” What’s more, be especially skeptical if a theory sounds nonsensical. Smith states: “Extraordinary claims require extraordinary evidence.” And be sure to require out-of-sample evidence. He then explains: “It is not sensible to test a theory with the very data that were ransacked to concoct the theory.”
Here’s another problem I have with the use of the lumber-versus-gold metric. As I explained, it’s long been documented, and thus well known, that there’s a relationship between the relative performance of small- and large-caps dependent on the state of the economy. Thus, why should you believe that you can benefit from this information?
One way to test whether investors are likely to benefit from this information is to look at the performance of funds that tactically allocate assets (TAA funds). Morningstar has done a series of studies on tactical allocation funds, and in each case, they found that these funds underperformed. For example, Morningstar’s most recent study found that during the three years ending July 2014, TAA funds gained an annual average of 7.8%, or 3.8 percentage points per year behind their benchmarks. This finding was consistent with their prior ones.
Another study found that for the 12-year period ending in 1997, the S&P 500 Index on a total return basis rose 734%. The average return during that period for 186 TAA funds was a mere 384%, or roughly half the return of the S&P 500 Index. Charles Ellis, in his book “Investment Policy,” cited a study on the performance of 100 pension plans that used TAA and found that not a single plan benefited from its efforts. Not one. Even randomly, we would have expected some to succeed. Yet none did.
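To put those cumulative figures on the same annualized footing used elsewhere in this article, the cumulative returns can be converted to compound per-year rates (a sketch using only the numbers quoted above):

```python
def annualize(cumulative_return: float, years: int) -> float:
    """Convert a cumulative return (e.g. 7.34 for +734%) to a compound annual rate."""
    return (1 + cumulative_return) ** (1 / years) - 1

# 734% and 384% cumulative over the 12-year period ending in 1997
print(f"S&P 500:       {annualize(7.34, 12):.1%} per year")
print(f"Avg TAA fund:  {annualize(3.84, 12):.1%} per year")
```

That works out to roughly 19.3% versus 14.0% per year, showing how a gap that looks like “half the return” cumulatively translates into an annual shortfall of more than 5 percentage points.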
TAA Vs. DFA
There’s another way we can test the assertion that investors are likely to benefit from tactically allocating assets between small-cap and large-cap stocks. We can examine the relative performance of small-cap funds from Dimensional Fund Advisors (DFA), which are passively managed. (In the interest of full disclosure, my firm, Buckingham, recommends DFA funds in constructing client portfolios.)
DFA funds don’t engage in any tactical asset allocation or individual stock selection. If it were likely that investors could benefit from tactically allocating, we would expect a large percentage of actively managed small-cap funds to have outperformed the DFA funds. We can test whether this is true using Morningstar data. The table below shows the percentile performance ranking of DFA’s small-cap funds (lower is better).
| Fund | Percentile Ranking |
| --- | --- |
| DFA U.S. Micro Cap (DFSCX) | 22 |
| DFA U.S. Micro Cap (DFSCX) | 22 |
| DFA U.S. Small Value (DFSVX) | 15 |
| DFA International Small (DFISX) | 40 |
| DFA International Small Value (DISVX) | 1 |
| DFA Emerging Markets Small (DEMSX) | 1 |
| Average DFA Ranking | 17 |
The six DFA funds, on average, outperformed 83% of actively managed funds, which had the ability to take advantage of the well-known relationship described by Pension Partners. And it is important to understand that Morningstar’s data contains survivorship bias. About 7% of actively managed funds disappear each year (likely due to poor performance). Thus, the longer the period, the worse the bias becomes.
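To see why that bias compounds, consider the stated attrition rate (a rough sketch; the 7% figure is the article’s approximation, and actual attrition varies by year):

```python
attrition = 0.07  # approximate share of active funds disappearing each year

# Fraction of the original fund universe still alive after n years
for years in (5, 10, 15):
    surviving = (1 - attrition) ** years
    print(f"After {years:2d} years, about {surviving:.0%} of the original funds remain")
```

Because the funds that disappear are disproportionately poor performers, rankings computed only against the survivors understate how well the DFA funds actually did, and the understatement grows with the length of the period.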
I want to repeat that I have no idea whether the team at Pension Partners came up with the lumber-versus-gold metric after ransacking the data or before testing it. We don’t know how many other relationships they may have tested before finding that one. Again, investors should be suspicious of claims that they can benefit from well-known information.
As Gary Smith teaches us, we should be skeptical of claims that fail to provide out-of-sample evidence. For example, Pension Partners could have bolstered its case by presenting evidence that the strategy worked in other countries.
Later this week, we’ll continue our look at whether or not small-cap stocks outperform over time, but we’ll extend the discussion by examining their returns through a multifactor, rather than single-factor, lens.
Larry Swedroe is the director of research for The BAM Alliance, a community of more than 140 independent registered investment advisors throughout the country.