Swedroe: Valuation Metrics In Perspective

Some metrics have predictive value, but not enough to make market timing worth it.

Reviewed by: Larry Swedroe | Edited by: Larry Swedroe

It’s well-established in the literature that valuation metrics—such as the dividend yield (D/P) and the earnings yield (E/P), as well as its cousin, the Shiller CAPE 10—provide important information in terms of future expected returns. In fact, these metrics are the best that investors have for predicting long-term equity results. For instance, the Shiller CAPE 10, a cyclically adjusted price-to-earnings ratio, has been found to explain about 40 percent of the variation in returns over the next 10 years.

The negative relationship between current valuations and future returns can tempt many investors into pursuing what are often referred to as tactical asset allocation (TAA) strategies. Such investors increase their equity allocation when valuations are low relative to their historical mean, and lower it when valuations are relatively high.

Which, of course, raises the question: Does the historical evidence support market-timing strategies based on valuations? Javier Estrada sought to answer that question in his new paper, “Multiples, Forecasting, and Asset Allocation,” which was published in the Summer 2015 issue of the Journal of Applied Corporate Finance.

A Long-Term Trend

Estrada begins by showing how tempting historical results can be. For example, for the period December 1899 through December 2014, when the current price-to-earnings (P/E) ratio was less than 10, the 10-year forward return to stocks averaged 14.8 percent. In contrast, when the P/E was above 18.8, the 10-year forward return to stocks averaged just 6.3 percent. And the relationship was monotonic. As the P/E levels rose, forward returns fell. The more you paid for a dollar of earnings, the lower the 10-year return.

However, when Estrada examined one-year forward returns, the monotonic relationship broke down. For example, when the current P/E was between 10.4 and 13.3, the one-year forward return was 7.3 percent. When it was higher, between 16.4 and 18.9, the one-year forward return averaged 11.7 percent. And when the current P/E was above 19, the one-year forward return averaged 10.0 percent.

Using the dividend yield instead of the earnings yield produced similar results. Thus, while valuation metrics do provide valuable information regarding long-term returns, the evidence is much weaker when looking at the short term.

Estrada found that the correlation of the dividend yield and the 10-year forward return was -0.43. Using the earnings yield produced an even stronger negative correlation of -0.52. While the correlations of the dividend yield and the earnings yield with the one-year forward return were also negative, they were much weaker, at -0.18 and -0.10, respectively.

Testing TAA

To test the concept behind TAA, Estrada used several different tactical strategies and compared their performance to a 60 percent equity/40 percent bond portfolio, with 5 percent rebalancing bands (meaning an investor would buy stocks if the equity allocation fell below 55 percent and sell them if the allocation rose above 65 percent). Rebalancing would be done annually. Estrada’s sample consisted of monthly total return indexes for stocks and bonds between September 1899 and December 2014. Stocks in the sample were represented by the S&P 500 and bonds by 90-day U.S. Treasury bills.
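The 5 percent tolerance-band rule can be sketched in a few lines of Python. The helper below is my own illustration, not code from the paper; the dollar values in the examples are hypothetical:

```python
def needs_rebalance(equity_value, bond_value, target=0.60, band=0.05):
    """Sketch of the 5% tolerance-band rule: rebalance only when the
    equity weight drifts below 55% or above 65% of the portfolio."""
    weight = equity_value / (equity_value + bond_value)
    return weight < target - band or weight > target + band

# Equities rally from 60 to 70 while bonds stay at 40:
# 70 / 110 is about 0.64, still inside the 55-65 percent band.
print(needs_rebalance(70, 40))  # → False
# A further rally to 80: 80 / 120 is about 0.67, outside the band.
print(needs_rebalance(80, 40))  # → True
```

Note that under this rule an investor trades only when the band is breached, not on a fixed calendar schedule, which is why the static portfolio generates so few transactions.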

The TAA valuation-based strategies Estrada considered in his analysis seek to implement an aggressive portfolio when stocks are cheap, and a conservative one when stocks are expensive. When starting from the benchmark 60/40 allocation, an aggressive portfolio increases its allocation to stocks by 20 percentage points (to 80 percent) while the conservative portfolio reduces its allocation to stocks by 20 percentage points (to 40 percent).

At the close of each year, stocks are determined to be cheap (expensive) if a multiple is more than one standard deviation below (above) its long-term mean, and fairly valued if the multiple ends up within one standard deviation of its long-term mean. When stocks are fairly valued (as just defined) at the end of each year, the same tolerance bands used for the benchmark 60/40 portfolio are then applied to the valuation-based portfolios.
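The signal rule just described can be sketched as follows. The function name, parameter names and sample history are mine for illustration, not Estrada's:

```python
import statistics

def equity_target(multiple_history, current_multiple,
                  base=0.60, shift=0.20, hurdle=1.0):
    """Sketch of the valuation rule: go aggressive (base + shift) when the
    multiple is more than `hurdle` standard deviations below its long-term
    mean, conservative (base - shift) when it is more than `hurdle` above,
    and hold the 60/40 base when stocks are fairly valued."""
    mean = statistics.mean(multiple_history)
    sd = statistics.pstdev(multiple_history)
    if current_multiple < mean - hurdle * sd:
        return base + shift   # cheap: 80% equities
    if current_multiple > mean + hurdle * sd:
        return base - shift   # expensive: 40% equities
    return base               # fairly valued: stay at 60%

# Illustrative history with mean 16 and standard deviation of about 4.5:
history = [10, 14, 18, 22]
print(round(equity_target(history, 10), 2))  # → 0.8 (cheap)
print(round(equity_target(history, 16), 2))  # → 0.6 (fair)
print(round(equity_target(history, 21), 2))  # → 0.4 (expensive)
```

Raising `hurdle` to 2.0 reproduces the two-standard-deviation variant, and raising `shift` to 0.30 reproduces the more aggressive 90/10-to-30/70 version discussed below.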

Estrada also looked at a tactical strategy that required a two-standard-deviation event, as well as a more aggressive shift in allocations. Instead of a 20 percentage point shift in equity allocations, he used a 30 percentage point shift (allocations could move from 60/40 to 90/10 or 30/70).

The Evidence

Estrada found that when employing the 20 percentage point shift and one-standard-deviation rules, even before considering trading costs and taxes, results for the static 60/40 portfolio, with rebalancing, were virtually the same as for the tactical strategies (whether they used D/P, E/P or the CAPE 10). Not only were their returns virtually identical, so were their standard deviations. All the portfolios produced returns of about 8.1 percent with a standard deviation of roughly 11.0.

However, the 60/40 portfolio produced by far the fewest transactions. It required 37 rebalancing events, while the other three portfolios needed almost twice as many (from 69 to 71). The results were similar when using the more aggressive, 30 percentage point shift rule.

When Estrada looked at the 20 percentage point shift and two-standard-deviation rules, he again found similar results. In this case, the 60/40 benchmark portfolio produced the highest return (8.1 percent versus between 7.9 percent and 8.0 percent for the tactical strategies), but did so with slightly higher volatility (11.0 versus 10.6 for the tactical strategies).

Again, the static portfolio produced the fewest transactions at 37, although the gap was now smaller because of the increasing hurdle represented by two standard deviations (the three valuation-based strategies required 45 to 56 transactions, with the CAPE 10 producing the fewest). And once again, very similar results were found when the 30 percentage point shift and two-standard-deviation rules were applied.

Estrada noted that the Sharpe ratios for all the portfolios were virtually identical, at about 0.22. Thus, whether more or less aggressive shifting strategies were used, or higher or lower hurdles (one or two standard deviations) were employed, results were the same. There was no improvement in either returns or risk-adjusted returns. And this was even before considering transactions costs or taxes.

Frequency Of Rebalancing

While the historical evidence suggests that infrequent rebalancing (perhaps once a year) is sufficient to efficiently control risks, some investors do rebalance on a more frequent basis. Thus, Estrada also examined whether the performance of the strategies would benefit from monthly rebalancing.

The tactical strategy based on the P/E metric did achieve a slightly higher return and slightly lower volatility, and therefore a slightly higher risk-adjusted return, than the static strategy. However, the other two tactical strategies (the ones based on D/P and the CAPE 10) either did no better than the static strategy or did worse (with both lower returns and higher volatility).

However, the P/E-based strategy also triggers many more rebalancing events than the 60/40 strategy, from 3.5 to almost 9 times as many, depending on the choice of hurdle (one or two standard deviations) and the size of the allocation shift (20 or 30 percentage points). Thus, the 60/40 portfolio has much lower transaction and tax costs.

Estrada also noted that while the P/E-based strategy performs slightly better under monthly rebalancing than under annual rebalancing, the opposite is the case for the other three strategies considered, which raises the question of whether data mining has occurred. Torture the data with enough strategies and eventually it will relinquish a confession in the form of a strategy that worked in sample.

Estrada concluded: “The evidence does not support the superiority of valuation-based strategies; if anything, it points moderately in the opposite direction. In fact, the slight advantage of the 60-40 portfolio does not even take into account that this strategy does not require investors to track the historical performance of multiples and to evaluate whether they signal overvaluation, undervaluation, or fair valuation. In other words, simplicity would add another vote for the 60-40 portfolio.”

The bottom line is that the bulk of the evidence on valuation-based strategies seems to suggest that it’s difficult to find some trading rule that would have significantly outperformed a balanced portfolio in the past, even with the benefit of hindsight. What’s more, that’s without taking into account the real-world trading costs that all investors incur, or the taxes that taxable investors must face.

And as Estrada noted: “The evidence also seems to suggest that such rules would have been nearly impossible to determine ex-ante, in addition to being psychologically very difficult to implement. The evidence in this article adds to the doubts about the past and future success of valuation-based strategies.”

Estrada also warned that it’s always possible to look back, manipulate the data enough, and “find some thresholds for the multiples that would have produced valuable signals and successful strategies.”

That said, there’s an important message here that should not be lost. As Estrada notes: “The fact that multiples are not helpful for asset allocation should not be interpreted as suggesting that they are not helpful for forecasting long-term returns. In fact, the evidence … suggests that multiples do have predictive power in the long term.” And that is why my firm uses the CAPE 10 in providing our estimate of future long-term returns.

There’s one more important point I would like to add regarding the use of metrics like the CAPE 10 to time the market instead of using them only for forecasting long-term expected returns, as we do. It’s that there is a very good reason TAA strategies based on valuation metrics are not likely to work.

Even if multiples are relatively high, generally an equity risk premium is still at work (although there wasn’t one in March 2009 when the D/P and E/P were both well below the yield on virtually riskless 10-year TIPS, a pretty good sign that there was a bubble). And investors reducing their exposure to equities when a premium exists are forgoing that premium.

Ignore The 135-Year Mean

Furthermore, as explained in an article I wrote earlier this year, there are many logical explanations for why using a 135-year mean for the CAPE 10 isn’t appropriate. Using a strategy that relies on a metric reverting to its long-term mean of 16.6 is, in my view, based on bad assumptions. Following is a summary of some key issues from that article.

The fact is that our economy is much less volatile than it was 100 years ago. In addition, the United States was a much less wealthy country 100 years ago; capital was much scarcer, resulting in low valuations. Also, 100 years ago, we didn’t have the Federal Reserve. And there was no SEC, or a Financial Accounting Standards Board, either.

Moreover, trading costs today are much lower, and the expense ratios of mutual funds and ETFs are much lower as well. All of these “regime changes” should lead to the equity risk premium demanded by investors migrating lower. Since 1960, the mean of the CAPE 10 has been about 20. It’s also been about 20 since 1970, and above 21 since 1980. If you are expecting mean reversion, perhaps a mean of 20 is a more appropriate one at which to be looking.

And there have been accounting changes, which, while providing a better measure of earnings, also make current valuations look much higher than they actually are relative to past valuations. It’s estimated that adjusting for accounting changes (FAS 142 and 144, which have to do with the writing off of intangible assets) would push the current CAPE 10 down by about 4 points.

Additionally, dividend payout ratios are much lower, which should lead to higher earnings growth, justifying higher valuations. Adjusting for that would change the CAPE 10 downward by another 1 point.

Using the CAPE 10’s average of 20 since 1960, and adding the adjustments for the accounting changes and dividend payout ratios, brings us to an adjusted CAPE 10 of about 25, which is right about where it is now. And finally, cash on corporate balance sheets is above the long-term average, and debt ratios are below it.
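The back-of-the-envelope arithmetic above, using the article's rounded figures, is simply:

```python
# Back-of-the-envelope adjustment using the article's rounded figures.
post_1960_mean = 20   # mean of the CAPE 10 since 1960
accounting_adj = 4    # estimated effect of FAS 142/144 accounting changes
payout_adj = 1        # effect of lower dividend payout ratios

adjusted_cape_benchmark = post_1960_mean + accounting_adj + payout_adj
print(adjusted_cape_benchmark)  # → 25
```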

Comparing valuations across time without taking the above factors into account is a mistake, because more cash and lower debt ratios make equities less risky, and justify higher valuations. Thus, even if you believed in some magical reversion to the mean of the CAPE 10, when viewed through the proper lens, it doesn’t really look overvalued at all, just highly valued.



Larry Swedroe is a principal and the director of research for Buckingham Strategic Wealth, an independent member of The BAM Alliance, a community of more than 140 independent registered investment advisors throughout the country. Previously, he was vice chairman of Prudential Home Mortgage.
