Sometimes the hype is real.
My recent articles decrying active management and poking holes in seeming outperformance have filled my inbox with a bit of backlash. You could sum it all up by saying, “Nadig hates everything that’s not 10 basis point vanilla indexing.”
It’s a fair comment, I suppose, since I am an academic finance geek at heart.
But I want to be clear—it’s not that I don’t think consistent, risk-adjusted outperformance is possible; I just think it’s incredibly difficult, and easy to fake.
Difficult, because you’re playing in the same sandbox as the rest of the market, and easy to fake because as long as the market at large is in any kind of trend, you can “beat the market” just by adjusting your beta.
Take a bit more risk, and you’ll outperform in up markets. Take a bit less risk, and you’ll outperform in down markets.
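A toy calculation makes the point. The numbers here are hypothetical, but the arithmetic is just the standard market model (fund return = alpha + beta × benchmark return): in a trending market, dialing up beta manufactures “outperformance” with zero stock-picking skill.

```python
# Hypothetical numbers: a 10% up-market year and a fund with NO alpha.
benchmark_return = 0.10
alpha = 0.0

for beta in (0.8, 1.0, 1.2):
    fund_return = alpha + beta * benchmark_return
    print(f"beta={beta:.1f} -> fund return {fund_return:.1%}")

# The beta-1.2 fund "beats" the benchmark 12% to 10%—and would trail
# by the same mechanism the moment the market turned down.
```

The same lever works in reverse: cut beta to 0.8 and you “outperform” in a down year. None of it is alpha.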
That’s why we always talk about “risk-adjusted” alpha here at ETF.com. What that means is that when we look at a fund, we don’t just check whether it beat its segment benchmark; we run a regression analysis of the fund versus that benchmark and look at beta capture, and ultimately, alpha.
We test that alpha, and we only report it if it passes a fairly strict significance test. Unless the math says “we’re 90 percent sure the outperformance of this fund is not based on random chance,” we assume it’s random.
If you’re not a stats person, that may sound harsh, but it’s actually pretty loose by stats standards (95 percent is more common).
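For readers who want to see the mechanics, here is a minimal sketch of that kind of test. The data is simulated (this is not CSM or any real fund): the simulated fund is pure beta-1.2 exposure with no true alpha, and we ask whether the regression’s intercept clears a 90 percent significance bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated monthly excess returns (hypothetical data):
# a fund that is beta=1.2 benchmark exposure plus noise, true alpha = 0.
n = 60
bench = rng.normal(0.008, 0.04, n)            # benchmark excess returns
fund = 1.2 * bench + rng.normal(0, 0.01, n)   # fund excess returns

res = stats.linregress(bench, fund)
alpha, beta = res.intercept, res.slope

# Two-sided t-test on the intercept: is alpha distinguishable from zero?
t_alpha = alpha / res.intercept_stderr
p_alpha = 2 * stats.t.sf(abs(t_alpha), df=n - 2)

print(f"beta  = {beta:.2f}")
print(f"alpha = {alpha:.4%}/month, p-value = {p_alpha:.2f}")

# At a 90 percent confidence bar, we'd only call the alpha real if
# p_alpha < 0.10; otherwise we chalk it up to random chance.
significant = p_alpha < 0.10
```

The key point is that beta capture and alpha are estimated together: a fund that’s just levered beta will show a slope near 1.2 and an intercept that fails the significance test.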
So do any funds ever throw off real alpha? Let’s look at one in particular that I find intriguing: the ProShares Large Cap Core Plus ETF (CSM | B-89).
CSM follows the Credit Suisse 130/30 Index, which is one of my favorite wonky academic takes on “smart beta.” The premise is pretty simple: Start with the S&P 500, and based on a bunch of quantitative metrics, figure out which stocks are “better” and which stocks are “worse.”
The metrics used are largely the same ones everyone is looking at: momentum, profitability, growth, value metrics and a few technical indicators. You run everything through a giant black box, and you get a rank ordering of the good and the bad.
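To make the structure concrete, here is a toy sketch of a 130/30 construction. The scores and tickers are invented, and this is not the Credit Suisse methodology—just the general shape: put 130 percent of capital in the top-ranked names, short the bottom-ranked names with 30 percent, and net out to 100 percent market exposure.

```python
# Hypothetical composite scores from the "black box" (made-up tickers).
scores = {
    "AAA": 0.9, "BBB": 0.7, "CCC": 0.5,   # ranked "better"
    "DDD": 0.3, "EEE": 0.1,               # ranked "worse"
}
ranked = sorted(scores, key=scores.get, reverse=True)

longs, shorts = ranked[:3], ranked[-2:]
weights = {t: 1.30 / len(longs) for t in longs}       # 130% long
weights.update({t: -0.30 / len(shorts) for t in shorts})  # 30% short

net = sum(weights.values())
print(weights)
print(f"net exposure = {net:.0%}")   # 130% - 30% = 100% net long
```

A real implementation would optimize weights under constraints rather than equal-weighting each side, but the 130-long/30-short skeleton is the same.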
That’s all well and good, and plenty of folks construct big portfolios from black boxes like that. The problem is, most of those black boxes just overweight the good stuff and ignore the bad stuff, leading to highly skewed portfolios with crazy industry concentrations and cap tilts.
The reason CSM is more interesting is that it was developed by a bunch of academics (notably Andrew Lo and Pankaj Patel, from MIT and ISI Group, respectively, back in 2007) who specifically wanted to avoid all of those noisy skews. So they constrain their methodology by positing: