We certainly wouldn’t bet on it.
Yet history tells us that, over our 45-year exam period, this is precisely the case.
Are we merely letting our natural biases and risk aversion get the better of us? Or is our gut telling us something our brain has yet to figure out?
At the risk of stating the obvious, the future is far less certain than the past. Here’s what we know about the past:
- Small-caps did exceptionally well, outpacing their large-cap peers by over 300 basis points per year.
- Treasuries and precious metals were excellent diversifiers to U.S. equities.
- Precious metals had exceptional early-period returns during the inflationary regime of the 1970s and early 1980s.
Yet history is just a sample size of 1. Here is what we do not know about the future:
- Whether the size premium is real.
- Whether U.S. Treasuries will continue to diversify U.S. equities (e.g., monthly correlation from 1973-1985 between long-term U.S. Treasuries and large-cap equities was 0.37).
- Whether an inflationary regime will manifest and whether precious metals will again serve as a hedge.
Our result of 60% small-cap equities, 30% long-term U.S. Treasuries, and 10% precious metals is unquestionably data-mined from this sample.
While 45 years may appear to be a sufficiently long horizon, in reality there are just a handful of meaningfully different economic regimes. A single outlier event (e.g., small-cap outperformance) can completely dominate the results. But what if that outlier was noise, not signal?
Using Randomness To Create Certainty
Is there a way to improve our answer? An obvious step would be to gather more data, either over time or across different geographies. But what if we do not have any more data?
One potential answer is subset resampling, which averages together a large number of optimizations where each optimization represents a randomly selected subset of the investable universe. In this case, we utilized the following approach:
1. Randomly select four of the eight investable assets.
2. Optimize for the portfolio that maximizes annualized return, subject to the end-of-period loss constraint.
3. If the solution is infeasible (e.g., the loss constraint cannot be satisfied after three months when T-bills are not in the selected subset), discard it.
4. Repeat steps 1-3 until 1,000 feasible solutions have been found.
5. Average the 1,000 solutions together.
The intuition behind this approach is that each individual optimization forgoes some diversification in an effort to limit the impact of estimation error. To be precise, we are not reducing estimation error itself; rather, we are reducing the influence of noise that may exist in historical returns (e.g., the magnitude of the realized size premium). Averaging the results together is a naive application of ensemble techniques, which can help decrease variance and avoid overfitting.
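The resampling loop described above can be sketched in a few lines of NumPy. Everything here is illustrative: the returns are synthetic (the article's actual 45-year asset series are not reproduced), the loss constraint is modeled as a worst rolling 36-month drawdown, and a simple random search over long-only weights stands in for whatever optimizer one would use in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 45 years of monthly returns on 8 asset classes
# (rows = months, cols = assets). Purely illustrative parameters.
n_months, n_assets = 540, 8
returns = rng.normal(0.005, 0.02, size=(n_months, n_assets))

def solve_subset(cols, horizon=36, max_loss=-0.10, n_candidates=300):
    """Random-search stand-in for the return-maximizing optimization:
    maximize annualized return subject to every rolling `horizon`-month
    cumulative return staying above `max_loss`. Returns None if no
    feasible weights are found (step 3: discard infeasible runs)."""
    sub = returns[:, cols]
    best_w, best_ret = None, -np.inf
    for _ in range(n_candidates):
        w = rng.dirichlet(np.ones(len(cols)))      # long-only, fully invested
        growth = np.concatenate(([1.0], np.cumprod(1 + sub @ w)))
        rolling = growth[horizon:] / growth[:-horizon] - 1
        if rolling.min() < max_loss:
            continue                               # violates loss constraint
        ann = growth[-1] ** (12 / n_months) - 1
        if ann > best_ret:
            best_ret, best_w = ann, w
    return best_w

# Subset-resampling loop (the article uses 1,000 solutions; we use
# 200 here to keep the sketch fast).
n_solutions, solutions = 200, []
while len(solutions) < n_solutions:
    cols = rng.choice(n_assets, size=4, replace=False)  # step 1: 4 of 8 assets
    w = solve_subset(cols)                              # step 2: optimize
    if w is None:
        continue                                        # step 3: discard
    full = np.zeros(n_assets)
    full[cols] = w                                      # embed in full universe
    solutions.append(full)

avg_weights = np.mean(solutions, axis=0)                # step 5: average
```

Because each subset omits half the universe, every asset receives zero weight in roughly half the runs; the averaged portfolio therefore tends to be far more diversified than any single optimization.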
We plot the results below, including graphs for end-of-period loss constraints of -10%, -20%, and -30%.