Nate Silver: Confidence Kills Predictions
(Editor's note: IndexUniverse's Inside Indexing conference in Boston was postponed in May due to the Boston marathon bombings. The rescheduled event is this month.)
Perhaps best known for his highly accurate election predictions, statistician Nate Silver is the creator of the blog FiveThirtyEight.com (now part of the New York Times website) and the author of “The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t.” Journal of Indexes Managing Editor Heather Bell recently spoke with Silver, a keynote speaker at Inside Indexing to be held in Boston June 17-18.
IU: You’re basically the world’s only celebrity statistician. How did you get into that line of work?
Silver: I kind of fell into it. I had a consulting job after college, and I was really bored there, so I left that to play poker and write about baseball, both of which involve a lot of math. Then the election stuff—where people are so starved for substantive or quantitative coverage of elections, as compared to what they get most of the time—took off in 2008, and then again last year, of course.
Part of the job is figuring out the balance between being very rigorous with your work and also finding ways to have fun with it, and doing TV appearances and talks to promote your idea. The book was obviously a labor of love. Books are really, really hard things to write, but the book helps make the case I’m trying to make.
IU: What I’ve taken away from the book is that the vast majority of experts are stunningly bad at making predictions.
Silver: That’s the whole irony, I guess. There are specific studies that find that the more often people go on TV, the worse advice they tend to give. When I talk to groups, I try to preach a certain amount of humility before these big, difficult problems that we face and to not tell people that if they do this and that, then magic will occur.
IU: What do you see as the common theme among bad predictions? What most often leads people astray?
Silver: A lot of it is overconfidence. People tend to underestimate the uncertainty that is intrinsic to a problem. If you ask someone to estimate a confidence interval that’s supposed to cover 90 percent of all outcomes, it usually covers only about 50 percent. Upside outcomes and downside outcomes occur in the market certainly more often than people realize.
There are a variety of reasons for this. Part of it is that we can sometimes get stuck in the recent past and the examples that are most familiar to us—kind of what Daniel Kahneman called “the availability heuristic.” We assume that the current trend will always perpetuate itself, when actually it can be an anomaly or a fluke, or we think that the period we’re living through is the “signal,” so to speak. That’s often not true—sometimes you’re living in the outlier period, like when you have a housing bubble period that you haven’t historically had before.
Overconfidence is the core linkage between most of the failures of predictions that we’ve looked at. Obviously, you can look at that in a more technical sense and see where sometimes people are fitting models where they don’t have as much data as they think, but the root of it comes down to a failure to understand that it’s tough to be objective and that we often come at a problem with different biases and perverse incentives—and if we don’t check those, we tend to get ourselves into trouble.
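The calibration gap Silver describes, where a stated 90 percent interval covers only about 50 percent of outcomes, can be sketched with a small simulation. The 2.44x understatement of uncertainty below is a hypothetical figure chosen to reproduce that gap under a normal model; it is not a number from the interview.

```python
import random

random.seed(42)

TRUE_SIGMA = 1.0
# Hypothetical overconfident forecaster: understates the true standard
# deviation by a factor of ~2.44, chosen so the "90%" interval covers ~50%.
ASSUMED_SIGMA = TRUE_SIGMA / 2.44
Z_90 = 1.645  # z-score for a two-sided 90% interval under a normal model

n = 100_000
outcomes = [random.gauss(0.0, TRUE_SIGMA) for _ in range(n)]
half_width = Z_90 * ASSUMED_SIGMA  # forecaster's stated 90% interval
coverage = sum(abs(x) <= half_width for x in outcomes) / n
print(f"Stated 90% interval actually covers {coverage:.0%} of outcomes")
```

Because the forecaster's assumed sigma is too small, the interval they believe captures 90 percent of outcomes ends up covering roughly half of them, matching the miscalibration Silver cites.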