The Signal And The Noise explains why so many predictions turn out wrong, and how statisticians, politicians and meteorologists drown in masses of data, when finding the important signals is mostly a matter of being cautious, diligent and, most importantly, human.
Nate Silver deserves some props. He correctly predicted the voting outcome in 49 of 50 US states in 2008, then nailed all 50 in 2012. Since revealing his identity and making those predictions, his popularity, and that of his blog FiveThirtyEight, where he writes about his predictions, has exploded.
Eventually, the blog was acquired by ESPN (baseball is another field Nate likes to analyze) and Silver was made editor-in-chief. Given his track record, it's also the best place to turn if you want to know who's likely to win the seemingly eternal Trump vs. Clinton battle.
In this instant New York Times bestseller, he explains why so many predictions fail and how you can use a few tools and principles to make better calls about the future.
Here are 3 lessons to help you tell the signal from the noise:
- Most economists try to predict too precisely and are too confident in their skills.
- Every good prediction needs the judgment of a human being.
- You can use Bayes’ theorem to account for errors in your own predictions.
Ready to beat the weatherman? Let’s update your statistics software!
Lesson 1: Exact numbers and accuracy estimates rarely hold up.
Here are some of the people who make a living off predictions: sports commentators and broadcasters, stock analysts, the people in charge of the weather forecast, pollsters, poker players, economists, marketers and, of course, fortune tellers. Sadly, most of the people in the other categories have more in common with the last one, a typical funfair scam, than we'd like.
After enough errors, it's hard to keep trusting these people. But why do they make so many in the first place?
Take economists as an example. They'll usually say things like: "We expect GDP (gross domestic product) to grow by 2.9% next year."
In reality, though, their analysis yielded something like: "There's a 90% likelihood that GDP growth will lie somewhere between 2.1% and 3.7% next year." That's a whole other story. Instead of just picking the middle and announcing an exact number, economists should admit that the best they can do is give an interval.
Secondly, the accuracy of that interval is often greatly overestimated. Forget the 90%; it's more like 50%. Since 1968, actual GDP growth has fallen completely outside the stated interval about half the time. They're not only wrong, they're confident about it too!
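To see how the point forecast and the interval relate, here is a minimal sketch, assuming the forecast error is roughly normal. The 2.9% point forecast and the 2.1%–3.7% bounds come from the example above; the standard deviation is a value I back-solved to make those numbers line up, not anything from the book:

```python
from statistics import NormalDist

# Model the forecast uncertainty (an assumption) as a normal distribution
# centered on the 2.9% point forecast; sigma chosen to match the text.
forecast = NormalDist(mu=2.9, sigma=0.486)

# The honest statement is the 90% interval, not the single number:
low, high = forecast.inv_cdf(0.05), forecast.inv_cdf(0.95)
print(f"90% interval: {low:.1f}% to {high:.1f}%")  # ≈ 2.1% to 3.7%
```

The point is that the single number 2.9% is just the middle of a distribution; reporting only the middle hides how wide that distribution really is.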
Lesson 2: Human judgment is a necessity for all good predictions.
Where does this hubris and false sense of certainty come from? Mostly from switching off common sense and relying solely on statistical data. Since the dawn of the internet, we have more information available than ever: over 4,000,000 economic indicators are tracked constantly, so it feels natural to rely on hard facts and statistics when making predictions.
Given this extreme amount of data, though, critical thinking and filtering based on your own reasoning have become all the more important. With so many correlated factors, some coincidences are bound to arise, and relying on them is certain to backfire eventually.
For example, for 30 years, the data suggested that if the Super Bowl winner was a team from the NFL, the stock market would post gains for the rest of the year; a win by a team from the AFL meant losses. This pattern held in 28 of the 30 years between 1967 and 1997, leaving supposedly only a 1 in 4,700,000 chance that it was a coincidence.
But guess what: it IS a coincidence, because stocks and football are totally unrelated. Since 1998, this trend has reversed.
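A back-of-the-envelope check shows where such a tiny probability comes from. This is a simple binomial model I'm assuming for illustration (each year a fair coin flip), not Silver's exact calculation, so it lands in the same astronomically-unlikely ballpark rather than reproducing his 1-in-4,700,000 figure:

```python
from math import comb

# If football results and stock returns were truly unrelated, each year
# would be a 50/50 coin flip. Probability of 28 or more "hits" in 30 years:
p = sum(comb(30, k) for k in (28, 29, 30)) / 2**30
print(f"roughly 1 in {round(1 / p):,}")
```

The lesson isn't that such odds prove a real connection; it's that when millions of indicators are tracked, a few one-in-millions flukes are guaranteed to show up somewhere.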
No matter how much technology we use to navigate the wealth of data available to us, it remains crucial to have a human being sit at the table, take a skeptical look at the analysis, and call the shots.
Lesson 3: If you want to make your predictions better, use Bayes’ theorem.
To help you make better decisions, here's a handy tool: Bayes' theorem. It comes down to a simple mathematical formula you can use to update the likelihood of something, given that a related piece of evidence is true.
A popular example used to explain it is the likelihood of having breast cancer if the result of your mammogram is positive. You might know that mammograms have a false positive rate of roughly 10% – that is, about 1 in 10 people without cancer will still get a positive result.
This might lead you to think that a positive result means you very likely have cancer. Actually, the chance is far smaller, because you need to account for how few people overall have cancer in the first place.
For example, if you factor in that only 1% of people ever develop the disease, and that even for those who have it, the mammogram comes back positive only 3 out of 4 times, the equation looks like this:
Divide the share of people who have cancer and test positive (0.75 × 0.01) by the share of ALL people who test positive – true positives plus false positives (0.75 × 0.01 + 0.1 × 0.99). The result shows that a positive mammogram still means only about a 7% chance of actually having cancer.
Note: I made those numbers up, research shows the number of people with cancer after a positive mammogram result to be roughly 10%.
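The calculation above can be sketched as a tiny function. It uses the illustrative numbers from the example (which, as noted, are made up, not real medical data):

```python
def bayes(prior, true_positive_rate, false_positive_rate):
    """P(hypothesis | positive test), via Bayes' theorem."""
    # P(positive) = positives among the ill + positives among the healthy
    evidence = (true_positive_rate * prior
                + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / evidence

posterior = bayes(
    prior=0.01,               # 1% of people have the disease
    true_positive_rate=0.75,  # positive 3 out of 4 times if ill
    false_positive_rate=0.10, # 10% of healthy people test positive
)
print(f"{posterior:.0%}")  # about 7%
```

Plugging in different rates shows how strongly the result depends on the prior: the rarer the disease, the more a positive test is dominated by false positives.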
My personal take-aways
Statistics isn't an easy topic to understand, but it's a crucial one. It's one of the best ways to counteract the cognitive biases that keep you from making good decisions. I highly recommend checking out Nate's work and this book!