How Not To Be Wrong shows you that math is really just the science of common sense and that studying a few key mathematical ideas can help you assess risks better, make the right decisions, navigate the world effortlessly and be wrong a lot less.

Over the past few weeks, all roads have led towards math and statistics for me. One of my favorite writers, Nat Eliason, has written a lot about fighting the biases that damage our decision-making, cataloging mental models on his blog. A lot of the books I’ve read through Blinkist recently were about using statistics and math to make your life easier. Finally, three of the classes I attended last week were also about statistics; one on Friday, for example, explained Bayes’ Theorem.

You can never learn too much about navigating the world better, and I feel the positive changes from studying these mental biases keep compounding. That’s why I’m happy to announce that today’s book, “How Not To Be Wrong,” falls into the same category.

Renowned mathematician Jordan Ellenberg has been writing about his mathematical research for the general public for over 15 years, which surely helped in making this book a bestseller (and one of Bill Gates’s favorites).

Here are 3 lessons from it to help you be wrong less often:

- Mathematics is mostly based on common sense, and we use it more than we think.
- Probability and risk are two different things.
- The findings of scientific research are often wrong, for three reasons.

I hope you’re ready for yet another software upgrade for your mind, because numbers don’t lie! Let’s go!

Lesson 1: You use mathematics more than you think, because it’s mostly just common sense.

The most beautiful thing about math is that it allows you to determine with 100% certainty whether something is true or not. Of course, the occasions where you apply Pythagoras’s Theorem in daily life are few, but that doesn’t mean you don’t use the underlying principles of math.

Jordan thinks of math as “the science of not being wrong.” This makes solving even common problems by intuitively using logic and reason “math” problems, though you’d never call them that.

For example, in WWII, military advisors looked at all the American planes that returned from Europe, covered in bullet holes. Because the fuselage often had a lot more holes in it than the engine, they suggested better protecting that part of the plane.

However, one mathematician, Abraham Wald, pointed out that those planes were only the ones that had survived and returned home; the planes that did take lots of shots to the engine were probably the ones that never made it back.

This is called survivorship bias: the mistake of focusing only on the positive results or data points when analyzing things. It’s the same force at play when you hear about another huge startup exit, because the media always neglect the thousands of companies that fail.

Note: I recently found a great video that explains survivorship bias in even more detail, which you can watch here.

Lesson 2: We often use probability to assess risk, but they’re not the same.

Here’s another mistake we often make: Confusing probability and risk. Because we use probability to assess how risky a bet, an investment, or an action we want to take is, we think that’s all there is to it – but it’s not.

For example, if you went to play roulette at a casino, you could simply calculate the probability of winning vs. losing money in the long run by computing what’s called your expected value. On a French roulette wheel, there are 37 numbers, ranging from 0 to 36.

Of the numbers 1 to 36, half are red and half are black; the 0 is green, a neutral color you can’t win on with a red bet. If you bet $1 on red, you have an 18/37 chance of doubling your money (because 18 of the 37 numbers are red) and a 19/37 chance of losing that dollar (the 18 black numbers plus the green 0). Your expected value therefore becomes: 18/37 × $1 (you win) − 19/37 × $1 (you lose) ≈ −$0.027.

Knowing that in the long run, you’ll lose money, you can then decide not to take this risk.
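The expected-value arithmetic above takes only a couple of lines of Python (a minimal illustration, not code from the book):

```python
# Expected value of a $1 bet on red in French roulette (37 pockets).
p_win = 18 / 37    # 18 of the 37 numbers are red
p_lose = 19 / 37   # 18 black numbers plus the green 0

expected_value = p_win * 1 + p_lose * (-1)
print(round(expected_value, 3))  # -0.027: you lose about 2.7 cents per $1 bet
```

Play long enough and that small house edge is all that remains.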

But that’s not all the risk a bet entails. Now consider this example. Would you rather…

- Get $50,000 guaranteed, or
- Have a 50:50 chance of losing $100,000 or getting $200,000?

The expected value is the same, $50,000, but because the negative result in the second scenario would be really bad, the risk is a lot higher, even though it’s not reflected in the probability at all.

You can’t use just probability to assess risk, you also have to think about how bad potential negative outcomes really are, if they do occur, and take that into account.
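To make that concrete, here’s a small Python sketch (using the hypothetical numbers from the example above) showing that the two options share an expected value but differ wildly in spread:

```python
import statistics

sure_thing = [50_000]         # option 1: guaranteed $50,000
gamble = [-100_000, 200_000]  # option 2: a 50:50 coin flip

# Identical expected values...
assert statistics.mean(sure_thing) == statistics.mean(gamble) == 50_000

# ...but completely different risk, visible in the standard deviation:
print(statistics.pstdev(sure_thing))  # 0.0      - no uncertainty at all
print(statistics.pstdev(gamble))      # 150000.0 - a massive swing either way
```

The mean hides the downside; the spread is where the risk lives.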

Lesson 3: You should always question the findings of scientific research, because there are several problems with them.

“New study shows milk is related to Alzheimer’s.” “This study reveals how much work you really do while at the office.”

News headlines like these pop up every day, but Jordan says we should always take them with a grain of salt, for three reasons:

Sometimes even insignificant results pass statistical tests. For example, at a standard 5% significance level, 5,000 out of 100,000 genes tested for causing schizophrenia will show up as positive purely by chance. If only 10 of them really cause schizophrenia, that result is useless.

Unsuccessful studies are rarely published. This is the exact survivorship bias described above. If 19 studies testing whether chocolate causes constipation find nothing, but one finds a significant correlation, that 20th one is usually the only one published, changing your perception of the issue completely.

Researchers fake results. Even with great intentions, researchers are human too. If they need just one more percentage point for a result to count as significant, they might slightly tweak the data, because they’re convinced that what they found is true.
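The first two problems lend themselves to quick back-of-the-envelope math. Here’s a Python sketch (the gene counts come from the example above; the rest is standard probability):

```python
# Reason 1: at a 5% significance level, 5% of pure-chance results still pass.
genes_tested = 100_000
false_positive_rate = 0.05
print(genes_tested * false_positive_rate)  # 5000.0 chance hits vs. ~10 real genes

# Reason 2: run 20 studies of a nonexistent effect, and the chance that at
# least one "succeeds" by luck alone is already about two in three.
p_at_least_one = 1 - (1 - false_positive_rate) ** 20
print(round(p_at_least_one, 2))  # 0.64
```

So even before anyone touches the data, chance alone guarantees a steady supply of publishable-looking “findings.”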

As you can see, statistical errors sneak their way into even the highest circles of scientific research, so it’s no surprise that they affect you too. But by becoming aware of them, you’re taking the first step towards avoiding mistakes caused by biases – like a true mathematician.

My personal take-aways

I could’ve cited every lesson – this book is awesome. It’s an unequivocal YES from me!