Has Super Thinking by Gabriel Weinberg with Lauren McCann been sitting on your reading list? Pick up the key ideas in the book with this quick summary.
What is worldly wisdom? Let’s take each word in turn. Worldliness means being concerned with ordinary life. Wisdom, on the other hand, implies knowledge of the big picture when it comes to life as a whole. But there’s more to it than that.
Simply knowing lots of isolated facts isn’t particularly useful. As the American investor Charlie Munger once put it, “If the facts don’t hang together on a latticework of theory, you don’t have them in usable form.”
And that’s our definition: worldly wisdom is about plugging our knowledge about life into theoretical models that illuminate everyday problems. So where do you find those models?
Well, that’s exactly the question Gabriel Weinberg and Lauren McCann set out to answer. Drawing on economics, physics, philosophy and a host of other disciplines, this book summary presents a set of mental models that will help you understand the world, make better decisions and take your thinking to the next level.
In this summary of Super Thinking by Gabriel Weinberg with Lauren McCann, you’ll learn
- what a fourteenth-century philosopher can teach you about dating;
- why avoiding errors is more important than being right; and
- what an Israeli daycare center can teach us about reciprocity.
Super Thinking Key Idea #1: Super thinking leverages tried-and-true concepts to help us explain the world and make better decisions.
We make dozens of decisions every day. They aren’t all big decisions, but enough bad calls can cumulatively result in overdrawn bank accounts, unhappy marriages and dead-end jobs. That means that every decision counts.
Life, however, is full of complicated conundrums and ambiguous evidence. Making up our minds often feels less like an act of reason than a stab in the dark.
Surely there’s a better way? Well, there is – super thinking, a way of understanding the world that relies on proven cognitive blueprints to make sense of the jumble of data out there.
Let’s unpack that. Every industry has its own mental models that allow practitioners to create “mental pictures” of a problem. These aren’t one-off snapshots, but techniques that can be reapplied time and again – that’s the “model” part. Put differently, they’re recurring concepts that explain the world.
Most mental models are pretty rarefied and used only by specialists. Others are much more widely applicable. These super models can help us make sense of everyday life. Take critical mass. Physicists use it to describe the minimum amount of fissile material needed to sustain a nuclear chain reaction. But it’s also a handy model in other contexts, like that of technological change.
Fax machines were invented in the 1840s, but languished in obscurity for over a century. Why? Their cost meant only a few wealthy individuals and organizations could afford to adopt the technology. That affected the perceived value of faxing: even if you bought a machine, you wouldn’t be able to communicate with anyone you knew.
As the cost came down, more people bought fax machines and more connections became possible. To put that into numbers, two devices can make one connection, five can make ten and twelve can make sixty-six. By the 1970s, faxing had reached critical mass. There were enough machines that the network itself became useful – if you had your own device, chances were you’d be able to contact anyone.
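Those figures follow from a simple pairing rule. The book doesn’t spell out the formula, but it’s the standard way to count pairs: with n machines, every pair can form one connection, so

$$\text{connections}(n) = \frac{n(n-1)}{2}, \qquad \frac{2 \cdot 1}{2} = 1, \quad \frac{5 \cdot 4}{2} = 10, \quad \frac{12 \cdot 11}{2} = 66.$$

Because the count grows with the square of the number of machines, the network’s value snowballs once adoption passes a tipping point.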
Contemporary businesses have made a killing leveraging that insight. Critical mass told ride-sharing services like Uber and Lyft, for example, how many drivers they needed in cities before people would begin relying on their services.
That’s not the only super model that you can use to cut through complexity. In the following book summaries, we’ll be looking at a ton of shortcuts to help you boost your cognitive performance.
Super Thinking Key Idea #2: Avoiding unforced errors and arguing from first principles can help you be wrong less often.
How we look at something determines the way we think about it. Take it from nineteenth-century German mathematician Carl Jacobi. His motto? “Invert, always invert,” which means that it’s often easier to solve a problem when you approach it from the opposite or “inverse” point of view.
Investors, for instance, usually assume their aim is to make money. But Jacobi would say that it’s actually to avoid losing money. That goes for decision-making, too. Common sense says that making better calls is about being right more often; inversion tells us it’s actually about being wrong less often.
That means avoiding what tennis players call unforced errors – mistakes caused by your own sloppiness rather than your opponent’s brilliance. Imagine returning a weak serve straight into the net and you get the idea.
Avoiding unforced errors involves paying attention to the way you reason things out. This is called arguing from first principles, and it’s all about thinking from the bottom up, starting with assumptions you’re sure are true. Elon Musk, the CEO of the automotive and energy company Tesla, defines this as boiling things down to “fundamental truths” and going from there.
When Musk was looking into battery packs for Tesla’s electric vehicles, for example, he was told that he’d never get the price below $600 per kilowatt-hour. He wasn’t convinced. First principles led him to pose two questions: What do you need to make battery packs, and how much do those materials cost on the commodities market?
The answer was cobalt, nickel, aluminum, carbon, polymers and a steel can, materials that together cost the equivalent of around $80 per kilowatt-hour. Musk realized that the solution was to buy the materials directly and produce his own cells. The result? Cheaper batteries than anyone had previously thought possible.
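Set those two figures side by side and the size of the opportunity becomes obvious. A back-of-the-envelope division, using only the numbers above (the book doesn’t spell out the arithmetic):

$$\frac{\$600 \text{ per kWh (quoted price)}}{\$80 \text{ per kWh (raw materials)}} = 7.5$$

A quoted price seven and a half times the raw-material floor is exactly the kind of gap that first-principles reasoning is designed to expose.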
You can also apply bottom-up reasoning to everyday decisions. Take job hunting. Lots of people waste energy applying for far too many positions, and then jump at the first opportunity that comes their way. First principles suggest a different approach.
Before sending your résumé out, sit down and define your values. Is independence, status or money most important to you? Next, lay down your red lines – how far you’re prepared to commute, say, or the least-senior position you’re willing to accept. Finally, check those values against existing positions. That’s inversion in action: you’re not asking which jobs are available, but which ones suit your needs!
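To make that filter-then-rank approach concrete, here’s a minimal sketch in Python. The listings, red lines and thresholds are all hypothetical, invented purely for illustration:

```python
# A sketch of first-principles job hunting: filter by your red lines first,
# then rank whatever survives by what you value. All data here is invented.

jobs = [
    {"title": "Senior Analyst", "commute_min": 25, "level": "senior", "salary": 95_000},
    {"title": "Junior Analyst", "commute_min": 15, "level": "junior", "salary": 60_000},
    {"title": "Lead Analyst",   "commute_min": 70, "level": "lead",   "salary": 120_000},
]

MAX_COMMUTE = 45                  # red line: longest one-way commute you'll accept
OK_LEVELS = {"senior", "lead"}    # red line: least-senior role you'll take

def acceptable(job):
    """A job is only considered if it crosses none of your red lines."""
    return job["commute_min"] <= MAX_COMMUTE and job["level"] in OK_LEVELS

# Inversion in action: start from your needs, then see which jobs fit them.
candidates = sorted((j for j in jobs if acceptable(j)),
                    key=lambda j: j["salary"], reverse=True)
print([j["title"] for j in candidates])  # -> ['Senior Analyst']
```

Here salary stands in for “values” to keep the sketch short; in practice you’d rank each surviving job against whichever values you defined first.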
Super Thinking Key Idea #3: Ockham’s razor might just hold the key to your love life.
We’ve just seen that solid reasoning is built on fundamental truths, but how do you know those are correct? It’s an age-old question. The second-century astronomer Ptolemy, for example, argued that it was a “good principle to explain phenomena by the simplest hypothesis possible.”
Twelve hundred years later, the English philosopher William of Ockham reached the same conclusion: when confronted with equally plausible and competing assumptions, the simpler is more likely to be true. Call it Ockham’s razor. The idea is to “shave off” unnecessarily intricate explanations. In other words, think horses, not zebras, when you hear hoofbeats.
It’s a model with plenty of useful applications. Take dating. The rise of dating apps and websites has made it easier than ever to apply highly specific filters to potential mates. To exaggerate slightly, lots of people are looking for blue-eyed Brazilians who love hot yoga and raspberry ice cream. It’s not exactly an impossible combination, but it does limit your dating pool.
Ockham’s razor suggests that this is overcomplicating things. We all know from past experience that it’s possible to get along with someone even if they prefer chocolate to raspberry and have brown rather than blue eyes. If they’re not funny, attractive or interesting, on the other hand, it’s hardly likely to work out. And even if not loving hot yoga is a deal breaker, it still makes more sense to add that filter back in later on; in the beginning, stick to the basics.
That’s also a great way of beating common logical traps. In 1983, the psychologists Amos Tversky and Daniel Kahneman presented a now-famous thought experiment that considered the following: Linda is 31, single, outspoken and clever. She studied philosophy and attended demonstrations at university. Which is more likely: that Linda is a bank teller or that Linda is a bank teller who is active in the feminist movement?
Most people plump for the second option. That’s the conjunction fallacy in action: it overlooks the rule that the probability of two events occurring in “conjunction” is always less than or equal to the probability of either event occurring alone. Remember: not all bank tellers are feminists. To go back to dating, what’s more likely – finding a mate who makes you laugh or finding someone who is funny and shares your exact cultural preferences, down to ice cream flavors? The answer tells you why it pays to simplify assumptions!
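In symbols, this is a basic identity of probability theory (the notation is ours, not the book’s): every scenario in which both descriptions are true is also a scenario in which the first alone is true, so

$$P(\text{bank teller} \wedge \text{feminist}) \le P(\text{bank teller}).$$

Adding conditions can only shrink the set of matching scenarios, never grow it, which is why the longer, more specific story can never be the more probable one.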
Super Thinking Key Idea #4: Putting yourself in others’ shoes is hard, but the veil of ignorance can help you make fairer decisions.
Working out other people’s motivations is tricky, especially when they’re strangers. The upshot? We jump to unfair conclusions. If a colleague sends us a curt one-line email, we think he’s being dismissive and discount the idea that he might just be in a rush.
Psychologists call that the fundamental attribution error: while we explain our own behavior by looking to our intentions and external circumstances, we ascribe other people’s actions to essential characteristics. If you run a red light, it’s because you need to get to the hospital; if another driver does it, it’s because she’s inherently reckless.
That’s why it’s a good idea to rethink your assumptions, which is where Hanlon’s razor comes in. This model states that you should never attribute to malice what can be more easily explained by carelessness. Is your neighbor playing loud music to annoy you? Unlikely – he’s probably just forgotten how thin the walls are.
If you want to go a step further and think more objectively, you can use the veil of ignorance. That’s the name of a model developed by the American philosopher John Rawls in his 1971 book A Theory of Justice. Here’s how it works:
Birth is a lottery. Some people luck out and are blessed with a wealth of opportunities; others go empty-handed. That doesn’t stop us from believing that our own privileges and others’ disadvantages are deserved. This, Rawls argued, distorts our understanding of fairness.
But imagine you had to design a fair society without knowing where you’d end up in it. Rawls concluded that if you couldn’t be sure whether you’d be born a slave or a free person, you’d decide that slavery itself was unfair. Put differently, you’d take the feelings and interests of everyone affected by a decision into account – not just your own.
It’s a useful model to apply to your own decision-making. So say you’re a manager in a company that’s thinking about abolishing an established policy that allows employees to work remotely. As you see it, there are a lot of good reasons for doing that. But what if you put yourself behind the veil of ignorance – would you advocate against the policy no matter who you were? What if you were an employee caring for an elderly relative, or a single parent?
Even if you end up sticking to your guns, it’s a worthwhile experiment, as it will help you appreciate other people’s perspectives and challenges.
Super Thinking Key Idea #5: If you don’t want to get left behind by social change, you need to become adaptable.
The peppered moth is a nocturnal species native to temperate climates, and a familiar sight across Britain. Before the Industrial Revolution, most peppered moths in northwestern England were light-colored, a trait that helped them camouflage themselves on light tree bark. But between 1811 and 1895, the percentage of dark peppered moths rose from 0.01 percent to 98 percent. What happened?
It’s a classic example of Darwin’s theory of the “survival of the fittest.” Soot from coal-burning industry covered trees in a thick, dark layer. Lighter moths then became easy prey for predators, while their darker counterparts remained hidden – and thrived.
As the American management professor Leon Megginson pointed out, people sometimes confuse “the fittest” with strength or intellect or superior genes. The key, however, is adaptability – adjusting to a changing environment. And that’s the lesson the story of the dark peppered moth teaches us. Society also evolves over time. If you want to thrive in your social environment, you need to become adaptable too. So how do you do that?
One great way is by adopting an experimental mind-set. That’s a cycle of making scientific observations, developing hypotheses, testing them, analyzing data and formulating new theories. The most successful people and organizations adopt precisely that mind-set. They’re constantly on the lookout for new tools to boost their productivity, well-being and fitness.
Take your health, for example. There’s an overwhelming mass of data out there suggesting that this or that diet is the best thing you can do for your body. Should you go vegan or paleo, or is fasting a better option? The only way to find out is to experiment. That’s more than just randomly trying different things, though. It means adopting a rigorous trial-and-error approach, trying different diets one by one and analyzing the results to figure out which works for you.
The same goes for your “intellectual diet.” If you aren’t experimenting with new ideas, you’re probably stuck with old, out-of-date ones. Notions change all the time, after all. You were probably taught that an asteroid wiped out the dinosaurs and that Tyrannosaurus rex was a smooth-skinned reptile. Well, today, plenty of experts question parts of that picture, and most paleontologists believe T. rex was at least partly covered in feathers! Stick with old ideas, and your thinking will end up just as obsolete as those light-colored moths clinging to sooty trees.
Super Thinking Key Idea #6: Anecdotal evidence and the fallacy that correlation implies causation skew our understanding of statistical data.
We live in a data-driven world. Quantification, once the preserve of scientists and engineers, shapes the way we understand everything from climate change to our personal lives. You might even be using apps that tally up your daily movements and tell you how much sleep you’re getting.
That’s good, right? Well, yes and no. As the American writer Mark Twain liked to point out, there are three kinds of lies: “lies, damned lies and statistics.” Numbers, in other words, can mislead as easily as they can inform. And that’s why it’s so important to avoid statistical stumbles.
After all, whether it’s nutrition or government policy, there are people everywhere claiming that the numbers are on their side. Even if they’re not trying to mislead you deliberately, there’s a good chance their “evidence” has been misinterpreted or overhyped.
The most common cause of error is relying on anecdotal evidence. That’s basically hearsay and personal experience. Evolutionarily, trusting such partial data makes a lot of sense. If you’ve seen someone eat a berry and get sick, you simply avoid the shrub in question rather than conducting a controlled berry-eating experiment. In other contexts, however, it’s a source of confusion.
Think of the stories people tell about, say, their grandfather who smoked a pack of cigarettes a day and lived to the age of 90. These are out-of-the-ordinary cases that tell us nothing about average experiences. It’s a bit like going to a restaurant: you’re more likely to tell friends about your meal if it was outrageously good or bad than if it was merely mediocre. Smoking might not cause lung cancer in every case, but that doesn’t mean it doesn’t significantly increase the average risk.
Then there’s the mistaken idea that correlation implies causation – that is, if two events occur consecutively, one must have caused the other. You can see this fallacy in effect every year around the time people get their flu vaccinations. Inevitably, someone will get a cold at the same time, mistake it for the flu and attribute it to the shot they just got.
What’s missing in this account is the confounding factor, a less obvious but correct explanation – namely, that flu shots are given at the start of cold-and-flu season, just when more people are falling ill anyway. A common cold, against which a flu shot doesn’t provide immunity, ends up being misidentified as flu, and the vaccination gets blamed for causing the very thing it prevents! And that’s the trouble with data – it helps us understand the world, but sometimes it helps us misunderstand it, too.
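To see how a confounding factor can manufacture a spurious correlation, here’s a toy simulation. The numbers are entirely made up; the point is only the mechanism: season drives both vaccination and catching a cold, so the two line up without one causing the other.

```python
import random

random.seed(0)

# Toy model of confounding: "winter" raises BOTH the chance of getting a
# flu shot AND the chance of catching a common cold. The shot itself has
# no effect on colds here, yet the two still end up correlated.
def simulate_person():
    winter = random.random() < 0.5
    vaccinated = random.random() < (0.7 if winter else 0.2)
    caught_cold = random.random() < (0.4 if winter else 0.1)
    return vaccinated, caught_cold

people = [simulate_person() for _ in range(100_000)]

def cold_rate(group):
    return sum(cold for _, cold in group) / len(group)

vaccinated_group = [p for p in people if p[0]]
unvaccinated_group = [p for p in people if not p[0]]

# Colds look more common among the vaccinated, even though the shot plays
# no causal role at all: the season is the confounder.
print(f"cold rate, vaccinated:   {cold_rate(vaccinated_group):.3f}")
print(f"cold rate, unvaccinated: {cold_rate(unvaccinated_group):.3f}")
```

Run it and the vaccinated group shows a noticeably higher cold rate, despite the simulation giving the shot no effect on colds whatsoever.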
Super Thinking Key Idea #7: Confusing social and market norms undermines reciprocity.
Let’s face it: life is full of conflict. In economics, these adversarial situations are sometimes described as “social games” pitting self-interested “players” against one another. Most conventional games are zero-sum: if you win, I lose. Unlike chess, however, real life isn’t black-and-white. Sometimes everyone can win.
Take a study of tipping discussed in American psychologist Robert Cialdini’s 1984 book Influence. It found that waiters who give their customers small gifts increase their tips. An after-dinner mint equaled a 3 percent rise. Two mints took that to 14 percent, while two mints and a line like, “And for you nice folks, an extra mint” pushed it to 23 percent.
That’s a great example of reciprocity, the perceived obligation to return favors. It’s a pretty universal cultural concept. The Romans called it quid pro quo or “something for something.” In modern English, we rephrase that as, “I’ll scratch your back if you scratch mine.”
Reciprocity is a social norm, a recognized if unwritten rule regulating social life. Obviously, we don’t always behave reciprocally. Sometimes it’s more appropriate to follow market norms. When you look at something from a “market perspective,” you think of it in terms of your personal interests. A “social perspective,” by contrast, asks if it’s the right thing to do. It’s the difference between asking, “Is this $60 babysitting job worth my time?” and “Should I help my friend out by looking after his kids for four hours?”
They’re both valuable perspectives. Things get messy, however, when they’re confused. Take the case of an Israeli daycare center cited in economist Dan Ariely’s 2008 book Predictably Irrational. Parents were often late when they picked their kids up, and the center decided to introduce fines for tardiness. The policy had the opposite effect: parents were late even more frequently!
Here’s why. Before the fines, latecomers felt guilty and made an effort to be more punctual. The introduction of a market norm undermined their sense that they owed the teachers something – after all, they were paying, so there was no reason to feel bad. Interestingly, when the center abolished the fines, parents didn’t return to their old behavior – the experiment had undermined both types of norms.
That just goes to show how important it is to apply the right conceptual models to situations. As the example of that Israeli daycare center shows, it pays to make sure you’re framing things in the right way!
Final summary
The key message in this book summary:
Super thinking leverages tried-and-true conceptual models to improve your decision-making and help you avoid common logical pitfalls. Gleaned from disciplines as diverse as physics, economics and philosophy, these “super models” cut through the complexity and shed light on the thorniest conundrums. Whether you’re using Ockham’s razor to select potential mates, applying critical mass to understand an emerging market or using the veil of ignorance to put yourself in other people’s shoes, these ultra-adaptable tools are guaranteed to up your cognitive game.
Actionable advice:
Weigh your decisions with a numbered pro-con list.
The simplest way of approaching a big question is to draw up a list of pros and cons – arguments for and against a decision. Chances are, however, that you’ll attach different weights to different factors. That’s where numbered pro-con lists come in. Score each con on a scale from minus ten to zero and each pro from zero to plus ten, and you’ll get an insight into the relative weight of each item. So say you’ve been offered a job, but taking it involves moving to a new city. What gets a higher score – the location or the improved salary offer? As simple as it sounds, this is a powerful mental model that will help you conduct a systematic cost-benefit analysis before taking the plunge.
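Here’s what that might look like for the job-offer example. A minimal sketch in Python; the factors and scores are invented purely for illustration:

```python
# Numbered pro-con list for a hypothetical job offer. Pros score 0 to +10,
# cons score -10 to 0; the sign of the total suggests a direction.
factors = {
    "higher salary":        +8,
    "interesting projects": +6,
    "moving to a new city": -4,
    "longer commute":       -3,
}

total = sum(factors.values())
for name, score in sorted(factors.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:+3d}  {name}")
print(f"total: {total:+d} ->", "take the job" if total > 0 else "pass")
```

The total (+7 here) isn’t a verdict, just a structured summary of how you weighted the factors; if the result surprises you, that’s a signal to revisit the scores.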