The Great Mental Models: General Thinking Concepts
by Shane Parrish
Buy a print copy from Bookshop.org (affiliate)

This page contains highlights I saved while reading The Great Mental Models: General Thinking Concepts by Shane Parrish. These quotes were captured using Readwise and reflect the ideas or passages that stood out to me most.
Highlights
Robert Heinlein's character Doc Graves describes the Devil Fallacy in the 1941 sci-fi story "Logic of Empire", as he explains the theory to another character:
"I would say you've fallen into the commonest fallacy of all in dealing with social and economic subjects—the 'devil' theory. You have attributed conditions to villainy that simply result from stupidity…. You think bankers are scoundrels. They are not. Nor are company officials, nor patrons, nor the governing classes back on earth. Men are constrained by necessity and build up rationalizations to account for their acts."
If we present the evidence in a certain light, the brain malfunctions. It doesn't weigh out the variables in a rational way.
Hanlon's Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.
Always assuming malice puts you at the center of everyone else's world. This is an incredibly self-centered approach to life. In reality, for every act of malice, there is almost certainly far more ignorance, stupidity, and laziness.
It is often easier to find examples of when second-order thinking didn't happen—when people did not consider the effects of the effects. When they tried to do something good, or even just benign, and instead brought calamity, we can safely assume the negative outcomes weren't factored into the original thinking. Very often, the second level of effects is not considered until it's too late. This concept is often referred to as the "Law of Unintended Consequences" for this very reason.
The goal of the Five Whys is to land on a "what" or "how". It is not about introspection, such as "Why do I feel like this?" Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your "whys" result in a statement of falsifiable fact, you have hit a first principle. If they end up with a "because I said so" or "it just is", you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.
Socratic questioning generally follows this process:
- Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
- Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
- Looking for evidence. (How can I back this up? What are the sources?)
- Considering alternative perspectives. (What might others think? How do I know I am correct?)
- Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
- Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)
When we read the news, we're consuming abstractions created by other people. The authors consumed vast amounts of information, reflected upon it, and drew some abstractions and conclusions that they share with us. But something is lost in the process. We can lose the specific and relevant details that were distilled into an abstraction. And, because we often consume these abstractions as gospel, without having done the hard mental work ourselves, it's tricky to see when the map no longer agrees with the territory. We inadvertently forget that the map is not reality.
Admitting that we're wrong is tough. It's easier to fool ourselves that we're right at a high level than at the micro level, because at the micro level we see and feel the immediate consequences. When we touch that hot stove, the feedback is powerful and instantaneous. At a high or macro level we are removed from the immediacy of the situation, and our ego steps in to create a narrative that suits what we want to believe, instead of what really happened.
Fallacy of Conjunction
Sagan wrote that "extraordinary claims require extraordinary proof." He dedicated much ink to a rational investigation of extraordinary claims. He felt most, or nearly all, were susceptible to simpler and more parsimonious explanations. UFOs, paranormal activity, telepathy, and a hundred other seemingly mystifying occurrences could be better explained with a few simple real-world variables. And as Hume suggested, if they couldn't, it was a lot more likely that we needed to update our understanding of the world than that a miracle had occurred.
One of the theoretical foundations for this type of thinking comes from psychologist Kurt Lewin. In the 1930s he came up with the idea of force field analysis, which essentially recognizes that in any situation where change is desired, successful management of that change requires applied inversion. Here is a brief explanation of his process (a small scoring sketch follows the list):
1. Identify the problem
2. Define your objective
3. Identify the forces that support change towards your objective
4. Identify the forces that impede change towards the objective
5. Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.
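The book describes Lewin's process in prose only. As a minimal sketch (not from the book), the steps can be treated as a simple weighted list: name the supporting and impeding forces, assign rough strengths, and see where the balance lies before deciding which forces to strengthen or remove. The forces, weights, and example objective below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Force:
    """A single force acting for or against a desired change."""
    name: str
    strength: int  # subjective weight, e.g. 1 (weak) to 5 (strong)


def force_field_analysis(objective, supporting, impeding):
    """Summarize supporting vs. impeding forces for a stated objective."""
    support_total = sum(f.strength for f in supporting)
    impede_total = sum(f.strength for f in impeding)
    print(f"Objective: {objective}")
    for f in supporting:
        print(f"  + {f.name} ({f.strength})")
    for f in impeding:
        print(f"  - {f.name} ({f.strength})")
    # Step 5 in Lewin's process: strengthen/add supporting forces,
    # reduce/remove impeding ones, then re-check the balance.
    print(f"Net force: {support_total - impede_total:+d}")


# Hypothetical example: rolling out a weekly code-review practice.
force_field_analysis(
    objective="Adopt weekly code reviews",
    supporting=[Force("Fewer production bugs", 4), Force("Team buy-in", 3)],
    impeding=[Force("Time pressure on releases", 4), Force("No tooling in place", 2)],
)
```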
Another common asymmetry is people's ability to estimate the effect of traffic on travel time. How often do you leave "on time" and arrive 20% early? Almost never? How often do you leave "on time" and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making.
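A quick simulation, not from the book, makes that skew concrete. The assumption here is that traffic multiplies the planned travel time by a right-skewed (lognormal) factor whose parameters were chosen only for illustration: under that assumption, arriving 20% late is far more common than arriving 20% early.

```python
import random


def simulate_travel_times(planned_minutes=30, trials=10_000, seed=42):
    """Count trips that arrive at least 20% early vs. at least 20% late,
    assuming a right-skewed (lognormal) traffic factor."""
    rng = random.Random(seed)
    early = late = 0
    for _ in range(trials):
        factor = rng.lognormvariate(mu=0.08, sigma=0.2)  # assumed distribution
        actual = planned_minutes * factor
        if actual <= planned_minutes * 0.8:    # 20% early or better
            early += 1
        elif actual >= planned_minutes * 1.2:  # 20% late or worse
            late += 1
    print(f"20% early or more: {early / trials:.1%}")
    print(f"20% late or more:  {late / trials:.1%}")


simulate_travel_times()
```

With these assumed parameters the late tail is several times larger than the early one, which is the one-sided estimation error the passage describes.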
Inversion shows us that we don't always need to be geniuses, nor do we need to limit its application to mathematical and scientific proofs. Simply invert, always invert, when you are stuck. If you take the results of your inversion seriously, you might make a great deal of progress on solving your problems.
There are two approaches to applying inversion in your life; a small sketch of the second approach follows the list.
- Start by assuming that what you're trying to prove is either true or false, then show what else would have to be true.
- Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.
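A loose illustration, not from the book: the second approach can be read as filtering a set of options against the outcomes you refuse to accept and keeping whatever survives. All option names and criteria below are made up.

```python
# "Inverted" decision-making: instead of ranking options by appeal,
# eliminate anything with a dealbreaker and see what is left over.
options = {
    "Job A": {"long commute"},
    "Job B": {"no mentorship"},
    "Job C": set(),
    "Job D": {"long commute", "unstable funding"},
}
avoid = {"long commute", "unstable funding"}  # what we refuse to accept

remaining = [name for name, traits in options.items() if not (traits & avoid)]
print(remaining)  # ['Job B', 'Job C']
```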
Garrett Hardin smartly addresses this in Filters Against Folly:
Those who take the wedge (Slippery Slope) argument with the utmost seriousness act as though they think human beings are completely devoid of practical judgment. Countless examples from everyday life show the pessimists are wrong…If we took the wedge argument seriously, we would pass a law forbidding all vehicles to travel at any speed greater than zero. That would be an easy way out of the moral problem. But we pass no such law.
An example of this is the famous "veil of ignorance" proposed by philosopher John Rawls in his influential Theory of Justice. In order to figure out the most fair and equitable way to structure society, he proposed that the designers of said society operate behind a veil of ignorance. This means that they could not know who they would be in the society they were creating. If they designed the society without knowing their economic status, their ethnic background, talents and interests, or even their gender, they would have to put in place a structure that was as fair as possible in order to guarantee the best possible outcome for themselves.
Applying the filter of falsifiability helps us sort through which theories are more robust. If they can't ever be proven false because we have no way of testing them, then the best we can do is try to determine their probability of being true.
Karl Popper wrote "A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience." The idea here is that if you can't prove something wrong, you can't really prove it right either.
Thus, in Popper's words, science requires testability: "If observation shows that the predicted effect is definitely absent, then the theory is simply refuted." This means a good theory must have an element of risk to it—namely, it has to risk being wrong. It must be able to be proven wrong under stated conditions.
It became possible also to map out master plans for the statistical city, and people take these more seriously, for we are all accustomed to believe that maps and reality are necessarily related, or that if they are not, we can make them so by altering reality.
Any user of a map or model must realize that we do not understand a model, map, or reduction unless we understand and respect its limitations. If we don't understand what the map does and doesn't tell us, it can be useless or even dangerous.
We also tend to undervalue the elementary ideas and overvalue the complicated ones. Most of us get jobs based on some form of specialized knowledge, so this makes sense. We don't think we have much value if we know the things everyone else does, so we focus our effort on developing unique expertise to set ourselves apart. The problem is then that we reject the simple to make sure what we offer can't be contributed by someone else. But simple ideas are of great value because they can help us prevent complex problems.
Our failures to update from interacting with reality spring primarily from three things: not having the right perspective or vantage point, ego-induced denial, and distance from the consequences of our decisions. As we will learn in greater detail throughout the volumes on mental models, these can all get in the way. They make it easier to keep our existing and flawed beliefs than to update them accordingly.
How do you know when you have a circle of competence? Within our circles of competence, we know exactly what we don't know. We are able to make decisions quickly and relatively accurately. We possess detailed knowledge of additional information we might need to make a decision with full understanding, or even what information is unobtainable. We know what is knowable and what is unknowable and can distinguish between the two.
Consider the cartographer: Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators.
First principles thinking doesn't have to be quite so grand. When we do it, we aren't necessarily looking for absolute truths. Millennia of epistemological inquiry have shown us that these are hard to come by, and the scientific method has demonstrated that knowledge can only be built when we are actively trying to falsify it (see Supporting Idea: Falsifiability). Rather, first principles thinking identifies the elements that are, in the context of any given situation, non-reducible.
The author and explorer of mental models, Peter Bevelin, put it best: "I don't want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning."