Mental Models

Mental Models: What Are They? How Are They Used?

The concept of a “mental model” is not the easiest to grasp. Everyone seems to have a different idea of what it is, not to mention how to use it in practice.

Here’s the simplest (useful) definition I can offer.

Mental models are abstract mental representations of some process we see reliably repeated in the world.

As we see a process repeat itself, we naturally create models in our heads: Generalizations of the actual causative structure of the world. What causes what to happen? Every biological creature must do it to survive, but humans are extraordinary at it.

This is an automatic process. A week-old baby has already begun modeling the world: its physical processes, the constant bombardment of language, the familiarity of the people around her, and a hell of a lot else.

As we learn new things, we begin grouping knowledge into ever more (and ever more complex) categories. Cognitive psychologists call this process chunking. Here’s Steven Pinker’s explanation:

As children we see one person hand a cookie to another, and we remember it as an act of giving. One person gives another one a cookie in exchange for a banana; we chunk the two acts of giving together and think of the sequence as trading. Person 1 trades a banana to Person 2 for a shiny piece of metal, because he knows he can trade it to Person 3 for a cookie; we think of it as selling. Lots of people buying and selling make up a market. Activity aggregated over many markets gets chunked into the economy. The economy can now be thought of as an entity which responds to action by central banks; we call that monetary policy. One kind of monetary policy, which involves the central bank buying private assets, is chunked as quantitative easing.

As we read and learn, we master a vast number of these abstractions, and each becomes a mental unit which we can bring to mind in an instant and share with others by uttering its name.

Once we chunk things together in higher and higher forms (our brain can do essentially infinite recursion), we use a process of associative pattern-matching to retrieve them and make decisions. Our brains are not actually like computers, searching through everything to find an answer.

They’re far more efficient: they search through pre-existing linkages. A chess grandmaster quickly sees the promising moves. The grandmaster doesn’t have to consider all possible moves. A computer must.
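The contrast between exhaustive search and associative retrieval can be made concrete with a toy sketch. Everything below (the move space, the patterns, the linked moves) is invented purely for illustration; it is not a model of chess, just the shape of the two search strategies.

```python
# Toy contrast: exhaustive search vs. associative pattern-matching.
# All positions, patterns, and moves here are invented illustrations.

ALL_MOVES = [f"move_{i}" for i in range(10_000)]  # the full move space

# "Experience": patterns already linked to promising moves, like a
# grandmaster's memory of positions seen before.
LEARNED_LINKS = {
    "exposed_king": ["move_17", "move_342"],
    "open_file": ["move_9"],
}

def looks_promising(move, patterns):
    # Stand-in evaluation; a real engine would score the resulting position.
    return any(move in LEARNED_LINKS.get(p, []) for p in patterns)

def computer_style(position_patterns):
    """Brute force: test every move in the entire move space."""
    return [m for m in ALL_MOVES if looks_promising(m, position_patterns)]

def grandmaster_style(position_patterns):
    """Associative retrieval: jump straight to pre-linked candidates."""
    candidates = []
    for pattern in position_patterns:
        candidates.extend(LEARNED_LINKS.get(pattern, []))
    return candidates

print(grandmaster_style(["exposed_king", "open_file"]))
```

Both routes surface the same candidate moves, but the associative route touches only the two recognized patterns, while the brute-force route grinds through all 10,000 moves.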

Laying the Foundation

Charlie Munger explains the idea of models pretty well. The idea of a “mental models” approach to learning is to pick up all of the most fundamental processes: high-level “chunks” of knowledge that are useful and reliably repeat.

I think it is undeniably true that the human brain must work in models. The trick is to have your brain work better than the other person’s brain because it understands the most fundamental models: ones that will do most work per unit. If you get into the mental habit of relating what you’re reading to the basic structure of the underlying ideas being demonstrated, you gradually accumulate some wisdom.

[…]

You have to recognize how these things combine. And you have to realize the truth of the biologist Julian Huxley’s idea that ‘Life is just one damn relatedness after another.’ So you must have the models, and you must see the relatedness and the effects of the relatedness.

[…]

…if you don’t have the full repertoire, I guarantee you that you’ll overutilize the limited repertoire you have – including use of models that are inappropriate just because they’re available to you in the limited stock you have in mind.

Well, that’s a ghastly way to operate in the world. So what I’m doing is laying down the iron prescription that you must have all of the main models of the world. Fortunately, 98% of the world can be explained pretty well by a fairly limited number of models. So what I’m suggesting is perfectly doable, but it can’t be done in a week or a weekend. You have to work at it over a period of time.

Munger probably got many of these ideas from one of the fathers of Artificial Intelligence, Herbert Simon.

Simon described expert decision-making like this:

Experts, human and computer, do much of their problem solving not by searching selectively but by simply recognizing the relevant cues in situations similar to those they have experienced before.

On another occasion, Simon put it thus:

If one could open the lid, so to speak, and see what was in the head of the experienced decision-maker, one would find that he had at his disposal repertoires of possible actions; that he had checklists of things to think about before he acted; and that he had mechanisms in his mind to evoke these, and bring these to his conscious attention when the situations for decisions arose.

Simon gives us three elements here:

1. Repertoire of possible actions
2. Checklists of things to think about (models)
3. Mechanisms to evoke these
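Simon’s three elements can be sketched as a tiny structure. The situation names, actions, and checklist questions below are hypothetical, chosen only to make the shape of the idea concrete; this is not Simon’s own formalism.

```python
# A toy rendering of Simon's three elements. All names and contents
# are hypothetical illustrations.

repertoire = {                      # 1. repertoire of possible actions
    "falling_demand": ["cut_price", "cut_output"],
    "rising_costs": ["renegotiate", "substitute_inputs"],
}

checklist = [                       # 2. things to think about before acting
    "What is the base rate?",
    "What would invert this decision?",
]

def evoke(situation):               # 3. mechanism that brings both to mind
    """Recognize the situation; surface actions and checklist together."""
    actions = repertoire.get(situation, [])
    return {"actions": actions, "consider": checklist}

print(evoke("falling_demand"))
```

The point of the sketch is that recognition does the work: the situation label retrieves the relevant actions and the checklist at once, with no search over everything the decision-maker knows.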

Another Artificial Intelligence pioneer, Marvin Minsky, put it very similarly:

The way people solve problems is first by having an enormous amount of common sense knowledge, like maybe 50 million anecdotes or entries, and then having some unknown system for finding among those 50 million old stories the 5 or 10 that seem most relevant to the situation. This is reasoning by analogy.

Psychologists call the “unknown system” Minsky is referring to “associative activation”: “Ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in your brain.”
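In AI, this kind of cascade is often modeled as spreading activation over a network of linked ideas: an evoked idea passes a fraction of its activation to its neighbors, which pass it on in turn until the signal fades. A minimal sketch, with an invented toy graph and an arbitrary decay factor:

```python
# Toy spreading activation. The graph of linked ideas and the decay
# factor are invented for illustration.

LINKS = {
    "banana": ["fruit", "yellow"],
    "fruit": ["market", "sweet"],
    "yellow": ["sun"],
}

def spread(seed, decay=0.5, threshold=0.1):
    """Return activation levels after cascading from one evoked idea."""
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        idea = frontier.pop()
        passed = activation[idea] * decay  # signal weakens with each hop
        if passed < threshold:
            continue                       # too faint to spread further
        for linked in LINKS.get(idea, []):
            if passed > activation.get(linked, 0.0):
                activation[linked] = passed
                frontier.append(linked)
    return activation

print(spread("banana"))
```

Evoking “banana” activates “fruit” and “yellow” strongly and “market,” “sweet,” and “sun” more weakly: a handful of relevant ideas surface while the rest of memory stays dark, which is the retrieval pattern Minsky describes.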

The limitation of this system has been outlined by the great psychologist Daniel Kahneman: the Availability Heuristic, a “rule of thumb” for how humans make decisions and solve problems. He calls it WYSIATI: What You See Is All There Is.

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information that it does not have.

So here lie both the problem and the opportunity: our brain cannot do anything with models it does not have, and it must have them deeply and well, in a retrievable form.

If we seek to become better thinkers and better decision makers, we have to add in the models that are, in the words of Munger, performing the most “work per unit.”

Thomas Edison put it best when it comes to how we search for these things:

I regard it as a criminal waste of time to go through the slow and painful ordeal of ascertaining things for one’s self if these same things have already been ascertained and made available by others.

We master the best of what other people have already figured out.