In an effort to understand how best to approach idea generation and collaboration, two critical components of successful investing, AIQ speaks to leading academics in the fields of analogical and model thinking.
To make sense of complex situations, using and contrasting multiple models that complement each other’s “blind spots” is extremely powerful. But to spark the insights that reveal hidden patterns, we have yet to find a better tool than the human brain’s capacity to draw analogies.
In this article, AIQ looks to uncover the secrets of how we approach idea generation and collaboration to maximise the efficiency and effectiveness of output. Contrasting two approaches, we speak with Dedre Gentner, professor in the Department of Psychology at Northwestern University and a prominent researcher in the study of analogical reasoning, and Scott E. Page, social scientist and John Seely Brown Distinguished University Professor, Ross School of Business, University of Michigan, author of The Model Thinker.
Gentner contends that using analogies enables us to identify commonalities in relationships between two sets of different objects. This can uncover new insights, and is at the heart of scientific discovery and, more broadly, areas based on a search for knowledge and understanding. It is crucial in helping us make sense of the world’s complexity by identifying hidden relationships.
In The Model Thinker, Page takes a broad definition of models, from the intuitive mappings we make in our minds to formal mathematical models and machine-learning algorithms. Page argues we can use this diversity to explain the multiple dimensions of complex phenomena, fully exploit the vast amounts of data at our disposal and make better decisions, in an array of business, policy, academic and human fields.
Dedre Gentner – The analogical thinker
You have been working in this field for over three decades. How did you get into it?
Having started in math, I ended up graduating in physics. I then discovered the field of psychology, which I studied in grad school at UCSD. It was fantastic. We worked on the nature of how people represent and process knowledge. My topic was analysing verb meaning. Playing with verbs makes you realise they really are institutionalised analogies. For example, many verbs have abstract meanings as well as concrete meanings (e.g. ‘Fred gave Ida a vase / an idea / a hard time.’).
Studying the way language works, I kept running into this notion that one of the things humans do superbly well is see relational patterns across different domains. We first learn something in a concrete way, but then we map it repeatedly to other things.
I was also influenced by my undergraduate work in math and science. For example, in abstract algebra, you can map a structure from one domain to another. And in science, I had noticed that many new discoveries are made by analogy. That is where structure-mapping came from – synthesising work on verbs with work on how scientists create new ideas, and how mathematicians map functions across domains.
In structure-mapping theory,1 you talk about structural alignment. Can you explain this?
Structural alignment refers to our ability to match two situations based on their common relational patterns, even when the concrete details are different. For example, if I ask people what is alike between “Martha divorced George” and “Wallcorp sold off Acme Tires,” they say, “they both got rid of something they no longer wanted.” The fact that Martha is totally different from Wallcorp doesn’t faze people; we can align two situations based on abstract relational patterns.
A great thing about structural alignment is that even when people have weak or incomplete models of both topics, when we compare them the common relational pattern will often leap out. This is what frequently happens in scientific discovery. Analogical comparison can uncover an idea you didn’t have before, which might change the way you think about both notions.
What are the dangers of analogical thinking?
All analogies break down at some point, so if you want to reason with analogies, you should think about the differences as well as the commonalities. The more you articulate what matches and what doesn’t, the better you can use the analogy for explaining and making predictions.
How should we use them to think about relational structures?
Saying ‘tigers are like lions’ is useful, but with so many commonalities the relational pattern does not emerge. In contrast, when I say ‘lawyers are like sharks’, the two share nothing superficially, so the only thing you can infer is the relational pattern: something like ‘ferocious predators’.
That has some big advantages. First, it allows you to understand something even when there is nothing familiar to compare it with. Second, even if there is, it is better to compare it to something different, because that lets you inspect the relational pattern in a way that is very hard when things share all kinds of other commonalities.
Does your work give a steer on whether it is better to be a specialist or generalist?
There’s a role for both specialists and generalists, but I would say it’s best to combine these two extremes. It’s great to know some area well enough to go deep; but it’s also good to have knowledge and curiosity about lots of other areas. Creative analogies very often come from seeing patterns across different domains, and this requires curiosity beyond just one specialised domain.
In the world we live in, we constantly face new issues. What we already know is not good enough to rely on for all the situations that come up. You have to be sensitive to recurring relational patterns.
How can individuals and organisations get better at tapping into analogical thinking?
For individuals, I would say learn all you can, not just in your own domain but in other areas as well. For organisations, I would say make sure teams include people with knowledge from different areas – you will not get as many interesting analogies if everyone has the same background. For both individuals and organisations, I would suggest a two-stage process. First, encourage analogies and be open to playing with them; then, in the second stage, apply your critical thinking: articulate the common principle and pay attention to differences. If something doesn’t fit, say, ‘Wait a minute. That is not what we predicted. What is going on?’ You don’t want to immediately shoot an analogy down, but eventually they all have to account for themselves.
An analogy I like to use is that most mutations are lethal, and so are most analogies. As long as you are critical, it shouldn’t be an issue, so in addition to being open to analogies, you also need to think them through. What are the inferences? Are there critical differences? Does it make nonsensical predictions? It’s a bit of an art-form, really. Every kind of creative thinking has to involve critical thinking as well as generative thinking.
What is the next frontier of your focus?
There are quite a few. I am working on whether analogical processes occur in young infants (the answer seems to be yes); and on whether learning language changes the kinds of analogies we can process. I am also working on how to help children be better at relational thinking. There is a big difference here between children who succeed academically and those who fail; I think we can help all children be better at analogical thinking.
I’m also interested in how analogical processes influence the history of language and culture. With my colleague, Kensy Cooperrider, I just wrote a paper on ‘the career of measurement’. Historically, people used to measure dimensions in context-specific ways—for example, measuring the length of a field in ox lengths but the depth of water in rope lengths. Over the centuries, through comparing and aligning units, we gradually formed abstract systems where the same terms (such as metres or miles) can be used regardless of context.
This is another case where we first learn things in a concrete way and make them less concrete by comparing them. In general, we start with concrete knowledge; then we make ourselves smarter by the comparisons we make and the language we apply to them.
Scott E. Page – The model thinker
How do models help us understand the world around us?
Let me give an example: Amazon’s recent decision on where to locate its second headquarters.2 Amazon could think about minimising shipping costs, but also labour-market geography, long-term potential growth areas… Amazon looked at it six ways to Sunday. Yet it forgot to look at a simple model of the potential political response, and the decision backfired. That’s an example of coming up one model too short.
Can you briefly explain the REDCAPE uses of models?
There are seven core reasons to model. The first is to reason, to nail down the logic of a phenomenon. Similarly, we use models to explain phenomena or patterns. We can also use them to design things, take action, predict and explore. The final reason is to communicate, and this is what makes models so powerful. Models are of tremendous value because they allow us to articulate what we mean.
In a recent Harvard Business Review article, you mention using models to “spread attention broadly, boost predictions and seek conflict”
I wrote the article in the context of using artificial intelligence for hiring decisions. Some firms get hundreds of thousands of applicants for maybe 5,000 to 10,000 positions. One way to deal with it is to train a model to predict simple metrics like the first-year job rating or long-term success rate.
One approach to boosting those predictions is the “random forest”. Starting with a model built from many decision trees, you test it to find which cases it gets wrong. You then create another model that focuses only on those blind spots.
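In machine-learning terms, the sequential “focus on the blind spots” step Page describes is closer to boosting than to the bagging a random forest uses. As an illustration of that error-focusing idea only (not Page’s hiring models), here is a minimal AdaBoost-style loop over decision stumps on toy data: each round re-weights the examples the current ensemble gets wrong, so the next stump concentrates on them.

```python
import numpy as np

# Toy task: a diagonal boundary no single axis-aligned split can capture.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

def fit_stump(X, y, w):
    """Return the stump (feature, threshold, sign) with lowest weighted error."""
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for sign in (1, -1):
                pred = np.where(X[:, f] > t, sign, -sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (f, t, sign)
    return best, best_err

def stump_predict(X, stump):
    f, t, sign = stump
    return np.where(X[:, f] > t, sign, -sign)

# Boosting loop: up-weight the cases the ensemble currently gets wrong,
# so each new stump targets the existing blind spots.
w = np.full(len(y), 1 / len(y))
ensemble = []
for _ in range(20):
    stump, err = fit_stump(X, y, w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # weight of this stump
    w = w * np.exp(-alpha * y * stump_predict(X, stump))
    w = w / w.sum()
    ensemble.append((alpha, stump))

def ensemble_predict(X, ensemble):
    score = sum(a * stump_predict(X, s) for a, s in ensemble)
    return np.where(score > 0, 1, -1)

uniform = np.full(len(y), 1 / len(y))
single_acc = np.mean(stump_predict(X, fit_stump(X, y, uniform)[0]) == y)
boost_acc = np.mean(ensemble_predict(X, ensemble) == y)
print(f"single stump accuracy: {single_acc:.2f}, boosted ensemble: {boost_acc:.2f}")
```

No single stump can trace the diagonal boundary, but the weighted committee of stumps, each trained on its predecessors’ mistakes, approximates it well.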
Another possibility is to take a different approach entirely. Imagine that out of 70,000 employees, 100 are superstars. Could you predict those? You wouldn’t want to hire everyone based on this, but if one candidate falls just short of other criteria yet is the most likely out of a million to be a superstar, you would be crazy not to hire her.
Using a single criterion will give you very similar people, while using many models will focus on different dimensions. And once all the data is coded in, it is not that expensive to construct additional models. Once you have the data, you want to array it on a lattice of models.
In The Model Thinker, you wrote: “Policy choices made on the basis of single models may ignore important features such as income disparity, identity, diversity and external interdependencies.” Is this what we are doing with the Paris Agreement target to reduce CO2 emissions rather than taking on a broader, more complex view?
Something I talk about in the book is Campbell’s Law, which says that as soon as you try to base policy on a metric it will no longer work because people figure out workarounds. It is obviously better to have CO2 restrictions than not, but will companies find other ways of polluting – that don’t emit CO2 but maybe produce something else with just as bad an effect? If there are other ways to produce energy or create production processes, will that compensating behaviour negate the effect of the CO2 restriction?
On the other hand – and potentially this is another model – once you place large costs on CO2 or other greenhouse gases, you create huge incentives for innovation. Once incandescent bulbs became illegal, all of a sudden there was this amazing level of innovation. Previously, lightbulbs were so cheap it made no sense to try and innovate.
You advocate the use of multiple models to get the “diversity bonus” out of them. Can you give us a brief introduction to this concept?
Two people with the same training and experience will think about the world in similar ways and their predictions will be correlated. This is where diversity delivers its big bonus. If we add someone whose predictions are not as good but are negatively correlated to the others, the collective prediction will be much better.
With models, the key question is figuring out the right ensemble. What you want is models that get different things wrong differently.
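Page’s point about negatively correlated forecasters can be made concrete with a small simulation. The forecasters and error weights below are illustrative assumptions, not anything from the interview; the final check is the diversity prediction theorem from The Model Thinker, an exact identity stating that the crowd’s squared error equals the average individual error minus the diversity of the predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(size=10_000)          # the quantity being forecast
noise = lambda: rng.normal(size=truth.shape)

# Two good forecasters whose errors share a common component...
shared = noise()
a = truth + 0.8 * shared + 0.2 * noise()
b = truth + 0.8 * shared + 0.2 * noise()
# ...and a weaker forecaster whose error runs against that component.
c = truth - 0.8 * shared + 0.6 * noise()

def mse(p):
    return np.mean((p - truth) ** 2)

pair = mse((a + b) / 2)      # averaging two similar models helps little
trio = mse((a + b + c) / 3)  # adding the negatively correlated one helps a lot

# Diversity prediction theorem:
# crowd error = average individual error - prediction diversity
preds = np.stack([a, b, c])
crowd = preds.mean(axis=0)
avg_err = np.mean([mse(p) for p in preds])
diversity = np.mean((preds - crowd) ** 2)
print(f"c alone: {mse(c):.2f}, pair: {pair:.2f}, trio: {trio:.2f}")
```

Forecaster `c` is individually the worst of the three, yet the trio’s average beats the pair of similar forecasters by a wide margin: its error cancels the component the other two get wrong in the same direction. That is what “models that get different things wrong differently” buys you.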
How can the use of multiple models bridge gaps between different points of view?
As an example, no one knows what will happen to the economy 30 years out. Any one person will likely be horribly wrong so, in addition to using many models to make an average, you can use many models to define a set of bounds, to get a sense of worst and best case. There is a real robustness advantage. Ideally, you would like to use the models with the best measurements of facts but maintain the diversity of conjectures.
What about the availability of quality data?
While Big Data is what can be gathered from the web, there is also richer, “thick-description” data that comes from observing and interviewing people, which shouldn’t be overlooked. Now that Big Data is so cheap, qualitative data is, if anything, far more valuable.
Going back to where we began, think about how job interviews have changed. Typical questions used to be, “Where did you go to college? What’s your grade-point average?” That is now a waste of time because an algorithm has already taken it into account. That frees interviewers to focus on different things, like the ability to think on one’s feet.
Thinking about thinking is hard work. However, analogies and models are tools that help us every day. Given that investing is about understanding the world and making sense of where things may be headed, any improvements we can make are worthwhile.
The power of analogies lies in the nature of language. They enable complex information to be communicated more easily and help us reach new levels of understanding. By making creative connections, they allow us to see things anew, to uncover patterns and new relationships.
Models are more formal and systematic. Multiple and diverse models allow complex data sets and problems to be analysed from a broader array of perspectives, thereby improving procedural rigour and – hopefully – leading to more intelligent decisions.
In this sense, it is no coincidence that effective analogies and model thinking require breadth and diverse thinking, both at the individual and group level. In an uncertain world where complexity abounds, we cannot afford to ignore their significance.