Should You Trust Your Insight?
You’re standing in the shower when you suddenly remember who wrote that song you heard in the car last night. Or you’re lying there half asleep after hitting the snooze button and a perfect caption for last week’s New Yorker cartoon contest pops into your brain. Or you suddenly realize what’s in the picture that heads this article. This is insight. It’s sudden. It’s often unexpected. And it seems so certain.
The question is…can you trust it?
This week, there’s a new take on the validity of insight from a team including some of the leading voices in insight research, namely John Kounios, Edward Bowden, and Mark Beeman, working from an idea by Carola Salvi. For years, they’ve been doing nifty things like using fMRI to see the basis of insight in the brain, and showing how to prepare the brain to create insight. Now, in the journal Thinking & Reasoning, they ask whether the answers that spring from insight are likely to be correct. Specifically, how does a solution created by insight stack up against a solution created by analysis?
Unfortunately, this is an impossible question – I mean, you can’t just give people a bunch of little puzzles, ask them whether they used insight or analysis to solve them, and then compare the quality of the answers, can you?
Actually, you can.
That’s because, over the years, Kounios, Beeman, Bowden and Salvi have done the groundwork. For example, they’ve identified types of problems that can be solved using either strategy. Four of these kinds of problems are remote association (e.g. “What one word goes with the words crab, pine and sauce to make a compound word or common phrase?”), anagrams (e.g. making “dear” from “read”), rebus puzzles (e.g. na fish, na fish making “tuna fish”), and visual aha problems (e.g. the indistinct outline of an owl that suddenly comes into focus). And they’ve also shown that people are darn good at distinguishing whether they solved a problem with insight (“the answer suddenly coming to mind, being somewhat surprising, and with the participant having difficulty stating how the solution was obtained,” they write) or with analysis (“the answer coming to mind gradually, using a strategy such as generating a compound for one word and testing it with other words, and being able to state how the solution was obtained”).
So, yes, you can just give people a bunch of little puzzles, ask them whether they used insight or analysis to solve them, and then compare the quality of the answers. In fact that’s exactly what the researchers did.
They started with 120 remote association problems. Of the answers that people said came through insight, 93.7 percent were correct. Of the answers that came through analysis, only 78.3 percent were correct. Then they moved on to 180 anagrams: Insight was correct 97.6 percent of the time, whereas analysis gave the correct answer 91.6 percent of the time. On rebus puzzles, insight won 78.5 percent to 63.2 percent. On visual aha problems, insight won 78.4 percent to 41.5 percent. The title of the paper says it all: “Insight solutions are correct more often than analytic solutions.”
Now, you can probably come up with a myriad of other factors that could pollute these results – maybe the easier questions were the ones that succumbed to insight, or maybe people tried insight first and resorted to analysis only when it failed? – but you can rest assured that the researchers had these same concerns and tried their hardest to ensure that insight and analysis were compared in the most apples-to-apples way possible.
So why should this be? Why should a quick solution without a known method be better than thinking something through in a reasoned, rational, methodical way? A big piece of the explanation, the authors write, may be that insight is black and white – it either returns a sure answer or no answer at all (many participants timed out on many questions, failing to give an answer – perhaps they were looking for insight that never came?). Analysis, on the other hand, isn’t black and white – it moves from black through various shades of grey until participants finally give an answer…even an incorrect one.
Another explanation may come from how a stimulus “spreads” to become an answer. For example, the authors point to the words pine, crab and sauce in the remote association test. Try using analysis to start from any single word. What do you associate with “pine”? You’ll probably come up with cone and tree. These concepts are “strongly primed” by the word “pine”. But it takes weak priming by all three words pine, crab and sauce to get the solution, “apple,” that goes with each. In this example, it’s hard to get to “apple” with analysis from a single word and much easier to get there with insight drawn from the fuzzy combination of all three.
But these explanations address something that might not matter that much to you, namely why you arrive at a solution through analysis or through insight. A more important question for everyday life might be what to do with an insight when it happens to show up. And in that case, the takeaway seems to be that you should trust it. Or, at least, it’s likely to be better than a solution you could have found through analysis!
Life doesn’t always offer insight. But when it does, it’s probably right.
By Garth Sundem