We often feel like our actions result from deliberate reasoning and explicit, carefully-orchestrated intentions, but psychologists have been telling us otherwise for decades.
Freud famously suggested that people's everyday speech and behavior betray unconscious ideas fighting for expression. A man's complexes, he wrote, are "what directs his conduct in the world." But the idea that the human mind contains vast, unconscious resources, like an iceberg that hides the bulk of its mass below water, predates Freud; it appears in earlier thinkers, including Gottfried Wilhelm Leibniz, Johann Friedrich Herbart and Gustav Theodor Fechner.
The idea also appears in contemporary psychological theories. In a classic paper published in 1977, for example, Richard Nisbett and Timothy Wilson argued that people rarely have access to complex cognitive processes, such as decision-making, via introspection. (You think you chose the bagel on the right because it looked fresher, but that was just a plausible story you told yourself; really, you would have chosen the other bagel had it been the one on the right.) More recently, in a 2001 paper, Jonathan Haidt wrote about "the emotional dog and its rational tail," arguing that many of our explicitly held moral judgments result from unconscious social and cultural influences rather than conscious deliberation.
The idea that unconscious beliefs govern a great deal of everyday attitudes and behavior has been especially influential in explaining bias and discrimination, which can happen at the hands of well-intentioned people who don't explicitly hold sexist or racist beliefs. We know that biased behavior is pervasive but that explicitly endorsed bias is not, so unconscious (or "implicit") beliefs seem like the obvious culprits.
But are they?
A paper just published in the Journal of Personality and Social Psychology raises some doubts. The paper analyzed dozens of studies investigating the relationships among implicit beliefs concerning people of other races and ethnicities, explicit beliefs concerning those same groups, and actual behavior toward them, ranging from what people said in group interactions to policy and voting choices.
To measure implicit beliefs, the papers included in the analysis used a popular tool known as the Implicit Association Test, or "IAT." (You can try it yourself at Harvard's Project Implicit.) The IAT comes in many flavors, but the basic idea is to measure the strength of unconscious associations between different entities, such as social groups and attributes. For example, in a "black-white" IAT, you might be asked to perform two intermixed sorting tasks, responding as quickly as possible. When you see a word appear on the screen, you should press one button if the word is good (e.g., joy, peace, love, laughter), and another button if the word is bad (e.g., agony, terrible, evil, failure). If you see a face appear on the screen, you should press one button if the person looks African American, and another if the person looks European American.
Here's the key: the buttons for the good/bad word sorting are the same as those for the black/white face sorting. Some of the time, the "good" button will correspond to black faces; at other times, the pairing will be reversed. If people implicitly associate black and white faces (and by extension, black and white people) with differentially positive or negative attributes, then one of these pairings should be easier than the other. In fact, most European Americans perform the task more quickly when "good" and "white" have the same button and "bad" and "black" have the same button than when the reverse is true, and this difference in speed is thought to reflect the strength of an individual's association between the racial categories and positive or negative attributes.
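To make the scoring concrete, here is a minimal sketch of how a reaction-time difference can be turned into an IAT-style score. The reaction times below are hypothetical, and this is a simplified version of the commonly used "D score" (mean latency difference divided by the pooled standard deviation), not the exact procedure used in the studies discussed here.

```python
# Simplified, illustrative IAT scoring: difference in mean reaction times
# between the two button pairings, scaled by the pooled standard deviation.
from statistics import mean, stdev

# Hypothetical reaction times (milliseconds) for one participant.
congruent = [650, 700, 620, 680, 710, 640, 660, 690]    # one button pairing
incongruent = [820, 900, 780, 850, 870, 800, 830, 880]  # pairing reversed

# Pooled standard deviation across both blocks of trials.
pooled_sd = stdev(congruent + incongruent)

# A larger positive score means faster responses in the "congruent" block,
# taken to reflect a stronger implicit association with that pairing.
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd
print(round(d_score, 2))  # prints 1.8 for these made-up data
```

The point of dividing by the standard deviation is to express the speed difference relative to how variable the participant's responses are overall, so that scores are comparable across people who respond at different baseline speeds.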
The million-dollar question is what such implicit associations tell us about individuals' discriminatory behavior and how it can be changed. For example, are people who exhibit large biases on a black-white IAT more likely to interpret a black face as hostile, to subtly alter their body posture to communicate more aggression towards African Americans, or to engage in unfair hiring practices?
These questions are difficult to study, but the answer from a prior review of the literature seemed to be yes: implicit attitudes about race and ethnicity, as measured by the IAT, predicted discriminatory behavior to some extent, and often better than people's explicit ratings of their own biases.
However, the new analysis, by researchers Frederick Oswald, Gregory Mitchell, Hart Blanton, James Jaccard and Philip Tetlock, suggests some different conclusions. Analyzing the largest sample of studies on racial and ethnic bias to date, and breaking down their analysis into more fine-grained categories than was done in the past, the researchers found that IATs were poor predictors of every type of behavior examined, and predicted behavior no more effectively than measures of explicit belief. Measures of implicit and explicit belief jointly accounted for well under 7 percent of the variance in people's behavior in most cases, with the two measures being largely redundant.
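The "variance explained" figure follows from simple arithmetic: variance explained is the square of the correlation coefficient, so even correlations that sound respectable translate into small percentages. The correlations below are illustrative, not the paper's exact estimates.

```python
# Converting a correlation coefficient (r) into percent variance explained (r squared).
# These r values are hypothetical examples, not figures from the meta-analysis.
for r in (0.10, 0.24, 0.26):
    variance_explained = r ** 2
    print(f"r = {r:.2f} -> {variance_explained * 100:.1f}% of variance")
# r = 0.10 -> 1.0% of variance
# r = 0.24 -> 5.8% of variance
# r = 0.26 -> 6.8% of variance
```

This is why a correlation in the mid-0.2 range, which is typical for many psychological measures, still leaves well over 90 percent of the variation in behavior unaccounted for.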
In some ways this low number isn't too surprising. Human psychology is complex, and successfully predicting human behavior is the exception rather than the rule. Moreover, small effects can be consequential when you consider the scale on which they operate: millions of people behaving in ways that can positively and negatively affect others on a daily basis. So the biggest surprise, for me, wasn't the relatively low predictive value of the IAT when it came to human behavior, but that the IAT did no better than explicit measures of belief. After all, implicit beliefs are supposed to explain how a person who doesn't consider herself racist might inadvertently engage in discriminatory behavior.
Another surprise was that the strongest correlations to emerge in the new analysis were between IAT performance and brain activity (as measured by fMRI or other brain imaging techniques). In fact, this is one reason why the new analysis revealed more modest relationships between IAT performance and discriminatory behavior. In the previous analysis, brain activity was lumped in with direct measures of behavior, such as body language and policy preferences. In the present analysis, brain activity was considered on its own - not only because brain activity isn't properly "behavior," but also because it isn't clearly discrimination. As the researchers note:
We cannot conceive of any socially meaningful definition of discrimination that treats differences in brain activity - independent of relevant behavioral outcomes - as discrimination.
So we're left with a puzzle. Discriminatory behavior certainly occurs, but is not strongly predicted by either IAT performance or the kinds of explicit measures of belief employed in most research to date.
To quote the authors, is the problem "flawed theories" or "flawed instruments"? In other words, do we need to revise the idea - yet again - that our behaviors are governed by (conscious or unconscious) beliefs, or are we failing to adequately measure those beliefs?
I don't have the answers to these questions, so instead let me raise a few more.
It's tempting to think that unconscious beliefs reflect what people "really" believe, that they reveal the true self. But why should unconscious beliefs be considered any more genuine than conscious beliefs, the ones over which we have more control? Is it reasonable to hold people responsible for their implicit beliefs, or their brain activity, even if it doesn't translate into action?
Of course, these questions go well beyond the conclusions of a single paper, and even beyond descriptive psychology to questions of ethics and values. But perhaps they're questions we need to be asking.
You can keep up with more of what Tania Lombrozo is thinking on Twitter: @TaniaLombrozo