Why ‘Implicit Bias Training’ Makes No Sense

Does it make sense for Starbucks to put its workforce through “implicit bias training”?

Maybe as a public relations gesture, to apologize for the arrest of two peaceful black men at a Philadelphia Starbucks who were waiting for a meeting without buying anything and then asked to use the restroom. But if the company’s goal is to remove hidden prejudice from its employees’ minds, this “training” makes no sense at all.

The reason: despite many attempts to define it, no one knows what “implicit bias” is, and so no one has any idea what “training” might be useful to combat it.

Can It Be Defined?

I attended a special National Science Foundation (NSF) conference last fall to address controversies surrounding “implicit bias.” In the end, there was a discussion to see if the attendees could agree on what the term meant. They (we) could not do so.

Dr. Anthony Greenwald, a social psychologist at the University of Washington and one of the foremost proponents of implicit bias, presented two different definitions (which is fine; scientists changing their opinions or views of some phenomenon or measure is a natural part of the evolution of scientific understanding). Here is Greenwald’s first definition, based on work done in the 1990s:

Implicit bias is introspectively unidentified (or inaccurately identified) effects of past experience that mediate discriminatory behavior.

Much of the time the term is thrown around, even in scientific articles, without being defined. When it is defined, different people define it differently. Yet training has been instituted at many colleges, universities, and corporations around the country. This training, endorsed by the American Association of Colleges and Universities, is wildly premature, because research has only just begun to examine its effectiveness. Some training may even have adverse consequences.

The most common measure of what some people refer to as implicit bias is the implicit association test (IAT), which was also developed by Dr. Greenwald. It assesses how closely various concepts in memory are linked (or “associated”). For example, I suspect most people would associate vegetables with green, and comets with space, more strongly, say, than they would associate comets with green or vegetables with space. The IAT assesses this strength of association by comparing how quickly people complete various categorization tasks (for a simple description of how the IAT works see this article). Although the association of comets with space might be harmless, perhaps other associations, such as Jews with banks, women with family, or African Americans with crime are not so harmless. Or perhaps they are.
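The scoring logic behind this reaction-time comparison can be sketched in a few lines. The numbers and the formula below are hypothetical illustrations, a stripped-down version of the D-score style of scoring associated with the IAT literature, not the test’s actual algorithm:

```python
# Toy sketch of IAT-style scoring. All data are hypothetical, and this
# simplified d_score is an illustration, not the official IAT algorithm
# (the real procedure also handles error trials, latency trimming, etc.).
from statistics import mean, stdev

def d_score(congruent_ms, incongruent_ms):
    """Difference in mean response times, scaled by the pooled standard deviation.

    Positive values mean the 'congruent' sorting block was completed faster,
    which is interpreted as a stronger association between the concepts
    that shared a response key in that block.
    """
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# Hypothetical latencies (milliseconds) for two sorting blocks
congruent = [620, 580, 640, 600, 590]    # e.g., vegetable + green share a key
incongruent = [780, 820, 760, 800, 790]  # e.g., vegetable + space share a key

print(round(d_score(congruent, incongruent), 2))  # prints 1.85
```

The point of the sketch is only that the measure is a relative speed difference between two sorting tasks; nothing in the arithmetic itself establishes past experience, mediation, or unconsciousness, which is exactly the gap discussed next.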

How does speed of completing this categorization task map onto Dr. Greenwald’s definition of implicit bias? It doesn’t. The IAT might sometimes reflect a person’s past experience or predict discrimination, but whether it does either is an empirical question; it does not measure either one. It does not measure mediation (mediation refers to the idea that A causes C because A first causes B, which then causes C). And because research over the last few years has shown that people are quite good at predicting their IAT scores, one cannot say the effects are introspectively unidentified or inaccurately identified. So if we delete what the IAT does not measure from Dr. Greenwald’s definition, we get this:

Implicit bias is.

Dr. Greenwald also offered this second, updated definition:

Implicit bias is automatic cultural filtering.

I do not recall him explaining what he meant by this in any detail, and, frankly, I have no idea what it means. What about other definitions?

In many areas of academia, the definition matters less than the meme. In an article in the Brown Political Review, “The Dangerous Mind: Unconscious Bias in Higher Education,” the authors conclude, “As universities attempt to promote equality for people of all races, genders, and ethnicities, they must combat the unconscious bias that plagues relationships between students and professors. To do this, they should adopt unconscious bias training and help assist all professors counter their biases.”

A recent article in Scientific American defined implicit bias as “the tendency for stereotype-confirming thoughts to pass spontaneously through our minds.” This makes no sense either, because, in fact, the evidence so far shows that stereotype accuracy is one of the largest and most replicable findings in social psychology. Why is it reasonable to refer to accurate beliefs as any sort of bias? It isn’t. This CNN article on the Starbucks incident defined implicit bias as: “the automatic associations people have in their minds about groups of people, including stereotype,” a definition that also runs up against the absurdity of defining beliefs that are accurate as “bias.”

Measuring the Wrong Thing

Much of the usage of the term is tautological. The IAT claims to measure implicit bias. But how do we know people’s implicit associations are biased? Because that is what the IAT measures.

From a purely empirical standpoint, there is considerable scholarship showing that:

1. The IAT measures lots of things besides prejudice;
2. Its reliability is low;
3. There seems to be something fishy in the test, such that scores well above zero (usually interpreted as bias) correspond to egalitarian responses on other measures; and
4. Its ability to predict individual behavior is quite low.

But once there is a score that could brand someone a racist, it could follow them throughout their lives and affect their futures.

The Scientific American article argued that, even if the IAT is suboptimal, there is other evidence of such biases. But much of what they cite is not “bias” in any recognizable sense. For example, they cite evidence that people are faster to recognize bad words when paired with white faces than with black faces. Whatever theoretical value this might have, linking such findings to discrimination or inequality is rarely, if ever, done.

On the other hand, they did review evidence of ongoing discrimination as if that were evidence of “implicit bias.” Is it? If so, then we now have yet another definition of “implicit bias” – discrimination. And if discrimination = implicit bias, why do we need a new term for an old phenomenon? Regardless, they did not cite any of the research showing no discrimination at all, such as this paper reporting the nonexistence of discrimination in 17 large-sample, nationally representative studies. To be sure, the evidence of discrimination they cited is real enough. But cherry-picking evidence is a sure way to overstate discrimination.

Starbucks’ go-to solution – subjecting the entire company to a day of “implicit bias training” – borders on the ridiculous, at least as a plan to actually reduce discrimination. Admittedly, it may serve other purposes: public relations, preventing lawsuits, and so on.

As discrimination reduction, implicit bias training is silly. Think about it: if “implicit bias” really is the automatic and uncontrollable prejudice so often claimed, one cannot “train” people out of it anyway. More important, it is very difficult to change implicit associations about groups, and there is no evidence that doing so produces changes in behavior.

Furthermore, much of the impetus behind the push to bring “implicit bias” from the ivory tower to the real world (faculty training, policy, law) has been the assumption that unconscious prejudices lead to biases against individuals. Although that surely does sometimes happen, and might even have happened in the Starbucks incident, we recently published a paper showing how easy it is to eliminate biases against particular individuals.

In our study, people evaluated the intelligence of Luke (white) and Jamal (African American). We found a typical bias score on the IAT (favoring Luke) when people knew only the person’s name. However, when they found out that both Luke and Jamal were weak students (low GPAs and SATs), there was no bias. When they found out that both were strong students, there was no bias. One simple solution to “implicit bias” might be to encourage people to judge others on their merits rather than on their categories. When they do so, biases are often greatly reduced and sometimes disappear.

What Should Starbucks Do to Solve an Actual Problem?

First, they need to figure out what the problem really is. With thousands of stores and millions of social interactions a year, some interactions will occasionally go bad. Most likely, the cop-calling employee just needs a good talking-to. Perhaps it was simply bad customer relations, and the cop-caller was also overly aggressive with white noncustomers. Or perhaps, upon further review, all concerned would conclude there really was a racist element here.

If so, the employee could be retrained or, if needed, fired. The solution would depend on the judgment about the nature of the problem. Personally, I tend to err on the forgiving side; if this was the first time the cop-caller screwed up, I’d lean towards a second chance. If it was not the first, I would be less forgiving.

Lee Jussim

Lee Jussim, Ph.D., is a professor of social psychology at Rutgers University and was a Fellow and Consulting Scholar at the Center for Advanced Study in the Behavioral Sciences at Stanford University (2013-15).
