Avoiding Discriminatory Information Improves Personnel Decisions, New Research Finds
New Ross School of Business research concludes that not using information about a person's race or gender can result in more accurate hiring decisions.
The new Michigan Ross study — from Professor Felipe Csaszar, Professor Michael Jensen, and PhD student Diana Jue-Rajasingh, published online by the journal Organization Science — offers new insight into the phenomenon of statistical discrimination.
The authors explained that one reason discrimination is so prevalent is that some believe it improves decision accuracy.
“For example, a hiring manager sees that two job applicants, one white and one black, list proficiency in a programming language on their resumes. The manager knows that, on average, white job applicants have more opportunities to develop programming ability than black applicants and hires the white applicant. The manager discriminates by using information about applicants' race in an attempt to make better decisions,” the authors explain. “This insight is not new. Prominent economists, including Nobel laureates Kenneth Arrow and Edmund Phelps, have formalized this insight in statistical discrimination theory.
“There is a problem with this theory, however. Aside from the fact that it has been used to justify discrimination, it also relies on the questionable assumption that humans are consistent decision makers. However, as argued by Nobel laureate Daniel Kahneman and other scholars of judgment and decision making, this is just not true. Humans are terribly inconsistent in making decisions. They can take the same information and make different decisions depending on random factors like their mood, the weather, hunger, and other distractions.”
Therefore, the Michigan Ross researchers created a new model of statistical discrimination that incorporates what is known about human fallibility. “When we do this, the gains in decision accuracy from discrimination decrease dramatically. In some cases, the benefits to accuracy from discriminating can even turn into costs,” they write.
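The intuition behind this result can be illustrated with a toy Monte Carlo simulation. This is not the authors' model; it is a minimal sketch under assumed parameters (group mean abilities of ±0.5, unit-variance abilities and signals, and inconsistency modeled as additive noise on the final judgment). A manager compares two candidates, one from each group, either using the signal alone or shrinking it toward the group mean (statistical discrimination), with varying amounts of judgment noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def hiring_accuracy(judgment_sd, use_group_cue, trials=200_000):
    """Fraction of head-to-head hires where the truly better candidate wins.

    Candidate A comes from a group with a higher mean ability (a crude
    stand-in for 'more opportunities to develop skill'); candidate B from a
    lower-mean group. The manager sees a noisy signal of each candidate's
    ability, optionally combines it with the group mean (statistical
    discrimination), then adds judgment noise (human inconsistency) before
    comparing the two estimates.
    """
    mu_a, mu_b = 0.5, -0.5  # assumed group mean abilities
    ability_a = rng.normal(mu_a, 1.0, trials)
    ability_b = rng.normal(mu_b, 1.0, trials)
    signal_a = ability_a + rng.normal(0, 1.0, trials)  # noisy resume signal
    signal_b = ability_b + rng.normal(0, 1.0, trials)

    if use_group_cue:
        # Bayesian posterior mean: shrink each signal halfway toward its
        # group mean (the optimal weights when both variances equal 1).
        est_a = (signal_a + mu_a) / 2
        est_b = (signal_b + mu_b) / 2
    else:
        est_a, est_b = signal_a, signal_b

    # Human inconsistency: a random wobble added to each final judgment.
    est_a = est_a + rng.normal(0, judgment_sd, trials)
    est_b = est_b + rng.normal(0, judgment_sd, trials)

    hired_a = est_a > est_b
    correct = np.where(hired_a, ability_a > ability_b, ability_b > ability_a)
    return correct.mean()

for sd in (0.0, 3.0):
    plain = hiring_accuracy(sd, use_group_cue=False)
    cued = hiring_accuracy(sd, use_group_cue=True)
    print(f"judgment noise sd={sd}: no cue {plain:.3f}, group cue {cued:.3f}")
```

In this sketch, a perfectly consistent manager (zero judgment noise) really does gain accuracy from using the group cue, as statistical discrimination theory predicts. But once large, inconsistent judgment noise is added, using the cue becomes a net cost: shrinking signals toward group means compresses the informative part of the estimate, so the same wobble swamps it more easily. The effect direction matches the study's claim, though the mechanism here is only one stylized possibility.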
The key implication of the results is that “human decision makers are typically better off not discriminating. That said, our model raises troubling questions about the use of artificial intelligence to make decisions. Unlike inconsistent human discriminators, AIs can make optimal use of information in a consistent fashion. This brings up serious questions about the ethical use of AI in decision making.” The authors also propose ways in which governments could ameliorate this situation.
However, the researchers conclude that when it comes to humans, “less is more.”
“Not using information about a person's race or gender can result in more accurate decisions,” they write. “Discrimination is prevalent because it is tempting to use group membership such as race and gender as information cues. But using discriminatory cues is not only morally questionable, it is more likely to result in less accurate decisions. This makes the case against discrimination even stronger.”