
Study Outlines What Creates Racial Bias in Facial Recognition Technology

As facial recognition technology comes into wider use worldwide, more attention has fallen on the imbalance in the technology’s performance across races.

In a study published online Sept. 29, 2020, in IEEE Transactions on Biometrics, Behavior, and Identity Science, researchers from UT Dallas’ School of Behavioral and Brain Sciences (BBS) outlined the factors that contribute to these deficits in facial recognition accuracy and offered a guide to assess the algorithms as the technology improves.

Dr. Alice O’Toole, the Aage and Margareta Møller Professor in BBS, is the senior author of the study, which she describes as both “profound and unsatisfying” because it clarifies the scale of the challenge.

“Everybody’s looking for a simple solution, but the fact that we outline these different ways that biases can happen — none of them being mutually exclusive — makes this a cautionary paper,” she said.


In a 2019 study conducted by the National Institute of Standards and Technology, the federal agency found that the majority of facial recognition algorithms were far more likely to misidentify racial minorities than white people.

As a result of their research, the UT Dallas scientists concluded that while there isn’t a one-size-fits-all solution for racial bias in facial recognition algorithms, there are specific approaches that can improve the technology’s performance.

Jacqueline Cavazos PhD’20, the study’s lead author and a former psychological sciences student, divided the factors contributing to bias into two categories: data-driven and operationally defined. The former affect the algorithm’s performance itself, while the latter stem from how users deploy the technology.

“Data-driven factors center on the most commonly theorized issues — that the training pool of images is in itself skewed,” Cavazos said. “Are the images being used representative of groups? Are the training images of the same quality across races?”
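The questions Cavazos raises can be framed as a simple audit of a training set’s metadata. The sketch below is purely illustrative (the group labels, quality scores, and data structure are assumptions, not anything from the study): it checks whether each group is represented in comparable numbers and at comparable image quality.

```python
from collections import Counter

# Hypothetical training-set metadata: each record carries a group
# label and an image-quality score in [0, 1]. Real pipelines would
# read this from dataset annotations.
training_images = [
    {"group": "A", "quality": 0.91},
    {"group": "A", "quality": 0.88},
    {"group": "B", "quality": 0.62},
    {"group": "A", "quality": 0.95},
    {"group": "B", "quality": 0.58},
]

# Representation check: how many training images per group?
counts = Counter(img["group"] for img in training_images)
print(dict(counts))  # {'A': 3, 'B': 2}

# Quality check: is mean image quality comparable across groups?
mean_quality = {}
for group in counts:
    scores = [img["quality"] for img in training_images
              if img["group"] == group]
    mean_quality[group] = round(sum(scores) / len(scores), 2)
print(mean_quality)  # {'A': 0.91, 'B': 0.6}
```

In this toy data, group B is both underrepresented and captured at lower quality, exactly the kind of skew the data-driven category describes.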

O’Toole added, “Our discussion of image difficulty for racial bias is a relatively new topic. We show that as pairs of images become more difficult to distinguish — as quality is reduced — racial bias becomes more pronounced. That hasn’t been shown before.”

“We show that as pairs of images become more difficult to distinguish — as quality is reduced — racial bias becomes more pronounced.”

-Dr. Alice O’Toole

O’Toole believes their research could help users understand which algorithms should be expected to show bias and how they might calibrate for it. She said researchers are still fighting myths about facial recognition bias. One is the notion that bias is a problem unique to machines. Another is the perception that race is an all-or-nothing descriptor.

“Race must not be viewed as … if there’s a finite list of races,” O’Toole said. “In truth, biologically, race is continuous, so it’s an unreasonable expectation to think you can say ‘race equity’ and tune an algorithm for two races. This might disadvantage people of mixed race.”

The research was supported by the National Eye Institute and the Intelligence Advanced Research Projects Activity, part of the Office of the Director of National Intelligence.

– Stephen Fontenot