
August 27, 2024

Navigating the complex relationship between race and AI

Credit: Pixabay/CC0 Public Domain

Race is intricately woven into the fabric of daily culture, and as a result, it influences artificial intelligence (AI) systems that automate tasks traditionally performed by humans.

Whether in disease detection, loan approvals, or generative imagery, race is an embedded element in numerous technologies. Consequently, AI systems have frequently been criticized for their insensitivity and inaccuracies in addressing race, often neglecting the complex social nuances it entails.

Two members of the Pamplin College of Business—marketing doctoral candidate Angela Yi and Assistant Professor of Marketing Broderick Turner—recently explored the complex relationship between race and AI in their paper, "Representations and consequences of race in AI systems," appearing in the August edition of the journal Current Opinion in Psychology.

To address the issue, Yi and Turner first had to define race.

"There's actually no behind race," Yi said. "It's more of a social categorization system that we still see used today."

The concept of race first emerged during the European Enlightenment. It was used to classify people based on superficial traits, which were then used to reinforce existing social hierarchies. These categories, established during periods of colonialism and slavery, have persisted and continue to influence modern AI systems, often perpetuating outdated and inaccurate assumptions.


According to Yi, race is often treated as a fixed category—such as Black, white, or Asian—in many AI systems. This static representation fails to capture the social and cultural dimensions of race, leading to inaccuracies and potential biases.

"For example, when a person using Google Gemini searched for an image of the Founding Fathers of the United States, the system outputted an image that included non-white individuals," Yi said. "It is speculated that this occurred because Google was trying to overcorrect for diverse representation."

As the Google Gemini example illustrates, integrating race into AI systems presents significant challenges: treating race as a fixed category overlooks its social and historical dimensions.

In their paper, Turner and Yi offered recommendations for appropriately incorporating race in AI systems.

"People need to recognize that race is a dynamic social construct that changes over time," Yi said. "AI systems should reflect this complexity rather than relying on outdated or overly simplistic categories."

Yi also suggested that developers of AI systems consider the broader implications of including race in AI systems and embrace a more nuanced representation.

"Including race in AI systems is not always going to be a simple answer, but it's going to need to be a nuanced answer because race is social, and understanding the social and historical context of race can help developers create more equitable and accurate models," she said.

More information: Angela Yi et al, Representations and consequences of race in AI systems, Current Opinion in Psychology (2024).

Provided by Virginia Tech

