
Addressing AI's impact on communities of color

By UF CJC

Artificial Intelligence (AI) has a color problem. Various studies have demonstrated how African Americans, in particular, are negatively affected by AI-based algorithms.

For example, a 2019 article in Nature reported on a study showing that an algorithm widely used in U.S. hospitals to allocate health care to patients had been systematically discriminating against Black people. The study “concluded that the algorithm was less likely to refer Black people than White people who were equally sick to programs that aim to improve care for patients with complex medical needs.”

Other studies have shown AI-based discrimination in predictive policing, facial recognition, employment application screening, bank loan approvals, and many other areas.

Part of the problem is that most AI applications are created by White engineers who lack insight into the experiences and cultural perspectives of communities of color. In her article “Amplifying Otherness,” Jasmine McNealy, an associate professor of telecommunication in the UF College of Journalism and Communications, discusses both intentional and unintentional neglect by coders, which she describes as “the creation, use, and deployment of technology without regard for the disparate impacts they may have on communities different than the imagined audience.”

“Creators embed their creations with their own values, and values reflect culture and politics. If communities are outside of the scope of the creator’s purview, they may fail to recognize the consequences of that technology for that community,” McNealy wrote.

Equitable AI

McNealy, who is also associate director of the Marion B. Brechner First Amendment Project and a Trust Scholar with the Consortium on Trust in Media and Technology, has been exploring the impact of AI on marginalized and vulnerable communities. “You can’t start from the perspective that we need to make a technology equitable, because technology reflects society…. The problem is how do we look at the system in which the technology is going to work or be active or behave and try to make that system more equitable?”

In a recent interview, McNealy explained that “we insert technology, and it amplifies the problem because people think, ‘Oh, technology is neutral,’ and it’s not. And then people double down on it not wanting to stop using the technology because we like to be efficient. We like to use tools. That’s the nature of humans. But these extensions of ourselves are just making the problem worse, and we still haven’t fixed the underlying root cause. Until that happens, technology can continue to amplify these tragic and terrible events and systems that we have in place already.”

In October 2020, McNealy received a prestigious Google Award for Inclusion Research for her project exploring community-based mechanisms to combat algorithmic bias. In December, she was named one of the “100 Brilliant Women in AI Ethics” during the Women in AI Ethics Summit.

Culturally Competent AI

Telecommunication Professor Sylvia Chan-Olmsted and Advertising Associate Professor Huan Chen believe that AI can help address inequities in the dissemination of information, which is often distributed uniformly at scale without any cultural considerations. This can leave certain social groups with unequal access to information because their distinct cultural backgrounds are not taken into account.

Chan-Olmsted and Chen were recently awarded a UF AI Research Catalyst Fund grant for their research on “Fairness in Information Access Through Culturally Competent AI Systems.” In their grant proposal, the scholars explain that “Access to information is essential in today’s knowledge economy and fundamental to American democracy. However, certain groups of the population might be excluded or lack access to participate fully in public discourse/economy because their cultural background presents obstacles to access or comprehend the uniformly disseminated information without consideration of cultural relevance. Such information access inequality can result in social injustice to certain cultural groups.”

Their research will explore how sophisticated AI models might incorporate more nuanced cultural dimensions, identifying ways to disseminate information effectively through cultural resonance. As they write, “The project is novel in that it seeks to develop a fair AI system by using social theories to inform the training of [machine-learning] models and addresses the critical but challenging aspect of culture in a multicultural society.”

Check out other stories on the UF AI Initiative.

This story originally appeared on the UF College of Journalism & Communications website.