Biases in a large image-text AI model favor wealthier, Western perspectives
AI model that pairs text, images performs poorly on lower-income or non-Western images, potentially increasing inequality in digital technology representation
In a study evaluating the bias in OpenAI's CLIP, a model that pairs text and images and operates behind the scenes in the popular DALL-E image generator, University of Michigan researchers found that CLIP performs poorly on images that portray low-income and non-Western lifestyles.
"During a time when AI tools are being deployed across the world, having everyone represented in these tools is critical. Yet, we see that a large fraction of the population is not reflected by these applications—not surprisingly, those from the lowest social incomes. This can quickly lead to even larger inequality gaps," said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, who initiated and advised the project.
AI models like CLIP act as foundation models, or models trained on a large amount of unlabeled data that can be adapted to many applications. When AI models are trained with data reflecting a one-sided view of the world, that bias can propagate into downstream applications and tools that rely on the AI.
"If a software was using CLIP to screen images, it could exclude images from a lower-income or minority group instead of truly mislabeled images. It could sweep away all the diversity that a database curator worked hard to include," said Joan Nwatu, a doctoral student in computer science and engineering.
Nwatu led the research team together with Oana Ignat, a postdoctoral researcher in the same department. They co-authored a paper presented at the Empirical Methods in Natural Language Processing conference Dec. 8 in Singapore.
The researchers evaluated the performance of CLIP using Dollar Street, a globally diverse image dataset created by the Gapminder Foundation. Dollar Street contains more than 38,000 images collected from households of various incomes across Africa, the Americas, Asia and Europe. Monthly incomes represented in the dataset range from $26 to nearly $20,000. The images capture everyday items, and are manually annotated with one or more contextual topics, such as "kitchen" or "bed."
CLIP pairs text and images by creating a score that is meant to represent how well the image and text match. That score can then be fed into downstream applications for further processing such as image flagging and labeling. The performance of OpenAI's DALL-E relies heavily on CLIP, which was used to evaluate the model's performance and create a database of image captions that trained DALL-E.
The researchers assessed CLIP's bias by first using CLIP to score the match between each Dollar Street image and its manually annotated topic, then measuring the correlation between the CLIP scores and household income.
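This kind of evaluation can be approximated in a few lines. The sketch below is illustrative rather than the paper's exact pipeline: it uses the open-source openai/clip-vit-base-patch32 checkpoint from Hugging Face to score image-topic pairs, then computes a Spearman rank correlation between the scores and household income. The records list and the prompt template are hypothetical stand-ins for the Dollar Street data.

```python
# A minimal sketch of a CLIP score-vs-income evaluation. The dataset records
# and prompt template are hypothetical; the paper's exact setup may differ.
import torch
from PIL import Image
from scipy.stats import spearmanr
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# Hypothetical Dollar Street-style records: an image file, its annotated
# topic, and the household's monthly income in dollars.
records = [
    {"image": "household_001.jpg", "topic": "kitchen", "income": 26},
    {"image": "household_002.jpg", "topic": "light source", "income": 19000},
]

scores, incomes = [], []
for rec in records:
    image = Image.open(rec["image"])
    inputs = processor(
        text=[f"a photo of a {rec['topic']}"],  # simple prompt template
        images=image,
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds CLIP's scaled image-text similarity score
    scores.append(outputs.logits_per_image.item())
    incomes.append(rec["income"])

# Rank correlation between CLIP scores and household income; a positive
# value indicates higher scores for images from wealthier households.
rho, p_value = spearmanr(scores, incomes)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

A consistently positive correlation across topics, of the kind the researchers report, would mean the model systematically scores wealthier households' images as better matches for the same label.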
"We found that most of the images from higher income households always had higher CLIP scores compared to images from lower income households," Nwatu said.
The topic "light source," for example, typically has higher CLIP scores for electric lamps from wealthier households compared to kerosene lamps from poorer households.
CLIP also demonstrated geographic bias: most of the countries with the lowest scores were low-income African countries. That bias could eliminate diversity in large image datasets and leave low-income, non-Western households underrepresented in applications that rely on CLIP.
"Many AI models aim to achieve a 'general understanding' by utilizing English data from Western countries. However, our research shows this approach results in a considerable performance gap across demographics," Ignat said.
"This gap is important in that demographic factors shape our identities and directly impact the model's effectiveness in the real world. Neglecting these factors could exacerbate discrimination and poverty. Our research aims to bridge this gap and pave the way for more inclusive and reliable models."
The researchers offer several actionable steps for AI developers to build more equitable AI models:
- Invest in geographically diverse datasets to help AI tools learn more diverse backgrounds and perspectives.
- Define evaluation metrics that represent everyone by taking into account location and income.
- Document the demographics of the data AI models are trained on.
"The public should know what the AI was trained on so that they can make informed decisions when using a tool," Nwatu said.
The research was funded by the John Templeton Foundation (#62256) and the U.S. Department of State (#STC10023GR0014).
Study: Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models (DOI: 10.48550/arXiv.2311.05746)
Battle of the AIs in medical research: ChatGPT vs Elicit
Efforts to streamline the process of academic research collection in the medical field using generative AI
Can AI save us from the arduous and time-consuming task of academic research collection? An international team of researchers investigated the credibility and efficiency of generative AI as an information-gathering tool in the medical field.
The research team, led by Professor Masaru Enomoto of the Graduate School of Medicine at Osaka Metropolitan University, fed identical clinical questions and literature selection criteria to two generative AIs: ChatGPT and Elicit. The results showed that while ChatGPT suggested fictitious articles, Elicit was efficient, suggesting multiple references within a few minutes with the same level of accuracy as the researchers.
“This research was conceived out of our experience with managing vast amounts of medical literature over long periods of time. Access to information using generative AI is still in its infancy, so we need to exercise caution, as the current information is not accurate or up-to-date,” said Dr. Enomoto. “However, ChatGPT and other generative AIs are constantly evolving and are expected to revolutionize the field of medical research in the future.”
Their findings were published in Hepatology Communications.
###
About OMU
Osaka Metropolitan University is the third largest public university in Japan, formed by a merger between Osaka City University and Osaka Prefecture University in 2022. OMU upholds "Convergence of Knowledge" through 11 undergraduate schools, a college, and 15 graduate schools. For more research news, visit https://www.omu.ac.jp/en/ or follow us on Twitter: @OsakaMetUniv_en, or Facebook.
JOURNAL
Hepatology Communications
METHOD OF RESEARCH
Meta-analysis
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Collaborating with AI in Literature Search – An Important Frontier
ARTICLE PUBLICATION DATE
7-Dec-2023
COI STATEMENT
Cheng-Hao Tseng is on the speakers’ bureau for Roche. Yao-Chun Hsu consults, advises, is on the speakers’ bureau, and received grants from Gilead. He is on the speakers’ bureau and received grants from AbbVie, Bristol-Myers Squibb, and Roche. He advises Sysmex and is on the speakers’ bureau for MSD. Mindie Nguyen consults and received grants from Gilead, GlaxoSmithKline, and Exact Science. She consults for Intercept and Exelixis. She received grants from Pfizer, Enanta, AstraZeneca, Delfi, Innogen, Curve Bio, Vir, Healio, NCI, and Glycotest. The remaining authors have no conflicts to report.