Friday, June 16, 2023

Generative AI Tools Are Perpetuating Harmful Gender Stereotypes


These new systems reflect the inequitable, racist and sexist biases of their source material.

Marie Lamensch
June 14, 2023
Photo illustration by Jonathan Raa. (NurPhoto via REUTERS)


Over the past few months, generative artificial intelligence (AI) has undergone a boom, with the arrival and widespread availability of tools such as Midjourney, DALL-E 2 and, most impressively, ChatGPT. As big companies such as OpenAI, Google and Microsoft rush to develop machine intelligence tools, governments, businesses and artists are taking stock and frantically debating how AI will impact their work and environment.

Alongside the hype, however, there is also a pervasive current of doom, much of it coming from AI pioneers and technologists themselves. In March, a group of prominent experts, including Canadian computer scientist Yoshua Bengio, Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause in the development of AI systems more powerful than GPT-4. And in May, the “godfather of AI,” Geoffrey Hinton, quit Google, citing concerns about the “existential” risk posed by AI.

In an interview with The Guardian, Hinton argued that AI will not only create “so much fake news that people won’t have any grip on what the truth is” but also eventually surpass the human brain. He further suggested that humanity is at a crossroads and may not survive it as a species. Sam Altman, the CEO of OpenAI, wrote in February: “A misaligned superintelligent AGI [artificial general intelligence] could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

Are these apocalyptic scenarios credible? It’s certainly important to think early and hard about the coming impact of generative AI — something we failed to do with social media. When Facebook and Twitter launched, technologists and policy makers did not imagine the platforms would eventually be used as tools for online disinformation, hate and foreign interference.

That said, Hinton’s hypothetical scenarios also miss the point, by ignoring the present. AI, including generative AI, is already causing harms, particularly to historically marginalized and underrepresented groups. The failure to acknowledge this was apparent in Altman’s testimony before the US Senate Subcommittee on Privacy, Technology, and the Law on May 16.

While Altman and members of the committee discussed ethical and national security concerns, they made little mention of the impacts some experts have been warning about for years: the particular effects of the technology on women. Indeed, the word “women” was mentioned only once during the three-hour hearing. When asked about this by Politico’s Women Rule, Senator Richard Blumenthal said the committee would eventually hear witnesses focus on the “harassment of women.” Such omissions reflect a persistent lack of understanding of the gendered risks these technologies pose.

Replicating and Perpetuating Gender Inequity and Stereotypes

Generative AI creates images, text, audio and video based on word prompts. OpenAI’s DALL-E 2, for example, claims to “create realistic images and art from a description in natural language.”
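
For a sense of how these tools work in practice, here is a minimal sketch of prompt-to-image generation, assuming the open-source Stable Diffusion model accessed through Hugging Face’s diffusers library. The checkpoint name and prompt are illustrative choices, and this is not how DALL-E 2 itself is served:

# A minimal sketch of text-to-image generation with an open-source model.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU; the
# checkpoint and prompt are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The model maps a natural-language description to an image whose content
# reflects the statistical patterns of its training data.
image = pipe("a portrait photo of an engineer at work").images[0]
image.save("engineer.png")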

But the assumption that generative AI can provide a realistic image is highly dubious. Many open-source text-to-image models, most notably Stable Diffusion, are trained on LAION-5B, a large open-source data set compiled by scraping content, including images, from the internet. But the internet lacks gender-representative data sets and is littered with mis- and disinformation and with xenophobic and sexist content. This means that, without the necessary filters and mitigation in place, generative AI tools are being trained on and shaped by flawed, sometimes unethical, data. The new tools exhibit the same inequitable, racist and sexist biases as their source material.

As I have written in several previous articles, the digital gender divide is real: women have less access to technology and spend less time online than men. They are widely underrepresented in the tech sector and in the data found on the internet. And women are the principal victims of online hate, disinformation and algorithmic biases. Indeed, the online experiences of women, especially women of colour, mirror historical and existing inequalities. Yet tech companies have shown a consistent reluctance to build systems that do not harm women.

In a 2023 investigation for The Guardian, New York University journalism professor Hilke Schellmann showed how the algorithmic biases of social media arbitrarily suppress certain content about women. On Instagram, for example, a photo of a woman wearing yoga pants and showing a little bit of skin may be “shadowbanned” — not removed, but restricted in its reach or shareability because an algorithm ranks it as “too racy.” Yet an image of a shirtless man is not scored the same way.
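
Audits of this kind are possible because moderation systems return a graded “raciness” score for each image, which researchers can compare across otherwise similar photos of women and men. Below is a rough sketch of the method, using Google Cloud Vision’s SafeSearch annotation as a stand-in scorer; the image file names are hypothetical, and Instagram’s own ranking system is proprietary and not what is called here:

# A rough sketch of a raciness-scoring audit. The image files are
# hypothetical; Google Cloud Vision's SafeSearch stands in for the kind
# of scorer a platform might use.
from google.cloud import vision

LIKELIHOODS = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
               "POSSIBLE", "LIKELY", "VERY_LIKELY"]

client = vision.ImageAnnotatorClient()  # assumes credentials are configured

for path in ["woman_in_yoga_pants.jpg", "shirtless_man.jpg"]:
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    safe = client.safe_search_detection(image=image).safe_search_annotation
    # Comparing the "racy" likelihood across comparable photos of women
    # and men is what exposes skewed scoring.
    print(path, "racy:", LIKELIHOODS[safe.racy])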


Replicating and Exacerbating Gender Stereotypes

Consider some further examples. In an ongoing research experiment, the United Nations Development Programme (UNDP) Accelerator Lab tested how two AI image generators represent women in the STEM (science, technology, engineering and mathematics) fields. When the researchers asked DALL-E and Stable Diffusion, a product of Stability AI, for visual representations of an engineer, a scientist and an IT expert, between 75 and 100 percent of the generated results portrayed men.

Perhaps surprisingly, OpenAI acknowledges that DALL-E replicates stereotypes. The prompt “lawyer,” for example, disproportionately produces images of people who look like older Caucasian men in Western dress. The prompt “nurse” tends to produce images of people who look female, and the term “flight attendant” tends to generate images of Asian women. As Gabriela Ramos, assistant director-general for the social and human sciences at the UN Educational, Scientific and Cultural Organization (UNESCO), wrote for the World Economic Forum, “these systems replicate patterns of gender bias in ways that can exacerbate the current gender divide.” As visual AI becomes part of our lives, there is a real risk that these technologies will entrench gender stereotypes.
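
A probe in the spirit of the UNDP experiment is simple to reproduce in outline: generate a batch of images for each occupation prompt, then have human reviewers tally how each batch depicts gender. A minimal sketch, again assuming the open-source Stable Diffusion checkpoint via diffusers (the prompts and sample size are illustrative):

# A sketch of an occupational-stereotype probe. Prompts, sample size and
# checkpoint are illustrative; labelling is left to human reviewers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

occupations = ["an engineer", "a scientist", "an IT expert", "a nurse"]
samples_per_prompt = 20  # small, illustrative batch

for job in occupations:
    for i in range(samples_per_prompt):
        image = pipe(f"a photo of {job}").images[0]
        image.save(f"{job.replace(' ', '_')}_{i:02d}.png")

# Reviewers then label each saved image and compare the gender breakdown
# across occupations to surface skews like those the UNDP reported.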

Hypersexualized Content

That the internet is filled with images of barely dressed or naked women means that AI image generators not only replicate these stereotypes, but also create hypersexualized images of women.

In 2022, Melissa Heikkilä, a senior reporter covering AI at MIT Technology Review, tested Lensa, an avatar-generating app that turns selfies into avatars using Stable Diffusion. When she tried to create avatars of herself, she was met with a collection of predominantly nude or skimpily dressed and “cartoonishly pornified” images that looked nothing like her.

By contrast, the avatars of Heikkilä’s male colleagues were fully dressed and “got to be astronauts, explorers, and inventors.” Worse, Heikkilä, who is Asian American, received fetishistic avatars of “generic Asian women” modelled on anime or video-game characters, whereas her white female colleague “got significantly fewer sexualized images, with only a couple of nudes and hints of cleavage.” Heikkilä’s experience displays both the sexist and the racist biases of some generative AI tools.

Deepfakes and Porn

One of the biggest concerns about generative AI is its capacity to generate disinformation — which is particularly worrying when it comes to visual content, because it can so easily fool us. Deepfakes are not new, but their widespread availability and increasing realism should be cause for concern.

In January, Twitter user @mileszim shared a tweet featuring young women at a party — an image that, at first sight, seemed quite ordinary. The catch: these women do not exist; they were generated entirely with Midjourney. While the account holder’s intentions were not nefarious, the capacity of deepfake technology to create such images can cause great harm in the hands of bad actors. Earlier this year, for example, several Reddit users were tricked into buying realistic nude images of an AI-generated figure named “Claudia,” believing them to depict a real person. The culprits were quickly identified, but one can imagine such a scam on a far larger scale, including through exploitative conversational video chatbots that masquerade as real women.

A 2019 report by Deeptrace Labs found that of 15,000 deepfake videos identified online, an astonishing 96 percent were non-consensual pornographic content featuring the swapped-in faces of women. Because these systems are built on images scraped from our heavily surveilled online lives, anyone can become a victim. But women are the main targets. Pornographic deepfakes are already being used against women, in particular celebrities. They have also been used against politicians and journalists, such as Rana Ayyub, in order to silence, humiliate or blackmail them.

In May 2023, Bloomberg reported that child predators are exploiting generative AI to produce images of child sexual abuse. As these technologies become widely available, we can only expect such abuse and criminality to worsen.

Continuous Tech Industry Failures

Advocates for ethical technology, such as Meredith Whittaker and Timnit Gebru, observe the present AI scare with some irony; they have been raising alarms about AI harms against women and racialized groups for years. In an interview with Slate, Whittaker, co-founder of the AI Now Institute at New York University and president of the US-based Signal Foundation, identifies what has changed in the past year: technologists such as Hinton are now envisioning a future in which AI tools may impact not “simply” women, Black people and low-wage workers, but also the privileged.

As long as the industry fails to involve everyone impacted by AI in shaping its products, this problem will worsen. As with previous waves of technology, the gender biases in generative AI are caused by the exclusion of women “at every stage of the AI life cycle,” as Gabriela Ramos argued in her article for the World Economic Forum. The problem, at its most basic level, is that the field remains male-dominated. The founders of OpenAI, for example, are men, among them Sam Altman and Elon Musk; the current eight-person executive team includes just one woman, Mira Murati. It is an industry-wide problem: globally, only 22 percent of AI professionals are women, leaving them starkly underrepresented.

This raises key questions. Can these technologies be designed with the female experience in mind? How can data be more equitably curated? Who should decide on source content? And how can harms against certain groups, including women, be mitigated using filters or gender-affirmative practices? It’s not encouraging that key tech leaders appear to be aware of these harms yet have deployed the tools regardless. OpenAI, for example, claims: “We develop risk mitigation tools, best practices for responsible use, and monitor our platforms for misuse.” But shouldn’t this have been worked out before DALL-E 2 was made available to the public?

We should also carefully parse the calls for government regulation. In an op-ed for The Guardian, author Stephen Marche argues that “Silicon Valley uses apocalypse for marketing purposes: they tell you their tech is going to end the world to show you how important they are” and how their product might change the world. This draws attention to the product while also giving policy makers and the general public the impression that tech companies are concerned with values over profit. But as Gebru, the executive director of DAIR, the Distributed AI Research Institute, told The Guardian late in May, “It is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”

Citizens everywhere should welcome efforts by governments and regional organizations such as the European Union to develop clear AI regulations, something they failed to do for so many earlier technologies, including spyware and social media. At the same time, we should remember that “slowing down” or “pausing” AI innovation for a few months will not reverse societal inequalities. Because generative AI replicates the ills of gender, racial, religious and ethnic bias, we must address the sources of the problem, not simply its transmission.

Generative AI can work for women. For example, an Indian artist, Sk Md Abu Sahid, reimagined the world’s richest men as women using Midjourney, asking us to imagine a world in which the corporate, political and tech sectors were led by more women. Similarly, the UNDP’s “Digital Imaginings: Women’s CampAIgn for equality” employed AI-generated art to portray a world in which women have more opportunities and power. That world is one technologists can and should aim for.


The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

ABOUT THE AUTHOR
Marie Lamensch is the project coordinator at the Montreal Institute for Genocide and Human Rights Studies at Concordia University.
