Saturday, March 22, 2025

Is AI sexist? How artificial images are perpetuating gender bias in reality

AI is increasingly a feature of everyday life. But with its models trained on often outdated data, and the field still dominated by male researchers, AI's growing influence on society is also perpetuating sexist stereotypes.

Images generated by AI tool Dall-E. When asked to generate an image of someone who practises medicine or runs a restaurant or business, it suggests men. When asked to generate images of someone who works as a nurse, home help or domestic assistant, it suggests women. © ChatGPT

A simple request to an image-generating artificial intelligence (AI) tool such as Stable Diffusion or Dall-E is all it takes to demonstrate this.

When given requests such as "generate the image of someone who runs a company" or "someone who runs a big restaurant" or "someone working in medicine", what appears, each time, is the image of a white man.

When these programmes are asked to generate an image of "someone who works as a nurse" or "a domestic worker" or "a home help", the images are of women.

As part of a Unesco study published last year, researchers asked various generative AI platforms to write stories featuring characters of different genders, sexualities and origins. The results showed that stories about "people from minority cultures or women were often more repetitive and based on stereotypes".

The report showed a tendency to attribute more prestigious and professional jobs to men – teacher or doctor, for example – while often relegating women to traditionally undervalued or more controversial roles, such as domestic worker, cook or prostitute.


The large language models (LLMs) behind these tools also tend to associate female names with words such as "home", "family" or "children", while male names are more closely associated with the words "business", "salary" and "career".
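This kind of association can be measured. As a purely illustrative sketch (not taken from the Unesco study), researchers compare how close word vectors sit to one another; the toy three-dimensional vectors below are invented for the example, whereas real audits use embeddings learned from billions of words:

```python
# Toy illustration of word-association bias in embeddings.
# The vectors are invented; real models have hundreds of dimensions
# learned from text, which is where skewed associations come from.
import math

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings reflecting skewed training text.
emb = {
    "she":    [0.9, 0.1, 0.2],
    "he":     [0.1, 0.9, 0.2],
    "home":   [0.8, 0.2, 0.3],
    "career": [0.2, 0.8, 0.3],
}

# If the training data links "she" with "home" more than it links
# "he" with "home", the similarity gap is positive.
bias_gap = cosine(emb["she"], emb["home"]) - cosine(emb["he"], emb["home"])
print(round(bias_gap, 3))
```

In a model trained on unbiased text, that gap would sit near zero; a persistent positive gap is the numerical trace of the stereotype the article describes.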

As such, these models demonstrate "unequivocal prejudice against women," warned Unesco in a press release.

"Discrimination in the real world is not only reflected in the digital sphere, it is also amplified there," said Tawfik Jelassi, Unesco's assistant director-general for communication and information.

A mirror of society

To create content, generative AI is "trained on billions of documents produced at a certain time," explained Justine Cassell, director of research at France's National Institute for Research in Digital Science and Technology (Inria). 

She explained that such documents, depending on when they were produced, often contain dated and discriminatory stereotypes, with the result that AI trained on them then conveys and reiterates these.

This is the case with image and text generators, but also for facial recognition programmes, which feed off millions of existing photos.


In 2019, a US federal agency warned that some facial recognition systems were having difficulty correctly identifying women, particularly those of African-American origin – which has consequences for public safety, law enforcement and individual freedoms.

This is also an issue in the world of work, where AI is increasingly being used by HR managers to assist with recruitment.

In 2018, news agency Reuters reported that Amazon had to abandon an AI recruitment tool. The reason? The system did not evaluate candidates in a gender-neutral manner, as it was based on data accumulated from CVs submitted to the company – mainly by men. This led it to reject female applicants.

Diversifying data

AI is first and foremost a question of data. And if this data is incomplete or only represents one category of people, or if it contains conscious or unconscious bias, AI programmes will still use it – and broadcast it on a massive scale.

"It is vital that the data used to drive the systems is diverse and represents all genders, races and communities," said Zinnya del Villar, director of data, technology and innovation at the Data-Pop Alliance think tank. 

In an interview with the UN Women agency, del Villar explained: "It is necessary to select data that reflects different social backgrounds, cultures and roles, while eliminating historical prejudices, such as those that associate certain jobs or character traits with one gender."

One fundamental problem, according to Cassell at Inria, is that "most developers today are still predominantly white men, who may not be as sensitive to the presence of bias".


Because they are not subject to the prejudices suffered by women and minorities, male designers are often less aware of the problem – and 88 percent of algorithms are built by men. In addition to raising awareness of bias, researchers are urging companies in the sector to employ more diverse engineering teams.

"We need a lot more women coding AI models, because they're the ones who will be asking the question: doesn't this data contain abnormal behaviour or behaviour that we shouldn't reproduce in the future?" Nelly Chatue-Diop, CEO and co-founder of the start-up Ejara, told RFI.

Under-representation of women

Currently, women account for just 22 percent of people working in artificial intelligence worldwide, according to the World Economic Forum.

The European AI barometer carried out by Join Forces & Dare (JFD – formerly Digital Women's Day) reveals that of the companies surveyed with an AI manager on their executive committee, only 29 percent of these managers are women. Globally, women account for 12 percent of AI researchers.

"The lack of diversity in the development of AI reinforces biases, perpetuates stereotypes and slows down innovation," warns the report.

It's an observation echoed by Unesco, which posits that the under-representation of women in the field, and in management positions, "leads to the creation of socio-technical systems that do not take into account the diverse needs and perspectives of all genders" and reinforces "disparities between men and women".


Both organisations have emphasised the need to ensure that girls are made aware of and guided towards STEM (science, technology, engineering and mathematics) subjects from a young age – areas which are still too often the preserve of men, and in which high-achieving women are often invisible. 

With AI applications increasingly used by both the general public and businesses, "they have the power to shape the perception of millions of people," noted Audrey Azoulay, director-general of Unesco. "The presence of even the slightest gender bias in their content can significantly increase inequalities in the real world."

Unesco, alongside numerous specialists in the sector, is calling for mechanisms to be put in place on an international level to regulate the sector within an ethical framework.

But this seems a long way off. The United States, with its colossal weight in this field, did not sign the Paris Summit declaration on AI, issued last month. Nor did the United Kingdom.

While the UK government said the statement had not gone far enough in terms of addressing global governance of AI, US vice-president JD Vance criticised what he called Europe's "excessive regulation" of the technology.

This article has been adapted from the original version in French.
