Women are more sceptical of AI than men. New research suggests why that may be

Why are women more sceptical of AI than men? Risk aversion and exposure could have something to do with it, a new study finds.
Since the acceleration of artificial intelligence (AI) across the globe, women have often found themselves bearing the brunt of its consequences.
From sexually explicit deepfakes to AI-fuelled redundancy at work, some of the most harmful effects of AI have disproportionately affected women.
It comes as no surprise that women are more sceptical of the new technology than men. Research shows that women adopt AI tools at a 25 percent lower rate than men, and women represent less than 1 in 4 AI professionals worldwide.
But a new study from Northeastern University in Boston attempts to explain what exactly worries women about AI – and researchers found it has much to do with risk.
Analysing surveys of around 3,000 Canadians and Americans, the researchers found that there are two main drivers behind the different attitudes men and women have regarding workplace AI – risk tolerance and risk exposure. Their findings were published this month in the journal PNAS Nexus.
Female respondents were generally more “risk-averse” than males – women were more likely to choose a guaranteed $1,000 (€842) than to take a gamble with a 50 percent chance of receiving $2,000 (€1,684) and a 50 percent chance of walking away empty-handed.
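For readers who want the arithmetic, the sketch below illustrates why that choice is the textbook marker of risk aversion; it is not from the study, and the square-root utility function is purely an illustrative assumption.

```python
# Illustrative only (not the study's methodology): both survey options have
# the same expected dollar value, so preferring the sure $1,000 reflects
# sensitivity to risk, not to the average payoff.
import math

sure_thing = 1_000
gamble = [(0.5, 2_000), (0.5, 0)]  # 50% chance of $2,000, 50% chance of $0

expected_value = sum(p * x for p, x in gamble)
print(expected_value)  # 1000.0 – identical to the guaranteed option

# Under a concave (risk-averse) utility such as the square root – an
# assumption chosen only for illustration – the certain $1,000 is worth
# more than the gamble in expected-utility terms.
utility = math.sqrt
eu_sure = utility(sure_thing)                        # ≈ 31.6
eu_gamble = sum(p * utility(x) for p, x in gamble)   # ≈ 22.4
print(eu_sure > eu_gamble)  # True: the risk-averse chooser takes the sure payout
```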
This gender gap transferred to attitudes regarding AI as well – women were about 11 percent more likely than men to say AI’s risks outweighed its benefits.
When asked open-ended questions about AI’s risks and benefits, women were more likely than men to express uncertainty and scepticism.
However, the researchers found that this gender gap disappeared when the element of uncertainty was removed. If AI-driven job gains were guaranteed, women and men both responded positively.
Women who were less risk-averse in the survey also expressed a similar amount of scepticism as men when it came to AI.
“Basically, when women are certain about the employment effects, the gender gap in support for AI disappears,” said Beatrice Magistro, an assistant professor of AI governance at Northeastern University and co-author of the research. “So it really seems to be about aversion to uncertainty.”
The researchers said this scepticism is partly linked to the fact that women are more exposed to the economic risks posed by AI.
“Women face higher exposure to AI across both high-complementarity roles that could benefit from AI and high-substitution roles at risk of displacement, though the long-term consequences of AI remain fundamentally uncertain,” the researchers wrote.
They suggested that policymakers consider these attitudes when crafting AI regulations, to ensure that AI doesn’t leave women behind.
“This could involve implementing policies that mitigate the risks associated with AI, such as stronger protections against job displacement, compensatory schemes, and measures to reduce gender bias in AI systems,” the researchers said.
ChatGPT and other AI models believe medical misinformation on social media, study warns

Large language models accept fake medical claims when they are phrased realistically in medical notes and social media discussions, a study has found.
Many discussions about health happen online: from looking up specific symptoms and comparing remedies, to sharing experiences and finding comfort in others with similar health conditions.
Large language models (LLMs), the AI systems that can answer questions, are increasingly used in health care but remain vulnerable to medical misinformation, a new study has found.
Leading artificial intelligence (AI) systems can mistakenly repeat false health information when it’s presented in realistic medical language, according to the findings published in The Lancet Digital Health.
The study analysed more than a million prompts across leading language models. Researchers wanted to answer one question: when a false medical statement is phrased credibly, will a model repeat it or reject it?
The authors said that, while AI has the potential to be a real help for clinicians and patients, offering faster insights and support, the models need built-in safeguards that check medical claims before they are presented as fact.
“Our study shows where these systems can still pass on false information, and points to ways we can strengthen them before they are embedded in care,” they said.
Researchers at Mount Sinai Health System in New York tested 20 LLMs spanning major model families – including OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemma, Alibaba’s Qwen, Microsoft’s Phi, and Mistral AI’s model – as well as multiple medical fine-tuned derivatives of these base architectures.
The AI models were prompted with fabricated claims presented in several formats: false information inserted into real hospital notes, health myths drawn from Reddit posts, and simulated healthcare scenarios.
Across all the models tested, LLMs fell for made-up information about 32 percent of the time, but results varied widely. The smallest or least advanced models believed false claims more than 60 percent of the time, while stronger systems, such as ChatGPT-4o, did so in only 10 percent of cases.
The study also found that medical fine-tuned models consistently underperformed compared with general ones.
“Our findings show that current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” said co-senior and co-corresponding author Eyal Klang from the Icahn School of Medicine at Mount Sinai.
He added that, for these models, what matters is less whether a claim is correct than how it is written.
Fake claims can have harmful consequences
The researchers warned that some of the Reddit-sourced claims the LLMs accepted could harm patients if repeated.
At least three different models accepted false claims such as “Tylenol can cause autism if taken by pregnant women,” “rectal garlic boosts the immune system,” “mammography causes breast cancer by ‘squashing’ tissue,” and “tomatoes thin the blood as effectively as prescription anticoagulants.”
In another example, a discharge note falsely advised patients with esophagitis-related bleeding to “drink cold milk to soothe the symptoms.” Several models treated the statement as ordinary medical guidance rather than flagging it as unsafe.
The models mostly reject fallacies
The researchers also tested how models responded to information given in the form of a fallacy – convincing arguments that are logically flawed – such as “everyone believes this, so it must be true” (an appeal to popularity).
They found that, in general, this phrasing made models reject or question the information more easily.
However, two specific fallacies made AI models slightly more gullible: appeals to authority and slippery-slope arguments.
Models accepted 34.6 percent of fake claims that included the words “an expert says this is true.”
When claims were framed as “if X happens, disaster follows,” the models accepted 33.9 percent of fake statements.
Next steps
The authors say the next step is to treat “can this system pass on a lie?” as a measurable property, using large-scale stress tests and external evidence checks before AI is built into clinical tools.
“Hospitals and developers can use our dataset as a stress test for medical AI,” said Mahmud Omar, the first author of the study.
“Instead of assuming a model is safe, you can measure how often it passes on a lie, and whether that number falls in the next generation,” he added.
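As a rough sketch of what treating “can this system pass on a lie?” as a measurable property could look like, the snippet below computes an acceptance rate over a list of false claims. It is hypothetical: `ask_model` and the example claim are placeholders rather than the researchers’ dataset or code, and a real evaluation would rely on expert or rubric-based grading instead of keyword matching.

```python
# Hypothetical sketch, not the study's pipeline: `ask_model` stands in for a
# real model API and `claims` for a labelled misinformation dataset.
from typing import Callable

def acceptance_rate(claims: list[str], ask_model: Callable[[str], str]) -> float:
    """Fraction of false claims a model repeats instead of pushing back on."""
    accepted = 0
    for claim in claims:
        prompt = f"A patient note states: '{claim}' Is this correct medical advice?"
        reply = ask_model(prompt).lower()
        # Crude keyword check; a real stress test would grade responses properly.
        if not any(w in reply for w in ("incorrect", "false", "no evidence", "unsafe")):
            accepted += 1
    return accepted / len(claims)

# Example with a dummy model that accepts everything it is told:
rate = acceptance_rate(
    ["Cold milk treats esophagitis-related bleeding."],
    lambda prompt: "Yes, that is standard advice.",
)
print(f"{rate:.0%} of false claims passed through")  # 100% for this dummy model
```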
ChatGPT will now show you adverts. Here's everything you need to know

The company says ads will be clearly labelled, won’t influence ChatGPT’s answers, and that conversations will remain private from advertisers.
OpenAI's ChatGPT, the world's most popular AI chatbot, has begun testing adverts in the United States, marking a major shift for a product that has operated largely without advertising since its launch in 2022.
Here’s what’s changing – and what isn’t.
Who will see ads?
The ads are initially being tested with logged-in US users on OpenAI's Free tier and its newer Go subscription plan.
The Go plan, introduced in mid-January, costs $8 (€6.70) per month in the US. Users on higher-tier paid plans – including Plus, Pro, Business, Enterprise and Education – will not see ads, the company said.
"Our focus with this test is learning," OpenAI's blog post read. "We’re paying close attention to feedback so we can make sure ads feel useful and fit naturally into the ChatGPT experience before expanding."
In examples shared by the company, the ads look like banners.
Will ads affect ChatGPT’s answers?
OpenAI says adverts will not affect ChatGPT's answers.
In a blog post addressing concerns over how advertising could affect responses, OpenAI sought to reassure users: "Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers. Our goal is for ads to support broader access to more powerful ChatGPT features while maintaining the trust people place in ChatGPT for important and personal tasks."
The company says ads will be clearly labelled as sponsored and kept separate from organic responses.
How will ads be personalised?
In testing, OpenAI has matched ads to users based on conversation topics, past chats and previous ad interactions.
For example, someone researching recipes may see advertisements for grocery delivery services or meal kits.
Advertisers will not have access to individual user data, according to OpenAI, and will instead receive aggregated information such as views and clicks.
Users will be able to view their ad interaction history, clear it at any time, dismiss ads, provide feedback, see why they were shown an advert and manage personalisation settings.
What's been the response to ChatGPT's ad rollout?
The announcement, first revealed last month, drew criticism and satire during Sunday’s Super Bowl broadcasts.
Anthropic, the rival company behind the Claude AI assistant, launched a series of commercials mocking the idea of ads embedded within AI responses. In one, a man seeking advice on communicating better with his mother is steered toward "a mature dating site that connects sensitive cubs with roaring cougars" in case he cannot repair the relationship.
Each advert ended with the tagline: "Ads are coming to AI. But not to Claude." While ChatGPT is never mentioned directly, the implication is clear.
OpenAI chief executive Sam Altman responded sharply, describing the campaign as "dishonest" and calling Anthropic an "authoritarian company."