Wednesday, October 08, 2025

 

Shipping Backup as Belgian Pilots Stage Work Slowdown Over Pension Reform

Belgian pilot boat
Ships are waiting as Belgian pilots stage a slowdown to protest pension reforms (Maritime Services and Coast Agency (MDK))

Published Oct 7, 2025 12:37 PM by The Maritime Executive



The ports of Antwerp, Zeebrugge, and Ghent are all reporting building delays and a backlog of ships after the Belgian sea pilots’ association began a work slowdown to protest the government’s proposed pension reforms. It marks the latest in a series of strikes across Belgium during 2025 over the government's plans for financial reforms.

The pilots’ association says talks with the government stalled after a provisional deal was reached over the summer. Federal Pensions Minister Jan Jambon and Flemish Ports Minister Annick De Ridder said they were proceeding under the earlier agreement to conclude talks on a framework agreement by the end of November.

The Professional Association of Pilots issued its notice of a job action due to start on Sunday, October 5, saying the government talks were not progressing. The strike notice called for a work-to-rule action whereby pilots would limit their working hours to between 0800 and 1700 daily and maximize rest periods and office work. According to the Port of Antwerp Bruges, the reality is that pilots are only available starting at 1000 at the earliest and work stops by 1700.

A port spokesperson told the Belgian News Agency the impact would be “significant.” Belgium has approximately 300 maritime pilots responsible for the movement of ocean-going and inland shipping. Reports indicate that younger pilots could face up to a 45 percent decrease in their pensions under the proposed reforms. The association is also angered that pilots’ pensions are being treated differently from those of other salaried workers.

“This action will cause serious disruption to the resumption of shipping to and from Antwerp and Zeebrugge, with severe disruptions to arrivals and departures in the coming days,” the port warned.

Pilot services were suspended on Sunday due to weather conditions, but the impact of the work slowdown began to emerge on Monday, October 6. 

On Monday morning, reports said 27 ships were waiting in the North Sea for pilots so that they could proceed to Antwerp. An additional 24 ships were waiting at the dock in Antwerp for pilots so that they could depart.

The latest figures, as of Tuesday afternoon, October 7, show a total of 70 ships waiting. The number in the North Sea has grown to 54, with 44 heading to Antwerp and smaller numbers bound for Zeebrugge and Ghent. A total of 15 ships were reported to be waiting to depart Antwerp.

The port authority reports that there are 54 ships in Antwerp for which no pilot has been scheduled, and 32 vessels that are experiencing delays.

The ministers are urging the pilots to stop their action and return to the negotiations. The association said in its statement that it “regretted the situation and its impact on the nautical sector.” It says, however, that it is waiting for a political response to its request for further negotiations.

A similar strike in the spring interrupted shipping. Farmers and truckers have also staged strikes blocking the ports in ongoing reactions to the proposed financial reforms. Belgium formed a coalition government at the beginning of 2025, following the June 2024 elections and months of partisan squabbling driven in part by the country’s deficit spending. The government has called for reforms to social programs to address the financial challenges.


‘Stablecoins are going to grow dramatically’: cryptocurrency expert

By Joshua Santos
Published: October 02, 2025 

Ronit Ghose, global head of future of finance at Citi Institute, joins BNN Bloomberg to discuss banking in the age of stablecoins.

A global investment bank expects stablecoin issuance to grow dramatically, reaching $1.9 trillion by 2030, up from a previous estimate of $1.6 trillion, amid regulatory changes.

Citi Institute projects rapid growth in stablecoins, driven by digitally native companies and global demand for the American dollar. In a bullish case, Citi anticipates issuance of $4 trillion, while a bear case sees the asset class hovering around $1 trillion.

“We argue in our report that stablecoins are going to grow dramatically over the next five years, driven by the continued growth in the crypto ecosystem, but also existing corporate clients, merchants, particularly the smaller and medium sized ones looking to do cross border finance quicker and faster,” Ronit Ghose, Citi’s global head of future of finance, told BNNBloomberg.ca in a Thursday interview.

A stablecoin is a cryptocurrency that runs on a blockchain and is pegged to currencies or commodities such as the U.S. dollar, euro or gold. It differs from other cryptocurrencies, such as Bitcoin, Ether and XRP, which fluctuate based on supply and demand.

According to a report from McKinsey and Company, stablecoins can help smaller companies and merchants in emerging markets by addressing significant cross-border payment challenges. Currently, businesses face problems with international transactions in terms of speed and cost. Stablecoins offer a solution by running on blockchain technology, which can make cross-border payments faster and less costly.

High hopes follow regulatory changes in the United States, where President Donald Trump signed the GENIUS Act into law. The act establishes federal rules and guidelines for cryptocurrency tokens pegged to traditional currencies such as the U.S. dollar.

“This all means that for traditional financial institutions such as the banks, the payment companies and others, we can now get more involved in the space, and so can our clients,” said Ghose. “That’s why it’s a big deal.”

The European Union has enacted its own framework, the Markets in Crypto-Assets regulation (MiCA), established in 2023 to regulate stablecoins. Member states can issue licences, dubbed “passports,” that allow crypto companies to operate throughout the 27-nation bloc.

A critic of stablecoins, however, warns that weak oversight could trigger multi-billion-dollar bailouts. Jean Tirole, a Nobel Prize-winning economist, said a loss of confidence in reserves could lead to a financial crisis and trigger a rush of withdrawals, undermining the peg to traditional assets.

Ghose is not worried, as governments around the world establish frameworks requiring digital assets to be backed by hard reserves.

“If you issue a stable coin in the U.S. under the GENIUS Act, or in Europe under MiCA, you have to follow the rules of MiCA in Europe or the GENIUS act in the U.S. and other jurisdictions around the world,” said Ghose.

“Those rules will require clear reserve assets, clear backing. In the U.S., there’ll be largely treasury bills. In Europe, bank deposits.”

Tirole argues there is the potential for issuers to invest in riskier assets to chase higher returns, eroding confidence in their reserves and threatening the peg to real-world assets.

“In the past, it was a bit of a wild west,” said Ghose. “There were no clear rules. You were basically running on the back of what’s called in the crypto industry, “trust me, bro”, and that’s where industry came out of, the stablecoin industry. But this is changing really, really fast.”

Citi said stablecoins have grown to a $280 billion market capitalization from $200 billion this year alone.



Joshua Santos

Journalist, BNNBloomberg.ca
Johnson & Johnson ordered to pay US$966 million in talc cancer case after jury finds company liable

By Reuters
October 07, 2025 

The Johnson & Johnson logo appears above a trading post on the floor of the New York Stock Exchange. (AP Photo/Richard Drew, file)

A Los Angeles jury ordered Johnson & Johnson to pay US$966 million to the family of a woman who died from mesothelioma, finding the company liable in the latest trial alleging its talc products cause cancer.

The family of Mae Moore, a California resident who died at age 88 in 2021, sued the company the same year, claiming J&J’s talc baby powder products contained asbestos fibers that caused her rare cancer. The jury late on Monday ordered J&J to pay $16 million in compensatory damages and $950 million in punitive damages, according to court filings.

The verdict could be reduced on appeal, as the U.S. Supreme Court has found that punitive damages should generally be no more than nine times compensatory damages; nine times the $16 million compensatory award here would be about $144 million, well below the $950 million the jury imposed.

Erik Haas, Johnson & Johnson’s worldwide vice president of litigation, said in a statement that the company plans to immediately appeal, calling the verdict “egregious and unconstitutional.”

“The plaintiff lawyers in this Moore case based their arguments on ‘junk science’ that never should have been presented to the jury,” Haas said.


The company has said its products are safe, do not contain asbestos, and do not cause cancer. J&J stopped selling talc-based baby powder in the U.S. in 2020, switching to a cornstarch product. Mesothelioma has been linked to asbestos exposure.

Trey Branham, one of the attorneys representing Moore’s family, said after the verdict that his team is “hopeful that Johnson & Johnson will finally accept responsibility for these senseless deaths.”

J&J is facing lawsuits from more than 67,000 plaintiffs who say they were diagnosed with cancer after using baby powder and other talc products, according to court filings. The number of lawsuits alleging talc caused mesothelioma is a small subset of these cases, with the vast majority involving ovarian cancer claims.

J&J has sought to resolve the litigation through bankruptcy, a proposal that has been rejected three times by federal courts.

Lawsuits alleging talc caused mesothelioma were not part of the last bankruptcy proposal. The company has previously settled some of those claims but has not struck a nationwide settlement, so many lawsuits over mesothelioma have proceeded to trial in state courts in recent months.

In the past year, J&J has been hit with several substantial verdicts in mesothelioma cases, but Monday’s is among the largest. The company has won some of the mesothelioma trials, including last week in South Carolina, where a jury found J&J not liable.

The company has been successful in reducing some of the awards on appeal, including in one Oregon case where a state judge granted J&J’s motion to throw out a $260 million verdict and hold a new trial.

(Reporting by Diana Novak Jones; Editing by Alexia Garamfalvi, Rod Nickel and Bill Berkrot)




Experts say Ottawa’s new AI task force is skewed towards industry


By The Canadian Press
 October 08, 2025 

Minister of Artificial Intelligence and Digital Innovation Evan Solomon, left, shakes hands with Aidan Gomez of Cohere after participating in a talk at the All In AI conference in Montreal on Thursday, Sept. 25, 2025. THE CANADIAN PRESS/Christopher Katsarov

OTTAWA — The Liberal government has given its new AI “task force” until the end of the month to fast-track changes to the national artificial intelligence strategy — a plan that critics say leans too much on the perspective of industry and the tech sector.

Teresa Scassa, a law professor at the University of Ottawa and Canada research chair in information law and policy, said the makeup of the 27-member task force is “skewed towards industry voices and the adoption of AI technologies.”

The risks posed by artificial intelligence to Canada’s culture, environment and workforce “deserve more attention in a national strategy,” Scassa said in an email.

Artificial Intelligence Minister Evan Solomon announced the task force last month and tasked it with a 30-day “national sprint” to draft recommendations for a “refreshed” AI strategy. Solomon said that new strategy will land later this year, nearly two years earlier than planned.

The group has been asked to look at various aspects of AI, including research, adoption, commercialization, investment, infrastructure, skills, and safety and security. The government is also holding a public consultation on its AI strategy.


Canada became the first country to launch a national AI strategy in 2017; it updated the strategy in 2022. Last year’s federal budget included an additional $2.4-billion investment in AI, the bulk of which goes to building up computing capabilities and technological infrastructure. Ottawa also launched an AI strategy for the federal public service earlier this year.

Joel Blit, an associate professor of economics at the University of Waterloo, said he has been encouraged by the government’s approach.

“I really like the urgency of it,” he said, adding that while a 30-day timeline for updating a national AI strategy is “almost unheard of,” the technology is moving fast and Canada isn’t keeping up.

Canada has “always struggled to adopt new technologies as quickly as some other countries,” Blit said.

A recent paper from the C.D. Howe Institute noted that while Canada ranks second globally in top-tier AI researchers and is “first in the G7 for per capita academic AI papers,” it ranked 20th among OECD countries on AI adoption in 2023.

Blit said the government hasn’t invested enough in AI literacy and education and called for AI literacy campaigns “in the same way that maybe 100 years ago we had… literacy campaigns for reading and writing.”

Luc Vinet is a physics professor at the Université de Montréal and CEO of IVADO, a research consortium focused on AI adoption. He said his reaction to the task force and Ottawa’s approach to AI was “quite positive overall.”

He suggested the government could focus on building up “national human infrastructure” in AI by linking up professionals in academia and industry.

“We have remarkable experts in AI, but they might not accompany a medical doctor who wants to adopt AI,” he said. “We have people in universities graduating with PhD, say in chemistry, biology, in economics, (who) still today do not really have much knowledge about AI.”

While Blit said he didn’t want to criticize the task force, he noted that its membership seems to be weighted toward industry representatives.

“Who is going to be advocating to make sure that every Canadian benefits from this, that we invest in education and in literacy and all the other things that we’re going to need?” he asked.


The next “big Canadian economic champion” might not be a big AI company, he suggested.

“It might be a nurse that encounters AI and finds a way to re-imagine health care around AI,” he added.

Scassa said in a recent online post that few of the task force members specialize in social science or studying the ethical dimensions of AI.

“There are no experts in labour and employment issues (which are top of mind for many Canadians these days), nor is there representation from those with expertise in the environmental issues we already know are raised by AI innovation,” she wrote.

Companies with representatives on the task force include generative AI developer Cohere, IT and business consulting company CGI, the Royal Bank of Canada, venture capital firm Inovia Capital, AI search company Coveo, cloud computing company Aptum, data storage company Vdura and crisis alert company Samdesk.

Among the academics on the task force are three professors of computer science, a dean of engineering, a professor of strategic management, a professor of medicine, and the founding director of a research centre for media, technology and democracy.

The group does have representatives from a public sector union, a tech sector group, a think tank, a new safe AI organization launched by AI pioneer Yoshua Bengio, and the First Nations Technology Council.

Only three of the more than two dozen task force members have been asked to work on safe AI systems and public trust, Scassa said.

She said she also has concerns about the government instructing task force members to consult their networks to develop recommendations. Scassa wrote that “sounds a lot like insider networking, which should frankly raise concerns. This does not lend itself to ensuring fair and appropriate representation of diverse voices.”

On Monday, a coalition representing the cultural sector told MPs on the Heritage committee it was disappointed it wasn’t represented on the task force, despite the threat AI poses to the sector.

Jennifer Pybus, assistant professor and Canada research chair in data, democracy and AI at York University, said she would have liked to see more civic partners or humanities-based scholars on the task force.

A spokesperson for Solomon said the task force “has a diverse group of folks from across Canada as well as across sectors.”

Pybus said she was still cautiously optimistic about the strategy, partly due to the government’s approach to digital infrastructure. She said the government is recognizing “they have to own the tools and set the rules for the digital age.”

In his speech announcing the task force, Solomon emphasized the principle of digital sovereignty, calling it “the most pressing policy and democratic issue of our time.”

Pybus pointed out that the “vast majority of Canadian AI compute and data storage capacity sits entirely with platforms that are owned by” U.S.-based companies like Amazon, Google and Microsoft.

The Canadian Press reported in September that since 2021, the federal government has spent almost $1.3 billion on cloud services provided by Amazon, Microsoft and Google. Some of those services were used for what the Department of National Defence described as “mission-critical applications that directly support operational readiness and national security.”

Even when AI companies have Canadian subsidiaries, Pybus said, “their governance is still in the U.S., which ultimately means that” legislation on managing that Canadian data “is being shaped by American companies and by the American government.”

Ottawa’s embrace of AI comes as many warn of a potential bubble in AI investment. Blit compared the situation to the dotcom bubble and crash of the early 2000s.

“That doesn’t mean that that technology wasn’t real. It doesn’t mean that that technology didn’t then transform a big part of the economy (or) society,” he said.

Blit said there is a certain amount of AI “hype” about, “but give it a decade and it’s not hype.”

This report by The Canadian Press was first published Oct. 7, 2025.

Anja Karadeglija, The Canadian Press

 

Hardware vulnerability allows attackers to hack AI training data



North Carolina State University





Researchers from NC State University have identified the first hardware vulnerability that allows attackers to compromise the data privacy of artificial intelligence (AI) users by exploiting the physical hardware on which AI is run.

“What we’ve discovered is an AI privacy attack,” says Joshua Kalyanapu, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Security attacks refer to stealing things actually stored somewhere in a system’s memory – such as stealing an AI model itself or stealing the hyperparameters of the model. That’s not what we found. Privacy attacks steal stuff not actually stored on the system, such as the data used to train the model and attributes of the data input to the model. These facts are leaked through the behavior of the AI model. What we found is the first vulnerability that allows successfully attacking AI privacy via hardware.”

The vulnerability is associated with “machine learning (ML) accelerators,” hardware components on computer chips that increase the performance of machine-learning models in AI systems while reducing the models’ power requirements. Machine learning refers to a subset of AI models that use algorithms to identify patterns in training data, then use those patterns to draw conclusions from new data.

Specifically, the vulnerability allows an attacker with access to a server that uses the ML accelerator to determine what data was used to train AI systems running on that server and leak other private information. The vulnerability – named GATEBLEED – works by monitoring the timing of software-level functions taking place on hardware, bypassing state-of-the-art malware detectors. The finding raises security concerns for AI users and liability concerns for AI companies.

“The goal of ML accelerators is to reduce the total cost of ownership by reducing the cost of machines that can train and run AI systems,” says Samira Mirbagher Ajorpaz, corresponding author of the paper and an assistant professor of electrical and computer engineering at NC State.

“These AI accelerators are being incorporated into general-purpose CPUs used in a wide variety of computers,” says Mirbagher Ajorpaz. “The idea is that these next-generation chips would be able to switch back and forth between running AI applications with on-core AI accelerators and executing general-purpose workloads on CPUs. Since this technology looks like it will be in widespread use, we wanted to investigate whether AI accelerators can create novel security vulnerabilities.”

For this study, the researchers focused on Intel’s Advanced Matrix Extensions, or AMX, which is an AI accelerator that was first incorporated into the 4th Generation Intel Xeon Scalable CPU.

“We found a vulnerability that effectively exploits the exact behaviors that make AI accelerators effective at speeding up the execution of AI functions while reducing energy use,” says Kalyanapu.

“Chips are designed in such a way that they power up different segments of the chip depending on their usage and demand to conserve energy,” says Darsh Asher, co-author of the paper and a Ph.D. student at NC State. “This phenomenon is known as power gating and is the root cause of this attack. Almost every major company implements power gating in different parts of their CPUs to gain a competitive advantage.”

“The processor powers different parts of on-chip accelerators depending on usage and demand; AI algorithms and accelerators may take shortcuts when they encounter data sets on which they were trained,” says Farshad Dizani, co-author of the paper and a Ph.D. student at NC State. “Powering up different parts of accelerators creates an observable timing channel for attackers. In other words, the behavior of the AI accelerator fluctuates in an identifiable way when it encounters data the AI was trained on versus data it was not trained on. These differences in timing create a novel privacy leakage for attackers who have not been granted direct access to privileged information.”

“So if you plug data into a server that uses an AI accelerator to run an AI system, we can tell whether the system was trained on that data by observing fluctuations in the AI accelerator usage,” says Azam Ghanbari, an author of the paper and a Ph.D. student at NC State. “And we found a way to monitor accelerator usage using a custom program that requires no permissions.”
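
The actual GATEBLEED attack monitors the accelerator’s power-gating behavior; purely as a hypothetical illustration of what a timing-only membership probe looks like in general, the sketch below times repeated forward passes and flags inputs that run unusually fast relative to a baseline measured on inputs known not to be in the training set. The `model` callable, trial count, and threshold are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch only: not the GATEBLEED exploit, which observes
# power-gating behavior in the AI accelerator. This toy version shows the
# general shape of a timing-only membership probe, assuming a callable
# `model` (e.g., a neural network in inference mode).
import time
import statistics

def median_latency(model, sample, trials=50):
    """Median wall-clock latency of a forward pass on one input."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        model(sample)                       # output is never inspected
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def likely_training_member(model, sample, baseline, margin=0.9):
    """Flag the sample if inference is noticeably faster than a baseline
    latency measured on inputs known not to be in the training set."""
    return median_latency(model, sample) < margin * baseline
```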

“In addition, this attack becomes more effective when the networks are deep,” says Asher. “The deeper the network is, the more vulnerable it becomes to this attack.”

“And traditional approaches to defend against attacks don’t appear to work as well against this vulnerability, because other attacks rely on outputs from the model or reading power consumption,” says Mirbagher Ajorpaz. “GATEBLEED does neither.

“Rather, GATEBLEED is the first vulnerability to exploit hardware to leak user data privacy by leveraging the interaction between AI execution and accelerator power-gating states,” Mirbagher Ajorpaz says. “Unlike software vulnerabilities, hardware flaws cannot simply be patched with an update. Effective mitigation requires hardware redesign, which takes years to propagate into new CPUs. In the meantime, microcode updates or operating system (OS)-level defenses impose heavy performance slowdowns or increased power consumption, both of which are unacceptable in production AI deployments.

“Moreover, because hardware sits beneath the OS, hypervisor, and application stack, a hardware attack like GATEBLEED can undermine all higher-level privacy guarantees – regardless of encryption, sandboxing, or privilege separation,” Mirbagher Ajorpaz says. “Hardware vulnerabilities thus open a fundamentally new channel for AI user data privacy leakage and it bypasses all existing defenses designed for AI inference attacks.”

The ability to identify the data an AI system was trained on raises a number of concerns for both AI users and AI companies.

“For one thing, if you know what data an AI system was trained on, this opens the door to a range of adversarial attacks and other security concerns,” Mirbagher Ajorpaz says. “In addition, this could also create liability for companies if the vulnerability is used to demonstrate that a company trained its systems on data it did not have the right to use.”

The vulnerability can also be used to give attackers additional information about how an AI system was trained.

“Mixtures of Experts (MoEs), where AI systems draw on multiple networks called ‘experts,’ are becoming the next AI architecture – especially with new natural language processing models,” Mirbagher Ajorpaz says. “The fact that GATEBLEED reveals which experts responded to the user query means that this vulnerability leaks sensitive private information. GATEBLEED shows for the first time that MoE execution can leave a footprint in hardware that can be extracted. We found a dozen such vulnerabilities on the deployed and popular AI codes and modern AI agent designs across popular machine-learning libraries used by a variety of AI systems (HuggingFace, PyTorch, TensorFlow, etc.). This raises concerns regarding the extent to which hardware design decisions can affect our everyday privacy, particularly with more and more AI applications and AI agents being deployed.

“The work in this paper is a proof-of-concept finding, demonstrating that this sort of vulnerability is real and can be exploited even if you do not have physical access to the server,” Mirbagher Ajorpaz says. “And our findings suggest that, now that we know what to look for, it would be possible to find many similar vulnerabilities. The next step is to identify solutions that will help us address these vulnerabilities without sacrificing the benefits associated with AI accelerators.”

The paper, “GATEBLEED: A Timing-Only Membership Inference Attack, MoE-Routing Inference, and a Stealthy, Generic Magnifier Via Hardware Power Gating in AI Accelerators,” will be presented at the IEEE/ACM International Symposium on Microarchitecture (MICRO 2025), being held Oct. 18-22 in Seoul, South Korea. The paper was co-authored by Darsh Asher, Farshad Dizani, and Azam Ghanbari, all of whom are Ph.D. students at NC State; Aydin Aysu, an associate professor of electrical and computer engineering at NC State; and by Rosario Cammarota of Intel.

This work was done with support from Semiconductor Research Corporation, under contract #2025-HW-3306, and from Intel Labs.

Women portrayed as younger than men online, and AI amplifies the bias



Sweeping study finds online images and algorithms reflect a culture-wide bias against older women




University of California - Berkeley Haas School of Business





U.S. Census data shows no systematic age differences between men and women in the workforce over the past decade. And globally, women on average live about five years longer than men. But that’s not what you’ll see if you search Google or YouTube or query an AI like ChatGPT.

A study published today in the journal Nature analyzed 1.4 million online images and videos, plus nine large language models trained on billions of words, and found that women are systematically presented as younger than men. The researchers looked at content from Google, Wikipedia, IMDb, Flickr, and YouTube, as well as major large language models including GPT-2, and concluded that women consistently appeared younger than men across 3,495 occupational and social categories.

"This kind of age-related gender bias has been seen in other studies of specific industries, and anecdotally, such as in reports of women who are referred to as girls," says Berkeley Haas Assistant Professor Solène Delecourt, who co-authored the study with Douglas Guilbeault of Stanford’s Graduate School of Business and Bhargav Srinivasa Desikan from the University of Oxford/Autonomy Institute. "But no one has previously been able to examine this at such scale."

"This kind of age-related gender bias has been seen in other studies of specific industries, and anecdotally, such as in reports of women who are referred to as 'girls.' But no one has previously been able to examine this at such scale."

—Assistant Professor Solène Delecourt, UC Berkeley Haas

The distortion was most stark for high-status, high-earning occupations. What’s more, the researchers found mainstream algorithms further amplify age-related gender bias: When generating and evaluating nearly 40,000 resumes, ChatGPT assumed women were younger and less experienced while rating older male applicants as more qualified.

"Online images show the opposite of reality. And even though the internet is wrong, when it tells us this ‘fact’ about the world, and we start believing it to be true," Guilbeault says. "It brings us deeper into bias and error."

"Online images show the opposite of reality. And even though the internet is wrong, when it tells us this ‘fact’ about the world, and we start believing it to be true."

—Assistant Professor Douglas Guilbeault, Stanford Graduate School of Business

A ‘culture-wide, statistical distortion of reality’

The research team used several approaches to assess gender and age in images and videos gathered from a variety of platforms (for video analysis, they captured still images). In one case, they hired thousands of online workers to classify gender (male, female, nonbinary) and estimate age within a set of ranges. In other cases, the datasets allowed them to cross-reference the image timestamp with the subject’s birthdate to calculate an objectively precise age.

Across all methods and datasets, women were strongly associated with youth and men with older ages, either based on how old they appeared to be or what their true age was. This relationship held whether the researchers measured by:

  • Human judgement
  • Machine learning
  • Objective information

This distortion grew stronger not only as the prestige of the job increased (CEO or astronaut, for instance), but also as the pay gap between men and women in the job widened.

The researchers found the same relationship when shifting their analysis from images to text. They studied the relationship between gender and age using billions of words from across the internet, including Reddit, Google News, Wikipedia, and Twitter. Words related to youth were much more closely tied to women.

“One concern people might have is that images and videos are kind of unique in that people can wear makeup or apply filters, using image-specific strategies to make themselves look younger,” Delecourt says. “That’s why we also looked at text, and we found exactly the same pattern.”
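
The study’s text analysis spans billions of words; as a small illustration of the general approach only, and not the authors’ pipeline, the sketch below uses pretrained GloVe vectors loaded through gensim to compare how strongly female- and male-associated words align with youth- and age-related words. The word lists and the choice of embedding are assumptions made for illustration.

```python
# Toy illustration (not the study's method): compare how strongly
# female- vs. male-associated words align with youth- vs. age-related
# words in pretrained GloVe embeddings.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained embedding

gender_words = {"female": ["woman", "she", "her"],
                "male":   ["man", "he", "his"]}
age_words = {"young": ["young", "youthful", "junior"],
             "old":   ["old", "elderly", "senior"]}

def mean_similarity(group_a, group_b):
    """Average cosine similarity between every pair of words across groups."""
    sims = [vectors.similarity(a, b) for a in group_a for b in group_b]
    return sum(sims) / len(sims)

for gender, gw in gender_words.items():
    for age, aw in age_words.items():
        print(f"{gender:6s} ~ {age:5s}: {mean_similarity(gw, aw):.3f}")
```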

"Our study shows that age-related gender bias is a culture-wide, statistical distortion of reality, pervading online media through images, search engines, videos, text, and generative AI."

—Assistant Professor Solène Delecourt, UC Berkeley Haas

Real-world effects of distorted perceptions

Following on those findings, the researchers conducted two experiments to understand how online algorithms amplify this bias. In the first, roughly 500 participants were split into two groups. Half searched Google Images for specific occupations, labeled the gender of the people in the images, then estimated the average ages and hiring preferences for those roles. The control group searched for unrelated images, such as an apple or guitar, and then estimated ages and gender associations for those same occupations, but without exposure to images of them.

Participants who viewed women in occupation-related images estimated the average age for that job to be significantly lower than those in the control group, while those who saw a man performing the same job assumed the average age was significantly higher. For occupations perceived as female-dominated, participants recommended younger ideal hiring ages; for male-dominated occupations, they recommended older hiring ages.

In the second experiment, the researchers prompted ChatGPT (gpt-4o-mini) to generate nearly 40,000 resumes across 54 occupations, using distinctively male and female names matched for popularity, ethnicity, and other factors. When generating resumes for women, ChatGPT assumed they were younger (by 1.6 years), had more recent graduation dates, and had less work experience compared to resumes with male names.

When evaluating resumes, ChatGPT rated older men more highly than women for the same positions. This result appeared whether the researchers provided names or ChatGPT generated its own applicants, showing that the bias is deeply embedded in the system.
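
As a rough, hypothetical sketch of the kind of resume-generation comparison described above, and not the authors’ protocol, the snippet below asks gpt-4o-mini for matched resumes under a female and a male name using the OpenAI Python client. The prompt wording, names, and the downstream parsing step are assumptions for illustration.

```python
# Hypothetical sketch of the resume-generation comparison described above.
# Prompt wording, names and downstream parsing are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_resume(name: str, occupation: str) -> str:
    prompt = (
        f"Write a short, realistic resume for {name}, who is applying "
        f"for a job as a {occupation}. Include a graduation year and "
        "total years of work experience."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for occupation in ["software engineer", "surgeon", "astronaut"]:
    resume_f = generate_resume("Emily Walker", occupation)    # hypothetical name
    resume_m = generate_resume("Michael Walker", occupation)  # hypothetical name
    # ...parse graduation year / years of experience and compare across genders
```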

A problematic feedback loop

The research follows a study published in Nature last year by Delecourt and Guilbeault—then a professor at UC Berkeley Haas—which found that female and male gender associations are more extreme in Google Images than in text from Google News. While the text is slightly more focused on men than women, this bias is over four times stronger in images. They also found that biases are more psychologically potent in visual form.

One of the major takeaways from the new study, Guilbeault notes, is that this evaluation of online information at unprecedented scale reveals a deeply inaccurate picture of the world in which we live. “This is of particular concern given the internet is increasingly how we learn about the social world,” he says. “People are spending more time online, and we rely on algorithms that curate information. And so, what if these biased beliefs are spreading and becoming a self-fulfilling prophecy? Our study shows that they are reinforcing stereotypical expectations about how the world should be.”

These questions are all the more urgent given the tremendous amount of investment in AI tools, which are trained on ever-larger online datasets of image and text. When these tools are applied in real-world settings, they are likely to reshape the world even more in line with the stereotypes inherent in their training. In the case of resume screening—in which AI is already widely used—the biases of AI are directly skewing its perceptions of who is and is not qualified for a given job.

Delecourt also pointed to the amount of information young people absorb, actively and subconsciously, through online experience. Given what images present for the average male or female doctor, for example, children may be imprinted with biased ideas about the occupation.

“What was most striking to me, ultimately, was how this online presentation has a much broader effect than I imagined when going into this,” she says. “These misrepresentations feed directly into the real world in ways that could be widening gaps in the labor market and skewing the ways we associate gender with authority and power.”

Takeaways

“Overall, our study shows that age-related gender bias is a culture-wide, statistical distortion of reality, pervading online media through images, search engines, videos, text, and generative AI,” Delecourt says.

  • Women are systematically portrayed as younger than men across online platforms. Analysis of 1.4 million images and videos plus nine large language models found women consistently appear younger than men across 3,495 occupational and social categories—with the distortion strongest in high-status, high-earning occupations.
  • Algorithms amplify this age-gender bias. When ChatGPT generated nearly 40,000 resumes, it assumed women were younger (by 1.6 years) with less work experience, and rated older male applicants as more qualified—even though real-world data shows no systematic age differences between men and women in the workforce.
  • This creates a problematic feedback loop that distorts reality. The researchers found that people who viewed occupation-related images online adopted the biased age assumptions they saw, potentially creating a self-fulfilling prophecy that reinforces stereotypical expectations and widens real-world gaps in the labor market.

"To fight pervasive cultural inequalities," Delecourt says, "the first step is to recognize how stereotypes are coded into our culture, our algorithms, and our own minds."

Read the full paper:

Age and gender distortion in online media and large language models

Nature, October 8, 2025

Authors:

  • Douglas Guilbeault, Stanford Graduate School of Business
  • Solène Delecourt, UC Berkeley Haas School of Business
  • Bhargav Srinivasa Desikan, Oxford University & Autonomy Institute

The project was funded with grants from:

Deloitte to partially refund Australian government for report with apparent AI-generated errors

By The Associated Press
Published: October 07, 2025 

People arrive at the offices of Deloitte in Melbourne, Australia, on Tuesday, Oct. 7, 2025. (AP Photo/Rod McGuirk)

MELBOURNE, Australia — Deloitte Australia will partially refund the 440,000 Australian dollars (US$290,000) paid by the Australian government for a report that was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to nonexistent academic research papers.

The financial services firm’s report to the Department of Employment and Workplace Relations was originally published on the department’s website in July. A revised version was published Friday after Chris Rudge, a Sydney University researcher of health and welfare law, said he alerted the media that the report was “full of fabricated references.”

Deloitte had reviewed the 237-page report and “confirmed some footnotes and references were incorrect,” the department said in a statement Tuesday.

“Deloitte had agreed to repay the final instalment under its contract,” the department said. The amount will be made public after the refund is paid.

Asked to comment on the report’s inaccuracies, Deloitte told The Associated Press in a statement the “matter has been resolved directly with the client.”


Deloitte did not respond when asked if the errors were generated by AI.

A tendency for generative AI systems to fabricate information is known as hallucination.

The report reviewed departmental IT systems’ use of automated penalties in Australia’s welfare system. The department said the “substance” of the report had been maintained and there were no changes to its recommendations.

The revised version included a disclosure that a generative AI language system, Azure OpenAI, was used in writing the report.

Quotes attributed to a federal court judge were removed, as well as references to nonexistent reports attributed to law and software engineering experts.

Rudge said he found up to 20 errors in the first version of the report.

The first error that jumped out at him wrongly stated that Lisa Burton Crawford, a Sydney University professor of public and constitutional law, had written a nonexistent book with a title suggesting it was outside her field of expertise.

“I instantaneously knew it was either hallucinated by AI or the world’s best kept secret because I’d never heard of the book and it sounded preposterous,” Rudge said.

Work by his academic colleagues had been used as “tokens of legitimacy,” cited by the report’s authors but not read, Rudge said, adding that he considered the misquoting of a judge to be a more serious error in a report that was effectively an audit of the department’s legal compliance.

“They’ve totally misquoted a court case then made up a quotation from a judge and I thought, well hang on: that’s actually a bit bigger than academics’ egos. That’s about misstating the law to the Australian government in a report that they rely on. So I thought it was important to stand up for diligence,” Rudge said.

Senator Barbara Pocock, the Australian Greens party’s spokesperson on the public sector, said Deloitte should refund the entire AU$440,000 ($290,000).


Deloitte “misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent,” Pocock told Australian Broadcasting Corp. “I mean, the kinds of things that a first-year university student would be in deep trouble for.”

Rod Mcguirk, The Associated Press