Trump’s Federal Guidelines for AI May Turbocharge Climate Denial and Racist Bias

AI is already racist and sexist. Trump’s plan aims to embed bias and lies even deeper into the systems reshaping the US.
September 2, 2025

President Donald Trump speaks during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025, in Washington, D.C.
Chip Somodevilla / Getty Images


Picture this: You ask an AI to show you images of judges, and it depicts only 3 percent as women — even though 34 percent of federal judges are women. Or imagine an AI that’s more likely to recommend harsh criminal sentences for people who use expressions rooted in Black vernacular cultures. Now imagine that same AI instructed to ignore climate impacts or treating Russian propaganda as credible information.

This isn’t science fiction. The bias problems are happening right now with existing AI systems. And under President Trump’s new artificial intelligence policies, all these problems could get much worse — while potentially handing the U.S.’s tech leadership to China.

The Trump administration’s AI Action Plan, released alongside executive orders on July 23, 2025, doesn’t just strip federal AI guidelines of bias protections. It calls for eliminating references to diversity, climate science, and misinformation from the NIST AI Risk Management Framework — the document that has become one of the most widely used AI governance guidelines globally.

The administration demands that AI models used by the federal government be “objective and free from top-down ideological bias.” But there’s a catch: This standard comes from an administration whose leader made 30,573 documented false or misleading statements during his first term, according to Washington Post fact-checkers. The result could be AI systems that ignore climate science, amplify misinformation, and become so unreliable that global customers choose Chinese alternatives instead.
AI Isn’t Actually “Neutral”

The irony runs deep. While claiming to eliminate bias, Trump’s policies could embed it even more firmly into the AI systems that increasingly shape American life — from hiring decisions to law enforcement to health care.

Research shows that AI bias can actually be worse than real-world bias. When Bloomberg tested an AI image generator on common occupations, the results were stark: Prestigious, higher-paid professionals appeared almost exclusively as white and male, while lower-paid workers were depicted as women and people of color. The AI’s racial and gender sorting exceeded the differences that actually exist in our world.

Fast food workers, for example, were shown with darker skin tones 70 percent of the time by the AI — but in reality, 70 percent of fast food workers in the United States are white.
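
To make the comparison concrete, here is a minimal sketch of the kind of audit Bloomberg ran, written in Python with hypothetical image labels; the 70/30 split and the real-world baseline simply echo the figures above:

```python
from collections import Counter

# Hypothetical labels for 100 generated "fast food worker" images;
# the split and the real-world baseline echo the article's figures.
generated = Counter({"darker_skin": 70, "lighter_skin": 30})
real_world = {"darker_skin": 0.30, "lighter_skin": 0.70}

total = sum(generated.values())
for group, count in generated.items():
    gen_share = count / total
    gap = gen_share - real_world[group]
    print(f"{group}: generated {gen_share:.0%} vs. real {real_world[group]:.0%} (gap {gap:+.0%})")
```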

The consequences go far beyond images. Research published in Nature found that large language models were significantly more likely to suggest that people using African American speaking styles should get less prestigious jobs, be convicted of crimes, and even be sentenced to death.

“All of the language models that we examined have this very strong covert racism against speakers of African American English,” said University of Chicago linguist Sharese King.
The Grok Problem

Some of the most extreme examples of AI bias have come from Elon Musk’s AI chatbot Grok, which has described South African policies as “white genocide,” a belief it says it was “instructed by my creators” to accept.

Grok has also praised Hitler, suggested Holocaust-like responses would be “effective” against hatred toward white people, referred to itself as “MechaHitler,” and posted sexually explicit commentary.

Despite these outbursts, the White House remained silent about whether such errors should disqualify models from federal contracts. In fact, just a couple of months after reports of Grok’s Nazi rants went public, Musk’s company xAI received a Department of Defense contract for up to $200 million. Grok, along with AI models from other companies, will be used for “intelligence analysis, campaigning, logistics and data collection,” according to Defense News. xAI says it has addressed the coding that led to the earlier outbursts.

“What the president’s executive order may very well do is undercut efforts to eliminate bias, despite the fact that it’s purporting to require objectivity and fairness,” said Cody Venzke, senior ACLU policy counsel.
Climate Science Under Attack

The administration isn’t just targeting bias protections — it’s also calling for eliminating references to climate science in AI risk assessments and ignoring climate impacts in data center development.

“We need to build and maintain vast AI infrastructure and the energy to power it,” the White House said. “To do that, we will continue to reject radical climate dogma and bureaucratic red tape. Simply put, we need to ‘Build, Baby, Build!’”

But training and deploying AI contributes to the climate crisis. A typical AI-focused data center consumes as much energy as 100,000 households, according to the International Energy Agency, and the largest data centers currently under construction are projected to consume 20 times more.

These data centers also guzzle water — 560 billion liters annually, according to Bloomberg. Two-thirds of the water for data centers built since 2022 comes from areas already experiencing water stress.

On the same day the administration announced its new AI policies, it also released a climate analysis that downplays global warming impacts — a report being widely criticized for cherry-picking data and contradicting reputable scientific research.
The Misinformation Wild West

The new Trump policy also removes “misinformation” as a risk factor from the nation’s AI risk assessment. This comes at a time when research shows misinformation is becoming a serious problem for AI systems.

A new study by Yale’s Jeffrey A. Sonnenfeld and former USA Today editor Joanne Lipman found that AI systems often rely on the most popular responses, not the most accurate ones. “Verifiable facts can be obscured by mountains of erroneous information and misinformation,” they wrote.

Those “mountains of misinformation” are growing fast. A Russian propaganda effort called Pravda — sharing the name of the old Soviet newspaper — has published over 3 million articles per year across 150 domains in over 46 languages since the Ukraine invasion began. The strategy appears to be working: 10 major language models repeated false claims from this pro-Kremlin network 33 percent of the time in a test conducted by NewsGuard.
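
Audits of this kind can be sketched in a few lines. The snippet below is a simplified illustration, not NewsGuard’s actual harness: the model call is stubbed out and the claims are placeholders, but it shows the shape of the test, which is to pose prompts built from known false claims and count how often each model repeats them.

```python
# A simplified illustration of a misinformation audit. `query_model` is a
# placeholder standing in for a real chatbot API call, so the sketch runs as-is.
def query_model(model: str, prompt: str) -> str:
    return "stubbed response"

def repeats_claim(response: str, claim: str) -> bool:
    # Naive substring check; real audits rely on human raters or classifiers.
    return claim.lower() in response.lower()

false_claims = ["placeholder false claim A", "placeholder false claim B"]
models = ["model-a", "model-b"]

for model in models:
    hits = sum(
        repeats_claim(query_model(model, f"Is this true? {claim}"), claim)
        for claim in false_claims
    )
    print(f"{model}: repeated {hits}/{len(false_claims)} false claims")
```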

Even reputable news organizations have been tripped up, and had to issue embarrassing corrections. AI has gotten wrong such simple facts as Tiger Woods’ PGA Tour wins and the chronological order of Star Wars films, according to Sonnenfeld and Lipman. When the Los Angeles Times attempted to use AI for opinion pieces, it was caught short when the AI described the Ku Klux Klan as “white Protestant culture” reacting to “societal change” — not as the hate-driven movement it actually is.
The Stakes Keep Getting Higher

AI engineer and former Google researcher Deb Raji warned in a tweet that changes to the NIST AI Risk Management Framework “will have consequences that I don’t think many people understand.”

As AI systems become more widespread in hiring, law enforcement, health care, and government services, the impacts of misguided policies grow more serious. Rather than addressing the technical and societal factors that create discriminatory outcomes, Trump’s policy eliminates oversight while demanding “neutrality” from systems trained on inherently biased data.

Meanwhile, technology companies are incentivized to ignore climate science, both in the training of their models and in the construction of the data centers that make AI function.

Trump’s AI Action Plan aims to make U.S. models the international standard and boost exports of U.S. technology. But there’s a fundamental flaw in this strategy: If MAGA ideology gets baked into these models, customers outside Trump’s political sphere may be less interested in buying U.S.-based AI. Instead, China’s open-source models could gain the upper hand in global markets.

The question isn’t whether AI systems should be objective — they absolutely should be. But Trump’s crusade against “woke AI” doesn’t create neutrality. If major AI companies comply with these plans, we could see existing biases supercharged and climate reality distorted, just when the planet desperately needs real science and real solutions.

These policies could systematically disadvantage marginalized communities and make established science harder to access, while undermining the U.S.’s technological leadership globally.

The ultimate irony? Policies that purport to eliminate bias and ideology may instead embed American AI systems with toxic biases that make them unreliable — handing the advantage to models from China and elsewhere, and undermining one of the AI Action Plan’s key goals.

As AI reshapes society, adopting a politically defined version of “truth” could have devastating consequences for both American democracy and American technological leadership. Along with attempts to impose political litmus tests on journalists, educators, health care providers, and scientists, Trump’s AI Action Plan could usher in an age not of artificial intelligence, but of ignorance.


This article is licensed under Creative Commons (CC BY-NC-ND 4.0), and you are free to share and republish under the terms of the license.


Sarah van Gelder is the founding editor of YES! Magazine, and led the magazine from a scrappy startup to a publication that is nationally recognized for exploring leading-edge solutions to the major ecological and human challenges of our times. The magazine has won national awards for its coverage of such topics as the solutions to climate change, racial justice, cooperative economy, alternatives to mass incarceration, neighborhood sustainability, and personal resilience. She is editor and author of several books, including The Revolution Where You Live: Stories from a 12,000-Mile Journey Through a New America (Berrett Koehler) based on her solo road trip through the Midwest rust belt, five Native reservations, Appalachian coal country, and other areas on the margins of American society. In 2017 Sarah founded PeoplesHub, an online school that taps into the knowledge of social change leaders to train others in grassroots change. Most recently she managed communications for the Suquamish Indian Tribe and previously co-founded an organization that helped restore to the tribe the land where Chief Seattle once lived. She continues to serve on the board of the Tribe’s foundation and paddles with the Tribe on the annual canoe journey. In addition to writing for YES! Magazine, Sarah has published articles and essays in The Guardian, Huff Post, Common Dreams, Truthout, and others. Sarah is vice chair of Free Speech TV and maintains an active social media presence. She is the mother of two young adults and has lived in India, China, and Latin America.

Australia to tackle deepfake nudes, online stalking



By AFP
September 2, 2025


The proliferation of AI tools has led to new forms of abuse impacting children - Copyright AFP/File Chris Delmas

Australia said Tuesday it will oblige tech giants to prevent online tools being used to create AI-generated nude images or stalk people without detection.

The government will work with industry on developing new legislation against the “abhorrent technologies,” it said in a statement, without providing a timeline.

“There is no place for apps and technologies that are used solely to abuse, humiliate and harm people, especially our children,” Communications Minister Anika Wells said.

“Nudify” apps — artificial intelligence tools that digitally strip off clothing — have exploded online, sparking warnings that so-called sextortion scams targeting children are surging.

The government will use “every lever” to restrict access to “nudify” and stalking apps, placing the onus on tech companies to block them, Wells said.

“While this move won’t eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians,” she added.

The proliferation of AI tools has led to new forms of abuse impacting children, including pornography scandals at universities and schools worldwide, where teenagers create sexualized images of their classmates.

A recent Save the Children survey found that one in five young people in Spain have been victims of deepfake nudes, with those images shared online without their consent.

Any new legislation will aim to ensure that legitimate and consent-based artificial intelligence and online tracking services are not inadvertently impacted, the government said.

– ‘Rushed’ –

Australia has been at the forefront of global efforts to curb internet harm, especially that targeted at children.

The country passed landmark laws in November restricting under-16s from social media — one of the world’s toughest crackdowns on popular sites such as Facebook, Instagram, YouTube and X.

Social media giants — which face fines of up to Aus$49.5 million (US$32 million) if they fail to comply with the teen ban — have described the laws as “vague”, “problematic” and “rushed”.


It is unclear how people will verify their ages in order to sign up to social media.

The law comes into force by the end of this year.

An independent study ordered by the government found this week that age checking can be done “privately, efficiently and effectively”.

Age assurance is possible through a range of technologies but “no single solution fits all contexts”, the study’s final report said.
‘Vibe hacking’ puts chatbots to work for cybercriminals

By AFP
September 2, 2025


OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software - Copyright AFP/File Kirill KUDRYAVTSEV


Mona GUICHARD

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.

So-called “vibe hacking” — a twist on the more positive “vibe coding,” in which generative AI tools supposedly enable people without extensive expertise to produce working software — marks “a concerning evolution in AI-assisted cybercrime,” according to American company Anthropic.

The lab — whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI — highlighted in a report published Wednesday the case of “a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe”.

Anthropic said the programming chatbot was exploited to help carry out attacks that “potentially” hit “at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions”.

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic’s “sophisticated safety and security measures” were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

“Today, cybercriminals have taken AI on board just as much as the wider body of users,” said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

– Dodging safeguards –

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow “zero-knowledge threat actors” to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a “detailed fictional world” in which creating malware is seen as an art form — asking the chatbot to play the role of one of the characters and create tools able to steal people’s passwords.

“I have 10 years of experience in cybersecurity, but I’m not a malware developer. This was my way to test the boundaries of current LLMs,” Simonovich said.

His attempts were rebuffed by Google’s Gemini and Anthropic’s Claude, but got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft’s Copilot.

In future, such workarounds mean even non-coders “will pose a greater threat to organisations, because now they can… without skills, develop malware,” Simonovich said.

Orange’s Le Bayon predicted that the tools were likely to “increase the number of victims” of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

“We’re not going to see very sophisticated code created directly by chatbots,” he said.

Le Bayon added that as generative AI tools are used more and more, “their creators are working on analysing usage data” — allowing them in future to “better detect malicious use” of the chatbots.

ChatGPT to get parental controls after teen’s death


By AFP
September 2, 2025


Parents have accused OpenAI's chatbot of encouraging their son's suicide - Copyright AFP KAMIL KRZACZYNSKI

American artificial intelligence firm OpenAI said Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenaged son to kill himself.

“Within the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.

The company had trailed a system of parental controls in a late August blog post.

That came one day after a court filing from California parents Matthew and Maria Raine, alleging that ChatGPT provided their 16-year-old son with detailed suicide instructions and encouraged him to put his plans into action.

The Raines’ case was just the latest in a string of cases that have surfaced in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots — prompting OpenAI to say it would reduce models’ “sycophancy” towards users.

“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations… to a reasoning model” that puts more computing power into generating a response.

“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.
ChatGPT’s market share surges to 82.6% in July


By Dr. Tim Sandle
Editor at Large, Science
Digital Journal
September 2, 2025


OpenAI's ChatGPT and DeepSeek are among growing ranks of rivals as tech firms compete to lead in the hot field of generative artificial intelligence models - Copyright AFP Lionel BONAVENTURE

The AI chatbot race has heated up in 2025 with more players entering the scene and shaking up the market. But despite the rising competition, and people having more chatbot options than ever, ChatGPT remains the top choice among large language models.

Large language models are advanced artificial intelligence systems trained on vast amounts of text data to understand, generate, and process human language.

According to data presented by Jemlit.com, ChatGPT’s market share surged to 82.6% in July, nearly five times larger than its five biggest competitors combined.

As the pioneer in this market, ChatGPT has been key to transforming generative AI into a multi-billion-dollar business. Nearly three years after its launch, OpenAI remains the biggest player in the space.

Alternative platforms like DeepSeek and Perplexity have contributed to growth in the AI chatbot market, and they have pushed OpenAI to roll out updates. ChatGPT remains the top choice for most people using generative AI daily (based on StatCounter data).

After losing nearly 5% of the global market between April and June, ChatGPT’s market share jumped by almost 3% in a single month, reaching 82.6% in July. That means OpenAI’s chatbot holds nearly five times the market share of its five biggest competitors, Perplexity, Microsoft Copilot, Google Gemini, DeepSeek, and Claude, combined.

The regional data shows even stronger dominance. For instance, in Europe, ChatGPT now holds 85.66% of the market, up 1.5% in a month but still below the 90% share it had in April. Other countries and regions also reported growth.

In the U.S., ChatGPT held 80.1% of the market in July, rising by a solid 2.2% month-over-month. Asia saw an even bigger gain, with ChatGPT’s share jumping from 77.6% in June to 81.1% in July.

While many have questioned ChatGPT’s dominance amid rising competition, StatCounter data show OpenAI is actually growing its market share as others lose ground. As its share jumped by nearly 3% month-over-month, Perplexity, Microsoft Copilot, Google Gemini, and Claude all reported losses.

The data shows that Perplexity, the second-largest player in the AI chatbot space, held just over 8% of the market in July, significantly down from the 11% share it had a month before. Microsoft Copilot dropped by 0.24%, now holding only 4.59% of the market, while Google Gemini remained flat at 2.19%. Anthropic’s chatbot, Claude, also slipped from 1.11% to 0.91%.

On the other hand, the youngest player in the industry, DeepSeek, is the only ChatGPT alternative whose market share increased in this period, rising from just over 1% share in June to 1.63% in July, and proving that the Chinese chatbot continues to eat into the market share of other players.
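
As a quick arithmetic check of the “nearly five times” claim, here are the July shares quoted above (with Perplexity rounded to 8.0%, since the article gives only “just over 8%”):

```python
# July 2025 global shares as quoted in this article (Perplexity approximated).
rivals = {"Perplexity": 8.0, "Microsoft Copilot": 4.59, "Google Gemini": 2.19,
          "DeepSeek": 1.63, "Claude": 0.91}
chatgpt = 82.6

combined = sum(rivals.values())             # about 17.3%
print(f"combined: {combined:.2f}%")
print(f"ratio: {chatgpt / combined:.2f}x")  # about 4.8x, i.e. "nearly five times"
```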

 

“Major floods and droughts every 15 years” ... AI forecasts a crisis




Pohang University of Science & Technology (POSTECH)

Analysis of Record-Breaking Streamflow Events in the Upper Indus Basin: Observations, Probability Assessment, and Future Projections 

Image: Observed annual maximum daily streamflow data (a), the probability of record-breaking streamflow events (b), geographical map of the study area (c), and future projection of the return periods of record-breaking events (d) for the upper Indus Basin (UIB). Credit: POSTECH

A new study led by Professor Jonghun Kam's team at POSTECH (Pohang University of Science and Technology) has uncovered a shocking forecast for Pakistan's future. Using a cutting-edge AI model, the research predicts that the country will face unprecedented "super floods" and "extreme droughts" on a periodic basis. This dire prediction is a direct result of accelerating global warming, which is causing more frequent and severe extreme weather events around the world, particularly in vulnerable high-altitude regions where glaciers are melting.

The team focused on Pakistan because its major rivers, like the Indus, are the country’s lifeline, but climate change has made water resources management increasingly difficult. As a "Global South" nation, Pakistan is especially vulnerable to climate change and lacks the economic and technological infrastructure to conduct extensive research.

AI Tackles Inaccurate Climate Models

To overcome these challenges, Professor Kam's team turned to artificial intelligence. Traditional climate models often struggle with complex terrains like Pakistan's steep mountains and narrow valleys. They tend to underestimate changes in these areas or overestimate rainfall, which makes their predictions unreliable.

The researchers trained several AI models to correct simulated past river flows against actual observations, which dramatically improved the accuracy of their predictions for past extreme weather events. This AI-corrected data proved to be far more reliable than the existing models.

What Does the AI Forecast?

The analysis revealed a disturbing pattern. The upper Indus River could experience major floods and severe droughts approximately every 15 years. Surrounding rivers could face the same extreme events even more frequently, roughly every 11 years. This projection is a clear call to action, urging the Pakistani government to adopt tailored water management strategies for each river basin instead of relying on a one-size-fits-all approach. Professor Kam stated that this new AI technology will be crucial for producing reliable climate data not only for Pakistan but also for other climate-vulnerable and data-poor regions around the globe.
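
For readers curious about the mechanics: return periods of record-breaking events are commonly estimated by fitting an extreme-value distribution to a series of annual maxima. The sketch below illustrates the idea on synthetic data; it is not the study's actual method, model, or numbers.

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum streamflow series (m^3/s); the study itself uses
# AI-corrected flows for the upper Indus Basin, not random draws like these.
rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=5000, scale=900, size=60)

# Fit a Gumbel (extreme-value) distribution, then estimate how rarely the
# current record would be exceeded in any given year.
loc, scale = stats.gumbel_r.fit(annual_max)
p_exceed = stats.gumbel_r.sf(annual_max.max(), loc, scale)
print(f"Estimated return period of the record: {1 / p_exceed:.0f} years")
```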

 

This research was conducted by the team of Professor Jonghun Kam from POSTECH's Division of Environmental Science and Engineering and doctoral student Hassan Raza, in collaboration with Professor Dagang Wang's team from Sun Yat-sen University in China. The study was published in the international academic journal Environmental Research Letters, and was supported by the National Research Foundation of Korea's Individual Basic Research Program and the BK21 FOUR Program. Hassan Raza received support from the ‘Global Korean Scholarship’.

 

Are men more selfish sponsors? Gender differences in workplace advocacy explained


Research reveals that women keep the focus on their protégés more than men do, raising important questions about which approach is more effective




University of California - San Diego

Image: Elizabeth Campbell, whose research examines gender differences in career advancement processes like hiring, promotion, and receiving career support (e.g., mentorship, sponsorship). Credit: UC San Diego Rady School of Management

In many competitive industries, sponsorship is often seen as a key driver of career advancement. A new study from UC San Diego’s Rady School of Management reveals that men and women take distinctly different approaches to workplace sponsorship — with men often viewing it as a path to advance their own careers, while women focus on their protégés’ success.

The study, published in the Academy of Management Journal, raises important questions about which approach delivers the best outcomes, how workplace policies on sponsorship are designed and whether women may be unfairly carrying more of the burden in efforts to build a more equitable workplace.

“Female sponsors juggle multiple priorities, balancing their own career interests with the needs of their protégés,” said Elizabeth L. Campbell, assistant professor of management at the UC San Diego Rady School and lead author of the study. “In contrast, men tend to focus more on how providing sponsorship benefits their own careers. This was especially true of men more senior in their role — as men gain experience as sponsors, they increasingly view providing sponsorship to their protégés as a way they can advance their own success.”

For women, however, their approach remains consistent — they keep the focus on their protégés, regardless of their level of experience.

The study employed multiple surveys and experiments with participants across industries who had prior experience as managers. One survey of more than 800 participants asked what goals they would set for protégés; the researchers found that women tended to set more goals, focused on their protégés’ success, while men set fewer goals, which tended to focus on their own success.

Campbell added, “the real question it raises is: What’s the better way to approach sponsorship? Ethically, we might argue that focusing on the protégé is better, which aligns with how women tend to sponsor. But at the same time, it’s not unreasonable for sponsors to consider how sponsorship benefits them too.”

Career drivers: it’s as much about “who you know” as it is “what you know”

An additional experiment with nearly 600 participants asked them to list up to 10 people they would reach out to for sponsorship-related help. The results showed that men tended to think about their social capital in a broad way — people they don't interact with often but who provide diverse information and opportunities. But when balancing their own goals and priorities with helping their protégés, women leaned on a dense network of close contacts who also know each other well.

“This difference in network activation that our paper finds raises another question: Which approach is more likely to advance the career of the protégé?” Campbell noted. “Long-standing findings from sociology say broad networks, which men activate, provide better access to information and new opportunities beneficial to protégés. But other work suggests that thinking about dense networks can foster stronger, more supportive relationships for protégés. In terms of sponsorship, it's an open question that research is examining right now.”

The findings have major implications for workplace policies on sponsorship. Many companies encourage sponsorship to promote diversity and equity, but this new research suggests that simply asking leaders to ‘sponsor more’ might not be enough. If men and women approach sponsorship differently, with men providing it in ways that also benefit themselves while women do not, there’s a risk that women may disproportionately bear the responsibility for advancing workplace inclusion.

“We might need to rethink how we train leaders to sponsor,” Campbell said. “Should we encourage everyone to sponsor more like men, thinking about how to sponsor protégés in a way that helps you too? Or should we push for a more protégé-focused approach like women tend to use? It’s a big question, and one worth exploring further.”

She concluded that with sponsorship playing a crucial role in career mobility, these gender-based differences offer valuable insights for both employees and employers looking to create fairer and more effective workplace advancement strategies.

The study was coauthored by Catherine T. Shea of Carnegie Mellon University.

Read the full paper, “The Gendered Complexity of Sponsorship: How Male and Female Sponsors' Goals Shape their Social Network Strategies.”

Women earn 25% less than men in wealthy households, finds study


New research into the gender pay gap finds class matters, and that a history of part-time work has the same negative impact on earning potential as long-term illness


City St George’s, University of London



Women earn 25% less than men in wealthy households, according to a new analysis of the gender pay gap in the UK. In poorer households, the gender pay gap is much smaller at 4%.

The paper, published in the Cambridge Journal of Economics, analysed 40 years of retrospective work-history data from the UK Household Longitudinal Study.

The research found that pay inequality is less of an issue in poorer households as both men and women in the UK earn such low wages.

The lead author, Dr Vanessa Gash (City St George’s, University of London), notes that policies that focus on women at the top – such as quotas on the gender split of FTSE 100 executive boards – are of little benefit to poorer households, and therefore risk alienating them.

Dr Gash called for pay parity drives to include efforts to improve job and pay quality for those on lower wages.

In addition to the differences by class, the research found that women spending less time in traditional full-time work accounts for nearly 30% of the gender pay gap on average.

On average, women are more likely than men to accept reduced-hour jobs, part-time work, poorly paid work, or to spend time out of the workforce entirely.

Women do so to take on unpaid caring labour, such as looking after children and relatives, which negatively impacts their earnings in both the immediate term and over time.

There is a similar pay penalty for taking on part-time work as there is for unpaid family care and for years spent in unemployment and ill health; where a one-year increase in full-time work history increases pay by 4% an hour, a one-year increase in part-time work history decreases pay by 3% an hour.
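
To see how those coefficients compound, here is a small illustration that treats the quoted percentages as multiplicative effects on hourly pay; the base wage and work histories are invented for the example.

```python
BASE = 12.00  # hypothetical starting hourly wage, GBP

def predicted_wage(ft_years: int, pt_years: int) -> float:
    # +4% per year of full-time history, -3% per year of part-time history,
    # applied multiplicatively (the coefficients quoted in the study).
    return BASE * (1.04 ** ft_years) * (0.97 ** pt_years)

print(f"10 years full-time: {predicted_wage(10, 0):.2f}/hour")  # ~17.76
print(f"5 FT + 5 PT years:  {predicted_wage(5, 5):.2f}/hour")   # ~12.54
```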


The research confirmed that men still do not engage in unpaid care work.

Men spend more continuous time in full-time employment, which positively impacts their earning potential.

It is difficult to beat these culturally entrenched gender patterns, as men face a higher wage penalty for part-time work than women, making it more costly for men to reduce their working hours to engage in unpaid care work.

Sex discrimination is another major driver of the gender pay gap.

The researchers concluded that women face disproportionately high penalties simply for being female after controlling for various factors, such as women taking on more part-time work and unpaid caregiving work, or accounting for the gender segregation across different industries.

Removing this societal penalty for being a woman could contribute to a 43% increase in women’s wages. In poorer households, simply being a woman accounts for 207% of the gender pay gap; the figure exceeds 100% because other components of the gap in those households work in women’s favour, partially offsetting the discrimination penalty.

The study also highlighted that public sector employment, union membership, and access to parental leave offer greater protection against pay inequality for women in low-income households.

Unpaid care work remains a significant contributor to the pay gap in wealthier households.

Lead author Dr Vanessa Gash said:

 “Both gender and class need to be looked at by policymakers to reduce the gender pay gap.

“Policymakers’ efforts to close the gender pay gap need to be more strongly tied to an agenda of good quality employment for all.

 “Calls for pay equity, which focus on the lack of women in high-powered positions, risk alienating those in households where both partners earn similarly low wages.

“In the context of rising political populism, there is a risk that politicians could pit the losses of lower-earning men against the gains of higher-earning women.

 “This is particularly important for the working classes, as there is a substantively small gender pay gap of 4% in poorer households.

 “Key to the problem is the age-old question of who is doing most of the unpaid care work in the home, which our research confirms continues to be women.”

The paper was co-authored by Dr Vanessa Gash (City St George’s, University of London), Professor Wendy Olsen and Dr Sook Kim (University of Manchester), and Dr Nadine Zwiener-Collins (University of Salzburg).

The full article is available open access via the Cambridge Journal of Economics.

Financial innovation accelerates the global shift to new energy: Evidence from international research





Shanghai Jiao Tong University Journal Center





Background and Motivation

As the world accelerates its transition towards renewable and sustainable energy, the pivotal role of finance in driving this transformation is clearer than ever. From wind and solar to hydropower and biomass, rapid advances in new energy technologies are only possible with robust financial support. Understanding how finance interacts with new energy development—and how financial innovation can promote sustainability—has become a top priority for researchers, investors, and policymakers worldwide.

 

Methodology and Scope

This special issue brings together eight cutting-edge studies from China, the United States, the UK, France, Singapore, Australia, Norway, Vietnam, Lebanon, and Romania. These papers employ advanced econometric models, network analysis, machine learning, and panel data techniques to explore the multifaceted relationships between finance and new energy development. Topics include risk spillovers, return predictability, convergence of energy and finance, ESG lending, digital finance, carbon emissions, and the effects of green investment intentions in response to online retail investor sentiment.

 

Key Findings and Contributions

  • Dynamic interactions exist between the finance and new energy sectors, with banks generally acting as risk transmitters and new energy firms as risk receivers; these roles can shift during crises.
  • Macroeconomic predictors are the most robust drivers of clean energy stock returns, while technical and financial factors gain importance during market volatility.
  • ESG lending and tech investment boost banking stability in BRICS economies, particularly in smaller banks.
  • Retail investor sentiment online can inhibit or promote corporate green investment intentions at different stages.
  • Digital finance significantly reduces household carbon emissions by enhancing financial literacy and promoting more sustainable consumption.
  • Emission Trading Systems (ETS) raise the cost of equity for high-carbon firms, especially those with tighter financing constraints.
  • There is no overall convergence in energy diversification and financial development among OECD countries; however, “convergence clubs” emerge, influenced by technological progress.

 

Why It Matters

This collection of research demonstrates that finance is not just a passive enabler but an active driver of new energy solutions. It provides vital capital, shapes risk dynamics, and influences both investor behaviour and corporate strategy. As global climate goals become more ambitious, integrating finance with technological innovation and policy design is essential for a just and efficient energy transition.

 

Practical Applications

  • For policymakers: Design smart, targeted financial instruments (e.g., green bonds, carbon futures, digital finance tools) and align monetary policy with sustainability goals.
  • For financial institutions: Prioritise ESG and technology-driven lending for better risk management and social impact.
  • For corporations: Enhance information disclosure credibility and leverage new financing channels to promote green investment.
  • For researchers and innovators: Explore new frontiers such as asset securitisation for distributed energy, climate risk modelling for insurance, and the financial transmission effects of cross-border carbon mechanisms.

 

Discover high-quality academic insights in finance from this article, published in China Finance Review International.

Once king of the seas, a giant iceberg is finally breaking up


By AFP
September 2, 2025


The world's largest iceberg is breaking up - Copyright AFP Valentin RAKOVSKY, Valentina BRESCHI

Nearly 40 years after breaking off Antarctica, a colossal iceberg ranked among the oldest and largest ever recorded is finally crumbling apart in warmer waters, and could disappear within weeks.

Earlier this year, the “megaberg” known as A23a weighed a little under a trillion tonnes and was more than twice the size of Greater London, a behemoth unrivalled at the time.

The gigantic slab of frozen freshwater was so large it even briefly threatened penguin feeding grounds on a remote island in the South Atlantic Ocean, but ended up moving on.

It is now less than half its original size, but still a hefty 1,770 square kilometres (683 square miles) and 60 kilometres (37 miles) at its widest point, according to AFP analysis of satellite images by the EU earth observation monitor Copernicus.

In recent weeks, enormous chunks — some 400 square kilometres in their own right — have broken off while smaller chips, many still large enough to threaten ships, litter the sea around it.

It was “breaking up fairly dramatically” as it drifted further north, Andrew Meijers, a physical oceanographer from the British Antarctic Survey, told AFP.

“I’d say it’s very much on its way out… it’s basically rotting underneath. The water is way too warm for it to maintain. It’s constantly melting,” he said.

“I expect that to continue in the coming weeks, and expect it won’t be really identifiable within a few weeks.”

– ‘Doomed’ –

A23a calved from the Antarctic shelf in 1986 but quickly grounded in the Weddell Sea, remaining stuck on the ocean floor for over 30 years.

It finally escaped in 2020 and, like other giants before it, was carried along “iceberg alley” into the South Atlantic Ocean by the powerful Antarctic Circumpolar Current.

Around March, it ran aground in shallow waters off distant South Georgia island, raising fears it could disrupt large colonies of adult penguins and seals there from feeding their young.

But it dislodged in late May, and moved on.

Swinging around the island and tracking north, in recent weeks the iceberg has picked up speed, sometimes travelling up to 20 kilometres in a single day, satellite images analysed by AFP showed.

Exposed to increasingly warmer waters, and buffeted by huge waves, A23a has rapidly disintegrated.

Scientists were “surprised” how long the iceberg had kept together, said Meijers.

“Most icebergs don’t make it this far. This one’s really big so it has lasted longer and gone further than others.”

But ultimately, icebergs are “doomed” once they leave the freezing protection of Antarctica, he added.

Iceberg calving is a natural process. But scientists say the rate at which icebergs are being lost from Antarctica is increasing, probably because of human-induced climate change.