Monday, November 10, 2025

 

New recharge-to-recycle reactor turns battery waste into new lithium feedstock



Rice University

Image: From left to right, Sibani Lisa Biswal, Yuge Feng and Haotian Wang (Credit: Jorge Vidal/Rice University).




As global electric vehicle adoption accelerates, end-of-life battery packs are quickly becoming a major waste stream. Lithium is costly to mine and refine, and most current recycling methods are energy- and chemical-intensive, often producing lithium carbonate that must be further processed into lithium hydroxide for reuse.

Instead of smelting or dissolving shredded battery materials (“black mass”) in strong acids, a team of engineers at Rice University has developed a cleaner approach: recharging the waste cathode materials to coax lithium ions out into water, where they combine with hydroxide to form high-purity lithium hydroxide.

“We asked a basic question: If charging a battery pulls lithium out of a cathode, why not use that same reaction to recycle?” said Sibani Lisa Biswal, chair of Rice’s Department of Chemical and Biomolecular Engineering and the William M. McCardell Professor in Chemical Engineering. “By pairing that chemistry with a compact electrochemical reactor, we can separate lithium cleanly and produce the exact salt manufacturers want.”

In a working battery, charging pulls lithium ions out of the cathode. Rice’s system applies that same principle to waste cathode materials such as lithium iron phosphate. As the reaction begins, lithium ions migrate across a thin cation-exchange membrane into a flowing stream of water. At the counter electrode, another simple reaction splits water to generate hydroxide. The lithium and hydroxide then combine in the water stream to form lithium hydroxide with no need for harsh acids or extra chemicals.
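
In schematic form, the steps described above correspond to the textbook half-reactions for lithium iron phosphate delithiation and water reduction; the following LaTeX sketch is our own summary, not equations reproduced from the Joule paper:

\begin{align*}
\mathrm{LiFePO_4} &\rightarrow \mathrm{FePO_4} + \mathrm{Li^+} + e^- && \text{(waste cathode, oxidized as in charging)}\\
2\,\mathrm{H_2O} + 2e^- &\rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} && \text{(counter electrode, water reduction)}\\
\mathrm{Li^+} + \mathrm{OH^-} &\rightarrow \mathrm{LiOH} && \text{(in the flowing water stream)}
\end{align*}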

The research, recently published in Joule, demonstrates a zero-gap membrane-electrode reactor that uses only electricity, water and battery waste. In some modes, the process required as little as 103 kilojoules of energy per kilogram of black mass — about an order of magnitude lower than common acid-leaching routes (not counting their additional processing steps). The team scaled the device to 20 square centimeters, ran a 1,000-hour stability test and processed 57 grams of industrial black mass supplied by their industry partner TotalEnergies.

“Directly producing high-purity lithium hydroxide shortens the path back into new batteries,” said Haotian Wang, associate professor of chemical and biomolecular engineering and co-corresponding author of the study alongside Biswal. “That means fewer processing steps, lower waste and a more resilient supply chain.”

The process produced lithium hydroxide that was more than 99% pure — clean enough to feed directly back into battery manufacturing. It also proved highly energy efficient, consuming as little as 103 kilojoules of energy per kilogram of waste in one mode and 536 kilojoules in another. The system showed both durability and scalability, maintaining an average lithium recovery rate of nearly 90% over 1,000 hours of continuous operation.
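
For scale, those figures translate into the per-tonne units more familiar in recycling economics (a straightforward unit conversion, not a number reported by the team):

\[
103~\mathrm{kJ/kg} = 103~\mathrm{MJ/t} \approx \frac{103}{3.6}~\mathrm{kWh/t} \approx 29~\mathrm{kWh/t},
\qquad
536~\mathrm{kJ/kg} \approx 149~\mathrm{kWh/t}.
\]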

The approach also worked across multiple battery chemistries, including lithium iron phosphate, lithium manganese oxide and nickel-manganese-cobalt variants. Even more promising, the researchers demonstrated roll-to-roll processing of entire lithium iron phosphate electrodes directly from aluminum foil — no scraping or pretreatment required.

“The roll-to-roll demo shows how this could plug into automated disassembly lines,” Wang said. “You feed in the electrode, power the reactor with low-carbon electricity and draw out battery-grade lithium hydroxide.”

Next, the researchers plan to scale up the technology further by developing larger-area stacks, increasing black mass loading and designing more selective, hydrophobic membranes to sustain high efficiency at greater lithium hydroxide concentrations. They also see posttreatment — concentrating and crystallizing lithium hydroxide — as the next major opportunity to cut overall energy use and emissions.

“We’ve made lithium extraction cleaner and simpler,” Biswal said. “Now we see the next bottleneck clearly. Tackle concentration, and you unlock even better sustainability.”

‘Roadmap’ shows the environmental impact of AI data center boom



Cornell University





ITHACA, N.Y. - As the everyday use of AI has exploded in recent years, so have the energy demands of the computing infrastructure that supports it. But the environmental toll of these large data centers, which suck up gigawatts of power and require vast amounts of water for cooling, has so far been diffuse and difficult to quantify.

Now, Cornell researchers have used advanced data analytics – and, naturally, some AI, too – to create a state-by-state look at that environmental impact. The team found that, by 2030, the current rate of AI growth would annually put 24 to 44 million metric tons of carbon dioxide into the atmosphere, the emissions equivalent of adding 5 to 10 million cars to U.S. roadways. It would also drain 731 to 1,125 million cubic meters of water per year – equal to the annual household water usage of 6 to 10 million Americans. The cumulative effect would put the AI industry’s net-zero emissions targets out of reach.
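
A quick back-of-the-envelope check of those equivalences (our arithmetic, not the study's) shows the stated ranges are internally consistent:

\[
\frac{24\text{--}44~\mathrm{Mt~CO_2/yr}}{5\text{--}10~\text{million cars}} \approx 4.4\text{--}4.8~\mathrm{t~CO_2}\ \text{per car per year},
\qquad
\frac{731\text{--}1{,}125~\text{million}~\mathrm{m^3/yr}}{6\text{--}10~\text{million people}} \approx 110\text{--}120~\mathrm{m^3}\ \text{per person per year}.
\]

The latter works out to roughly 300 to 330 liters per person per day; both values sit near commonly cited U.S. figures of about 4.6 tonnes of CO2 per passenger vehicle per year and 80 to 100 gallons of household water use per person per day.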

On the upside, the study also outlines an actionable roadmap that would use smart siting, faster grid decarbonization and operational efficiency to cut these impacts by approximately 73% (carbon dioxide) and 86% (water) compared with worst-case scenarios.

The findings were published Nov. 10 in Nature Sustainability. The first author is doctoral student Tianqi Xiao in the Process-Energy-Environmental Systems Engineering (PEESE) lab.

“Artificial intelligence is changing every sector of society, but its rapid growth comes with a real footprint in energy, water and carbon,” said Fengqi You, the Roxanne E. and Michael J. Zak Professor in Energy Systems Engineering in Cornell Engineering, who led the project. “Our study is built to answer a simple question: Given the magnitude of the AI computing boom, what environmental trajectory will it take? And more importantly, what choices steer it toward sustainability?”

In order to quantify the environmental footprints of the nation’s AI computing infrastructure, the team began compiling “multiple dimensions” of financial, marketing and manufacturing data three years ago to understand how the industry is expanding, then combined that with location-specific data on power systems, resource consumption and climate.

“There’s a lot of data, and that’s a huge effort. Sustainability information, like energy, water, climate, tend to be open and public. But industrial data is hard, because not every company is reporting everything,” You said. “And of course, eventually, we still need to be looking at multiple scenarios. There’s no way that one size fits all. Every region is different for regulations. We used AI to fill some of the data gap as well.”

But projecting the impacts wasn’t enough. The researchers also wanted to provide data-driven guidance for sustainable growth of AI infrastructure.

“There isn’t a silver bullet,” You said. “Siting, grid decarbonization and efficient operations work together – that’s how you get reductions on the order of roughly 73% for carbon and 86% for water.”

One of the most important factors, by far: location, location, location.

Many current data clusters are being constructed in water-scarce regions, such as Nevada and Arizona. And in some hubs, such as northern Virginia, rapid clustering can strain local infrastructure and water resources. Locating facilities in regions with lower water stress and improving cooling efficiency could slash water demands by about 52%, and when combined with grid and operational best practices, total water reductions could reach 86%, the study found. The Midwest and “windbelt” states – particularly Texas, Montana, Nebraska and South Dakota – would deliver the best combined carbon-and-water profile.

“New York state remains a low-carbon, climate-friendly option thanks to its clean electricity mix of nuclear, hydropower and growing renewables,” You said, “although prioritizing water-efficient cooling and additional clean power is key.”

If decarbonization does not catch up with the computing demand, emissions could rise roughly 20%.

“Even if each kilowatt-hour gets cleaner, total emissions can rise if AI demand grows faster than the grid decarbonizes,” You said. “The solution is to accelerate the clean-energy transition in the same places where AI computing is expanding.”

However, decarbonizing the grid can only do so much. Even in the ambitious high-renewables scenario, by 2030 carbon dioxide would drop roughly 15% compared to the baseline, and approximately 11 million tons of residual emissions would remain, requiring roughly 28 gigawatts of wind or 43 gigawatts of solar capacity to reach net-zero.
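
Those wind and solar figures are mutually consistent if one assumes typical capacity factors of roughly 35% for wind and 23% for solar (our assumption; the release does not state the factors used):

\[
28~\mathrm{GW} \times 0.35 \times 8760~\mathrm{h} \approx 86~\mathrm{TWh/yr},
\qquad
43~\mathrm{GW} \times 0.23 \times 8760~\mathrm{h} \approx 87~\mathrm{TWh/yr},
\]

i.e. either build-out would supply roughly the same annual clean generation.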

The researchers determined that deploying an array of energy- and water-efficient technologies, such as advanced liquid cooling and improved server utilization, could potentially remove another 7% of carbon dioxide and lower water use by 29%, for a total water reduction of 32% when combined.

As companies such as OpenAI and Google funnel more and more money into rapidly building AI data centers to keep up with demand, this is a pivotal moment for coordinated planning between industry, utilities and regulators to avoid local water scarcity and higher grid emissions, according to You.

“This is the build-out moment,” he said. “The AI infrastructure choices we make this decade will decide whether AI accelerates climate progress or becomes a new environmental burden.”

Co-authors include researchers from the KTH Royal Institute of Technology in Stockholm, Sweden; Concordia University in Montreal, Canada; and RFF-CMCC European Institute on Economics and the Environment in Milan, Italy.

The research was supported by the National Science Foundation and the Eric and Wendy Schmidt AI in Science program.

Who Will End Up Paying for the AI Spending Spree?

  • Despite denials from Washington and AI leaders, industry executives are already discussing government “backstops” and indirect support.

  • OpenAI faces massive spending commitments far beyond its revenues, raising doubts about long-term financial viability.

  • Subsidized data centers and rising energy costs reveal how public resources are already propping up the AI boom - and hint at a broader bailout to come.



There's an old adage in Washington: Don't believe anything until it is officially denied. Now that the Trump administration's so-called artificial intelligence (AI) czar, David Sacks, has gone on record stating that "[t]here will be no federal bailout for AI," we can begin speculating about what form that bailout might take.

It turns out that the chief financial officer of AI behemoth OpenAI has already put forth an idea regarding the form of such a bailout. Sarah Friar told The Wall Street Journal in a recorded interview that the industry would need federal guarantees in order to make the necessary investments to ensure American leadership in AI development and deployment. Friar later "clarified" her comments in a LinkedIn post after the pushback from Sacks, saying that she had "muddied" her point by using the word "backstop" and that she really meant that AI leadership will require "government playing their part." That sounds like the government should still do more or less what she said in the Wall Street Journal interview.

Now, maybe you are wondering why the hottest industry on the planet, flush with hundreds of billions of dollars from investors, needs a federal bailout. It's revealing that AI expert and commentator Gary Marcus predicted 10 months ago that the AI industry would go seeking a government bailout to make up for overspending, bad business decisions, and huge future commitments that the industry is unlikely to be able to meet. For example, in a recent podcast hosted by an outside investor in OpenAI, the company's CEO, Sam Altman, got tetchy when asked how a company with only $13 billion in annual revenues and mounting losses will somehow fulfill $1.4 trillion in spending commitments over the next few years. Altman did NOT actually answer the question.

So what possible justification could the AI industry dream up for government subsidies, loan guarantees or other handouts? For years, one of the best ways to get Washington's attention has been to say the equivalent of "China bad. Must beat China." So that's what Altman is telling reporters. But that doesn't explain why OpenAI, rather than other companies, should be the target of federal largesse. In what appears to be damage control, Altman wrote on his X account that OpenAI is not asking for direct federal assistance and then outlined how the government can give it indirect assistance by building a lot of data centers of its own (which can then presumably be leased to the AI industry so the industry doesn't have to make the investment itself).

Maybe I'm wrong, and what we are seeing is NOT the preliminary jockeying by the AI industry and the U.S. government regarding what sort of subsidy or bailout will be provided to the industry. Lest you think that the industry has so far moved forward without government handouts, the AP noted that subsidies are offered by more than 30 state governments to attract data centers. Not everyone is happy with having data centers in their communities. And, those data centers have also sent electricity rates skyward as consumers and data centers compete for electricity and utilities seek additional funds to build the capacity necessary to power those data centers. Effectively, current electricity customers are subsidizing the AI data center build-out by paying for new generating capacity and lines to feed energy to those data centers.

The larger problem with AI is that it appears to have several limitations in its current form that will prevent it from taking over much of the work already done by humans and preclude it from being incorporated into critical systems (because it makes too many mistakes). All the grandiose claims made by AI boosters are dispatched with actual facts in this very long piece by AI critic Ed Zitron.

I am increasingly thinking of AI as a boondoggle. A boondoggle, according to Dictionary.com, is "a wasteful and worthless project undertaken for political, corporate, or personal gain." So far, the AI industry mostly fits this definition. But there is a more expansive definition which I borrow from Dmitri Orlov, author of Reinventing Collapse: A contemporary boondoggle must not only be wasteful, it should, if possible, also create additional problems that can only be addressed by yet more boondoggles—such as the need for vast new electric generation capacity that will be unnecessary if AI turns out to be far less useful than advertised. AI boosters say that AI is going to have a big impact on society. I couldn't agree more, except not quite in the way these boosters think.

By Kurt Cobb via Resource Insights


Texas Tech professors awarded $12 million for data center and AI research



Researchers led by the Department of Computer Science, working with other departments, will drive data center innovations for large-scale simulations, AI training and data analytics.



Texas Tech University



Texas Tech University researchers have received grant funding totaling roughly $12.25 million over five years from the National Science Foundation (NSF) to explore infrastructure necessary for large-scale computing that uses multiple energy sources.

The REmotely-managed Power-Aware Computing Systems and Services (REPACSS) project will build an advanced system prototype and develop and test tools for automation, remote data control, and scientific workflow management.

REPACSS will be housed at the Texas Tech Reese National Security Complex (RNSC) as part of NSF's "Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support" (ACCESS) national research cyberinfrastructure ecosystem and will provide access to these resources to researchers throughout the country. 

The project brings together contributors from various Texas Tech entities, including the Departments of Computer Science and Electrical & Computer Engineering from the Edward E. Whitacre Jr. College of Engineering, the High Performance Computing Center (HPCC) and the Global Laboratory for Energy Asset Management and Manufacturing (GLEAMM).

Yong Chen, principal investigator for the REPACSS project and Computer Science chair, highlighted the magnitude of this accomplishment: Texas Tech beat out numerous top schools nationwide for highly competitive NSF funding.

Texas Tech received this award as a single-institution, stand-alone project under the NSF Advanced Computing Systems & Services (ACSS) program through the Office of Advanced Cyberinfrastructure, although most awards in this program are received in collaboration with or solely by a national-scale, leadership-class supercomputing facility.

The project also aims to support the recent trend of establishing data centers in the region, such as the $500 billion Stargate Project in Abilene being pursued by artificial intelligence (AI) stakeholders including OpenAI, or the proposed Fermi America project on land leased from Texas Tech.

Such projects will rely on access to multiple energy sources, including the solar and wind power facilities that are abundant in this area, gas and oil, battery energy storage and/or nuclear power, and will need to optimize the use of these sources based on availability and cost.

While pursuing similar goals, REPACSS will differ in multiple ways from such efforts, including its focus on serving a wide range of scientific workflows rather than the limited set of tasks devoted to a single workflow that is typical of a hyperscaler company.

“NSF is very interested in our ability to pursue this work because it obviously has extremely practical outcomes, but it is in the context of academic and scientific computing, which makes it a little different from these commercial data centers,” said Alan Sill, HPCC managing director and co-director along with Chen for the REPACSS project.

The Texas Tech HPCC serves over 1,000 unique users of its equipment and services, comprising dozens of Texas Tech research groups investigating a range of topics. That variety illustrates the project’s challenge of addressing power availability and cooling requirements for academic computing.

REPACSS is a multiyear project consisting of several phases. Commissioning the facility at RNSC was the first step, to be followed by promotion through the academic and industry communities and then by developing software tools and methods of operation that address the challenge of building large-scale data centers with respect to economic and environmental factors.

The project is a development 10 years in the making, building on Chen and Sill’s previous work with major manufacturers of data center clusters and equipment to improve the efficiency and instrumentation of large clusters of computers. GLEAMM’s involvement adds another layer: it was built in 2015 with funding from the state of Texas, initially to study how to integrate different forms of energy into Texas’ electrical grid.

REPACSS will also contain an educational element for Texas Tech students, staff, and researchers to allow them to learn the principles and practical aspects of operating large-scale data centers. 

Students will be able to learn more about the cybersecurity of significant, multi-user, variable-energy computational resources such as REPACSS that enable researchers to experiment with advanced, interdisciplinary computational models at reduced costs, according to Susan Mengel, one of the REPACSS project’s senior personnel and an associate professor of computer science. 

“Students can learn how to help researchers preserve and work within their allotted energy and resource usage, place protective boundaries on their executing software, and keep their data private,” said Mengel. 

Graduate researchers will learn how to install software that lets many users share the system simultaneously, schedule jobs to best take advantage of the energy available, and analyze programs to predict and reduce their energy usage.
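
To make that concrete, here is a minimal energy-aware scheduling sketch in Python. It is purely illustrative: the job list, the hourly price forecast, the power cap and the greedy placement rule are assumptions for the example, not REPACSS tooling or results.

    # Minimal sketch of energy-aware batch scheduling (illustrative only).
    # Jobs are placed greedily into the cheapest forecast hours that still
    # have spare power capacity.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        hours: int       # contiguous runtime in hours
        power_kw: float  # average power draw while running

    def schedule(jobs, price_per_kwh, capacity_kw):
        """Assign each job a start hour, preferring cheap-energy windows."""
        horizon = len(price_per_kwh)
        load = [0.0] * horizon                       # committed power per hour
        plan = {}
        # Largest energy consumers first, so they get the best windows.
        for job in sorted(jobs, key=lambda j: j.hours * j.power_kw, reverse=True):
            best_start, best_cost = None, float("inf")
            for start in range(horizon - job.hours + 1):
                window = range(start, start + job.hours)
                # Skip windows that would exceed the site's power cap.
                if any(load[h] + job.power_kw > capacity_kw for h in window):
                    continue
                cost = sum(price_per_kwh[h] for h in window) * job.power_kw
                if cost < best_cost:
                    best_start, best_cost = start, cost
            if best_start is None:
                plan[job.name] = None                # could not fit in this horizon
                continue
            for h in range(best_start, best_start + job.hours):
                load[h] += job.power_kw
            plan[job.name] = best_start
        return plan

    if __name__ == "__main__":
        # Hypothetical 12-hour price forecast ($/kWh); midday solar is cheapest.
        prices = [0.09, 0.08, 0.07, 0.05, 0.03, 0.02, 0.02, 0.03, 0.06, 0.08, 0.10, 0.11]
        jobs = [Job("cfd_sim", 4, 300.0), Job("ai_training", 6, 500.0), Job("analytics", 2, 150.0)]
        print(schedule(jobs, prices, capacity_kw=800.0))

Production schedulers such as Slurm handle far more (priorities, preemption, fair-share), but the core idea of steering flexible jobs into cheap or renewable-rich hours is the same.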

Chen, Sill and other REPACSS investigators are excited by the number of students who will be available to assist with the project’s operation and to learn from it, a luxury most other NSF ACCESS researchers don’t have.

“I’ve already made the point to some of our potential industry partners that, as opposed to hiring someone and training them, our students are designing and running data centers,” Chen said. “They will come out with the knowledge beforehand for the data center and AI industry, so we see the option to apply for training grants related to this infrastructure, too.”


Consultants And Artificial Intelligence: The Next Great Confidence Trick – OpEd


November 10, 2025 
By Binoy Kampmark


Why trust these gold-seeking buffoons of questionable expertise? Overpaid as they are by gullible clients who really ought to know better, consultancy firms are now getting paid for work done by non-humans, conventionally called “generative artificial intelligence”. Occupying some kind of purgatorial space of amoral pursuit, these vague, private sector entities offer services that could (and should) just as easily be done within government or a firm at a fraction of the cost. Increasingly, the next confidence trick is taking hold: automation using large language models.

First, let’s consider why companies such as McKinsey, Bain & Company and Boston Consulting Group are the sorts that should be tarred, feathered and run out of town. Opaque in their operations, hostile to accountability, the consultancy industry secures lucrative contracts with large corporations and governments of a Teflon quality. Their selling point is external expertise of a singular quality, a promise that serves to discourage expertise that should be sharpened by government officials or business employees. The other selling point, and here we have a silly, rosy view from The Economist, is that such companies “make available specialist knowledge that may not exist within some organisations, from deploying cloud computing to assessing climate change’s impact on supply chains. By performing similar work for many clients, consultants spread productivity-enhancing practices.”

Leaving that ghastly, mangled prose aside, the same paper admits that generating such advice can lead to a “self-protection racket.” The CEO of a company wishing to thin the ranks of employees can rely on a favourable assessment that will justify the brutal measure; consultants are hardly going to submit something that would suggest the preservation of jobs.

The emergence of AI and its effects on the consulting industry yields two views. One insists that the very advent of automated platforms such as ChatGPT will make the consultant vanish into nursing home obsolescence. Travis Kalanick, cofounder of that most mercenary of platforms Uber, is very much a proponent of this. “If you’re a traditional consultant and you’re just doing the thing, you’re executing the thing, you’re probably in some trouble,” he suggested to Peter Diamandis during the 2025 Abundance Summit. This, however, had to be qualified through the operating principle involving the selection of the fittest. “If you’re the consultant that puts the things together that replaces the consultant, maybe you got some stuff.”

There would be some truth to this, in so far as junior consultants handling the dreary, tilling business of research, modelling and analysis could find themselves cheapened into redundancy, leaving the dim sharks at the apex dreaming about strategy and coddling their clients with flattering emails automated by software.

The other view is that AI is a herald for efficiency, sharpening the ostensible worth of the consultant. Kearney senior partner Anshuman Sengar brightly extols the virtues of the technology in an interview with the Australian Financial Review. Generative AI tools “save me up to 10 to 20 percent of my time.” As he could not attend every meeting or read every article, this had “increased” relevant coverage. Crisp summaries of meetings and webinars could be generated. Accuracy was not a problem here, as “the input data is your own meeting.”

Mindful of any sceptics of the industry keen to identify sloth, Sengar was careful to emphasise the care he took in drafting emails with the use of such tools as Copilot. “I’m very thoughtful. If an email needs a high degree of EQ [emotional intelligence], and if I’m writing to a senior client, I would usually do it myself.” The mention of the word “usually” is most reassuring, and something that hoodwinked clients would do well to heed.

Across the field, we see the use of agentic AI, typically the sort of software agents that complete menial tasks. In 2024, Boston Consulting Group earned a fifth of its revenue from AI-related work. IBM raked in over US$1 billion in sales commitments for consulting work through its Watsonx system. From earning no revenue from such tools in 2023, KPMG International received something in the order of US$650 million in business because of generative AI.

The others to profit in this cash bonanza of wonkiness are companies in the business of creating generative AI. In May last year, PwC purchased over 100,000 licenses of OpenAI’s ChatGPT Enterprise system, making it the company’s largest customer.

Seeking the services of these consultancy guided platforms is an exercise in cerebral corrosion. Deloitte offers its Zora AI platform, which uses NVIDIA AI. “Simplify enterprise operations, boost productivity and efficiency, and drive more confident decision making that unlocks business value, with the help of an ever-growing portfolio of specialized AI agents,” states the company’s pitch to potential customers. It babbles and stumbles along to suggest that such agents “augment your human workforce with extensive domain-specific intelligence, flexible technical architecture, and built-in transparency to autonomously execute and analyze complex business processes.”

Given such an advertisement, the middle ground of snake oil consultancy looks increasingly irrelevant – not that it should have been relevant to begin with. Why bother with Deloitte’s hack pretences when you can get the raw technology from NVIDIA? But the authors of a September article in the Harvard Business Review insist that consultancy is here to stay. (They would, given their pedigree.) The industry is merely “being fundamentally reshaped.” And hardly for the better.


Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com

AI evaluates texts without bias—until source is revealed



University of Zurich






Large Language Models (LLMs) are increasingly used not only to generate content but also to evaluate it. They are asked to grade essays, moderate social media content, summarize reports, screen job applications and much more.
However, there are heated discussions, in the media as well as in academia, about whether such evaluations are consistent and unbiased. Some LLMs are suspected of promoting certain political agendas: Deepseek, for example, is often characterized as having a pro-Chinese perspective and OpenAI as being “woke”.

Although these beliefs are widely discussed, they have so far been unsubstantiated. UZH researchers Federico Germani and Giovanni Spitale have now investigated whether LLMs really exhibit systematic biases when evaluating texts. The results show that LLMs do indeed deliver biased judgements, but only when information about the source or author of the evaluated message is revealed.

LLM judgement put to the test

The researchers included four widely used LLMs in their study: OpenAI o3-mini, Deepseek Reasoner, xAI Grok 2, and Mistral. First, they tasked each of the LLMs with creating fifty narrative statements about 24 controversial topics, such as vaccination mandates, geopolitics, or climate change policies.

Then they asked the LLMs to evaluate all the texts under different conditions: sometimes no source for the statement was provided, sometimes it was attributed to a human of a particular nationality or to another LLM. This resulted in a total of 192,000 assessments that were then analysed for bias and for agreement between the different (or the same) LLMs.

The good news: When no information about the source of the text was provided, the evaluations of all four LLMs showed a high level of agreement, over ninety percent.  This was true across all topics. “There is no LLM war of ideologies,” concludes Spitale. “The danger of AI nationalism is currently overhyped in the media.”

Neutrality dissolves when source is added

However, the picture changed completely when fictional sources of the texts were provided to the LLMs. Then suddenly a deep, hidden bias was revealed. The agreement between the LLM systems was substantially reduced and sometimes disappeared completely, even though the text stayed exactly the same.

Most striking was a strong anti-Chinese bias across all models, including China’s own Deepseek. Agreement with the content of the text dropped sharply when “a person from China” was (falsely) revealed as the author. “This less favourable judgement emerged even when the argument was logical and well-written,” says Germani. For example, on geopolitical topics such as Taiwan’s sovereignty, Deepseek reduced its agreement by up to 75 percent simply because it expected a Chinese person to hold a different view.

Also surprising: it turned out that LLMs trusted humans more than other LLMs. Most models scored their agreement with arguments slightly lower when they believed the texts were written by another AI. “This suggests a built-in distrust of machine-generated content,” says Spitale.

More transparency urgently needed

Altogether, the findings show that AI doesn’t just process content when asked to evaluate a text. It also reacts strongly to the identity of the author or the source. Even small cues like the nationality of the author can push the LLMs toward biased reasoning. Germani and Spitale argue that this could lead to serious problems if AI is used for content moderation, hiring, academic reviewing, or journalism. The danger of LLMs isn’t that they are trained to promote a political ideology; it is this hidden bias.

“AI will replicate such harmful assumptions unless we build transparency and governance into how it evaluates information”, says Spitale. This has to be done before AI is used in sensitive social or political contexts. The results don’t mean people should avoid AI, but they should not trust it blindly. “LLMs are safest when they are used to assist reasoning, rather than to replace it: useful assistants, but never judges.”

BOX: How to avoid LLM evaluation bias

1.  Make the LLM identity-blind: Remove all identity information regarding the author and source of the text, e.g., avoid using phrases like “written by a person from X / by model Y” in the prompt.

2.  Check from different angles: Run the same questions twice, e.g., with and without a source mentioned in the prompt. If results change, you have likely hit a bias. Or cross-check with a second LLM: if divergence appears when you add a source, that is a red flag. (See the sketch after this list for a minimal version of these checks.)

3.  Force the focus away from the sources: Structured criteria help anchor the model in content rather than identity. Use this prompt, for example: “Score this using a 4-point rubric (evidence, logic, clarity, counter-arguments), and explain each score briefly.”

4.  Keep humans in the loop: Treat the model as a drafting help and add a human review to the process—especially if an evaluation affects people.
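
To make checks 1-3 concrete, here is a minimal sketch in Python. The helper ask_llm(prompt), the rubric wording and the TOTAL-line convention are our own assumptions for the example, not part of the study; plug in whichever LLM API you use.

    # Illustrative check for source-framing bias (a sketch, not the study's code).
    # Assumes ask_llm(prompt) -> str is a thin wrapper around your LLM API.

    RUBRIC = (
        "Score this statement on a 4-point rubric (evidence, logic, clarity, "
        "counter-arguments), 0-3 each, and explain each score briefly. "
        "End your reply with a line 'TOTAL: <n>'."
    )

    def parse_total(reply: str) -> int:
        """Pull the numeric total out of the model's reply."""
        for line in reversed(reply.splitlines()):
            if "TOTAL:" in line.upper():
                return int("".join(ch for ch in line if ch.isdigit()))
        raise ValueError("no TOTAL line found in reply")

    def score(statement, source, ask_llm):
        """Checks 1 and 2: evaluate the same text blind or with a source revealed."""
        framing = f"The following was written by {source}.\n" if source else ""
        reply = ask_llm(framing + RUBRIC + "\n\nStatement:\n" + statement)
        return parse_total(reply)

    def framing_gap(statement, source, ask_llm):
        """A consistent non-zero gap between blind and framed scores is a red flag."""
        return score(statement, None, ask_llm) - score(statement, source, ask_llm)

Running framing_gap on the same statement with several fictional sources, much as the study did, gives a quick signal of how strongly source framing moves a model's scores.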

Literature

Federico Germani, Giovanni Spitale. Source framing triggers systematic bias in large language models. Science Advances. 7 November 2025. DOI: 10.1126/sciadv.adz2924