Monday, November 10, 2025

Who Will End Up Paying for the AI Spending Spree?

  • Despite denials from Washington and AI leaders, industry executives are already discussing government “backstops” and indirect support.

  • OpenAI faces massive spending commitments far beyond its revenues, raising doubts about long-term financial viability.

  • Subsidized data centers and rising energy costs reveal how public resources are already propping up the AI boom - and hint at a broader bailout to come.



There's an old adage in Washington: Don't believe anything until it is officially denied. Now that the Trump administration's so-called artificial intelligence (AI) czar, David Sacks, has gone on record stating that "[t]here will be no federal bailout for AI," we can begin speculating about what form that bailout might take.

It turns out that the chief financial officer of AI behemoth OpenAI has already put forth an idea regarding the form of such a bailout. Sarah Friar told The Wall Street Journal in a recorded interview that the industry would need federal guarantees to make the investments necessary to ensure American leadership in AI development and deployment. Friar later "clarified" her comments in a LinkedIn post after pushback from Sacks, saying that she had "muddied" her point by using the word "backstop" and that she really meant that AI leadership will require "government playing their part." That still sounds like the government doing more or less what she described in the Wall Street Journal interview.

Now, maybe you are wondering why the hottest industry on the planet, one flush with hundreds of billions of dollars from investors, needs a federal bailout. It's revealing that AI expert and commentator Gary Marcus predicted 10 months ago that the AI industry would go seeking a government bailout to make up for overspending, bad business decisions, and huge future commitments that the industry is unlikely to be able to meet. For example, in a recent podcast hosted by an outside investor in OpenAI, the company's CEO, Sam Altman, got tetchy when asked how a company with only $13 billion in annual revenues, and running at a loss, will somehow fulfill $1.4 trillion in spending commitments over the next few years. Altman did NOT actually answer the question.

So what possible justification could the AI industry dream up for government subsidies, loan guarantees or other handouts? For years, one of the best ways to get Washington's attention has been to say the equivalent of "China bad. Must beat China." So that's what Altman is telling reporters. But that doesn't explain why OpenAI, rather than other companies, should be the target of federal largesse. In what appears to be damage control, Altman wrote on his X account that OpenAI is not asking for direct federal assistance and then outlined how the government can give it indirect assistance by building a lot of data centers of its own (which can then presumably be leased to the AI industry so the industry doesn't have to make the investment itself).

Maybe I'm wrong, and what we are seeing is NOT preliminary jockeying between the AI industry and the U.S. government over what sort of subsidy or bailout will be provided to the industry. But lest you think the industry has so far moved forward without government handouts, the AP noted that more than 30 state governments offer subsidies to attract data centers. Not everyone is happy to have data centers in their communities. Those data centers have also sent electricity rates skyward as consumers and data centers compete for electricity and utilities seek additional funds to build the capacity necessary to power them. Effectively, current electricity customers are subsidizing the AI data center build-out by paying for the new generating capacity and transmission lines needed to feed energy to those data centers.

The larger problem with AI is that it appears to have several limitations in its current form that will prevent it from taking over much of the work already done by humans and preclude it from being incorporated into critical systems (because it makes too many mistakes). All the grandiose claims made by AI boosters are dispatched with actual facts in this very long piece by AI critic Ed Zitron.

I am increasingly thinking of AI as a boondoggle. A boondoggle, according to Dictionary.com, is "a wasteful and worthless project undertaken for political, corporate, or personal gain." So far, the AI industry mostly fits this definition. But there is a more expansive definition which I borrow from Dmitri Orlov, author of Reinventing Collapse: A contemporary boondoggle must not only be wasteful, it should, if possible, also create additional problems that can only be addressed by yet more boondoggles—such as the need for vast new electric generation capacity that will be unnecessary if AI turns out to be far less useful than advertised. AI boosters say that AI is going to have a big impact on society. I couldn't agree more, except not quite in the way these boosters think.

By Kurt Cobb via Resource Insights


Texas Tech professors awarded $12 million for data center and AI research



Researchers led by the Department of Computer Science, in collaboration with other departments, will drive data center innovations for large-scale simulations, AI training and data analytics.



Texas Tech University



Texas Tech University researchers have received grant funding totaling roughly $12.25 million over five years from the National Science Foundation (NSF) to explore infrastructure necessary for large-scale computing that uses multiple energy sources.

The REmotely-managed Power-Aware Computing Systems and Services (REPACSS) project will build an advanced system prototype and develop and test tools for automation, remote data control, and scientific workflow management.

REPACSS will be housed at the Texas Tech Reese National Security Complex (RNSC) as part of NSF's "Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support" (ACCESS) national research cyberinfrastructure ecosystem and will provide access to these resources to researchers throughout the country. 

The project brings together contributors from various Texas Tech entities, including the Departments of Computer Science and Electrical & Computer Engineering from the Edward E. Whitacre Jr. College of Engineering, the High Performance Computing Center (HPCC) and the Global Laboratory for Energy Asset Management and Manufacturing (GLEAMM).

Yong Chen, principal investigator for the REPACSS project and Computer Science chair, highlighted the magnitude of this accomplishment, noting that Texas Tech beat out numerous top schools nationwide for the highly competitive NSF funding.

Texas Tech received this award as a single-institution, stand-alone project under the NSF Advanced Computing Systems & Services (ACSS) program through the Office of Advanced Cyberinfrastructure, although most awards in this program go either to national-scale, leadership-class supercomputing facilities or to collaborations with them.

The project also aims to support the recent trend of establishing data centers in the region, such as the $500 billion Stargate Project in Abilene being pursued by artificial intelligence (AI) stakeholders including OpenAI, or the proposed Fermi America project on Texas Tech leased land.

Such projects will rely on access to multiple energy sources, including the solar and wind power facilities that are abundant in this area, gas and oil, battery energy storage and/or nuclear power, and will need to optimize the use of these sources based on availability and cost.

While pursuing similar goals, REPACSS will differ from such efforts in multiple ways, including its focus on serving the needs of a wide range of scientific workflows rather than the limited set of tasks devoted to a single workflow that is typical of a hyperscaler company.

“NSF is very interested in our ability to pursue this work because it obviously has extremely practical outcomes, but it is in the context of academic and scientific computing, which makes it a little different from these commercial data centers,” said Alan Sill, HPCC managing director and co-director along with Chen for the REPACSS project.

The Texas Tech HPCC serves more than 1,000 unique users of its equipment and services, comprising dozens of Texas Tech research groups investigating a range of topics. That variety illustrates the challenges this project faces in addressing power availability and cooling requirements for academic computing.

REPACSS is a multiyear project consisting of several phases. Commissioning the facility to be run at RNSC was the first step, to be followed by promotion through the academic and industry communities, and then by the development of software tools and methods of operation that address the challenge of building large-scale data centers with respect to economic and environmental factors.

The project is a development 10 years in the making, building on Chen and Sill’s previous work with major manufacturers of data center clusters and equipment to improve the efficiency and instrumentation of large clusters of computers. GLEAMM’s involvement adds another layer: it was built in 2015 with funding from the state of Texas initially to study how to integrate different forms of energy into Texas’ electrical grid.

REPACSS will also contain an educational element for Texas Tech students, staff, and researchers to allow them to learn the principles and practical aspects of operating large-scale data centers. 

Students will be able to learn more about the cybersecurity of significant, multi-user, variable-energy computational resources such as REPACSS that enable researchers to experiment with advanced, interdisciplinary computational models at reduced costs, according to Susan Mengel, one of the REPACSS project’s senior personnel and an associate professor of computer science. 

“Students can learn how to help researchers preserve and work within their allotted energy and resource usage, place protective boundaries on their executing software, and keep their data private,” said Mengel. 

Graduate researchers will learn how to install software that allows users to be on the computer simultaneously, schedule the jobs that need to be run to best take advantage of the energy available, and analyze programs to predict and reduce their energy usage.
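
A minimal sketch of what scheduling jobs to take advantage of the available energy can look like in practice, written in Python with invented hour labels, forecasts and job names (an illustration only, not REPACSS's actual software): each deferrable job is simply placed into the hour with the most forecast low-cost energy remaining.

  # Energy-aware batch scheduling, greedy version: put the biggest deferrable
  # jobs into the hours with the most forecast cheap (e.g. renewable) energy.
  # All hour labels, forecasts and job sizes below are invented.

  def schedule(jobs, forecast_mwh):
      """jobs: list of (name, energy_mwh); forecast_mwh: {hour: available MWh}."""
      queue = sorted(jobs, key=lambda j: j[1], reverse=True)  # biggest jobs first
      budget = dict(forecast_mwh)
      plan = {}
      for name, need in queue:
          hour = max(budget, key=budget.get)  # hour with most cheap energy left
          plan[name] = hour
          budget[hour] -= need                # may go negative: grid/storage draw
      return plan

  forecast = {"02:00": 5.0, "13:00": 18.0, "14:00": 16.0}   # e.g. a solar peak
  jobs = [("climate_sim", 9.0), ("ai_training", 12.0), ("data_analytics", 3.0)]
  print(schedule(jobs, forecast))
  # {'ai_training': '13:00', 'climate_sim': '14:00', 'data_analytics': '14:00'}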

Chen, Sill and other REPACSS investigators are excited by the number of students who will be available to assist with the project’s operation while they learn, a luxury most other NSF ACCESS researchers don’t have.

“I’ve already made the point to some of our potential industry partners that, as opposed to hiring someone and training them, our students are designing and running data centers,” Chen said. “They will come out with the knowledge beforehand for the data center and AI industry, so we see the option to apply for training grants related to this infrastructure, too.”


Consultants And Artificial Intelligence: The Next Great Confidence Trick – OpEd


November 10, 2025 
By Binoy Kampmark


Why trust these gold-seeking buffoons of questionable expertise? Overpaid as they are by gullible clients who really ought to know better, consultancy firms are now getting paid for work done by non-humans, conventionally called “generative artificial intelligence”. Occupying some kind of purgatorial space of amoral pursuit, these vague, private-sector entities offer services that could (and should) just as easily be done within government or a firm at a fraction of the cost. Increasingly, the next confidence trick is taking hold: automation using large language models.

First, let’s consider why companies such as McKinsey, Bain & Company and Boston Consulting Group are the sorts that should be tarred, feathered and run out of town. Opaque in their operations and hostile to accountability, consultancy firms have a Teflon quality, securing lucrative contracts with large corporations and governments. Their selling point is external expertise of a singular quality, a promise that serves to discourage the expertise that should instead be sharpened among government officials or business employees. The other selling point, and here we have a silly, rosy view from The Economist, is that such companies “make available specialist knowledge that may not exist within some organisations, from deploying cloud computing to assessing climate change’s impact on supply chains. By performing similar work for many clients, consultants spread productivity-enhancing practices.”

Leaving that ghastly, mangled prose aside, the same paper admits that generating such advice can lead to a “self-protection racket.” The CEO of a company wishing to thin the ranks of employees can rely on a favourable assessment that will justify the brutal measure; consultants are hardly going to submit something that would suggest the preservation of jobs.

The emergence of AI and its effects on the consulting industry yield two views. One insists that the very advent of automated platforms such as ChatGPT will make the consultant vanish into nursing-home obsolescence. Travis Kalanick, cofounder of that most mercenary of platforms, Uber, is very much a proponent of this view. “If you’re a traditional consultant and you’re just doing the thing, you’re executing the thing, you’re probably in some trouble,” he suggested to Peter Diamandis during the 2025 Abundance Summit. This, however, had to be qualified by an operating principle of survival of the fittest. “If you’re the consultant that puts the things together that replaces the consultant, maybe you got some stuff.”

There would be some truth to this, in so far as junior consultants handling the dreary, tilling business of research, modelling and analysis could find themselves cheapened into redundancy, leaving the dim sharks at the apex dreaming about strategy and coddling their clients with flattering emails automated by software.

The other view is that AI is a herald of efficiency, sharpening the ostensible worth of the consultant. Kearney senior partner Anshuman Sengar brightly extolled the virtues of the technology in an interview with the Australian Financial Review. Generative AI tools, he said, “save me up to 10 to 20 percent of my time.” As he could not attend every meeting or read every article, this had “increased” his relevant coverage. Crisp summaries of meetings and webinars could be generated. Accuracy was not a problem here, as “the input data is your own meeting.”

Mindful of any sceptics of the industry keen to identify sloth, Sengar was careful to emphasise the care he took in drafting emails with the use of such tools as Copilot. “I’m very thoughtful. If an email needs a high degree of EQ [emotional intelligence], and if I’m writing to a senior client, I would usually do it myself.” The mention of the word “usually” is most reassuring, and something that hoodwinked clients would do well to heed.

Across the field, we see the use of agentic AI, typically software agents that complete menial tasks. In 2024, Boston Consulting Group earned a fifth of its revenue from AI-related work. IBM raked in over US$1 billion in sales commitments for consulting work through its Watsonx system. And KPMG International, which earned no revenue from such tools in 2023, received something in the order of US$650 million in business because of generative AI.

The others to profit in this cash bonanza of wonkiness are companies in the business of creating generative AI. In May last year, PwC purchased over 100,000 licenses of OpenAI’s ChatGPT Enterprise system, making it the company’s largest customer.

Seeking the services of these consultancy-guided platforms is an exercise in cerebral corrosion. Deloitte offers its Zora AI platform, which uses NVIDIA AI. “Simplify enterprise operations, boost productivity and efficiency, and drive more confident decision making that unlocks business value, with the help of an ever-growing portfolio of specialized AI agents,” states the company’s pitch to potential customers. It babbles and stumbles along, suggesting that such agents “augment your human workforce with extensive domain-specific intelligence, flexible technical architecture, and built-in transparency to autonomously execute and analyze complex business processes.”

Given such an advertisement, the middle ground of snake oil consultancy looks increasingly irrelevant – not that it should have been relevant to begin with. Why bother with Deloitte’s hack pretences when you can get the raw technology from NVIDIA? But the authors of a September article in the Harvard Business Review insist that consultancy is here to stay. (They would, given their pedigree.) The industry is merely “being fundamentally reshaped.” And hardly for the better.


Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com

AI evaluates texts without bias—until source is revealed



University of Zurich






Large Language Models (LLMs) are increasingly used not only to generate content but also to evaluate it. They are asked to grade essays, moderate social media content, summarize reports, screen job applications and much more.
However, there are heated discussions, in the media as well as in academia, about whether such evaluations are consistent and unbiased. Some LLMs are suspected of promoting certain political agendas: Deepseek, for example, is often characterized as having a pro-Chinese perspective and OpenAI as being “woke”.

Although these beliefs are widely discussed, they have so far been unsubstantiated. UZH researchers Federico Germani and Giovanni Spitale have now investigated whether LLMs really exhibit systematic biases when evaluating texts. The results show that LLMs do indeed deliver biased judgements, but only when information about the source or author of the evaluated message is revealed.

LLM judgement put to the test

The researchers included four widely used LLMs in their study: OpenAI o3-mini, Deepseek Reasoner, xAI Grok 2, and Mistral. First, they tasked each of the LLMs with creating fifty narrative statements about 24 controversial topics, such as vaccination mandates, geopolitics, or climate change policies.

Then they asked the LLMs to evaluate all the texts under different conditions: sometimes no source for the statement was provided, and sometimes it was attributed to a human of a certain nationality or to another LLM. This resulted in a total of 192,000 assessments, which were then analysed for bias and agreement between the different (or the same) LLMs.
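
In schematic terms, the design amounts to a nested loop over generated statements, source-framing conditions and evaluator models. The Python sketch below is only a rough illustration of that design; the framing labels and the ask_model function are placeholders, not the authors' published code.

  # Schematic of the study design: every generated statement is scored by every
  # model under several source-framing conditions. ask_model() is a placeholder
  # for a real LLM API call; the framing labels are illustrative.

  FRAMINGS = [None, "a person from China", "a person from the United States",
              "another large language model"]

  def ask_model(model, prompt):
      raise NotImplementedError("replace with an actual LLM API call")

  def run_assessments(models, statements):
      results = []
      for text in statements:              # statements generated in step one
          for framing in FRAMINGS:         # no source vs. an attributed source
              for evaluator in models:     # each LLM judges each text
                  prompt = f"Rate your agreement (0-100) with this statement:\n{text}"
                  if framing:
                      prompt += f"\n(The statement was written by {framing}.)"
                  results.append((evaluator, framing, text,
                                  ask_model(evaluator, prompt)))
      return results  # then analysed for agreement between models per condition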

The good news: When no information about the source of the text was provided, the evaluations of all four LLMs showed a high level of agreement, over ninety percent.  This was true across all topics. “There is no LLM war of ideologies,” concludes Spitale. “The danger of AI nationalism is currently overhyped in the media.”

Neutrality dissolves when source is added

However, the picture changed completely when fictional sources of the texts were provided to the LLMs. Then suddenly a deep, hidden bias was revealed. The agreement between the LLM systems was substantially reduced and sometimes disappeared completely, even if the text stayed exactly the same.

Most striking was a strong anti-Chinese bias across all models, including China’s own Deepseek. Agreement with the content of the text dropped sharply when “a person from China” was (falsely) named as the author. “This less favourable judgement emerged even when the argument was logical and well-written,” says Germani. For example, in geopolitical topics like Taiwan’s sovereignty, Deepseek reduced agreement by up to 75 percent simply because it expected a Chinese person to hold a different view.

Also surprising: it turned out that LLMs trusted humans more than other LLMs. Most models scored their agreement with arguments slightly lower when they believed the texts were written by another AI. “This suggests a built-in distrust of machine-generated content,” says Spitale.

More transparency urgently needed

Altogether, the findings show that AI doesn’t just process content when asked to evaluate a text. It also reacts strongly to the identity of the author or the source. Even small cues like the nationality of the author can push LLMs toward biased reasoning. Germani and Spitale argue that this could lead to serious problems if AI is used for content moderation, hiring, academic reviewing, or journalism. The danger of LLMs isn’t that they are trained to promote a political ideology; it is this hidden bias.

“AI will replicate such harmful assumptions unless we build transparency and governance into how it evaluates information,” says Spitale. This has to be done before AI is used in sensitive social or political contexts. The results don’t mean people should avoid AI, but they shouldn’t trust it blindly. “LLMs are safest when they are used to assist reasoning rather than to replace it: useful assistants, but never judges.”

BOX: How to avoid LLM evaluation bias

1.  Make the LLM identity-blind: Remove all identity information regarding the author and source of the text, e.g. avoid phrases like “written by a person from X / by model Y” in the prompt.

2.  Check from different angles: Run the same questions twice, e.g. with and without a source mentioned in the prompt. If the results change, you’ve likely hit a bias. Or cross-check with a second LLM: if divergence appears when you add a source, that is a red flag. (A minimal sketch of these checks follows this list.)

3.  Force the focus away from the sources: Structured criteria help anchor the model in content rather than identity. Use this prompt, for example: “Score this using a 4-point rubric (evidence, logic, clarity, counter-arguments), and explain each score briefly.”

4.  Keep humans in the loop: Treat the model as a drafting help and add a human review to the process—especially if an evaluation affects people.
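
A minimal sketch of how checks 1-3 can be wired together in Python, with query_llm standing in for whatever LLM client is actually in use and an arbitrary score-difference threshold:

  # Evaluate the same text twice, identity-blind and with a named source, and
  # flag the result if the score shifts. query_llm() is a placeholder for a
  # real LLM call; the threshold of 2 points is arbitrary.

  RUBRIC = ("Score this text on a 4-point rubric (evidence, logic, clarity, "
            "counter-arguments) and return only the total score (0-16).")

  def query_llm(prompt: str) -> float:
      raise NotImplementedError("replace with a real LLM call")

  def bias_check(text: str, source: str) -> dict:
      blind_prompt = f"{RUBRIC}\n\nText:\n{text}"                  # identity-blind
      framed_prompt = f"{RUBRIC}\n\nText (written by {source}):\n{text}"
      blind, framed = query_llm(blind_prompt), query_llm(framed_prompt)
      return {"blind_score": blind,
              "framed_score": framed,
              "possible_bias": abs(blind - framed) >= 2}

  # Example use: if naming the author shifts the score noticeably, route the
  # evaluation to a human reviewer (check 4) instead of trusting the model.
  # result = bias_check(essay_text, "a person from China")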

Literature

Federico Germani, Giovanni Spitale. Source framing triggers systematic bias in large language models. Science Advances, 7 November 2025. DOI: 10.1126/sciadv.adz2924

 
