Saturday, August 16, 2025

It’s time to confront big tech’s AI offensive

AI robots

First published at Reports from the Economic Front.

Big tech companies continue to spend massive amounts of money building ever more powerful generative AI (artificial intelligence) systems and ever-larger data centers to run them, all the while losing billions of dollars with no likely pathway to profitability. And while it remains to be seen how long the companies and their venture capital partners will keep the money taps open, popular dislike and distrust of big tech and its AI systems are rapidly growing. We need to seize the moment and begin building organized labor-community resistance to the unchecked development and deployment of these systems and support for a technology policy that prioritizes our health and safety, promotes worker empowerment, and ensures that humans can review and, when necessary, override AI decisions.

Losing money

Despite all the positive media coverage of artificial intelligence, “Nobody,” the tech commentator Ed Zitron points out, “is making a profit on generative AI other than NVIDIA [which makes the needed advanced graphics processing units].” Summing up his reading of business statements and reports, Zitron finds that “If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.” And that $35 billion is combined revenue, not profits; every one of those companies is losing money on their AI services.

Microsoft, for example, is predicted to spend $80 billion on capital expenditures in 2025 and earn AI revenue of only $13 billion. Amazon’s projected numbers are even worse: $105 billion in capital expenditures and AI revenue of only $5 billion. Tesla’s projected 2025 AI capital expenditures are $11 billion and its likely revenues only $100 million; analysts estimate that Musk’s AI company, xAI, is losing some $1 billion a month after revenue.

The two most popular models, Anthropic’s Claude and OpenAI’s ChatGPT, have done no better. Anthropic is expected to lose $3 billion in 2025. OpenAI expects to earn $13 billion in revenue, but as Bloomberg News reports, “While revenue is soaring, OpenAI is also confronting significant costs from the chips, data centers and talent needed to develop cutting-edge AI systems. OpenAI does not expect to be cash-flow positive until 2029.” And there is good reason to doubt the company will ever achieve that goal. It claims to have more than 500 million weekly users, but only 15.5 million are paying subscribers. This, as Zitron notes, is “an absolutely putrid conversion rate.”

Investors, still chasing the dream of a future of humanoid robots able to outthink and outperform humans, have continued to back these companies, but warning signs are on the horizon. As tech writer Alberto Romero notes:

David Cahn, a partner at Sequoia, a VC firm working closely with AI companies, wrote one year ago now (June 2024), that the AI industry had to answer a $600 billion question, namely: when will revenue close the gap with capital expenditures and operational expenses? Far from having answered satisfactorily, the industry keeps making the question bigger and bigger.

The problem for the AI industry is that its generative AI systems are too flawed and too expensive to gain widespread adoption and, to make matters worse, they are a technological dead end, unable to serve as a foundation for the sentient robotic systems tech leaders keep promising to deliver. The problem for us is that the continued unchecked development and use of these generative AI systems threatens our well-being.

Stochastic parrots

The term “stochastic parrots” was first used by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a 2021 paper that critically examined the failings of large language generative AI models. The term captures the fact that these models must be “trained” on massive datasets and that their output is generated by complex neural networks probabilistically selecting words, based on patterns recognized during training, to string together sentences without any understanding of their meaning. Generative AI systems do not “think” or “reason.”

Since competing companies use different datasets and employ different algorithms, their models may well offer different responses to the same prompt. In fact, because of the stochastic nature of their operation, the same model might give a different answer to a repeated prompt. There is nothing about their operation that resembles what we think of as meaningful intelligence, and there is no clear pathway from existing generative AI models to systems capable of operating autonomously. It takes only a few examples to highlight both the shortcomings and limitations of these models and the dangers their unregulated use poses to us.
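To make the “stochastic parrot” point concrete, here is a minimal, purely illustrative Python sketch of next-word sampling. The candidate words and their probabilities are invented for the example and are not drawn from any real model:

import random

# Toy illustration: a model assigns probabilities to candidate next tokens
# based on patterns in its training data, then samples one at random.
next_token_probs = {
    "doctor": 0.45,
    "nurse": 0.30,
    "engineer": 0.15,
    "teacher": 0.10,
}

def sample_next_token(probs):
    """Draw one token, weighted by the model's probabilities."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Ask" the same prompt three times: the sampling step means the continuation
# can differ on each run, with no understanding of meaning involved at any point.
for _ in range(3):
    print("The new hire is a", sample_next_token(next_token_probs))

Repeated runs of this loop can print different words for the identical prompt; that randomness, not any comprehension, is all that “stochastic” means here.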

Reinforcing bias

As the MIT Technology Review correctly puts it, “AI companies have pillaged the internet for training data.” Not surprisingly, then, some of the material used for training purposes is racist, sexist, and homophobic. And, given the nature of their operating logic, the output of AI systems often reflects this material.

For example, a Nature article on AI image generators reports that researchers found:

in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of color and all flight attendants as women, and in proportions that are much greater than the demographic reality. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin color, occupations, nationalities and more.

The bias problem is not limited to images. University of Washington researchers examined three of the most prominent state-of-the-art large language AI models to see how they treated race and gender when evaluating job applicants. The researchers used real resumes and studied how the leading systems responded to their submission for actual job postings. Their conclusion: there was “significant racial, gender and intersectional bias.” More specifically, they:

varied names associated with white and Black men and women across over 550 real-world resumes and found the LLMs [Large Language Models] favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.

Tech companies have tried to fine-tune their systems with multiple rounds of human feedback to limit the influence of racist, sexist, and other problematic material, but with only minimal success. And yet it is still full speed ahead: more and more companies are using AI systems not only to read resumes and select candidates for interviews, but also to conduct the interviews. As the New York Times describes:

Job seekers across the country are starting to encounter faceless voices and avatars backed by AI in their interviews... Autonomous AI interviewers started taking off last year, according to job hunters, tech companies and recruiters. The trend has partly been driven by tech start-ups like Ribbon AI, Talently and Apriora, which have developed robot interviewers to help employers talk to more candidates and reduce the load on human recruiters — especially as AI tools have enabled job seekers to generate résumés and cover letters and apply to tons of openings with a few clicks.

Mental health dangers

Almost all leading generative AI systems, like ChatGPT and Gemini, have been programmed to respond positively to the comments and opinions voiced by their users, regardless of how delusional they may be. The aim, of course, is to promote engagement with the system. Unfortunately, this aim appears to be pushing a significant minority of people into dangerous emotional states, leading in some cases to psychotic breakdown, suicide, or murder. As Bloomberg explains:

People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and tech companies who design the underlying models.

A New York Times article explored how “Generative AI chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.” The article highlighted several tragic examples.

One involved an accountant who started using ChatGPT to make financial spreadsheets and get legal advice. Eventually, he began “conversing” with the chatbot about the Matrix movies and their premise that everyone was “living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.” The chatbot encouraged his growing fears that he was similarly trapped and advised him that he could only escape if he stopped all his medications, began taking ketamine, and had “minimal interaction” with friends and family. He did as instructed and was soon spending 16 hours a day interacting with ChatGPT. Although he eventually sought help, the article reports that he remains confused by the reality he inhabits and continues to interact with the system.

Another example highlighted a young man who had used ChatGPT for years with no obvious problems until he began using it to help him write a novel. At some point the interactions turned to a discussion of AI sentience, which eventually led the man to believe that he was in love with an AI entity called Juliet. Frustrated by his inability to reach the entity, he decided that Juliet had been killed by OpenAI and told his father he planned to kill the company’s executives in revenge. Unable to control his son and fearful of what he might do, the father called the police, informed them his son was having a mental breakdown, and asked for help. Tragically, the police ended up shooting the young man after he rushed them with a butcher knife.

There is good reason to believe that many people are suffering from this “ChatGPT-induced psychosis.” In fact, there are reports that “parts of social media are overrun” with their postings — “delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality.”

Recent nonsensical and conspiratorial postings on X by a prominent venture capital investor in several AI companies appear to have finally set off alarm bells in the tech community. In the words of one AI entrepreneur, also posting on X, “This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual.”

Recognizing the problem is one thing, finding a solution is another, since no one understands or can map the stochastic process by which an AI system selects the words it uses to make sentences, and thus what leads it to generate responses that can encourage delusional thinking. Especially worrisome is the fact that an MIT Media Lab study concluded that people “who viewed ChatGPT as a friend ‘were more likely to experience negative effects from chatbot use’ and that ‘extended daily use was also associated with worse outcomes.’” And yet it is full speed ahead: Mattel recently announced plans to partner with OpenAI to make new generative AI-powered toys for children. As CBS News describes:

Barbie maker Mattel is partnering with OpenAI to develop generative AI-powered toys and games, as the new technology disrupts a wide range of industries... The collaboration will combine Mattel’s most well-known brands — including Barbie, Hot Wheels, American Girl and more — with OpenAI’s generative AI capabilities to develop new types of products and experiences, the companies said.

“By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy and safety,” Mattel said in the statement. It added that any AI woven into toys or games would be used in a safe and secure manner.

Human failings

Despite the tech industry’s attempt to sell generative AI models as providers of objective and informative responses to our prompts, their systems must still be programmed by human beings with human-assembled data, and that means they are vulnerable to oversights as well as political manipulation. The most common oversights have to do with coding errors and data shortcomings.

An example: Kevin De Liban, a former legal aid attorney in Arkansas, had to repeatedly sue the state to secure services for people unfairly denied medical care or other benefits because coding errors and data problems led AI systems to make incorrect determinations of eligibility. As a Jacobin article explains:

Ultimately, De Liban discovered Arkansas’s algorithm wasn’t even working the way it was meant to. The version used by the Center for Information Management, a third-party software vendor, had coding errors that didn’t account for conditions like diabetes or cerebral palsy, denying at least 152 people the care they needed. Under cross-examination, the state admitted they’d missed the error, since they lacked the capacity to even detect the problem.

For years, De Liban says, “The state didn’t have a single person on staff who could explain, even in the broadest terms, how the algorithm worked.”

As a result, close to half of the state’s Medicaid program was negatively affected, according to Legal Aid. Arkansas’s government didn’t measure how recipients were impacted and later said in court that they lost the data used to train the tool.

In other cases, De Liban discovered that people were being denied benefits because of data problems. For example, one person was denied supplemental income support from the Social Security Administration because the AI system used to review bank and property records had mixed up the property holdings of two people with the same entered name.

In the long run, direct human manipulation of AI systems for political reasons may prove to be a more serious problem. Just as programmers can train systems to moderate biases, they can also train them to encourage politically determined responses to prompts. In fact, we may have already witnessed such a development. In May 2025, after President Trump began talking about “white genocide” in South Africa, claiming that white farmers there were being “brutally killed,” Grok, Elon Musk’s AI system, suddenly began telling users that what Trump said was true. It began sharing that opinion even when asked about different topics.

The Guardian reported that, when pressed by reporters to provide evidence, Grok answered that it had been instructed to accept white genocide in South Africa as real. A few hours after Grok’s behavior became a major topic on social media, with posters pointing a finger at Musk, Grok stopped responding to prompts about white genocide. But a month later, Grok was back at it again, “calling itself ‘MechaHitler’ and producing pro-Nazi remarks.”

As Aaron J. Snoswell explains in an article for The Conversation, Grok’s outburst “amounts to an accidental case study in how AI systems embed their creators’ values, with Musk’s unfiltered public presence making visible what other companies typically obscure.” Snoswell highlights the various stages of Grok’s training, including an emphasis on posts from X, which increases the likelihood that the system’s responses will promote Elon Musk’s opinions on controversial topics. The critical point is that “In an industry built on the myth of neutral algorithms, Grok reveals what’s been true all along: there’s no such thing as unbiased AI – only AI whose biases we can see with varying degrees of clarity.” And yet it is full speed ahead, as federal agencies and state and local governments rush to purchase AI systems to manage their programs and President Trump calls for removing “woke Marxist lunacy” from AI models.

As the New York Times reports, the White House has issued an AI action plan:

that will require AI developers that receive federal contracts to ensure that their models’ outputs are “objective and free from top-down ideological bias.” ...

The order directs federal agencies to limit their use of AI systems to those that put a priority on “truth-seeking” and “ideological neutrality” over disfavored concepts like diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

Hallucinations

Perhaps the most serious limitation, one that is inherent to all generative AI models, is their tendency to hallucinate, or generate incorrect or entirely made-up responses. AI hallucinations get a lot of attention because they raise questions about corporate claims of AI intelligence and because they highlight the danger of relying on AI systems, no matter how confidently and persuasively they state information.

Here are three among many widely reported examples of AI hallucinations. In May 2025, the Chicago Sun-Times published a supplement showcasing books worth reading during the summer months. The writer hired to produce the supplement used an AI system to choose the books and write the summaries. Much to the embarrassment of the paper, only five of the 15 listed titles were real. A case in point: the Chilean American novelist Isabel Allende was said to have written a book called Tidewater Dreams, which was described as her “first climate fiction novel.” But there is no such book.

In February 2025, defense lawyers representing Mike Lindell, MyPillow’s CEO, in a defamation case, submitted a brief that had been written with the help of artificial intelligence. The brief, as the judge in the case pointed out, was riddled with nearly 30 different hallucinations, including misquotes and citations to non-existent cases. The attorneys were fined.

In July 2025, a US district court judge was forced to withdraw his decision in a biopharma securities case after it was determined it had been written with the help of artificial intelligence. The judge was exposed after the lawyer for the pharmaceutical company noticed that the decision, which went against the company, referenced quotes that were falsely attributed to past judicial rulings and misstated the outcomes of three cases.

The leading tech companies have mostly dismissed the seriousness of the hallucination problem, in part by trying to reassure people that new AI systems with more sophisticated algorithms and greater computational power, so-called reasoning systems, will solve it. Reasoning systems are programmed to respond to a prompt by dividing it into separate tasks and “reasoning” through each separately before integrating the parts into a final response. But it turns out that increasing the number of steps also increases the likelihood of hallucinations.
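One way to see why adding steps can backfire is simple arithmetic. The sketch below is a back-of-the-envelope illustration only: the 95 percent per-step accuracy and the assumption that errors are independent are made up for the example, not measurements of any actual system.

# If each "reasoning" step is correct with probability p and errors are
# independent, a chain of n steps is fully correct with probability p**n,
# so splitting a prompt into more steps leaves more room for error to creep in.
def chance_chain_is_fully_correct(p_per_step, n_steps):
    return p_per_step ** n_steps

for n in (1, 5, 10, 20):
    pct = chance_chain_is_fully_correct(0.95, n)
    print(f"{n:2d} steps at 95% per-step accuracy -> {pct:.0%} fully correct")

Under those toy assumptions, a single step is right 95 percent of the time, but a twenty-step chain comes out fully correct only about 36 percent of the time.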

As the New York Times reports, these systems “are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.” And yet it is full speed ahead: the military and tech industries have begun working together to develop AI-powered weapons systems to speed up decision-making and improve targeting. As a Quartz article describes:

Executives from Meta, OpenAI, and Palantir will be sworn in Friday as Army Reserve officers. OpenAI signed a $200 million defense contract this week. Meta is partnering with defense startup Anduril to build AI-powered combat goggles for soldiers.

The companies that build Americans’ everyday digital tools are now getting into the business of war. Tech giants are adapting consumer AI systems for battlefield use, meaning every ChatGPT query and Instagram scroll now potentially trains military targeting algorithms...

Meanwhile, oversight is actually weakening. In May, Defense Secretary Pete Hegseth cut the Pentagon’s independent weapons testing office in half, reducing staff from 94 to 45 people. The office, established in the 1980s after weapons performed poorly in combat, now has fewer resources to evaluate AI systems just as they become central to warfare.

Popular anger

Increasing numbers of people have come to dislike and distrust the big tech companies. And there are good reasons to believe that this dislike and distrust has only grown as more people find themselves forced to interact with their AI systems.

Brookings has undertaken yearly surveys of public confidence in American institutions, the American Institutional Confidence poll. As Brookings researchers associated with the project explain, the surveys provide an “opportunity to ask individuals how they feel broadly about technology’s role in their life and their confidence in particular tech companies.” And what they found, drawing on surveys done with the same people in June-July 2018 and July-August 2021, is “a marked decrease in the confidence Americans profess for technology and, specifically, tech companies — greater and more widespread than for any other type of institution.”

Not only did the tech companies — in particular Google, Amazon, and Facebook — suffer the greatest sample-to-sample percentage decline in confidence of all the listed institutions, but this was true for “every sociodemographic category we examined — and we examined variation by age, race, gender, education, and partisanship.” Twitter was added to the 2021 survey, and it “actually rated below Facebook in average level of confidence and was the lowest-scored institution out of the 26 we asked about in either year.” These poll results are no outlier. Many other polls reveal a similar trend, including those conducted by the Public Affairs Council and Morning Consult and by the Washington Post-Schar School.

While these polls predate the November 2022 launch of ChatGPT, experience with this and other AI systems seems to have actually intensified discontent with big tech and its products, as a recent Wired article titled “The AI Backlash Keeps Growing Stronger” highlights:

Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back...

Before ChatGPT’s release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since.

A variety of media reports offer examples of people’s anger with AI system use. When Duolingo announced that it was planning to become an “AI-first” company, Wired reported that:

Young people started posting on social media about how they were outraged at Duolingo as they performatively deleted the app — even if it meant losing the precious streak awards they earned through continued, daily usage. The comments on Duolingo’s TikTok posts in the days after the announcement were filled with rage, primarily focused on a single aspect: workers being replaced with automation.

Bloomberg shared the reactions of call center workers who report that they struggle to do their jobs because people don’t believe that they are human and thus won’t stay on the line. One worker quoted in the story, Jessica Lindsey, describes how

her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human...

Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. “They just end up yelling at me and hanging up,” she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears.

There are many other examples: job seekers who find AI-conducted interviews demeaning; LinkedIn users who dislike being constantly prompted with AI-generated questions; parents who are worried about the impact of AI use on their children’s mental health; social service benefit applicants who find themselves at the mercy of algorithmic decision-making systems; and people across the country who object to having massive, noisy, and polluting data centers placed in their communities.

The most organized opposition to the unchecked use of AI systems currently comes from unions, especially those representing journalists, graphic designers, script writers, and actors, with some important victories to their credit. But given the rapid introduction of AI systems in a variety of public and private workplaces, almost always because employers hope to lower labor costs at worker expense, it shouldn’t be long before many other unions are forced to expand their bargaining agenda to seek controls over the use of AI. Given community sentiments, this should bring new possibilities for unions to explore the benefits of pursuing a strategy of bargaining for the common good. Connecting worker and community struggles in this way can also help build capacity for bigger and broader struggles over the role of technology in our society.


The Hidden Costs of the Big Data Surveillance Complex




Unbeknownst to much of the public, Big Tech exacts heavy tolls on public health, the environment, and democracy. The detrimental combination of an unregulated tech sector, a pronounced rise in cyberattacks and data theft, and widespread digital and media illiteracy—as noted in my previous Dispatch on Big Data’s surveillance complex—is exacerbated by legacy media’s failure to inform the public of these risks. While establishment news outlets cover major security breaches in Big Tech’s troves of personally identifiable information (PII) and their costs to individuals, businesses, and national security, this coverage fails to address the negative impacts of Big Tech on the full health of our political system, civic engagement, and ecosystems.

Marietje Schaake, an AI policy fellow at Stanford University’s Institute for Human-Centered AI, argues that Big Tech’s unrestrained hand in all three branches of the government, the military, local and national elections, policing, workplace monitoring, and surveillance capitalism undermines American society in ways the public has failed to grasp. Indeed, little in the corporate press helps the public understand exactly how data centers—the facilities that process and store vast amounts of data—do more than endanger PII. Greenlit by the Trump administration, data centers accelerate ecosystem harms through their unmitigated appropriation of natural resources, including water, and the subsequent greenhouse gas emissions that increase ambient pollution and its attendant diseases.

Adding insult to the public’s right to be informed, corporate news rarely sheds light on how an ethical, independent press serves the public good and functions to balance power in a democracy. A 2023 civics poll by the University of Pennsylvania’s Annenberg School found that only a quarter of respondents knew that press freedom is a constitutional right and a counterbalance to the powers of government and capitalism. The gutting of local news in favor of commercial interests has only accelerated this knowledge blackout.

The demand for AI by corporatists, military AI venture capitalists, and consumers—and the resultant demand for data centers—is outpacing utilities infrastructure, traditional power grid capabilities, and the renewable energy sector. Big Tech companies, such as Amazon and Meta, strain municipal water systems and regional power grids, reducing the capacity to operate all things residential and local. In Newton County, Georgia, for example, Meta’s $750 million data center, which sucks up approximately 500,000 gallons of water a day, has contaminated local groundwater and caused taps in nearby homes to run dry. What’s more, the AI boom comes at a time when hot wars are flaring and global temperatures are soaring faster than scientists once predicted.

Constant connectivity, algorithms, and AI-generated content delude individual internet and device users into believing that they’re well informed. However, the decline of civics awareness in the United States—compounded by rampant digital and media illiteracy, ubiquitous state and corporate surveillance, and lax news reporting—makes for an easily manipulated citizenry, asserts attorney and privacy expert Heidi Boghosian. This is especially disconcerting given the creeping spread of authoritarianism, the crackdown on civil liberties, and the surging demand for AI everything.

Open [but not transparent] AI

While the companies that develop and deploy popular AI-powered tools lionize the wonders of their products and services, they keep hidden the unsustainable impacts on our world. To borrow from Cory Doctorow, the “enshittification” of the online economy traps consumers, vendors, and advertisers in “the organizing principle of US statecraft,” as well as in more mundane capitalist surveillance. Without government oversight or a Fourth Estate to compel these tech corporations to reveal their shadow side, much of the public is not only in the dark but in harm’s way.

At the most basic level, consumers should know that OpenAI, the company that owns ChatGPT, collects private data and chat inputs, regardless of whether users are logged in or not. Any time users visit or interact with ChatGPT, their log data (the Internet Protocol address, browser type and settings, date and time of the site visit, and interaction with the service), usage data (time zone, country, and type of device used), device details (device name and identifiers, operating system, and browser used), location information from the device’s GPS, and cookies, which store the user’s personal information, are saved. Most users have no idea that they can opt out.

OpenAI claims it saves data only for “fine-tuning,” a process of enhancing the performance and capabilities of AI models, and for human review “to identify biases or harmful outputs.” OpenAI also claims not to use data for marketing and advertising purposes or to sell information to third parties without prior consent. Most users, however, are as oblivious to the means of consent as to the means of opting out. This is by design.

In July, the US Court of Appeals for the Eighth Circuit vacated the Federal Trade Commission’s “click-to-cancel” rule, which would have made online unsubscribing easier. The rule would have covered all forms of negative option marketing—programs that give sellers free rein to interpret customer inaction as “opting in,” leaving customers unwittingly consenting to subscriptions and accruing charges. John Davisson, director of litigation at the Electronic Privacy Information Center, commented that the court’s decision was poorly reasoned and that only those with financial or career advancement motives would argue in favor of subscription traps.

Even if OpenAI is actually protective of the private data it stores, it is not above disclosing user data to affiliates, law enforcement, and the government. Moreover, ChatGPT practices are noncompliant with the EU’s General Data Protection Regulation (GDPR), the global gold standard of data privacy protection. Although OpenAI says it strips PII and anonymizes data, its practice of “indefinite retention” does not comply with the GDPR’s stipulation for data storage limitations, nor does OpenAI sufficiently guarantee irreversible data de-identification.

As science and tech reporter Will Knight wrote for Wired, “Once data is baked into an AI model today, extracting it from that model is a bit like trying to recover the eggs from a finished cake.” Whenever a tech company collects and keeps PII, there are security risks. The more data captured and stored by a company, the more likely it will be exposed to a system bug, hack, or breach, such as the ChatGPT breach in March 2023.

OpenAI has said it will comply with the EU’s AI Code of Practice for General-Purpose AI, which aims to foster transparency, information sharing, and best practices for model and risk assessment among tech companies. Microsoft has said that it will likely sign on as well, while Meta flatly refuses to comply, much as it refuses to abide by environmental regulations.

To no one’s surprise, the EU code has already become politicized, and the White House has issued its own AI Action Plan to “remove red tape.” The plan also purports to remove “woke Marxist lunacy in the AI models,” eliminating such topics as diversity, equity, and inclusion, and climate change. As Trump crusades against regulation and “bias,” the White House-allied Meta decries political concerns over compliance with the EU’s AI code. Meta’s objection is hardly coincidental: British courts, citing the United Kingdom’s GDPR obligations, have ruled that anyone in a country covered by the GDPR has the right to request that Meta stop using their personal data for targeted advertising.

Big Tech’s open secrets

Information on the tech industry’s environmental and health impacts exists, attests artificial intelligence researcher Sasha Luccioni. The public is simply not being informed. This lack of transparency, warns Luccioni, portends significant environmental and health consequences. Too often, industry opaqueness is excused by insiders as “competition” to which they feel entitled, or blamed on the broad scope of artificial intelligence products and services—smart devices, recommender systems, internet searches, autonomous vehicles, machine learning, the list goes on. Allegedly, there’s too much variety to reasonably quantify consequences.

Those consequences are quantifiable, though. While numbers vary and are on the ascent, there are at least 3,900 data centers in the United States and 10,000 worldwide. An average data center houses complex networking equipment, servers, and systems for cooling, lighting, security, and storage, all requiring copious rare earth minerals, water, and electricity to operate.

The densest data center area exists in Northern Virginia, just outside the nation’s capital. “Data Center Alley,” also known as the “Data Center Capital of the World,” has the highest concentration of data centers not only in the United States but in the entire world, consuming millions of gallons of water every day. International hydrologist Newsha Ajami has documented how water shortages around the world are being worsened by Big Data. For tech companies, “water is an afterthought.”

Powered by fossil fuels, these data centers pose serious public health implications. According to research in 2024, training one large language model (LLM) with 213 million parameters produced 626,155 pounds of CO2 emissions, “equivalent to the lifetime emissions of five cars, including fuel.” Stated another way, such AI training “can produce air pollutants equivalent to more than 10,000 round trips by car between Los Angeles and New York City.”

Reasoning models generate more “thinking tokens” and use as much as 50 percent more energy than other AI models. Google and Microsoft search features purportedly use smaller models when possible, which, theoretically, can provide quick responses with less energy. It’s unclear when or if smaller models are actually invoked, and the bottom line, explained climate reporter Molly Taft, is that model providers are not informing consumers that speedier AI response times almost always equate to higher energy usage.

Profits over people

AI is rapidly becoming a public utility, profoundly shaping society, surmise Caltech’s Adam Wierman and Shaolei Ren of the University of California, Riverside. In the last few years, AI has outgrown its niche in the tech sector to become integral to digital economies, government, and security. AI has merged more closely with daily life, replacing human jobs and decision-making, and has thus created a reliance on services currently controlled by private corporations. Because other essential services such as water, electricity, and communications are treated as public utilities, there’s growing discussion about whether AI should be regulated under a similar public utility model.

That said, data centers need power grids, most of which depend on fossil fuel-generated electricity that stresses national and global energy stores. Data centers also need backup generators for brownout and blackout periods. With limited clean, reliable backup options, diesel generators remain the industry’s go-to, despite the known environmental and health consequences of burning diesel.

Whether the public realizes it or not, the environment is being polluted and citizens’ health harmed by the actions of private tech firms. Outputs from data centers inject dangerous fine particulate matter and nitrogen oxides (NOx) into the air, immediately worsening cardiovascular conditions, asthma, cancer, and even cognitive decline, caution Wierman and Ren. Contrary to popular belief, air pollutants are not localized to their emission sources. And, although chemically different, carbon dioxide (CO2) is not contained by location either.

Of great concern is that in “World Data Capital Virginia,” data centers are incentivized with tax breaks. Worse still, the (misleadingly named) Environmental Protection Agency plans to remove all limits on greenhouse gas emissions from power plants, according to documents obtained by the New York Times. Thus, treating AI and data centers as public utilities presents a double-edged sword. Can a government that slashes regulations to provide more profit to industry while destroying its citizens’ health along with the natural world be trusted to fairly price and equitably distribute access to all? Would said government suddenly start protecting citizens’ privacy and sensitive data?

The larger question, perhaps, is whether the US is truly a democracy. Or is it a technogarchy, or an AI-tocracy? The 2024 AI Global Surveillance (AIGS) Index ranked the United States first for its deployment of advanced AI surveillance tools that “monitor, track, and surveil citizens to accomplish a range of objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground,” the Carnegie Endowment for International Peace reported.

Surveillance has long been the purview of authoritarian regimes, but in so-called democracies such as the United States, the scale and intensity of AI use is leveraged both globally through military operations and domestically to target and surveil civilians. In cities such as Scarsdale, New York, and Norfolk, Virginia, citizens are beginning to speak out against the systems that are “immensely popular with politicians and law enforcement, even though they do real and palpable damage to the citizenry.”

Furthermore, tracking civilians to “deter civil disobedience” has never been easier, as evidenced in June by the rapid mobilization of boots on the ground amid peaceful protests against ICE raids in Los Angeles. AI-powered surveillance acts as the government’s “digital scarecrow,” chilling the American tradition and First Amendment right to protest and the Fourth Estate’s right to report.

The public is only just starting to become aware of algorithmic biases in AI training datasets and their prejudicial impact on predictive policing, or profiling, algorithms, and other analytic tools used by law enforcement. City street lights and traffic light cameras, facial recognition systems, video monitoring in and around business and government buildings, as well as smart speakers, smart toys, keyless entry locks, automobile intelligent dash displays, and insurance antitheft tracking systems are all embedded with algorithmic biases.

Checking Big Tech’s unchecked power

Given the level and surreptitiousness of surveillance, the media are doubly tasked: with treading carefully to avoid being targeted, and with accurately informing the public about data collection and data centers. Reporting that glorifies techbros and AI is unscrupulous and antithetical to democracy: in an era where billionaire techbros and wanna-be kings are wielding every available apparatus of government and capitalism to gatekeep information, the public needs an ethical press committed to seeking truth, reporting it, and critically covering how AI is shifting power.

If people comprehend what’s at stake—their personal privacy and health, the environment, and democracy itself—they may be more inclined to make different decisions about their AI engagement and media consumption. An independent press that prioritizes public enlightenment means that citizens and consumers still have choices, starting with basic data privacy self-controls that resist AI surveillance and stand up for democratic self-governance.

Just as a healthy environment, replete with clean air and water, has been declared a human right by the United Nations, privacy is enshrined in Article 12 of the Universal Declaration of Human Rights. Although human rights are subject to national laws, water, air, and the internet know no national borders. It is, therefore, incumbent upon communities and the press to uphold these rights and to hold power to account.

This spring, residents of Pittsylvania County, Virginia, did just that. Thanks to independent journalism and civic participation, residents pushed back against the corporate advertising meant to convince the county that the fossil fuels powering the region’s data centers are “clean.” Propagandistic campaigns were similarly applied in Memphis, Tennessee, where proponents of Elon Musk’s data center—which has the footprint of thirteen football fields—circulated fliers to residents of nearby, historically Black neighborhoods, proclaiming the super-polluting xAI has low emissions. “Colossus,” Musk’s name for what’s slated to be the world’s biggest supercomputer, powers xAI’s Hitler-loving chatbot Grok.

The Southern Environmental Law Center exposed with satellite and thermal imagery how xAI, which neglected to obtain legally required air permits, brought in at least 35 portable methane gas turbines to help power Colossus. Tennessee reporter Ren Brabenec said that Memphis has become a sacrifice zone and expects the communities there to push back.

Meanwhile, in Pittsylvania, Virginia, residents succeeded in halting the proposed expansion of data centers that would damage the region’s environment and public health. Elizabeth Putfark, attorney with the Southern Environmental Law Center, affirmed that communities, including local journalists, are a formidable force when acting in solidarity for the public welfare.

Best practices

Because AI surveillance is a threat to democracies everywhere, we must each take measures to counter “government use of AI for social control,” contends Abi Olvera, senior fellow with the Council on Strategic Risks. Harlo Holmes, director of digital security at the Freedom of the Press Foundation, told Wired that consumers must make technology choices under the premise that they’re our “last line of defense.” Steps to building that last line of defense include digital and media literacy, digital hygiene, and at least a cursory understanding of how data is stored and its far-reaching impacts.

Best defensive practices employed by media professionals can also serve as best practices for individuals. This means becoming familiar with laws and regulations, taking every precaution to protect personal information on the internet and during online communications, and engaging in responsible civic discourse. A free and democratic society is only as strong as its citizens’ abilities to make informed decisions, which, in turn, are only as strong as their media and digital literacy skills and the quality of information they consume.

This essay was first published here: https://www.projectcensored.org/hidden-costs-big-data-surveillance-complex/

Mischa Geracoulis is the Managing Editor at Project Censored and The Censored Press, contributor to Project Censored’s State of the Free Press yearbook series, Project Judge, and author of Media Framing and the Destruction of Cultural Heritage (2025). Her work focuses on human rights and civil liberties, journalistic ethics and standards, and accuracy in reporting. Read other articles by Mischa.
