
Wednesday, January 14, 2026

 


Faking It ‘Til We Break It


“Video showing Maduro allegedly torturing Venezuelan dissidents is going viral, with 15 million views and 81k likes already. The only problem? It is actually a scene from a movie.” This tweet from journalist Alan Macleod captured just one droplet in a flood of disinformation following the U.S. capture of Nicolás Maduro. As The Guardian noted, AI-generated images of the event reaped millions of views, instantly saturating social media with fiction.

This cycle repeated days later when Renee Good was killed by an ICE agent in Minnesota. NPR reported that “AI images and internet rumors spread confusion,” fueling a rush to judgment that bypassed all evidence. Rather than awaiting verified facts, the public retreated into partisan scripts: Democrats immediately condemned the agency, while Republicans vilified the victim. This reflexive tribalism illustrates a nation that has abandoned the patience of critical analysis in favor of viral, evidence-free outrage. This reflexive rush to judgment is now weaponized by a new tool of deception: the deepfake.

The emergence of deepfakes has introduced a volatile new dimension to the challenge of disinformation. This technology preys upon systemic vulnerabilities: a widespread lack of media literacy, a ‘greed is good’ hyper-individualism, and a techno-utopian ethos that prioritizes innovation over decency, truth, and social cohesion. The result is a fractured digital landscape where ethically bankrupt creators and profit-driven platforms engineered for engagement oversee the steady demise of civil society. This marriage of cutting-edge deception and ancient tribalism has created a perfect storm where the most successful lie wins and the truth is buried under a million algorithmically-driven clicks. To survive this era of manufactured outrage, we must move beyond passive consumption and demand systemic accountability for the engines of our deception.

The Mechanics of Deception

So-called AI is just the latest tool in a Big-Tech shed that fosters and incentivizes propaganda. Studies have long shown that falsehoods spread more widely than truth on social media, not due to individual behavior, but because Big-Tech platforms are engineered to incentivize the spread of false and misleading content.

AI has complicated the problem of disinformation. Seventy years ago, AI was envisioned as the pursuit of human-like cognitive reasoning; today, that label is frequently marketed to describe technologies that bear little resemblance to those original intellectual ambitions. Modern systems of so-called AI rely primarily on massive datasets and statistical pattern recognition. As a result, they are far from intelligent and limited in their capabilities. Indeed, AI bots are prone to getting things wrong and fabricating information: one study found that AI bot summaries of news content were inaccurate 45% of the time, and other studies have found AI fabricating information anywhere from 66% to over 80% of the time.

Beyond deploying bots that circulate misinformation, Big-Tech has released AI tools that empower average users to produce highly convincing, yet entirely fabricated, content with ease. For example, more than 20 percent of videos shown to new YouTube users are “AI slop,” meaning low-quality, mass-produced, algorithmically generated content designed to maximize clicks and watch time rather than inform. These types of deepfakes shaped audience interpretations of recent conflicts such as Israel-Gaza and Russia-Ukraine.

After Maduro’s extraordinary rendition, the internet was quickly flooded with AI-generated content designed to persuade Americans that his capture was an act of justice welcomed by the Venezuelan people. These posts showed Venezuelans supposedly “crying on their knees” to thank President Donald Trump for their liberation. One such video, flagged by Ben Norton, racked up 5 million views.

Similarly, after Good was shot and killed by ICE on January 7, 2026, deepfakes circulated online falsely claiming to reveal the face of the agent involved. The images did not depict the actual agent, Jonathan Ross, an Iraq War veteran with decades of experience in immigration and border enforcement, who had been wearing a mask at the time of the incident.

Ross was not the only one falsely identified; immediately after the shooting, photos circulated online claiming to be of Renee Good. In reality, the images were a confusing mix of a former WWE wrestler and another woman who had previously participated in a poetry contest with the actual victim. The digital desecration continued as users weaponized AI to undress an old photo of Good and manipulate images of her lifeless body, generating deepfakes that placed the victim in a bikini even as she lay at the scene of the shooting.

The Shield of Immunity: Section 230 and Beyond

At a time when approximately 90% of U.S. citizens have access to smartphones and 62% use so-called AI, the U.S. government has largely allowed Big Tech platforms and devices to remain unregulated. Indeed, U.S. policy has favored the tech industry for decades, prioritizing immense corporate profits over meaningful accountability for the societal impacts of these platforms.

Thanks to Section 230 of the Communications Decency Act (CDA), which protects internet platforms from legal liability for content users post, and a religious devotion to the idea that “what’s good for tech is good for America,” these companies enjoy total immunity. Similarly, Trump recently signed an Executive Order shielding the industry from AI regulations at the state level. These actions stem from a shared conviction among Big-Tech and its allies in government that regulation is the fundamental enemy of progress and innovation. The few regulations that do exist typically place the burden on users rather than on platforms, such as requiring individuals to show identification, which further contributes to the surveillance mechanisms that define these tools.

Classroom Capture: Big-Tech’s Educational Influence

In the absence of a robust regulatory framework, many argue that media literacy education offers the most promise for mitigating the influence of misinformation on the public. In the U.S., media literacy is broadly defined as “the ability to access, analyze, evaluate, create, and act using all forms of communication.” Indeed, media literacy education has been associated with a reduction in users accepting disinformation. However, the lack of a central education authority to establish a national curriculum, combined with opposition from traditionalists resistant to new media in the classroom, has significantly hindered the spread of media literacy education. Most Americans lack access to a formal media literacy education, even though more than three-fourths of the population believe it is a critical skill that everyone should develop.

While efforts to integrate media literacy into the classroom are growing, they are increasingly dominated by the very companies they should be critiquing. Big Tech has leveraged the vast wealth gained from harvesting user data to exert an outsized role in shaping educational standards. By offering content and programs for classroom use, these corporations provide tools of “corporate indoctrination.” Their curricula emphasize the opportunities of technology while framing issues like “fake news” and bullying as individual moral failures, such as a lack of character or excessive screen time, rather than systemic results of the dopamine loops and profit models the industry intentionally built.

In contrast, critical media literacy scholars argue that a robust education must teach students to interrogate power dynamics, ownership, platform design, and profit motives. However, because their work challenges the industry, these scholars receive almost no corporate funding and must rely on nonprofits and volunteer labor.

Despite these concerns, educational institutions are leaning further into corporate partnerships. For example, the California State University system, the nation’s largest public university system with nearly half a million students, recently announced a $17 million deal with OpenAI, the creator of ChatGPT. This massive investment in Big Tech occurred even as the CSU system was simultaneously cutting faculty and staff positions across its 23 campuses.

From Social Capital to Content Creation

While Big-Tech algorithms are complicit, they are not solely to blame. In the post-Cold War era, the United States concluded that capitalism had definitively triumphed over all other systems, treating the Cold War’s outcome as the final verdict in the contest of ideas. This ushered in a fundamental shift in the nation’s cultural and political compass, famously epitomized by the mantra from Oliver Stone’s Wall Street: “greed is good.” Researchers like Robert Putnam, in his influential work Bowling Alone, noted that this shift eroded the social capital and communal bonds essential for a functioning democracy. This hyper-individualistic context has profoundly shaped every generation since, leading to what Jean Twenge and W. Keith Campbell call a “narcissism epidemic.” The nation has become so individualistic that even some figures associated with the left, such as Matt Taibbi and Cenk Uygur, despite the left’s historical commitment to collectivism, joined conservatives in outrage when New York Mayor Zohran Mamdani used the word “collectivism” in his inaugural speech this month.

The resulting culture of narcissistic individualism has engendered a generation of ethically hollow influencers and content creators who worship at the altar of Big-Tech, sacrificing collective integrity for personal profit. As commentator Krystal Ball noted, the incentives for these creators are so perverse that even when videos, such as those regarding Maduro, are debunked, users continue to share them for engagement.

Perhaps the most egregious example is influencer Nick Shirley, who went viral for “exposing” alleged fraud at daycare centers in Minnesota. Although the New York Times had previously reported on a legitimate multimillion-dollar fraud case in Minnesota, where the state and federal governments under the Biden Administration were actively prosecuting individuals for embezzling childcare funds, Shirley fabricated his own distorted narrative. He produced videos alleging a deeper, hidden layer of corruption that the government was supposedly ignoring; however, he relied on a blend of fabricated evidence and baseless accusations to support a “cover-up” narrative that simply did not exist. Nonetheless, Shirley’s video garnered over 100 million views within a week across different platforms, and it was reposted by Vice President J.D. Vance and FBI Director Kash Patel.

Beyond merely inspiring copycat content, these viral fabrications were weaponized by the Trump administration to justify freezing federal funding in five Democratic-led states. Under the guise of addressing fraud and systemic misuse, the administration withheld billions of dollars earmarked for essential childcare and social services. Simultaneously, the Department of Homeland Security deployed as many as 2,000 federal agents to Minnesota in a massive law enforcement surge that resulted in Good’s death.

Shirley reflects a broader culture where viral lies are rewarded with wealth. In this environment, deception has become a viable business model because fraud no longer carries a social stigma when used for profit. Instead, it is often rewarded. Just look at Elon Musk. He is one of the richest men on Earth and was a distributor of some of the fake online content following Maduro’s capture. At the same time, Musk is expanding his wealth in the age of AI with tools that spread baseless racist conspiracies such as the myth of white genocide in South Africa, a new version of Wikipedia that refers to Adolf Hitler simply as “The Führer,” and AI tools that enable users to create deepfake images undressing women and children.

Instead of being treated like a James Bond villain, Musk is worshipped as an aspirational figure. He embodies the ultimate “fake it ’til you make it” con man: a self-brander who convinced the world he was a self-made, intelligent inventor when in fact he relied heavily on $38 billion in government funding and investments from his father, and piggybacked on the creations of truly brilliant inventors.

The Architecture of Hypocrisy: Why One Standard is No Longer Enough

The toxicity of narcissistic content creators and hyperpartisan figures seeking to expand their brands in the attention economy goes beyond the mere production of falsehoods; it is a symptom of a culture seemingly unable or unwilling to shame even the most glaring contradictions.

For instance, many conservatives backed Trump when he labeled the law enforcement officer who shot a woman during the January 6 Capitol riot a “thug,” yet his allies staunchly defended the agents involved in the Good incident. In fact, even before an official investigation had been launched, let alone concluded, Vice President J.D. Vance argued that Ross possessed “immunity.” Furthermore, while these same circles argue that individuals must take personal responsibility for their actions, rejecting the idea that Trump’s rhetoric created the context for January 6, they paradoxically blame leftists for creating the environment that led to Ross shooting Good.

Yet, even these double standards pale in comparison to the reaction following Charlie Kirk’s death in September 2025. In the ensuing months, conservatives frequently bemoaned a lack of empathy and decorum from the left, which criticized Kirk’s legacy of divisive rhetoric while his wife and loved ones were still grieving. However, those same voices, with notable exceptions such as Tucker Carlson, refused to extend that same grace or “decorum” to the wife and loved ones of Good.

Fox News Channel’s Jesse Watters dismissed Good’s claim that she is a poet and mocked her for listing pronouns in her online bio, a relatively mild attack compared to others. Without evidence, former President Trump called Good a “professional agitator.” Secretary of Homeland Security Kristi Noem labeled her actions an “act of domestic terrorism.” Meanwhile, Vance described Good’s death as a “tragedy of her own making.”

They accompanied these baseless claims with rhetoric directly contradicted by witnesses and video. Trump falsely claimed that Good “viciously ran over” Ross who was recovering in the hospital. In reality, Good’s car did not run over anyone, and Ross walked away from the scene unassisted. Relatedly, a Department of Homeland Security spokesperson, Tricia McLaughlin, falsely claimed that multiple ICE officers were hurt. McLaughlin also falsely accused Good of “stalking agents all day long, impeding our law enforcement.” In reality, video evidence reveals that Good had been on-site for a few minutes. She had just dropped off her now-orphaned six-year-old child at school and was not blocking the road; in fact, cars can be clearly seen passing her vehicle throughout the footage. A crowd had gathered because an ICE vehicle was immobilized in the snow. Unlike the U.S. Postal Service, which is famously expected to operate in all weather conditions—under the creed, “Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds”—ICE was visibly struggling to function in the elements.

These fabrications and baseless accusations reveal a profound callousness toward Good’s grieving family, most notably her wife and orphaned child. This stands in stark, bitter contrast to the demands for decorum and empathy that conservatives issued following Charlie Kirk’s death. One would expect a civilized nation to extend basic sympathy whenever a citizen dies, whether shot during a chaotic political protest or killed and denied medical attention. Faced with such blatant double standards, the nation must finally direct Joseph Welch’s famous rebuke of McCarthyism toward itself: “Have you no sense of decency, sir, at long last? Have you left no sense of decency?”

Rhetorically, most claim to view these incidents as tragedies; in practice, however, they operate with blatant double standards: empathy for one side while displaying callousness, or even cruelty, toward the other. A more sophisticated culture would remember comedian George Carlin’s wisdom: “Let’s not have a double standard. One standard will do just fine.”

Conclusion

To restore the soul of a nation fractured by digital fabrication, there must be a collective refusal to continue this cycle of reflexive tribalism. Engaging in a perpetual war of “us versus them,” where truth is sacrificed for the sake of a partisan win, ensures that everyone loses, and the country remains a casualty of its own division. It is insanity to continue entrusting the national discourse to unregulated algorithms and narcissistic creators, expecting that more of the same will somehow yield a different, more unified result.

The time has come to demand a higher standard: one that prioritizes evidence over engagement and human decency over ideological dominance. By rejecting the lure of the deepfake and the ease of the echo chamber, a path can be cleared toward a more sophisticated culture, one that values critical analysis, insists on corporate accountability, and understands that without a single standard of truth and empathy, the foundations of a functioning democracy cannot hold.

Nolan Higdon is a Project Censored national judge, an author, and university lecturer at Merrill College and the Education Department at University of California, Santa Cruz. Read other articles by Nolan, or visit Nolan's website.

Tuesday, January 13, 2026


Big Data Is a Bad Idea: Why AI Factory Farms Will Not Save Rural America

AI data centers have been added to the limited menu for economic development in marginalized US communities, but people in those communities have good reason to oppose them.


A sign on a rural Michigan road opposes a planned $7 billion data center on southeast Michigan farm land in Saline, Michigan on December 1, 2025.
(Photo by Jim West/UCG/Universal Images Group via Getty Images)

John Peck
Jan 12, 2026
Common Dreams


One word—plastics. That was the holy grail that Dustin Hoffman’s character learned about from a well-wisher in the movie The Graduate. I remember watching the film as a farm kid and thinking about the updated version I was being told by my guidance counselors—one word: computers. We are now in the midst of the “Fourth Industrial Revolution,” and the latest mantra is: artificial intelligence. Such free advice, though, could really be a costly warning in disguise.

Granted, there is a lot of poverty in the “richest” nation on Earth, and marginalized US communities often have few choices for economic (mal) development. It becomes a twisted game of pick your own poison: supermax prison, toxic waste dump, ethanol facility, tar sands pipeline… Now, AI data centers have been added to the limited menu. Someone recently shared a map of looming AI data centers across the world. It reminded me of how a tumor spreads and Edward Abbey’s quote that “growth for the sake of growth is the ideology of the cancer cell.”




The fact that Big Data has targeted Rural America for its latest mastitis should be no surprise. We have lots of available land to grab, thanks to the legacy of settler colonialism and family-farm foreclosure. Back in August I remember driving past Beaver Dam, Wisconsin, watching bulldozers flatten over 800 acres along Hwy 151, and my first hunch was: data center. Sure enough, the secretive $1 billion deal with Meta was finally revealed in a November press release. Just north of Madison in the town of DeForest, Blackstone subsidiary QTS Realty Trust is aiming to build another $12 billion data center on close to 1,600 acres. And if we need to free up more land for AI, we quaint rural folks could just abandon growing real Xmas trees and force people to buy plastic ones instead, as one Fox News “expert” suggested over the holidays. Former President Joe Biden visited Mt. Pleasant, Wisconsin in May 2024 to promote Microsoft’s new $3.3 billion 300+ acre AI campus on the former site of flat-screen maker Foxconn, which had welcomed President Donald Trump for its groundbreaking back in 2018. Foxconn abandoned that $10 billion project and its promise of 13,000 jobs after getting millions in state subsidies and local tax deferrals.

The Microsoft AI complex in Mt. Pleasant will also require over 8 million gallons of water per year from Lake Michigan. We still have some clean water, though that may not last long thanks to agrochemical monocultures, CAFO manure dumping, and PFAS-laden sludge spreading. And AI certainly is thirsty—the Alliance for the Great Lakes noted in its August 2025 report that a hyperscale AI data center needs up to 365 million gallons of water to keep itself cool—that is as much water as is needed by 12,000 people! A recent investigative report by Bloomberg News found that over two-thirds of the AI data centers built since 2022 are in parts of the country already facing water stress. And it is really hard to drink data.


In the Midwest we also have potential access to vast electricity (fracked natural gas, wind and solar farms, methane digesters) and relatively under-stressed high-voltage grids (unlike California or Texas), though the loss of “cheaper” imported Canadian hydropower with the latest trade war could be a serious challenge. In 2023 the US had over a $2 billion electricity trade deficit vis-à-vis Canada. According to a recent Clean Wisconsin report, just two of our proposed AI data centers will require 3.9 gigawatts—1.5 times the current power demand of all 4.3 million homes in the state.

But, no worry, there are dilapidated US nuclear reactors with massive waste dumps that could be put back online such as Palisades in Michigan, despite opposition from environmental activists and family farmers. The Trump administration also just announced a $1 billion low-interest loan to reanimate Three Mile Island in Pennsylvania for the sake of AI. Until all that happens, though, regular ratepayers can expect a huge hike in their energy bills as Big Data has the market clout to siphon off what it needs first, especially as it colludes with utility monopolies. Many people in Wisconsin are already paying for $1+ billion in stranded assets—mostly defunct coal plants, as well as nuclear waste storage facilities—while utility investors continue to receive guaranteed dividends of 9-10%.

But is all the AI hype just another bubble about to burst? Rural communities (and public taxpayers) have been offered many “amazing” schemes in the past that ended up being just a “bait and switch”—another hollow promise. If we subsidize a massive data center, will the projected “market” for increasing algorithms actually come? Many within the AI industry don’t think so, and are now invoking the lessons we should have learned from the Enron scandal decades ago or the even worse sequel in the subprime mortgage-fueled financial meltdown. Corporate cheerleaders can be quite clever when it comes to inflating prices (and stocks) for goods and services that may not even exist, while hiding their massive debt obligations in a whole cascading series of shadowy shell subsidiaries and dishonest accounting shenanigans.

Many industry insiders are ringing alarm bells. “These models are being hyped up, and we’re investing more than we should,” said Daron Acemoglu, who won the 2024 Nobel Economics Prize, quoted in a recent NPR story about the current AI boom or bubble. OpenAI says it will spend $1.4 trillion on data centers over the next eight years, while Amazon, Google, Meta, and Microsoft are going to throw in another $400 billion. Meanwhile, just 3% of people who use AI now pay for it, and many are frantically trying to figure out how to turn off AI mode on their internet searches and to reject AI eavesdropping on their Zoom calls. Where is the real revenue going to come from to pay for all this AI speculation? The same NPR story notes that such a flood of leveraged capital is equal to every iPhone user on Earth forking over $250 to “enjoy” the benefits of AI—and “that’s not going to happen,” adds Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy. Morgan Stanley estimates AI companies will shell out $3 trillion by 2028 for this data center buildout—but less than 50% of that money will come from them. Hmmm...

Special purpose vehicle (SPV) may sound like a fancy name for a retrofitted tractor, but that is how Big Data is creating a Potemkin village to hide its Ponzi scheme. Here is one example from Richland Parish, Louisiana, where Meta is now building its Hyperion Data Center—a massive $27 billion project. A Wall Street outfit, Blue Owl, borrows $27 billion, using Meta’s future rent payments for the data center to back the loan. Meta’s 20% “mortgage” on the facility gives it 100% control of the purported data crunching from the facility. This debt never shows up on Meta’s books and remains hidden from carefree investors and shallow analysts, but, like other synthetic financial instruments such as the now-infamous mortgage-backed security (MBS), the reality only comes home to roost when the house of cards collapses and Meta eventually has to pay off Blue Owl.

In the meantime, as the Louisiana Illuminator reports, the residents of Richland Parish (where 25% live below the poverty level) are bearing the brunt of the real costs of hosting an AI factory farm: dozens of crashes involving construction vehicles, damage to local roads, and massive future energy demands (three times those of the entire city of New Orleans), which will require new natural gas power plants to be built (subsidized by existing ratepayers even as fossil fuel-induced climate change floods the Louisiana delta). Beyond the initial building flurry, AI data centers are ultimately job-poor. It just doesn’t take that many people to tend computers once they are built. As Meta’s VP, Brad Smith, admitted, the 250,000 square foot Hyperion data center may need 1,500 workers to build but barely 50 to operate. Beyond all the ballyhoo, the main reason a particular community is chosen to “host” one seems to be based upon the bought duplicity of elected officials and the excessive generosity of local taxpayers. Not a good cost-benefit analysis—unless you are Big Data.

And then there are the questionable kickback schemes between the suppliers of the technology and those owning the data centers. If you were a maker of computer chips, would you not be tempted to fork over capital to a major buyer of your own products to ensure future demand? Nvidia just announced a $100 billion stake in OpenAI to help bankroll the data centers. In turn, OpenAI signed a $300 billion deal with Oracle to actually build the AI data centers that will require Nvidia’s graphics processing units (GPUs). OpenAI also signed a separate $6+ billion deal with former Bitcoin miner CoreWeave, which rents out internet cloud access (using Nvidia’s chips once again). This type of incestuous circular financing should raise eyebrows for anyone who studies business ethics—and perhaps remind others of how a toilet operates.

What is all this AI doing? Promoters will point to many innovations—faster screening for cancer cells, closer connection to far-flung relatives, precision application of fertilizers and pesticides, elimination of drudgery in the workplace through automation. A bright future indeed—or perhaps not?


In August 2025, ProPublica reported that the Food and Drug Administration (FDA) had lost 20% of its staff devoted to food safety thanks to DOGE cuts. Inspection of food import facilities is now at a historic low even as our dependence on the rest of the world to feed us grows. But not to worry, the FDA announced in May that AI was coming to the rescue thanks to a large language model (LLM)—dubbed Elsa—that would be deployed alongside what’s left of its human staff to expedite their oversight work. Hopefully, Elsa knows melamine when it sees it. AI chatbots are also growing in popularity and available 24-7 to “talk or advise” people on all sorts of pressing issues—how to win more friends, how to cheat on this exam, how to make up fake legal opinions, even encouraging a teenager to commit suicide and suggesting to someone else that they murder their own parents.

But there is an even dirtier AI underbelly. Some have dubbed these AI slop, AI smut, and AI Stasi—three 21st-century horsemen of the digital apocalypse. What is this all about? Well, a lot of these accelerating AI algorithms are actually devoted to selling “products” that many people do not want and would find objectionable, as well as providing “services” that undermine our basic freedoms. Slop (Merriam-Webster’s word of 2025) describes AI-generated internet content meant only to make money through advertising. Right now there are thousands of wannabe internet “creatives” all over the globe, watching “how-to” videos on manufacturing AI social media content to grab the eyeballs of US consumers. That cute puppy video you see on Instagram or that shocking “news” story you read on Facebook is not there by accident—the goal is to monetize views through cost-per-mille (CPM) pricing, where advertisers pay for every thousand times their ad is viewed online. This is also why online content is often overly long (where is the actual recipe in this cooking blog?), since that increases ad scrolling. The average US consumer is now subject to between 6,000 and 10,000 ads per day—70% of which are online. For more on AI slop, visit: https://www.visibrain.com/blog/ai-slop-social-media.

An even worse virtual commodity is AI smut—literally algorithms creating pornography. This perverted version of AI scrapes the internet for images (high school yearbooks, red carpet fashion shows, popular music concerts, street cam footage, etc.) and then uses “face swap” programs to create personalized hardcore rubbish. There is little if any accountability for this theft of public images and violation of personal privacy—at best those involved are “shamed” into taking down their AI sites after being exposed, due to fears of liability and prosecution for child abuse. But that has hardly stopped this seedy AI subsector. Can you imagine your face or image being put into such a lucrative sexploitative scenario without your permission? At this point, there are hardly any internet police walking the beat in the virtual AI world. We don’t even have the right to be forgotten on the internet.

Which brings us to AI Stasi—the updated version of the Cold War-era East German secret police. The University of Wisconsin-Madison just announced the creation of a College of Computing and Artificial Intelligence (CAI), in part thanks to a $140 million donation from Cisco. Few Bucky Badger fans know that 30 years ago they were used as guinea pigs while cheering at Camp Randall Stadium to help create facial recognition technology through a UW-Madison grant from the Defense Advanced Research Projects Agency (DARPA). Visitors to the UW campus today will no doubt “enjoy” the automated license plate readers (ALPRs) owned by Flock Safety. According to an August 2025 Wisconsin Examiner exposé, there are hundreds of Flock cameras across the state in use by law enforcement agencies, including Wisconsin county sheriff departments with active 287(g) cooperation agreements with Immigration and Customs Enforcement. No warrant is needed for law enforcement agencies to browse the national Flock database. In fact, agents have used Flock to track peaceful protesters, spy on spouses, or just stalk people they don’t like. To see where Flock cameras are near you, visit: www.deflock.me. Of course, Flock Safety has outsourced its AI programming to cheaper (and more secure?) Filipino contractors. Similar AI spying networks such as Pegasus have been widely exposed and have become “bread and butter” for authoritarian regimes from Israel to Saudi Arabia. China and Russia have their own versions (Skynet, SORM, etc.). Thanks to the cozy relationship between Trump and Peter Thiel, the US-based AI mercenary outfit Palantir is now being redeployed for domestic surveillance—a role first revealed by Edward Snowden back in 2017.

The latest executive bluster from Trump is that states’ rights are out the window when it comes to regulating AI data centers—such federal preemption of local democratic control is part of the larger neoliberal “race to the bottom” forced-trade agenda. But the cat is already out of the bag as dozens of communities have successfully blocked AI data center projects and others are poised to do the same based upon their winning strategies. Better yet, this is a bipartisan grassroots organizing issue!

What is the best way to keep out an AI factory farm?

No non-disclosure agreements (NDAs)! These are massive development schemes that could not exist without the approval and support of elected officials, so any agreement should not be secret. They can hardly claim to be providing a public good if they are not subject to transparency and oversight.

No sweetheart deals! Big Data is among the wealthiest sectors of our current economy and does not need or deserve subsidies, discounted electric rates, tax increment financing, property tax holidays, or other incentives. It is a classic move of crony capitalism to privatize the benefits and socialize the costs.

No regulatory loopholes! Given their huge demands for land, water, and energy, Big Data should not be allowed to cut legal corners and needs to follow all the rules of any other normal enterprise—full liability coverage, no special economic zones, consideration of cumulative impacts, protections for ratepayers, no unregulated toxic pollution or illegal water transfer in violation of the Clean Water Act or the Great Lakes Compact, etc. How much water your data center demands is hardly a “trade secret.”

And most important, don’t let Big Data boosters belittle your legitimate concerns as “neo-Luddite!” Everyone uses technology—even the Amish. The real issue is whether or not AI data centers are economically viable, socially appropriate, environmentally sustainable, and actually serve the public interest. People have good reasons to be wary and oppose them on all those fronts.

For more info, check out: Big Tech Unchecked: A Toolkit for Community Action

As well as the North Star Data Center Policy Toolkit


Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.


John Peck
John E. Peck is the executive director of Family Farm Defenders.