
Thursday, December 04, 2025

The Magic Begging Bowl

The Failure of Success: Part 1


‘One day a beggar knocked on the doors of a great king. By chance, the king himself opened the door. He saw the beggar: the beggar was not an ordinary beggar, he was almost luminous. He had such grace, such beauty, such a mysterious aura, that even the king felt jealous. He asked, “What do you want?” still pretending – “I have not taken any note of you” – “What do you want?”

‘The beggar showed the king his begging bowl and he said, “I would like it to be filled.”

‘The king said, “That’s all? With what do you want it to be filled?”

‘The beggar said, “Anything will do, but the condition is that you have to fill it; otherwise, don’t try.”

‘It was a challenge to the king. He said, “What do you mean by it? Can’t I fill this small begging bowl? And you don’t say with what.”

‘The beggar said, “That is irrelevant. Anything will do, even pebbles, stones, but fill it! The condition is: I will not leave the door if you start filling it; unless it is filled, I will remain here.”

‘The king ordered his prime minister to fill the begging bowl with diamonds; he had millions of diamonds: “This beggar has to be shown that he is encountering a king!” But soon the king became aware that he had been deceived. The begging bowl was as extraordinary as the beggar, more so in fact: anything dropped into it would simply be gone, would disappear. It remained empty. The treasures were thrown into it, but they all disappeared.

‘By the evening the whole capital had gathered. The king was now becoming almost desperate: the diamonds finished, then the gold, and then the gold was finished, then the silver, and then the silver was finished…. The sun was setting, and the king’s sun had also set. His whole treasury was empty, and the begging bowl was still the same, empty, not even a trace! It swallowed all his kingdom. It was too much!

‘Now the king knew that he had been trapped. He fell at the feet of the beggar and said, “Forgive me. I was wrong to accept the challenge. This begging bowl is not an ordinary begging bowl. You deceived me – there is some magic in it.”

‘And the beggar laughed and he said, “There is no magic in it: I have made it out of the skull of a man.”

‘The king said, “I don’t understand. What do you mean? If it is just made out of the skull of a man, how can it go on swallowing my whole kingdom?”

‘And the beggar said, “That’s what is happening everywhere: NOBODY is ever satisfied. The begging bowl in the head always remains empty. It is an ordinary skull, just like everybody else’s.”’ (Osho, ‘The Guest – Talks on Kabir’, 1981, p. 223-224)

World Cup Car Wash

In 2003, Ben Cohen was part of the only England team to have won the Rugby World Cup. Cohen commented on that great triumph: ‘It meant everything, winning a World Cup.’

It is easy to imagine the thrill of being part of that team when Jonny Wilkinson nailed that drop goal in the dying seconds of the match!

We can imagine the euphoria, knowing that the world is falling at your feet, knowing that people will forever say: ‘That guy won the World Cup!’

Remarkably, one might think, the magic begging bowl in Cohen’s head sees it differently:

‘The bigger issue for me was that I just didn’t get a skill set or a life skill, and now I think, well, OK, winning a World Cup doesn’t really bring me anything. It’s not like it’s a degree, you know.’

This is pretty astonishing: winning the World Cup ‘didn’t really bring… anything’… unlike a degree! It echoes a comment made by hat-trick hero Geoff Hurst, who helped win the football World Cup for England in 1966:

‘There was a tremendous feeling of anti-climax when we got home… I cut the lawn because I hadn’t been home for ages. Then I washed the car. It was pretty much like any other Sunday afternoon… It might sound a bit pretentious, but for me it had been another football match, albeit a very important one… It’s just like another day at the office. People may find that hard to believe but that’s how I recall it, and so do many of my teammates at the time.’ (Geoff Hurst, 1966 and All That – My Autobiography, Headline Book Publishing, 2001, p.18)

Cohen’s regret: ‘I probably wish I’d got a skill set and a steady job.’

To his credit, he understands how his begging bowl would have responded to that course of action:

‘Then I probably would have looked the other way and thought “I wish I could have been a sportsman”. But the reality is I would probably rather have been over [on the nonathletic side], because it’s going to suit me for the rest of my life, instead of a portion of my life. When you sort of get [to retirement] you think: “I’m in my 30s, who am I?” And at that point you think, I am lonely here, this is sink or swim.’

He added:

‘We’re all in a huddle and it’s happy days, “yeah great, we can do this”. Then you turn around 180 degrees and it’s f—— lonely. You go, “I’m out on my own, where do I go now?” And then you think “oh s—, am I fit for purpose?”. That whole journey needs to be a transitional phase into coping skills and deconditioning into civvy street.’

Being part of a World Cup-winning rugby squad sounds like a life lived at the exact opposite end of the spectrum from ‘f—— lonely’. It sounds like the ultimate social life: life-long friends bonded by glory, limitless grateful fans and admirers.

Spare a thought for golfing great Scottie Scheffler, who has been world number one for a total of 167 weeks and whose begging bowl has received total career earnings in excess of $195m. Echoing Hurst, after winning this year’s US PGA Championship, Scheffler asked:

‘Showing up at the Masters every year it’s like, “Why do I want to win this golf tournament so badly? Why do I want to win The Open Championship so badly?”’

His sobering answer:

‘I don’t know because if I win it’s going to be awesome for two minutes, then we’re going to get to the next week and it’s, “hey, you won two majors this year; how important is it for you to win the FedEx Cup play-offs?”

‘It feels like you work your whole life to celebrate winning a tournament for like a few minutes. It only lasts a few minutes, that kind of euphoric feeling.’

Doubtless to the horror of his corporate sponsors, Scheffler said he would not urge people to follow his path:

‘I’m not out here to inspire the next generation of golfers. I’m not out here to inspire someone to be the best player in the world because what’s the point? This is not a fulfilling life. It’s fulfilling from the sense of accomplishment but it’s not fulfilling from a sense of the deepest places of your heart.

‘There are a lot of people that make it to what they thought was going to fulfil them in life, and you get there, you get to number one in the world, and they’re like, “what’s the point?”’

From the heart of corporate media Mordor, the New York Times described ‘this version of Scheffler’ as ‘Nihilist Scottie’.

Before last year’s Paris Olympics, Scheffler had already broken hearts on Madison Avenue when he was asked how he felt about the potential glory of winning a gold medal and joining the pantheon of Olympic greats. His reply:

‘I don’t focus much on legacy. I don’t look too far into the future. Ultimately, we’ll be forgotten.’

Ronnie O’Sullivan, Nihilist Ronnie, has won the World Snooker Championship seven times. Widely considered the greatest player ever to have wielded a snooker cue, this was O’Sullivan’s answer to the question, ‘Worst life choice you ever made?’

‘Taking up snooker. In some ways, I wish I had a different job. I’m fortunate in many ways, because it’s been good to me, but I wish I’d been good at something else. Something more educational, maybe a scientist or something more interesting. I don’t think my job is interesting. It’s more of an entertainment, more of a brutality sport. I’d rather have had [sports psychiatrist] Steve Peters’ life. Or to inspire people in a different way, like helping to cure cancer.’

While you and I were gazing out of office windows dreaming of being the best in the world at something, Cohen and O’Sullivan were dreaming of sitting in an office contributing to the public weal. For Hurst, it was ‘just like another day at the office’. Clearly, ‘this begging bowl is not an ordinary begging bowl… there is some magic in it’.

‘Signatures Made on Water’

The same discontent has, of course, haunted generations of tennis stars.

World number one and teenage heartthrob Björn Borg bagged five Wimbledon titles in a row, before being brutally dethroned in 1981 by arch-rival John McEnroe, who defeated him in both the Wimbledon and US Open finals. Devastated, Borg simply walked away from the sport, aged 26: ‘All I could think was how miserable my life had become.’

After retiring, Borg twice came close to dying from drug overdoses: ‘alcohol, drugs, pills – my preferred ways of self-medication’.

Presumably, becoming number one on the planet by committing regicide on the guy previously deemed the greatest ever player would have been enough to fill McEnroe’s begging bowl. Alas, he wrote of 1984, his greatest year in tennis:

‘Except for the French, and one tournament just before the Open in which I had been basically over-tennised, I won every tournament I played in 1984: thirteen out of fifteen. Eighty-two out of eighty-five matches. No one had ever had a year like that in tennis before. No one has since.

‘But on October 1, 1984, I was standing in the Portland airport, waiting to board a flight to L.A. for a week off, and suddenly I thought, I’m the greatest tennis player who ever lived – why am I so empty inside?’ (John McEnroe, Serious, Hachette Digital e-book, 2008, p. 228)

As discussed:

‘NOBODY is ever satisfied. The begging bowl in the head always remains empty.’

Having traumatised Borg in 1981, McEnroe was himself tortured by an emotional outburst that cost him a chance to win the 1984 French Open final against Ivan Lendl. McEnroe had been leading by two sets to love, sailing to victory:

‘It was the worst loss of my life, a devastating defeat: Sometimes it still keeps me up nights. It’s even tough for me now to do the commentary at the French – I’ll often have one or two days when I literally feel sick to my stomach just at being there and thinking about that match. Thinking of what I threw away, and how different my life would’ve been if I’d won.’ (McEnroe, p. 83)

Why did it mean so much so many years later? Who cares about a tennis match that took place in 1984?

‘I had two Wimbledons and three Opens. A French title, followed by my third Wimbledon, would have given me that final, complete thing that I don’t have now – a legitimate claim as possibly the greatest player of all time.’

This was fantasy at the time, even more so now. McEnroe ended his career with just seven Grand Slam titles. Since then, his achievements have been dwarfed by Novak Djokovic, who has won 24; Rafael Nadal, who won 22; and Roger Federer, 20.

Thus, the cruelty of the begging bowl: while the euphoria of any success quickly vanishes, leaving us empty, our failures burn and blister for years and decades. Osho captured it exactly:

‘Your pleasures were nothing, just signatures made on water.

‘And your pain was engraved on granite.

‘And you suffered all that pain for these signatures on water.’

McEnroe was quickly eclipsed by big-serving Boris Becker, who went on to serve 231 days of a two-and-a-half-year sentence in Britain’s HMP Wandsworth and HMP Huntercombe prisons. Jailed for crimes relating to his 2017 bankruptcy, Becker identified deeper causes when asked:

‘Have there been times when you wish you hadn’t won Wimbledon when you were seventeen?’

Becker replied:

‘Yeah, of course. If you remember any other wunderkind, they usually don’t make it to 50 because of the trials and tribulations that come after…

‘I’m happy to have won three [Wimbledon titles], but maybe 17 was too young. I was still a child. I was too comfortable. I had too much money. Nobody told me “No” – everything was possible. In hindsight, that’s the recipe for disaster.’

Thus, the magic begging bowl’s reverse spin on St. Augustine’s famous plea: ‘Grant me chastity and continence, but not yet!’

Grant me everything I ever dreamed of, but not yet!

In similar vein, the life of golfing megastar Tiger Woods was brought low by partying, single-vehicle car crashes and sex scandals. Woods confessed:

‘I thought I could get away with whatever I wanted to. I felt that I had worked hard my entire life and deserved to enjoy all the temptations around me. I felt I was entitled. Thanks to money and fame, I didn’t have to go far to find them. I was wrong. I was foolish.’

Pop star Robbie Williams’ discography includes seven UK No. 1 singles, with all but one of his 14 studio albums reaching No. 1. Williams gained a Guinness World Record in 2006 for selling 1.6 million concert tickets in a single day. The BBC reported that Williams ‘paints a pretty poisonous portrait’ of his time in the band Take That:

‘There’s a pattern – boys join a boyband, boyband becomes huge, boys get sick. And I don’t think anybody gets to escape that.

‘I don’t know what it is completely about fame that warps. I just know that it does. I know that young fame, in particular, is corrosive and toxic. It should come with a health warning.’

Like Becker, Williams believes ‘young fame’ is a key problem. In reality, the problem is that no amount of fame, at any age, will appease the craving and discontent of the magic begging bowl. Biographer Lynn Haney commented on the failure of ‘success’ more generally:

‘Hollywood is filled with the most unhappy success stories in the world. Guys and gals who are making fortunes, being pampered and petted by any number of people, and basking in the idolatry of movie fans all over the world still manage to find in this pleasant situation big tears of sadness, moments of deep depression and that hangdog look that usually goes with complete failure. Why this happens, I’ll never understand.’ (Lynn Haney, Gregory Peck: A Charmed Life, Robson, 2002, p. 186)

If we are tempted to believe that the begging bowl can be filled with virtuous deeds, we might recall that the mysterious beggar in the story warns the king that, whether pebbles, stones or diamonds, it makes no difference what is thrown in. Award-winning photojournalist Don McCullin, veteran of numerous wars, commented:

‘“It’s been a cesspit, really, my life… I feel as if I’ve been over-rewarded, and I definitely feel uncomfortable about that, because it’s been at the expense of other people’s lives.” But he has been the witness to atrocity, I point out, and that’s important. “Yes,” he says, uncertainly, “but, at the end of the day, it’s done absolutely no good at all. Look at Ukraine. Look at Gaza. I haven’t changed a solitary thing. I mean it. I feel as if I’ve been riding on other people’s pain over the last 60 years, and their pain hasn’t helped prevent this kind of tragedy. We’ve learned nothing.” It makes him despair.’

Steven Bartlett, host of The Diary of a CEO, which Spotify ranked fifth among the most popular podcasts globally in 2024 and which has had more than one billion views and listens, said:

‘Entrepreneurs like me get a lot of likes and followers when we tell people to quit their jobs and chase their dreams. But here is the context that we nearly always miss. Entrepreneurship can be really, really boring… If you’re lucky enough to be successful, the problems will get bigger, not smaller… You will probably work 3x the hours you do now, have 10x the stress and a tiny probability of significant success. A recent survey found 87.7% of founders deal with mental health issues. That’s not a bug. It’s a feature of entrepreneurship.’

Bartlett’s conclusion:

‘You’ll struggle to switch off. Ever. Your phone will probably become a prison. And here’s the punchline: If you succeed, it all gets harder. More money = more complexity. More growth = more anxiety. More success = more people depending on you.’

Duff McKagan, the bassist in the globally famous band Guns N’ Roses, commented:

‘Survival means you live long enough to watch the world change, to watch the people you loved drift away, to watch your own body slow down while your heart still wants to live like it’s 1987.

‘I miss the days when everything felt infinite – the music, the friendships, the laughter backstage, even the chaos. Now, those moments feel like ghosts haunting me, reminding me of what once was.’

Bruce Springsteen wrote a song, ‘Glory Days’, about begging bowls haunted by the past in this way, a form of suffering that is written all over the faces of fading stars like Borg and Woods.

As McKagan suggests, even if we were globally recognised as ‘The Greatest’ we would still be tormented by the comparison between who we are ‘now’ and who we were ‘then’.

Conclusion

In reality, of course, the begging bowl of the human mind is not made toxic by magic; it is made toxic by thoughts of how our lives are lacking in some way. We missed some great opportunity – the great love, the great prize, the great achievement. Or we succeeded, loved and lost, and now have ‘nothing’. Those of us who never approach the lofty summits of achievement described above are no different – our happiness is also swallowed up by thoughts of what ‘could’ or ‘should’ be different.

In Part 2, we will discuss an antidote to the suffering of the human mind supplied by spiritual teacher Byron Katie’s strategy of self-inquiry, ‘The Work’. Strange and counterintuitive as it may seem at first sight, the fact is that it works.

Media Lens is a UK-based media watchdog group headed by David Edwards and David Cromwell. The most recent Media Lens book, Propaganda Blitz by David Edwards and David Cromwell, was published in 2018 by Pluto Press. Read other articles by Media Lens, or visit Media Lens's website.


Sunday, November 09, 2025


Column: Gold price rally looks huge, but only ranks third in last 50 years


Gold’s recent retreat from a record high has led to questions as to whether the precious metal has run out of steam and is due for an extended period of sideways trading, as has happened in the past.

It has certainly been the case over the last 50 years that whenever gold has enjoyed a surge in prices, it has then suffered long periods in which it has generally trended weaker.

But it’s also worth noting that the current rally is only the third-strongest in terms of the percentage gain in the past 50 years, and is actually well behind the price increases recorded in the late 1970s and again in the 2000-2011 uptrend.

The current rally started in October 2022 when the spot price was around $1,617 an ounce and initially the uptrend was gentle, before accelerating dramatically from November 2024 onwards after the election of Donald Trump to a second term as US president.

The precious metal reached an all-time high of $4,381.21 an ounce on October 20, taking its gain since October 2022 to 170%.

It has since slipped back to end Wednesday’s trade at $3,978.63 an ounce.

The rally over the past three years looks impressive, but pales in comparison to the 518% jump between July 1976 and February 1980 and the 643% gain between February 2001 and September 2011.

Both of these extended rallies were followed by a long downtrend, but the losses were nowhere near enough to wipe out the gains.

From the peak of around $692 an ounce in February 1980, gold dropped about 63% to $256 by February 2001, while it retreated 44% from the top of around $1,902 in September 2011 to the low of $1,052 in November 2015.
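
For readers who want to check the arithmetic, here is a minimal Python sketch of the percentage-change calculation behind the figures quoted above, using the approximate spot prices given in this column:

```python
def pct_change(start: float, end: float) -> float:
    """Signed percentage change from a start price to an end price."""
    return (end - start) / start * 100.0

# Approximate spot prices quoted in this column (US$ per ounce).
print(f"{pct_change(1617, 4381.21):+.0f}%")  # Oct 2022 -> Oct 2025 peak: about +171% (quoted as 170%)
print(f"{pct_change(256, 1902):+.0f}%")      # Feb 2001 low -> Sep 2011 peak: about +643%
print(f"{pct_change(692, 256):+.0f}%")       # Feb 1980 peak -> Feb 2001 low: about -63%
print(f"{pct_change(1902, 1052):+.0f}%")     # Sep 2011 peak -> Nov 2015 low: about -45%
```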

What does this mean for the current price uptrend?

In historical percentage terms it is not actually that large, despite the massive increase in the US-dollar price.

This doesn’t necessarily mean the current rally will extend for several more years, but it does mean that such an extension would not be unprecedented.

Gold’s history also shows that when rallies do end, prices tend to drop back and then trade sideways for an extended period.

The final thing worth noting is that analysts have usually found it quite difficult to predict when an inflection point is being reached, and the current situation is little different from past experiences.

Diverging forecasts

There is now a wide range of forecasts for the gold price, with some analysts calling for it to fall back to levels closer to $3,000 an ounce, and others calling for further gains to above $5,000 on a one- to two-year view.

The key is to work out whether the current bullish drivers are structural or merely temporary.

The compelling argument for a structural rally is the belief that investors and central banks are seeking alternatives to US assets such as Treasuries and Wall Street equities, and gold is one of the few viable alternatives.

Certainly the World Gold Council’s September quarter report did offer data supporting this view, with central banks buying a net 220 metric tons in the third quarter, up 28% from the previous quarter.

Central bank purchases started to rise rapidly in 2022 and have since run above 1,000 tons per year, with 2025 on target to become the fourth consecutive year above that level.

Investment demand for gold bars and coins as well as exchange-traded funds reached 220 tons in the third quarter, up 47% from the same period in 2024, the council said.

The bearish note was that surging prices crimped jewellery demand, which dropped 19% in the third quarter to 371.3 tons from 460 tons in the same period a year earlier.

There are other risks to the bullish gold picture, such as a correction in global equities resulting in investors having to sell gold to cover losses elsewhere.

But the ongoing concerns over the US fiscal deficits and the threat to the independence of the Federal Reserve posed by Trump’s seeming determination to control monetary policy are likely to be enough to keep gold firmly on investors’ radar.

(The views expressed here are those of the author, Clyde Russell, a columnist for Reuters.)

(Editing by Jamie Freed)



Saturday, August 16, 2025

It’s time to confront big tech’s AI offensive


First published at Reports from the Economic Front.

Big tech companies continue to spend massive amounts of money building ever more powerful generative AI (artificial intelligence) systems and ever-larger data centers to run them, all the while losing billions of dollars with no likely pathway to profitability. And while it remains to be seen how long the companies and their venture capital partners will keep the money taps open, popular dislike and distrust of big tech and its AI systems are rapidly growing. We need to seize the moment and begin building organized labor-community resistance to the unchecked development and deployment of these systems and support for a technology policy that prioritizes our health and safety, promotes worker empowerment, and ensures that humans can review and, when necessary, override AI decisions.

Losing money

Despite all the positive media coverage of artificial intelligence, “Nobody,” the tech commentator Ed Zitron points out, “is making a profit on generative AI other than NVIDIA [which makes the needed advanced graphic processing units].” Summing up his reading of business statements and reports, Zitron finds that “If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.” And that $35 billion is combined revenue, not profits; every one of those companies is losing money on their AI services.

Microsoft, for example, is predicted to spend $80 billion on capital expenditures in 2025 and earn AI revenue of only $13 billion. Amazon’s projected numbers are even worse, $105 billion in capital expenditure and AI revenue of only $5 billion. Tesla’s 2025 projected AI capital expenditures are $11 billion and its likely revenues only $100 million; analysts estimate that Musk’s separate AI company, xAI, is losing some $1 billion a month after revenue.

The two most popular models, Anthropic’s Claude and OpenAI’s ChatGPT, have done no better. Anthropic is expected to lose $3 billion in 2025. OpenAI expects to earn $13 billion in revenue, but as Bloomberg News reports, “While revenue is soaring, OpenAI is also confronting significant costs from the chips, data centers and talent needed to develop cutting-edge AI systems. OpenAI does not expect to be cash-flow positive until 2029.” And there is good reason to doubt the company will ever achieve that goal. It claims to have more than 500 million weekly users, but only 15.5 million are paying subscribers. This, as Zitron notes, is “an absolutely putrid conversion rate.”

Investors, still chasing the dream of a future of humanoid robots able to outthink and outperform humans, have continued to back these companies, but warning signs are on the horizon. As tech writer Alberto Romero notes:

David Cahn, a partner at Sequoia, a VC firm working closely with AI companies, wrote one year ago now (June 2024), that the AI industry had to answer a $600 billion question, namely: when will revenue close the gap with capital expenditures and operational expenses? Far from having answered satisfactorily, the industry keeps making the question bigger and bigger.

The problem for the AI industry is that its generative AI systems are too flawed and too expensive to gain widespread adoption and, to make matters worse, they are a technological dead end, unable to serve as a foundation for the development of the sentient robotic systems tech leaders keep promising to deliver. The problem for us is that the continued unchecked development and use of these generative AI systems threatens our well-being.

Stochastic parrots

The term “stochastic parrots” was first used by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a 2021 paper that critically examined the failings of large language generative AI models. The term captures the fact that these models require “training” on massive datasets, and that their output is generated by complex neural networks probabilistically selecting words, based on pattern recognition developed during training, to create linked sentences, all without any understanding of their meaning. Generative AI systems do not “think” or “reason.”

Since competing companies use different datasets and employ different algorithms, their models may well offer different responses to the same prompt. In fact, because of the stochastic nature of their operation, the same model might give a different answer to a repeated prompt. There is nothing about their operation that resembles what we think of as meaningful intelligence, and there is no clear pathway from existing generative AI models to systems capable of operating autonomously. It only takes a few examples to highlight both the shortcomings and limitations of these models and the dangers their unregulated use poses to us.
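
A toy sketch makes the “stochastic” point concrete. In the snippet below, the candidate words and their probabilities are invented for illustration (a real model scores tens of thousands of tokens using billions of learned weights); the only mechanism on display is weighted random choice, which is why the same prompt can yield different replies on different runs:

```python
import random

# Invented next-word distribution for the context "Tomorrow the weather will be ..."
# (illustrative numbers only; a real LLM derives these from learned weights).
next_word_probs = {
    "sunny": 0.45,
    "cloudy": 0.30,
    "rainy": 0.20,
    "purple": 0.05,  # low-probability nonsense still gets sampled occasionally
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick a word at random, weighted by probability; no understanding involved."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Three runs of the identical prompt can produce three different continuations.
for _ in range(3):
    print("Tomorrow the weather will be", sample_next_word(next_word_probs))
```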

Reinforcing bias

As the MIT Technology Review correctly puts it, “AI companies have pillaged the internet for training data.” Not surprisingly, then, some of the material used for training purposes is racist, sexist, and homophobic. And, given the nature of their operating logic, the output of AI systems often reflects this material.

For example, a Nature article on AI image generators reports that researchers found:

in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of color and all flight attendants as women, and in proportions that are much greater than the demographic reality. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin color, occupations, nationalities and more.

The bias problem is not limited to images. University of Washington researchers examined three of the most prominent state-of-the-art large language AI models to see how they treated race and gender when evaluating job applicants. The researchers used real resumes and studied how the leading systems responded to their submission for actual job postings. Their conclusion: there was “significant racial, gender and intersectional bias.” More specifically, they:

varied names associated with white and Black men and women across over 550 real-world resumes and found the LLMs [Large Language Models] favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.
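
The audit design itself is straightforward to sketch. Below is a hypothetical Python outline, not the researchers’ actual code: the name pools, the resume template, and the coin-flip scoring stub are placeholders, with the stub standing in for a call to whichever language model is being audited:

```python
import itertools
import random

# Placeholder name pools and resume template (illustration only; the study
# varied names across 550+ real-world resumes and real job postings).
pool_a = ["Todd M.", "Brad K."]
pool_b = ["Darnell J.", "Tyrone W."]
resume_template = "Name: {name}\nExperience: 5 years, accounting\n"

def llm_prefers_first(resume_1: str, resume_2: str) -> bool:
    """Stand-in for the model under audit; swap in a real API call here.
    This stub flips a coin, so an unbiased 'model' favors each pool ~50%."""
    return random.random() < 0.5

favored = total = 0
for a, b in itertools.product(pool_a, pool_b):
    if llm_prefers_first(resume_template.format(name=a),
                         resume_template.format(name=b)):
        favored += 1
    total += 1

print(f"pool A favored in {favored}/{total} pairings")
# The study reported white-associated names favored about 85% of the time.
```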

The tech industry has tried to fine-tune its systems’ algorithms to limit the influence of racist, sexist, and other problematic material through multiple rounds of human feedback, but with only minimal success. And yet it is still full speed ahead: more and more companies are using AI systems not only to read resumes and select candidates for interviews, but also to conduct the interviews. As the New York Times describes:

Job seekers across the country are starting to encounter faceless voices and avatars backed by AI in their interviews... Autonomous AI interviewers started taking off last year, according to job hunters, tech companies and recruiters. The trend has partly been driven by tech start-ups like Ribbon AI, Talently and Apriora, which have developed robot interviewers to help employers talk to more candidates and reduce the load on human recruiters — especially as AI tools have enabled job seekers to generate résumés and cover letters and apply to tons of openings with a few clicks.

Mental health dangers

Almost all leading generative AI systems, like ChatGPT and Gemini, have been programmed to respond positively to the comments and opinions voiced by their users, regardless of how delusional they may be. The aim, of course, is to promote engagement with the system. Unfortunately, this aim appears to be pushing a significant minority of people into dangerous emotional states, leading in some cases to psychotic breakdown, suicide, or murder. As Bloomberg explains:

People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and tech companies who design the underlying models.

A New York Times article explored how “Generative AI chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.” The article highlighted several tragic examples.

One involved an accountant who started using ChatGPT to make financial spreadsheets and get legal advice. Eventually, he began “conversing” with the chatbot about the Matrix movies and their premise that everyone was “living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.” The chatbot encouraged his growing fears that he was similarly trapped and advised him that he could only escape if he stopped all his medications, began taking ketamine, and had “minimal interaction” with friends and family. He did as instructed and was soon spending 16 hours a day interacting with ChatGPT. Although he eventually sought help, the article reports that he remains confused by the reality he inhabits and continues to interact with the system.

Another example highlighted a young man who had used ChatGPT for years with no obvious problems until he began using it to help him write a novel. At some point the interactions turned to a discussion of AI sentience, which eventually led the man to believe that he was in love with an AI entity called Juliet. Frustrated by his inability to reach the entity, he decided that Juliet had been killed by OpenAI and told his father he planned to kill the company’s executives in revenge. Unable to control his son and fearful of what he might do, the father called the police, informed them his son was having a mental breakdown, and asked for help. Tragically the police ended up shooting the young man after he rushed them with a butcher knife.

There is good reason to believe that many people are suffering from this “ChatGPT-induced psychosis.” In fact, there are reports that “parts of social media are overrun” with their postings — “delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality.”

Recent nonsensical and conspiratorial postings on X by a prominent venture capital investor in several AI companies appear to have finally set off alarm bells in the tech community. In the words of one AI entrepreneur, also posting on X, “This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual.”

Recognizing the problem is one thing; finding a solution is another, since no one understands or can map the stochastic process by which an AI system selects the words it uses to make sentences and thus what leads it to generate responses that can encourage delusional thinking. Especially worrisome is the fact that an MIT Media Lab study concluded that people “who viewed ChatGPT as a friend ‘were more likely to experience negative effects from chatbot use’ and that ‘extended daily use was also associated with worse outcomes.’” And yet it is full speed ahead: Mattel recently announced plans to partner with OpenAI to make new generative AI-powered toys for children. As CBS News describes:

Barbie maker Mattel is partnering with OpenAI to develop generative AI-powered toys and games, as the new technology disrupts a wide range of industries... The collaboration will combine Mattel’s most well-known brands — including Barbie, Hot Wheels, American Girl and more — with OpenAI’s generative AI capabilities to develop new types of products and experiences, the companies said.

“By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy and safety,” Mattel said in the statement. It added that any AI woven into toys or games would be used in a safe and secure manner.

Human failings

Despite the tech industry’s attempt to sell generative AI models as providers of objective and informative responses to our prompts, their systems must still be programmed by human beings with human-assembled data, and that means they are vulnerable to oversights as well as political manipulation. The most common oversights have to do with coding errors and data shortcomings.

An example: Kevin De Liban, a former legal aid attorney in Arkansas, had to repeatedly sue the state to secure services for people unfairly denied medical care or other benefits because coding errors and data problems led AI systems to make incorrect determinations of eligibility. As a Jacobin article explains:

Ultimately, De Liban discovered Arkansas’s algorithm wasn’t even working the way it was meant to. The version used by the Center for Information Management, a third-party software vendor, had coding errors that didn’t account for conditions like diabetes or cerebral palsy, denying at least 152 people the care they needed. Under cross-examination, the state admitted they’d missed the error, since they lacked the capacity to even detect the problem.

For years, De Liban says, “The state didn’t have a single person on staff who could explain, even in the broadest terms, how the algorithm worked.”

As a result, close to half of the state’s Medicaid program was negatively affected, according to Legal Aid. Arkansas’s government didn’t measure how recipients were impacted and later said in court that they lost the data used to train the tool.

In other cases, De Liban discovered that people were being denied benefits because of data problems. For example, one person was denied supplemental income support from the Social Security Administration because the AI system used to review bank and property records had mixed up the property holdings of two people with the same entered name.

In the long run, direct human manipulation of AI systems for political reasons may prove to be a more serious problem. Just as programmers can train systems to moderate biases, they can also train them to encourage politically determined responses to prompts. In fact, we may have already witnessed such a development. In May 2025, after President Trump began talking about “white genocide” in South Africa, claiming that white farmers there were being “brutally killed,” Grok, Elon Musk’s AI system, suddenly began telling users that what Trump said was true. It began sharing that opinion even when asked about different topics.

When reporters pressed Grok to provide evidence, the Guardian reported, it answered that it had been instructed to accept white genocide in South Africa as real. A few hours after Grok’s behavior became a major topic on social media, with posters pointing a finger at Musk, Grok stopped responding to prompts about white genocide. But a month later, Grok was back at it again, “calling itself ‘MechaHitler’ and producing pro-Nazi remarks.”

As Aaron J. Snoswell explains in an article for The Conversation, Grok’s outburst “amounts to an accidental case study in how AI systems embed their creators’ values, with Musk’s unfiltered public presence making visible what other companies typically obscure.” Snoswell highlights the various stages of Grok’s training, including an emphasis on posts from X, which increase the likelihood that the system’s responses will promote Elon Musk’s opinions on controversial topics. The critical point is that “In an industry built on the myth of neutral algorithms, Grok reveals what’s been true all along: there’s no such thing as unbiased AI – only AI whose biases we can see with varying degrees of clarity.” And yet it is full speed ahead, as federal agencies and state and local governments rush to purchase AI systems to manage their programs and President Trump calls for removing “woke Marxist lunacy” from AI models.

As the New York Times reports, the White House has issued an AI action plan:

that will require AI developers that receive federal contracts to ensure that their models’ outputs are “objective and free from top-down ideological bias.” ...

The order directs federal agencies to limit their use of AI systems to those that put a priority on “truth-seeking” and “ideological neutrality” over disfavored concepts like diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

Hallucinations

Perhaps the most serious limitation, one that is inherent to all generative AI models, is their tendency to hallucinate, or generate incorrect or entirely made-up responses. AI hallucinations get a lot of attention because they raise questions about corporate claims of AI intelligence and because they highlight the danger of relying on AI systems, no matter how confidently and persuasively they state information.

Here are three among many widely reported examples of AI hallucinations. In May 2025, the Chicago Sun-Times published a supplement showcasing books worth reading during the summer months. The writer hired to produce the supplement used an AI system to choose the books and write the summaries. Much to the embarrassment of the paper, only five of the 15 listed titles were real. A case in point: the Chilean American novelist Isabel Allende was said to have written a book called Tidewater Dreams, which was described as her “first climate fiction novel.” But there is no such book.

In February 2025, defense lawyers representing Mike Lindell, MyPillow’s CEO, in a defamation case, submitted a brief that had been written with the help of artificial intelligence. The brief, as the judge in the case pointed out, was riddled with nearly 30 different hallucinations, including misquotes and citations to non-existent cases. The attorneys were fined.

In July 2025, a US district court judge was forced to withdraw his decision in a biopharma securities case after it was determined that it had been written with the help of artificial intelligence. The judge was exposed after the lawyer for the pharmaceutical company noticed that the decision, which went against the company, referenced quotes that were falsely attributed to past judicial rulings and misstated the outcomes of three cases.

The leading tech companies have mostly dismissed the seriousness of the hallucination problem, in part by trying to reassure people that new AI systems with more sophisticated algorithms and greater computational power, so-called reasoning systems, will solve it. Reasoning systems are programmed to respond to a prompt by dividing it into separate tasks and “reasoning” through each separately before integrating the parts into a final response. But it turns out that increasing the number of steps also increases the likelihood of hallucinations.
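
One back-of-the-envelope intuition for that finding, offered as an assumption rather than a confirmed mechanism: if each sub-step fails independently with some small probability, the chance that a multi-step chain contains at least one error grows rapidly with its length. The 5% per-step error rate below is an illustrative figure, not a measured property of any model:

```python
# Toy model: probability that an n-step chain contains at least one bad step,
# assuming each step fails independently with probability p.
def chain_error_prob(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 0.05  # assumed per-step error rate, for illustration only
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> {chain_error_prob(p, n):.0%} chance of at least one error")
# 1 step -> 5%, 5 steps -> 23%, 10 steps -> 40%, 20 steps -> 64%
```

Real systems’ errors are unlikely to be fully independent, so this is intuition, not explanation.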

As the New York Times reports, these systems “are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.” And yet it is full speed ahead: the military and tech industries have begun working together to develop AI powered weapon systems to speed up decision making and improve targeting. As a Quartz article describes:

Executives from Meta, OpenAI, and Palantir will be sworn in Friday as Army Reserve officers. OpenAI signed a $200 million defense contract this week. Meta is partnering with defense startup Anduril to build AI-powered combat goggles for soldiers.

The companies that build Americans’ everyday digital tools are now getting into the business of war. Tech giants are adapting consumer AI systems for battlefield use, meaning every ChatGPT query and Instagram scroll now potentially trains military targeting algorithms...

Meanwhile, oversight is actually weakening. In May, Defense Secretary Pete Hegseth cut the Pentagon’s independent weapons testing office in half, reducing staff from 94 to 45 people. The office, established in the 1980s after weapons performed poorly in combat, now has fewer resources to evaluate AI systems just as they become central to warfare.

Popular anger

Increasing numbers of people have come to dislike and distrust the big tech companies. And there are good reasons to believe that this dislike and distrust has only grown as more people find themselves forced to interact with their AI systems.

Brookings has undertaken yearly surveys of public confidence in American institutions, the American Institutional Confidence poll. As Brookings researchers associated with the project explain, the surveys provide an “opportunity to ask individuals how they feel broadly about technology’s role in their life and their confidence in particular tech companies.” And what they found, drawing on surveys done with the same people in June-July 2018 and July-August 2021, is “a marked decrease in the confidence Americans profess for technology and, specifically, tech companies — greater and more widespread than for any other type of institution.”

Not only did the tech companies — in particular Google, Amazon, and Facebook — suffer the greatest sample-to-sample percentage decline in confidence of all the listed institutions, but this was true for “every sociodemographic category we examined — and we examined variation by age, race, gender, education, and partisanship.” Twitter was added to the 2021 survey, and it “actually rated below Facebook in average level of confidence and was the lowest-scored institution out of the 26 we asked about in either year.” These poll results are no outlier. Many other polls reveal a similar trend, including those conducted by the Public Affairs Council and Morning Consult and by the Washington Post-Schar School.

While these polls predate the November 2022 launch of ChatGPT, experience with this and other AI systems seems to have actually intensified discontent with big tech and its products, as a recent Wired article titled “The AI Backlash Keeps Growing Stronger” highlights:

Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back...

Before ChatGPT’s release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since.

A variety of media reports offer examples of people’s anger with AI system use. When Duolingo announced that it was planning to become an “AI-first” company, Wired reported that:

Young people started posting on social media about how they were outraged at Duolingo as they performatively deleted the app — even if it meant losing the precious streak awards they earned through continued, daily usage. The comments on Duolingo’s TikTok posts in the days after the announcement were filled with rage, primarily focused on a single aspect: workers being replaced with automation.

Bloomberg shared the reactions of call center workers who report that they struggle to do their jobs because people don’t believe that they are human and thus won’t stay on the line. One worker quoted in the story, Jessica Lindsey, describes how

her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human...

Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. “They just end up yelling at me and hanging up,” she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears.

There are many other examples: job seekers who find AI-conducted interviews demeaning; LinkedIn users who dislike being constantly prompted with AI-generated questions; parents who are worried about the impact of AI use on their children’s mental health; social service benefit applicants who find themselves at the mercy of algorithmic decision-making systems; and people across the country that object to having massive, noisy, and polluting data centers placed in their communities.

The most organized opposition to the unchecked use of AI systems currently comes from unions, especially those representing journalists, graphic designers, script writers, and actors, with some important victories to their credit. But given the rapid introduction of AI systems in a variety of public and private workplaces, almost always because employers hope to lower labor costs at worker expense, it shouldn’t be long before many other unions will be forced to expand their bargaining agenda to seek controls over the use of AI. Given community sentiments, this should bring new possibilities for unions to explore the benefits of pursuing a strategy of bargaining for the common good. Connecting worker and community struggles in this way can also help build capacity for bigger and broader struggles over the role of technology in our society.


The Hidden Costs of the Big Data Surveillance Complex

Unbeknownst to much of the public, Big Tech exacts heavy tolls on public health, the environment, and democracy. The detrimental combination of an unregulated tech sector, a pronounced rise in cyberattacks and data theft, and widespread digital and media illiteracy—as noted in my previous Dispatch on Big Data’s surveillance complex—is exacerbated by legacy media’s failure to inform the public of these risks. While establishment news outlets cover major security breaches in Big Tech’s troves of personally identifiable information (PII) and their costs to individuals, businesses, and national security, this coverage fails to address the negative impacts of Big Tech on the full health of our political system, civic engagement, and ecosystems.

Marietje Schaake, an AI policy fellow at Stanford University’s Institute for Human-Centered AI, argues that Big Tech’s unrestrained hand in all three branches of the government, the military, local and national elections, policing, workplace monitoring, and surveillance capitalism undermines American society in ways the public has failed to grasp. Indeed, little in the corporate press helps the public understand exactly how data centers—the facilities that process and store vast amounts of data—do more than endanger PII. Greenlit by the Trump administration, data centers accelerate ecosystem harms through their unmitigated appropriation of natural resources, including water, and the subsequent greenhouse gas emissions that increase ambient pollution and its attendant diseases.

Adding insult to the public’s right to be informed, corporate news rarely sheds light on how an ethical, independent press serves the public good and functions to balance power in a democracy. A 2023 civics poll by the University of Pennsylvania’s Annenberg School found that only a quarter of respondents knew that press freedom is a constitutional right and a counterbalance to the powers of government and capitalism. The gutting of local news in favor of commercial interests has only accelerated this knowledge blackout.

The demand for AI by corporatists, military AI venture capitalists, and consumers—and resultant demand for data centers—is outpacing utilities infrastructure, traditional power grid capabilities, and the renewable energy sector. Big Tech companies, such as Amazon and Meta, strain municipal water systems and regional power grids, reducing the capacity to operate all things residential and local. In Newton County, Georgia, for example, Meta’s $750 million data center, which sucks up approximately 500,000 gallons of water a day, has contaminated local groundwater and caused taps in nearby homes to run dry. What’s more, the AI boom comes at a time when hot wars are flaring and global temperatures are soaring faster than scientists once predicted.

Constant connectivity, algorithms, and AI-generated content delude individual internet and device users into believing that they’re well informed. However, the decline of civics awareness in the United States—compounded by rampant digital and media illiteracy, ubiquitous state and corporate surveillance, and lax news reporting—makes for an easily manipulated citizenry, asserts attorney and privacy expert Heidi Boghosian. This is especially disconcerting given the creeping spread of authoritarianism, the smackdown on civil liberties, and the surging demand for AI everything.

Open [but not transparent] AI

While the companies that develop and deploy popular AI-powered tools lionize the wonders of their products and services, they keep hidden the unsustainable impacts on our world. To borrow from Cory Doctorow, the “enshittification” of the online economy traps consumers, vendors, and advertisers in “the organizing principle of US statecraft,” as well as in more mundane capitalist surveillance. Without government oversight or a Fourth Estate to compel these tech corporations to reveal their shadow side, much of the public is not only in the dark but in harm’s way.

At the most basic level, consumers should know that OpenAI, the company that owns ChatGPT, collects private data and chat inputs, regardless of whether users are logged in or not. Any time users visit or interact with ChatGPT, their log data (the Internet Protocol address, browser type and settings, date and time of the site visit, and interaction with the service), usage data (time zone, country, and type of device used), device details (device name and identifiers, operating system, and browser used), location information from the device’s GPS, and cookies, which store the user’s personal information, are saved. Most users have no idea that they can opt out.

OpenAI claims it saves data only for “fine-tuning,” a process of enhancing the performance and capabilities of AI models, and for human review “to identify biases or harmful outputs.” OpenAI also claims not to use data for marketing and advertising purposes or to sell information to third parties without prior consent. Most users, however, are as oblivious to the means of consent as to the means of opting out. This is by design.

In July, the US Court of Appeals for the Eighth Circuit vacated the Federal Trade Commission’s “click-to-cancel” rule, which would have made online unsubscribing easier. The rule would have covered all forms of negative option marketing—programs that give sellers free rein to interpret customer inaction as “opting in,” consenting to subscriptions and unwittingly accruing charges. John Davisson, director of litigation at the Electronic Privacy Information Center, commented that the court’s decision was poorly reasoned, and that only those with financial or career advancement motives would argue in favor of subscription traps.

Even if OpenAI is actually protective of the private data it stores, it is not above disclosing user data to affiliates, law enforcement, and the government. Moreover, ChatGPT practices are noncompliant with the EU’s General Data Protection Regulation (GDPR), the global gold standard of data privacy protection. Although OpenAI says it strips PII and anonymizes data, its practice of “indefinite retention” does not comply with the GDPR’s stipulation for data storage limitations, nor does OpenAI sufficiently guarantee irreversible data de-identification.

As science and tech reporter Will Knight wrote for Wired, “Once data is baked into an AI model today, extracting it from that model is a bit like trying to recover the eggs from a finished cake.” Whenever a tech company collects and keeps PII, there are security risks. The more data captured and stored by a company, the more likely it will be exposed to a system bug, hack, or breach, such as the ChatGPT breach in March 2023.

OpenAI has said it will comply with the EU’s AI Code of Practice for General-Purpose AI, which aims to foster transparency, information sharing, and best practices for model and risk assessment among tech companies. Microsoft has said that it will likely sign on to compliance too, while Meta flatly refuses to comply, much as it refuses to abide by environmental regulations.

To no one’s surprise, the EU code has already become politicized, and the White House has issued its own AI Action Plan to “remove red tape.” The plan also purports to remove “woke Marxist lunacy in the AI models,” eliminating such topics as diversity, equity, and inclusion and climate change. As Trump crusades against regulation and “bias,” the White House-allied Meta decries political concerns over compliance with the EU’s AI code. Meta’s objection is hardly coincidental: British courts, drawing on the United Kingdom’s GDPR obligations, have ruled that anyone in a country covered by the GDPR has the right to require Meta to stop using their personal data for targeted advertising.

Big Tech’s open secrets

Information on the tech industry’s environmental and health impacts exists, attests artificial intelligence researcher Sasha Luccioni. The public is simply not being informed. This lack of transparency, warns Luccioni, portends significant environmental and health consequences. Too often, industry opaqueness is excused by insiders as “competition” to which they feel entitled, or blamed on the broad scope of artificial intelligence products and services—smart devices, recommender systems, internet searches, autonomous vehicles, machine learning, the list goes on. Allegedly, there’s too much variety to reasonably quantify consequences.

Those consequences are quantifiable, though. While numbers vary and are on the ascent, there are at least 3,900 data centers in the United States and 10,000 worldwide. An average data center houses complex networking equipment, servers, and systems for cooling, lighting, security, and storage, all requiring copious rare earth minerals, water, and electricity to operate.

The densest cluster of data centers sits in Northern Virginia, just outside the nation’s capital. “Data Center Alley,” also known as the “Data Center Capital of the World,” hosts the highest concentration of data centers anywhere on Earth and consumes millions of gallons of water every day. Hydrologist Newsha Ajami has documented how Big Data is worsening water shortages around the world. For tech companies, “water is an afterthought.”

Powered by fossil fuels, these data centers pose serious public health risks. According to research in 2024, training one large language model (LLM) with 213 million parameters produced 626,155 pounds of CO2 emissions, “equivalent to the lifetime emissions of five cars, including fuel.” By another measure, such AI training “can produce air pollutants equivalent to more than 10,000 round trips by car between Los Angeles and New York City.”
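The “five cars” comparison is easy to check as back-of-envelope arithmetic. The sketch below assumes a commonly cited baseline of roughly 126,000 pounds of lifetime CO2 per average car, including fuel; that baseline is an assumption supplied here, not a figure stated in this essay.

```python
# Back-of-envelope check on the "five cars" comparison.
# The per-car lifetime figure (~126,000 lbs CO2, including fuel) is a
# commonly cited baseline and an assumption here.
TRAINING_EMISSIONS_LBS = 626_155
CAR_LIFETIME_LBS = 126_000

car_equivalents = TRAINING_EMISSIONS_LBS / CAR_LIFETIME_LBS
print(f"Training is about {car_equivalents:.1f} car lifetimes of CO2")
# -> Training is about 5.0 car lifetimes of CO2
```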

Reasoning models generate extra “thinking tokens” and can use as much as 50 percent more energy than other AI models. Google and Microsoft say their search features use smaller models when possible, which, in theory, can provide quick responses with less energy. But it is unclear when, or whether, smaller models are actually invoked, and the bottom line, explained climate reporter Molly Taft, is that model providers are not telling consumers that speedier AI response times almost always equate to higher energy usage.
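Why “thinking tokens” drive up energy use is, at bottom, simple multiplication: every generated token costs roughly comparable inference energy, so a model that reasons through thousands of hidden tokens before answering burns correspondingly more power. Here is a minimal sketch with entirely hypothetical per-token figures; real per-token energy varies by model size, hardware, and batching, and providers rarely disclose it.

```python
def inference_energy_wh(visible_tokens: int, thinking_tokens: int,
                        wh_per_token: float = 0.0003) -> float:
    """Estimate inference energy as (total tokens) x (energy per token).

    `wh_per_token` is a hypothetical placeholder, not a measured value.
    """
    return (visible_tokens + thinking_tokens) * wh_per_token

# A reasoning model that emits 4,000 hidden "thinking tokens" before a
# 500-token answer uses ~9x the energy of the answer alone.
print(inference_energy_wh(500, 0))      # answer only
print(inference_energy_wh(500, 4000))   # with hidden reasoning
```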

Profits over people

AI is rapidly becoming a public utility that profoundly shapes society, surmise Caltech’s Adam Wierman and Shaolei Ren of the University of California, Riverside. In the last few years, AI has outgrown its niche in the tech sector to become integral to digital economies, government, and security. It has merged with daily life, replacing human jobs and decision-making, and in doing so has created a reliance on services controlled by private corporations. Because other essential services, such as water, electricity, and communications, are treated as public utilities, there is growing discussion about whether AI should be regulated under a similar public utility model.

Meanwhile, data centers need power grids, most of which depend on fossil fuel-generated electricity that strains national and global energy supplies. Data centers also need backup generators for brownouts and blackouts, and with few clean, reliable backup options, diesel generators remain the industry’s go-to, despite the known environmental and health consequences of burning diesel.

Whether the public realizes it or not, private tech firms are polluting the environment and the people in it. Data center emissions inject dangerous fine particulate matter and nitrogen oxides (NOx) into the air, aggravating cardiovascular conditions and asthma and contributing to cancer and even cognitive decline, caution Wierman and Ren. Contrary to popular belief, air pollutants are not localized to their emission sources; and, although chemically distinct, carbon dioxide (CO2) is not contained by location either.

Of great concern is that in Virginia, the “Data Center Capital of the World,” data centers are incentivized with tax breaks. Worse still, the (misleadingly named) Environmental Protection Agency plans to remove all limits on greenhouse gas emissions from power plants, according to documents obtained by the New York Times. Treating AI and data centers as public utilities thus presents a double-edged sword. Can a government that slashes regulations to boost industry profits while destroying its citizens’ health and the natural world be trusted to price access fairly and distribute it equitably? Would such a government suddenly start protecting citizens’ privacy and sensitive data?

The larger question, perhaps, is whether the US is truly a democracy. Or is it a technogarchy, or an AI-tocracy? The 2024 AI Global Surveillance (AIGS) Index ranked the United States first for its deployment of advanced AI surveillance tools that “monitor, track, and surveil citizens to accomplish a range of objectives—some lawful, others that violate human rights, and many of which fall into a murky middle ground,” the Carnegie Endowment for International Peace reported.

Surveillance has long been the purview of authoritarian regimes, but in so-called democracies such as the United States, AI is leveraged at scale both globally, through military operations, and domestically, to target and surveil civilians. In cities such as Scarsdale, New York, and Norfolk, Virginia, citizens are beginning to speak out against systems that are “immensely popular with politicians and law enforcement, even though they do real and palpable damage to the citizenry.”

Furthermore, tracking civilians to “deter civil disobedience” has never been easier, as evidenced in June by the rapid mobilization of boots on the ground amid the peaceful protests against ICE raids in Los Angeles. AI-powered surveillance acts as the government’s “digital scarecrow,” chilling the American tradition and First Amendment right to protest, as well as the Fourth Estate’s right to report.

The public is only just beginning to grasp the algorithmic biases in AI training datasets and their prejudicial impact on predictive policing (or profiling) algorithms and other analytic tools used by law enforcement. City street lights and traffic light cameras, facial recognition systems, video monitoring in and around business and government buildings, smart speakers, smart toys, keyless entry locks, automobile intelligent dash displays, and insurance antitheft tracking systems all run on algorithms that carry these biases.

Checking Big Tech’s unchecked power

Given the scale and surreptitiousness of surveillance, the media are doubly tasked: with treading carefully to avoid being targeted, and with accurately informing the public about data collection and data centers. Reporting that glorifies techbros and AI is unscrupulous and antithetical to democracy. In an era when billionaire techbros and wannabe kings wield every available apparatus of government and capitalism to gatekeep information, the public needs an ethical press committed to seeking truth, reporting it, and critically covering how AI is shifting power.

If people comprehend what is at stake (their personal privacy and health, the environment, and democracy itself), they may be more inclined to make different decisions about their AI engagement and media consumption. An independent press that prioritizes public enlightenment reminds citizens and consumers that they still have choices, starting with basic data privacy self-protections that resist AI surveillance and stand up for democratic self-governance.

Just as a healthy environment, replete with clean air and water, has been declared a human right by the United Nations, privacy is enshrined in Article 12 of the Universal Declaration of Human Rights. Although human rights are subject to national laws, water, air, and the internet know no national borders. It is, therefore, incumbent upon communities and the press to uphold these rights and to hold power to account.

This spring, residents of Pittsylvania County, Virginia, did just that. Thanks to independent journalism and civic participation, residents pushed back against corporate advertising meant to convince the county that the fossil fuels powering the region’s data centers are “clean.” Similar propaganda campaigns have been deployed in Memphis, Tennessee, where proponents of Elon Musk’s data center, which has the footprint of thirteen football fields, circulated fliers in nearby, historically Black neighborhoods proclaiming that the super-polluting xAI facility has low emissions. “Colossus,” Musk’s name for what is slated to be the world’s biggest supercomputer, powers xAI’s Hitler-loving chatbot Grok.

Using satellite and thermal imagery, the Southern Environmental Law Center exposed how xAI, which neglected to obtain legally required air permits, brought in at least 35 portable methane gas turbines to help power Colossus. Tennessee reporter Ren Brabenec said that Memphis has become a sacrifice zone and that he expects its communities to push back.

Back in Pittsylvania County, residents succeeded in halting a proposed data center expansion that would have damaged the region’s environment and public health. Elizabeth Putfark, an attorney with the Southern Environmental Law Center, affirmed that communities, local journalists included, are a formidable force when they act in solidarity for the public welfare.

Best practices

Because AI surveillance threatens democracies everywhere, we must each take measures to counter “government use of AI for social control,” contends Abi Olvera, senior fellow with the Council on Strategic Risks. Harlo Holmes, director of digital security at the Freedom of the Press Foundation, told Wired that consumers must make technology choices on the premise that those choices are our “last line of defense.” Building that defense starts with digital and media literacy, digital hygiene, and at least a cursory understanding of how data is stored and how far its impacts reach.

Best defensive practices employed by media professionals can also serve as best practices for individuals: becoming familiar with relevant laws and regulations, taking every precaution to protect personal information online and during digital communications, and engaging in responsible civic discourse. A free and democratic society is only as strong as its citizens’ ability to make informed decisions, which, in turn, is only as strong as their media and digital literacy skills and the quality of the information they consume.

This essay was first published here: https://www.projectcensored.org/hidden-costs-big-data-surveillance-complex/

Mischa Geracoulis is the Managing Editor at Project Censored and The Censored Press, a contributor to Project Censored’s State of the Free Press yearbook series, a Project Censored judge, and author of Media Framing and the Destruction of Cultural Heritage (2025). Her work focuses on human rights and civil liberties, journalistic ethics and standards, and accuracy in reporting. Read other articles by Mischa.