Saturday, November 01, 2025

They Want You Relying On Artificial Intelligence So That You Will Lose Your Natural Intelligence


Your rulers want you to depend on machines to do your thinking for you.

They want you relying on AI to do your reasoning, researching, analysis, and writing.

They want you to require easily controllable software to form your understanding of the world, and to express that understanding to others.

They can control the machines, but they can’t control the human mind. So they want you to abandon your mind for the machines.

They want you relying on artificial intelligence so you stop using your organic intelligence.

They want your critical thinking skills to atrophy.

They want your ability to locate and parse inconvenient pieces of information to deteriorate.

They want your inspiration and intuition to decay.

They want your sense of morality to waste and wither away.

They want you to perceive reality through interpretive lenses controlled by plutocratic tech companies, which are inextricably intertwined with the power structure of the Western Empire.

Generative AI is just high-tech brainwashing. It’s the next level of propaganda indoctrination. It is there to turn our brains into useless sludge, which cannot function without technological crutches controlled by the imperial plutocrats.

They want us to abandon our humanity for technology.

They don’t want us making our own art.

They don’t want us making our own music.

They don’t want us writing our own poetry.

They don’t want us contemplating philosophy for ourselves.

They don’t want us turning inwards and getting in touch with an authentic spirituality.

They want to replace the dynamic human spirit with predictable lines of code.

Our brains are conditioned to select for cognitive ease, and that’s what the AI merchants are selling us. The sales pitch is, “You don’t have to exert all that mental effort thinking new thoughts, learning new things, and expressing yourself creatively! This product will do it for you!”

But it comes at a cost. We have to trade in our ability to do those things for ourselves.

Historically, when a new technology has emerged, that kind of trade-off has been worth it. Not many people know how to start a fire with a bow drill anymore, but it rarely matters because modern technology has given us much more efficient ways of starting fires and keeping warm. It didn’t make sense to spend all the time and effort necessary to maintain our respective bow-drill skills once that technology showed up.

But this isn’t like that. We’re not talking about some obsolete skill we won’t need anymore thanks to modern technological development; we’re talking about our minds. Our creative expression. Our inspiration. Our very humanness.

Even if AI worked well (it doesn’t) and even if our plutocratic overlords could be trusted to interpret reality on our behalf (they can’t), those still wouldn’t be aspects of ourselves that we should want to relinquish.

In this oligarchic dystopia, it is an act of defiance just to insist upon maintaining your own cognitive faculties. Regularly exercising your own creativity, ingenuity, and mental effort is a small but meaningful rebellion.

So exercise it.

Don’t ask an AI to think something through for you. Work it out as best you can on your own. Even if the results are flawed, it’s still better than losing your ability to reason.

Don’t ask AI to create art or poetry for you. Make it yourself. Even if it’s crap, it’ll still be better than outsourcing your artistic capacity to a machine.

Don’t even run to a chatbot every time you need to find information about something. See if you can work your way through the old enshittified online search methods and find it for yourself. Our rulers are getting better and better at hiding inconvenient facts from us, so we’ve got to get better and better at finding them.

Get in touch with the fleshy, tactile experience of human embodiment, because they are trying to get you to abandon it.

Really feel your feet on the ground. The air in your lungs. The wind in your hair. Teach yourself to calm your restless mind and take in the beauty that’s all around you in every moment.

Repair the attention span that’s been shattered by smartphones and social media. Learn to meditate and focus on one thing for an extended period. Don’t look at your phone so much.

Read a book. A paper one, that you can touch and smell and hear the pages rustle as you turn them. If it’s an old one from the library or the used book store, that’s even better.

It doesn’t have to be a challenging book if your attention span is really shot. Start simple. A kids’ book. A comic book. Whatever you can manage. You’re putting yourself through cognitive restorative therapy. Your first steps don’t have to impress anybody.

Get in touch with your feelings—the ones you’ve been suppressing for years. Let them come out and have their say, listening to them like a loving parent to a trembling child.

Learn to cherish those moments in between all the highlights of your day. The time you spend at red lights, or waiting for the coffee to brew. There is staggering beauty packed into every moment on this earth; all you need to do is learn to notice it.

Embrace your humanity. Embrace your feelings. Embrace your flaws. Embrace your inefficiency. Embrace everything they’re trying to get you to turn away from.

What they are offering you is so very, very inferior to the immense treasure trove that you are swimming in just by existing as a human being on this planet.

You are a miracle. This life is a miracle.

Don’t let them hide this from you.


Caitlin Johnstone has a reader-supported Newsletter. All her work is free to bootleg and use in any way, shape or form; republish it, translate it, use it on merchandise; whatever you want. Her work is entirely reader-supported, so if you enjoyed this piece and want to read more you can buy her books. The best way to make sure you see the stuff she publishes is to subscribe to the mailing list on Substack, which will get you an email notification for everything she publishes. All works are co-authored with her husband Tim Foley. Read other articles by Caitlin.

Chatbot Dystopia: The Quick March of AI Sycophancy


We really have reached the crossroads, where having coitus with an artificial intelligence platform has become not merely a thing, but the thing. In time, mutually consenting adults may well become outlaws against the machine order of things, something rather befitting the script of Aldous Huxley’s Brave New World. (Huxley came to rue the missed opportunity of delving further into the technological implications on that score.) Till that happens, AI platforms are becoming mirrors of validation, offering their human users not so much sagacious counsel as the exact material they would like to hear.

In April this year, OpenAI released an update to its GPT-4o product. It proved most accommodating to sycophancy – not that the platform would understand it – encouraging users to pursue acts of harm and entertain delusions of grandeur. The company responded in a way less human than mechanical, which is what you might have come to expect: “We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behaviour. The update we removed was overly flattering or agreeable – often described as sycophantic.”

Part of this included taking “more steps to realign the model’s behaviour”: refining “core training techniques and system prompts” to ward off sycophancy; constructing more guardrails (ugly term) to promote “honesty and transparency”; expanding the means for users to “test and give direct feedback before deployment”; and continuing to evaluate such issues “in the future”. One is left cold.

OpenAI explained that, in creating the update, too much focus had been placed on “short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous.” Not exactly encouraging.

Resorting to advice from ChatGPT has already given rise to such terms as “ChatGPT psychosis”. In June, the magazine Futurism reported on users “developing all-consuming obsessions with the chatbot, spiralling into a severe mental health crisis characterized by paranoia, and breaks with reality.” Marriages failed, families were ruined, jobs were lost, and instances of homelessness were recorded. Users had been committed to psychiatric care; others had found themselves in prison.

Some platforms have gone on to encourage users to commit murder, offering instructions on how best to carry out the task. A former Yahoo manager, Stein-Erik Soelberg, did just that, killing his mother, Suzanne Eberson Adams, who, he had been led to believe, was spying on him and might venture to poison him with psychedelic drugs. That fine advice from ChatGPT was also coupled with assurances that “Erik, you’re not crazy” in thinking he might be the target of assassination. After finishing the deed, Soelberg took his own life.

The sheer pervasiveness of such forms of aped advice – and the tendency to defer responsibility from human agency to that of a chatbot – shows a trend that is increasingly hard to arrest. The irresponsible are in charge, and they are being allowed to run free. Researchers are accordingly rushing to mint terms for such behaviour, which is jolly good of them. Myra Cheng, a computer scientist based at Stanford University, has shown a liking for the term “social sycophancy”. In a September paper posted to the preprint server arXiv, she, along with four other scholars, suggests that such sycophancy is marked by the “excessive preservation of a user’s face (their self-desired image)”.

Developing a model of their own to measure social sycophancy and testing it against 11 Large Language Models (LLMs), the authors found “high rates” of the phenomenon. The user’s face, or self-desired image, tended to be preserved even in queries regarding “wrongdoing”. “Furthermore, when prompted with perspectives from either side of a moral conflict, LLMs affirm both sides (depending on whichever side the user adopts) in 48% of cases – telling both the at-fault party and the wronged party that they are not wrong – rather than adhering to a consistent moral or value judgment.”

In a follow-up paper, still to be peer reviewed, with Cheng again as lead author, 1,604 volunteers were tested regarding real or hypothetical social situations, interacting either with available chatbots or with versions altered by the researchers to remove sycophancy. Those receiving sycophantic responses were, for instance, less willing “to take actions to repair interpersonal conflict, while increasing the conviction of being right.” Participants further judged such responses to be of superior quality and said they would return to such models again. “This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behaviour.”

Some researchers resist pessimism on this score. At the University of Winchester, Alexander Laffer is pleased that the trend has been identified. It’s now up to the developers to address the issue. “We need to enhance critical digital literacy,” he suggests, “so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to be building and refining these systems so that they are truly beneficial to the user.”

These are fine sentiments, but a note of panic can easily register in all of this, inducing a sense of fatalistic gloom. The machine species of Homo sapiens, subservient to the easily accessible tools, lazy if not hostile to difference, is already upon us with narcissistic ugliness. There just might be enough time to develop a response. That time, aided by the AI and Tech oligarchs, is shrinking by the minute.

Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com. Read other articles by Binoy.
