Movements Need the Critical Thinking That AI Destroys
Millions of people are now asking chatbots to summarize books, draft emails, and even explain political events to them. But what looks from one perspective like a productivity revolution may also be something more discomfiting: the quiet outsourcing of judgment itself.
Writers on artificial intelligence have long claimed that it poses an existential risk because, for example, it may become so powerful that it turns against human beings. But AI may create a different kind of existential risk, as philosopher Nir Eisikovits notes — not in the apocalyptic sense often imagined but in relation to the question of what it means to be human. One of the most underestimated dangers of these systems lies in the growing tendency for users to delegate the task of forming judgments to the algorithmic outputs of chatbots, thereby risking the gradual erosion of our capacity for independent thought.
The negative side effects accompanying the use of large language models (LLMs) are vividly illustrated by the phenomenon of “cognitive debt.” From an economic perspective, the short-term productivity gains achieved through AI systems are difficult to dispute. Delegating to AI numerous tasks previously performed by humans yields significant efficiency gains: workflows are accelerated, processes are rationalized, and organizational routines as a whole become more efficient.
Yet the resilience and efficiency generated through delegation to AI systems may come at the cost of a gradual loss of the very cognitive capacities being outsourced. A recent MIT study, which found significantly reduced brain activity among regular chatbot users, lends some initial support to this worry.
While debates about the threat modern AI corporations pose to democracy tend to focus on the fact that data (and thus control over algorithms) are increasingly concentrated in the hands of major tech companies that largely avoid public oversight, another important question is surprisingly often pushed into the background. It is a question about the preconditions for people to be able to take part in democratic processes and emancipatory political projects.
The outsourcing of thinking is, of course, not a new phenomenon. It is the main theme, in fact, of Immanuel Kant’s classic 1784 essay, “What Is Enlightenment?” For Kant, the process of emancipation consists in freeing oneself from the “self-incurred immaturity” of letting others think for you and instead making use of one’s own powers of reasoning. He writes:
It is so convenient to be immature. If I have a book that has understanding for me, a pastor who has a conscience for me, a physician who judges my diet for me, and so forth, then I need not trouble myself at all. I have no need to think if only I can pay; others will readily undertake the disagreeable business for me.
Yet with the emergence of LLM chatbots, the outsourcing of thinking — and therefore also the critical questioning of existing social norms and power relations — is taking a new form.
A Subject Without Subjectivity
But why should the outsourcing of one’s own thinking (and, in many cases, even one’s feelings) to chatbots be a cause for concern? And more specifically, why does the use of chatbots threaten people’s ability to take part in democratic or emancipatory politics?
I suggest — loosely following Slavoj Žižek — that chatbots represent a highly technologized manifestation of what I would call a decaffeinated form of subjectivity. Liberal capitalist societies, Žižek argues, are characterized by a structural tendency to avoid ambivalence. This dynamic first becomes visible at the level of consumer behavior: rather than accepting the well-known negative side effects of alcohol or caffeine, consumers are increasingly turning to alcohol-free beer or decaffeinated coffee.
The chatbot, in its most advanced form, is a “decaffeinated” subject precisely because it lacks something that is essential to human beings: the principle of subjectivity itself. The human desire for decaffeinated communication tools such as chatbots expresses a desire for contact with what we might describe as a “subject without subjectivity.”
Žižek’s observation about the consumer logic of liberal capitalist societies may, at first glance, appear to be banal. But it takes on far-reaching significance once its implications are considered at the level of subjectivity and politics. People’s increasing attraction to chatbot companions is symptomatic of a systematic avoidance of confrontation with the Other, that is, with another actual human subject.
Why might people prefer talking to a decaffeinated subject? Derek Thompson puts it clearly: “Unlike the most patient spouses, they could tell us that we’re always right. Unlike the world’s best friend, they could instantly respond to our needs without the all-too-human distraction of having to lead their own life.”
The “caffeinated” aspect of human existence — manifested, for example, in passive aggression and disempowering ambiguity but also in the necessary confrontation with one’s own flaws and frailties — is now increasingly being displaced by interactions with bots, because they are conversation partners who always give us the feeling that we are the best versions of ourselves.
Philosopher of technology Shannon Vallor, in her book The AI Mirror, explains the danger of such chatbots as follows:
What AI mirrors do is to extract, amplify, and push forward the dominant powers and most frequently recorded patterns of our documented, datafied past. In doing so they turn our vision away from the newer, rarer, wiser, more mature and humane possibilities that we must embrace for the future. Instead of asking one another what we might now become, we ask AI mirrors to show us who we already are and have been, and to predict from there what must come next.
Chatbots can be regarded as subjects without subjectivity because they lack those characteristics that make us actual subjects in the first place: biographically shaped trajectories of the past, which are themselves preconditions for self-reflection and, with it, for efforts at social transformation. AI’s responses do not arise from actual experience but from the statistical aggregation of other people’s pasts.
Whereas human subjectivity essentially involves reflection on one’s own past and is therefore capable of self-transformation, the chatbot merely reproduces the dominant thinking of the documented past. In this sense, it tends toward the stabilization and reproduction of the status quo.
For instance, as highlighted by AI ethics expert Zinnya del Villar, language models like GPT and BERT often associate jobs such as “nurse” with women and “scientist” with men, reflecting stereotypes embedded in their training data from historical texts and media. Similarly, when trained on past hiring examples rife with bias — such as résumés favoring men for technical roles — these systems perpetuate gender discrimination by filtering applications in ways that reinforce outdated norms, rather than innovating beyond them through critical reflection.
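The association pattern del Villar describes can be made concrete with a small sketch. The two-dimensional vectors below are invented purely for illustration (real models such as GPT or BERT learn embeddings with hundreds of dimensions from vast text corpora); the point is only to show how a statistical measure of “closeness” in an embedding space can encode a gendered skew inherited from training data:

```python
import math

# Toy 2-D "embeddings" invented for illustration only; they are NOT
# real model weights. The skew is baked in to mimic what bias audits
# report finding in models trained on historical text.
vectors = {
    "he":        (0.9, 0.1),
    "she":       (0.1, 0.9),
    "scientist": (0.8, 0.2),  # placed nearer "he" in this toy data
    "nurse":     (0.2, 0.8),  # placed nearer "she"
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def gender_skew(word):
    # Positive means the word sits closer to "he" than to "she";
    # negative means the reverse.
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_skew("scientist"))  # positive: skewed toward "he"
print(gender_skew("nurse"))      # negative: skewed toward "she"
```

A model that ranks résumés by similarity to past successful hires would reproduce exactly this kind of skew: the measurement is faithful to the documented past, which is precisely why it cannot, on its own, think beyond it.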
The Disappearance of Experience
Avantika Tewari argues that the loss of subjectivity embodied by the increasing use of AI may work to shore up the capitalist system:
Just as capitalism reduces labor to a mere function within a larger system, AI reduces creativity to a mechanical process, stripping it of its subjective and intentional dimensions. The so-called “equality” between AI-generated texts and human creativity is less about the intrinsic quality of the output and more about its role within a system that prioritizes efficiency and productivity over genuine artistic expression.
If intention and subjectivity are central to what it means to be human, this fact helps explain how AI furthers the alienating tendencies endemic to capitalism. The substitution of AI for human cognition in various domains, akin to the automation of human labor more generally, involves assuming thinking is simply a technical process — something that can be broken down into steps and automated — instead of something arising from and shaped by actual living in the world.
But human thought doesn’t work like that. Our judgments and choices grow out of our personal histories and conflicts. AI systems do not have such experiences of their own. They reduce thinking to patterns in existing data — what has been recorded in the past — and process it statistically. They lack what makes human thinking creative and open to change.
The proletarian subject, as conceived of by Karl Marx — despite its being reduced by the capitalist organization of production to mere labor power — still remains a subject, whose actions and perceptions are anchored in its own concrete experience. This is illustrated by the phenomenon of alienation within the labor process, as Marx understood it.
Alienation presupposes that a form of subjectivity exists that carries with it its own past, expectations, and claims upon the world, and which now confronts the capitalist production process as something like a foreign body. Precisely because workers, as subjects, bring their own history and experience into the labor process, they can experience this process as something alienating.
For Marx, this gap between the worker and the capitalist production process opens up space for a subjectively experienced sense of discontent — and ultimately also for possibilities of action oriented toward transforming the existing order. This discontent arises from the tension between the worker’s subjective experience and the objective structure of the production process, which deprives workers of their agency even as it makes use of their labor power.
With the rise of chatbots, though, something new is happening: it is possible that even our ability to feel dissatisfied — to sense that something isn’t right — may be eroded.
For the chatbot appears to be a speaking subject, yet it possesses no past of its own, no history of experience, and therefore no subjectivity. Its responses do not arise from the processing of lived experience but from the statistical aggregation of the already documented pasts of others.
To the extent that people increasingly outsource reflection, critique, and even the articulation of discontent to such systems, the possibility of emancipatory thinking and action may wither. The impulse for change develops out of the tension between a person’s experience and existing social conditions, a tension that chatbots can neither supply for us nor feel themselves.
