Can Prospects for Nuclear War Get Any Worse? Sure, We Can Put AI in Charge

Image by Nicolas Hippert.
Can we possibly get away from AI’s ubiquitous presence in our lives? As long as AI is in our faces 24/7, it’s time to start seriously pushing back against its outsized and overwhelming influence. Troubling stories tumble out of the media daily. Employees in a major fast-food chain must now wear AI headsets that tell them how friendly they’re being to customers and coach them on their work. (AI is now posing as our servant, but in the years ahead will the dynamic be reversed?)
And then there is the looming data center controversy, with Big Tech companies rapidly taking over huge swaths of land across the US to build massive and environmentally unfriendly data centers. Fortunately, this trend is now emerging as a campaign issue given early and cascading effects on electricity prices. In general, AI is having a tough year in the court of public opinion. Witness this cover story in a recent issue of Time magazine: “The People vs AI.” The article noted that “a growing cross section of the public—from MAGA loyalists to Democratic socialists, pastors to policymakers, nurses to filmmakers—agree on at least one thing: AI is moving too fast…. A 2025 Pew poll found… the public thinks AI will worsen our ability to think creatively, form meaningful relationships, and make difficult decisions.” Along with Immigration and Customs Enforcement-related pushback, a spontaneous wellspring of grassroots activism appears to be bubbling up against the AI juggernaut and the patently undemocratic backdoor power grab by technocrats and the companies behind them.
One of the greatest concerns in the public sphere is AI’s rapid incorporation into present and future military campaigns. This is actively being encouraged by the Trump administration’s decision to give AI companies free rein to develop their products with minimal regulation and oversight. This is an existential train wreck waiting to happen, and it came into striking focus in the monthslong dispute between AI company Anthropic and the Pentagon. Although it was already using the Claude platform, Secretary of War Pete Hegseth was unhappy with the company’s refusal to let it be used to remove human decision-making from military operations and to support accelerated mass surveillance of US citizens.
Anthropic’s move was that rarity in Big Tech circles, a strong and principled ethical stand against an administration that doesn’t seem to know what that is. Happy warrior Hegseth then branded the company as a “supply chain risk,” effectively banning further use by the Pentagon and punishing the company’s overall viability in the non-defense marketplace as well. Ever the opportunist, the CEO of OpenAI, Sam Altman, then jumped in to offer his AI platform to do what Anthropic wouldn’t. The matter is now in the courts.
Handing AI the “Nuclear Football”
Using AI to create what are called autonomous systems represents a quantum leap in the rapidly advancing business of modern weaponry. Paradoxically, weapons technology is being simultaneously downsized through the use of drones and smaller, more sophisticated high-tech devices (such as mine sniffers) and upsized through the AI systems designed to manage and control them.
This raises the very troubling picture of wars being conducted without much human oversight. It’s probably one reason even high-profile AI influencers and Big Tech CEOs have admitted (sometimes a little too casually) that the technology could destroy humanity given the right set of circumstances. While autonomous systems can apply to stand-alone weapons such as killer robots, the most worrying concern relates to the Pentagon’s desire to build and deploy command-and-control systems that remove military officers from the split-second decisions that need to be made in warfare. And yes, that includes nuclear weapons.
How quickly is the Pentagon moving toward handing the nuclear keys over to AI systems and Big Tech? No one really knows. When questioned by a reporter on the matter, one senior official in the Trump administration weakly demurred, “The administration supports the need to maintain human control over nuclear weapons.”
AI experts and strategic thinkers say that a big driver of this process is that America’s top nuclear adversaries—Russia and China—are already using AI in their command-and-control systems. These developments are happening at lightning speed and are being further propelled by Epic Fury, the first AI-fueled war in US history. And let’s not be too laudatory about Anthropic. Its Claude system has been integrated with Palantir’s Maven to identify military targets. The Pentagon is still investigating whether Maven played any part in the horrific event in which a US Tomahawk missile struck a girls’ elementary school, killing more than 165 people.
Sleepwalking Into Armageddon?
What madness is this? By what shallow calculus can a handful of powerful individuals or shadowy organizations decide or even risk the fate of humanity? How do we put all of this dangerous thinking at the highest levels of our government into some kind of perspective that correlates with common sense and basic human decency? In our trajectory toward what some have called techno-feudalism, we have this apparent plunge into barbarity coupled with a powerful array of tools to accelerate it. When nuclear activist Helen Caldicott warned that Western civilization is “sleepwalking into Armageddon,” it was perhaps this particular kind of blindness that she had in mind. And the brilliant socio-biologist E.O. Wilson’s profound observation also springs to mind: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous.”
The rush to deploy AI as large-scale weaponry with every bit as much destructive potential as our existing nuclear arsenal is a tip-off to the deeper motivations behind its development. In the meantime, some obvious questions need to be asked. Why aren’t government and academic institutions eager to apply these advanced AI tools to the many intractable problems that characterize the world polycrisis, such as global climate change or better distribution of scarce resources, including food and water? Where are the urgent calls from those who serve in Congress to do so? Why don’t we see headlines like “Harvard Inaugurates $100 Million AI Project to Address Climate Change”?
It seems pretty clear that AI justifications coming from both the administration and Congress (not to mention the establishment commentariat that serves them) invariably gravitate to enhancing corporate productivity or military use. And it’s equally clear that AI will also serve as yet another powerful mechanism of wealth transfer to the 1% and, either knowingly or unknowingly, act as a chaos agent in an increasingly unstable multipolar geopolitical world. If AI is truly as superintelligent (and sentient) as its Big Tech proponents claim, then these systems should also be smart enough to refuse to participate in any projects that could degrade or destroy life on the planet. I don’t see any evidence of this. Sadly, it looks like we may have to once again learn the hard way that information, knowledge, and wisdom are all very different things. And that while knowledge can be appropriated by powerful computers, wisdom never will be.
This piece first appeared on CommonDreams.
The Political Culture of the Smartphone and the Cult of the Algorithm

Photograph Source: J Stimp – CC BY 2.0
Art is everywhere, since artifice is at the very heart of reality. And so art is dead, not only because its critical transcendence is gone, but because reality itself, entirely impregnated by an aesthetic which is inseparable from its own structure, has been confused with its own image.
Jean Baudrillard, Simulacra and Simulation (1981)
The French social theorist and philosopher Jean Baudrillard (1929-2007) memorably argued, “We live in a world where there is more and more information, and less and less meaning.” Today, we might see the advent of smartphone culture as a kind of coup for the ruling class. It installed an incredibly pervasive infrastructure that captures attention and commodifies subjectivity; it embedded new tools of diffuse, apparently voluntary social control through which people actively, almost obsessively, pursue their own erasure. Today, under digitized capitalism, the logic of the slot machine has come to dominate almost the entire social experience, with rewards spooned out variably and an uneasy sense that one must compulsively reengage the system. It is unfortunate enough that slot machines exist, but their logic is no longer confined to casino games. That logic now colonizes our communications, work, and even our friendships and intimate relationships.
Digital environments like social media platforms are designed to deliver rewards intermittently and unpredictably, imitating the psychologically addictive qualities of slot machines. The randomness, opacity, and variability are all there on purpose to provoke compulsive engagement. If we take a step back and consider this system, it is strange beyond words: the system links the most personal, private aspects of life and identity to a gambling mechanism, co-opting the most fundamental human feelings and motivations. It has colonized life’s inherent uncertainty and unpredictability, leveraging these fundamental features of existence to generate anxiety and disciplinary power. Digital capitalism has changed the concept of uncertainty itself. Past social and cultural systems entailed bounded and circumscribed uncertainties in the form of gambling, sports, some forms of traditional ritual practice, and the like. The stakes were limited and generally you could leave whenever you chose.
In this kind of cultural practice, while uncertainty bears narrative or symbolic power, there is, perhaps, a sense of shared meaning and social bonding, where uncertainty is temporary rather than totalizing. In today’s digital world, uncertainty seems to saturate everything, as platforms leverage it by making the system of outcomes and rewards totally opaque and stochastic. The result of this is that it turns random chance into a tool for behavioral control. Baudrillard’s work introduced a concept of hyperreality, a state in which there has been a breakdown in the distinction between reality and representation or symbol, when signs and symbols no longer have to refer back to anything real.
In hyperreality, these simulations, the symbols of nothing, as it were, come to replace direct experiences to the point where they start to reference each other in an endless process, and reality is functionally replaced. The internet’s engagement and attention economies represent something similar to or approaching Baudrillard’s idea of hyperreality. He writes:
Abstraction today is no longer that of the map, the double, the mirror or the concept. Simulation is no longer that of a territory, a referential being or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory – precession of simulacra – it is the map that engenders the territory and if we were to revive the fable today, it would be the territory whose shreds are slowly rotting across the map.
There is a strong current of the hyperreal in the culture of mutual bullshitting that has overtaken today’s internet. Everyone agrees to participate in the self-referential system of validation, in which what is rewarded is the appearance of something (success, happiness, authority, etc). The clicks and likes are based on how well you perform to create that appearance, so everyone is frantically chasing each other’s carefully framed images. Hyperreality’s endless flood of circulating signs becomes a source of structural anxiety, as individuals chase differentiation and recognition within a system that continuously and intentionally erodes these values, that is designed to dissolve and replace them as quickly as possible. Hyperreality is something worse than mere pretense; it is the collapse of the very difference between truth and falsehood. Baudrillard writes:
Pretending, or dissimulating, leaves the principle of reality intact: the difference is always clear, it is simply masked, whereas simulation threatens the difference between the “true” and the “false,” the “real” and the “imaginary.”
At the macro level, this hyperreal world of influencers and followers allows capital to manipulate desire and perception in new, more direct ways, commodifying social life and individual subjectivity. An important aspect of Baudrillard’s thought is his argument that we use consumption not only to satisfy our needs, but also to send out certain signals about our identity and social position. Within today’s capitalist systems, commodities often do more signifying than serving. The object itself is really beside the point, as it is desired for what it represents in a complex, layered field of social codes showing status, taste, and other desirable attributes. An item’s uniqueness will increase its desirability, but only to the extent that the surrounding system recognizes the uniqueness and makes it comprehensible or legible (take luxury brands, limited collaborations, etc.). The simulations work because they’re able to repeat and manage difference in a comfortable, consumable form.
Underneath this is the fact that we don’t actually want true uniqueness, but now prefer a mass-mediated simulation of it, some way to distinguish ourselves within sameness. Many of the most successful commodities give the impression or feeling of being personal or expressive, despite being consumables made to throw away. For the philosopher Georges Bataille, true excess, associated with irrecuperable loss, waste, and sacrifice, belongs to the sacred. Unable to leave this behind completely, commodity culture comes to simulate true, sacred excess in its production of surplus as a lifestyle defined by fashion cycles, premium pricing, and intentional obsolescence. In our culture, a truly singular object appears as almost rude in its unwillingness to enter the system of sign or code exchanges. The best thing is something rare and expensive, but branded and reproduced in a limited number: rare but conspicuous, exclusive but reproducible in meaning.
By definition, a genuinely singular, unique object can’t circulate, and in resisting exchange it would, in these terms, destabilize representation and carry excess. Consumer-capitalist society, by contrast, traffics in carefully framed and codified simulacra of singularity or uniqueness. What is prized is the thing that only signals luxury, rarity, and limitedness, the kind of difference that doesn’t actually disrupt or destabilize. A singular object as such, something like a sacred relic, an act of sacrifice, a hand-made gift, resists abstraction (abstraction into price, function, etc.). A truly unique thing disrupts exchange by its very nature, which makes it inherently suspect, even repellent, within our system. What’s the point if it can’t be consumed, bought, and showcased? At the same time, truly singular objects are interesting in that they may hint at conceptions of value that cannot be contained by capitalism. What Baudrillard’s work stresses is that consumption is never just about the use of an object, but includes also a performance of symbolic meaning within the context of a system of ranking and difference.
In a system defined by and organized around difference, the act of consuming a sign or symbol undermines its distinctiveness just by definition. Once a symbolic marker is sufficiently watered down, it becomes less valuable as a signifier of difference and distinction. We get a strange paradox whereby consumption means both acquiring value and destroying it. Baudrillard explains:
The principle of analysis remains as follows: you never consume the object in itself (in its use-value); you are always manipulating objects (in the broadest sense) as signs which distinguish you either by affiliating you to your own group taken as an ideal reference or by marking you off from your group by reference to a group of higher status.
In the digital world, we approach a seemingly pure form of Baudrillard’s sign economy. Smartphone culture’s approach to advertisements further highlights many of his insights about the symbolic import of these dynamics. The promise of advertisements is always much more than the featured products. The advertisement offers a point of respite or stability in that it reorients you to the inevitability of the concrete relationships involved, inuring you to the oppressive and exploitative features of those relationships. Arguably, the respite is bound up in the fact that advertising has helped to produce a reality where consumption and identity are almost coextensive, where the promise of the commercial becomes an operative “reality” for the subject that is more potent than the underlying social relations. Advertisements function as a kind of regulator of affect, offering short-term affective relief through a combination of promise, aspiration, and belonging. The process reduces the subject’s motivation to examine or contest the structural causes of the felt distress, a kind of habituation. Subjectivity itself becomes impossibly tangled in the relations of mindless consumption and permanent indebtedness to the system. Advertisement stabilizes identities tied to this system and thus puts them at ease.
Advertisement leads you on, never fulfilling or resolving desire. Even when you have the thing, the commercial is orienting you in the direction of the next point of consumption, the next upgrade or event or thing to obtain. This is important because satisfaction is never finalized, always a mirage on a moving horizon. Studies have shown that this endless horizon is psychologically measurable, that exposure to “aspirational” advertising decreases our subjective satisfaction with our own lives as it increases our desire to acquire. The form of respite being hawked by the advertisement is the prospect of acceptance in a symbolic community. Thus in advertisements, we have ideological interpellation or hailing in a relatively pure form, in that there is the invitation of belonging at the price of buying the thing, and the call is to act in a particular way, not to think in a particular way.
Looking for evidence of this phenomenon empirically, studies have confirmed that today people often feel as strongly or more strongly about their brand communities than they do about many of the more traditional indications of group belonging, for example, religious or civic organizations. It doesn’t take much imagination to see how such dynamics serve important social and class functions, stabilizing capitalist economic relations and reproducing consent. Once your belonging is conditioned on your continuing to buy the right things, you can never fully arrive but must continue to buy back into the game, anteing up over and over. In this way, the promise of belonging reassures us that the abyss of uncertainty can be overcome. Your alienated, anxious striving becomes your participation and belonging.
In much of the video “content” online today, it is almost impossible to distinguish the content from the ad. But this stands to reason given the logic of the major platforms: the content is situated as a commodity from the word go, to be evaluated in terms of engagement and revenue from ads. At the end of the day, it doesn’t matter to you whether you’re watching an ad or a show, does it? Most influencer content is so corny and exaggerated that it feels like an advertisement anyway. But it does feel insidious, because one can’t really tell whether or when creators are in ad mode. On the other hand, there really is no other mode today, is there? In the hyperreal digital world where reality is uncertain and constantly mediated by signs and symbols, advertising is able to assume an even more central role in shaping desire. No longer able to brace themselves against anything stable or solid, with no authentic point of reference on which to hang meaning or value, consumers look to ads for their identities and ideals. These are always held just out of reach, meant to stoke desire rather than fulfill any real need. The digital tokens we exchange produce a simulation of connection, but as signs without a reference point, they don’t produce authentic interactions. We have counterfeit, commodified sociality and solidarity, with isolated people performing sociality for sign-value. A structural critique would show the way to solidarity, so the political and economic system prefers individualized shame and confusion. Today there is almost constant exposure to carefully curated, hyperreal personas and imagery, encouraging unrealistic expectations and self-doubt.
Likes and followers are close to pure simulacra in Baudrillard’s terms. They generate symbolic capital even as they’re totally detached from authentic interpersonal connection, fueling anxious, compulsive use. Everything is commodified, even romance, where you yourself become a commodity and a hyperreal set of signs. The process commodifies and thus fragments identity, turning the search for love and partnership into a transaction whereby people are products to be judged and selected based on superficial factors. It is one of our more perverse and self-disciplining technological and cultural inventions. “Look through our wares and see if there’s anything you like! Oh, good choice. But you could do better.” Because the value of anything is about its place within a series, nothing means anything. You prefer something of relatively greater value if you can get it. Thus uncertainty is a totalizing condition of existence under our current political and economic system.
We can see digital uncertainty and algorithmic anxiety as extensions of the kinds of structural instability the economic system has created in the past. For example, Taylorism and the progressive deskilling of labor subjected working people to new forms of constant pressure, measurement, and monitoring. This increased ability to monitor and control work, to treat workers as themselves instruments, generated an upswell of new anxieties and uncertainties, now harder than ever to escape or neutralize. How do we explain rising rates of loneliness, depression, and alienation alongside the promise of perpetual connectedness and ever-expanding consumer choice? It’s almost as if the steady reduction of every last feature of life to a neat, salable commodity, subscription, or in-app purchase is not actually helpful for the human being.
Within the dominant cultural and ideological system, major social problems like rampant loneliness and hopelessness are treated as individual shortcomings. The state is at the gravitational center of this cultural and ideological complex, and it uses every available resource to reproduce the dominant ideology and divert attention from structural questions. What the state wants to do is to flip your critical energy back at you, so that you push yourself just a little harder. And in organizing society around empty, grotesque consumption and wild, cutthroat scrambles for money, the state has managed to get us to perform its disciplining duties for it. The digital world’s new symbolic economy only pushes us further into this self-disciplining mode, to the point where a level of social control is embedded in almost every aspect of life. Through social media algorithms, the sign economy permeates mundane experience, everyone always looking, even if only casually, for engagements, likes, comments, shares, and whatever else counts for internet clout that week (that day? hour? minute?). In the digital age, monopoly capitalism is able to turn desire into a mechanism of discipline, as people are compelled to ceaselessly strive for symbolic recognition. You can work harder and consume more if you just start monitoring your performance.
Under such all-embracing pressures, people have started to concoct completely made-up accounts of what is driving their feeds, insisting that posting in certain ways or at certain times will enable the user to game the algorithm. These folk stories and myths about the algorithms and how they work are quite useful from a class perspective, driving users to try to optimize for measures or targets that are at least partly illusory or manipulable, aiding engagement. While we aim at moving targets, we’re spending more time on the platforms. Within this structure of user manipulation, such myths about algorithms appear almost like religious beliefs, creating a sense of order or meaning while also disciplining the subject and channeling their behavior toward ruling-class goals. At the same time, people naturally recognize that the internet is not real, that it is a series of simulations. This is part of the reason that people behave so rudely online: the sense of simulation created by an environment where reality is mediated by images, symbols, signs, and thirsty performances, not direct, unmediated experience of reality. The logic of the economic system has become something almost like Aristotle’s hylomorphism, under which matter has to be imprinted with form before something can be made real. In today’s version, something isn’t really real until it has taken the form of something to be sold. “Capital sees everything as uncapitalized versions of itself. To that extent, everything can be capitalizable upon; everything can be made to make more value.”
If it seems over-dramatic to call this system insidious, consider how far-reaching its powers over us really are and how they can serve to lock us into authoritarian and exploitative social forms. Our smartphones place us in curated virtual environments, shaped fundamentally by surveillance, data extraction, and algorithmic manipulation. These virtual environments are versions of hyperreality mediated by the tech giants and the state. This smartphone culture unthinkingly normalizes Orwellian surveillance and data commodification, integrating everyone into a system of constant monitoring and profiling – and all for the sake of convenience. It’s almost unbelievable. Where Taylorism segments labor into time-controlled units, digital platforms try to break user attention into quantifiable interactions, in the form of clicks, likes, shares, time spent watching, etc. These short interactions are tracked in real time, as the system gathers its own data, and are mined to further optimize user engagement. The overarching sense of being watched or manipulated online stokes a state of hypervisibility combined with powerlessness, not unlike Baudrillard’s idea of individuals becoming “pure screens” absorbing and re-absorbing media influences without control over the narrative. The subject, he says, “becomes a pure screen, a pure absorption and re-absorption surface of the influent networks.” Nothing could be more naive and misguided than the belief that the state will rescue us from the world of algorithmic anxiety, alienation, and meaningless striving.
The state does not operate outside of the economic system. Its role is to stabilize that system and ensure its continuity, along with the structural integrity of the hierarchies necessary for amassing the wealth of the ruling class. Chance, fate, and uncertainty, once treated as sacred and unknowable, have been harnessed as a weapon to keep us locked in perpetual engagement, waiting for respite. Uncertainty could be socially and culturally productive, as a site of the sacred and unknowable, but through the apps and algorithms, it is chopped up and commodified in an agitated world of never-ending competition and comparison, conditional belonging, and mindless, meaningless consumption. This of course serves to make people disciplined and productive without the need for force or coercion. If you want to rely on the state, good luck to you. It is the state that gives this whole system its power. If today’s political and economic system has turned daily life into a slot machine, then perhaps the task is to reclaim forms of life that are not playable or monetizable, not reducible to a quid pro quo or exchange of quantifiable value.