AI, Techno-Determinism and Techno-Pessimism
Artificial Intelligence is very much in the news these days, so I decided to do a critical analysis of one of the works on AI on the bestseller lists, The Coming Wave, by one of the key figures identified with its development, Mustafa Suleyman.
Blurbs promoting The Coming Wave contain all the usual hyperbole from experts and notables. For this jaded reader, however, the book didn’t need all that hype. It is actually a thoughtful piece of work, one with some flaws undoubtedly, but nevertheless quite compelling.
This is not a work that goes into the technical intricacies of AI. It provides a fairly simple understanding of the basics of AI, which can be summed up as a process involving a program that teaches machines to answer complex questions from a universe of data and learn from their mistakes to become more precise and comprehensive—to the point where the machines rewrite their own algorithms, allowing them to take on and solve problems of even greater complexity. This recursive process of “deep learning” eventually enables the machine to solve problems vastly faster than the human mind, relegating humans to a general supervisory role where, eventually, they are supplanted by more sophisticated machines.
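To make the learn-from-mistakes loop concrete, here is a minimal sketch in Python (my illustration, not Suleyman's; the data and parameter names are invented): the model predicts, measures its error, and nudges its one parameter to shrink that error.

    # Toy version of the learn-from-mistakes cycle: fit a single weight w
    # so that the prediction w * x approximates the observed targets.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

    w = 0.0    # the model's lone adjustable parameter
    lr = 0.01  # learning rate: how large each correction is

    for step in range(1000):
        for x, target in data:
            prediction = w * x
            error = prediction - target  # the "mistake"
            w -= lr * error * x          # adjust w to reduce that mistake

    print(f"learned w = {w:.2f}")  # settles near 2.0, the slope of the data

Real deep learning runs this same loop over billions of parameters rather than one, but the basic cycle is the same.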
There is no limit to the machine’s ability to turn out ever more complex programs. The only constraint on the whole process is the material one of how many transistors can be etched onto a silicon wafer or computer chip. However, these limits continue to be breached by ever more sophisticated fabrication processes that have allowed the number of transistors per chip to increase ten-million-fold over the last 50 years, adding up to vast computational power, and so far there is no end in sight.
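As a quick sanity check (my arithmetic, not the book's), a ten-million-fold increase over 50 years implies a doubling roughly every two years, which is just Moore's law:

$$10^{7} \approx 2^{23.3} \;\Rightarrow\; 50 \text{ years} \,/\, 23.3 \text{ doublings} \approx 2.1 \text{ years per doubling}$$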
AI, the author tells us, really only took off in the last decade. Now it is one of the two technologies central to the coming “technological wave.” The other is genetic engineering, which began with the discovery of the structure of DNA, the molecule encoding the instructions for producing an organism. Over the last 50 years, genetic research has led to revolutionary breakthroughs in gene sequencing, which involves unlocking the information contained in the genomes of humans and other organisms. CRISPR, a process of cutting genes using enzymes, and advances in laser technology like laser microinjection have made “gene editing” immensely easier, to the point that the only barriers to producing humans with edited genes are not practical ones but ethical concerns.
The main contention of Suleyman, co-founder of DeepMind, which was later bought by Google, is that AI and genetic engineering are the central technologies of the near future, which will not only cross-pollinate each other but also synergize with existing technologies, from pharmaceuticals to energy technologies like solar and hydrogen, to the Internet, quantum computing, and robotics. The explosive interaction of these technologies will produce the “next wave.”
It is difficult not to agree with the author’s contention that the impact of the AI/bio-tech-led revolution will be immense, but he tends to be overly deterministic, seeing the relationships among technology, society, and politics as being largely unidirectional rather than interactive. Nowhere is this techno-determinism more evident than when he claims that it was the stirrup, which revolutionized warfare in favor of mounted cavalry, that created the hierarchical social relationships among mounted knights controlling land and peasants that formed European feudal society.
This simplistic understanding of societal evolution should not, however, blind us to this work’s useful insights, among them the contradictory drives of technological centralization and diffusion. The state’s goal to preserve the existing social order will push it to control the development and diffusion of AI, but the radical reduction of costs and simplification of complex knowledge will put AI within the reach of everyone, including non-compliant individuals and organizations who can wage “asymmetric warfare” against the state.
Although the author is a key figure in the development of AI, he inclines towards techno-pessimism when it comes to society’s ability to control the development of AI. He acknowledges the labor-displacing impact of AI but offers no effective solutions to counteract this tendency. Although he offers suggestions to “contain” AI’s negative impacts, he appears resigned to the replacement of “homo technologicus” by super-intelligent machines at the “top of the food chain.” He seems open to the possibility that AI can make the leap from machine life to sentience, or self-consciousness, at some point in its evolution.
There are important gaps in the book. One is the digital divide between the Global North and the Global South, though one can infer from his analytical thrust that this gap will grow exponentially. But the biggest flaw in this book is its failure to situate the development of AI in the dynamics of capitalism, a blind spot that stems from the author’s technological determinism or reductionism. True, Suleyman identifies investors’ push to gain windfalls as an important factor behind AI development, but the discussion is superficial. In the Global North, one cannot divorce the speed of AI’s spread from the drive of the Big Tech firms to exploit and monopolize the technology in the service of amassing greater and greater profits. It is Big Capital’s priorities that direct AI’s development into profitable channels while its use to service socially necessary but unprofitable activities such as health care lags behind.
In his notebook, the Grundrisse, Marx envisioned a post-capitalist world where machines freed human beings from exploitation and the drudgery of work to fully develop their potential as human beings in creative endeavors. Technology would enable
the free development of individualities, and hence not the reduction of necessary labor time so as to posit surplus labor, but rather the general reduction of the necessary labor of society to a minimum, which then corresponds to the artistic, scientific etc. development of the individuals in the time set free, and with the means created, for all of them…Truly wealthy [is] a nation, when the working day is 6 rather than 12 hours. Wealth is not command over surplus labor…but rather, disposable time outside that needed in direct production, for every individual and the whole society.
So long as capitalist relations of production are dominant, only a very small minority of humanity can enjoy this condition. For the vast majority, AI is a threat to their jobs rather than an opportunity for liberation. The farthest the system can go is to improvise measures like a universal basic income that seek mainly to pacify people rather than release their potential to be truly human. So long as capital reigns, AI will merely exponentially widen the gap between the billionaire elite and the rest of humanity. While a post-capitalist society will not eliminate the risks associated with AI, controls on the profit drive will likely make containing and regulating them immeasurably easier.
One of the more interesting sections of the book is on China and AI. The author cannot contain his admiration for China’s ability to focus its resources single-mindedly on being on the cutting edge of AI development once it realized AI-fueled technological development would be the next wave—a moment that the author traces to the time AlphaGo, an AI machine produced by his company, beat the world’s top player in the ancient Chinese game of Go, Ke Jie, in 2017. Already moving fast at that time, China’s development of AI and other technologies took a Great Leap Forward so that by the time of writing, China had surged “ahead across the spectrum of fundamental technologies, investing at an epic scale, a burgeoning IP behemoth with ‘Chinese characteristics.’”
DeepSeek, the sensational Chinese AI program, was launched in 2024, a year after The Coming Wave was published. DeepSeek revolutionized AI by innovatively maximizing output from chips much less advanced than those produced by the United States and its allies, achieving computational results that were equal to, if not better than, those of the most advanced Western AI, and at much lower cost. Perhaps in recognition that it was DeepMind’s beating the Chinese world champion Ke Jie at Go in 2017 that sparked China’s AI revolution—its “Sputnik moment”—the creators of the Chinese program christened it with a similar-sounding name.
So what will the future bring in terms of AI’s impact on geopolitical competition? Suleyman gives us a hint of where he thinks things are heading by quoting the Pentagon’s first chief software officer who resigned in 2021 out of great frustration: “We have no competing chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over in my opinion.”
A.I. is asking us to reexamine our humanity.
It pops this question every time we sit down to write – whether we’re a high school student writing an English essay, a manager penning a business memo, or a screenwriter composing a script for an action film.
It asks: who is writing? Who is the writer?
It asks, too: who is the reader, the listener, the viewer? Who is the person, or persons, being written about?
Some years ago, long before the advent of ChatGPT, a Vietnamese spiritual teacher, poet, and activist named Thich Nhat Hanh described how he addressed these questions when he composed. In the mid-1970s, he helped a committee for orphans in Vietnam by translating applications from Vietnamese into French. The committee was sending the applications to France, seeking donors who could help children who had been orphaned by the war.
Each day Nhat Hanh translated about 30 applications, each consisting of a single sheet of paper that included a small picture of the child along with information about the child’s name, age, and condition. Nhat Hanh explained his process this way:
“The way I did it was to look at the picture of the child. I did not read the application. I just took time to look at the picture of the child. Usually after only thirty or forty seconds, I became one with the child. Then I would pick up the pen and translate the words from the application onto another sheet. Afterwards I realized that it was not me who had translated the application; it was the child and me, who had become one. Looking at his or her face, I felt inspired, and I became the child and he or she became me, and together we did the translation.”
Thich Nhat Hanh was a prolific writer and poet. Today, someone wanting to create a text that sounded like him could easily do so by deploying A.I.’s immense capacity for mimicry. That same person could also scan videos of Thich Nhat Hanh to generate a plausible likeness of the man.
But A.I. could never enter the interior life of Thich Nhat Hanh, a life richly woven of experience, memory, and love. Nor could it ever bring the same humanity to the kind of translating that Nhat Hanh described above.
To put the matter in the words of another writer, the philosopher and theologian Martin Buber, Thich Nhat Hanh lived and wrote in a realm of I and Thou: a fully present, fully receptive, fully reciprocal relation with other human beings. It is a realm where the “Thou,” the “You,” is addressed completely and directly. As Buber noted, “whoever says You does not have something for his object. Where You is said there is no something. You has no borders.”
By contrast, Buber points to the world of “I-It” relations, a world in which we encounter people and things transactionally. We constantly assess and are being assessed, evaluate and are being evaluated, so much so that our own self-assessments can become highly corrosive.
And our technologies help drive transactional relations in myriad ways, A.I. being the most powerful driver to date. It has the potential to yield great benefits – say, accelerated vaccine development – but it also disrupts on a vast scale. As its developers seek ever greater power and wealth, A.I. displaces people from jobs, removes human contact from our encounters with institutions of all kinds, consumes immense amounts of power, and helps destabilize democracy by scaling up disinformation and deep fakes of all kinds.
Amidst this forest of I-It relations, there is at the same time an epidemic of loneliness and social isolation, so much so that the previous Surgeon General, Vivek Murthy, issued a report on its extent, causes, and adverse impacts on human health. It’s no coincidence that the use of A.I. for companionship, therapy, and even romance has grown significantly.
We are social beings, and by asking us to reexamine our humanity, A.I. is calling on us to reexamine our social relations in all aspects, from the personal to the political. There are ways of strengthening these connections and affiliations, including the rebuilding of labor unions that once served as vibrant centers of social life for so many Americans. There are ways, too, of curbing the most malign aspects of A.I. In considering the current state of our society, each of us must ask what matters most in making better lives for ourselves, our families, and our communities.
One starting point can be to cultivate the kind of genuine human encounter that Thich Nhat Hanh and Martin Buber described.
We’re no doubt familiar with René Descartes’ formulation: “I think, therefore I am.” But there’s another formulation from ancient India that may speak more compellingly to us today.
It’s the Sanskrit phrase, So Hum: “You are, therefore I am.”
Andrew Moss, syndicated by PeaceVoice, writes on politics, labor, and nonviolence from Los Angeles. He is an emeritus professor (Nonviolence Studies, English) from the California State University.
Artificial Intelligence Is on a Collision Course With the Green Transition
The choice now is whether the United States continues to aid and abet Silicon Valley’s environmental rampage or to fight it.
A protester attends a community meeting at the Tucson Convention Center on August 4, 2025, a public forum to discuss pros and cons of "Project Blue," a massive data center installation proposed by Amazon Web Services.
(Photo by: Wild Horizons/Universal Images Group via Getty Images)
Aug 23, 2025
The tech industry’s accelerating buildout of infrastructure to power artificial intelligence is rapidly turning an industry once lauded as “clean” and environmentally friendly into an air-polluting, ecosystem-destroying, water-guzzling behemoth. Now, there’s an intensifying rift on the left about how to approach what was, until recently, a steadfast Democratic ally.
Progressives are now at a fork in the road with two very different options: a political reckoning with Silicon Valley or a rapprochement paid for with environmental havoc.
Some pundits and industry figures have counterintuitively argued that the proliferation of data centers to power AI is a good thing for the environment. The massive energy demand for training artificial intelligence will, in this telling, necessarily prompt a massive investment in clean energy and transmission infrastructure to meet that demand, thereby catalyzing a world-altering transition toward renewable energy. This argument, already suspect years ago, is entirely untenable now.
Following US President Donald Trump and company’s evisceration of the clean energy investments from the Inflation Reduction Act (IRA), the narrow path of AI buildout being aligned with a green transition is now completely walled off. The choice now is whether the United States continues to aid and abet Silicon Valley’s environmental rampage or to fight it.
Even prior to Republicans torpedoing the IRA, AI electricity demand was growing faster than both renewable energy production and overall grid capacity. Without strong additionality regulations to require that new data centers be powered by the construction of new renewable energy generation, the AI boom will continue to increase consumption of fossil fuels.
Much of the increased energy demand was already being met by natural gas before the Republican spending package. It’s only going to get worse now. Without the clean energy tax credits, the advantages of incumbency that fossil fuels enjoy mean that the AI energy boom will further hook us on unsustainable resource consumption.
The firms building out AI infrastructure know this and often point to major investments in clean energy to protest characterizations of data centers as environmentally disastrous. But there are two major problems there. First, those investments may be in totally different locations than the actual data centers, meaning the centers are still consuming dirty energy. Second, and more importantly, at our present juncture in the climate crisis, we need to be actively decreasing our use of fossil fuels, not just containing increases in dirty energy production. (It’s worth noting that AI is also being used to enable more fossil fuel extraction.)
And the environmental destruction doesn’t stop there. The Trump White House recently moved to exempt data centers from environmental review under the National Environmental Policy Act, or NEPA, paving the way for tech companies to despoil local environments without a second thought, and limiting opportunities for the public to gain information about data centers’ environmental impacts.
Perhaps nothing captures the excesses of AI quite so clearly as its water usage. Despite some pundits glibly claiming that there’s actually tons of water to go around, data centers threaten to worsen already dire droughts. We’re already beginning to see this in arid places like Chile and the American Southwest.
The Colorado River’s mismanagement is the stuff of public policy legend at this point. Aquifers across the Western US are being depleted. People were not mulling the idea of partially rerouting the Mississippi River for giggles. There is, unequivocally, a water crisis unfolding. And those data centers are very, very thirsty. A single data center can use millions of gallons a day.
There are already more than 90 data centers in the Phoenix area alone. That’s hundreds of millions of gallons of water a day. Protesting that “there’s plenty of water” is not just detached from the drought-stricken reality, it’s dangerous.
Data centers are being built in arid places intentionally; the low humidity reduces the risk of corrosion for the processor stacks warehoused there. Fresh water supplies, when depleted, are not easily renewed. Devoting more of it to cooling GPUs means less for drinking, irrigation, fighting wildfires, bathing, and other essential uses.
And there isn’t a way to bring water to the arid environments to mitigate that, either. Some people point to desalination, but that isn’t tenable for multiple reasons. To start, most of these data centers tend to be inland, as sea air has corrosive effects similar to humidity’s. That, in turn, means that even accepting desalination as a cure for water scarcity, data centers would require transporting massive quantities of that purified water over significant distances, which would require complex energy- and resource-consuming engineering projects unlikely to proceed within the hurry-up-and-go of our AI-bubble moment. (Desalination also has its own serious environmental harms.)
At present, there is simply no way to have the scale of AI buildout that the United States is seeing without terrible environmental downsides. The only choice left is whether to get out of Silicon Valley’s way or whether to slow the industry’s pace.
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.
Dylan Gyauch-Lewis is a senior researcher at The Revolving Door Project, where she leads RDP's Economic Media Project.
The Mad Religion of Technological Salvation

Image by Logan Voss.
A science journalist and PhD astrophysicist, Adam Becker spent the last several years investigating the futurological vision of our tech-bro masters – and found it’s a bunch of nonsense.
The targets in his new book More Everything Forever are the names we’ve come to associate with the great leaps forward in the march of what Becker calls the religion of technological salvation: Sam Altman, genius progenitor of artificial intelligence; Elon Musk, technocrat extraordinaire and SpaceX honcho; space-colonization guru and Amazon godhead Jeff Bezos; Marc Andreessen, billionaire software engineer, venture capital investor, and avowed technofascist; and Facebook co-founder Dustin Moskovitz, who has donated hundreds of millions of dollars to so-called “longtermist” think tanks that provide the ideological ground for the religion of tech salvation. He also aims his guns at Ray Kurzweil, grand wizard of software engineering, inventor, planner of the immortalizing digitalization of human affairs called “the Singularity;” Eliezer Yudkowsky, a writer and researcher who attributes fantastical (but as yet non-existent) powers to artificial general intelligence, or AGI; and the crew of longtermist tech apologists at Oxford University on whom Moskovitz and other Valley barons have lavished funding.
What unites these players is lust for power and control based in the seduction that technology will solve all humanity’s problems and overcome the human condition. “These ideas offer transcendence,” writes Becker. “Go to space, and you can ignore scarcity of resources…Be a longtermist, and you can ignore conventional morality, justifying whatever actions you take by claiming they’re necessary to ensure the future safety of humanity. Hasten the Singularity, and you can ignore death itself.”
Musk’s and Bezos’s “power fantasies” of space colonization and visions of “AI immortality” will usher in a future of unlimited wealth and resources, beyond the confines of Earth, the solar system, the galaxy. Ray Kurzweil’s dream of the Singularity involves the uploading of minds into digital simulations, so we can live forever. All of this, Becker says, is a divorced-from-reality sales pitch driven by the primordial fear of death. Overarching it is what’s called “engineer’s disease”: the mental derangement of believing that engineering can solve anything and everything.
In Becker’s telling, for example, Kurzweil is an unhinged fantasist manically attempting to resurrect his dead father as an artificial intelligence “Dad Bot.” Like a Christian apocalyptic prophet, the high priest of the church of tech salvation promises that the Singularity will arrive as early as 2045, when AI computing becomes so fast and so powerful that it will transform society, Earth, and the universe itself, overcoming “cosmological forces,” including time and aging, the laws of physics and entropy. All existence would become one giant computer spinning forever out across the vastness of space. “The objective is to tame the universe, to make it into a padded playground,” writes Becker. “Nobody would age, nobody would get sick, and – above all else – nobody’s dad would die.”
“The promise of control is total,” he explains, “especially for those who know how to control computers. This is a fantasy of a world where the single most important thing, the thing that literally determines all aspects of reality, is computer programming. All of humanity, running on a computer…”
It’s the ultimate revenge of the nerds, made worse because of our subservience to their immense money and overhyped influence. What to do in answer? Understand the authoritarian nature of these zealots, so we can repulse their attempts at the takeover of society and shatter into bits the armatures of their loony-tune machines. As Becker puts it, channeling Orwell’s 1984: “If you want a picture of [the] future, imagine a billionaire’s digital boot stamping on a human face – forever.”
I spoke with Becker recently via Zoom about his book. Our conversation has been edited for length and clarity.
Ketcham: Let’s start with what inspired you to write this book. Like, why go after Sam Altman, Ray Kurzweil, Bezos, Musk, the whole techno-optimist crowd?
Becker: I’ve been following these sorts of subcultures – longtermists, general techno-optimism, Singularity stuff – for a very long time. I’m a science fiction junkie and first encountered a lot of these ideas in science fiction in high school or earlier. I think I first heard of Ray Kurzweil in college. And I thought, oh, yeah, these ideas are bad, but they don’t seem to be getting a lot of traction. And then the funniest thing happened: tech billionaires took this stuff seriously, giving these people a lot of money. I moved out to the Bay Area about 13 years ago. And of course, this is ground zero. I realized how deep in the culture this stuff is, these things like the singularity and AI hype, the idea that technology is going to solve every single problem, we’ll go to space and that will solve every single problem. I was amazed at how uncritical and ubiquitous the acceptance of these ideas was out here. I thought, you know, this is ridiculous. The other thing is, when I saw people going after these ideas I didn’t see a detailed scientific breakdown of why these things don’t work. There were a lot of people who dismissed people like Yudkowsky or Kurzweil just out of hand, but they would be like, Oh, this is ridiculous. Why? Usually the answer from the analysis was it’s ridiculous because it’s an insane fantasy. Yes, it is an insane fantasy. Why? I thought, well, there’s not enough actual analysis because people are not taking these ideas seriously outside of these communities. What people don’t seem to realize is these communities are becoming bigger and more influential. So even though their ideas are sort of prima facie ridiculous, we have to engage with them because they are gaining more power. Fundamentally, that’s where the impulse for the book came from.
Ketcham: So what drives this zealous acceptance by the technocrats of what you describe as prima facie ridiculous ideas?
Becker: Because it provides all kinds of excuses for them to do and say the things that they already want to do and say, and that makes these ideas really appealing and compelling and persuasive for them. Because that’s the way human psychology works. If something provides an excuse for you to do a thing you want to do anyway, it increases the chances that you genuinely believe it, because it’s so convenient to believe it. It makes the world simple. It provides a sense of direction and meaning. It lets them see themselves as the hero of the story of humanity, that they’re going to save us by taking us all to space and letting us live forever with an AI god. They’re going to be the people who usher in a permanent paradise for humanity. What could be more important than that? And of course, all of that’s nonsense. But it’s like if somebody came down out of the sky and said, you are the chosen one, you are Luke fucking Skywalker, here’s your lightsaber, all you have to do is believe everything that I tell you and you will be seen as a hero. Anybody could tell you that that person was lying. You and I are used to thinking critically, but tech billionaires don’t think that way. They’re not really in the habit of thinking at all. They don’t have to, because thought is something that requires a lot of effort and critical self-examination. And if you have all of your needs that you could ever possibly have taken care of, and the only thing left is this fearful pissing contest of who has the most billions of dollars, then why would you stop to question yourself? There’s no reason to, and everybody around you is going to tell you that everything you’re doing is right, because you’re a billionaire. You surround yourself with sycophants.
Ketcham: What you describe is, of course, a religion, in that it provides all the various salutary, mentally assuaging elements of religion – meaning, purpose, direction, a god of sorts.
Becker: And it even provides, in some cases, a kind of community.
Ketcham: Right. Not an unimportant thing. Let’s talk about the religion of technological salvation. The religion long predates this movement, no? You could almost go back to the Cartesian vision of the world, Enlightenment science, this idea that science and knowledge will lead to the ultimate perfection of the world. Tell me how the current iteration of the religion of tech salvation fits into the history of industrial society.
Becker: That’s a really good question. But I want to be clear. I think science is great. And I think that it is true that science has brought about really amazing things. It’s also brought about horrors. It gave us vaccines, but it also gave us thermonuclear weapons. And I think that that’s about the scale, right? Vaccines are arguably the best thing that science has ever done. And thermonuclear weapons are, I think, pretty indisputably the worst thing that science has ever enabled. But science is ultimately a tool. And just like the rest of technology, it’s a tool that is subject to human choice and contingency. What scientific truths we discover, that’s not really up to us. What technology we build off of the scientific advances that we’ve made, that is up to us. Technology is not preset on some sort of rails, like a tech tree out of a video game. And so that means that there is no inevitable future of technology. Technology can enable us to do things that we previously couldn’t, but which things it enables are a combination of the constraints placed on us by nature and human choice. The narrative that technology will inevitably lead us to a utopia or inevitably lead us to apocalypse, these are just stories that we tell. The idea that it will lead to a utopia, as you said, is an old one. The specific version of this ideology of technological salvation embraced by the tech oligarchs, their kept intellectuals, and the subcultures they fund ultimately springs from a mix of early- to mid-20th-century science fiction and various Christian apocalyptic movements, which have a fairly long history of the idea that technology will bring about the kind of utopia and second coming found in their apocalyptic writing.
One of the words I learned in the course of doing this book was soteriology, which is the study of doctrines of salvation. There’s a long history of that, going back at least as far as the Russian cosmism concept and then Teilhard de Chardin, as I talk about in the book. And then you’ve also got the technocracy movement, which Elon Musk’s grandfather was involved in, which is a sort of fascist technological pipe dream. The idea behind the technocracy movement was that only the people who build technology actually understand what it’s doing. And technology is what’s going to determine the future. So only the people who build the technology can be allowed to run society. Laying it out that way, it sounds awfully familiar. That sounds a lot like Marc Andreessen and his techno optimist manifesto. And indeed, in that manifesto, he harks back to Marinetti’s Futurist Manifesto, which is itself a forerunner to the Fascist Manifesto, also co-written by Marinetti. All of this stuff has these early echoes, and you also see it in, like I said, early to mid-20th century science fiction, because they pulled a lot of their ideas from those same places. The idea is that the ideal future is one run by engineers. There was this inevitable march of progress that would make the world a kind of utopia on the backs of space colonization and artificial intelligence. These ideas are all over golden age science fiction.
Ketcham: So why should we beware the rule of engineers?
Becker: Because there’s no democratic accountability. And also, because engineers often suffer from engineer’s disease, which has a couple of different definitions. First, there’s a tendency to just ignore the humanities and ignore anything that’s not in the narrow domain of STEM as fundamentally not important. But engineer’s disease really boils down to the idea that if you are an expert in one technical domain and know how to solve one kind of very difficult problem, that makes you an expert in every domain because you know how to solve all kinds of difficult problems. And that’s just not how the world works. There is not a hierarchy of which problems are the hardest and which domains are the most difficult. And domain expertise is not generally transferable. If you are really, really good at, say, string theory, that does not mean that you’re going to be really, really good at, oh, geopolitics. Or even if we pick another technical discipline, it doesn’t mean that you’re going to be really, really good at, say, computer science or genetics or genomics or ecology or whatever. It’s not like Albert Einstein would have been the world’s greatest psychotherapist if he had just gone into Freudian psychology rather than theoretical physics.
Expertise is not transferable like that. It’s not innate. And this is especially pernicious when talking about computer science in particular. In software engineering, the problems that you learn to solve are fundamentally human problems, because the systems that you’re working with and within are designed and built by humans. There’s a legible logic to them because the systems were designed – and designed such that questions that humans would ask of them would have answers. The problems that show up in software engineering generally do have answers. And yes, some of the problems that you run into in software engineering involve making those systems work with the natural world. And that can be hard, but ultimately you are dealing with a human-built system. An artificial human-built system, not even like an evolved human-built system like natural language, but an artificial language, computer programming. Contrast this with problems in fundamental physics, sociology, linguistics, political science, biology. These are all systems that in one way or another are natural. Even human language, like I said, natural human language evolved. It wasn’t designed. Other systems are even less designed. Nobody designed the ribosomes in your cells. Nobody designed the solar system. The world that we live in is an aggressively non-human world, and the logic that underlies it is not a human logic.
Ketcham: Yeah, I noticed throughout your book a kind of implicit critique of the blinkeredness of anthropocentrism. This idea that humans are the center of the world, that everything we invent in the technosphere that comes flying out of our minds and our opposable thumbs is somehow the defining fact of the universe. The example of Ray Kurzweil seems to take this anthropocentric blindness to absurd heights. He talks about “replicator” bots dispersed throughout the universe to transform all matter into a gigantic computer into which our brains will be uploaded. This on its face is a fantasy, as you note, out of the most extreme idealistic visions of science fiction. How do we take someone like Kurzweil seriously when he proposes something manifestly outside the realm of the physically possible, not based in any known science today?
Becker: In a perfect world, we wouldn’t have to take it seriously. But the problem is that there are powerful and influential people who do take these ideas seriously. Marc Andreessen, one of the most powerful and wealthy people alive, he takes Kurzweil’s ideas very seriously. And unfortunately, that means that we have to take those ideas seriously in order to tear them apart. We have to say, that’s ridiculous, and here’s a detailed explanation of why – you know, in small words, so that you, Marc, can understand it. Not that I harbor any hope that Marc Andreessen will understand and absorb the lessons of my book. I don’t think that he’s capable of that kind of honest, critical self-reflection. Prove me wrong, Marc! Look, these ideas, as I said, are very seductive to very powerful people. They claim that the ideas come from science. But they don’t come from science. Where do they come from? And the answer is they come from science fiction and Christian apocalyptic movements. If you have any familiarity with history, politics, sociology, religious studies, it’s pretty obvious that ideas like Kurzweil’s are echoes of other things. There’s this lovely book called “God, Human, Animal, Machine,” which I reference at the end of my book, which does a nice job of laying out the connections between ideas like Kurzweil’s and those of Christian eschatological movements. These people are very dismissive of analyses like that. And this goes back to your first question, why did I write this? I said, oh, okay, let me do an analysis in a language that they will understand.
Ketcham: I’ve noticed a phrase you like to use: That’s not how the world works. These people, it seems, are divorced from the reality of the world.
Becker: Yeah, they are. They have completely misunderstood how the world works, how science works, how people work. I know I keep hammering away at Andreessen, because he’s my least favorite person in the entire book. He says in that unhinged manifesto of his that he is the keeper of the true scientific method, contrasting himself with academic scientists. Well, buddy, first of all, you wouldn’t need to say it so loud if it were true. And second, the real scientific method is not to have a statement of beliefs about what the world is and how it works, or what the inevitable future of technology is. The real scientific method is to be curious and questioning about the world and be open, constantly open, to the possibility that you’re wrong – in fact, expecting that you’re wrong. And that’s not something these people are capable of.
Ketcham: Inherent throughout this movement is techno-authoritarianism. You mentioned why we should beware the rule of the engineers: there’s no democratic accountability. Talk about that a little bit and tie it in with Andreessen’s embrace of the Italian fascist techno-enthusiast Filippo Tommaso Marinetti – Mussolini’s favorite philosopher.
Becker: So, if you think that the future of technology is the only thing that matters because you suffer from engineer’s disease and that nothing else is really important, and if you think that the future of technology is predetermined on rails, that there’s something inevitable about it because you subscribe to this ideology of technological salvation, then you’re going to think that anybody who doesn’t see the world that way is, you know, irrelevant and in the way of the future, either the glorious future or the apocalyptic future that you are trying to avert. Either way, they’re in the way. And if it is a utopia that you are certain is coming or an apocalypse that you are certain you need to avert, it doesn’t matter how many people tell you you’re wrong or how many people try to stop you. The best thing that you can do is to amass as much power as possible to bring that utopia about and avert the apocalypse. And so, democracy is, you know, just the rule of the uninformed, right? And since they don’t know the secret knowledge that has been vouchsafed to you of what the future holds, they can’t be trusted.
Ketcham: Democracy is an inconvenience to be swatted away. The technological priesthood requires an authoritarian apparatus.
Becker: Exactly. So, this is where you get people like Curtis Yarvin, who is an excellent example of engineer’s disease. Here’s a guy who thinks that he understands how the world works. He doesn’t know anything. His analysis of all of the different texts that he supposedly draws upon in the construction of his blitheringly incoherent philosophy shows he’s just bad at reading and bad at understanding. And like a lot of these guys, just bad at thinking.
Ketcham: Give us a quick take on Yarvin, his worldview, what he represents as part of this movement.
Becker: Curtis Yarvin is a favorite court philosopher of J.D. Vance, a software engineer who got funded by Peter Thiel – he’s tight with Thiel – and a monarchist. He wants kings, and he wants the monarch to be essentially a tech CEO, because those are the people who actually understand things, according to him. He thinks that the world does not work without a king, that society does not work without a king, and that all of the problems of the world today are proof that we need kings. He says that the problem with society today, and this is a direct quote, is chronic kinglessness, and that democracy is a disease that needs to be stamped out. This guy has also come right to the edge of defending slavery and has certainly said way more positive things about slavery and apartheid than could ever be warranted. He believes there are inherent genetic differences in intelligence and other aptitudes between different races of humans, an idea that has been proposed and dismissed just over and over and over again because there is overwhelming evidence that it’s not true. He thinks people like him are better at genetics and genomics than geneticists and genomicists. Again, engineer’s disease.
Ketcham: The engineers must by all means realize their future utopia, and one of the versions of utopia as envisioned by Bezos and Musk is space colonization. You show very clearly that this is, again, divorced from reality. How much of a fantasy are we talking about here? And why go to Mars in the first place? As I told a friend recently talking about this, there’s no wine on Mars, no women, no wildflowers, no running water, no air.
Becker: To pick on another guy who absolutely deserves it, Elon Musk has been very consistent about the vision he has for Mars and the justification for it. He says that we need to become an interplanetary and interstellar species to preserve the light of consciousness, and that specifically what we need to do is go to Mars. His plan is to have a million people living on Mars by 2050 in order to form a self-sufficient colony that will survive even if the rockets from Earth stop coming, as a backup for humanity in the event of a massive disaster here on Earth. This is one of the stupidest ideas that I’ve ever heard in my life. It doesn’t work for so many different reasons. Mars is absolutely terrible. The radiation levels are too high. The gravity is too low. Yes, there’s no air! The dirt is made of poison. It’s a horrible place. Musk talks about wanting to terraform it by nuking the polar ice caps to build a bigger atmosphere. That’s not going to work. It wouldn’t produce enough of an atmosphere to allow for human habitation. It wouldn’t solve the radiation and gravity problems. It wouldn’t solve the toxic dirt problems. Musk talks about Mars as a refuge in the event of an asteroid strike here on Earth. More asteroids strike Mars than Earth. And Earth, even after an asteroid strike like the one that killed off the dinosaurs, was still a nicer place than Mars. We know that because mammals survived, whereas no mammal could survive unprotected on the surface of Mars. And think about what getting a million people to Mars would require. Say that you could somehow cram a hundred people into a single spaceship, into a single rocket. That is more than ten times the number of people that have ever gone up in one space mission ever. And also, of course, that mission that sent eight people up, that was just to Earth orbit. They could get back to the ground in a couple of hours. And it took less than a couple of hours for them to even get to Earth orbit in the first place. A mission to Mars takes six to nine months minimum.
Ketcham: And the radiation that would accumulate or that would be absorbed by the passengers during that period, would it not then result in terrible cancers over time?
Becker: Yeah, it would massively increase the cancer risk. It would probably sterilize some of the people on board or at least make it much harder for them to have kids.
Ketcham: Hopefully Bezos and Musk?
Becker: Well, a little too late for Musk. Put all that aside, say that you could get a hundred people in a rocket, say that you somehow could keep them alive for six to nine months of the journey from Earth to Mars. And yeah, okay, some future rocket technology could cut that number down – but by 2050? No, that’s not happening. But say you have those hundred people on each rocket somehow. You want to put a million people on Mars? That’s 10,000 launches with a hundred people each. That’s how many you’d need. And Musk has said, yeah, that’s right. We’re going to have to launch a hundred people a day for years. Buddy, your rockets explode all the time. You know, the failure rate of crewed launches over the history of human spaceflight is something between one percent and five percent. Say that it was 0.1 percent. Say that somehow SpaceX, rather than having a terrible safety record, suddenly had the best space safety record by a factor of 10 or more. Well, 0.1 percent of launches for a million people – let’s see, how many would that be? That would be, if you got 10,000 launches, ten launches. So congratulations, you killed a thousand people.
Ketcham: All for the greater good of realizing utopia. So you’ve arrived on Mars, you’re living presumably in an underground community. You never see the sky. It sounds like a nightmare. It sounds like a place for people to go insane.
Becker: It’s hellish. Look, it’s hard enough to find people who want to winter over at the South Pole. If you winter over at the South Pole, you still get to go outside. You still get to see the sky. It’s cold out there. You don’t go outside for long, but they do it. You can’t leave the polar station in the winter because you can’t get planes or helicopters in or out because of the weather and the darkness. You’re stuck there with people. But there’s oxygen to breathe. There’s enough air that you don’t need to have the air piped in. All you need is to have food and to be able to stand staying there with the same people for upwards of six months. And it’s still so psychologically brutal that very few people are willing to do it. I think a lot of people think that they could do it, but they actually can’t, because it requires a very particular psych profile. That is a walk in the park compared to Mars. In my book I write that Mars would make Antarctica look like Tahiti. Someone, I don’t remember who, told me that’s actually inaccurate, an understatement. Compared to Mars, the polar base at the South Pole in the middle of a polar night is like Central Park, surrounded by people on a gorgeous summer’s day. It is like sitting in the lap of luxury compared to anything that you could have on Mars at anything like a reasonable amount of time from now.
Ketcham: You make abundantly clear in the book that this vision of space colonization is really in service of perpetuating growth and substantiating or rationalizing the ideology of growthism. And I commend you for bringing up the 1972 Club of Rome-MIT report, Limits to Growth, because that’s a pivotal 20th century document that has been forgotten by too many people who should know better. You quote Musk and Bezos both saying that if we don’t colonize the solar system and beyond, we will stagnate because the planet is limited in its resources. The ecosphere is finite. The technosphere will not be able to function limitlessly on a finite resource base. What we’re talking about here is this idea that, oh, we’ll go to space, ergo no need to impose any limits now. We can continue business as usual.
Becker: That’s right. One of the things that I actually like about what Bezos said is that he was so explicit and specific. In the same way I kind of like what Musk said about Mars, precisely because he pinned it down so carefully and it was so easy to tear apart. Because it’s just a delusion. Bezos did something very similar. And you know, Bezos has made fun of Musk, which I kind of love as well – the two of them sniping at each other, right? I love it when they fight each other! Bezos has made fun of Musk for good reason. He said, Musk wants to go to Mars, but Mars sucks. Mars does suck! Instead, Bezos has this idea of giant space stations that he pulled from Gerard O’Neill. Bezos has also been very, very clear that what he wants is continued growth in energy usage per capita – forever. He specifies growth in energy usage per capita. And he’s been very clear that the reason he thinks we need to go to space is so we can have that energy usage per capita continue to grow indefinitely because resources on Earth – and he is correct on this – would run out no later than a couple hundred years.
The problem, of course, is that if you go to space, you still run out, putting aside problems such as it’s hard to live in space. And we might run out of resources a lot sooner than that, because the theoretical limit may be well above the actual practical limit. In other words, there are inherent problems with this ideology of growth that comes along with technological salvation. You still don’t get out of having limited resources by going out into space. Because if you have energy usage continue to grow at the same rate that it has for the past couple hundred years, then in like a thousand years, you’re using all the energy output of the Sun. And then a couple thousand years after that, you’re using the entire energy output of all of the stars in the entire observable universe. A couple hundred years after that, you’re just using all of the energy in the universe. Of course, none of that is possible, if for no other reason than the fact that the speed of light is limited. That is, you can’t get to all of those places in that amount of time. And the laws of physics are not gonna come in and save you, you know? We’re not getting around that speed of light limit. It’s not happening.
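A back-of-the-envelope version of the calculation Becker is gesturing at, with figures supplied here rather than taken from the interview (world power consumption of roughly 2 x 10^13 W, solar output of roughly 3.8 x 10^26 W, and growth of about 2.3% a year, i.e., tenfold per century):

$$\log_{10}\!\left(\frac{3.8\times 10^{26}\ \mathrm{W}}{2\times 10^{13}\ \mathrm{W}}\right) \approx 13.3 \;\Rightarrow\; \text{about 13 centuries of tenfold-per-century growth to match the Sun's entire output}$$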
Ketcham: One of the big themes of the book is the all-too-human fear of the ultimate limit, which is death. These people are terrified of death. Things come to an end, people die. And that’s it, that’s the nature of things. It’s how the world works, as you would say. One of the most moving parts of the book was Kurzweil and his ridiculous Dad Bot. Now I mentioned that my father died recently. And you know, I was joking about uploading his consciousness into a computer, obviously nonsense – but you write that Kurzweil really believes it. I read that and I thought, oh boy, this guy’s got some psychological illness.
Becker: There’s nothing wrong with being afraid of death. There’s a problem with denying that it’s gonna happen and letting that fear override and control your entire life. Like, you know, live a little, have a life! If you let your life be controlled by fear, you’re not really living. That’s hardly an original thought. I’m pretty sure I just quoted a platitude that shows up in at least half a dozen tearjerker Hollywood movies just from the nineties alone, right? But these guys don’t get it. They are not able to understand that all the money and power in the world can insulate you from a lot of things, but it can’t insulate you from death. And technology can do a lot of things, and science can discover a lot of things, but it cannot prevent or reverse death. And in fact, the better that we understand science, the better that we understand, you know, thermodynamics, chemistry, biochemistry, biology, the better we understand exactly why death is both inescapable and irreversible. The other thing our best science tells us is that we don’t haunt or inhabit our bodies. We are our bodies.
Ketcham: When the body dies, we die. We cannot be separated from the physical environment in which the brain is operating at any particular moment. Hence the absurdity of the notion of uploading consciousness into a computer. Again, it goes back to Cartesianism, right? This artificial separation of mind and body?
Becker: It doesn’t just go back to Descartes. Descartes is sort of the origin of that idea in the history of rational Western philosophy, or empiricist Western philosophy, but it’s not actually the origin. It goes back to the idea of a soul, an ancient idea across many different human cultures. That there is something immaterial that inhabits the material. If you want to have a worldview based on science, that’s not what’s going on. There’s no good evidence that mind uploading can be accomplished. That’s not happening! And even if it turns out it can be done at some point in time, first of all, there’s a great debate in philosophy about whether that would be you or a copy of you. And second, there are many reasons to think that the technology to do that is not computer technology. The computational analogy for how the brain works is just that. It’s an analogy. And it’s the latest in a long line of technological analogies for how the brain works.
Ketcham: But how the brain works remains – and you remark on this in the book – mostly a mystery. We don’t know what’s going on in there, really.
Becker: No, there’s a lot that we don’t know. We do know that it’s not very much like a computer. Computers are designed and built. Brains are not designed by anybody. Brains are things that evolved in response to a long history in the world. And that’s not to say that you can’t ever get a technological artifact to think and do the things that humans think and do. But we wouldn’t call that intelligence in the same sense as brain intelligence.
Ketcham: What about the hype around artificial intelligence, and what we’re now calling artificial general intelligence?
Becker: The term AI is one of these terms that has kind of deflated over time in meaning. It’s gotten to mean less and less, which is where this term AGI came in. When I was a kid, AI was what we used to describe Commander Data on Star Trek. Now we’d call that AGI, artificial general intelligence, and AI is instead the same thing we were calling machine learning a few years ago. And then before that, we were calling it data science. And before that, we were calling it statistics. Which is not to say that deep learning models like LLMs aren’t doing anything, but fundamentally they are doing a lot of linear algebra to find patterns in large data sets and then reproduce those patterns in new data sets that they produce. They are trying to predict the next word or, in the case of images, the next pixel. They can do some interesting tricks and actually, you know, do things that we’ve not seen computers do before. But fundamentally, they do not have a model of the world. They do not understand how the world works. They do not bear any relationship to truth or falsehood. They only know how to do one thing, which is predict the next word in a way that will sound plausible. But saying that these machines talk to you, or that they make mistakes – these are all different ways of anthropomorphizing the machines. And we are so, so, so prone as humans to anthropomorphize everything, especially things that produce language. There’s a long history of this, even with far less impressive and less capable chatbots, all throughout the history of artificial intelligence research. Eliza was a chatbot from the 1960s, a very transparent algorithm that was made to emulate the style of a particular kind of psychotherapist, where you would say, Eliza, I’m not doing so well today. And Eliza would say, Oh, I’m sorry to hear that. Why aren’t you doing so well today? And then you’d say, Well, you know, I’m really upset because I got into a fight with my partner. And Eliza would say, Oh, I’m really sorry to hear that. Why did you get into a fight with your partner? It had maybe 10 or a couple dozen different canned phrases with blanks. But people said at the time, I talked to Eliza and she really understood me and helped me. But that machine is not thinking.
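For readers curious what “canned phrases with blanks” means in practice, here is a minimal Eliza-style sketch in Python (an illustration only; the patterns are invented, and Joseph Weizenbaum’s 1966 original was more elaborate, though the principle is the same: substitution, not understanding):

    import re

    # A few canned templates with blanks, filled from the user's own words.
    RULES = [
        (r"i'?m not doing so well(.*)", "I'm sorry to hear that. Why aren't you doing so well{0}?"),
        (r"i got into a fight with (.*)", "I'm sorry to hear that. Why did you fight with {0}?"),
        (r"i (?:am|feel) (.*)", "Why do you feel {0}?"),
    ]

    def eliza_reply(utterance: str) -> str:
        text = utterance.lower().strip(".!? ")
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please tell me more."  # fallback when nothing matches

    print(eliza_reply("I'm not doing so well today"))
    print(eliza_reply("I got into a fight with my partner"))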
Ketcham: Sounds like a conversation Ray Kurzweil has with his Dad Bot. Let’s conclude with the group of people who are, to me, the most offensive protagonists in your book, the “kept intellectuals,” as you describe them, who provide the ideological underpinnings used to justify all these absurd, sometimes nonsensical claims about our glorious tech future. I’m talking about the utilitarian ethicists Toby Ord, William MacAskill, and Nick Bostrom, all of Oxford University and all central figures in the philosophical school called longtermism.
Becker: They have an approach to ethics that, again, fundamentally misunderstands how the world works. They believe that everything can, at least in principle, be quantified.
Ketcham: Happiness can be quantified, as in all utilitarian ethics.
Becker: Happiness can be quantified, the goodness or badness of a world for people can be quantified.
Ketcham: Do you think that’s another expression of engineer’s disease?
Becker: I think so, yeah. There’s this notion that quantification can lead to figuring out what the right answer is for what to do in any given situation. Well, there’s a reason why ethics is hard. And the reason is that the world is a complicated and messy place we did not make. There are all sorts of situations that arise in which it’s not clear what the right thing to do is. Utilitarianism has been around for hundreds of years. Variants on the idea have been around for thousands of years. And part of the reason why it’s not taken seriously by a lot of philosophers is precisely because in order to really be a utilitarian, you need to believe that it is possible in principle, if not in practice, to quantify all the different kinds of human happiness and suffering and compress them into a single number, positive or negative, across human experience – not just in one life, but in every life. And when these guys turn to longtermism, they say you have to be able to do this over the course of any human life that could ever come to pass over the future history of the universe. That’s absurd. And worse, it leads them to absurd conclusions. Toby Ord says that there’s some number of future people for whom ensuring their happiness would be worth torturing and killing a million people. And he would say that the number of future people is truly astronomical. I don’t think that any such number of people exists, even in principle.
Ketcham: Do you think that these guys know better, but are being paid to present a philosophy that pleases tech billionaires?
Becker: I think they genuinely believe it. This is the nicest thing I can say about them. Of all the people in the book, I am fully convinced that those guys – Ord, MacAskill, Bostrom – are one hundred percent true believers. I definitely think their views have been influenced by the fact that they’re being paid by tech billionaires, but not in a cynical way. They probably have not really examined how that might have subtly or unconsciously influenced them by changing where their biases lie. And again, if earnestly believing something makes it easier for you to get paid, it’s gonna make it more likely that you earnestly believe it rather than less likely.
And note that they also tend to dismiss expertise, which likely issues from unexamined biases that come from getting money from tech billionaires who dismiss expertise. To take Toby Ord again: he said that in his estimation, the chance of human extinction or irrecoverable civilizational collapse from artificial general intelligence – an ill-defined machine that nobody knows how to build and that doesn’t exist – is greater than the threat posed by climate change and nuclear war by a factor of 50. That’s nuts. It’s a deeply irresponsible, ill-informed thing for someone with the platform and cultural power of an Oxford philosophy professor to say. I mean, this guy advises the UK parliament on AI!
Ketcham: Irresponsible, ill-informed – but well paid.
Becker: Yes. And a true believer.

