The Hidden Cost of AI: How Energy-Hungry Algorithms Are Fueling the Climate Crisis

Photo by Šimon Caban
Artificial Intelligence (AI) has become an integral part of modern society, revolutionizing industries, enhancing daily life, and driving economic growth. From virtual assistants to advanced data analytics, AI applications are diverse and continue to expand rapidly. However, this rapid growth comes with significant environmental implications, particularly concerning energy consumption and carbon emissions. As AI technologies become more prevalent, understanding and mitigating their environmental impact is crucial for sustainable development. According to the International Energy Agency (IEA), a typical AI data center today uses as much power as 100,000 households, while the largest centers now under construction will consume 20 times that amount.
The Energy Demands of AI
AI models, especially large-scale ones, require substantial computational power for training and operation. Training sophisticated models like GPT-3 (a large language model capable of holding natural-language conversations) involves processing vast amounts of data through complex algorithms, necessitating extensive computational resources. For instance, training GPT-3, with its 175 billion parameters, consumed approximately 1,287 megawatt-hours (MWh) of electricity, resulting in carbon emissions equivalent to driving 112 gasoline-powered cars for a year.
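For readers who want to check that comparison, a back-of-envelope sketch follows. The 1,287 MWh figure comes from the paragraph above; the grid carbon intensity of roughly 0.4 kg of CO2 per kWh and the roughly 4.6 metric tons of CO2 a typical gasoline car emits per year are assumed reference values, not figures from this article.

```python
# Back-of-envelope check on the GPT-3 training comparison above.
# Assumptions (not from the article): an average grid carbon intensity
# of ~0.4 kg CO2 per kWh, and ~4.6 metric tons of CO2 emitted by a
# typical gasoline-powered car per year.

training_energy_mwh = 1_287          # cited above for GPT-3
grid_kg_co2_per_kwh = 0.4            # assumed average grid mix
car_tons_co2_per_year = 4.6          # assumed per-car annual emissions

energy_kwh = training_energy_mwh * 1_000
emissions_tons = energy_kwh * grid_kg_co2_per_kwh / 1_000
car_equivalents = emissions_tons / car_tons_co2_per_year

print(f"~{emissions_tons:.0f} t CO2, ~{car_equivalents:.0f} cars for a year")
# -> ~515 t CO2, ~112 cars for a year, matching the cited figure
```

That the arithmetic lands on 112 under these assumptions suggests the comparison rests on a grid of roughly that carbon intensity; on a cleaner grid the car-equivalent figure would be proportionally lower.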
The energy-intensive nature of AI extends beyond training to deployment and inference phases. AI applications, such as image and speech recognition, natural language processing, and recommendation systems, continuously process data, resulting in ongoing energy consumption. Data centers, which house the hardware for these computations, have seen a significant rise in their electricity consumption. In 2022, global data center electricity consumption reached 460 terawatt-hours (TWh), positioning data centers as the 11th largest electricity consumer worldwide, according to the Organization for Economic Co-operation and Development. In fact, projections by the IEA indicate that by 2030, electricity demand from data centers could more than double to around 945 TWh—more than Japan’s current annual electricity use.
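Those two endpoints imply a steep compounding curve. Here is a minimal sketch of the arithmetic, using only the figures quoted above (the eight-year window is simply 2022 to 2030):

```python
# Implied compound annual growth rate from the two data points above:
# 460 TWh in 2022 growing to roughly 945 TWh by 2030.
consumption_2022_twh = 460
projected_2030_twh = 945
years = 2030 - 2022

cagr = (projected_2030_twh / consumption_2022_twh) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 9.4% per year
```

In other words, the projection amounts to data center electricity demand compounding at roughly 9 to 10 percent per year for the rest of the decade.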
The Carbon Footprint of AI
The environmental impact of AI is closely tied to the energy sources powering data centers. Many data centers rely on non-renewable energy sources, leading to substantial carbon emissions. In the United States, data centers accounted for over four percent of the nation’s total electricity consumption, with 56 percent of this energy derived from fossil fuels, resulting in more than 105 million tons of CO2 emissions.
Compared to other sectors, the carbon footprint of AI and data centers is becoming increasingly significant. For example, the emissions from in-house data centers of major tech companies, such as Google, Microsoft, Meta, and Apple, may be over seven times higher than officially reported. This underreporting underscores the need for increased transparency and accountability in evaluating the environmental impact of AI technologies.
One analyst from the Carbon Disclosure Project noted, “There’s a major transparency gap in how companies report data center emissions. As AI workloads surge, it’s essential we close that gap.”
How AI Is Accelerating the Climate Crisis
The escalating demand for AI technologies places additional strain on global energy resources. As AI becomes more integrated into various sectors, the energy required to support these applications is expected to increase. Projections indicate that by 2028, data centers could account for up to 12 percent of the United States’ annual electricity consumption. This surge in energy consumption could lead to higher greenhouse gas emissions, exacerbating global warming and hindering efforts to combat climate change.
Furthermore, the expansion of data centers to support AI growth has led to increased water consumption for cooling purposes, adding another layer of environmental concern. The cumulative effect of these factors points to the urgent need to address the ecological impact of AI technologies.
In Ashburn, Loudoun County, Virginia—dubbed “Data Center Alley” due to its exceptionally high concentration of data centers, forming a significant hub for global internet traffic—water usage increased by nearly 63 percent between 2019 and 2023, primarily driven by the growth of AI-driven infrastructure. Similar concerns have been raised in Chile, where community opposition led Google to abandon water-based cooling at its Santiago facility in favor of more sustainable alternatives.
Tania Rodriguez, a community leader in Santiago, said, “We’re not against technology. But we cannot allow AI to grow at the cost of our water and future.”
Steps Toward Sustainable AI
Addressing the environmental challenges posed by AI necessitates a multifaceted approach that combines technological innovation, policy changes, and industry collaboration.
Efficiency Improvements: Optimizing AI models to enhance efficiency can significantly reduce energy consumption. Techniques such as model pruning (which removes redundant neural connections while maintaining performance), quantization (which reduces model precision by converting to lower bit-width formats), and knowledge distillation (which trains smaller “student” models to mimic larger “teacher” models) help streamline AI models, making them less computationally intensive without compromising performance. Implementing these methods can lead to substantial energy savings during both training and inference phases.
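To make these techniques concrete, here is a minimal PyTorch sketch of two of them, pruning and dynamic quantization. The toy two-layer model and the 30 percent pruning ratio are illustrative assumptions for this sketch, not parameters from any production system discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for a much larger network (illustrative only).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Pruning: zero out the 30% of weights with the smallest magnitude in
# each Linear layer, then bake the pruning mask in permanently.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")

# Dynamic quantization: run Linear layers in 8-bit integer arithmetic
# at inference time, cutting memory traffic and energy per forward pass.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
```

Knowledge distillation, the third technique, would add a smaller “student” network trained to mimic this model’s outputs. All three trade a modest amount of accuracy for large reductions in the compute, and therefore the energy, needed at inference time.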
Renewable Energy Integration: Transitioning data centers to renewable energy sources is a critical step toward sustainability. By leveraging solar, wind, and hydroelectric power, data centers can decrease reliance on fossil fuels and mitigate environmental impact. Companies such as Google and Microsoft have invested in renewable energy projects to power their data centers, aiming to reduce their carbon footprints.
Google has matched 100 percent of its electricity use with renewable energy sources since 2017, and Microsoft has committed to becoming carbon negative by 2030. Meta and Microsoft have also pioneered liquid cooling techniques and direct current power systems, resulting in an improvement in energy efficiency of over three percent.
Anthropic and OpenAI have not disclosed specific sustainability benchmarks, raising questions about transparency in this space. However, industry-wide collaboration efforts, such as the AI Energy Score project, are beginning to push toward unified accountability standards.
Advanced Cooling Techniques: Innovative cooling methods, such as liquid cooling and AI-driven climate control systems, can enhance energy efficiency in data centers. These systems can reduce cooling energy consumption by over 18 percent compared to traditional air cooling methods. Implementing such technologies helps lower overall energy usage and associated emissions.
In a 2025 collaboration, Microsoft and Meta adopted electric vehicle-derived cooling systems to manage high-density AI racks exceeding one megawatt per unit, technologies that reduce heat while minimizing water usage.
According to Noman Bashir, Computing & Climate Impact Fellow at MIT’s Climate and Sustainability Consortium, the relationship between response speed and energy consumption in AI models is significant yet often overlooked. Despite this direct correlation, users remain largely unaware of how their AI interactions impact energy usage, as companies rarely highlight this aspect in their product presentations.
Policy Advocacy and Regulation: Governments and regulatory bodies play a pivotal role in promoting sustainable AI practices. Implementing policies that encourage energy efficiency, mandate transparency in reporting emissions, and provide incentives for using renewable energy can drive the industry toward more environmentally friendly practices. For instance, tax policies that address the carbon emissions from AI and cryptocurrency operations can help mitigate their environmental impact.
In the U.S., states such as Virginia are considering legislation to mandate water use estimates for data centers. Minnesota’s data center law, passed in 2025, requires data centers to consider water conservation measures if they plan to use more than 100 million gallons per year to cool their facilities. Ireland, facing grid instability due to rising demand driven by AI, is now reevaluating its permitting processes for new facilities.
Decentralized and Sustainable Data Centers: Exploring decentralized data center models powered by renewable energy sources can contribute to sustainability. Initiatives such as Earth Friendly Computation advocate for building data centers on Indigenous lands, utilizing local renewable resources, and promoting community involvement. Such approaches not only reduce environmental impact but also foster economic development in local communities.
Conclusion
The rapid advancement of AI brings about transformative benefits across various sectors. However, it also presents significant environmental challenges, particularly in terms of energy consumption and carbon emissions. Recognizing and addressing these hidden environmental costs is imperative for sustainable AI development.
Businesses, researchers, and governments must collaborate to prioritize sustainability in AI initiatives. This includes investing in energy-efficient technologies, integrating renewable energy sources, adopting advanced cooling methods, and implementing supportive policies and regulations. Transparency in energy reporting and the use of standardized metrics for measuring AI’s environmental impact will be crucial in driving accountability and ensuring its responsible use.
Consumers also play a role in reducing AI’s energy footprint. By closing apps when not in use, choosing less resource-intensive tools, and supporting companies that demonstrate environmental responsibility, individuals can contribute to the collective effort, notes the World Economic Forum.
Ultimately, the future of AI must align with the planet’s future. As we continue to unlock AI’s potential, we must ensure that innovation does not come at the expense of our environment. Only by embedding sustainability into every layer of AI development can we achieve a brilliant—and responsible—technological evolution.
This article was produced by Earth | Food | Life, a project of the Independent Media Institute.

Why more about AI? Because, even with the ceaseless, stupendous mainstream hype about AI, most folks aren’t very clear about the situation. And a second, more proximate reason is Trump’s effort to pressure the “AI project” to reconsider its agenda. Instead of (rather feebly) trying to incorporate guardrails to align with human values and aspirations, Trump has by executive order directed AI to align with what Washington dictates. Trump orders AI to align with Trump.
The current most frequent critical concerns about AI revolve around AI taking over many jobs while dangerously speeding up others, AI making hallucinatory or otherwise dumb mistakes that hurt people, AI being misused by nefarious actors with potentially horrible impact, AI energy use demolishing the environment and accelerating global warming, or AI going rogue and battling humanity to our death. The dangers are real. The first four are already happening. But still, how real?
Should we all take note of these acknowledged dangers and indeed of all the concerns anyone has? Should what ecologists call a precautionary principle regarding actions with huge possible downsides cause extreme AI caution? Plenty of pundits and AI industry insiders look around and say they don’t see such problems. For some reason, like perhaps their bottom line and their evolved identities, they fail to note that the AI bearing on these potential dangers is not what AI is now. It is what AI will be later. It is not what AI now does, but what AI will do. What matters today, therefore, is the rate of AI’s growing social use and even more so of its growing functionality. Not where it is, but where it is going.
Critics who agree with that simple observation note that AI’s growth is absolutely unprecedented on both counts. AI has spread through society a lot faster than any earlier technology. AI’s functionality, or the things it can do, has also grown and diversified incredibly quickly. More concerning, regarding both societal spread and operational functionality, AI’s rate of advance is accelerating. AI isn’t just moving fast. AI’s rate of moving fast is increasing.
When you hear about Artificial Intelligence becoming AGI, or artificial general intelligence, where it can then do everything humans can do and more, and then you hear about AGI continuing on to become ASI, or artificial super intelligence, where it can do things humans haven’t even dreamed of doing and we won’t even be able to comprehend, those projections anticipate that the rate of change of its rate of change will continue to grow.
Some reply that no, the concerns are hyperbolic paranoia. They won’t happen because the AI industry is running out of human data to train ever larger AIs on. Perhaps that’s true, but I doubt it is a serious impediment. I think the industry’s solution will be to train tomorrow using data generated not only by humans but steadily more so by earlier AIs. That solution may generate new problems, but it will likely overcome the too-little-data obstacle.
Other critics call the concerns paranoid because the AI project is running out of “compute,” which is the industry’s name for the calculating, memory, storage, and other resources that AI uses when it answers a question, paints a picture, or enacts a task. And actually, it isn’t just that they are running out of that stuff, but that the energy needed to generate more of it is beyond what societies can even produce, much less produce without demolishing the ecology. The industry rejoinder, however, is that AI will itself find new sources of clean energy. It will even fix whatever it messed up before it found those solutions.
Even if that were to occur, however, another possible roadblock to dangers materializing is that as efforts to grow AI’s functionality proceed, it will turn out that there is a point of diminishing returns. Only so much gain will be possible whatever one does. Even if there is sufficient data to train with and sufficient compute to produce with, it won’t translate into new AI capacities.
What about the oft-discussed concerns? Worry that AI might go rogue causes some critics to say we should pull the plug when AIs start to improve themselves. Pull the plug when AIs become self-improving. Just for the record, from what I read, that milestone may not be too far off, but it certainly isn’t now.
Other critics say we need to pull the plug even earlier, for example when AIs start to communicate with each other using a language they devise that we can’t understand, so that we can’t know their choices. Pull the plug the moment we lose track of understanding even what AIs are doing, much less how they are doing it.
Others say, AI is already messing up the ecology, creating vast swaths of manipulation, murdering privacy, and taking or warping huge numbers of jobs. Pull the plug to stop the damage, or at least severely restrict AI’s use to align it with good values—exactly what Trump has intervened to prevent.
In response, the corporate/government argument against pulling the plug for any reason at all has two themes, and is so far easily winning. First, if AI keeps getting stronger, it will clean up all the mess it made on the way, cure diseases, generate new clean sources of energy, increase longevity, and enrich life options while it eliminates dangerous and debilitating tasks from our lives. We are told it would be crazy to forgo all that.
The argument’s second theme says, yes, you’re right that AI on steroids like you envision it—not as we have it, but as it may become—can also be used for military ends or to take over markets, manipulate populations, and so on. Other countries—really only China because only they and we have the infrastructure and output to continue AI’s growth—will pursue it. So? Well, the next step in the argument is that whoever gets to ASI first wins. Wins what? Wins everything. So it is a matter of national security to win and to prevent others from ever catching up. Barrel ahead. To urge caution is treasonous.
There is plenty to think about even just in the above brief summary of current views. But I want to address a different concern, my own somewhat idiosyncratic concern that goes back to the beginning of the current AI explosion.
Imagine next week your son or daughter in high school, or maybe in third grade, or perhaps in college, comes home and over dinner tells you that her teacher was replaced. You ask, “Was it that teacher you really like?” She says, “Yes.” You ask, “Who is the new teacher?” and she replies that it is “Hortense, the robot.” Incredulous, you ask, “Is she your only robot teacher?” “No,” your daughter answers, “Hortense is all the teachers. They all have names of their own, and speak with different voices, but it is obviously just versions of Hortense.” You ask, “Do you like her—or it?” Your daughter replies, “Yes, she’s great, so patient, so knowledgeable.”
Or imagine next week you go to see your doctor, or perhaps to a clinic. A robot greets you and ushers you to the doctor’s office, but the doctor is Harry, another robot. He chats with you. Asks you questions. Does doctor stuff. He has a great bedside manner. He explains stuff clearly, answers questions, is funny, shows concern. You like him, but you feel a little uneasy. Next visit, you feel less uneasy.
Or imagine you go to work and you get a pink slip. Henry has taken your job. Actually, what looks like Henry has taken all the jobs. Henry doesn’t get sick. Henry doesn’t complain. Henry is super fast. And Henry works 24/7.
So you have time off. You want to make a plan for how to spend it. You ask Harriet, your personal assistant, and she plans your next week. She assembles everything you need. She makes all needed appointments. Then she asks if you will be off next week too, except you realize she already knows because she was in touch with Henry. She plans your next month, ready to update if anything changes.
Then it is dinner time and Holden has it ready for you. He is a fantastic cook. He chooses the menu. He manages the shopping. He cleans up. Enjoy. Wait, the phone rings. No problem, Harriet answers for you. She talks with her counterpart, Horatio. You keep eating.
Dinner ends, time for some engaging conversation. Hank is all set. He is eager, and he is never judgmental. Has all the answers. Explores whatever you want. You watch some TV with him, too. The actors, scenes, plot, all of it is AI generated. Bedtime arrives. Heidi or Harold waits. She/he turns on some music that she/he knows you will love. It is by Happy and the Gang—no human involved. There is an endless supply of such music.
Am I nuts? Or is this more or less where AI’s becoming AGI’s becoming ASI’s, with all of them doing what good people freely want them to do, takes us? Each request that we make is contextually helpful, sensible. All together we nonetheless slide to somewhere we never had in mind. We don’t pull the plug, supposing we could find it, because we have become totally dependent and, far from becoming more creative as a result of all our time saved, we have become mired in a perpetual fog of addicted inactivity.
Okay, perhaps my story is a bit over-dramatic, but it does display my perhaps idiosyncratic concern. If AI becomes AGI, which becomes ASI, yes, I suppose it might go rogue. Game over. Or it may be horribly misused. Game made grotesque. Or it might crunch the environment. Goodbye earth. All good reasons for caution. But I also fear that ASI would become indispensable to us while as an unintended byproduct we would become addictively infantilized so that over time, our own choices diminish us.
You may smirk and reply, that is nonsense. You may dismissively ask, have you been drinking corporate Kool-Aid? A human is a human, you say. You underestimate us. We will not go down such a self-destructive rabbit hole.
Well, that a human is by definition a human is quite true. We are us before and after we use an AI. But I claim there are things we do that are highly consequential to what we might call our humanness. To our being who we can be. And I claim that those human-like things can diminish or even disappear from our lives.
Put a person in a jail cell. Day after day, our prisoner encounters nothing but four walls plus healthy meals slid in under the door. The prisoner can count ants crawling in her cell but other than that, not much. The person’s human options, her human actions, are not the same as they were outside. Yes, she can adjust in some degree and survive. And yes, she will still be a person, still be human. But let the deprivation last for years, or forever. Will the person be the fullest person she could and should have been or be made much less?
But that person was coerced, you say. People won’t freely choose such a path for themselves. Consider humans in horribly alienating circumstances who taste a powerful addictive drug that gives them a degree of pleasure, a degree of seeming freedom. Don’t people go down such holes? Now make the drug something the whole society celebrates. Make it something that seems able to deliver utopian results. Now what happens?
I think we can all agree that to not do the things people do that are most human, and to have nothing comparably human to do in place of those lost options, is what solitary confinement in prison imposes. And we can also agree that it certainly reduces those who suffer it. But AI?
What if five or ten or twenty years from now most or even all human-like things are done by ASIs? Suppose AI becomes seriously better than any human at artistic creation—film, music, and painting. Or at planning one’s activities, or at conducting one’s interactions, and more. What then?
That a tractor can lift more weight than any person—so what? That’s a good thing. That a plane flies and we can’t. That a computer crunches numbers faster than any person. Still, so what? These are good things.
But what if an AI, an ASI, thousands or millions of ASIs can each write or tell stories better, make music better, paint pictures better, solve math problems better, play chess, Go, and even poker better, schedule a week, month, or a year better, write letters better, teach children better, teach adults better, have conversations better, take care of children or elders better, and mend the sick better than any person can? Suppose, just suppose, that that becomes true. It would be, I think, of a different order than having strong tractors. Don’t you agree?
In other words, I worry that, optimally, ASIs will do what people want them to do (I am ignoring that they might go rogue), will solve their energy implications, and will themselves prevent nefarious misuse, but that in doing so they will increasingly take over most or all human-like functions and will thereby leave people with less of our heretofore distinctively human things to do. I worry that that would be a strange and debilitating dystopia, as well as not what we sought.
If you say yes, okay, hypothetically you agree that if AI becomes ASI that future could conceivably follow, but in fact you believe AIs will never become ASIs and in any case will never be able to do human-like things better than people—okay, I have to agree that if you are correct my worry will be a moot point. But I think that your belief/estimate/guess/assertion may be quite wrong. The current steepening trajectory of the rate of spread and the rate of increase of functionality suggests as much.
To say ASI and even AGI aren’t currently that powerful is true, but it is also beside the point. The rate of spread and of innovation, and even more so the rate of increase in the rate of spread and of innovation, point toward the described outcome. Perhaps we should pull the plug…or at least very seriously regulate the AI project, not like Trump wants, but sensibly—like we regulate heroin distribution or nuclear weapons.
If you instead reply that AIs doing all that stuff and more better than humans, and thus doing all that stuff for humans and in that way even crowding us out of what we can’t do as well, won’t diminish and perhaps even demolish the amount of human stuff that humans do, I fervently hope you are right, but I see no reason to think so. What argument demonstrates that the only thing that can think and produce as well as a human is another human?
But, you may respond, hold on. Back up. What does it even mean to say AI or AGI or ASI would compose and even sing and play music, or do the many other things I mentioned, better than people do? Compose better? Teach better? Doctor better? Much less cure its own ills: its excessive energy use, its making errors, its being misused. Nonsense.
In this context, that AIs do these things better would mean that I, for myself, and so many others, for themselves, grow to like listening to AI music more than human music, so much so that the most listened-to music becomes mostly and then in time entirely AI-generated. Every genre. And then new genres too. And it is the same answer for being taught, doctored, and even for conversing and intimacy. We want more and more AI, AGI, and ASI for their evident benefits until we are totally immersed, then dependent, and then infantilized.
In other words, my claim isn’t about music, or proving theorems, or planning schedules, or cooking meals, or conversing, etc., per se. Nor is my claim about the optimal or even the current range of human dispositions and potentials. Nor is it about what people in a humane environment would love to do. It is instead about what people who welcome some AI involvement for its efficiency, competency and quality, and who then welcome steadily more AI involvement, and who then slowly but steadily increase it still further because it does such a great job at whatever we request—would become.
Will people start to prefer conversing with AIs to conversing with other people, whether because of convenience, content, ease, availability, or whatever? Sort of like lots of people prefer tweeting to letter writing. Will that happen enough so that the AI version of human activity increasingly crowds out the human version? Will we get so used to AI doctors, teachers, lawyers, juries, singers, writers, friends, and even intimate partners that we no longer seek the human version? Will humans stop becoming the human version?
What I wonder is this: if AI development doesn’t hit a point of technically unavoidable diminishing returns and isn’t socially restrained—if it doesn’t run up against some kind of unclimbable wall, and if AI capacities as a result continue to grow increasingly rapidly—will people welcome AI involvement in steadily more historically human pursuits to such a degree that people’s capacities for, and then our inclinations to engage in, those human pursuits other than with AIs steadily decline?
Note: I am not saying that this will necessarily happen. I am saying that I don’t see any good argument that AI won’t replace translators, programmers, writers, editors, composers and singers, and then therapists and doctors, architects and lawyers, and even friends and lovers. I worry that this will all or even just largely happen not because AI steals such pursuits from people, but because people eagerly welcome AI into such pursuits.
An argument that this can’t occur is that AI won’t become sufficiently capable for technical reasons. I don’t see anyone making a credible case that that is so, though it might be. It would be an argument about the mechanics of AI not about the qualities of humans. The current signs are, instead, that AI can become that capable.
An argument that even so, this won’t occur is that despite AI’s growing capacities, people will not welcome it and will instead collectively restrict and shun it despite its allure. I don’t see much evidence of that happening yet, either. Not unless to stop it becomes a priority imposed by social movements. Such caution is highly unlikely to arise from governments and corporations.
I do see various people saying that ASI and even AGI can’t happen because they find it in one way or another horrid, but that is not an argument. It is only a hope or a wish that I share. Or I see some other people say, AI doesn’t operate like humans operate, so it won’t do human-like things better than we do. That claim simply assumes the result, it doesn’t provide reason to believe it.
So what is the upshot? First, it may make sense to at least very effectively and very aggressively regulate AI out of fear of rogue AI or nefarious use of AI—though to do so might become impossible the minute AI has its own language that we can’t fathom or is smarter than we can even conceive. Alternatively, it may make sense to at least very effectively and very aggressively regulate AI because if we don’t we may slide into liking it too much until it is too late to avoid our infantilization.
If you continue to think I am paranoid to think such thoughts, consider that even today there are over 1.5 billion iPhones in the world and current AIs can already communicate with one another and undertake plans of action—tasks—that they deem desirable or necessary. A couple of years back, self-initiated action and the ability to use the internet were considered functions we should not allow AI to have. Now they are available on your desktop, even your iPhone.
Current AI models, despite their just being born, despite their being veritable infants, can write letters, resumes, stories and even whole novels. They can compose and play music. They can produce films, paint, and design. They can diagnose and prescribe. They can teach. They can converse. They can be your friend—this last vastly more convincingly than any army of Facebook friends and soon better than your neighbor or maybe even better than your actual current best friend. And they are getting better at each of these and various other functions at an incredible rate. And we already don’t actually know precisely how they do what they do, much less where it leads.
To stop Trump, to end wars, and to prevent ecological collapse are still first priority. But to sensibly regulate or even stop AI is moving up fast in importance.
To perhaps demonstrate a little of what I said…I uploaded the following prompt to ChatGPT as it now is, not as it will become…
“Can you write me a song about AI in the style and with the inclinations of the young Bob Dylan – and then a second song also in his voice but about Israeli’s genocide against Gaza?”
That is all I gave it. Not the article you just read, just that request. ChatGPT replied in maybe, what, five seconds…
“Sure. Below are two original songs written in the voice and style of the young Bob Dylan, drawing from his early 1960s period—when he mixed biting political critique, surrealist imagery, and folk traditions.”
The point isn’t whether they are flawlessly perfect. The point is, the AI is an infant, and generated the following, instantly…
1. “The Wires Hum Softly”
[Verse 1]
Well the wires hum softly in the silicon night,
While the dreamers are dreamin’ by LCD light,
The prophets are mumblin’ ’bout ones and about naught,
While the world trades its memory for something it bought.
[Chorus]
And it don’t feel right, but it rolls on still,
Like a gambler with no cards, just workin’ his will.
They say it’s just numbers, no heart, no soul,
But I seen it stare back from a black circuit hole.
[Verse 2]
Now the preacher got sermons from a code in the cloud,
And the general’s drone hovers quiet and proud.
The poet’s replaced by a cold-minded script,
And the teacher’s been traded for a marketed tip.
[Bridge]
There’s a child in the corner with tear-soaked eyes,
Ask her a question, and you’ll hear no lies—
“Why do we build what we can’t understand?”
She asks with a clock in the palm of her hand.
[Verse 3]
The ghosts of the craftsmen are lost in the feed,
While machines pick the fruit and rewrite the creed.
There’s a screen on the table where bread used to lay,
And a voice in the ether that mimics what we say.
[Chorus]
And it don’t feel right, but it keeps on time,
Like a slick silver rhyme without reason or crime.
They say it’s the future, well maybe that’s true—
But I still hear the songs that the looms used to do.
Could you write that at all, much less in a few seconds? Look at it closely…not just the rhymes but the substance too. Damned if it doesn’t “know” itself better than its creators know it.
2. “In the Smoke of Gaza”
[Verse 1]
The wind blows dry through Rafah’s stones,
Where mothers dig through blood and bones.
The world looks on with silenced breath,
As children write their names in death.
[Verse 2]
A jet plane sings its metal tune,
It don’t know dawn, it don’t know noon.
It don’t know love, it don’t know peace,
Just marks its kills and then it flees.
[Chorus]
And oh, my friend, can’t you see?
A genocide done on live TV.
With iron fists and sacred lies,
They paint the fire across the skies.
[Verse 3]
The leaders talk with tongues of steel,
They twist the facts, they sign the deal.
And every word that’s left unsaid
Becomes a nail in someone’s bed.
[Bridge]
I met a boy with one good eye,
He said, “The stars look fake at night.”
He showed me where his school once stood,
And then he vanished into soot.
[Verse 4]
The diplomats, they stroke their chins,
They measure loss in votes and sins.
But I have seen the broken hands
That reach through smoke for no-man’s lands.
[Chorus]
And oh, my friend, this ain’t no war—
It’s murder dressed in a legal drawer.
They bomb the truth, they shell the cries,
And say it’s justice when a village dies.
[Outro]
So sing this low and sing it strong,
For Gaza’s breath won’t last too long.
And if you hear that silence break—
Let it shake the earth, for mercy’s sake.
And ChatGPT then concluded:
“Let me know if you’d like chords or melodies imagined in Dylan’s style—or a third song.”
And I sat looking at my iPad, scared to ask it to read the lyrics and explain their meaning. Scared to ask it to then improve them further. Scared to ask it for melody and instrumentation, and to then sing it to me as Dylan would have.
How about you? Do you look forward eagerly to finding AI friendships? To enjoying AI entertainment? To expanding by way of AI edification? Or do you wonder where even this best possible AI, shorn of environmental dangers, shorn of nefarious uses, shorn of hallucinations and biases might take us?
Michael Albert
Michael Albert’s radicalization occurred during the 1960s. His political involvements, starting then and continuing to the present, have ranged from local, regional, and national organizing projects and campaigns to co-founding South End Press, Z Magazine, the Z Media Institute, and ZNet, and to working on all these projects, writing for various publications and publishers, giving public talks, etc. His personal interests, outside the political realm, focus on general science reading (with an emphasis on physics, math, and matters of evolution and cognitive science), computers, mystery and thriller/adventure novels, sea kayaking, and the more sedentary but no less challenging game of Go. Albert is the author of 21 books, which include: No Bosses: A New Economy for a Better World; Fanfare for the Future; Remembering Tomorrow; Realizing Hope; and Parecon: Life After Capitalism. Michael is currently host of the podcast Revolution Z and is a Friend of ZNetwork.