Why Is Big Tech Using the Energy of the Past to Power the Future?
As these companies invest billions in technology for AI, they must re-up investments in renewables to power our future and protect our communities.

An abstract image shows a data center.
(Photo: Yuichiro Chino/Getty Images)
Sep 06, 2025
OtherWords
AI is everywhere. But its powerful computing comes with a big cost to our planet, our neighborhoods, and our wallets.
AI servers are so power hungry that utilities are keeping coal-fired power plants slated for closure online just to meet their needs. And in the South alone, there are plans for 20 gigawatts of new natural-gas power plants over the next 15 years—enough to power millions of homes—to feed AI’s energy appetite.
Multibillion-dollar companies like Microsoft, Google, Amazon, and Meta that previously committed to 100% renewable energy are going back to the Jurassic Age, using fossil fuels like coal and natural gas to meet their insatiable energy needs. Even nuclear power plants are being reactivated to meet the needs of power-hungry servers.
At a time when we need all corporations to reduce their climate footprint, carbon emissions from major tech companies skyrocketed in 2023 to 150% of their average 2020 levels.
AI data centers also produce massive noise pollution and use huge amounts of water. Residents near data centers report that the sound keeps them awake at night and their taps are running dry.
Many of us live in communities that either have or will have a data center, and we’re already feeling the effects. Many of these plants further burden communities already struggling with a lack of economic investment, access to basic resources, and exposure to high levels of pollution.
To add insult to injury, amid stagnant wages and rising costs for food, housing, utilities, and consumer goods, AI’s demand for power is also raising electric rates for customers nationwide. To meet that soaring demand, utilities need to build new infrastructure, a cost that is being passed on to all customers.
A recent Carnegie Mellon study found that AI data centers could increase electric rates by 25% in Northern Virginia by 2030. And NPR recently reported that AI data centers were a key driver in electric rates increasing twice as fast as the cost of living nationwide—at a time when 1 in 6 households are struggling to pay their energy bills.
All of these impacts are only projected to grow. AI already consumes enough electricity to power 7 million American homes. By 2028, that could jump to the amount of power needed for 22% of all US households.
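A rough back-of-envelope sketch of what those household equivalences imply in generating capacity. The per-household consumption (about 10,500 kWh per year) and the total household count (about 131 million) are our assumptions for illustration, not figures from the column:

```python
# Back-of-envelope check on the household-equivalence figures above.
# Assumed inputs (not from the article): an EIA-style average US household
# electricity use of ~10,500 kWh/year and ~131 million US households.

KWH_PER_HOUSEHOLD_PER_YEAR = 10_500   # assumed average annual use per home
US_HOUSEHOLDS = 131_000_000           # assumed total US households

def households_to_gw(households: int) -> float:
    """Convert a count of homes to continuous average power in gigawatts."""
    kwh_per_year = households * KWH_PER_HOUSEHOLD_PER_YEAR
    watts = kwh_per_year * 1000 / (365 * 24)   # kWh/year -> average watts
    return watts / 1e9

today_gw = households_to_gw(7_000_000)                   # "7 million homes" today
future_gw = households_to_gw(int(US_HOUSEHOLDS * 0.22))  # "22% of households" by 2028

print(f"~{today_gw:.0f} GW today, ~{future_gw:.0f} GW by 2028")
```

Under these assumptions, 7 million homes works out to roughly 8 GW of continuous demand, and 22% of US households to roughly 35 GW, a more than fourfold jump.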
But it doesn’t have to be this way.
AI could be powered by nonpolluting renewable energy that works to reduce energy costs for us all. The leading AI companies, which have made significant climate pledges, must lead the way.
Microsoft, Google, Amazon, and Meta have all made climate pledges and promised the communities they serve that they would tackle climate change and pollution. And they have made significant investments in renewable energy in the past.
Those investments make sense, since renewables are the most affordable form of electricity. These companies have the know-how and the wealth to power AI with wind, solar, and batteries—which makes it all the more puzzling that they’re relying on fossil fuels to power the future.
If these corporate giants are to be good neighbors, they first need to be open and honest about the scope and scale of the problem and the solutions needed.
As these companies invest billions in technology for AI, they must re-up investments in renewables to power our future and protect our communities. They must ensure that communities have a real voice in how and where AI data centers are built—and that our communities aren’t sacrificed in the name of profits.
This column was distributed by OtherWords.
Dan Howells is the climate campaigns director at Green America.
Todd Larsen is Green America’s executive co-director.
OPINION
Asad Baig
IT’S earnings season. Once again, the markets are moving to the rhythm set by the ‘Magnificent Seven’ dominating the Nasdaq. Their numbers are in, and the mood on Wall Street is electric. Much of that excitement, let’s be honest, is about just one thing: AI.
What started as a buzzword a few years ago has turned into a full-blown economic engine. Investors are no longer buying into the hype alone, but into the results. Nvidia has posted a staggering year-over-year revenue increase of nearly 70 per cent, underscoring the explosive demand for its AI chips. Meta is investing heavily in AI for its core ad functionality, boosting engagement, margins and investor confidence through more efficient monetisation. Microsoft, Amazon and Google are all re-architecting their business models around AI, and the market is rewarding them with soaring valuations.
The logic is simple: the future has arrived early, and investors have already priced in the gains, undermining the ‘bubble theory’, which suggests that AI-based company valuations are inflated without real substance and could burst like a figurative bubble, much like the dotcom crash of the early 2000s.
But the earnings of the ‘Mag 7’ tell another story. They signal that AI has moved beyond being the next big thing, to the thing. And we are only beginning to tap its full potential.
With the US gradually easing chip export restrictions to China, a new phase of AI acceleration is taking shape. Automation is giving way to autonomy, and cars, drones and infrastructure systems are beginning to operate as intelligent agents, capable of learning, adapting and coordinating with other machines in real time.
The next AI wave will reinvent entire sectors. Generative design in manufacturing. AI-discovered drugs. Language models embedded in judicial, health and financial systems. Smart cities built not around traffic lights but predictive analytics. If this feels like science fiction, you haven’t been paying attention. We are moving fast, past the age of data, deep into the age of autonomous decisions.
Now here is the uncomfortable part: are we as a country ready for any of this? The answer is complicated.
On one hand, the newly released National AI Policy signals intent: it outlines large-scale commitments, from training a million professionals in AI and related technologies to integrating AI into key areas of governance, healthcare, education and agriculture. The policy envisions the use of AI to streamline civic services, enhance public sector efficiency and enable data-driven decision-making at scale. It also proposes the establishment of oversight bodies to ensure ethical deployment, data privacy and algorithmic accountability, an attempt to build a governance framework around a rapidly evolving technology.
In essence, the document offers a blueprint for a future where AI moves beyond being an abstract innovation reserved for elites or those who can afford it, and becomes a foundational part of national infrastructure and a key driver of state capacity.
On the other hand, several critical gaps risk undermining the policy’s promise. First, while the vision is bold, the execution framework is vague. Ambitious targets, like training a million people or deploying national-scale civic AI projects, lack operational detail, funding clarity and realistic timelines. Without institutional capacity-building, these goals may remain rhetorical.
Second, the policy assumes that ministries, boards and provincial departments will be able to digitise, standardise and share data rapidly. However, it does not fully address existing challenges related to fragmented systems, limited interoperability and bureaucratic hurdles that may impede effective implementation.
Third, it doesn’t take into account geopolitical constraints. With ongoing chip export controls and rising global competition over compute, Pakistan’s lack of sovereign AI infrastructure, whether in silicon, data or foundational models, poses a major strategic vulnerability. Without plans to build resilience, the country risks dependence without capability.
And perhaps the most important question is this: the policy repeatedly invokes ethics and responsible use, yet it does not clarify, in concrete, actionable terms, how AI systems will safeguard fundamental rights such as privacy, freedom of expression and protection from discrimination. In legally ambiguous digital spaces, where explicit safeguards against algorithmic bias, data misuse and non-transparent decision-making remain absent or vague, can such governance truly protect the most marginalised?
And if not, should it not be anchored much more firmly and explicitly in constitutional and human rights principles? Food for thought.
The writer is the founder of Media Matters for Democracy.
Published in Dawn, September 5th, 2025