Sunday, June 08, 2025

Tech Sector Emissions, Energy Use Grow With Rise Of AI

Tech sector carbon emissions continued their rise in recent years, fueled by rapid advances in artificial intelligence (AI) and data infrastructure, according to Greening Digital Companies 2025.


The report, produced by the International Telecommunication Union (ITU) — the UN agency for digital technologies — and the World Benchmarking Alliance (WBA), tracks the greenhouse gas (GHG) emissions, energy use, and climate commitments of 200 leading digital companies as of 2023, the most recent year for which full data is available.

While the annual report calls on digital companies to address their growing environmental footprint, it also indicates encouraging progress. Worldwide, more companies had set emissions targets, sourced renewable energy and aligned with science-based frameworks.

“Advances in digital innovation — especially AI — are driving up energy consumption and global emissions,” said ITU Secretary-General Doreen Bogdan-Martin. “While more must be done to shrink the tech sector’s footprint, the latest Greening Digital Companies report shows that industry understands the challenge — and that continued progress depends on sustaining momentum together.”

Global AI expansion fuels energy demand 

According to the latest edition of the report, electricity consumption by data centers — which power AI development and deployment, among other uses — increased by 12 per cent each year from 2017 to 2023, four times faster than global electricity growth.
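To put the compounding in perspective, a 12 per cent annual increase sustained over the six years from 2017 to 2023 nearly doubles consumption. The 12 per cent figure is from the report; the arithmetic below is ours:

```python
# Compound growth: 12% per year over the six years from 2017 to 2023.
growth = 1.12 ** 6
print(f"Total growth factor 2017-2023: {growth:.2f}x")  # ~1.97x, nearly a doubling
```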

Four leading AI-focused companies alone saw their operational emissions increase by 150 per cent on average since 2020. This rise in direct emissions and emissions from purchased energy – known as Scope 1 and Scope 2 emissions respectively – underscores the urgent need to manage AI’s environmental impact.

In total, the greenhouse gas emissions reported by the 166 digital companies covered by the report accounted for 0.8 per cent of all global energy-related emissions in 2023.

The 164 digital companies that reported electricity consumption accounted for 2.1 per cent of global electricity use, at 581 terawatt-hours (TWh), with 10 companies responsible for half of this total.
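The report’s own figures let us back out the implied global total: if 581 TWh is 2.1 per cent of world electricity use, the world total is roughly 27,700 TWh. This is our arithmetic on the quoted numbers, not a figure stated in the report:

```python
sector_twh = 581         # electricity reported by the 164 companies (TWh)
sector_share = 0.021     # 2.1% of global electricity use
global_twh = sector_twh / sector_share
top10_twh = sector_twh / 2   # 10 companies account for half the sector total
print(f"Implied global electricity use: {global_twh:,.0f} TWh")  # ~27,667 TWh
print(f"Top-10 companies' combined use: ~{top10_twh:.0f} TWh")   # ~290 TWh
```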

“Digital companies have the tools and influence to lead the global climate transition, but progress must be measured not only by ambition, but by credible action,” said Lourdes O. Montenegro, Director of Research and Digitisation at WBA. “This report provides a clear signal to the international community: more companies are stepping up, but emissions and electricity use continue to rise.”

Progress amid rising challenges

Although emissions continued their rise, Greening Digital Companies 2025 highlights steps taken by many tech firms that suggest a strengthening of transparency and accountability.

Eight companies scored above 90 per cent in the report’s climate commitment assessment on data disclosure, targets and performance. This is up from just three in last year’s report.

For the first time, the report includes data on companies’ progress toward meeting climate targets and realizing stated net-zero ambitions. Almost half of the companies assessed had committed to achieving net-zero emissions, with 41 firms targeting 2050 and 51 aiming for earlier deadlines.

Other trends among the 200 digital companies featured in the report include:

  • Renewable energy adoption: 23 companies operated on 100 per cent renewable energy in 2023, up from 16 in 2022.
  • Dedicated climate reporting: 49 companies released standalone climate reports, signaling greater transparency.
  • Scope 3 consideration: The number of companies publishing targets on indirect emissions from supply chains and product use rose from 73 to 110, showing increasing awareness of industry impacts.

A call for bold, collaborative and immediate action

Highlighting how the tech sector can ensure long-term digital sustainability, the joint ITU-WBA report recommends that companies:

  • Strengthen data verification, target ambition and climate reporting, including by publishing climate transition action plans.
  • Disclose the full environmental footprint of their AI operations.
  • Foster cross-sector collaboration among tech firms, energy producers and environmental advocates, alongside industry initiatives to drive accelerated digital decarbonization.
  • Keep accelerating renewable energy adoption.

“The Greening Digital Companies report has become a vital tool in tracking the climate footprint of the tech sector,” said Cosmas Luckyson Zavazava, Director of ITU’s Telecommunication Development Bureau. “Despite the progress made, greenhouse gas emissions continue to rise, confirming that the need for digital companies to adopt science-aligned, transparent, and accountable climate strategies has never been greater. ITU’s work in monitoring the environmental impact of the sector is a crucial step towards achieving a sustainable digital transformation.”

ITU’s Telecommunication Development Bureau is working with regulators, statisticians, academics, and industry experts to define indicators that support national GHG monitoring and data-driven action through the Expert Group on Telecommunication/ICT Indicators.

As the COP30 UN climate conference approaches, ITU’s Green Digital Action aims to ensure that updated climate pledges and adaptation plans will fully reflect the complete impacts of digital technologies.



Eurasia Review

Eurasia Review is an independent Journal that provides a venue for analysts and experts to disseminate content on a wide-range of subjects that are often overlooked or under-represented by Western dominated media.

Ukraine War: Use Of AI Drones Signals A Dangerous New Era – Analysis

In a quiet but potentially seismic shift in modern warfare, Ukraine has used artificial intelligence-powered drones to destroy several Russian bombers deep behind enemy lines. These drones, capable of navigating without GPS and identifying targets with minimal human intervention, reportedly struck high-value assets that once symbolised Russian air superiority.


For a nation under relentless attack from a nuclear-armed power, this represents more than just a tactical victory. It is a preview of a battlefield transformed by artificial intelligence—where machines, rather than humans, may increasingly make life-or-death decisions.

According to Ukrainian defence sources and military analysts cited by The Economist and Forbes, the AI-enabled drones employed in these operations were developed by local startups in cooperation with the Ukrainian military, designed to operate autonomously in GPS-jammed zones (Forbes, May 2024).

A War Driven by Necessity and Innovation

In Kherson and other contested regions, Russian forces have intensified attacks using their drones, often targeting civilian infrastructure and non-combatants. According to Ukraine’s Ministry of Defence, more than 150 civilians have been killed by drone strikes since late 2023, with hundreds more wounded. A UN-appointed Independent International Commission of Inquiry on Ukraine concluded that several of these attacks constitute crimes against humanity (United Nations Human Rights Council, March 2024).

Ukraine’s turn to AI-enhanced weapons, therefore, is not just a matter of military modernisation—it is an act of survival. In an increasingly digitised theatre of war, these systems offer speed, reach, and lethality with fewer human risks.

But that advantage comes with a warning: Are we prepared for a world where autonomous systems can take a life, and no human bears responsibility?


Enter: Lethal Autonomous Weapons Systems

The systems Ukraine is believed to be using fall into a category called Lethal Autonomous Weapons Systems (LAWS), which are defined by their capacity to select and engage targets without direct human input. These systems are no longer hypothetical. In Libya, a UN report documented the first known case of an autonomous drone attacking human targets without command input, possibly executed by a Turkish Kargu-2 drone in 2020 (UN Security Council Panel of Experts on Libya, March 2021).

In that case, no nation accepted responsibility. No military authority confirmed the strike. The chain of command was unclear. A weapon, operating on its own logic, decided to kill.

That’s the existential problem with LAWS. As Mary Wareham of Human Rights Watch explains, these machines operate on algorithms trained with human bias. “People with disabilities are particularly vulnerable—wheelchairs or unusual gait patterns can be misread as threats. Facial recognition systems consistently misidentify darker skin tones,” she said in a 2024 briefing (Human Rights Watch).

[Infographic: countries developing autonomous weapons (e.g., the U.S., China, Russia, Israel, South Korea); countries calling for a ban (e.g., Chile, Austria, New Zealand, Mexico, South Africa); and a timeline from 2013, when the Stop Killer Robots campaign launched, to 2025, showing milestones in AI weapons development, UN negotiations, and notable deployments (e.g., Libya 2020, Ukraine 2024–2025).]

A Familiar Justification, A New Frontier

The rhetoric used to justify the deployment of AI in war is hauntingly familiar. In August 1945, the United States dropped two atomic bombs on Japan, with the stated goal of ending World War II and saving lives. The devastation was catastrophic—over 200,000 dead, many of them civilians. In the decades that followed, treaties and norms emerged to contain nuclear arms. But those came after the bombs fell.

With LAWS, the weapons are already in use, and regulation is struggling to keep up.

Unlike nuclear weapons, which require vast resources and rare materials, autonomous drones are small, cheap, and increasingly accessible. Open-source software, off-the-shelf components, and machine learning models are all readily available. The proliferation risk is high, not just to states but also to armed non-state actors, private military contractors, and authoritarian regimes.

The UN’s Long Struggle for Regulation

In 2014, diplomats convened at the United Nations Office at Geneva to begin formal discussions under the Convention on Certain Conventional Weapons (CCW) about the legal and ethical implications of LAWS. Despite growing concern, progress has been halting. The CCW requires consensus for any resolution, effectively giving a veto to powerful countries that are heavily invested in AI weaponry, including the United States, Russia, China, and Israel (UN Geneva, CCW Meetings Archive).

UN Secretary-General António Guterres has been unequivocal in his condemnation of such weapons. In a 2018 speech, he called LAWS “politically unacceptable and morally repugnant” (United Nations). In May 2025, during informal consultations at UN Headquarters, Guterres again urged member states to agree on a legally binding framework that bans or strictly regulates autonomous weapons by 2026 (UNODA, May 2025 Briefing).

Yet after more than a decade of negotiations, there is still no international agreement, not even on a definition of what qualifies as an autonomous weapon.

Civil Society Fills the Void

In the vacuum of international law, advocacy groups have taken the lead. The Stop Killer Robots campaign—formed in 2013—is a coalition of over 180 civil society organisations, including Amnesty International, Human Rights Watch, Soka Gakkai International and dozens of regional disarmament networks. They argue for:

  • A legally binding international treaty banning fully autonomous weapons.
  • Mandates for meaningful human control over all lethal targeting.
  • Clear accountability for developers, commanders, and states (Stop Killer Robots).

Nicole Van Rooijen, the campaign’s executive director, says political will is the missing ingredient. “We’re not yet negotiating a formal treaty,” she told reporters in May 2025, “but recent discussions have shown unprecedented momentum. We believe that if political courage matches technical urgency, regulation is still within reach.”

Over 30 countries, including Austria, New Zealand, Mexico, and Chile, now support some form of legal restriction or outright ban. Surveys across the European Union and Latin America show overwhelming public opposition to the idea of machines deciding who dies in war (Ipsos Global Advisor, 2023).

Izumi Nakamitsu, Under-Secretary-General and High Representative for Disarmament Affairs, added her voice at the May consultations: “When it comes to war, someone has to be held accountable. A machine cannot stand trial in The Hague.”

Ukraine as Catalyst—and Caution

Ukraine’s battlefield innovation is in many ways admirable. Faced with an existential threat, the country has harnessed cutting-edge tools not out of ambition, but out of necessity for survival. Yet it also sets a precedent. What begins as a tool for defending democracy could become, in the wrong hands, a weapon of unchecked terror.

What happens when an authoritarian regime turns autonomous drones inward, against protesters, political opponents, or minorities? What happens when an AI misidentifies a school bus or hospital as a hostile target?

Those are not distant hypotheticals. The technology is already capable. The political incentives to misuse it are real.

The Choice Before Us

We stand at a crossroads. The choice is not between rejecting all military AI and leaving nations defenceless. It is between maintaining meaningful human control over lethal force, or surrendering that control to lines of code.

Ukraine’s experience demonstrates both the potential and the peril of these systems. Just as Hiroshima and Nagasaki forced the world to confront the consequences of nuclear warfare, the rise of autonomous weapons must compel us to draw red lines now—before they are crossed at scale.

Because when machines kill and no one is responsible, the rules of war do not just bend. They break.



Ramesh Jaura

Ramesh Jaura is a journalist, author, publicist, moderator, and public speaker. A journalist with more than 60 years of experience, he was also the founder-editor of IDN-InDepthNews.



For the past decade, scientists have been trying to get to the bottom of what seemed like a major inconsistency in the universe.  


The universe expands over time, but how fast it’s expanding seemed to differ depending on whether you looked early in the universe’s history or the present day. If true, this would have presented a major problem to the gold-standard model that represents our best understanding of the universe.  

But thanks to the new James Webb Space Telescope, scientists from the University of Chicago have been able to take new and better data—suggesting there may be no conflict after all.  

“This new evidence is suggesting that our Standard Model of the universe is holding up,” said UChicago Prof. Wendy Freedman, a leading figure in the debate over this rate of expansion, known as the Hubble Constant.  

“It doesn’t mean we won’t find things in the future that are inconsistent with the model, but at the moment the Hubble Constant doesn’t seem to be it,” she said. 

The new results are published in The Astrophysical Journal.  


Space, stars and supernovae 

There are currently two major approaches to calculating how fast our universe is expanding.  

The first approach is to measure the remnant light left over from the Big Bang, which is still traveling across the universe. This radiation, known as the cosmic microwave background, informs astronomers about what the conditions were like at early times in the universe.  

Freedman, the John and Marion Sullivan University Professor in Astronomy and Astrophysics, specializes in a second approach, which is to measure how fast the universe is expanding right now, in our local astronomical neighborhood. Paradoxically, this is much trickier than seeing back in time, because accurately measuring distances is very challenging.  

Over the last half century or so, scientists have come up with a number of ways to measure relatively nearby distances. One relies on catching the light of a particular class of star at its peak brightness, when it explodes as a supernova at the end of its life. If we know the maximum intrinsic brightness of these supernovae, measuring their apparent brightness allows us to calculate their distances. Additional observations tell us how fast the galaxy in which that supernova occurred is moving away from us. Freedman has also pioneered two other methods that use what we know about two other types of stars: red giant stars and carbon stars.
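The “standard candle” step can be written down compactly using the standard distance-modulus relation, which converts the gap between a star’s apparent magnitude m and its absolute (intrinsic) magnitude M into a distance. The sketch below uses the commonly quoted M ≈ −19.3 for Type Ia supernovae as an illustrative value, before any of the dust and calibration corrections described above:

```python
def distance_parsecs(m: float, M: float) -> float:
    """Distance from the distance modulus: d = 10^((m - M + 5) / 5) parsecs."""
    return 10 ** ((m - M + 5) / 5)

# Illustrative: a Type Ia supernova (M ~ -19.3) observed at apparent magnitude 14.0
d_pc = distance_parsecs(14.0, -19.3)
print(f"{d_pc / 1e6:.0f} Mpc")  # roughly 46 Mpc
```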

However, there are many corrections that must be applied to these measurements before a final distance can be declared. Scientists must first account for cosmic dust that dims the light between us and these distant stars in their host galaxies. They must also check and correct for luminosity differences that may arise over cosmic time. And finally subtle measurement uncertainties in the instrumentation used to make the measurements must be identified and corrected for.  

But with technological advances such as the launch of the much more powerful James Webb Space Telescope in 2021, scientists have been able to increasingly refine these measurements. 

“We’ve more than doubled our sample of galaxies used to calibrate the supernovae,” Freedman said. “The statistical improvement is significant. This considerably strengthens the result.”  

Freedman’s latest calculation, which incorporates data from both the Hubble Telescope and the James Webb Space Telescope, finds a value of 70.4 kilometers per second per megaparsec, plus or minus 3%.  

That brings her value into statistical agreement with recent measurements from the cosmic microwave background, which is 67.4, plus or minus 0.7%.  
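As a sketch of why these values now “statistically agree”: converting each percentage uncertainty to km/s/Mpc and combining them in quadrature, the two measurements differ by well under two standard deviations. This is our arithmetic on the quoted figures, not a calculation from the paper:

```python
import math

h0_local, err_local = 70.4, 70.4 * 0.03   # local measurement, +/-3%
h0_cmb, err_cmb = 67.4, 67.4 * 0.007      # CMB-based value, +/-0.7%

combined = math.hypot(err_local, err_cmb)  # combine uncertainties in quadrature
sigma = abs(h0_local - h0_cmb) / combined
print(f"Difference: {sigma:.1f} sigma")    # ~1.4 sigma: no significant tension
```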

Webb has four times the resolution of the Hubble Telescope, which allows it to identify individual stars previously detected in blurry groups. It’s also about 10 times as sensitive, which provides higher precision, and the ability to find even fainter objects of interest.   

“We’re really seeing how fantastic the James Webb Space Telescope is for accurately measuring distances to galaxies,” said co-author Taylor Hoyt of the Lawrence Berkeley Laboratory. “Using its infrared detectors, we can see through dust that has historically plagued accurate measurement of distances, and we can measure with much greater accuracy the brightnesses of stars,” added co-author Barry Madore, of the Carnegie Institution for Science. 

‘Extraordinarily difficult’ 

Freedman explained that astrophysicists have been trying to come up with a theory that would explain different rates of expansion as the universe ages.

“There have been well over 1,000 papers trying to attack this problem, and it’s just turned out to be extraordinarily difficult to do,” she said. 

Scientists are still trying to find cracks in the Standard Model that describes the universe, which could provide clues to the nature of two big outstanding mysteries—dark matter and dark energy. But the Hubble Constant increasingly seems not to be the place to look.  

Freedman and her team will be using the Webb Telescope next year to get measurements in a group of galaxies called the Coma cluster, which should provide more data from a different angle, she said. “These measurements will allow us to measure the Hubble constant directly, without the additional step of needing the supernovae.” 

“I am optimistic about resolving this in the next few years, as we boost the accuracy to make these measurements,” she said. 




Malaysia’s MCMC Is Becoming A Spy Agency – OpEd





The Malaysian Communications and Multimedia Commission (MCMC) was established in 1998 under the Mahathir administration as a regulator of the internet, digital communications, and the multimedia industry under the Communications and Multimedia Commission Act 1998 and the Communications and Multimedia Act 1998. At the time, the MCMC was modelled on the National Telecommunications Commission (NTC) of the Philippines.


The MCMC is additionally charged with overseeing the Postal Services Act 1991, and licensing of the certification authorities under the Digital Signature Act 1997.

According to The Edge Malaysia (June 6), the Malaysian government has ordered all telecommunications firms (ISPs) to hand over detailed call and internet logs for the first three months of 2025 to the MCMC for the country’s “Mobile Phone Data” project. A media statement issued by the MCMC on June 6 claimed that only metadata is being collected, for statistics and policy-making purposes. However, an SCMP report states that the MCMC has ordered the handover of data such as call records, IP call records, and location information, including latitude and longitude.

The collection of metadata is a tool used by the United States National Security Agency (NSA) after 9/11 for intelligence analysis. Such information is used to map communications networks, identify potential targets, and, most importantly, enable the agency to focus on communications content for more specific and precise analysis using other means. According to Edward Snowden, an NSA whistleblower, such metadata is used for tracking and surveillance purposes.

The NSA uses sophisticated AI software to analyse metadata. The MCMC itself has developed expertise in AI and invested large sums in development. Now the MCMC is in the business of storing metadata, which can be used to map out the behaviour of targeted people. Such data enables the building of online profiles of persons of interest, their social connections, and their behavioural history. The Guardian pointed out that the NSA can access peoples’ passwords, business data, and other commercially valuable information. 

What we don’t know is the software capability of the MCMC, which it is not disclosing. Nor is the MCMC disclosing what allied software and facilities it possesses to dive deeper into metadata. No real privacy assurances have been given, other than the claim that the metadata the MCMC is collecting contains no personal data. We just don’t know what ‘backdoor’ search loopholes the MCMC has access to.


Malaysians can only take the MCMC at its word. Bersatu communications and new media chief Na’im Brundage, in a press conference, questioned what privacy protection protocols would be employed and what form of oversight and auditing would be implemented to prevent data misuse. Na’im went further and said there should be public consultation on the matter. Malaysians only learnt of the MCMC’s actions through international media reports.

Data security is also a concern: in 2017, a massive breach of ISP data intended for the MCMC leaked MyKad numbers, addresses, phone numbers, and phone brands and models into the public domain. Information held by the MCMC has not been safe previously.

The MCMC’s record of adhering to the law is of concern

The extra powers the MCMC received through the 2024 Communications and Multimedia Act amendments have led to the arbitrary detaining of individuals, the conducting of interrogations, the publishing of images, and the imposing of arbitrary fines. These actions may strictly fall within the framework of the act, but they open up questions of fairness, transparency, and lack of oversight. There appears to be abuse of the ‘good faith’ provision, eroding the rights to privacy and a fair trial and fostering perceptions of ‘guilt by accusation’ in the wider community. There are further questions as to whether the MCMC is infringing upon the right to freedom of speech and basic human rights. Some of the penalties the MCMC imposes upon individuals appear out of proportion to their alleged crimes, as solely defined by the MCMC itself, without oversight.

The MCMC’s arbitrary blocking of websites without consultation with website owners is a contentious issue. Although the MCMC has authority to block websites under Sections 211 and 233 of the Communications and Multimedia Act, there is no oversight, and its procedures, processes, and decisions lack transparency. There is no clear path for the owners of blocked websites to challenge blocks, as no formal appeal process is mandated. The overzealousness of the MCMC can be seen in the fact that 25 per cent of worldwide requests for content takedowns on TikTok in 2024 came from the MCMC.

Perhaps one of the biggest concerns about the MCMC is its extra-judicial and out-of-jurisdiction operations. The MCMC has demanded that companies pull down certain materials on their platforms, under the threat that the whole platform would otherwise be blocked in Malaysia. Evidence indicates the MCMC has also impersonated overseas judicial authorities to intimidate individuals outside its jurisdiction, thus breaking the laws of other countries.

As the MCMC takes on this new role, similar to that of the NSA in the United States, people are being asked to trust the integrity of an agency that has shown recklessness in its current law enforcement activities. United States citizens are now experiencing intelligence agencies veering out of control, beyond their authority. The MCMC has never been given the legal mandate and authority to spy upon Malaysians.

Murray Hunter

Murray Hunter has been involved in Asia-Pacific business for the last 30 years as an entrepreneur, consultant, academic, and researcher. As an entrepreneur he was involved in numerous start-ups, developing a lot of patented technology, where one of his enterprises was listed in 1992 as the 5th fastest going company on the BRW/Price Waterhouse Fast100 list in Australia. Murray is now an associate professor at the University Malaysia Perlis, spending a lot of time consulting to Asian governments on community development and village biotechnology, both at the strategic level and “on the ground”. He is also a visiting professor at a number of universities and regular speaker at conferences and workshops in the region. Murray is the author of a number of books, numerous research and conceptual papers in referred journals, and commentator on the issues of entrepreneurship, development, and politics in a number of magazines and online news sites around the world. Murray takes a trans-disciplinary view of issues and events, trying to relate this to the enrichment and empowerment of people in the region.