Wednesday, November 12, 2025

Most workers are enthusiastic about AI, but are employers involving them in creating new workflows?

By Alessio Dell'Anna & Mert Can Yilmaz
Published on Euronews

Worldwide, Canadian businesses are at the forefront of involving their workers in designing new AI workflows. Which countries lead the way in Europe?

AI is coming to many a workplace, one way or another, and employees in Europe know that.

A new survey by human resources firm Adecco of 37,500 people in 30 countries — most of them European — shows that 55% expect employers to integrate AI agents into their workflows within a year.

Most businesses, however, aren't yet including employees in designing AI-integrated processes: the world average of people saying they are being consulted on ways of working with AI stands at 30%.

China and Europe lag behind at 23% and 29% respectively, compared to America's 37% and Canada's 50% employee involvement rate.

Focusing on European countries specifically, the rate in Germany, France and the Netherlands is 36% — higher than the global average — with Switzerland and Slovenia leading the continent (41%).

'Future-ready' workers are born and bred, too: Where are they?

The survey also shows that future-ready workers are much more likely to be involved in AI-related decisions in the workplace: among them, the rate jumps to 41%.

According to Adecco, future-ready employees are those who already proactively experiment with AI at work and are curious to learn new skills, even outside working hours.

The highest rate of future-ready workers in Europe was identified in Spain — third worldwide (7%) and level with India.

"They embrace new technologies and they have versatile skills," the HR firm said, adding that they are more likely to respond positively to statements such as "AI has made me productive".

Crucially, Adecco adds that these types of workers "aren’t simply found" but "are supported by their employers to become high-performing talent".

"They won't wait around if they don’t understand how or where they fit in as AI continues to quickly reshape the workforce," the company said.

On this note, being future-ready and growing professionally is becoming increasingly important to workers.

The percentage who say they will stay with their employer for the next 12 months under the condition of career progression is now at 33%, an 11-point increase from 2024.

How optimistic are workers about the impact of AI in the future?

Most interviewed workers don't seem to fear AI: some 76% believe AI could create more jobs, while only 23% anticipate AI-driven layoffs.

The most positive country not only in Europe, but worldwide, appears to be Germany: 93% say they believe AI could bring more job opportunities than it takes away.

In fact, 77% of workers globally say AI now allows them to carry out tasks they couldn't before.

This means having more time for tasks like strategic thinking and checking work quality and accuracy, as well as upskilling and being more creative.

Ultimately, three-quarters of workers say AI has already changed or will change their work, for example, modifying the activities carried out at work or changing the skills required for the role.

Adecco's recommendation to employers is to guide employees in developing "new, value-adding capabilities through targeted upskilling and career development".

"Position AI as a tool that complements, enhances and augments human efforts, and therefore empowers employees," the company said.


AI can deliver personalized learning at scale, study shows


A generative AI teaching assistant for personalized learning in medical education

A Dartmouth study finds that curated chatbots can provide effective 24/7 support and are more trusted by students, with caveats for future development

Dartmouth College
A new Dartmouth study finds that artificial intelligence has the potential to deliver educational support that meets the individual needs of large numbers of students. The researchers are the first to report that students may put more trust in AI platforms programmed to pull answers from only curated expert sources, rather than from massive data sets of general information.

Professor Thomas Thesen and co-author Soo Hwan Park tracked how 190 medical students in Dartmouth's Geisel School of Medicine used an AI teaching assistant called NeuroBot TA, which provides around-the-clock individualized support for students in Thesen's Neuroscience and Neurology course.

Thesen and Park built the platform using retrieval-augmented generation, or RAG, a technique that anchors the responses of large language models to specific information sources. This yields more accurate and relevant answers by reducing "hallucinations," AI-generated information that sounds convincing but is inaccurate or fabricated.

NeuroBot TA is designed to base its responses on select course materials such as textbooks, lecture slides, and clinical guidelines. Unlike general chatbots that have been known to invent facts, NeuroBot TA only answers questions it can support with the vetted materials.
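The grounding-plus-refusal pattern described here is straightforward to sketch. The following minimal Python example is illustrative only: NeuroBot TA's actual implementation is not public, and the documents, stopword list, similarity measure (a toy bag-of-words cosine in place of a real embedding model) and threshold are all invented for this example.

```python
import math
from collections import Counter

# A stand-in for a set of vetted course materials.
COURSE_DOCS = [
    "The cerebellum coordinates voluntary movement and balance.",
    "The hippocampus is essential for forming new episodic memories.",
]

STOPWORDS = {"the", "a", "is", "are", "and", "for", "of", "what", "does", "do"}

def bow(text: str) -> Counter:
    """Bag-of-words vector: a toy stand-in for a real embedding model."""
    words = text.lower().replace(".", "").replace("?", "").split()
    return Counter(w for w in words if w not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question: str, threshold: float = 0.2) -> str:
    """Answer only when a vetted document supports the question."""
    q = bow(question)
    best = max(COURSE_DOCS, key=lambda d: cosine(q, bow(d)))
    if cosine(q, bow(best)) < threshold:
        # Refuse rather than risk a hallucination.
        return "I can't support an answer from the course materials."
    # In a real RAG system, the retrieved passage would be passed to an
    # LLM as grounding context; here we simply return it.
    return f"Based on the course materials: {best}"
```

A question about course content ("What does the hippocampus do?") retrieves and cites the matching document, while an out-of-scope question ("What is the capital of France?") falls below the similarity threshold and is declined, mirroring the only-answer-what-the-vetted-materials-support behavior described above.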

Thesen and Park's study examined whether the RAG approach inspires more trust in student users, and how they might actually integrate such safeguarded systems into their learning. They report in npj Digital Medicine that students overwhelmingly trusted NeuroBot's curated knowledge more than generally available chatbots.

This pattern indicates that generative AI and RAG have the potential to provide tailored, interactive instruction outside of the traditional academic setting, says Thesen, the study's first author and an associate professor of medical education. Park, who received his MD from Dartmouth in 2025 and took Thesen's Neuroscience and Neurology course, is now a neurology resident at Stanford Health Care.

"This work represents a step toward precision education, meaning the tailoring of instruction to each learner's specific needs and context," Thesen says. "We're showing that AI can scale personalized learning, all while gaining students' trust. This has implications for future learning with AI, especially in low-resource settings."

"But first, we need to understand how students interact with and accept AI, and how they will react if guardrails are implemented," he says.

The study focused on students from two different class years who took the course in fall 2023 and fall 2024. Of the students in the study, 143 completed a final survey and provided comments about their experience using NeuroBot TA. More than a quarter of respondents highlighted the chatbot's trustworthiness and reliability, as well as its convenience and speed, especially when studying for exams. Nearly half thought the software was a useful study aid.

"Transparency builds trust," Thesen says. "Students appreciated knowing that answers were grounded in their actual course materials rather than drawn from training data based on the entire internet, where information quality and relevance varies."

The findings also highlight some of the challenges educators may face in implementing generative AI chatbots, Thesen and Park report. Surveys have shown that nearly half of medical students use chatbots at least weekly. In the Dartmouth study, students mainly used NeuroBot TA for fact-checking—which increased dramatically before exams—rather than for in-depth learning or long, engaging discussions.

Some users also were frustrated by the platform's limited scope, which might nudge students toward using larger but less quality-controlled chatbots. The study also revealed a unique vulnerability students face when interacting with AI—they often lack the expertise to identify hallucinations, Thesen says.

"We're now exploring hybrid approaches that could mark RAG-based answers as highly reliable while carefully expanding the breadth of information students can encounter on their learning journey," he says.

AI tools like NeuroBot TA could have the most significant impact in institutions where students face overcrowded classrooms and limited access to instructors by expanding access to individualized learning, Thesen says. 

That impact is being seen with AI Patient Actor, which was developed in Thesen's Neuroscience-Informed Learning and Education Lab in 2023. The platform helps medical students hone their communication and diagnostic skills by simulating conversations with patients and providing immediate feedback on students' performance. AI Patient Actor is now used in medical schools in and outside the United States and in Dartmouth medical courses, including the new On Doctoring curriculum.

An August study led by Thesen and Roshini Pinto-Powell, a professor of medicine and medical education and co-director of On Doctoring, found that AI Patient Actor provided first-year medical students with a safe space to test their skills, learn from their mistakes, and identify their strengths and weaknesses.

For NeuroBot TA, Thesen and Park plan to enhance the software with teaching techniques and cognitive-science principles known to produce deeper understanding and long-term retention, such as Socratic tutoring and spaced retrieval practice. 

Rather than providing answers, a chatbot would guide students to discover solutions through targeted questioning and dialogue and quiz them at regular intervals. Future systems also could choose one strategy or the other depending on context, such as preparing for an exam versus doing regular study, Thesen and Park suggest. 

"At a metacognitive level, students, like the rest of us, need to understand when they can use AI to just get a task done, and when and how they should use it for long-term learning," Thesen says.

"There is an illusion of mastery when we cognitively outsource all of our thinking and learning to AI, but we're not really learning," he says. "We need to develop new pedagogies that can positively leverage AI while still allowing learning to occur." 


When AI draws our words



A new study proposes visual composition criteria for evaluating Midjourney and DALL·E, beyond mere computer scores

University of Liège
Can we really trust artificial intelligence to illustrate our ideas? A team of scientists has examined the capabilities of Midjourney and DALL·E - two generative artificial intelligence (GAI) software programs - to produce images from simple sentences. The verdict is mixed: between aesthetic feats and beginner's mistakes, machines still have a long way to go.

Since the emergence of GAIs such as Midjourney and DALL·E, creating images from simple sentences has become a fascinating, and sometimes even disturbing, reality. Yet behind this technical feat lies an essential question: how do these machines translate words into visuals? This is what four researchers from the University of Liège, the University of Lorraine and EHESS sought to understand by conducting an interdisciplinary study combining semiotics, computer science and art history.

"Our approach is based on a series of rigorous tests," explains Maria Giulia Dondero, semiotician at the University of Liège. "We submitted very specific requests to these two AI systems and analysed the images produced according to criteria from the humanities, such as the arrangement of shapes, colours, gazes, the specific dynamism of the still image, the rhythm of its deployment, etc." The result? AI systems are capable of generating images that are supposedly aesthetic, but often struggle to follow even the simplest instructions.

The study reveals surprising difficulties, such as the fact that GAIs do not understand negation well ("a dog without a tail" shows a dog with a tail or a frame that hides it), complex spatial relationships, the correct positioning of elements, or the rendering of consistent gaze and distance relationships ("two women behind a door"). They sometimes translate simple actions such as "fighting" into dance scenes, and struggle to represent temporal sequences such as the beginnings and ends of gestures ("starting to eat" or "having finished eating"). "These GAIs allow us to reflect on our own way of seeing and representing the world," says Enzo D'Armenio, former researcher at ULiège, junior professor at the University of Lorraine and lead author of the article. "They reproduce visual stereotypes from their databases, often constructed from Western images, and reveal the limitations of translation between verbal and visual language."

Repeat, validate and analyse

The results obtained by the research team were validated by repetition - up to fifty generations per prompt - in order to establish their statistical robustness. The models also have distinct aesthetic signatures. Midjourney favours "aestheticised" renderings, with artefacts or textures that embellish the image, sometimes at the expense of strictly following the instructions, while DALL·E, more "neutral" in texture, offers greater compositional control but can vary more in the orientation or number of objects. The series of 50 tests on the prompt "three vertical white lines on a black background" illustrates these trends: relative consistency but frequent artefacts for Midjourney; variability in the number and orientation of lines for DALL·E.

The study points out that these AIs are statistical. "GAIs produce the most plausible result based on their training databases and the (sometimes editorial) settings of their designers," explains Adrien Deliège, a mathematician at ULiège. "These choices might standardise the gaze and convey or reorient stereotypes." A telling example: given the prompt "CEO giving a speech," DALL·E may generate mostly women, while other models produce almost exclusively middle-aged white men - a sign that the imprint of designers and datasets influences the machine's "vision" of the world.

Researchers emphasise that evaluating these technologies requires more than just measuring their statistical effectiveness; it also necessitates using tools from the humanities to understand their cultural and symbolic functioning. "AI tools are not simply automatic tools," concludes Enzo D'Armenio. "They translate our words according to their own logic, influenced by their databases and algorithms. The humanities have an essential role to play in understanding and evaluating them." And while these AI tools can already help us illustrate our ideas, they still have a long way to go before they can translate them perfectly.

 

AI world model to simulate the Earth System

The WOW project will develop a new AI approach to take climate modeling to a new level, from simulating global climate change down to better estimates of highly local impacts on ecosystems and societies



Karlsruher Institut für Technologie (KIT)
Climate change is already reshaping global weather patterns and ecosystems around the world. In the long term, its consequences could range from further substantial increases in the number of extreme weather events even to the collapse of entire ecosystems. “Numerical climate, weather, and environmental models already help us estimate these coupled changes across large spatial and temporal scales. However, modeling the entire Earth system at the required level of complexity has remained a formidable challenge for decades. AI has the potential to be a game-changing technology in modeling complex systems such as the Earth system,” says tenure-track Professor Peer Nowack from KIT’s Institute of Theoretical Informatics, who coordinates the project. “AI can emulate, i.e. mimic, the behavior of computationally expensive physics-based models. But the truly transformative step is that it can be trained or fine-tuned directly on observational data. In weather forecasting, this has led to AI models surpassing conventional models in key performance scores within just a few years. This technology offers opportunities for environmental modeling that go far beyond weather forecasting alone.”

 

In the WOW project, Nowack and seven other KIT researchers are now going one step further: They investigate how a number of these AI models for different processes in the Earth system can be coupled through their “latent spaces” – effectively abstractions of data within the AI models. This approach promises to be particularly effective to couple AI sub-models across scales of space and time. To this end, the team wants to pursue an AI approach from computer science, referred to as “world models”, but in this case applied to the actual physical world of the Earth system. 
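The idea of coupling sub-models through their latent spaces rather than through raw physical fields can be illustrated with a minimal sketch. Everything below is invented for illustration (the WOW project's actual models and dimensions are not specified in this announcement): untrained random matrices stand in for a pretrained global encoder, a pretrained local decoder, and the small adapter that would be learned to translate between the two latent spaces.

```python
# Illustrative sketch of latent-space coupling between two AI sub-models,
# e.g. a coarse global climate emulator and a fine-grained local impact model.
# All shapes and weights are invented placeholders, not the WOW architecture.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the relevant halves of two independently trained sub-models.
W_enc_global = rng.normal(size=(16, 64))  # global state (64 vars) -> global latent (16)
W_dec_local = rng.normal(size=(128, 8))   # local latent (8) -> local field (128 cells)

# The coupling piece: a small map between the two latent spaces. In practice
# this adapter would be trained while the pretrained sub-models stay fixed.
W_adapter = rng.normal(size=(8, 16))

def global_to_local(global_state: np.ndarray) -> np.ndarray:
    """Push a global-model state through latent coupling into a local field."""
    z_global = W_enc_global @ global_state  # encode into the global latent space
    z_local = W_adapter @ z_global          # translate between latent spaces
    return W_dec_local @ z_local            # decode with the local sub-model

field = global_to_local(rng.normal(size=64))
```

The appeal of this design, as the paragraph above suggests, is modularity: each sub-model keeps its own task-specific representation, and only the compact adapters between latent spaces need to be learned to chain global drivers to local impacts.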

 

How Can the AI World Model be Shaped?

With WOW, the team will thus develop new methods that can link different AI models, following a modular approach that promises both high task-specific performance and global consistency and efficiency. For the Earth system, these AI sub-models include emulators of global climate models, AI-based weather forecasting models, as well as models that simulate highly local phenomena such as wildfires or flooding events. The aim is to link those initially separately trained and task-oriented AI sub-models to form a consistently coupled end-to-end process chain from global changes to local impacts. To enable these improvements, new advances - especially in AI methodology and in the relevant AI sub-models - will be developed as part of the project. Consequently, the team is a multidisciplinary mix of computer scientists and environmental scientists.

 

With the world model, the researchers hope to better understand the often highly nonlinear interactions between the atmosphere, the water cycle, and the land surface. “We want to know how variations in one part of the Earth system affect others – for example, how droughts or changed cloud formation might feed back onto climate and vice versa,” says Professor Almut Arneth from the Institute of Meteorology and Climate Research – Atmospheric Environmental Research at KIT’s Campus Alpin in Garmisch-Partenkirchen, who is also involved in the research project. “This could help us reveal so far hidden connections in the climate system.”

 

Relevance to Other Fields of Knowledge

Even in the mid-term, the new AI world model might help to better assess risks, and to make well-founded decisions for climate adaptation and mitigation measures. “In the future, our methods might also be applied to other natural sciences where complex systems are modeled,” explains Dr. Markus Götz from KIT’s Scientific Computing Center. “If we learn to couple AI models efficiently, we can understand relations between them faster and more accurately. All told, this offers great opportunities for science.” The Carl Zeiss Foundation is funding the WOW project for five years with six million euros. 

 

More information

More information on the KIT Climate and Environment Center

More details on the KIT Information, Systems, Technologies Center

 

 

In close partnership with society, KIT develops solutions for urgent challenges – from climate change, energy transition and sustainable use of natural resources to artificial intelligence, sovereignty and an aging population. As The University in the Helmholtz Association, KIT unites scientific excellence from insight to application-driven research under one roof – and is thus in a unique position to drive this transformation. As a University of Excellence, KIT offers its more than 10,000 employees and 22,800 students outstanding opportunities to shape a sustainable and resilient future. KIT – Science for Impact.



Texas A&M researchers use AI to identify genetic ‘time capsule’ that distinguishes species



A new study, published in Nature, reveals a conserved genetic region that preserves species history through waves of gene flow and may be crucial to the development of some common X-linked diseases.

Texas A&M University

Image: Texas A&M University researchers Dr. Bill Murphy and Dr. Nicole Foley led an AI-driven genome study that identified a genetic region conserved across mammals — a “time capsule” that helps preserve species identity and may inform human reproductive health. (Credit: Texas A&M University)




In a groundbreaking study, scientists from the Texas A&M College of Veterinary Medicine and Biomedical Sciences (VMBS) have utilized cutting-edge artificial intelligence methods to identify a region of the X chromosome that has maintained the distinctiveness of mammal species for millions of years.

Their findings shed new light on how species maintain their genetic identity, even when hybridization acts to homogenize their gene pools.

“We know that species like big cats; wolves, dogs and coyotes; and even whales and dolphins have interbred to create hybrid offspring. What has been less clear has been why, despite all this interbreeding, these animals have remained separate species,” said Dr. Nicole Foley, a research assistant professor in the VMBS’ Department of Veterinary Integrative Biosciences and the study’s lead author.  

The mixing of DNA between species is common across the Tree of Life and often helps species survive as they explore new environments and encounter new pathogens or environmental conditions.

A major obstacle has been the lack of detailed genetic recombination maps, which are crucial for understanding how the shuffling of genes during reproduction, together with natural selection, influences the emergence of reproductive barriers in nature. This genetic swapping makes it more challenging for scientists to accurately map out species relationships, which are crucial for understanding the evolutionary history of animals.

Now, using AI-driven genome analysis, researchers can unlock this hidden blueprint of mammalian evolution.

A time capsule in the genome

A major discovery from these studies is the identification of a massive region on the X chromosome that has been shared across most mammalian species for more than 100 million years.

Dubbed the X-linked recombination desert (XLRD), this region spans nearly 30% of the X chromosome. It serves as a powerful reproductive barrier and plays a crucial role in preserving the true evolutionary relationships among species, even when widespread genetic exchange clouds the rest of the genome.

“Remarkably, the XLRD appears to be a recurrent and ancient feature in mammals, functioning almost like a genomic ‘time capsule’ that records deep evolutionary history,” Foley said.

“We were unable to see this before because we never had this diversity of recombination maps,” she said. “When we lined up all of the X chromosomes for those 22 species and we looked at the recombination map, it was pretty much the same map — it dipped in the exact same place, so we knew there was something functionally important going on in this part of the chromosome.”

“We had some evidence from previous studies based on a small handful of species that the XLRD exists, but we were very surprised to discover that this region was so conserved and so ancient,” said Dr. Bill Murphy, a distinguished professor in the VMBS and director of the Texas A&M Center for Comparative Genomics.

This discovery was especially exciting because the XLRD appears to play a key role in speciation — the process by which one species evolves into distinct new species through the development of reproductive barriers.

The XLRD’s reproductive role

The researchers also discovered that the XLRD region is notably enriched with genes related to male and female reproduction and sex chromosome silencing. This suggests that genetic switches relevant to X chromosome regulation in both sexes, which are embedded within and around the XLRD, may play a larger role in infertility as well as in human conditions like polycystic ovarian syndrome, an endocrine disorder linked to reproductive and metabolic issues.

"This is one of the more novel findings because it has been thought that reproductive barriers arise rapidly and from unique genetic sources across different groups of species. Our results suggest this is not the case," Murphy said. "For all these reasons, it looks like the XLRD is a key region associated with reproductive dysfunction in hybrids and reproductive isolation in nature."

These discoveries open new avenues for understanding problems — and finding solutions — related to human reproduction and fertility.

By Texas A&M University College of Veterinary Medicine and Biomedical Sciences

AI's insatiable appetite for cash, energy and data: Bubble ahead?




Issued on: 12/11/2025 - FRANCE24

Video, 44:31 min
Could the artificial intelligence boom already be running out of road? We examine the warning signs. To think that three short years ago, the commercial launch of ChatGPT took the world by storm. AI has since sparked a global race for cash, energy resources and data – all to feed the seemingly insatiable appetite of large language model computing systems.

With a few US companies dominating the AI race – and a US president who's all-in with billionaires – market watchers worry about investors tempted by the easy money of rising tech stocks at the expense of the entire rest of the economy. Is it a bubble? Is it about to burst?

And with what consequences? How should Europe and the rest of the world prepare? More broadly, is AI changing humanity and our world for better – or for worse?

Produced by François Picard, Rebecca Gnignati, Charles Wente, Ilayda Habip & Jean-Vincent Russo

OUR GUESTS
Rayna STAMBOLIYSKA, cybersecurity and digital diplomacy expert; writer
Tanya PERELMUTER, cofounder, Fondation Abeona
Simon McGARR, solicitor with McGarr Solicitors and director of Data Compliance Europe
Leïla MÖRCH, partner, Maresquier Partners



AI bubble about to pop as returns on investment fall short?
DW
11/10/2025

Billions have poured into AI, helping stock valuations soar. But the cracks are starting to show. Slowing adoption, surging costs and elusive profits are fueling warnings that the boom may be headed for a hard reset.

The artificial intelligence (AI) party is still in full swing, with tens of billions of dollars globally pouring into infrastructure, startups and the race for top talent.

Among the headline announcements this year: ChatGPT parent company OpenAI, SoftBank and Oracle pledged to invest $500 billion (€433 billion) in AI supercomputers; OpenAI and chip giant Nvidia announced a $100 billion fund to maintain the United States' dominance in advanced chips; and Chinese tech giants Alibaba and Tencent hiked investments to help speed up China's ambition to lead AI by 2030.

Since ChatGPT’s debut in November 2022, AI-related stocks have added an estimated $17.5 trillion in market value, according to Bloomberg Intelligence, driving around 75% of the S&P 500’s gains and propelling companies like Nvidia and Microsoft to record-breaking valuations.

Corporations are hesitant over AI adoption

But signs of a hangover are getting harder to ignore. AI usage by corporations is slipping, spending is tightening and the machine learning hype has massively outpaced the profits.

Many economists think these usage concerns, barely three years into AI going mainstream, undercut the prevailing narrative that AI would revolutionize how businesses operate by streamlining repetitive tasks and improving forecasting.

"The vast bet on AI infrastructure assumes surging usage, yet multiple US surveys show adoption has actually declined since the summer," Carl-Benedikt Frey, professor of AI & work at the UK's University of Oxford, told DW. "Unless new, durable use cases emerge quickly, something will give — and the bubble could burst."

The US Census Bureau, which surveys 1.2 million US companies every fortnight, found that AI-tool usage at firms with more than 250 employees dropped from nearly 14% in June to under 12% in August.



AI’s biggest challenge remains its tendency to hallucinate — generating plausible but false information. Other weaknesses are inconsistent reliability and the poor performance of autonomous agents, which complete tasks successfully only about a third of the time.

"Unlike an intern who learns on the job, today’s pretrained [AI] systems don’t improve through experience. We need continual learning and models that adapt to changing circumstances," said Frey.

Unsustainable capital burn


As the gap widens between sky-high expectations and commercial reality, investor enthusiasm for AI is starting to fade.

In the third quarter of the year, venture-capital deals with private AI firms dropped by 22% quarter on quarter to 1,295, although funding levels remained above $45 billion for the fourth consecutive quarter, market intelligence firm CB Insights wrote last month.

"What perturbs me is the scale of the money being invested compared to the amount of revenue flowing from AI," economist Stuart Mills, a senior fellow at the London School of Economics, told DW.

Microsoft has poured billions into ChatGPT owner OpenAI
Image: Mateusz Slodkowski IMAGO/SOPA Images

Market leader OpenAI, which is backed by Microsoft, generated $3.7 billion in revenue last year, versus total operating expenses of $8-9 billion. The company says it is on course to make $13 billion this year but is still expected to burn through $129 billion before 2029, news site The Information calculated in September.

Mills thinks the companies behind generative AI chatbots like Elon Musk's Grok and ChatGPT are "charging far less than they need to make a profit" and should raise subscription prices.

Few have quantified the AI bubble more starkly than Julien Garran, partner at UK-based research firm MacroStrategy Partnership. He argues that the sheer volume of capital flowing into AI — despite little evidence of sustainable returns — dwarfs previous speculative frenzies.

"We estimate a misallocation of capital equivalent to 65% of US GDP — four times bigger than the housing buildup before the 2008/9 financial crisis and 17 times bigger than the dot-com bust," Garran told DW.



Investors increasingly cautious

Recent earnings from Big Tech have sparked cautious optimism, but also fresh doubts about AI’s staying power. Data analytics and intelligence platform Palantir's Q3 revenue surged 63% year-over-year, but its stock price fell by up to 7% on the news. AMD and Meta also saw their strong AI-related earnings overshadowed by market concerns about sustainability.

That disconnect between soaring valuations and shaky fundamentals is exactly what worries Mills, who sees a widening gap between what AI promises and what it actually delivers to businesses.

"The data suggests that AI is not penetrating high enough up the value chain. Loads of people are using it, but it's not being used for tasks that directly contribute to value production," he told DW.

Nvidia's upcoming earnings on November 19 may prove a key test of whether the AI boom still has legs. In the second quarter, Nvidia's data center sales alone made up 88% of total revenue, which hit a record $46.7 billion. For Q3, the company has guided $54 billion, projecting 54% year-on-year growth, which would equate to a full-year total of more than $200 billion.


Nvidia founder and CEO Jensen Huang has turned the chipmaker into a nearly $5 trillion giant
Image: Jung Yeon-je/AFP


When will the bubble pop?

"With the exception of Nvidia, which is selling shovels in a gold rush, most generative AI companies are both wildly overvalued and wildly overhyped," Gary Marcus, Emeritus Professor of Psychology and Neural Science at New York University, told DW. "My guess is that it will all fall apart, possibly soon. The fundamentals, technical and economic, make no sense."

Garran, meanwhile, believes the era of rapid progress in large language models (LLMs) is drawing to a close, not because of technical limits, but because the economics no longer stack up.

"They [AI platforms] have already hit the wall," Garran said, adding that the cost of training new models is "skyrocketing, and the improvements aren’t much better."

Striking a more positive tone, Sarah Hoffman, director of AI Thought Leadership at the New York-based market intelligence firm AlphaSense, predicted a "market correction" in AI, rather than a "cataclysmic 'bubble bursting.'"

After an extended period of extraordinary hype, enterprise investment in AI will become far more discerning, Hoffman told DW in an emailed statement, with the focus "shifting from big promises to clear proof of impact."

"More companies will begin formally tracking AI ROI [return on investment] to ensure projects deliver measurable returns," she added.

Edited by: Uwe Hessler
Nik Martin is one of DW's team of business reporters.


AI’s energy usage is less than previously thought



Energy consumption in the U.S. shifts perception of the environmental risks of AI



University of Waterloo
Contrary to popular belief, new research finds that the use of artificial intelligence has a minimal effect on global greenhouse gas emissions and may actually benefit the environment and the economy.

For their study, researchers from the University of Waterloo and the Georgia Institute of Technology combined data on the U.S. economy with estimates of AI use across industries to determine the environmental fallout if AI use continues its current trajectory.

According to the U.S. Energy Information Administration, 83 per cent of the U.S. economy is powered by petroleum, coal and natural gas, all of which contribute to climate change when burned. The study authors found that while power usage from AI in the U.S. equalled the energy consumption of all of Iceland, the amounts were not noticeable on a global or national scale.

“It is important to note that the increase in energy use is not going to be uniform. It’s going to be felt more in the places where electricity is produced to power the data centres,” said Dr. Juan Moreno-Cruz, a professor in the Faculty of Environment at Waterloo and Canada Research Chair in Energy Transitions. “If you look at that energy from the local perspective, that's a big deal because some places could see double the amount of electricity output and emissions. But at a larger scale, AI’s use of energy won’t be noticeable.”

While this paper did not examine the effects on local economies where the data centres are located, the researchers found some encouraging results.

“For people who believe that the use of AI will be a major problem for the climate and think we should avoid it, we're offering a different perspective,” Moreno-Cruz said. “The effects on climate are not that significant, and we can use AI to develop green technologies or to improve existing ones.”

To reach their conclusions, environmental economists Moreno-Cruz and Dr. Anthony Harding examined different sectors of an economy, the jobs within those sectors, and what portion of them could be done by AI.

Moreno-Cruz and Harding plan to repeat the study for other countries to measure the impacts of AI adoption in other parts of the world.

The paper, Watts and Botts: The Energy Implications of AI Adoption, appears in Environmental Research Letters.