Wednesday, November 12, 2025

Most workers are enthusiastic about AI, but are employers involving them in creating new workflows?

By Alessio Dell'Anna & Mert Can Yilmaz
Published on Euronews

Worldwide, Canadian businesses are at the forefront of involving their workers in designing new AI workflows. Which countries lead the way in Europe?

AI is coming to many a workplace, one way or another, and employees in Europe know that.

A new survey by human resources firm Adecco of 37,500 people in 30 countries — most of them European — shows that 55% expect employers to integrate AI agents into their workflows within a year.

Most businesses, however, aren't yet including employees in designing AI-integrated processes: globally, only 30% of workers say they are being consulted on ways of working with AI.

China and Europe lag behind at 23% and 29% respectively, compared with a 37% employee involvement rate in the United States and 50% in Canada.

Focusing on European countries specifically, the rate in Germany, France and the Netherlands is 36% — higher than the global average — with Switzerland and Slovenia leading the continent (41%).

'Future-ready' workers are born and bred, too: Where are they?

The survey also shows that future-ready workers are much more likely to be involved in AI-related decisions in the workplace: the rate among them jumps to 41%.

According to Adecco, future-ready employees are those who already proactively experiment with AI at work and are curious to learn new skills, even outside working hours.

The highest rate of future-ready workers in Europe was identified in Spain — third worldwide (7%) and level with India.

"They embrace new technologies and they have versatile skills," the HR firm said, adding that they are more likely to respond positively to questions such as "AI has made me productive".

Crucially, Adecco adds that these types of workers "aren’t simply found" but "are supported by their employers to become high-performing talent".

"They won't wait around if they don’t understand how or where they fit in as AI continues to quickly reshape the workforce," the company said.

On this note, being future-ready and growing professionally is becoming increasingly important to workers.

The percentage who say they will stay with their employer for the next 12 months under the condition of career progression is now at 33%, an 11-point increase from 2024.

How optimistic are workers about the impact of AI in the future?

Most interviewed workers don't seem to fear AI: some 76% believe AI could create more jobs, while only 23% anticipate AI-driven layoffs.

The most positive country not only in Europe, but worldwide, appears to be Germany: 93% say they believe AI could bring more job opportunities than it takes away.

In fact, 77% of workers globally say AI now allows them to carry out tasks they couldn't before.

This means having more time to perform duties like strategic thinking and checking work quality and accuracy, as well as upskilling and being more creative.

Ultimately, three-quarters of workers say AI has already changed or will change their work, for example, modifying the activities carried out at work or changing the skills required for the role.

Adecco's recommendation to employers is to guide employees in developing "new, value-adding capabilities through targeted upskilling and career development".

"Position AI as a tool that complements, enhances and augments human efforts, and therefore empowers employees," the company said.


AI can deliver personalized learning at scale, study shows

A generative AI teaching assistant for personalized learning in medical education

A Dartmouth study finds that curated chatbots can be effective for 24/7 support and are more trusted by students, with caveats for future development

Dartmouth College

A new Dartmouth study finds that artificial intelligence has the potential to deliver educational support that meets the individual needs of large numbers of students. The researchers are the first to report that students may put more trust in AI platforms programmed to pull answers from only curated expert sources, rather than from massive data sets of general information.

Professor Thomas Thesen and co-author Soo Hwan Park tracked how 190 medical students in Dartmouth's Geisel School of Medicine used an AI teaching assistant called NeuroBot TA, which provides around-the-clock individualized support for students in Thesen's Neuroscience and Neurology course.

Thesen and Park built the platform using retrieval-augmented generation, or RAG, a technique that anchors the responses of large language models to specific information sources. This results in more accurate and relevant answers by reducing "hallucinations" — AI-generated information that sounds convincing but is inaccurate or invented.

NeuroBot TA is designed to base its responses on select course materials such as textbooks, lecture slides, and clinical guidelines. Unlike general chatbots that have been known to invent facts, NeuroBot TA only answers questions it can support with the vetted materials.
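The grounding behavior described above — retrieve supporting passages from a vetted corpus, and decline to answer when nothing supports the question — can be sketched in a few lines. This is a toy illustration, not NeuroBot TA's actual pipeline: the corpus, the keyword-overlap retriever, and the refusal message are all hypothetical stand-ins.

```python
# Toy sketch of the RAG grounding pattern: answer only from curated passages.
# All data and thresholds here are illustrative assumptions.

CORPUS = [
    "The cerebellum coordinates voluntary movement and balance.",
    "The hippocampus is essential for forming new episodic memories.",
    "Broca's area, in the left frontal lobe, supports speech production.",
]

def retrieve(question: str, corpus=CORPUS, min_overlap: int = 2):
    """Return passages sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    hits = []
    for passage in corpus:
        overlap = q_words & set(passage.lower().rstrip(".").split())
        if len(overlap) >= min_overlap:
            hits.append(passage)
    return hits

def answer(question: str) -> str:
    """Ground the answer in retrieved passages; refuse if nothing supports it."""
    passages = retrieve(question)
    if not passages:
        return "I can only answer from the course materials."
    # A real RAG system would pass `passages` to an LLM as context;
    # here we simply return the best-supported passage.
    return passages[0]

print(answer("What does the cerebellum do for movement and balance?"))
print(answer("Who won the World Cup?"))
```

A production system would replace the keyword retriever with embedding search and feed the retrieved passages to a language model, but the refusal step is what distinguishes a curated assistant from a general chatbot.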

Thesen and Park's study examined whether the RAG approach inspires more trust in student users, and how they might actually integrate such safeguarded systems into their learning. They report in npj Digital Medicine that students overwhelmingly trusted NeuroBot's curated knowledge more than generally available chatbots.

This pattern indicates that generative AI and RAG have the potential to provide tailored, interactive instruction outside of the traditional academic setting, says Thesen, the study's first author and an associate professor of medical education. Park, who received his MD from Dartmouth in 2025 and took Thesen's Neuroscience and Neurology course, is now a neurology resident at Stanford Health Care.

"This work represents a step toward precision education, meaning the tailoring of instruction to each learner's specific needs and context," Thesen says. "We're showing that AI can scale personalized learning, all while gaining students' trust. This has implications for future learning with AI, especially in low-resource settings."

"But first, we need to understand how students interact with and accept AI, and how they will react if guardrails are implemented," he says.

The study focused on students from two different class years who took the course in fall 2023 and fall 2024. Of the students in the study, 143 completed a final survey and provided comments about their experience using NeuroBot TA. More than a quarter of respondents highlighted the chatbot's trust and reliability, as well as its convenience and speed, especially when studying for exams. Nearly half thought the software was a useful study aide.

"Transparency builds trust," Thesen says. "Students appreciated knowing that answers were grounded in their actual course materials rather than drawn from training data based on the entire internet, where information quality and relevance varies."

The findings also highlight some of the challenges educators may face in implementing generative AI chatbots, Thesen and Park report. Surveys have shown that nearly half of medical students use chatbots at least weekly. In the Dartmouth study, students mainly used NeuroBot TA for fact-checking—which increased dramatically before exams—rather than for in-depth learning or long, engaging discussions.

Some users also were frustrated by the platform's limited scope, which might nudge students toward using larger but less quality-controlled chatbots. The study also revealed a unique vulnerability students face when interacting with AI—they often lack the expertise to identify hallucinations, Thesen says.

"We're now exploring hybrid approaches that could mark RAG-based answers as highly reliable while carefully expanding the breadth of information students can encounter on their learning journey," he says.

AI tools like NeuroBot TA could have the most significant impact in institutions where students face overcrowded classrooms and limited access to instructors by expanding access to individualized learning, Thesen says. 

That impact is being seen with AI Patient Actor, which was developed in Thesen's Neuroscience-Informed Learning and Education Lab in 2023. The platform helps medical students hone their communication and diagnostic skills by simulating conversations with patients and providing immediate feedback on students' performance. AI Patient Actor is now used in medical schools in and outside the United States and in Dartmouth medical courses, including the new On Doctoring curriculum.

An August study led by Thesen and Roshini Pinto-Powell, a professor of medicine and medical education and co-director of On Doctoring, found that AI Patient Actor provided first-year medical students with a safe space to test their skills, learn from their mistakes, and identify their strengths and weaknesses.

For NeuroBot TA, Thesen and Park plan to enhance the software with teaching techniques and cognitive-science principles known to produce deeper understanding and long-term retention, such as Socratic tutoring and spaced retrieval practice. 

Rather than providing answers, a chatbot would guide students to discover solutions through targeted questioning and dialogue and quiz them at regular intervals. Future systems also could choose one strategy or the other depending on context, such as preparing for an exam versus doing regular study, Thesen and Park suggest. 

"At a metacognitive level, students, like the rest of us, need to understand when they can use AI to just get a task done, and when and how they should use it for long-term learning," Thesen says.

"There is an illusion of mastery when we cognitively outsource all of our thinking and learning to AI, but we're not really learning," he says. "We need to develop new pedagogies that can positively leverage AI while still allowing learning to occur." 


When AI draws our words

A new study proposes visual composition criteria for evaluating Midjourney and DALL·E, beyond mere computer scores

University of Liège

Can we really trust artificial intelligence to illustrate our ideas? A team of scientists has examined the capabilities of Midjourney and DALL·E — two Generative Artificial Intelligence (GAI) software programs — to produce images from simple sentences. The verdict is mixed: between aesthetic feats and beginner's mistakes, machines still have a long way to go.

Since the emergence of GAIs such as Midjourney and DALL·E, creating images from simple sentences has become a fascinating, and sometimes even disturbing, reality. Yet behind this technical feat lies an essential question: how do these machines translate words into visuals? This is what four researchers from the University of Liège, the University of Lorraine and EHESS sought to understand by conducting an interdisciplinary study combining semiotics, computer science and art history.

"Our approach is based on a series of rigorous tests," explains Maria Giulia Dondero, semiotician at the University of Liège. "We submitted very specific requests to these two AI systems and analysed the images produced according to criteria from the humanities, such as the arrangement of shapes, colours, gazes, the specific dynamism of the still image, the rhythm of its deployment, etc." The result? The AI systems can generate images that are superficially aesthetic, but often struggle to follow even the simplest instructions.

The study reveals surprising difficulties, such as the fact that GAIs do not understand negation well ("a dog without a tail" shows a dog with a tail or a frame that hides it), complex spatial relationships, the correct positioning of elements, or the rendering of consistent gaze and distance relationships ("two women behind a door"). They sometimes translate simple actions such as "fighting" into dance scenes, and struggle to represent temporal sequences such as the beginnings and ends of gestures ("starting to eat" or "having finished eating"). "These GAIs allow us to reflect on our own way of seeing and representing the world," says Enzo D'Armenio, former researcher at ULiège, junior professor at the University of Lorraine and lead author of the article. "They reproduce visual stereotypes from their databases, often constructed from Western images, and reveal the limitations of translation between verbal and visual language."

Repeat, validate and analyse

The results obtained by the research team were validated by repetition — up to fifty generations per prompt — in order to establish their statistical robustness. The models also have distinct aesthetic signatures. Midjourney favours "aestheticised" renderings, with artefacts or textures that embellish the image, sometimes at the expense of strictly following the instructions, while DALL·E, which is more "neutral" in terms of texture, offers greater compositional control but can vary more in the orientation or number of objects. A series of 50 tests on the prompt "three vertical white lines on a black background" illustrates these trends: relative consistency but frequent artefacts for Midjourney; variability in the number and orientation of lines for DALL·E.
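The repeat-and-tally protocol described above can be sketched as follows. The `generate` function is a stand-in for an actual image model, and the feature labels and their distribution are illustrative assumptions; in the real study, features such as line count and orientation would be read off the produced images.

```python
# Sketch of repeated generation and tallying for statistical robustness.
# `generate` is a hypothetical stand-in for calling an image model.
import random
from collections import Counter

random.seed(0)

def generate(prompt: str) -> dict:
    """Stand-in for one image generation: returns coarse features of the output.
    A real evaluation would extract these by inspecting the generated image."""
    return {
        "n_lines": random.choice([2, 3, 3, 3, 4]),          # illustrative spread
        "orientation": random.choice(["vertical", "vertical", "diagonal"]),
    }

def evaluate(prompt: str, n_runs: int = 50) -> dict:
    """Repeat generation n_runs times and tally how often each feature value occurs."""
    tallies = {"n_lines": Counter(), "orientation": Counter()}
    for _ in range(n_runs):
        result = generate(prompt)
        for key, value in result.items():
            tallies[key][value] += 1
    return tallies

tallies = evaluate("three vertical white lines on a black background")
# Fraction of runs matching the prompt exactly on each criterion:
print(tallies["n_lines"][3] / 50, tallies["orientation"]["vertical"] / 50)
```

Tallying across many runs is what lets the authors speak of tendencies ("frequent artefacts", "variability in number and orientation") rather than anecdotes from single generations.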

The study points out that these AIs are statistical. "GAIs produce the most plausible result based on their training databases and the (sometimes editorial) settings of their designers," explains Adrien Deliège, a mathematician at ULiège. "These choices might standardise the gaze and convey or reorient stereotypes." A telling example: given the prompt "CEO giving a speech," DALL·E may generate mostly women, while other models produce almost exclusively middle-aged white men — a sign that the imprint of designers and datasets influences the machine's "vision" of the world.

Researchers emphasise that evaluating these technologies requires more than just measuring their statistical effectiveness; it also necessitates using tools from the humanities to understand their cultural and symbolic functioning. "AI tools are not simply automatic tools," concludes Enzo D'Armenio. "They translate our words according to their own logic, influenced by their databases and algorithms. The humanities have an essential role to play in understanding and evaluating them." And while these AI tools can already help us illustrate our ideas, they still have a long way to go before they can translate them perfectly.
