It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Tuesday, April 14, 2026
Novo Nordisk joins forces with OpenAI to fast-track drug research
Danish pharma company Novo Nordisk has announced a partnership with OpenAI to apply artificial intelligence across its drug development process.
Novo Nordisk's partnership with OpenAI will “help the company bring new and better treatment options to patients faster,” the Danish pharmaceutical company announced on Tuesday.
The collaboration will allow Novo Nordisk to apply advanced AI to analyse complex datasets, identify potential drugs, and cut the time between research and patient access.
“This partnership is one important step in positioning Novo Nordisk to lead in the next era of healthcare. There are millions of people living with obesity and diabetes who need treatment options, and we know there are therapies still waiting to be discovered that could change their lives,” said Mike Doustdar, president and CEO of Novo Nordisk, in a press release announcing the collaboration.
The Danish company’s flagship products target chronic diseases and it is best known for its diabetes and weight-loss treatments, including Ozempic and Wegovy.
“Integrating AI in our everyday work gives us the ability to analyse datasets at a scale that was previously impossible, identify patterns we could not see, and test hypotheses faster than ever," Doustdar added.
"This means discovering new therapies and bringing them to market faster than ever before."
The partnership will also apply OpenAI’s capabilities to improve efficiency in manufacturing, supply chain and distribution, as well as corporate operations.
The pilot programmes will launch across research and development (R&D), manufacturing, and commercial operations, aiming for full integration by the end of the year.
“AI is reshaping industries and in life sciences, it can help people live better, longer lives,” said Sam Altman, CEO of OpenAI.
“This collaboration with Novo Nordisk will help them accelerate scientific discovery, run smarter global operations, and redefine the future of patient care,” he added.
In 2024, the Novo Nordisk Foundation partnered with Nvidia and the Export and Investment Fund of Denmark (EIFO) to establish the Danish Centre for AI Innovation, which operates Gefion – Denmark's first AI-ready supercomputer.
The initiative aimed to accelerate research and innovation in multiple fields, including healthcare and life sciences.
Pharmaceutical companies are increasingly investing in AI for drug discovery and development.
Eli Lilly, in the race with Novo Nordisk to lead the weight-loss drug market, announced a partnership with Insilico Medicine in March 2026 to develop and commercialise medicines discovered using artificial intelligence.
Under the agreement, worth up to $2.75 billion (€2.39bn), the American company will receive an exclusive worldwide licence for the development, manufacturing and commercialisation of novel oral therapeutics in preclinical development for certain indications, the companies said.
Can AI systems replace human judges and lawyers?
By Ayaulim Amangeldina
Can AI make fundamentally ethical decisions? If it delivers an unlawful verdict, who will be punished? In the pursuit of efficiency and speed, can we trust people's fates to artificial intelligence?
Imagine a courtroom where artificial intelligence (AI) replaces jurors, and a perfectly designed AI agent replaces the lawyer.
This is exactly the type of scenario that was pondered at the International MaxUp Legathon, which took place in Kazakhstan's capital, Astana.
Here, at the Maqsut Narikbayev University, students from 13 countries explored the impact of new technologies such as AI on legal systems, legal principles, ethics, and human rights, and shared experiences from their own countries.
Can AI replace judges?
Judicial decisions require factual analysis, moral reasoning, and human empathy to reach a fair resolution.
Firstly, artificial intelligence has no emotions. It cannot offer mitigating circumstances, show empathy for the situation, or feel compassion. AI systems are built on repetition. They learn from data, identify patterns, and make decisions based on experience.
If the outcome of previous experience was incorrect, the AI will resolve similar cases incorrectly in the same way.
Meanwhile, modern AI bots produce a specific result based on systematic repetition of data. They often fail to explain the logical chain by which they arrived at a particular choice. In courts, however, this justification is crucial.
Sergey Pen, Deputy Chairman of the Board for Science, Innovation, and Artificial Intelligence at Maqsut Narikbayev University (MNU) in Astana, believes that, for these reasons, AI currently has no chance of replacing judges.
Grand final of the International MaxUp Legathon 2026. Press service of Maqsut Narikbayev University
According to him, the language model provides answers based only on its knowledge base and statistical contextual matches, leaving out the reasoning provided by a human judge.
"There's a huge problem, as language models cannot reproduce the legal chain of reasoning, or so-called legal reasoning," said Pen.
Nowadays, he said, AI should be viewed only as a tool alongside human decision-making.
In Kazakhstan, such tools are used to review judicial practice and analyse legislation, processing large volumes of information more quickly than humans can. The judicial system also already uses AI officially as an internal tool, helping judges follow consistent judicial practice and see what decisions are being made on similar types of disputes across the country.
But this does not replace or supplant the legitimacy of the decision itself, which rests solely with humans.
"Only a judge, as a human being, can legitimise and render a judicial decision," explained Pen.
How does AI work in legal matters in other countries?
In China, AI has already entered the courtroom, with a student from the China University of Political Science and Law noting that AI is used in basic and simple tasks.
"In my country, it is now used to fill in some blanks, and maybe help the jury find some cases, [analyse whether] the case is similar to the previous ones, but the AI can't just decide the result," said Chinese law student Hongyi Chen.
Students from Georgia conducted immersive research to understand how AI could function within the legal system. Analysing international practice, they highlighted the gap between technological feasibility and legal legitimacy.
The main risk, they argued, is that an algorithm has no soul and cannot make an ethical choice.
"For the time being, the human judge, as an arbitrator, still has to weigh the merits in the case and only then does the legal solution become binding. But if we have already reached this level of AI application, then I think the possibility exists for AI to pass judgement in the future," said Tbilisi University student Keti Khaliashvili.
Participants from 23 universities in different countries were divided into two leagues: 10 English-speaking teams and 13 Russian-speaking teams. Press service of Maqsut Narikbayev University
Students from Canada's McGill University noted that the integration of AI into legal matters has to be explored and regulated.
“Personally, at the moment, I feel that AI is not developed yet to the point where it could completely replace human judgment," student Elisa Xue noted.
The responsibility dilemma
The most fundamental reason why AI can't replace humans is responsibility. A court decision is an act of authority, for which judges are accountable. If a judge makes a mistake, an appeal can be filed, disciplinary action can be taken, or the matter can be resolved by law.
But in the case of artificial intelligence, who is to blame? The code developer or the cloud service provider?
MNU students believe it's necessary to determine who will bear responsibility.
"If AI-generated content causes harm, is it responsible, and is 'AI' labelling necessary? There are no categories in law that differentiate between harmless content, harmful content, or potentially harmful content. Law must be adaptable and apply to different levels of legal relations," argues MNU student, Islam Shagatayev.
The winners of the legathon agree with him, and they propose the introduction of absolute responsibility for manufacturers and system developers.
Winners of the Legathon: students of Al-Farabi Kazakh National University in Almaty. Press service of Maqsut Narikbayev University
"We think that an individual or user does not always have any protection. That's why the developer should bear most of the responsibility," said Alissa Doktorovich, a student of Al-Farabi Kazakh National University.
The fact that discussions about AI are taking place right now in Kazakhstan is symbolic, as 2026 here marks the Year of Digitalisation and Artificial Intelligence.
The country's Law on "Artificial Intelligence", passed in November 2025, introduced and enshrined the principle of anthropocentricity, meaning AI is merely a tool that imitates human cognitive functions, and does not replace human responsibility.
The Legathon will allow us to rethink and reassess the relationship between AI and the law. In an era when law is becoming increasingly algorithmic, we need to remember the human element.
How will AI impact tourism and travel? Your next trip could be entirely planned by ChatGPT
Travel companies Rome2Rio and Omio are integrating with OpenAI, giving over 900 million weekly ChatGPT users instant access to routes, prices and transport options worldwide.
From rushing to catch a flight to panicking mid-air about how to reach a foreign hotel, travel anxiety may soon be a thing of the past as artificial intelligence promises to make the whole experience seamless, if perhaps a little too predictable.
Two global travel platforms are launching apps with OpenAI to offer ChatGPT's 900 million weekly users access to routes, prices and transport options worldwide.
Rome2Rio and its German parent company, Omio, have announced they are launching apps within ChatGPT that will allow users to search, compare and plan journeys across trains, buses, flights, ferries and other modes of transport.
Finding the best route between two cities often means juggling multiple booking sites to piece together connections — but new AI-powered apps are changing that.
Users can simply ask "What's the fastest and cheapest route from Rome to Florence this Saturday?" and get everything in a single conversation.
One in three travellers is already using AI to plan trips, often turning to the technology before they even decide on a destination, according to Rome2Rio's research.
Although AI is still far from perfect and prone to hallucinating and making things up, the travel companies say they use live data rather than AI-generated estimates.
"There's a real train, there's a real bus, a ferry — and it's all connected via API, deep technical integrations," Naren Shaam, founder and CEO of Omio, told Euronews Next.
"Anything built off of that is real content."
The technology is designed to reduce AI hallucination by pulling from a verified inventory rather than generating approximate travel information, he added.
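The grounding approach Shaam describes can be sketched in a few lines. Everything below is a hypothetical illustration: the route data, function names, and response format are made up, not Omio's actual API. The point is the design itself, which answers only from a verified inventory and admits when nothing matches, rather than letting a model generate plausible-sounding routes.

```python
# Minimal sketch of inventory-grounded answers. All route data is invented
# for illustration; a real system would query live supplier APIs.
ROUTES = {
    ("Rome", "Florence"): [
        {"mode": "train", "duration_min": 95, "price_eur": 24.90},
        {"mode": "bus", "duration_min": 210, "price_eur": 9.99},
    ],
}

def answer_route_query(origin, destination):
    """Return only options present in the verified inventory; never fabricate."""
    options = ROUTES.get((origin, destination))
    if options is None:
        return f"No verified routes found from {origin} to {destination}."
    # Pick the fastest verified option rather than generating one.
    best = min(options, key=lambda o: o["duration_min"])
    return (f"Fastest verified option {origin} -> {destination}: "
            f"{best['mode']}, {best['duration_min']} min, "
            f"EUR {best['price_eur']:.2f}")

print(answer_route_query("Rome", "Florence"))
print(answer_route_query("Paris", "Berlin"))
```

The key design choice is the `None` branch: an unknown city pair yields an explicit "not found" rather than a hallucinated itinerary.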
AI may also help the travel experience as it can tell you about disruptions and provide alternate routes, Shaam said.
"If there is a disruption on a line we should, in theory, send you a message saying, 'Hey, there's likely a disruption. Here are a couple of alternate options to consider,'" he said, adding that while last-minute changes may cost more, the goal is to make travel "a lot more transparent and help customers make sound decisions".
Despite the convenience AI brings to travel, there is a fear that if everyone uses it to plan their routes and holidays, already over-touristed areas may become even more crowded.
And will an algorithm take away the wanderlust of travel, stumbling across an unexpected route, discovering a town not on any itinerary and making a split-second decision at a station?
Because AI systems are trained on popularity data, they reinforce existing patterns, meaning they may nudge users toward the same routes and travel experiences that already dominate internet search results.
Shaam acknowledges the risk, but argues the effect can also go the other way.
"AI can empower people to discover more routes," he said. "You have to trigger more questions for it to go deeper into context to give more unique itineraries."
The idea is that conversational AI, unlike a search bar, invites follow-up questions and may lead a user who was asking where to spend a night in Madrid to ask about other parts of Spain.
Shaam also argues that AI-driven discovery could help spread tourism beyond overcrowded major cities, nudging travellers toward rail and bus connections to secondary destinations.
"If you go to Spain and you're not only going to Madrid and Barcelona, but Seville, Granada, Bilbao — those are two, two-and-a-half hour train journeys," he said.
"If AI can make that trip happen, it's good for local ecosystems too."
For now, Omio frames AI as a tool that handles logistics, leaving the spirit of adventure intact.
New research finds workers are leveraging AI for career mobility as employers struggle to keep pace
New University of Phoenix Career Institute® Career Optimism Index® study points to an emerging shift in workforce power dynamics
An infographic depicting key findings of the 2026 Career Optimism Index® study: While today’s workforce appears to be staying put, a quiet shift is underway. AI is helping workers build confidence, develop skills and prepare for future career moves – potentially away from their current employer.
University of Phoenix Career Institute® today released its sixth annual Career Optimism Index® recurring national workforce research study of 5,000 U.S. working adults and 1,000 employers fielded January 21–February 6, 2026. The study found that while workers appear to be “job hugging” in a stabilizing labor market where mobility remains limited, many are quietly using AI to build their skills, boost confidence, and position themselves for greater career mobility – potentially preparing for their next move, which could be away from their current employer.
On the surface, the landscape favors employers: companies are deploying AI to increase productivity, reshape teams, and find efficiencies, according to the World Economic Forum‘s latest AI at Work report. But the 2026 Index points to a new dynamic underway: half of workers (50%) say AI makes them more confident about pivoting to a new role – signaling an impending shift from “job hugging” to “job hopping” that puts power back in workers’ hands. The last time workplace power was firmly in employees’ hands was in 2022, when employers saw a mass exodus of talent seeking greater mobility and opportunity, as highlighted in the 2022 Career Optimism Index® study.
This year’s Index shows workers are increasingly turning to AI independently to strengthen their readiness in a business environment characterized by historically low turnover rates, as illustrated in the U.S. Bureau of Labor Statistics’ January JOLTS report. More than half of workers (53%) say AI advancements boost confidence in building their skills, while 75% say AI increases their confidence at work, and 81% say it helps them identify new ways to apply their skills for future growth.
This AI-driven confidence is translating into optimism: 63% of workers say they feel positive about job opportunities available to them, rising to 75% among workers who have become comfortable and knowledgeable about AI. As job growth shows signs of strengthening, according to the U.S. Bureau of Labor Statistics’ March Employment Situation report, this may mark the moment many workers have been quietly preparing for – when rising confidence and AI-driven skill building begin to translate into increased career movement. At the same time, nearly half of employers (48%) worry they cannot retain AI-fluent talent, highlighting AI capability as both a competitive advantage and a looming retention risk.
Key Findings from the 2026 Career Optimism Index®
AI is increasing workers’ confidence in career mobility: 50% of workers say AI makes them more confident about pivoting into a new role, and workers who are knowledgeable about AI report even greater optimism about available job opportunities than workers overall (75% vs. 63%).
Workers are learning AI independently: Half of workers (50%) say they are learning to use AI independently, pointing to strong employee demand for AI skill-building even without formal employer support.
Employees are looking for more AI guidance: Many workers say employer support has not kept pace with their needs, with 47% saying their employer should be doing more to incorporate AI into their work and 60% wanting more guidance in learning AI tools.
Retention concerns are rising: Nearly half of employers (48%) worry they may be unable to retain AI-fluent talent as demand for those skills continues to grow, and 62% say employees are developing AI skills faster than the organization can adapt.
Clear AI strategy improves job satisfaction: Workers whose employer has a clear plan for AI-enabled growth are significantly more likely to be satisfied in their current job than those whose employer does not (87% vs. 72%).
Why This Matters Now
As organizations accelerate AI adoption, the 2026 Index identifies that workforce implications extend beyond productivity and efficiency. For workers, AI is becoming a tool for career growth, confidence, and mobility. For employers, that creates a new challenge: the same capabilities that help employees become more effective in their current roles may also make them feel more prepared to plan their exit.
“AI is changing the workforce conversation in real time,” said John Woods, Provost and Chief Academic Officer at University of Phoenix. “While many organizations are focused on how AI can improve efficiency, our 2026 Career Optimism Index® study shows workers are focused on how to use AI to help them grow and advance their careers. For employers, this is an important moment to lead with AI clarity, because organizations that make AI part of a broader growth strategy for their people may be better positioned to support engagement, satisfaction, and retention – particularly as hiring shows signs of strengthening and workers gain more confidence to explore new opportunities.”
The findings suggest employers have an opportunity to move from AI experimentation to workforce strategy by defining clear AI career pathways and standards, establishing skills assessment systems that support talent management and internal mobility, expanding workforce training and structured enablement, and building AI capability among managers to foster a stronger culture of AI support.
The Career Optimism Index® study is one of the most comprehensive studies of Americans' personal career perceptions to date. The University of Phoenix Career Institute® conducts this research annually to provide insights on current workforce trends and to help identify solutions to support and advance American careers.
The sixth annual study, fielded between January 21 and February 6, 2026, surveyed 5,000 U.S. adults who either currently work or wish to be working on how they feel about their careers at this moment in time, including their concerns, their challenges, and the degree to which they are optimistic about their careers. The study was conducted among a nationally representative sample of U.S. adults (ages 18 and up). The study also explores insights from 1,000 U.S. employers who are influential or play a critical role in hiring and workplace decisions within a range of departments, company sizes, and industries.
ABOUT UNIVERSITY OF PHOENIX CAREER INSTITUTE®
Housed within the university's College of Doctoral Studies, the Career Institute conducts impactful research and collaborates with leading organizations to explore broad and persistent barriers to career growth. Through the Career Optimism Index® annual studies and targeted reports, the Institute shares actionable insights to inform solutions. For more information, visit www.phoenix.edu/career-institute.
ABOUT UNIVERSITY OF PHOENIX
University of Phoenix is Built for Real Life. 50 Years Strong. The University innovates to help working adults enhance their careers and develop skills in a rapidly changing world through flexible online learning, relevant courses, academic AI pillars, and skills-mapped curriculum for associate, bachelor’s and master’s degree programs. Active students and alumni have access to Career Services for Life® resources including career guidance and tools. For more information, visit phoenix.edu.
Managed misalignment of AI and the impossibility of full AI-human agreement
Dr. Zenil showing on the screen a simulation of AI agents interacting and trying to influence one another, along with the various metrics associated with each agent in the arena.
Perfect AI alignment with human values and interests is mathematically impossible, according to a study, but behavioral diversity among AI agents offers the promise of some control. Hector Zenil and colleagues used Gödel’s incompleteness theorem and Turing’s undecidability result for the Halting Problem to show that any LLM complex enough to exhibit general intelligence or superintelligence will also be computationally irreducible and produce unpredictable behavior, making forced alignment impossible. As an alternative, the authors propose a strategy of “managed misalignment,” in which competing AI agents with different cognitive styles and partially overlapping goals operate in distinct roles to check one another.
As each agent attempts to fulfill its own goals with its own modes of reasoning and ethical frameworks—what the authors dub “artificial agentic neurodivergence”—the agents will dynamically aid or thwart one another, preempting ultimate dominance by any single system. The authors simulated a “cognitive ecosystem” by prompting AI interacting agents to represent fully aligned behaviors such as optimizing human utility, partially aligned behaviors such as prioritizing the environment, or unaligned behaviors, pursuing arbitrary objectives.
The authors trialed this approach in ethical debates between a range of LLMs in which humans or prompted LLMs tried to disrupt emerging consensus. In these debates, open models showed a wider spectrum of perspectives than proprietary models, creating what the authors characterize as a more resilient AI ecosystem, one that is less likely to converge on a single opinion—which could be harmful in cases where that opinion is not aligned with human interests.
Journal: PNAS Nexus
Article Title: Neurodivergent influenceability in agentic AI as a contingent solution to the AI alignment problem
Article Publication Date: 14-Apr-2026
Research uses AI to examine social exchanges and interactions
Psychologists have long known that social situations profoundly influence human behavior, yet have lacked a unified, empirically grounded way to describe them. A new study addresses this problem by using generative AI to systematically classify thousands of everyday social interactions. In a new study, researchers analyzed thousands of textual descriptions of two-person social interactions, then used generative artificial intelligence (AI) to code the exchanges by features, resulting in a taxonomy of categories of social interactions. Then they related these groups to variables like conflict, power, and duty to provide a comprehensive, data-driven framework for quantifying the structure of interactions.
The study, “The Structure of Social Situations: Insights From the Large-Scale Automated Coding of Text,” by researchers at Carnegie Mellon University and the University of Pennsylvania (Penn), is published in Psychological Science. “Researchers have proposed many frameworks for representing social situations, but due to the diversity and complexity of real-life situations, many of these are partial, non-integrated, and not mapped onto situations encountered in everyday life,” says Taya R. Cohen, Professor of Organizational Behavior and Business Ethics at Carnegie Mellon’s Tepper School of Business, who coauthored the study. “Our work advances the study of social cognition and behavior by using AI to create a more comprehensive framework for the structure of social situations.”
Because social situations exert a profound influence on human behavior and mental life, understanding the structure and dimensions of such situations has been a major topic of psychology research for decades. But gaps remain, leaving the field without a rigorous understanding of how the characteristics that matter most relate to commonly encountered social interactions.
In this study, researchers analyzed more than 20,000 detailed textual descriptions of two-person social interactions. They used a large data set of short stories describing social interactions in daily life (e.g., family situations, workplace interactions, animal interactions, pet mishaps) written by online participants, as well as short situational descriptions from other sources (e.g., blogs, novels, fiction published on social media, reading-comprehension exams).
The study used a combination of large language model (LLM) techniques to extract high-level situational characteristics from the data sets and core situational cues like relationships, activities, locations, and goals (who, what, where, and why) that make up the observable dimensions of each situation. “A core challenge in psychology is understanding the structure of social situations—the patterns and psychological features that shape how people think, feel, and behave in social contexts,” explains Sudeep Bhatia, Associate Professor of Psychology at Penn, who led the study. “Our work provides a rigorous and integrative framework for mapping out everyday social situations and relating them to key theoretical dimensions in psychology.”
The study found systematic associations between situational characteristics proposed by existing taxonomies as well as between situational characteristics and observable cues, replicating and extending findings from earlier studies, but at a much larger scale. In particular, the study drew on a broader and more representative group of typical exchanges experienced by adults.
“Our study offers researchers a rich descriptive catalogue of dozens of classes of situations with which they can test and refine their theories,” Bhatia added. “It can be used to model the distributional structure of situations, as we did, as well as to formally study the effect of situations on interpersonal behavior, perceptions of situations, pursuit of goals, and the interplay between situations and personality.”
Among the study’s limitations, the authors note that their analysis relied on short stories, which resemble the brief autobiographical narratives used in prior research but likely exclude more complex and nuanced situations. In addition, their findings depended on analyses conducted with current-generation LLMs, which have biases and constraints. Finally, the work examined only English-language narratives, which limits the cultural scope of the conclusions.
There are already hundreds of thousands of large language models (LLMs) in existence, with a few dozen commercial systems dominating the market. Between options such as GPT-4, Claude and Gemini, many people have their favorite, especially when it comes to creative tasks such as writing.
Those preferences, however, are likely entirely in the eye of the beholder. According to new research from Duke University, the creative outputs of commercial LLMs are more similar to each other than users might hope. When challenged with three standard tasks assessing creativity, answers from commercial LLMs are much more alike than their human counterparts.
The results appeared online March 24 in the journal Proceedings of the National Academy of Sciences Nexus.
“People might wonder if different LLMs will take them in different directions with the same prompts for creative projects,” said Emily Wenger, the Cue Family Assistant Professor of Electrical and Computer Engineering at Duke. “This paper basically says no. LLMs are less creative as a population than humans.”
According to a 2024 survey by Adobe, over half of Americans have already used LLMs as creative partners for brainstorming, writing, creating images or writing code. Because an overwhelming majority of users trust them to help boost creativity, researchers have been trying to find out whether that trust is misplaced.
One seminal paper in this emerging field conducted by Anil Doshi and Oliver Hauser found that writers who used GPT-4 produced more creative stories than humans working alone. However, the same study showed that those LLM-aided stories were more similar to each other than were stories from human writers working solo.
This research, and other papers like it, only looked at people using one specific LLM. Wenger, who studies how data gets into AI models, was curious how these types of results would translate between different LLMs.
“Commercial LLMs have all been trained on the same dataset—the entirety of the internet—and they all have the same goal,” Wenger said. “It seemed likely to me that this would limit the amount of diversity we’d see in their creativity, so I decided to find out.”
To explore her hunch, Wenger turned to Yoed Kenett, a cognitive neuroscientist and associate professor of data and decision sciences at the Technion – Israel Institute of Technology. Together, they settled on three standard tasks used to assess creativity levels and put 22 LLMs to the test against over 100 people.
One test, called the Alternative Uses Test (AUT), challenges participants to name ways an object could be used other than its intended use. For example, using a book as a doorstop, fly swatter or kindling for a fire. The second test, called the Divergent Association Task (DAT), asks participants to name 10 different words, each as different as possible from the others in every sense. Lastly, the Forward Flow (FF) test provides a starting prompt word and asks participants to write down the next word that follows in their mind from the previous word, for up to 20 words. For example: fire, candle, wax, hair, comb, honey, bee, stripes, zebra, etc.
Together, these tests seek to measure the divergent and dissociative thinking abilities that facilitate creativity.
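Tests like the DAT are typically scored as an average pairwise semantic distance: the more different the words are from one another, the higher the score. The sketch below illustrates that idea with tiny hand-made toy vectors standing in for real word embeddings (an actual study would use a trained embedding model, and the absolute numbers here are meaningless; only the relative ordering matters).

```python
import numpy as np

# Toy 4-dimensional vectors standing in for real word embeddings.
# Purely illustrative values, not from any embedding model.
vectors = {
    "fire":   np.array([0.9, 0.1, 0.0, 0.2]),
    "candle": np.array([0.8, 0.2, 0.1, 0.3]),
    "zebra":  np.array([0.0, 0.9, 0.7, 0.1]),
    "honey":  np.array([0.1, 0.3, 0.9, 0.8]),
}

def cosine_distance(a, b):
    """1 minus cosine similarity; 0 for identical directions, up to 2 for opposite."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def dat_score(words, vectors):
    """Average pairwise semantic distance across all word pairs."""
    ws = list(words)
    dists = [
        cosine_distance(vectors[ws[i]], vectors[ws[j]])
        for i in range(len(ws))
        for j in range(i + 1, len(ws))
    ]
    return float(np.mean(dists))

# Related words score low; a semantically varied list scores higher.
print(dat_score(["fire", "candle"], vectors))
print(dat_score(["fire", "zebra", "honey"], vectors))
```

The same averaged-distance logic, applied across responses from different models rather than across words, is one way homogeneity between LLMs can be quantified.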
“Significant empirical research over the past few decades highlights how much human creativity depends on variability,” said Yoed Kenett. “The problem, as we and others are increasingly showing, is that while LLMs appear to generate extremely original outputs, they are overly homogenized and not variable in their responses. This could have detrimental long-term impact on human creative thinking and thus must be addressed.”
The results, which aimed to measure the variability and originality in responses between LLMs and people, were clear. While individual LLMs might outperform individual people in levels of creativity, as a whole, the algorithms’ responses were much more similar to each other than the people’s. Importantly, altering the LLM system prompt to encourage higher creativity only slightly increased their variability—and human responses still won out.
“This work has broad implications as people continue adopting and integrating LLMs into their daily life,” Wenger said. “Overreliance on these tools will smooth the world’s work toward the same underlying set of words or grammar, tending to make writing all look the same.”
“If you’re trying to come up with an original concept or product to stand out from the crowd,” Wenger continued, “this work strongly suggests you should bring together a diverse group of people to brainstorm rather than relying on AI.”
CITATION: “Large language models are homogeneously creative.” Emily Wenger and Yoed N. Kenett. PNAS Nexus, 2026, 5, pgag042. DOI: 10.1093/pnasnexus/pgag042
The University of Bonn is hosting a new Emmy Noether Group devoted to AI methods. Junior Professor Marc Rußwurm is developing AI methods for fusing different types of geodata to arrive at a uniform geospatial representation. The German Research Foundation (DFG) will be providing up to 1.4 million euros in funding for the research group over the next six years. The Emmy Noether Program is a framework designed to enable selected postdocs and assistant professors on fixed-term contracts to obtain the qualifications necessary to hold a university professorship.
Places can be described based on various different characteristics, such as whether a given place is forested or barren, its height above sea level, what animals are found there and whether there are buildings, roads or parks. Such information is generally stored in classic geodatabases of maps, satellite images, elevation models, etc. This practice tends to create problems, however, because “the data exist in differing formats, resolutions and grid sizes, so it takes major effort to utilize them in combination,” as Junior Professor Marc Rußwurm of the University of Bonn Institute for Food and Resource Economics explains. “Harmonizing such geodata to make it usable in modern AI methods is a lot of work.” This can mean, for example, combining animal photos from camera traps with vegetation, altitude, climate and human infrastructure data in order to predict whether certain species will find suitable habitats there.
AI is learning to better “understand” places
The new Emmy Noether Group will investigate how geodata can be represented within the parameters of artificial neural networks. Rußwurm and his team are developing AI methods to synthesize such different data types to derive a uniform geospatial representation. The goal is for AI to achieve a better “understanding” of places than it currently has. “People often rely on pictures and maps to get a sense of what a place is like without actually being there themselves—whether warm or cold, green or intensively developed, crowded or deserted. Our work is aimed at enabling AI to use this kind of data to know more about places in similar fashion.”
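To illustrate the harmonization problem in miniature, the sketch below fuses heterogeneous per-location layers into one feature vector per grid cell. The layers, units and values are invented, and a real pipeline would use learned encoders rather than simple z-scoring and concatenation:

```python
def zscore(values):
    """Rescale one data layer to zero mean and unit spread so that
    layers with different units become comparable."""
    m = sum(values) / len(values)
    s = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / s for v in values]

# Hypothetical layers describing the same 4 grid cells in different units.
elevation_m   = [12.0, 340.0, 88.0, 1500.0]
tree_cover    = [0.9, 0.1, 0.6, 0.3]           # fraction of cell forested
species_count = [42.0, 7.0, 23.0, 11.0]

layers = [zscore(elevation_m), zscore(tree_cover), zscore(species_count)]

# One fused feature vector per grid cell: plain concatenation here,
# where a learned model would produce a joint embedding instead.
embeddings = list(zip(*layers))
print(len(embeddings), len(embeddings[0]))  # 4 cells, 3 features each
```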
The new AI methods developed have diverse application potential, such as allowing more precise urban quality-of-life analysis by correlating location characteristics with resident satisfaction data or real estate prices. “By drawing on different kinds of geoinformation, AI could also project what coastlines and beaches are subject to elevated plastic waste levels,” Rußwurm observes. Global mapping of vegetation and settled areas could also be made more precise, as AI would be more aware of regional differences.
Transdisciplinary AI research
The breadth of possible applications indicates the transdisciplinary nature of this work, which is why the University of Bonn is an ideal research center for it. Rußwurm, who moved here from the Netherlands at the start of the year, will be collaborating with colleagues from different disciplines within the framework of the University of Bonn’s Transdisciplinary Research Areas (TRAs) Modelling and Sustainable Futures. The collaborative purpose is to study how AI methods can be employed to more effectively evaluate local biodiversity, gauge microplastic soil content over large areas, represent global Earth gravity in AI models and reveal how biodiversity and other environmental changes correlate with political and societal decision-making processes. “What makes the University of Bonn so attractive to me is how fundamental research and applied research really go hand in hand here.”
Bio
Marc Rußwurm has been junior research group leader at the University of Bonn’s Machine Learning in Earth Observation (MEO) Lab since February 2026. His previous position was Assistant Professor of Machine Learning and Remote Sensing at Wageningen University, and he has experience in geodesy and geoinformation. Starting in September 2026 Rußwurm will be head of the Emmy Noether Group “Earth Embeddings: Learning Concept Maps in Neural Nets,” backed by roughly 800,000 euros in initial funding from the German Research Foundation (DFG) for a three-year period. Around 600,000 euros in follow-on funding may be granted for a three-year extension after passing an interim evaluation. The funding is provided as part of the DFG's involvement in the Global Minds Initiative Germany by the Federal Ministry of Research, Technology and Space.
Tired of swiping? Now an AI simulation helps us understand why
Screen logging tells us where smart phone users tap and swipe, but now researchers have developed a musculoskeletal model that helps understand the physical effort that goes into these motions.
Prolonged scrolling is bad for your well-being, but is it also physically tiring? Until now, we haven’t really been able to say. This is why researchers from Aalto and Leipzig Universities created a new AI model that makes it possible to simulate muscle activations and energy use in order to work out how physically effortful smartphone interactions are for users.
‘It’s the first time anyone has developed a tool that can help designers and developers quickly assess how physically tiring a real mobile user interface could be,’ says Antti Oulasvirta, Professor at Aalto University and ELLIS Institute Finland. ‘So far, smartphone logs have only told us where a finger has touched the screen – not whether or not it’s felt comfortable.’
To bridge this gap, Oulasvirta and his colleagues at Leipzig University developed Log2Motion, an AI model that translates smartphone logs into simulated human motion. Movement of this musculoskeletal simulation is based on data from previous motion capture studies.
In the simulation, a human model consisting of digital bones and muscles moves its index finger to interact with a smartphone laid out on a desk. Through a software emulator, the model can use real mobile apps in real time. It can re-enact logs collected on users to illuminate what happened during interaction. The Log2Motion model then estimates the motion, speed, accuracy and effort of these biomechanical movements.
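For context, the raw material such a system starts from is sparse: a touch log is little more than timestamped coordinates. Below is a minimal sketch, with a hypothetical log format and values, of the surface-level kinematics a log alone supports, which is exactly where a biomechanical simulation like Log2Motion adds value:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    t: float  # timestamp in seconds
    x: float  # screen coordinate in millimetres
    y: float

def swipe_kinematics(events):
    """Path length (mm) and mean speed (mm/s) of one swipe: the kind
    of measure a raw log supports without any biomechanical model."""
    path = 0.0
    for a, b in zip(events, events[1:]):
        path += ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
    duration = events[-1].t - events[0].t
    return path, path / duration

# A hypothetical down-up swipe sampled three times.
swipe = [TouchEvent(0.00, 30.0, 100.0),
         TouchEvent(0.05, 30.0, 70.0),
         TouchEvent(0.10, 30.0, 40.0)]
length, speed = swipe_kinematics(swipe)
print(length, speed)  # 60.0 mm travelled at 600.0 mm/s
```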
The model provides entirely new horizons for smartphone use research – as well as design.
'We found that some gestures are harder to perform – in this case, up-down and down-up swipes,' explains Oulasvirta. 'Small icons and locations toward the corners of the display also require additional effort.'
Using such simulation early in the process could help designers create user-friendly interfaces. It can also provide insight into accessibility needs for users with tremors, reduced strength or prosthetics.
'It is possible to scale the Log2Motion model to simulate other scenarios, such as the more classic one: lying on the couch, holding the phone in one hand and scrolling with the thumb,' Oulasvirta says.
The researchers hope that human simulations would be adopted to help design interactions that are more ergonomic and pleasant for users. In the future, these simulations could be combined with other AI methods to optimise user interfaces to a user’s needs.
The paper, 'Log2Motion: Biomechanical Motion Synthesis from Touch Logs', will be presented on April 17 at CHI 2026, the leading conference on human–computer interaction.
Aalto University is where science and art meet technology and business. We shape a sustainable future by making research breakthroughs in and across our disciplines, sparking the game changers of tomorrow and creating novel solutions to major global challenges. Our community is made up of 16,000 students and 5,200 employees, including 446 professors. Our campus is in Espoo, Greater Helsinki, Finland.
Protein engineering is a field primed for artificial intelligence research. Each protein is made up of amino acids; to optimize a protein function, researchers modify proteins by switching out one of 20 different amino acids for another. For a protein that is just 50 amino acids in length, this leads to approximately 1.13 × 10^65 potential combinations to test: that is 113 followed by 63 zeros, with more than five times as many zeros as a trillion has.
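The count quoted above is just 20 independent choices at each of 50 positions, which Python's arbitrary-precision integers can compute exactly:

```python
# Size of the sequence space for a 50-residue protein drawn from the
# 20 standard amino acids: 20 choices at each of 50 positions.
n_variants = 20 ** 50
print(f"{n_variants:.2e}")  # 1.13e+65
```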
This number of potential combinations, impossible to test in the lab, makes protein engineering an ideal challenge for AI. Modeling which of these combinations will give the best results is a perfect problem for the technology’s massive computing power. But AI is only as good as the data used to train it, and in some areas of protein engineering, the right data just didn’t exist.
“One of the biggest bottlenecks in AI-guided protein engineering is not coming up with machine-learning models. It is generating the right and enough experimental data to train them,” said Han Xiao, Rice University professor of chemistry, biosciences and bioengineering and director of the SynthX Center. “For engineering protein activity, which optimizes what a protein does, we had a very clear problem: There simply were not enough datasets to train accurate models.”
To be able to generate AI models that could accurately predict how to optimize a protein’s function, or activity, Xiao’s team had to first generate enough activity data about any given protein to train an AI model. In a recent Nature Biotechnology publication, Xiao’s team and collaborators from Johns Hopkins University and Microsoft did just that, sharing an approach that provided the needed data and created accurate models in just three days.
This approach, called Sequence Display, can generate more than 10 million data points in a single experiment. These data points are then fed into protein language AI models, which use them to predict which changes to a protein’s amino acids will create the desired change for the protein’s activity or function.
“We were able to develop an activity-based barcoding system that records the activity of individual protein variants and generates the kind of dataset needed to train a machine learning model,” said Linqi Cheng, a Rice graduate student and first author on the study. “Then the model was able to predict mutations that significantly improved the activity of the protein we were studying.”
The team chose a small CRISPR-Cas protein for proof of concept. The protein was valued for its compact size but was limited in the stretches of DNA it could target and cut. The researchers wanted to identify a version that could cut a wider variety of DNA targets.
First, they mutated the DNA that codes for the Cas9 protein, creating many variations. A blank DNA barcode was attached to each variant, along with a special editor that would change the barcode in response to the protein’s activity level. As the protein’s activity levels increased, so did the editor’s. This meant that the most active protein variations had the biggest changes in their barcodes. The DNA barcodes were then read by next-generation sequencing, which would essentially scan the barcode and classify each sequence by level of activity.
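The readout step can be illustrated with a toy sketch: count how many positions of a sequenced barcode differ from the blank template, then rank variants by that count. The sequences and the edits-track-activity assumption below are invented for illustration; the actual editing chemistry and scoring in the study differ:

```python
def barcode_edits(blank, read):
    """Number of positions where a sequenced barcode differs from the
    blank template, used here as a proxy for recorded activity."""
    return sum(1 for a, b in zip(blank, read) if a != b)

BLANK = "CCCCCCCCCC"  # hypothetical 10-nt unedited barcode

# Hypothetical sequencing reads for three protein variants.
reads = {"variant_A": "CCTCCCCCCC",   # 1 edit  -> low activity
         "variant_B": "CTTCCTCCTC",   # 4 edits -> medium activity
         "variant_C": "TTTCTTCTTT"}   # 8 edits -> high activity

# Most-edited (most active) variants first.
ranked = sorted(reads, key=lambda v: barcode_edits(BLANK, reads[v]),
                reverse=True)
print(ranked)  # ['variant_C', 'variant_B', 'variant_A']
```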
“The AI is not replacing the experiment here. It instead depends on the experiment,” Cheng said. “Sequence Display gives us the data foundation, and the models help us search a much larger data space for strong candidates.”
The team successfully repeated this process with other proteins, including aminoacyl-tRNA synthetases, cytosine deaminase and uracil glycosylase inhibitor. In each case, the barcoding experiment generated enough data points to train AI models.
“What this approach provides is a practical framework for integrating AI with protein engineering,” said Xiao, who is also a Cancer Prevention and Research Institute Scholar. “Rather than relying on machine learning as a stand-alone solution, we couple it with an experimental platform that generates high-quality training data. This synergy enables more efficient discovery of advanced research tools and next-generation therapeutic proteins.”
This work was supported by a SynthX Seed Award (SYN-IN-2024-002), the National Institutes of Health (R35-GM133706, R01-CA277838, R01-AI165079 to H.X.), the Robert A. Welch Foundation (C-1970 to H.X.), the U.S. Department of Defense (W81XWH-21-1-0789, HT9425-23-1-0494, HT9425-25-1-0021 to H.X.), a 2024 Rice Synthetic Biology Institute Seed Grant (H.X.) and a Medical Research Award from the Robert J. Kleberg, Jr. and Helen C. Kleberg Foundation.
CITATION: “Sequence Display enables large-scale sequence–activity datasets for rapid protein evolution.” Nature Biotechnology. Article publication date: 8-Apr-2026.
Can artificial intelligence match medical interview assessments by clinicians?
Researchers report how artificial intelligence-based scoring of interview transcripts is comparable to clinicians’ scores while reducing evaluation time
Artificial intelligence can evaluate medical interview transcripts with accuracy comparable to expert clinicians while enabling faster and more scalable feedback in medical training
Credit: Professor Toshio Naito from Department of General Medicine, Juntendo University Faculty of Medicine, Japan
Clinical interviewing is one of the most important skills physicians develop during their training. It forms the foundation for accurate diagnosis and effective patient care. However, evaluating these skills is often time-intensive, requiring repeated observations and detailed feedback from experienced clinicians. As medical education continues to expand, this growing assessment burden has become a significant challenge. The incorporation of generative artificial intelligence (AI) has the potential to significantly improve the assessment of interviewing skills; however, its efficiency compared to standard evaluation systems is not well understood.
To fill this gap, researchers from Japan explored whether artificial intelligence could help address this issue by evaluating medical interview transcripts. Their findings were published on February 17, 2026 in Volume 12 of the journal JMIR Medical Education. The research team led by Dr. Hiromizu Takahashi (corresponding author) and Professor Toshio Naito, both from the Department of General Medicine, Juntendo University Faculty of Medicine, Japan, examined whether AI-based assessment (ABA) could match traditional human-based assessment (HBA).
“Our central message is that AI may help make medical training fairer, faster, and more scalable,” explains Prof. Naito.
To compare the ABA and HBA approaches, the researchers designed a cross-sectional validation study using a virtual patient system. Seven participants, including medical students, resident physicians, and attending physicians, conducted clinical interviews with an AI-simulated patient presenting with bilateral leg weakness. These conversations were automatically recorded and converted into transcripts. The transcripts were then evaluated using the Master Interview Rating Scale, a standardized tool that assesses various aspects of clinical communication, such as information gathering, organization, and empathy. For the ABA approach, AI models, specifically GPT-o1 Pro and GPT-5 Pro, were used to assess the transcripts. For the HBA approach, five experienced clinical instructors independently evaluated the same transcripts.
According to the researchers, ABA showed strong agreement with clinician evaluations, with only minimal differences in scores. At the same time, AI demonstrated greater consistency across repeated evaluations. Importantly, the use of AI also reduced the time required to assess each transcript by more than half, highlighting its potential to ease the workload of educators. “Rather than replacing teachers, this research suggests a practical ‘AI-first, faculty-verified’ model in which AI handles the first pass and educators focus their time on coaching, judgment, and high-stakes decisions,” says Dr. Takahashi.
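The two quantities being compared here, agreement with clinicians and consistency across repeated evaluations, can be sketched with invented scores (not the study's data):

```python
from statistics import mean, pstdev

# Hypothetical MIRS-style scores for five transcripts.
ai_scores    = [4.0, 3.5, 4.5, 3.0, 4.0]
human_scores = [4.0, 3.0, 4.5, 3.5, 4.0]

# Agreement: mean absolute difference between paired scores
# (lower means closer agreement with the clinicians).
mad = mean(abs(a - h) for a, h in zip(ai_scores, human_scores))
print(mad)  # 0.2

# Consistency: spread when the SAME transcript is scored three times.
ai_repeats    = [4.0, 4.0, 4.0]
human_repeats = [3.5, 4.5, 4.0]
print(pstdev(ai_repeats), pstdev(human_repeats))  # 0.0 vs about 0.41
```

On measures like these, the study found small score differences between AI and clinicians, with the AI showing the tighter spread across repeats.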
These results have important implications for medical education. In many training programs, delays in feedback can limit opportunities for students to improve their communication skills. By providing rapid and consistent evaluations, AI could make repeated practice more accessible, particularly in settings with limited faculty resources. “Students could interview an AI-simulated patient and receive feedback almost immediately instead of waiting days or weeks,” Prof. Naito adds, highlighting the potential for more timely learning experiences.
At the same time, the researchers emphasize that AI should be used with care. While AI performed well in this study, it was based on a small number of participants and a single clinical scenario. In addition, transcript-based evaluation cannot capture nonverbal cues, tone, or cultural nuances that are often important in real-world patient interactions. Prof. Naito and Dr. Takahashi note with caution, “AI should be used with human oversight, because text-only scoring can miss nuances such as tone, nonverbal communication, and cultural context.”
Overall, this study highlights the growing role of AI in medical education. By combining the speed and consistency of AI with the expertise and judgment of clinicians, it may be possible to create more efficient and scalable training systems. As the demand for high-quality medical education continues to rise, such approaches could help ensure that future clinicians receive the best training while reducing the burden on educators.
Affiliations 1Department of General Medicine, Faculty of Medicine, Juntendo University, Tokyo, Japan
2Department of Community-Oriented Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan
3Center for Postgraduate Clinical Training and Career Development, Nagoya University Hospital, Nagoya, Japan
4The School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
5Brookdale Department of Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, United States
6Department of General Internal Medicine, Itabashi Chuo Medical Center, Tokyo, Japan
7Department of Internal Medicine, Nishiwaki Municipal Hospital, Hyogo, Japan
8Anesthesiology and Critical Care Medicine, Tenri Hospital, Nara, Japan
9Department of Nursing, School of Nursing, University of Human Environments, Aichi, Japan
10Department of General Medicine, Saga University Hospital, Saga, Japan
11Department of General Medicine, Graduate School of Medical and Dental Sciences, Institute of Science Tokyo, Tokyo, Japan
12Department of General Medicine, Bibai City Hospital, Hokkaido, Japan
13Department of General Internal Medicine, Kita-Harima Medical Center, Hyogo, Japan
About Professor Toshio Naito
Dr. Toshio Naito, MD, PhD, MBA, is a Professor in the Department of General Medicine at Juntendo University Faculty of Medicine, Tokyo, Japan. With over 30 years of clinical and academic experience, his research focuses on general medicine, infectious diseases, HIV, and medical education. He has authored 112 original articles and 4 review articles, achieving an h-index of 23 and 1,799 citations. His contributions have significantly advanced both clinical practice and medical training.
The understanding and generating AI agents are differentiated by a role prompt. The understanding AI agent takes the original data, channel sensing data, and task requirements as input, and outputs the understanding embedding. The generating AI agent receives the noisy understanding embedding, channel sensing data, and task requirements as input, and outputs the generated content. Tool integrations and functional heads of both AI agents are configured as needed.
A study published in Engineering delves into the integration of generative artificial intelligence (GAI) with semantic communication (SemCom), a key technology for sixth-generation (6G) wireless networks, and proposes a novel large language model (LLM)-native generative SemCom system that shifts the communication paradigm from information recovery to information regeneration. SemCom, which transmits semantic meaning rather than raw bitstreams to boost communication efficiency, has long faced limitations in generalization, robustness and reasoning capabilities when based on traditional deep learning. GAI, with its strengths in learning complex data distributions and generating high-quality content, emerges as a viable solution to address these challenges, according to the research team composed of scholars from institutions including The Chinese University of Hong Kong, Shenzhen and Pengcheng Laboratory.
The paper systematically analyzes three SemCom systems empowered by classical GAI models: variational autoencoders (VAEs), generative adversarial networks (GANs) and diffusion models (DMs). For each model, the research elaborates on its fundamental concepts, corresponding SemCom architectures and latest research progress, noting their respective applications in semantic coding, joint source-channel coding (JSCC), channel modeling and equalization across text, image and audio modalities. Building on this foundation, the study introduces an LLM-driven generative SemCom system, which equips both transmitter and receiver with LLM-based AI agents acting as the core for information understanding and content generation, respectively, alongside channel adaptation modules for reliable signal transmission. The LLM-based AI agents integrate components like perception encoders, data-driven LLM adaptation and memory systems, enabling the transmitter to extract compact semantic embeddings from multimodal input data and the receiver to directly generate task-oriented content from the received semantic information.
A point-to-point video retrieval case study validates the system’s effectiveness, showing it achieves a 99.98% reduction in communication overhead and a 53% improvement in average retrieval accuracy compared to traditional communication systems, while also demonstrating superior robustness against channel noise. The research further identifies four promising application scenarios for generative SemCom: industrial Internet of Things (IIoT), vehicle-to-everything (V2X), the metaverse and the low-altitude economy, where the technology can streamline data transmission, enhance real-time processing and improve privacy protection. Additionally, the paper outlines three key open issues for future research, including the deployment of LLMs on resource-constrained edge devices, the dynamic evolution of AI agents at transceivers, and privacy and security concerns during semantic transmission, proposing targeted solutions such as model compression, continual learning and advanced encryption technologies for each challenge.
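The arithmetic behind an overhead figure like the 99.98% reduction is straightforward: a compact semantic embedding replaces the raw payload. A sketch with hypothetical sizes (a 10 MB clip, a 512-dimensional float32 embedding), ignoring headers and channel coding:

```python
def overhead_reduction(raw_bytes, embedding_dim, bytes_per_value=4):
    """Fraction of transmission saved by sending a semantic embedding
    (embedding_dim values, bytes_per_value each) instead of the raw payload."""
    semantic_bytes = embedding_dim * bytes_per_value
    return 1.0 - semantic_bytes / raw_bytes

# Hypothetical: a 10 MB video clip vs a 512-dim float32 embedding.
print(f"{overhead_reduction(10_000_000, 512):.4%}")  # 99.9795%
```

The receiver then regenerates task-relevant content from that embedding rather than reconstructing the original bits, which is what makes such large savings possible.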
The study provides a comprehensive guideline for the application of GAI in SemCom, laying a groundwork for the efficient deployment of generative SemCom in future 6G wireless networks and offering insights for the integration of advanced AI technologies with next-generation communication systems.
The paper “Generative Semantic Communication: Architectures, Technologies, and Applications,” is authored by Jinke Ren, Yaping Sun, Hongyang Du, Weiwen Yuan, Chongjie Wang, Xianda Wang, Yingbin Zhou, Ziwei Zhu, Fangxin Wang, Shuguang Cui. Full text of the open access paper: https://doi.org/10.1016/j.eng.2025.07.022. For more information about Engineering, visit the website at https://www.sciencedirect.com/journal/engineering.
Perovskite solar cells have emerged as one of the most promising next-generation photovoltaic technologies, but their development still depends heavily on time-consuming trial-and-error synthesis and labor-intensive device fabrication. Researchers have already explored more than one hundred thousand recipes to improve device performance, yet the formulas remain complex, additives are highly diverse, and crystallization is extremely sensitive to environmental conditions. As a result, fabrication remains difficult to control, while the related physical and chemical mechanisms are still not fully understood. Although high-throughput robotic systems can accelerate data collection, they often struggle to analyze rapidly growing numerical datasets effectively or to provide timely feedback for semantic recipe optimization and mechanistic reasoning at the device scale.
Researchers from the Hong Kong Polytechnic University and collaborating institutions, writing in Engineering in 2026, report an agentic robotics system for perovskite solar cell research. The work combines a language agent, a domain-specific recipe language model (RLM), and 11 interconnected robotic boxes within a unified framework for synthesis, fabrication, characterization, and feedback-driven optimization. Using this system, the team carried out 50,764 perovskite solar cell device experiments, achieved a champion power conversion efficiency of 27.0%, with a certified value of 26.5%, and generated more than 578 million tokens to strengthen recipe recommendation and mechanistic reasoning.
At the core of the study is the idea that robotic experimentation should do more than automate repeated operations. The researchers designed a seven-layer artificial intelligence (AI) architecture covering learning, generating, RecipeQA, fine-tuning, reasoning, evaluation, and optimization. Within this framework, both numerical and semantic recipes can be continuously learned from literature corpora and robot-generated corpora, enabling iterative refinement of the RLM. Formulas and parameters are encoded into machine-readable recipes, translated into robot-executable commands, and returned as structured feedback after fabrication and characterization. In this way, the system establishes a closed-loop workflow linking recommendation, execution, validation, and model improvement.
The hardware system upgrades an earlier robotic synthesis system into a full-device fabrication system for perovskite solar cells. A digital twin serves as a real-time software–hardware interface, translating model-generated recipes into executable robotic instructions while synchronizing experimental states and feedback. The 11 robotic boxes form an enclosed and interconnected environment for synthesis, fabrication, and characterization. Altogether, the system includes 101 functional modules, more than 1,500 components, and 4,300 controllable parameters, reconstructing traditionally fragmented glovebox-based manual operations into coupled robotic execution.
According to the researchers, the key advance is the integration of three capabilities within one closed-loop AI–robotics framework: controllable fabrication of full perovskite solar cell devices by robotic boxes, robotic characterization that converts high-throughput experimental outputs into structured mechanism-related evidence, and a domain-specific RLM that is continuously trained to improve recipe recommendation, mechanistic reasoning, and subsequent robotic execution.
The significance of the work extends beyond perovskite photovoltaics. By integrating a language agent, an RLM, robotic fabrication, robotic characterization, and feedback-driven optimization into one research framework, the study provides a practical route toward next-generation materials research tools. More broadly, this work highlights a paradigm shift away from manual discovery, providing a scalable architectural foundation for materials intelligence. In the longer term, such AI and robotics systems could be deployed in extreme environments to support on-site intelligent materials manufacturing.
The article, titled “Agentic Robotic Boxes for Perovskite Solar Cell Fabrication with Recipe Language Model,” was authored by Zijian Chen, Wenjin Yu, Chuang Wu, Feibei Chen, Zixuan Wang, Chao Zhou, Yimeng You, Shaojie Li, Qiyuan Zhu, Ning Ma, Yao Sun, Donghui Li, Billy Fanady, Shengchou Jiang, Zhongliang Yan, Shumin Zhou, Liang Li, Chang-Yu Hsieh, Yang Bai, Lixin Xiao, Chi-yung Chung, Ching-chuen Chan, Zhanfeng Cui, Michael Grätzel, Haitao Zhao. It was published in the journal Engineering. Full text of the open access paper: https://doi.org/10.1016/j.eng.2026.04.002. For more information about Engineering, visit the website at https://www.sciencedirect.com/journal/engineering.