It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Tuesday, May 12, 2026
Generative artificial intelligence can significantly reduce the number of animal experiments
Between 30 and 50 percent fewer mice for pharmacological research experiments
FRANKFURT. In early phases of drug development, new active substances are tested in animals – alongside numerous other experimental methods. Researchers face a dilemma: on the one hand, for ethical reasons, they aim to keep the number of animals used in an experiment as low as possible. On the other hand, animal experiments must include enough animals to produce reliable and representative results, for example to determine whether a new drug candidate produces a specific effect.
Professor Jörn Lötsch, data scientist and clinical pharmacologist at Goethe University, in cooperation with computer scientist Professor Alfred Ultsch from Philipps University Marburg—neither of whom conducts animal experiments himself—has developed a generative artificial intelligence called genESOM. genESOM is based on a network of thousands of artificial neurons that “learns” the internal structure of a dataset. This allows it to expand the volume of experimentally obtained data and simulate a larger number of animals in the experiment than were actually used.
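The release does not publish genESOM's code, but the core idea – a self-organizing map that learns a dataset's internal structure and then synthesizes new points near its learned nodes – can be sketched in a few lines of NumPy. Everything here (grid size, learning schedule, jitter scale) is an illustrative assumption, not the published method:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, seed=0):
    """Fit a small self-organizing map: each grid node holds a weight
    vector that is pulled toward nearby data points during training."""
    rng = np.random.default_rng(seed)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    weights = rng.normal(size=(grid[0] * grid[1], data.shape[1]))
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = max(1.0 * (1 - epoch / epochs), 0.3)    # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]
            weights += lr * h * (x - weights)
    return weights

def generate(weights, data, n_new, noise=0.1, seed=1):
    """Synthesize new points near the learned map nodes that best
    match randomly chosen real observations."""
    rng = np.random.default_rng(seed)
    picks = data[rng.integers(0, len(data), n_new)]
    bmus = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in picks])
    return weights[bmus] + rng.normal(scale=noise, size=(n_new, data.shape[1]))

# Toy example: 18 "animals" with two measured variables
rng = np.random.default_rng(42)
real = rng.normal(loc=[0, 0], scale=0.5, size=(18, 2))
som = train_som(real)
synthetic = generate(som, real, n_new=18)
print(synthetic.shape)  # (18, 2)
```

The synthetic rows sit on the structure the map has learned, which is why they "integrate into the learned data structure" rather than being drawn from an assumed distribution.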
Integrated Error Monitoring
To train the AI, the scientists used existing data from a previously published mouse study conducted at Fraunhofer ITMP. The research team achieved two key innovations: first, training the AI to generate new data points based on the study data that integrate into the learned data structure as if they had been obtained in real experiments.
The second innovation was integrating error monitoring directly into the process of generating new data points. Generative AI methods generally risk amplifying not only the relevant signal but also noise and random variation. This problem is known as error inflation and can lead to variables that are actually insignificant being incorrectly identified as treatment-relevant (so-called false-positive variables).
By deliberately separating the learning phase from the synthesis phase, it becomes possible to introduce an artificial error signal into the process and precisely measure its propagation. This results in a data-driven stopping criterion that halts data generation before scientific validity is compromised.
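The stopping criterion can be illustrated with a toy version: append a pure-noise "canary" variable to the real data, augment in rounds with any generator, and halt as soon as a between-group test on the canary turns spuriously significant. The generator, threshold, and test below are stand-ins, not the published procedure:

```python
import numpy as np

def t_stat(a, b):
    """Welch t-statistic between two samples."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def augment_with_canary(group_a, group_b, synthesize, max_rounds=20, t_crit=2.1):
    """Grow both groups with synthetic rows, but stop as soon as the
    injected pure-noise 'canary' column looks significant between the
    groups: at that point the generator is amplifying random variation."""
    canary_rng = np.random.default_rng(0)
    a = np.column_stack([group_a, canary_rng.normal(size=len(group_a))])
    b = np.column_stack([group_b, canary_rng.normal(size=len(group_b))])
    for round_ in range(max_rounds):
        a = np.vstack([a, synthesize(a)])
        b = np.vstack([b, synthesize(b)])
        if abs(t_stat(a[:, -1], b[:, -1])) > t_crit:
            return a[:, :-1], b[:, :-1], round_  # halt: error inflation detected
    return a[:, :-1], b[:, :-1], max_rounds

rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 1.0, size=(6, 3))  # six animals per group
group_b = rng.normal(0.8, 1.0, size=(6, 3))

def resample(x):
    """Toy generator: jittered resampling of existing rows."""
    picks = x[rng.integers(0, len(x), max(len(x) // 2, 1))]
    return picks + rng.normal(scale=0.1, size=picks.shape)

a, b, rounds = augment_with_canary(group_a, group_b, resample)
print(a.shape[1], rounds)  # canary column removed; stopping round reported
```

Because the canary carries no real signal, any "significance" it acquires measures exactly the false-positive inflation the authors want to cap.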
AI Training with Published Study Data
genESOM passed a practical test using data from a preclinical study on a multiple sclerosis model. In the original study, 26 mice were divided into three treatment groups to investigate the effects of an experimental drug. Lötsch and Ultsch reduced the dataset to 18 animals (six per group) to simulate a smaller experiment. When they analyzed this reduced dataset, all previously detected treatment effects disappeared completely: statistical tests showed no significance, and machine learning methods could not distinguish between the treatment groups. After augmenting the reduced dataset with additional data points using genESOM, all effects of the full experiment reappeared at the original level of significance – without introducing relevant false-positive findings. Alternative AI methods, including complex deep-learning neural networks tested by the researchers, failed in this case.
Lötsch explains: “We have now tested a number of datasets in a similar way and can say today: with genESOM, the number of animals used in exploratory research can be reduced by 30 to 50 percent while maintaining scientific validity.” However, the data scientist emphasizes that genESOM can only learn from data obtained in real animal experiments. Nor can the number of laboratory animals be reduced arbitrarily: “If too few animals are included in an experiment and the number is then simply supplemented using generative AI, the experiment could quickly become scientifically worthless due to the amplification of random findings.” Nevertheless, Lötsch is convinced: “With genESOM, we can make an important contribution to reducing the number of animal experiments in large areas of preclinical research.”
The project was funded by the German Research Foundation (DFG) under the title “Generative artificial intelligence-based algorithm to increase the predictivity of preclinical studies while keeping sample sizes small.”
Publications: Jörn Lötsch, Benjamin Mayer, Natasja de Bruin, Alfred Ultsch: Self-organizing neural network-based generative AI with embedded error inflation control enhances effective knowledge extraction from preclinical studies with reduced sample size. Pharmacological Research (2026) https://doi.org/10.1016/j.phrs.2026.108159
Jörn Lötsch, André Himmelspach, Dario Kringel: Dimensionality-modulated generative AI for safe biomedical dataset augmentation. iScience (2026) https://doi.org/10.1016/j.isci.2025.114321
Alfred Ultsch, Jörn Lötsch: Augmenting small biomedical datasets using generative AI methods based on self-organizing neural networks. Briefings in Bioinformatics (2024) https://doi.org/10.1093/bib/bbae640
Article Publication Date
11-May-2026
Robots, AI to help shipbuilding stay on track
American and Japanese researchers will develop robots and AI to help shipbuilders pivot when the built ship deviates from the planned design
As ships are built, internal parts—pipes, cables and equipment—can arrive out of order, and scheduling pressure can cause parts to be installed such that the remaining parts no longer fit as expected.
Re-installing parts could delay construction, but robotic and AI assistants can help shipbuilders catch problems early and predict issues ahead of time, as well as suggest solutions.
University of Michigan Engineering leads a team of American researchers developing the technology, with a $6.2M grant from the Japanese Ministry of Land, Infrastructure, Transport and Tourism.
Autonomous robots and AI models could help shipyard workers catch when a ship's built structure differs from design drawings, allowing workers to fix problems or adapt sooner. University of Michigan Engineering is leading the American arm of an international project to develop such a system.
Funded with a $6.2 million grant from the Japanese Ministry of Land, Infrastructure, Transport and Tourism, the collaboration will design and prototype AI and robot teammates to track what was actually built inside the growing ship and compare it to a digital twin of the intended structure. The system will then create reports of mismatches that workers can use to make adjustments.
"We want to build a co-pilot system that uses AI and robotics to take some of the detective work off workers' shoulders," said Alan Papalia, U-M assistant professor of naval architecture and marine engineering and the principal investigator of the American research team. "The system should automatically map what's installed, identify where reality is drifting from the design, and suggest workable alternatives when something needs to change."
Papalia's team includes researchers from U-M and the Massachusetts Institute of Technology. The project is funded through the first quarter of 2027 and overseen by the Monohakobi Technology Institute, an R&D Center within NYK Line, a global shipping and logistics company based in Japan.
"It's very complementary to our other research projects led by Japanese universities, in which the main focus is robots for automation of hull construction and steel welding," said Hideyuki Ando, managing director of the Monohakobi Technology Institute.
"We wanted to partner with the University of Michigan because of their unique status as a high-output research university with a dedicated department for naval architecture and marine engineering."
Helping construction stay on track
The American team is developing technology to help shipyard workers with outfitting—the installation of pipes, cables, electrical systems and other equipment inside the ship. Hundreds of thousands of individual components have to be placed inside confined, changing spaces, and scheduling pressure often causes the outfitting schedule to be dictated by crew and part availability rather than an ideal build sequence.
In the shifting build schedule, workers can find that parts don't fit as they expected and the original drawings sometimes prove impractical as outfitting progresses. Compartments may have closed earlier than expected, and the shortest route to an electrical box or pipe may be blocked. If issues aren't caught early, some installations may need to be reworked, which could delay delivery of the ship.
To help workers pivot, the robots will be designed to roam the growing ship structure and collect LiDAR and camera data that will be fed to an AI model along with other human-made measurements. The AI model will then construct a digital model of the built structure to be compared with the intended design. With the digital model, the AI will look for deviations from the plan and predict problems that may arise based on how equipment has been installed.
When the model finds a problem—such as a pipe that no longer fits as expected or a build sequence that will likely be disrupted—the system will generate a list of potential solutions and the tradeoffs between them. With that information, workers can verify problems and decide how to resolve them. The entire robotic system will be automated to help alleviate some of the burden of verifying that construction is on track, but the AI model will also flag when and where it has insufficient sensor data, so that people can help fill in gaps as needed.
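At its core, the deviation report compares as-built scan points against the design model. A minimal sketch, assuming both are available as 3D point sets in a common coordinate frame (the real system would work on full LiDAR point clouds with a spatial index):

```python
import numpy as np

def deviation_report(design_pts, scanned_pts, tol=0.05):
    """For each scanned point, distance to the nearest design point;
    points farther away than `tol` (metres, say) are flagged as
    mismatches between the as-built structure and the digital twin."""
    # brute-force nearest neighbour: fine for a sketch, use a KD-tree at scale
    d = np.linalg.norm(scanned_pts[:, None, :] - design_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    flagged = np.where(nearest > tol)[0]
    return nearest, flagged

design = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
scan = np.array([[0.01, 0.0, 0.0], [1.0, 0.3, 0.0], [2.0, 0.01, 0.0]])
dists, flagged = deviation_report(design, scan)
print(flagged)  # [1] – the middle point drifted 0.3 m off its planned run
```

Flagged indices, together with their deviation magnitudes, are exactly the raw material for the mismatch reports workers would review.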
Training shipbuilding helpers
To train the AI to understand the robot's images of the ship and identify problems, the researchers will create a synthetic dataset by simulating the shipbuilding process many times. The researchers will also interview tradespeople at shipyards in the U.S. and Japan to ensure that the AI matches how skilled workers reason on the job and provides realistic suggestions.
Once trained, the AI could potentially run on an offline workstation, on a remote server wirelessly connected to the robot, or on the robot itself.
The robots and AI models will be tested with a new physical model of a ship section, which the researchers call the Shipbuilding Test Block. The model will be reconfigurable so that it can represent many different stages of outfitting, shipboard systems and shipbuilding issues.
The roles of American team members include:
Development of robotic systems and algorithms for ship outfitting, led by Papalia
Establishment of shipyard collaborations, managed by Dave Singer, a professor of naval architecture and marine engineering
Interviews with tradespeople and the integration of human knowledge, led by Matt Collette, professor of naval architecture and marine engineering; Leia Stirling, professor of robotics and industrial and operations engineering; and Patricia Alves-Oliveira, assistant professor of robotics
Design and production of the Shipbuilding Test Block, led by Thomas McKenney, associate professor of practice in naval architecture and marine engineering
Development of AI models that can process multiple kinds of data to help find optimal solutions, led by Faez Ahmed, associate professor of mechanical engineering at MIT.
The complementary Japanese projects are led by Yokohama National University, Osaka University, Osaka Metropolitan University and the National Maritime Research Institute.
Reasoning like a human: New prompting strategy boosts AI accuracy in healthcare advice
New study finds that mimicking human intuition helps ChatGPT better identify when patients can safely use self-care.
(Toronto, May 11, 2026) Researchers at Technische Universität Berlin have discovered that teaching Large Language Models (LLMs) to mimic human intuition and reasoning significantly improves their ability to provide accurate medical care-seeking advice. The study, published in JMIR Biomedical Engineering from JMIR Publications, suggests a paradigm shift in prompt engineering: moving away from computer-focused instructions toward strategies rooted in applied psychology.
As millions of users turn to tools like ChatGPT for health advice, a persistent issue remains: AI often defaults to emergency or professional care recommendations, even for minor issues, out of extreme caution. This over-triage can lead to unnecessary healthcare costs and patient anxiety.
The Breakthrough: Naturalistic Decision-Making (NDM)
The research team, led by Marvin Kopka and Markus A. Feufel, tested 10 different ChatGPT models (including the newest GPT-4o and GPT-5 series) using prompts inspired by Naturalistic Decision-Making (NDM). Unlike traditional logic, NDM focuses on how human experts make high-stakes decisions under uncertainty.
The study utilized two specific psychological frameworks:
Recognition-Primed Decision-Making (RPD): Instructing the AI to match the patient’s symptoms to typical cases and mentally simulate the outcome.
Data-Frame Theory: Tasking the AI to build a mental frame of the situation and constantly question it as new data emerges.
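As an illustration only (the study's exact prompt wording is not reproduced in this release), prompts in the spirit of the two frameworks might be assembled like this:

```python
# Hypothetical prompt templates modelled on the two NDM frameworks;
# the wording and helper names are assumptions, not the study's prompts.
RPD_PROMPT = (
    "You are an experienced triage nurse. Recall typical cases that "
    "resemble the patient's symptoms, mentally simulate how each would "
    "play out, and then recommend: self-care, see a doctor, or emergency."
)
DATA_FRAME_PROMPT = (
    "Build an initial frame of what is going on with this patient. "
    "As you read each new detail, ask whether it still fits the frame; "
    "if not, revise the frame before giving your recommendation."
)

def build_messages(system_prompt, patient_description):
    """Assemble a chat-completion payload with the NDM system prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": patient_description},
    ]

msgs = build_messages(RPD_PROMPT, "Mild sore throat for two days, no fever.")
print(msgs[0]["role"])  # system
```

The point of both templates is the same: they replace "answer this question" with an explicit description of how an expert would reason under uncertainty.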
Key Results
Significant Accuracy Boost: NDM-inspired prompts increased overall accuracy across all models. The most notable gains were in self-care advice, which jumped from a meager 13.4% with standard prompts to nearly 30% with NDM reasoning.
Activating "Thinking" in Simpler Models: Non-reasoning models (which typically failed to identify self-care cases) began providing accurate, nuanced advice when given a "human reasoning blueprint."
Safety Maintained: While the AI became better at identifying when it was safe to stay home, it maintained its high accuracy in identifying true emergencies.
“When testing AI, we too often give it perfect information and then see that it performs extremely well,” said author Marvin Kopka. “But many problems in the real world are ill-defined. We have good models for how experts make decisions in such situations, so using them as prompts seemed like an obvious next step. I hope that applying human decision-making to LLMs will help us develop AI tools that are also useful in real-world decision-making.”
Bridging the Gap to Personalized Medicine
The study suggests that in real-world situations, where medical data is often messy or incomplete, a "reasoning blueprint" based on human cognition can be more effective than standard computational logic. By instructing the AI to simulate outcomes and question its own initial "frames" of a situation, the researchers were able to mitigate the common AI tendency toward over-caution.
While these findings mark a significant step forward in making LLMs more effective partners in clinical decision-making, the team notes that the model is currently best suited for controlled environments. Future research will be essential to determine if these NDM-inspired prompts translate into better decision support for everyday users in non-standardized settings.
Recognition for Excellence
About the Author Team: The research was conducted by Marvin Kopka and Markus A. Feufel at the Division of Ergonomics, Department of Psychology & Ergonomics (IPA) at Technische Universität Berlin. Their work focuses on human factors and the safe integration of AI into human decision-making environments. Marvin was recently recognized as one of the five winners of the 2025 JMIR Publications Early Career Researcher Award, an honor that underscores the caliber and impact of the research presented in this study.
Original article: Kopka M, Feufel M. Increasing Large Language Model Accuracy for Care-Seeking Advice Using Prompts Reflecting Human Reasoning Strategies in the Real World: Validation Study. JMIR Biomed Eng 2026;11:e88053
JMIR Publications is a leading open access publisher of digital health research and a champion of open science. With a focus on author advocacy and research amplification, JMIR Publications partners with researchers to advance their careers and maximize the impact of their work. As a technology organization with publishing at its core, we provide innovative tools and resources that go beyond traditional publishing, supporting researchers at every step of the dissemination process. Our portfolio features a range of peer-reviewed journals, including the renowned Journal of Medical Internet Research.
Head office: 130 Queens Quay East, Unit 1100, Toronto, ON, M5A 0P6 Canada
Media contact: communications@jmir.org
The content of this communication is licensed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, published by JMIR Publications, is properly cited.
Increasing Large Language Model Accuracy for Care-Seeking Advice Using Prompts Reflecting Human Reasoning Strategies in the Real World: Validation Study
Smart AI gives electric vehicle batteries 23 per cent longer life – without increasing the charging time
Fast charging shortens the life of vehicle batteries, but is necessary on longer journeys with electric vehicles. Researchers at Chalmers University of Technology, Sweden, have now developed a new AI method that adapts fast charging to the health of the battery. Their study shows that battery life can be increased by almost 23 per cent without extending the charging time. All that is required is an update of the vehicle’s software.
When individuals or companies consider acquiring electric vehicles, the possibility of fast charging is an important factor.
“For taxis or heavy vehicles in industry, for example, access to fast charging means a lot, but this is also true for passenger cars. Although private motorists usually charge their electric cars at home, the availability of fast charging outside the home is a crucial factor, as it facilitates commuting and driving over longer distances,” says Changfu Zou, professor at the Department of Electrical Engineering at Chalmers.
Electric vehicle batteries currently have a life of approximately 8-15 years*, depending on use and charging. Several studies of the European EV market* show that consumers who are considering buying an EV are concerned about the limited life of batteries.
The requirement for efficient fast charging is also in conflict with battery health, as such charging is stressful for the batteries and shortens their life.
Changfu Zou has taken on this challenge with Meng Yuan, Assistant Professor at Victoria University of Wellington, New Zealand, and a former researcher at Chalmers. In the recently published study, they show that it is possible to increase the life of batteries without significantly increasing the charging time – with the help of artificial intelligence.
Adapting charging to battery health
In the study, the researchers present an AI-based charging strategy that adapts the current during each fast charge to the battery’s chemistry and ‘state of health’. The adapted charging extends battery life by around 23 per cent compared to the standard method today. At the same time, the charging time is unaffected, give or take a few seconds.
“We show that it is possible to charge more or less as fast as today, but with significantly less long-term degradation of the battery,” says Meng Yuan.
When a battery is charged fast, a large current is forced into the various cells, which causes a greater risk of chemical side reactions, among other things. One of the most problematic is known as lithium plating, in which metallic lithium precipitates on the electrode instead of being stored correctly in the battery’s structure. This can reduce capacity and may also affect safety, as unevenness in the structure of the lithium can, in a worst case scenario, cause a short circuit.
“The risk of lithium plating increases with the age of the battery. However, the standard methods of charging today use the same current and voltage regardless of whether the battery is new or has been used for years,” says Meng Yuan.
Short charging time and less wear and tear
The new, AI-based charging strategy is based on reinforcement learning**, in which the right actions are rewarded and thus reinforced. The training environment consisted of a model of one of the most common electric vehicle batteries on the market and a simulation of the parameters that have an impact on both charging time and battery health.
The AI model was trained to adapt the charging according to how charged or discharged the battery was at the time of charging. It also needed to take into account the overall health of the battery, as this is crucial to both capacity and electrochemistry. The result was a charging strategy that both keeps the charging time short and minimises harmful reactions.
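The training loop can be illustrated, in heavily simplified form, as a learner that discovers a gentler current for an aged battery. The reward function, state buckets, and C-rates below are toy assumptions; the actual study used reinforcement learning on a detailed electrochemical battery model:

```python
import numpy as np

# Minimal sketch as a contextual bandit: for each "state of health"
# bucket the agent learns which charging C-rate best trades charging
# speed against a degradation penalty that grows with battery age
# (a stand-in for lithium-plating risk).
rng = np.random.default_rng(0)
currents = np.array([0.5, 1.0, 2.0])  # candidate C-rates
n_soh = 3                             # 0 = fresh ... 2 = aged
Q = np.zeros((n_soh, len(currents)))  # learned value of each choice

def reward(soh, a):
    c = currents[a]
    # speed benefit minus an ageing penalty rising with current and age
    return c - 0.25 * c ** 2 * (1 + soh) + rng.normal(scale=0.05)

for t in range(5000):
    soh = rng.integers(n_soh)
    # epsilon-greedy: mostly exploit the best-known current, sometimes explore
    a = rng.integers(len(currents)) if rng.random() < 0.1 else int(np.argmax(Q[soh]))
    Q[soh, a] += 0.05 * (reward(soh, a) - Q[soh, a])

print(currents[np.argmax(Q, axis=1)])  # gentler currents for older batteries
```

Even this toy recovers the paper's qualitative message: the learned policy charges a fresh battery hard and an aged one gently, instead of applying one fixed profile to both.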
“Our study shows that smart adaptation of the current during charging, taking into account the changing electrochemical state of the battery, can maximise both the performance and the life of the battery,” says Changfu Zou.
Easy to implement – but adaptation required
The new charging strategy is both easy and cost-effective to implement, according to the researchers: in principle, it could be implemented through software updates in the vehicle's battery management systems. However, some adaptation is needed for the method to be used generally.
“There are not so many different battery types today, but the method needs to be calibrated for it to be used by everyone. Using transfer learning, we can take advantage of what our AI model has already learned, and thus adapt the AI model to new batteries more quickly,” says Changfu Zou.
The next step is to test the method directly on physical batteries. The researchers hope that the AI-based charging strategy will make a significant contribution to the electrification of the transport sector.
“To reduce emissions and transition to a fossil-free society, it is important for people to be prepared to switch to electric vehicles. The possibility of fast charging, combined with an increased battery life, are important driving forces,” says Meng Yuan.
“And for the automotive industry, an almost 23 per cent increase in battery life can mean lower warranty costs, better resale value and more efficient use of critical raw materials,” says Changfu Zou.
**Reinforcement learning is a machine learning method in which an algorithm learns by interacting with an environment, gradually improving its decisions based on the feedback it receives.
This work was supported by the European Union’s Horizon Europe research and innovation programme through the Marie Skłodowska-Curie Actions Postdoctoral Fellowships, the Swedish Research Council, and the Swedish Foundation for International Cooperation in Research and Higher Education.
More about fast charging and battery life
An electric vehicle battery today has a life of approximately 8-15 years, depending on use and charging (1). The capacity of the battery gradually decreases with age. Volvo Cars’ electric vehicles, for example, come with a battery warranty of eight years or 160,000 kilometres (2).
In the study, the researchers measured the battery’s life in equivalent full cycles (EFC) – that is, how many full charge and discharge cycles the battery can withstand before the capacity drops to 80 per cent of its original value. At this limit, the battery still works, but is noticeably degraded and has a shorter range and reduced power (3).
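The EFC count itself is simply cumulative charge throughput divided by nominal capacity; the pack size below is an illustrative example, not a figure from the study:

```python
def equivalent_full_cycles(throughput_kwh, nominal_kwh):
    """EFC: cumulative energy throughput divided by nominal pack capacity."""
    return throughput_kwh / nominal_kwh

# Hypothetical example: 30,000 kWh of cumulative charging on a 75 kWh pack
print(equivalent_full_cycles(30000, 75))  # 400.0
```

Counting in EFC lets partial charges be compared fairly: two half-charges count the same as one full cycle.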
Fast charging generally accounts for about 10-12 per cent of all charging, according to an analysis of 22,000 electric vehicles in the United States, Canada and Europe (4). Fast charging is more commonly used by long-distance commuters and those without access to home charging. The use of public charging, including fast charging, is also higher in regions where fewer people have the opportunity to charge at home, such as southern Europe and China (5).
A computational method combining generative AI with atomistic simulations can identify promising platinum alloy catalyst structures for hydrogen fuel cells, report researchers from Science Tokyo. Their approach addresses a longstanding challenge in catalyst design and consistently produces high-performing candidates from several material combinations.
Proton exchange membrane fuel cells (PEMFCs) are a promising clean energy technology that can generate electricity by combining hydrogen and oxygen into water. However, their performance depends heavily on a chemical step known as the oxygen reduction reaction (ORR), which requires an efficient catalyst to proceed at practical rates. Although platinum (Pt) remains the standard ORR catalyst in PEMFCs due to its remarkable electrochemical properties, its high cost and scarcity are barriers to large-scale adoption. As a result, researchers have turned to platinum-based alloys as less expensive alternatives that still maintain strong catalytic performance.
Designing these alloy-based catalysts, however, is far from straightforward. The number of possible atomic arrangements in alloy materials is enormous, making it impractical to test every candidate through experiments or computational methods like density functional theory. At the same time, catalysts must satisfy more than one requirement; they need to be highly reactive for ORR, but also stable under real operating conditions. Most machine learning-based approaches address these properties separately and thus lack the ability to propose atomic structures that fulfill both criteria simultaneously. How can we search for suitable alloy designs more efficiently?
In a recent study, Associate Professor Atsushi Ishikawa of the School of Environment and Society at Institute of Science Tokyo, Japan, together with graduate student Taishiro Wakamiya, developed a new strategy to address this challenge. Their work, published in the journal npj Computational Materials on April 14, 2026, introduces a method that combines atomistic simulations with generative artificial intelligence to design alloy catalysts for the ORR.
The proposed approach hinges on two key tools. The first is a neural network potential (NNP) model—a machine learning model trained on quantum mechanical calculations that can quickly estimate key material properties. The second is a generative model known as a conditional variational autoencoder (CVAE), which can propose new atomic structures based on desired properties. In this case, the model was trained to target both low overpotential (a measure of catalytic activity) and low alloy formation energy (a measure of stability).
The workflow operates as an iterative loop, with the NNP model evaluating the performance of proposed alloys and the CVAE refining them and feeding them back to the NNP stage. Over multiple iterations, this process gradually shifts the alloys toward better-performing arrangements. When applied to Pt–nickel alloys, the method generated structures that met overpotential and formation energy criteria simultaneously. Notably, the model also rediscovered known design principles by itself, such as how platinum-rich surface layers can enhance ORR activity.
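The loop structure, stripped of its machine learning internals, looks roughly like the sketch below. The scoring and proposal functions are trivial numerical stand-ins for the NNP and the CVAE, chosen only so the loop runs end to end:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    """Stand-in for the NNP: returns (overpotential, formation_energy)
    for a candidate described by a 2-feature vector."""
    return np.abs(x[0] - 0.3), x[1] ** 2 - 0.1

def propose(seeds, n=50, spread=0.2):
    """Stand-in for the CVAE: sample new candidates around structures
    that scored well in the previous round."""
    picks = seeds[rng.integers(0, len(seeds), n)]
    return picks + rng.normal(scale=spread, size=picks.shape)

pool = rng.normal(size=(50, 2))  # limited initial data
for round_ in range(10):
    scored = np.array([score(x) for x in pool])
    ok = (scored[:, 0] < 0.5) & (scored[:, 1] < 0.0)  # both criteria at once
    survivors = pool[ok] if ok.any() else pool
    pool = propose(survivors)

final = np.array([score(x) for x in pool])
hits = ((final[:, 0] < 0.5) & (final[:, 1] < 0.0)).mean()
print(round(hits, 2))  # fraction of candidates meeting both criteria
```

Each pass evaluates, filters on activity and stability together, and regenerates around the survivors, which is the essential shape of the published evaluate-and-refine cycle.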
The team further demonstrated the versatility of their workflow by extending it to multiple alloy systems, including Pt–titanium and Pt–yttrium. “The present work demonstrates that the combined use of atomistic calculations and the CVAE provides a general computational screening method that can produce new alloy surface structures satisfying both activity and stability criteria from limited initial data,” explains Ishikawa.
Beyond fuel-cell catalysts, the researchers believe their framework could have wide-ranging applications. “The newly developed workflow may be applicable to a broad range of materials challenges, including water electrolysis for hydrogen production, battery electrode materials, and catalysts for chemical processes,” concludes Ishikawa.
By enabling faster and more targeted exploration of complex material spaces, this work could help accelerate the development of sustainable energy technologies.
***
About Institute of Science Tokyo (Science Tokyo)
Institute of Science Tokyo (Science Tokyo) was established on October 1, 2024, following the merger between Tokyo Medical and Dental University (TMDU) and Tokyo Institute of Technology (Tokyo Tech), with the mission of “Advancing science and human wellbeing to create value for and with society.”
Prof. LI Ping, Dean of the School of Humanities and Social Science and Chair Professor of Psychology and Cognitive Science at HKUST (right) and Dr. PENG Yingying, HKUST Postdoctoral Fellow and the paper’s first author (left). Prof. Li led the research team to find that a brief one-on-one pre-lecture conversation—whether with a human or an AI instructor—improves students’ neural synchrony and learning outcomes.
Millions of students worldwide have long relied on self-paced learning through pre-recorded video lectures, a model that forms the backbone of massive open online courses (MOOCs) and large-scale online education. Since the COVID-19 pandemic, dependence on video-based online learning has increased significantly, with learner participation rising sharply. However, this expansion has also been accompanied by a widespread decline in student engagement, undermining overall learning outcomes.
A research team at The Hong Kong University of Science and Technology (HKUST), led by Prof. LI Ping, Dean of the School of Humanities and Social Science and Chair Professor of Psychology and Cognitive Science, has found that a brief one-on-one pre-lecture conversation (8–10 minutes) — whether with a human or an AI instructor — improves students’ neural synchrony and learning outcomes.
Human and AI instructors achieve comparable learning outcomes, but through different neural pathways. Human interaction engages both cognitive scaffolding and strong social-emotional processing, mediated by gaze alignment, while AI interaction supports more top-down cognitive processing. The study shows that AI-led and human-led pre-class interactions yield statistically indistinguishable learning outcomes across recall, comprehension, and knowledge transfer.
How the Study Was Conducted
The research team recruited 57 university students and randomly assigned them to three groups:
• Group 1 (No interaction): Watched a 14-minute video lecture with no prior student-teacher conversation.
• Group 2 (Human instructor interaction): Engaged in a brief structured face-to-face conversation (8–10 minutes) with a human instructor beforehand.
• Group 3 (AI instructor interaction): Participated in a similarly timed interaction with an AI instructor that closely resembled the human instructor in appearance and voice. The AI instructor, powered by GPT-4, incorporated speech recognition, content generation, text-to-speech synthesis, and real-time talking-head animation. Students were aware they were interacting with an AI.
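The AI instructor's turn-taking pipeline can be sketched abstractly; every function name below is a hypothetical stand-in for the components the study lists, not its actual implementation:

```python
# One conversational turn of a hypothetical AI instructor: hear, think,
# speak, show. The four stage functions are injected as parameters.
def ai_instructor_turn(audio_in, transcribe, generate, synthesize, animate):
    text = transcribe(audio_in)    # speech recognition
    reply = generate(text)         # LLM content generation
    audio_out = synthesize(reply)  # text-to-speech synthesis
    frames = animate(audio_out)    # talking-head animation
    return reply, frames

# Wire it with trivial stand-ins just to show the data flow
reply, frames = ai_instructor_turn(
    b"...",
    transcribe=lambda a: "What do you already know about this topic?",
    generate=lambda t: f"Good question. Let's build on that: {t}",
    synthesize=lambda r: r.encode(),
    animate=lambda a: [a],
)
print(reply.startswith("Good question"))  # True
```

Keeping the stages as swappable parameters is also how such a system could move between human-recorded and generated components without changing the loop.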
All participants then watched the same 14-minute video lecture inside an MRI scanner, while their eye movements, brain responses, and learning outcomes were recorded.
The Results
The results were striking. Students who spoke with either the human or the AI instructor showed stronger synchronized neural activity in brain regions responsible for information processing, cognitive resource allocation, and socio-emotional responses during subsequent video learning. No significant differences were found between the two groups across recall, comprehension, and knowledge transfer.
By contrast, students who had no pre-lecture interaction did not exhibit these patterns, and their learning outcomes were markedly worse.
Prof. Li explained: "Both groups—students who interacted with a human instructor and those who interacted with the AI instructor—showed similar brain synchrony patterns during learning, and both outperformed students who had no interaction, especially on challenging comprehension questions. This tells us that social scaffolding, even when brief and AI mediated, fundamentally shapes how the brain prepares us to learn."
Different Pathways, Same Destination
While AI-led interaction produced comparable learning outcomes, the study also identified meaningful differences. Students who interacted with the AI instructor reported lower perceived social closeness and showed lower gaze alignment during the lecture compared with those in the human-interaction group.
Brain imaging shows synchronized neural activity in information-processing, cognitive control and socio-emotional regions, while eye-tracking data demonstrates gaze alignment. Although students reported feeling less socially close to the AI instructor and showed lower gaze alignment, their learning outcomes were equally strong.
Both methods proved effective. These findings suggest that effective AI educational systems do not need to perfectly replicate human interaction. AI instruction can succeed by generating sufficient social-emotional resonance while leveraging its computational strengths in retrieving knowledge and delivering personalized learning.
A particularly novel contribution of the study is its demonstration of a multi-stage, reciprocal cascade linking eye movements, brain activity, and learning outcomes — which researchers call "eye-brain-behavior correspondence."
Students who had prior interaction with the human instructor showed significantly higher gaze alignment: their eyes moved in more coordinated directions and followed more similar patterns to one another and to the instructor's gaze. Further analyses revealed that this shared visual attention was associated with better learning, mediated by activity in the superior temporal sulcus (STS), a region involved in social perception and language comprehension. At the same time, alignment in the posterior cingulate cortex — a core hub of the default mode network — appeared to guide coordinated gaze behavior in a top-down fashion.
Dr. PENG Yingying, HKUST Postdoctoral Fellow and the paper's first author, said, "We found bidirectional pathways in the workings of the mind and the brain: when students fixate their attention on the same learning material, their brains align, and aligned brains further help keep their attention in sync. Together, these processes reinforce one another and support learning."
What This Means for Education
This research reveals multiple routes to improving students' online learning. Human interaction engages both cognitive scaffolding and strong social-emotional processing mediated by visual alignment, whereas AI interaction supports more top-down cognitive processing while still providing meaningful emotional support.
Prof. Li remarked, "This points toward a looming future for the social fabric of education, where even an AI instructor can pause, notice a student's subtle changes, and respond with care. These subtle aspects of human communication, if successfully realized in AI-empowered systems, may help cultivate what it means to feel seen, heard, and socially connected in a digital classroom."
As AI continues to evolve at a rapid pace, gaining deeper insight into how AI shapes human cognition and brain function, and how the brain, in turn, adapts to AI, will be critical for the development of scalable and socially enriched learning environments. Such understanding will help ensure that AI enhances, rather than replaces, human-centered, active learning.
Schematic of the student experiment. A total of 57 university students were randomly assigned to three groups. During the experiment, the research team simultaneously recorded the students’ eye movements, brain responses, and learning outcomes.
An early warning system for sepsis, one of the deadliest infections for hospital patients, has been approved for use by the FDA, one of the first AI-based medical tools to get clearance.
The tool, developed by Johns Hopkins University researchers and now commercialized by Bayesian Health, detects sepsis hours faster than doctors and has reduced deaths by nearly 20%.
“Pre-suspicion screening is what creates lead time, and lead time is what changes outcomes in sepsis. Once a clinician already suspects sepsis, the clock has been running — often for hours or even days,” says lead researcher Suchi Saria, a Johns Hopkins professor and director of the AI & Healthcare Lab, who began translating her lab’s research into a real-world system after losing her nephew to sepsis in 2017. “No other cleared test or device monitors for sepsis prior to clinician suspicion.”
Every hour that sepsis detection is delayed significantly decreases a patient’s chance of survival. Sepsis is also easily missed because its symptoms, such as fever and confusion, are common in other medical conditions.
To beat those odds, Saria and a team at Johns Hopkins created the Targeted Real-Time Early Warning System. The federally funded work, which integrates electronic health records with advanced clinical AI, has helped doctors spot sepsis cases nearly two to 48 hours earlier than traditional methods.
The system is reducing sepsis mortality rates by 18% in dozens of hospitals across the United States—a significant advance in addressing a deadly immune response that claims more than 250,000 lives each year.
“It gives physicians an additional set of eyes and ears and could genuinely help save lives,” says Albert Wu, a Johns Hopkins expert in patient safety and a co-investigator on the work. “This is a significant milestone for Johns Hopkins and Dr. Saria’s team.”
In 2023, under the FDA’s Breakthrough Designation, which expedites technologies with the potential to improve care for life-threatening conditions, the technology was deployed at several health systems, including Cleveland Clinic, MemorialCare in California, and the University of Rochester School of Medicine, where it significantly reduced in-hospital mortality, morbidity, and length of stay for patients with sepsis.
“Few clinical AI systems can reason across the full breadth of messy, real-world hospital data and deliver guidance clinicians can reliably act on,” said Saria. “FDA approval is a regulatory first that shifts what the standard of care can be for a condition associated with roughly one in three in-hospital deaths. This represents decades of clinical AI research at Johns Hopkins translated into practice — not just models built in the lab, but technology delivered where it matters: at the bedside.”
FDA clearance also opens the door for hospitals using the system to receive Medicare and Medicaid reimbursement under the New Technology Add-on Payment program, which compensates hospitals for the use of new technologies.
“Suchi’s work has reached a major milestone,” says Ed Schlesinger, dean of Johns Hopkins’ Whiting School of Engineering. “It’s poised to have a significant role in preventing hospital deaths and complications.”
Heart failure care enters the precision era: New drugs, biomarkers, and AI are redefining “one-size-fits-all” treatment
Article by Dr. Francisco Epelde in Current Cardiology Reviews, 2026
Heart failure (HF) remains one of the world’s most urgent cardiovascular challenges—common, costly, and clinically complex. Despite major advances in medications, devices, and care pathways, HF continues to drive high rates of hospitalization and long-term disability, especially among older adults and patients living with multiple comorbidities. A central problem is heterogeneity: people with HF can look similar at the bedside yet have very different underlying biology, trajectories, and responses to therapy.
A recent review in Current Cardiology Reviews synthesizes recent evidence and outlines how the field is moving from broad categories—largely organized around left ventricular ejection fraction (LVEF)—towards a precise, mechanism-based approach that aims to match the right intervention to the right patient at the right time.
A changed therapeutic foundation: benefits across the EF spectrum
One of the most consequential shifts in contemporary HF care is the expansion of therapies beyond traditional EF “silos.” Sodium–glucose cotransporter-2 inhibitors (SGLT2i), originally developed for diabetes, have demonstrated consistent reductions in HF hospitalization and cardiovascular events across LVEF categories, supporting their role as a cornerstone therapy even in many patients without diabetes. This broad efficacy reinforces a growing emphasis on disease mechanisms—metabolic, renal, hemodynamic—rather than relying solely on phenotype labels.
At the same time, angiotensin receptor–neprilysin inhibitors (ARNIs), particularly sacubitril/valsartan, have further advanced outcomes in HF with reduced EF and may offer benefits in selected subgroups of HF with preserved EF. Together with beta-blockers and mineralocorticoid receptor antagonists, these agents help form a contemporary “multi-target” base that is increasingly initiated earlier and more efficiently in clinical practice.
Treating the “right disease”: comorbidities and specific etiologies
Beyond mainstream HF pharmacotherapy, the review underscores progress in diagnosing and treating previously under-recognized HF drivers—especially in older adults. Transthyretin cardiac amyloidosis (ATTR) is highlighted as a now-treatable cause of HF, where timely recognition through non-invasive imaging and biomarker strategies can open the door to disease-specific therapies such as transthyretin stabilizers.
The article also emphasizes iron biology as a modifiable contributor: iron deficiency is common, measurable, and treatable in HF, with intravenous iron formulations improving symptoms and functional capacity in appropriate patients. Conversely, iron overload syndromes can cause restrictive cardiomyopathy and arrhythmias, where advanced imaging (notably cardiac MRI T2*) can quantify myocardial iron and guide chelation therapy decisions.
Devices, monitoring, and the rise of proactive care
Innovation is not limited to medications. Evolving device strategies—cardiac resynchronization therapy (CRT) in selected patients, durable LVAD technologies, and implantable hemodynamic monitors—support a shift from reactive decompensation management to proactive risk detection and personalized fluid optimization. By detecting early physiologic changes, these tools can enable earlier interventions that may prevent hospital admissions and preserve quality of life.
Precision medicine: biomarker panels, multi-omics, and AI
A core theme of the review is the growing feasibility of precision medicine in HF. Traditional biomarkers such as BNP/NT-proBNP remain essential for diagnosis and monitoring, but emerging markers—reflecting fibrosis, inflammation, myocardial injury, and cardiorenal stress—can provide a more multidimensional profile of a patient’s disease. The review describes how combining markers (for example, natriuretic peptides with soluble ST2 and galectin-3) may improve risk stratification and better inform treatment intensity and follow-up planning.
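To make the panel idea concrete, here is a purely illustrative sketch of how several markers might be combined into a single risk tier. The marker names follow the review, but every threshold and the tier labels are hypothetical assumptions for the example, not clinical guidance and not from the review itself.

```python
# Illustrative multi-marker risk tiering; ALL cut-offs below are hypothetical.
def risk_tier(nt_probnp, sst2, galectin3):
    """Count how many markers exceed a (hypothetical) cut-off and map the
    count to a coarse risk tier."""
    flags = 0
    flags += nt_probnp > 1000   # pg/mL, hypothetical cut-off
    flags += sst2 > 35          # ng/mL, hypothetical cut-off
    flags += galectin3 > 17.8   # ng/mL, hypothetical cut-off
    return ["low", "intermediate", "high", "very high"][flags]

# Two of three markers elevated -> "high" tier in this toy scheme.
print(risk_tier(nt_probnp=1500, sst2=40, galectin3=10))
```

A real panel would weight markers by their independent prognostic value rather than counting threshold crossings, but the sketch shows why a multidimensional profile can stratify patients more finely than any single marker.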
Genetic testing and transcriptomic profiling are also beginning to reveal HF “endotypes,” particularly within the diverse HFpEF population, where conventional approaches have historically underperformed. Artificial intelligence and machine learning add another layer: models trained on large datasets can support early detection of deterioration, improve interpretation of echocardiography and ECGs, and generate dynamic risk scores that respond to real-time clinical changes.
The unfinished work: equity, representation, and implementation
While progress is real, the review emphasizes persistent barriers that may limit impact unless addressed head-on. Key patient groups remain underrepresented in clinical trials—especially women, older adults, multimorbid patients, and populations from low- and middle-income settings—reducing confidence that “average trial results” translate fairly to real-world patients. In addition, uptake of guideline-directed medical therapy can be inconsistent due to cost, therapeutic inertia, fragmented care systems, and unequal access to specialty services and digital tools.
A call to action: precision that is practical, interpretable, and equitable
The review concludes that the next era of HF care must move beyond LVEF alone and adopt multidimensional phenotyping (biomarkers, imaging, genomics, comorbidities, and social determinants) alongside multidisciplinary care models. To make precision medicine real—not just aspirational—health systems will need inclusive pragmatic trials, scalable digital infrastructures, value-aligned reimbursement, and a focus on patient-centered outcomes such as function and quality of life.
Article Title: Heart Failure in the Era of Precision Medicine: Advances, Challenges, and Future Directions
In The Crop Journal, researchers introduced Hi4GS, an AI-driven framework improving wheat yield prediction accuracy by 82%. Hi4GS streamlines SNP selection, employs intelligent optimization, and uncovers key genetic markers, enabling transparent genomic insights and advancing cost-effective breeding for global food security.
As the global population grows, increasing wheat (Triticum aestivum L.) yields is critical for food security. While genomic selection (GS) has become a core technology in modern breeding by predicting breeding values using genome-wide markers, it faces a notable hurdle: the "small n large p" problem. With hundreds of thousands of genetic markers (SNPs) but relatively few breeding samples, models often suffer from overfitting and high computational costs.
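The "small n large p" problem can be seen directly in a toy example: with more markers than samples, ordinary least squares has no unique solution, while a regularized model such as ridge regression (one of the ranking methods Hi4GS reportedly draws on) remains well-posed. The data below are simulated, not from the study.

```python
import numpy as np

# Toy "small n, large p" setting: 20 samples, 200 simulated SNP markers.
rng = np.random.default_rng(0)
n, p = 20, 200
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:5] = [2.0, -1.5, 1.0, 0.8, -0.5]   # only 5 markers truly affect the trait
y = X @ true_w + 0.1 * rng.standard_normal(n)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: (X'X + lam*I)^-1 X'y.
    The lam*I term keeps X'X invertible even when p >> n."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# X'X is rank-deficient (rank <= 20 for a 200x200 matrix), so plain OLS
# normal equations would fail; ridge still produces finite coefficients.
w = ridge_fit(X, y, lam=1.0)
print(w.shape, np.all(np.isfinite(w)))
```

The ridge coefficients can then serve as one importance ranking over the markers, which is the role such models play in the pre-screening stage described below.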
In a study published in The Crop Journal, a research team led by Professor Fa Cui from Ludong University, along with colleagues from several agricultural institutions, has unveiled Hi4GS (Hybrid Feature Selection for Genomic Selection). This novel, interpretable framework streamlines high-dimensional genotypic data, improving prediction accuracy and identifying the potential biological "drivers" behind wheat yield.
"Our goal was to move beyond the 'black box' nature of traditional genomic models," says Shanghui Zhang, the study's first author. "By filtering through the noise of tens of thousands of SNPs, Hi4GS allows us to achieve much higher predictive precision with a fraction of the data, while simultaneously uncovering the actual genes that influence yield."
The Hi4GS framework operates through a multi-stage strategy that moves from broad screening and intelligent optimization to deep biological interpretation. Initially, the system tackles the vast amount of genetic data by creating an "Elite SNP Candidate Pool." It does this through a dual-track approach: integrating multiple importance-ranking algorithms (like Ridge regression and GWAS) with a novel weighting scheme, while also using quantity-determining algorithms to capture all potentially valuable genetic information without bias.
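The merging of multiple importance rankings into one pool can be sketched with a weighted Borda count, a standard rank-aggregation scheme. The method names, weights, and SNP labels below are invented for illustration; the paper's actual weighting scheme is not specified here.

```python
# Hypothetical illustration of merging several SNP importance rankings
# (e.g. from ridge regression and GWAS) into one weighted aggregate score.
def aggregate_rankings(rankings, weights):
    """Weighted Borda count: a SNP ranked r-th out of m by one method
    contributes weight * (m - r) points to its aggregate score."""
    scores = {}
    for method, ranking in rankings.items():
        m, w = len(ranking), weights[method]
        for r, snp in enumerate(ranking):
            scores[snp] = scores.get(snp, 0.0) + w * (m - r)
    return sorted(scores, key=scores.get, reverse=True)

rankings = {
    "ridge": ["snp3", "snp1", "snp5", "snp2", "snp4"],
    "gwas":  ["snp1", "snp3", "snp2", "snp5", "snp4"],
}
weights = {"ridge": 0.6, "gwas": 0.4}
elite_pool = aggregate_rankings(rankings, weights)[:3]
print(elite_pool)  # top-ranked SNPs form the "Elite SNP Candidate Pool"
```

Because each method sees the data differently, aggregation retains SNPs that any single ranking might have discarded, which matches the stated goal of capturing valuable genetic information without bias.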
Following this broad screening, Hi4GS employs a Prior-guided Grey Wolf Optimizer (P-GWO) for fine-tuned selection. Unlike standard algorithms that search randomly, this intelligent optimizer focuses its search within the pre-screened 'Elite Pool'.
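A minimal Grey Wolf Optimizer with prior-guided initialization illustrates the P-GWO idea: instead of scattering the initial population at random, the search starts near a point suggested by prior knowledge. The objective here is a toy function, not the paper's SNP-subset fitness, and the simplification is ours.

```python
import numpy as np

def gwo(objective, prior, n_wolves=20, n_iter=100, spread=0.5, seed=0):
    """Sketch of a prior-guided Grey Wolf Optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    dim = len(prior)
    # Prior knowledge guides the starting points instead of a blind random init.
    wolves = prior + spread * rng.standard_normal((n_wolves, dim))
    for t in range(n_iter):
        order = np.argsort([objective(w) for w in wolves])
        alpha, beta, delta = wolves[order[:3]]      # three best wolves lead
        a = 2.0 * (1 - t / n_iter)                  # exploration -> exploitation
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = new / 3.0                   # average pull toward leaders
    best = min(wolves, key=objective)
    return best, objective(best)

sphere = lambda x: float(np.sum(x ** 2))            # toy objective, optimum at 0
best, val = gwo(sphere, prior=np.array([1.0, -1.0, 0.5]))
print(val)
```

In the actual framework the "positions" would encode SNP subsets and the objective would be prediction accuracy, but the prior-seeded initialization is what shrinks the search space in the way the quote below describes.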
"This acts like a navigation map," explains Professor Fa Cui, the corresponding author. "By using prior knowledge to guide the starting point, we find the optimal SNP combination faster and more accurately."
For the first time in this context, the team applied SHAP (SHapley Additive exPlanations) values, a technique from game theory. This allows them to quantify whether a specific SNP has a positive or negative impact on yield and to understand how different markers interact, effectively opening up the model's "black box."
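The game-theoretic idea behind SHAP can be shown exactly on a tiny model: each SNP's Shapley value is its average marginal contribution over all orderings of the features. The "yield model" coefficients and genotype codes below are invented for the example.

```python
from itertools import combinations
from math import factorial

# Toy exact Shapley-value computation for a tiny hypothetical yield model.
coefs = {"snpA": 2.0, "snpB": -1.0, "snpC": 0.5}
x        = {"snpA": 2, "snpB": 0, "snpC": 1}   # this plant's genotype codes
baseline = {"snpA": 1, "snpB": 1, "snpC": 1}   # population-average genotype

def model(genotype):
    return sum(coefs[s] * genotype[s] for s in coefs)

def value(subset):
    """Model output when only SNPs in `subset` take this plant's values;
    the rest are held at the baseline."""
    g = {s: (x[s] if s in subset else baseline[s]) for s in coefs}
    return model(g)

def shapley(feature):
    """Exact Shapley value: weighted average marginal contribution of
    `feature` over all subsets of the other features."""
    others = [s for s in coefs if s != feature]
    n, total = len(coefs), 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += w * (value(set(S) | {feature}) - value(set(S)))
    return total

contrib = {s: shapley(s) for s in coefs}
print(contrib)  # signed per-SNP contributions that sum to the total deviation
```

The signed values show whether each SNP pushes the prediction above or below the baseline, and they always sum to the model's total deviation from the baseline prediction, which is what makes the attribution interpretable.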
In further testing across 11 yield traits in four wheat datasets, the researchers found that GS models using SNPs selected by Hi4GS improved average predictive accuracy by more than 82% compared with using the entire SNP set.
“Furthermore, these findings are not just statistical coincidences,” says Zhang. “The SNPs identified by Hi4GS fell into gene regions at a rate of 9.17%—significantly higher than the genomic background of 5.61%. This confirms that the framework successfully isolates functional biological information.”
The implications for "Breeding 4.0" are significant. By narrowing down thousands of markers to a few dozen high-impact candidate genes, Hi4GS provides a blueprint for developing low-cost, high-efficiency breeding chips.
"This framework is not limited to wheat," notes Cui. "It can be applied to other major crops and animals, providing a powerful tool for genomic-assisted breeding and helping to accelerate the development of high-yielding varieties worldwide."
The team has released the Hi4GS R package as open-source software on GitHub, making this advanced tool available to the global agricultural research community.
###
Contact the author:
Fa Cui
Email address: 3314@ldu.edu.cn
The publisher KeAi was established by Elsevier and China Science Publishing & Media Ltd to disseminate quality research globally. In 2013, our focus shifted to open access publishing. We now proudly publish more than 200 world-class, open access, English language journals, spanning all scientific disciplines. Many of these are titles we publish in partnership with prestigious societies and academic institutions, such as the National Natural Science Foundation of China (NSFC).