It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K. Marx, Letter to F. Engels on the Indian Mutiny)
Monday, May 11, 2026
New book ‘AI TO EYE’ brings together 40+ voices from science, art, and media to ask: how do we really want to live with AI?
Credit: publisher/author/designer of AI TO EYE: Between Code and Conscience
As artificial intelligence reshapes education, healthcare, work, and creativity, public debate too often swings between hype and fear. A new book from SmartBot editorial board member Prof. Robert Riener cuts through the noise — not with technical jargon or a single expert opinion, but with a chorus of human voices.
Artificial intelligence (AI) is transforming the world at a breathtaking pace. Few technologies have penetrated our lives so deeply in such a short time, affecting education, work, health, art, and communication. This rapid transformation both fascinates and unsettles. Public discourse swings between visions of unbounded optimism and apocalyptic warnings. Media and films amplify fears of losing control or employment, while inflated promises can strain trust in the technology itself. Amid all the hype and alarm, what we truly need is in danger of being overlooked: a thoughtful, grounded conversation about how we want to coexist with this new form of intelligence.
AI TO EYE seeks to open that space. Rather than offering a technical manual or a single interpretive voice, the book captures the AI moment as it is currently unfolding, through a carefully curated chorus of perspectives. It brings together contributions from science, business, art, journalism, and media. More than 40 individuals from California’s Silicon Valley and Silicon Beach, the symbolic epicenters of the digital world, share their views. Its contributors include international leaders and visionaries as well as renowned scientists, journalists, artists, composers, film producers, actors, an astronaut, and a Disney executive. Others come from the German-speaking world but maintain close personal or professional ties to California. This mosaic took shape during my research stay in the summer of 2025, when I was a Thomas Mann Fellow in California.
"I aim to examine the relationship between humans and machines not only from a technical standpoint but also from a cultural and societal one. AI TO EYE is neither a textbook nor a collection of scientific reports; it brings together essays and concise statements in a deliberately polyphonic exploration of artificial intelligence’s role in a society in flux. These voices do not advance a single argument. Instead, they challenge and complement one another, revealing tensions and contradictions, and allowing society itself to speak back to the technology that is increasingly shaping it. Together, they paint a vivid, often surprising, picture of how AI is reshaping our self-understanding and what it discloses about us."
— Robert Riener
The cultural engagement with artificial intelligence is by no means new. As early as 1968, in 2001: A Space Odyssey, Stanley Kubrick presented one of the most precise and simultaneously poetic visions of machine intelligence. The onboard computer H.A.L. 9000 initially appears as the ideal rational system, devoted to its purpose: “I am putting myself to the fullest possible use, which is all, I think, that any conscious entity can ever hope to do.” Yet this apparent perfection begins to shift. H.A.L. no longer merely serves the human crew but starts to assert control over them, captured in one of the most iconic lines of technological defiance: “I am sorry, Dave. I’m afraid I can’t do that.” In the end, the system itself seems to unravel, losing not only its function but gaining something eerily human in the process: “Dave, my mind is going. I can feel it. I can feel it. My mind is going …” Since then, this theme has echoed through film history, from Colossus: The Forbin Project to Blade Runner and The Terminator, to Her and Ex Machina. The central question remains the same: Where does the machine end, and where does the human begin?
Today, as AI is becoming an inseparable part of daily life, this question gains renewed urgency. AI TO EYE gathers voices that do not seek to instruct but to explore, that do not judge but observe, and that speak not merely about technology, but about the society that produces, deploys, and negotiates it.
The book invites readers to see AI as a mirror of our times, as an expression of our creativity, our fears, and our desire for understanding and control. It addresses anyone curious about what AI reveals about us and willing to meet it, unflinchingly, eye to eye.
About the content:
The book contains 14 essays, each followed by roughly 15-20 quotes on the corresponding theme. The quotes come from influential, in some cases well-known, figures whom I interviewed during my stay as a Thomas Mann Fellow in California. Here is the list of essays:
Essay 1 From Myths to Machines: How AI Learned to Think. By Robert Riener (Zurich, L.A.)
Essay 2 Chances of AI for Healthcare and Beyond. By Julia Vogt (Zurich)
Essay 3 AI: A Tool for Inclusion? By Robert Riener (Zurich, L.A.)
Essay 4 AI and Education: A Student Perspective. By Luke Reinkensmeyer (Irvine)
Essay 5 Is AI Disrupting the Path from Campus to Career? By Ursula Renold (Zurich)
Essay 6 AI and the Arts: Risks, Possibilities and Human Responsibility. By Kelli Sharp (Irvine)
Essay 7 Aura Farming: Can AI Generate Rizz? By Renée Reizman (L.A.)
Essay 8 The Infinite Rehearsal: Music and AI. By Steven Walter (Bonn)
Essay 9 Reflections on AI Privacy and Security. By Verena Zimmermann (Zurich)
Essay 10 Outgrowing the Paperclip Obsession: There Is Hope That AI Will Become Ethical. By Haewon Jeong (Santa Barbara)
Essay 11 AI and Intellectual Property: Evolution, Disruption, or Both? By Markus Hauschild (Pasadena)
Essay 12 When Algorithms Meet Accountability: AI and the Future of Journalism. By Lukas Görög (Zurich)
Essay 13 Could One Steer Humans and Societies with Generative AI? By Dirk Helbing (Zurich)
Essay 14 After Intelligence: On What Remains Human. By Robert Riener (Zurich, L.A.)
AI-generated images of depression depict more stereotypes and arouse greater stigmatization
So concludes a UPF study that analyzed the views of associations of people with depression, young people, and science and health communication professionals
Images generated using artificial intelligence (AI) depict more stereotypes and stigmas around depression than the images the media use to illustrate the illness. This is the main conclusion of a study on how different groups – including patient associations, young people, and communication professionals – perceive the images the media use when covering depression. “The images generated by AI depict more concepts related to stigma, such as marginalization or social exclusion,” warns Núria Saladié, first author of the study and a member of the Science, Communication and Society Studies Centre (CCS) at Pompeu Fabra University (UPF). According to the authors, reporting on mental health responsibly, without reproducing stereotypes, requires understanding that the technology is not neutral and taking into account the recommendations issued by patient associations.
AI-generated images tend to depict people alone, in shadow or backlit, with their faces hidden and not taking part in any activity. This accentuates stereotypes and stigma and has a negative effect on people with depression. These are the findings of a study published in the journal JMIR Human Factors, which examined how different groups – including patient associations, young people, and communication professionals – perceive the images the media use to depict the illness.
“Many AI-generated images do not reflect the diversity of experiences associated with the disease”, explains Carolina Llorente, also an author of the study and a researcher at the CCS-UPF. Llorente highlights that “being able to take into account the vision of people who have experienced the disease up close has been one of the most valuable aspects to avoid perpetuating stereotypes”.
The study also reveals that when people know that the image has been generated by AI they are more critical than when they do not, which suggests that transparency around the use of AI can influence the way these representations are interpreted. “AI is already being used –and will be increasingly used– in mental health communication”, Saladié explains. And she adds, “If we want this communication to be responsible, we require a more careful and critical approach to the use of AI”.
To be able to communicate news about mental health responsibly, avoiding stereotypes, it must be understood that “AI tools do not generate images neutrally: they respond to the instructions they receive. Therefore, it is important to think carefully about the prompts and review the results critically”, points out Gema Revuelta, director of the CCS-UPF and leader of the study, which concludes that “improving the quality of visual representations related to depression depends on teamwork when pooling the vision and knowledge of patient organizations, mental health experts, science journalists, AI developers and researchers”.
Embodying surgical robots with next-gen AI can safely augment practice if ethical and regulatory questions are addressed, say experts writing today in Frontiers in Science.
A team of pioneering surgeons and researchers from King’s College London says AI-enhanced surgical robotics could enable “true personalized surgery” and enhance the performance, situational awareness, decision-making, and effectiveness of surgical teams.
Their analysis also addresses regulatory questions including reducing risks from systems that continue to learn and change after approval. It also tackles how we can prevent dataset biases from reinforcing inequalities, and how we address the concentration of research and industry in resource-rich nations.
Lead author and robotic urological surgeon Prof Prokar Dasgupta, formerly of King’s College London and Guy’s Hospital, London—who recently performed the UK's first long-distance robotic operation—said: “Using advanced AI and robotics in the operating room is very exciting. The next few years will see intelligent robots impact all stages of surgery—including techniques, emergency responses, team roles, workflows, and assistive functions.”
The authors warn that AI must sustain, not disrupt, operating rooms, and should support advances and refinement in surgical skill, procedure, and technology. Most importantly, its use should be safeguarded by robust human and regulatory oversight, with surgeons remaining the chief decision-makers.
Prof Dasgupta added: “With AI’s promise comes profound implications for clinical practice and the continued safe function of surgical teams. These warrant multistakeholder discussion to ensure clarity of liability, minimization of bias, integration of autonomous robotic systems within surgical teams, global equity, and robust product regulation.”
True personalized surgery
Anticipated advances include AI embedded into surgical robots, known as ‘embodied AI’, linked to sensor-equipped operating rooms that enable spatial understanding, adaptive learning, performance benchmarking, autonomous surgical assistance, and mid-operation feedback to teams.
Future surgical AI will also harness new data streams—gathered from patients, surgical teams, and sensors in robots—to provide real-time mid-operation guidance and decision support to optimize surgical actions.
Predictive AI could also allow surgeons to accurately visualize the outcomes of various actions before taking them—called cause-and-effect recognition. This could in the future be used to help improve patient outcomes.
First author Dr Alejandro Granados from King’s College London said: “Surgery is on the brink of a profound transformation, where technology will not only help predict outcomes but also guide clinicians toward the most optimal, personalized treatment for each patient.”
Regulating adaptive systems
Currently, regulators authorize medical technologies based on their submitted form—but AI-embodied surgical robots present a challenge given their ability to learn, adapt, and change post-approval.
To address this challenge, the authors call for regulatory reforms, including changes to licensing pathways, device classifications, post-market monitoring, and compliance standards, to better fit the higher risk profile of systems that change over time.
Dr Granados said: “AI’s ability to learn presents an unprecedented puzzle. We are at a pivotal time in surgery where we need to begin answering those questions to ensure patients can benefit fully from AI-powered operating rooms.”
Clinical trials, the paper asserts, should adopt standardized metrics for evaluating AI software and assessing human–AI and human–robot interactions. It also recommends that regulators work alongside professional bodies to oversee surgical training as practice transitions from clinical expertise to data-driven approaches.
It also recommends new models of collaboration between academia, industry and healthcare systems in lower income countries to build cost-effective AI and robotic ecosystems from which all can benefit.
Prof Dasgupta said: "We require a new set of frameworks—spanning regulatory and compliance, trial methods, reporting standards, and training approaches—to ensure the ongoing safety and effectiveness of robotics and AI in surgery.”
Dr Granados said: “Realizing this vision on a global scale will require careful stewardship. We must ensure that healthcare professionals and patients everywhere can benefit equitably from the compelling potential of AI and robotics innovation that is coming.”
Human decision-makers
The authors expect future iterations of robotics to operate with ever-greater degrees of autonomy while maintaining essential human oversight.
They describe how the surgeon’s role will shift towards supervision, coordination and high-level decision-making, while nurses, anesthetists and assistants can expect to gain additional skills. They also expect surgical teams to be complemented by clinical data scientists plus AI and robotic integration engineers.
Prof Dasgupta said: “Human surgeons must continue to be the chief decision-makers, and insights from AI models must be presented differently to members of the surgical team, based on their role, if we are to maintain the clear chain of authority necessary for safe surgical practice.”
Dr Granados said: “AI and robotics, strategically deployed in the operating room, will form the foundation of the shift towards systems that learn from every procedure, support surgical teams in real time, and potentially deliver safer, more precise, and better outcomes for patients.
“However, we must ensure that human judgment remains central, while addressing today’s unmet surgical needs and disparities in who benefits from access.”
The study shows why AI tools require more real-world testing beyond lab data before they can be trusted in medicine.
Tools like PanPep AI can help predict how the immune system targets disease but can still miss or misread important signals.
Better-validated AI could speed up drug discovery and immunotherapy, but it’s not ready to guide patient care on its own.
TAMPA, Fla. (May 6, 2026) – Artificial intelligence is increasingly being used to help scientists accelerate drug discovery and search for new treatments. But for AI tools to work effectively, researchers need to know whether they can be validated and applied in real-world situations.
A research team at the University of South Florida is taking a step in that direction by merging AI and immunology in ways that could enhance oncology treatment and the development of new drugs and vaccines.
In an embargoed new study publishing Wednesday, May 6, at 5 a.m. ET in Nature Machine Intelligence, researchers at the USF Health Morsani College of Medicine examined how well AI tools can predict one of the immune system’s most important jobs: recognizing when something does not belong in the body.
That process is central to fighting infections and plays a major role in the development of immunotherapies, which are treatments designed to help a patient’s own immune system attack disease.
“AI tools are playing an increasingly important role in helping researchers develop vaccines, drugs and cancer therapies,” said Dong Xu, professor in the USF Health Informatics Institute. “However, if these tools aren’t carefully tested in real-world conditions, they can produce misleading or biased results.”
The study was led by Xu and Fei He, assistant research professor also with the USF Health Informatics Institute. Xianyu Wang, an intern from the University of Missouri-Columbia, was also a co-author.
Working with an AI model called PanPep — short for Pan-peptide meta learning — the researchers developed a systematic and comprehensive evaluation framework for testing how well computational tools can predict whether certain cells in the body will recognize and respond to antigens, which are substances that trigger immune responses.
The question is crucial to drug discovery, because the immune system’s ability to recognize these targets helps determine whether the body can detect and respond to infections, tumors or vaccines.
Their new framework can be applied to a broad class of immunology prediction problems, including peptide–HLA (human leukocyte antigen) binding; peptide–T-cell receptor interaction; antigen presentation; and other peptide- or antigen-driven interactions. These vital processes help immune cells identify what belongs in the body and what may be a threat.
“Our study tested how well AI tools can predict an important immune-system interaction that could help guide the development of cancer immunotherapies and vaccines,’’ He said. “Our findings highlight the strengths and weaknesses of current AI approaches and provide guidance for building safe, more reliable AI tools for healthcare.”
Immune cells recognize and react to antigens, which are proteins on bacteria, viruses or tumor cells that can act as foreign markers, alerting the immune system to a possible threat.
Adaptive immune cells, including T and B cells, use specific receptors to recognize harmful invaders such as viruses, allergens, toxins or cancer cells. Other immune cells ingest these invaders, break them into pieces of antigens and present those pieces to activate a targeted immune defense.
The team used PanPep and other tools to predict how T-cell receptors bind to antigens. Developed to address the challenge of limited data, the tool can create scenarios to predict binding for unseen or rare peptides, which are small chains of amino acids that can serve as key immune-system targets.
Accurately predicting peptide and T-cell receptor binding allows scientists to identify and design the right “trigger” peptides for specific immune cells. Those trigger peptides could accelerate immunotherapies and save lives.
By narrowing down the best candidates for laboratory testing, researchers can reduce the need for large-scale biological experiments that are time-consuming and costly.
The USF research represents a significant step toward more reliable AI-guided, personalized cancer therapies and vaccines. For example, with tools such as PanPep, scientists may be able to simulate oncology screening processes on computers, potentially reducing time frames from months or years to a matter of days.
If doctors can quickly identify a promising treatment for a person with stage-4 cancer, for instance, it could extend their life. But the authors note that while meta-learning approaches can build accurate, target-specific models using only a small amount of experimental data, they require careful testing and refinement before they can be safely used to guide personalized care.
“Since real-world applications often involve entirely new immune targets, it remains unclear to what extent these models can handle truly unseen cases,” the authors said. “This is the initial rationale of this study.”
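The “truly unseen” setting the authors describe can be illustrated with a minimal evaluation-splitting sketch: peptide–TCR records are partitioned so that no peptide in the test set ever appears during training, forcing the model to generalize to new immune targets. This is a hypothetical Python sketch of one common way to build such a split, not the study’s actual framework, and the sequences below are illustrative placeholders.

```python
import random

def split_by_peptide(records, test_frac=0.2, seed=0):
    """Split (peptide, tcr, label) records so that no peptide in the
    test set ever appears in training -- the 'unseen peptide' setting."""
    peptides = sorted({p for p, _, _ in records})
    rng = random.Random(seed)
    rng.shuffle(peptides)
    n_test = max(1, int(len(peptides) * test_frac))
    held_out = set(peptides[:n_test])
    train = [r for r in records if r[0] not in held_out]
    test = [r for r in records if r[0] in held_out]
    return train, test

# Illustrative placeholder data: (peptide, TCR CDR3 sequence, binds?)
data = [
    ("GILGFVFTL", "CASSIRSSYEQYF", 1),
    ("GILGFVFTL", "CASSLAPGATNEKLFF", 0),
    ("NLVPMVATV", "CASSPVTGGIYGYTF", 1),
    ("NLVPMVATV", "CASSQDRDTQYF", 0),
    ("ELAGIGILTV", "CASSFSTCSANYGYTF", 1),
]
train, test = split_by_peptide(data, test_frac=0.4)
```

A model that scores well under a random split but poorly under this peptide-level split is memorizing known targets rather than learning transferable binding rules, which is exactly the gap the study probes.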
###
About the University of South Florida
The University of South Florida is a top-ranked research university serving approximately 50,000 students from across the globe at campuses in Tampa, St. Petersburg, Sarasota-Manatee and USF Health. In 2025, U.S. News & World Report recognized USF with its highest overall ranking in university history, as a top 50 public university for the seventh consecutive year and as one of the top 15 best values among all public universities in the nation. U.S. News also ranks the USF Health Morsani College of Medicine in the highest tier, placing it as one of the top 16 medical schools in the nation and inside the top 10 among public universities. USF is a member of the Association of American Universities (AAU), a group that includes only the top 3% of universities in the U.S. With an all-time high of $750 million in research funding in 2025 and as a top 20 public university for producing U.S. patents, USF uses innovation to transform lives and shape a better future. The university generates an annual economic impact of nearly $10 billion for the state of Florida. USF’s Division I athletics teams compete in the American Conference. Learn more at www.usf.edu.
Breaking down tightly organized defenses ("low blocks") is a persistent challenge in modern football. In these situations, defenders crowd their own penalty area, leaving attackers with little space and time to create scoring opportunities. While existing data-driven approaches have improved the analysis of passes and shots, they often overlook a crucial aspect of the game: how players move and coordinate without the ball.
In a new study published in Intelligent Sports and Health, researchers from China and France developed an artificial intelligence (AI) model that learns from real-world match data to better understand and optimize attacking play against such compact defensive structures. Using large-scale event and tracking data from professional matches, the framework models each attacking player as an individual decision-maker while capturing how all players interact as a coordinated unit.
"Our goal was to move beyond analyzing isolated actions and instead understand football as a truly collective decision-making process," says Yi Pan, corresponding author of the study. "In particular, we wanted to capture how off-ball movements, which are often invisible in traditional statistics, contribute to creating space and breaking defensive lines."
The model simultaneously evaluates both on-ball actions, such as passing and carrying, and off-ball movements, such as runs that stretch or disrupt the defensive structure. "By learning from historical match data, it can assess the effectiveness of different strategies and even suggest alternative actions that could have led to better outcomes," says Pan.
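The “suggest alternative actions” idea reduces to a counterfactual comparison: given model-estimated values for the candidate actions in one possession state, compare what the player actually did against the highest-valued alternative. The toy Python sketch below illustrates that comparison; the action names and values are hypothetical and not taken from the study.

```python
def best_alternative(action_values, observed):
    """Return the observed action's estimated value, plus the
    highest-valued candidate action and its value."""
    best = max(action_values, key=action_values.get)
    return action_values[observed], best, action_values[best]

# Hypothetical value estimates for one attacking possession state;
# higher means a larger estimated contribution to scoring potential.
state_values = {
    "safe_back_pass": 0.02,        # the action actually taken
    "carry_forward": 0.05,
    "through_ball": 0.11,          # riskier, but higher estimated return
    "off_ball_run_overlap": 0.08,  # an off-ball movement option
}
v_obs, best, v_best = best_alternative(state_values, "safe_back_pass")
gain = v_best - v_obs  # counterfactual gain of the model's suggestion
```

Aggregated over many states, such gains quantify how much value the conservative choices described below leave on the table.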
One of the main findings is that the AI model tends to recommend more proactive and coordinated attacking behaviors than those typically observed in real matches. "Human players often favor safer, lower-risk decisions, but the model identifies opportunities where more dynamic movements and coordinated positioning could create space and increase scoring potential," explains Pan.
Importantly, the generated strategies remain tactically realistic and consistent with professional play, while introducing creative solutions that are rarely seen in practice. This balance between realism and innovation makes the approach particularly valuable for practical applications.
"This framework provides a new way for coaches and analysts to evaluate not just what happened in a match, but what could have happened," Pan adds. "By enabling counterfactual analysis of decisions and movements, it supports more informed tactical planning and offers deeper insights into how coordinated team behavior emerges on the pitch."
Beyond football, the research also contributes to the broader field of artificial intelligence by advancing methods for multi-agent decision-making in complex real-world environments, where coordination, uncertainty, and limited data pose significant challenges.
###
Contact the author:
Yi Pan
Affiliation: Institute of Automation, Chinese Academy of Sciences
Email address: yi.pan@ia.ac.cn
The publisher KeAi was established by Elsevier and China Science Publishing & Media Ltd to disseminate quality research globally. In 2013, our focus shifted to open access publishing. We now proudly publish more than 200 world-class, open access, English language journals, spanning all scientific disciplines. Many of these are titles we publish in partnership with prestigious societies and academic institutions, such as the National Natural Science Foundation of China (NSFC).
Offline multi-agent reinforcement learning for evaluating and optimizing football attacking strategies against low-block defences
Physician-reported safety outcomes of AI-generated hospital course summaries
JAMA Network Open
About The Study:
In this study, a large language model-based agentic workflow produced hospital course summaries that were frequently used with minimal risk of harm identified. The intervention was associated with a reduction in physician burnout, supporting the viability of AI summarization to mitigate documentation burden.
Corresponding Author: To contact the corresponding author, Francois Grolleau, MD, PhD, email grolleau@stanford.edu.
Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.
# # #
Media advisory: This study is being presented at the 2026 Society of General Internal Medicine Annual Meeting.
About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication.
Frontiers in Science Deep Dive webinar series: AI-embodied surgical robots can revolutionize surgery—if regulatory questions addressed
Embodying surgical robots with next-gen AI can safely augment practice if ethical and regulatory questions are addressed.
This is according to a new Frontiers in Science lead article in which researchers Prof Prokar Dasgupta, Dr Alejandro Granados, and Dr Nicholas Raison explore how sensor-rich operating rooms and AI surgical co-pilots could enable more precise, data-driven, personalized surgery. Their article outlines how advances in multimodal data integration, machine learning, and robotic systems could enhance situational awareness, intraoperative decision-making, and team performance.
It highlights how these technologies may enable anticipatory behaviors, adaptive learning, and improved coordination across surgical teams.
Join the authors at our Frontiers in Science Deep Dive webinar on 11 June 2026, 16:00–17:30 CEST, as they explore how surgical roles may evolve, and how predictive AI and robotics could improve patient outcomes while maintaining safe and effective clinical practice.
Evolving surgical teams in the age of artificial intelligence and robotics | 11 June 2026 | Register
Frontiers in Science Deep Dive sessions bring researchers, policy experts, and innovators together from around the world to discuss a specific area of transformational science published in Frontiers' flagship, multidisciplinary journal, Frontiers in Science, and explore next steps for the field.
Inspired by the brain, researchers build smarter, more efficient computer hardware
A University of Missouri study shows that small material changes can boost brain-like computing, which could one day help make artificial intelligence more energy-efficient.
As traditional computer chips reach their physical limits and artificial intelligence demands more energy than ever, University of Missouri researchers are rethinking how computers work by taking cues from the human brain.
The timing is critical. Energy use from AI data centers is projected to double by the end of the decade, raising urgent questions about sustainability.
The solution may lie in neuromorphic computing, an approach that reimagines computer hardware to process information more like biological neural networks rather than conventional chips.
“One of the brain’s greatest advantages is its efficiency,” Suchi Guha, a professor of physics in Mizzou’s College of Arts and Science, said. “It performs incredibly complex tasks using about 20 watts of power — roughly the same as an old light bulb. By comparison, today’s computer architecture is extremely energy-intensive.”
Making neuromorphic computing a reality starts at the hardware level. Guha and her team are developing electronic components designed to function like the connections between neurons that allow the brain to learn, adapt and store information — laying the groundwork for computers that are not only more powerful, but dramatically more efficient.
Rethinking the computer chip
For decades, computers have relied on transistors — tiny electronic switches that let machines process information. In most modern chips, however, thinking and memory happen in separate places. Every time a computer runs a task, data must shuttle back and forth between those two areas, which slows performance and burns energy.
The brain takes a different approach. Instead of separating memory and processing, individual connections between neurons — called synapses — do both at the same time. That setup allows the brain to learn and adapt while using surprisingly little energy.
Guha’s team is borrowing that idea for electronics. They are developing organic transistors that can both store and process information in the same place, much like synapses do in the brain.
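The idea of storing and processing in the same place can be sketched as a toy software model (a conceptual illustration, not the paper’s device physics): each voltage pulse nudges a stored conductance up toward a ceiling or down toward a floor, and that same conductance weights every signal that passes through the device, just as a synaptic weight does.

```python
class Synapse:
    """Toy model of a synaptic device: pulses update a bounded
    conductance (the 'memory'), and the same conductance scales
    transmitted signals (the 'processing')."""

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, potentiate=True):
        if potentiate:   # conductance rises toward g_max (strengthening)
            self.g += self.rate * (self.g_max - self.g)
        else:            # conductance decays toward g_min (weakening)
            self.g -= self.rate * (self.g - self.g_min)
        return self.g

    def transmit(self, v_in):
        return self.g * v_in  # output weighted by the stored state

s = Synapse()
for _ in range(5):
    s.pulse(potentiate=True)  # repeated stimulation strengthens the link
```

Note how later pulses change the conductance less than earlier ones: the update shrinks as the bound is approached, a saturating behavior loosely analogous to how real synaptic devices respond to repeated stimulation.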
“We’re not just trying to make faster transistors,” Guha, who is also a core faculty member with the MU Materials Science and Engineering Institute, said. “We’re trying to make devices that behave more like the brain itself.”
To see how well the approach works, the researchers tested several organic materials that looked almost identical on the surface. But once those materials were built into synaptic transistors, their performance differed dramatically.
The key factor turned out to be the interface — the thin boundary where the semiconductor meets an insulating layer inside the device.
“This shows us that performance isn’t just about what a material is made of,” Guha said. “It’s also about how it interacts with everything around it. Even small structural differences can have a big impact.”
Moving toward energy‑efficient, brain‑like AI
By clarifying how molecular design and interface quality influence synaptic behavior, Mizzou’s work provides other researchers with guiding principles for building more effective neuromorphic hardware. Such systems could eventually lead to brain-like AI that learns more efficiently, consumes far less power and excels at tasks such as pattern recognition and decision-making.
While brain-inspired computing is still in its early stages, Guha said advances such as hers are narrowing the gap between biology and machines.
“The brain remains the gold standard for efficient computation,” she said. “If we want truly intelligent machines, we have to start building hardware that learns the way biology does.”
The study, “Structure–Function Coupling in Pyridyl Triazole Copolymers for Neuromorphic Synaptic Transistors,” was published in ACS Applied Electronic Materials. Co-authors are Arash Ghobadi, Abhijeet Abhi, Thomas Kallos, Dillan Gamachchi, Indeewari Karunarathne, Andrew Meng, Joseph Mathai, Shubhra Gangopadhyay and Steven Kelley at Mizzou; and Salahuddin Attar and Mohammed Al-Hashimi at Hamad Bin Khalifa University.
Credit: Y. Park and S. Nycklemoe, “MARGO: Machine Learning-Assisted Adaptive Randomization for Group Sequential Trials Based on Overlap Weights,” Statistics in Medicine 44, no. 15-17 (2025): e70158, https://doi.org/10.1002/sim.70158.
Professor Yeonhee Park of the Department of Statistics at Sungkyunkwan University has developed a novel statistical framework — MARGO (Machine Learning-Assisted Adaptive Randomization for Group Sequential Trials Based on Overlap Weights) — that makes machine learning practically applicable in clinical trial design. This work provides the first rigorous solution to the fundamental statistical challenges that arise when integrating ML/AI-driven decision-making into the scientifically demanding environment of clinical trials.
The Promise and the Barrier: Why ML/AI Alone Is Not Enough
Machine learning and artificial intelligence have garnered widespread attention as transformative tools for personalized treatment assignment in clinical trials. In particular, adaptive randomization — which dynamically adjusts treatment allocation based on accumulating trial data — is a promising approach for improving patient outcomes by steering more participants toward more effective treatments. However, applying this approach in practice can introduce a critical statistical problem. When patient characteristics (e.g., biomarkers) are used to guide treatment assignment, systematic imbalances can emerge between treatment groups. This covariate imbalance leads to biased treatment effect estimates and an inflated type I error rate, risking false conclusions. The problem is further compounded in group sequential designs, which include planned interim analyses for early stopping decisions.
Machine Learning Meets Causal Inference: A Two-in-One Solution
To address this fundamental challenge, MARGO integrates machine learning-based predictive models with overlap weights (OW), a propensity score–based approach widely used in causal inference to adjust for covariate imbalance. MARGO uses patient covariate information to predict the probability of treatment success via machine learning, then uses these predictions to preferentially assign patients to the more effective treatment. Simultaneously, OW corrects covariate imbalance across treatment groups, effectively controlling the bias and type I error inflation induced by adaptive randomization. The framework was evaluated using four machine learning algorithms: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest (RF), and Multi-Layer Perceptron (MLP).
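The bias-correction idea can be illustrated with a toy simulation. This is not MARGO itself (the published framework embeds the weighting inside a group sequential design with interim analyses); it is a minimal sketch, under assumed data, of how overlap weights — treated patients weighted by 1 − e(x), controls by e(x), where e(x) is the propensity score — cancel the covariate imbalance that covariate-driven assignment creates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# A covariate (e.g., a biomarker) that drives both assignment and outcome.
x = rng.normal(size=n)

# Covariate-driven assignment: patients with higher x are more likely to
# receive the treatment, so the two arms end up imbalanced in x.
e = 1 / (1 + np.exp(-1.5 * x))          # propensity score P(T=1 | x)
t = rng.random(n) < e

# Outcome: the true treatment effect is 1.0, but x also shifts the outcome.
y = 1.0 * t + 2.0 * x + rng.normal(size=n)

# Naive difference in means is biased by the imbalance in x.
naive = y[t].mean() - y[~t].mean()

# Overlap weights: 1 - e for the treated arm, e for the control arm.
# The weighted covariate distributions of the two arms then match exactly
# in expectation, so the covariate term cancels out of the contrast.
w = np.where(t, 1 - e, e)
ow = (np.sum(w * t * y) / np.sum(w * t)
      - np.sum(w * ~t * y) / np.sum(w * ~t))

print(f"naive estimate: {naive:.2f}  (true effect is 1.0)")
print(f"OW estimate:    {ow:.2f}")
```

In this sketch the naive contrast overshoots the true effect badly, while the overlap-weighted contrast recovers it; MARGO's contribution is making this correction valid in the sequential, ML-driven trial setting.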
Rigorously Validated Performance
Through extensive simulation studies, MARGO demonstrated superior performance over conventional fixed randomization and existing adaptive randomization methods across three key dimensions. First, MARGO allocated a greater proportion of patients to the more effective treatment. Second, it maintained the overall type I error rate below the target threshold of 0.05 — even in scenarios where conventional methods inflated the error rate to as high as 0.08–0.18. Third, it preserved high statistical power under alternative scenarios while reducing the number of treatment failures. Together, these results demonstrate that MARGO can simultaneously improve the ethical standards and scientific integrity of clinical trials.
Beyond "Using AI" — Toward "Trusting AI in Clinical Trials"
The most important contribution of this research goes beyond simply applying machine learning to clinical trials — it rigorously resolves the fundamental statistical problems that emerge in that process. MARGO is designed to accommodate a wide range of AI models and holds broad potential for extension to precision medicine and data-driven decision-making across diverse fields.
This study was published in Statistics in Medicine.
Simulation results: Controlled Type I Error Rates Under the Nominal Level.
Prof. Dafna Kariv of Reichman University’s Adelson School of Entrepreneurship Wins European Union Innovation Award for AI System Simulating Entrepreneur–Investor Interactions
Congratulations to Prof. Dafna Kariv, Head of the Entrepreneurship–Business Administration track at the Adelson School of Entrepreneurship at Reichman University, on being awarded the Inspiration Award in the ETF New Learning Award 2026 competition, presented by the European Training Foundation (ETF), an agency of the European Commission, to groundbreaking initiatives in teaching and learning.
Prof. Kariv received the award in the category “Entrepreneurial Learning in the Digital Age” in recognition of her development of an innovative approach to cultivating entrepreneurial skills through AI-based simulations.
The research, conducted in collaboration with Prof. Zohar Elyoseph of the University of Haifa and Yuval Haber of Bar-Ilan University, and developed using the cesura.ai platform, addresses one of the central challenges in entrepreneurship: bridging the gap between theoretical knowledge and the ability to perform effectively in complex, real-world situations. While academic institutions excel at imparting professional knowledge, many entrepreneurs struggle with developing essential soft skills such as communication, operating under pressure, and the ability to read situations.
To address this gap, the team developed a unique system built on multiple AI agents: one simulates a demanding investor who engages students in realistic discussions about their ventures; another simulates a mentor who helps analyze emotional and communicative dynamics and refine response strategies; and two additional agents operate behind the scenes, mapping the user’s applied skills during the interaction and providing a structured space for reflection and processing of the encounter with the “investor.” The system enables repeated practice, immediate feedback, and measurable progress over time.
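The four-agent structure described above can be sketched in code. Everything here is hypothetical — none of these class or function names come from the cesura.ai platform; the sketch only illustrates how two front-stage agents (investor, mentor) and two back-stage agents (skill mapping, reflection) could be wired around a shared transcript:

```python
from dataclasses import dataclass, field

@dataclass
class Transcript:
    """Shared record of the practice session that all four agents read."""
    turns: list = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

def investor_agent(transcript: Transcript, pitch: str) -> str:
    """Front-stage agent: plays a demanding investor probing the pitch."""
    return f"Pressing question about: {pitch}"

def mentor_agent(transcript: Transcript) -> str:
    """Front-stage agent: analyzes emotional and communicative dynamics."""
    return f"Feedback on {len(transcript.turns)} exchange(s) so far"

def skill_mapper(transcript: Transcript) -> dict:
    """Back-stage agent: maps which applied skills the user exercised."""
    return {"turns_handled": sum(1 for s, _ in transcript.turns if s == "student")}

def reflection_agent(transcript: Transcript) -> list:
    """Back-stage agent: offers a structured space for reflection."""
    return [f"What would you change about turn {i + 1}?"
            for i, _ in enumerate(transcript.turns)]

# One practice round: student pitches, the investor probes, the student
# responds, then the supporting agents process the session.
session = Transcript()
session.add("student", "Our app reduces clinic no-shows by 30%.")
session.add("investor", investor_agent(session, "clinic no-show reduction"))
session.add("student", "We validated the figure with two pilot clinics.")

print(mentor_agent(session))
print(skill_mapper(session))
print(reflection_agent(session)[0])
```

In the real system each agent would be backed by a language model; the point of the sketch is the separation of roles — interactive challenge and coaching up front, measurement and reflection behind the scenes — which is what enables repeated practice with measurable progress.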
The pedagogical approach integrates experiential learning, real-time simulation, and data-driven analysis, fostering critical skills such as entrepreneurial thinking, resilience, interpersonal communication, and the ability to navigate uncertainty. The system has already been implemented among approximately 300 students and entrepreneurs in Israel, Canada, and the United Kingdom, demonstrating a significant impact on the development of practical skills, alongside increased confidence and a stronger sense of self-efficacy among participants.
According to Prof. Kariv, “Success in entrepreneurship is not measured by knowledge alone, but by the ability to act under pressure, communicate with precision, and build trust in real time. By integrating artificial intelligence into the learning process, we enable students to experience complex, real-world scenarios and develop the skills that truly determine success.”
The ETF New Learning Award is regarded as one of Europe’s leading distinctions in the field of educational innovation. Its aim is to advance teaching methods that integrate key competencies for a rapidly changing world, including digital, entrepreneurial, and social skills. Beyond Prof. Kariv’s personal achievement, this award reflects Reichman University’s commitment to advancing innovation in teaching and integrating cutting-edge technologies, positioning it among the leading academic institutions on the global stage.
SPACE/COSMOS
New study finds no significant joint damage in astronauts after short-duration spaceflight, highlighting promise of ultrasound monitoring
DENVER - Researchers at National Jewish Health have published new findings demonstrating that short-duration spaceflight may not significantly impact lower extremity joint structures, while also identifying a promising, non-invasive tool to monitor astronaut musculoskeletal health on future long-duration missions.
The study, led by Richard Meehan, MD, and Smarika Sapkota, MD, evaluated three astronauts before and after Axiom Mission 4 (Ax-4), an 18-day mission onboard the International Space Station (ISS). Using advanced musculoskeletal ultrasound imaging, researchers assessed cartilage thickness, synovial fluid levels, and tendon and ligament integrity in the hips, knees and ankles. The results, published in the International Journal of Clinical Rheumatology, showed no statistically significant changes in joint structures or evidence of inflammation following the mission. Dr. Sapkota will present the results at the May 2026 Annual Scientific Meeting of the Aerospace Medical Association in Denver.
“This study provides encouraging early evidence that short-duration spaceflight, combined with exercise and medical countermeasures, may help preserve joint health,” said Dr. Meehan, senior author and rheumatologist at National Jewish Health. “Equally important, it demonstrates that ultrasound can serve as a powerful, real-time tool to monitor joint health in space.”
Astronauts in the study engaged in cycling exercise during the mission and used anti-inflammatory medications, both of which may have contributed to maintaining joint health. Researchers observed no significant differences in cartilage thickness across the hips, knees or ankles, no meaningful overall change in knee synovial fluid levels, and no evidence of inflammation using power Doppler imaging. Tendon and ligament thickness also remained stable before and after spaceflight.
While the findings are reassuring, researchers caution that the study’s short duration and small sample size limit broader conclusions, particularly for longer missions to the Moon or Mars, where astronauts may face extended exposure to microgravity.
“Although we did not observe measurable changes after 18 days, longer missions could present very different risks to cartilage and joint structures,” said Dr. Sapkota, co-author and rheumatologist at National Jewish Health. “Our findings highlight the importance of continued research and the potential of ultrasound to guide personalized countermeasures for astronaut health.”
The study is among the first to use quantitative ultrasound immediately following spaceflight to assess multiple joint structures in humans, capturing changes within hours of return to Earth. Researchers believe this approach could play a critical role in future missions by enabling real-time monitoring of joint health, informing personalized exercise protocols, and reducing the risk of injury during and after spaceflight. The implications may extend beyond space exploration, offering potential benefits for patients on Earth, including those recovering from prolonged immobility or facing the risk of joint degeneration.
“This technology has the potential to transform how we monitor and protect joint health, not only for astronauts, but for patients here on Earth,” Dr. Meehan added.
The observational pilot study analyzed pre- and post-flight ultrasound measurements from three astronauts participating in the Ax-4 mission. Imaging was conducted within hours of return to Earth, and the research was supported by National Jewish Health in collaboration with Axiom Space and other partners.
“Leveraging the unique environment of space provides a vital laboratory for developing the next generation of biomedical technologies and medicine for terrestrial use,” explained Emmanuel Hilaire, PhD, director of Technology Transfer at National Jewish Health. Dr. Hilaire oversees the commercialization of innovations developed at National Jewish Health and is spearheading a space research initiative to accelerate further biomedical advancements.
National Jewish Health is the leading respiratory hospital in the nation delivering excellence in multispecialty care and world class research. Founded in 1899 as a nonprofit hospital, National Jewish Health today is the only facility in the world dedicated exclusively to groundbreaking medical research and treatment of children and adults with respiratory, cardiac, immune and related disorders. Patients and families come to National Jewish Health from around the world to receive cutting-edge, comprehensive, coordinated care. To learn more, visit njhealth.org or the media resources page.
These images taken on Aug. 18 (left) and Aug. 27 (right), 2016, by the near-infrared camera on Japan’s Akatsuki Venus probe, show the clear line of denser (darker) clouds moving across the planet.
Credit: T. Imamura, Y. Maejima, K. Sugiyama et al., 2026
The mysterious origin of an impressive cloud disturbance on Venus has now been revealed by a team including the University of Tokyo. Researchers used numerical models to show that an enormous 6,000-kilometer-wide atmospheric wave front, which circumnavigates the planet for days at a time, is caused by a large “hydraulic jump.” This is when a fluid abruptly slows down, changing from shallow and fast to deep and slow. On Venus, a sudden change in airflow in the lower cloud region is coupled with the creation of a strong updraft, forcing sulfuric acid vapor higher into the atmosphere where it condenses into a massive line of cloud. Future planetary studies can consider the potential impacts of this process, and what it might mean for any exploratory missions.
A grim, gray day may spoil weekend plans now and then, but on Venus, it’s cloudy all day every day with a chance of sulfuric acid showers. On the bright side, Venus’ constant thick cloud cover provides an excellent opportunity for us to study patterns and processes that would be difficult to spot on planets where clouds are more sparse or intermittent, like here on Earth.
A key feature of Venusian clouds is that they “superrotate,” moving about 60 times faster than the planet turns. We now know that superrotation also occurs elsewhere, including on Mars, our sun, and even Earth’s upper atmosphere. In 2016, images from Japan’s Akatsuki Venus orbiter also revealed that an enormous atmospheric wave — sometimes 6,000 km wide — repeatedly sweeps around the planet’s equator.
“We identified the phenomenon, but for years we couldn’t understand it,” said Professor Takeshi Imamura from the Graduate School of Frontier Sciences at the University of Tokyo. “However, thanks to this research, we’re now able to show that this cloud disruption is caused by the largest known hydraulic jump in the solar system.”
We can see a hydraulic jump in action in the humble kitchen sink. As water from the tap hits the basin, it appears fast and shallow at first, but suddenly slows and becomes deeper as it spreads.
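The kitchen-sink picture can be made quantitative with the classical shallow-water relation for a hydraulic jump (the Bélanger equation), which gives the downstream-to-upstream depth ratio from the upstream Froude number. This is the textbook open-channel result, shown here only for intuition — the Venus atmosphere model in the study is far more complex:

```python
import math

def conjugate_depth_ratio(fr1: float) -> float:
    """Belanger equation: ratio of downstream to upstream depth across a
    hydraulic jump, given the upstream Froude number Fr1 = v1 / sqrt(g*h1)."""
    return 0.5 * (math.sqrt(1 + 8 * fr1**2) - 1)

# A jump only forms for supercritical inflow (Fr1 > 1); the flow leaves
# the jump deeper and slower (subcritical), just like in the sink.
for fr1 in (2.0, 5.0, 10.0):
    print(f"Fr1 = {fr1:4.1f} -> h2/h1 = {conjugate_depth_ratio(fr1):.2f}")
```

The faster the incoming flow relative to the wave speed, the more abrupt the thickening — the same qualitative behavior the Venus simulations show, where the abrupt deceleration drives the strong updraft.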
The hydraulic jump on Venus occurs when an eastward-moving atmospheric wave (called a Kelvin wave) in the lower to middle cloud region suddenly becomes unstable. Wind speed as seen from the atmospheric wave abruptly slows down and a strong localized updraft is created, which carries sulfuric acid vapor higher into the atmosphere. The droplets condense into clouds which trail behind, causing the massive wave front which can be seen sweeping around the planet.
“Venus has three distinct cloud layers, and the dynamics of the lower and middle layers are not so well understood,” said Imamura. “Our discovery of a hydraulic jump on Venus connecting a very large-scale horizontal process with a strong localized vertical wave is unexpected, as in fluid dynamics these are usually disconnected.”
The hydraulic jump was simulated using a fluid dynamic model (a numerical analysis which simulates how gas or liquids flow), and the cloud formation studied using a microphysical box model (which follows the behavior of an example section of air as it moves through the atmosphere). As well as simulating the same cloud disturbance, the team also found that this process helps to maintain the superrotation of Venus’ atmosphere.
“Up until now, we used a global circulation model (GCM) for Venus that is similar to Earth’s, but this model doesn’t include the hydraulic jump which we have now identified,” explained Imamura. “Our next step will be to test this discovery within a more inclusive climate model that includes other atmospheric processes. We will face some challenges due to the huge amount of processing power required to run such simulations. Even with modern supercomputers, it isn’t easy.”
Although this is the first observation of a hydraulic jump of this scale on another planet, the physics behind it may also occur on other celestial bodies. “Under some circumstances, Mars’ atmosphere may also have the right conditions for a hydraulic jump,” mentioned Imamura. Creating more accurate models of atmospheric conditions will aid in the success of future missions to Mars, as well as wider space exploration.
--- --- --- Paper and Contact Details --- --- ---
Journal:
Takeshi Imamura, Yasumitsu Maejima, Ko-ichiro Sugiyama, Takehiko Satoh, Javier Peralta, Kevin McGouldrick, Takeshi Horinouchi, Kohei Ikeda, “A planetary-scale hydraulic jump driving Venus' cloud front”. Journal of Geophysical Research: Planets. April 24, 2026. DOI: 10.1029/2026JE009672
Funding:
This work was funded by JSPS KAKENHI Grant Numbers 24H00021, 24K21565, and 23H01236. J.P. acknowledges project EMERGIA20_00414 funded by Junta de Andalucía in Spain, and project PID2023-149055NB-C33 funded by the Spanish MCIN.
Conflicts of Interest:
The authors declare there are no conflicts of interest for this manuscript.
The University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 5,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on X (formerly Twitter) at @UTokyo_News_en.
In this image, the clearly defined hydraulic jump can be seen in the difference between the smooth inner circle of shallow and fast water, and the ripples of deeper, slower water beyond.
This cross section of the Venusian atmosphere shows a numerical simulation of a hydraulic jump in action. The color indicates the “potential temperature,” which represents the atmospheric material surface. The jump appears as a stepwise transition of the material surface.
University of Cincinnati astrophysics graduate and current geosciences student Paul Smith visits the Cincinnati Observatory's historic telescope in Mount Lookout. Smith spent a 20-year career at P&G and another 10 as a writer and speaker on business leadership before returning to UC to study physics and geosciences. He also is pursuing a master's degree in planetary science from the University of Aberdeen in Scotland.
One evening last fall, University of Cincinnati astrophysics graduate Paul Smith waited anxiously for data to start rolling across his computer screen from the James Webb Space Telescope a million miles from Earth.
The telescope was directed at an object even farther away — much farther away. Smith is studying a planet 901 light years away. That means light from its star takes 901 years to reach Earth.
The planet is named after its star, TOI-2031A, in accordance with NASA’s unpoetic, numbered naming conventions. TOI stands for TESS Object of Interest, after the Transiting Exoplanet Survey Satellite.
Even though it was a clear night, the star was too faint to see with the naked eye. Its starlight captured in the space telescope was generated in the Middle Ages.
Smith and his research partners beat out other scientists for precious telescope time. Roughly 90% of research applications don’t make the cut each year in the competitive peer-review process.
Now they were hoping their calculations were correct and the planet would cross in front of its star during their allotted observation time.
Using the telescope’s powerful near-infrared spectrographic sensors, researchers would be able to learn more about the planet and its atmosphere as it transited its star’s face. As leader of the data analysis for the project’s first planet, Smith got to retrieve the data, what astrophysicists call the first look.
“It was a lifelong dream of mine coming true. I was up all night to get the first look at the data,” he said.
Smith and his research colleagues presented their findings on TOI-2031Ab at the American Astronomical Society meeting in Denver in April.
Physicists call planets outside our solar system exoplanets. To date, astrophysicists have identified about 6,400 of them.
Smith and his international collaborators from 19 other institutions are studying gas giants like Jupiter to learn more about their atmospheres and why so many of them orbit so close to their stars. The exoplanet is about a quarter bigger in circumference than Jupiter, the biggest planet in our solar system, although it has 20% less mass.
Smith regularly travels to Ohio State University to meet with some of his project co-authors, grad student Everett McArthur and Professor Ji Wang. And he talks regularly with Peter Gao from the Carnegie Science Institute.
“We’re trying to figure out how these big gas giants got there. We’re studying the formation and migration pathways of big planets,” Smith said. “Where do they form in their solar systems and how do they get so close to their stars?”
TOI-2031Ab was discovered just last year, the only known planet in its solar system. The exoplanet orbits its star closer than Mercury orbits the sun.
Its year lasts just six Earth days as it hurtles around its star roughly four times faster than Earth orbits the sun.
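Those orbit numbers can be checked with Kepler's third law. This back-of-the-envelope sketch assumes a Sun-like host star (the article does not give the star's mass) and the roughly six-day period reported:

```python
import math

# Assumptions (illustrative): a Sun-like host star; the article gives an
# orbital period of about six Earth days.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg
P = 6 * 86400.0          # orbital period, seconds

# Kepler's third law: a^3 = G M P^2 / (4 pi^2)
a = (G * M_sun * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Circular orbital speed: v = 2 pi a / P
v = 2 * math.pi * a / P

AU = 1.496e11            # astronomical unit, m
print(f"a ≈ {a / AU:.3f} AU")      # well inside Mercury's ~0.39 AU
print(f"v ≈ {v / 1000:.0f} km/s")  # vs. Earth's ~30 km/s
```

Under these assumptions the planet sits at about 0.065 AU, a sixth of Mercury's distance from the sun, moving at well over 100 km/s — consistent with the "four times faster than Earth" comparison.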
Researchers can study its atmosphere using the portion of its star’s light that slices through its atmosphere on its way to the James Webb Space Telescope.
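The size of the signal involved can be sketched from the transit geometry: during a transit the star dims by the ratio of the planet's disk area to the star's. The stellar radius below is an assumption (a Sun-like star, not stated in the article); only the planet's size relative to Jupiter comes from the text:

```python
# Illustrative transit-depth estimate. The ~25%-larger-than-Jupiter figure
# is from the article; the Sun-like stellar radius is an assumption.
R_sun = 6.957e8          # m
R_jup = 7.149e7          # m
R_planet = 1.25 * R_jup

# Fraction of starlight blocked when the planet crosses the star's face.
depth = (R_planet / R_sun) ** 2
print(f"transit depth ≈ {depth * 100:.2f}% of the star's light")
```

A dip of order one percent is large by exoplanet standards, which is part of why close-in gas giants like this one are such good targets for transmission spectroscopy.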
“The atmosphere is very similar to Jupiter’s — mostly hydrogen and helium, water and carbon dioxide,” Smith said.
Cincinnati Observatory astronomer Wes Ryle, who was not part of the study, said planets outside our solar system are helping us understand our own.
“Exoplanets are one of the hottest topics in astrophysics right now, with the ultimate goal of learning how our solar system compares to others and the likelihood of finding other habitable worlds,” Ryle said. “Studies like this help evaluate the role of gas giant planets and their migration in creating a planetary system.”
The historic 1845 telescope at the Cincinnati Observatory.