New book highlights real-world applications of artificial intelligence (AI) and machine learning (ML)
Bentham Science is pleased to announce the release of Advancements in Artificial Intelligence and Machine Learning, a timely publication exploring the transformative role of AI and ML across diverse industries and domains.
This book is a detailed collection of twelve research-backed chapters, each focusing on a key area of artificial intelligence and machine learning, ranging from mechatronics and cybersecurity to digital health and automation. Readers will gain insights into the impact of AI-based techniques in power systems, social media management, drone robotics, smart grids, healthcare diagnostics, and crime prevention systems.
Beginning with an overview of AI’s potential in next-generation mechatronics, this volume navigates through the role of AI and deep learning in energy systems, sentiment analysis, image processing, wireless networks, and advanced medical technologies such as brain tumor detection. Each chapter provides practical insights into emerging tools and real-world applications, making this an essential reference for researchers and professionals.
Key Features:
- A multidisciplinary focus on AI and ML applications in robotics, energy, cybersecurity, and healthcare
- Contemporary research exploring intelligent automation and secure systems
- Case-based insights from industry and academia
- Practical guidance on AI-driven innovation, decision-making, and technology implementation
Learn more about this book here: http://bit.ly/4524NYk
For media inquiries, review copies, or interviews, please contact Bentham Science Publishers.
About the Editors
Dr. Asif Khan
Dr. Khan is an Assistant Professor in the Department of Computer Applications at Integral University, Lucknow. He holds a Ph.D. and postdoctoral experience from the University of Electronic Science and Technology of China (UESTC), with over eight years of academic and research experience. His expertise includes artificial intelligence, robotics, and machine learning. He has published over 100 research articles and holds nine patents. Dr. Khan is an active member of IEEE, IAENG, and other professional bodies.
Dr. Mohammad Kamrul Hasan
Dr. Hasan is an Associate Professor at Universiti Kebangsaan Malaysia (UKM), where he leads the Network and Communication Technology Research Lab at the Center for Cyber Security. He earned his Ph.D. in Electrical and Communication Engineering from the International Islamic University Malaysia. His research interests span cybersecurity, Industrial IoT, privacy protection, and transparent AI. He has authored over 150 indexed research publications and serves on editorial boards of top-tier journals including IEEE, Elsevier, and MDPI.
Dr. Naushad Warish
Dr. Warish earned his Ph.D. in Computer Science from the Indian Institute of Technology (ISM), Dhanbad. His specialization includes content-based image retrieval, artificial intelligence, and digital image processing. He has also qualified the UGC-NET and has extensive knowledge of programming languages such as C, C++, and Python. His work bridges core computational theory with practical implementation in AI and ML.
Dr. Mohammed Aslam Husain
Dr. Husain is an Assistant Professor in the Department of Electrical Engineering at Rajkiya Engineering College, Ambedkar Nagar, India. He has held academic positions at Integral University and Aligarh Muslim University. Recipient of the Young Scientist Award (2017) and the Institution of Engineers (India) Young Engineers Award (2022–23), he has authored more than 60 scientific publications. His areas of interest include electrical systems, automation, and AI-driven control systems.
Researchers advocate for separate roles between AI and humans
Radiological Society of North America
OAK BROOK, Ill. – Renowned physician-scientist Eric J. Topol, M.D., and Harvard artificial intelligence (AI) expert Pranav Rajpurkar, Ph.D., advocate for a clear separation of the roles between AI systems and radiologists in an editorial published today in Radiology, a journal of the Radiological Society of North America (RSNA).
“We’re stuck between distrust and dependence, and missing out on the full potential of AI,” said Dr. Rajpurkar, associate professor of Biomedical Informatics at Harvard University.
The authors urge a rethinking of the assistive role of AI, which is designed to work alongside human radiologists to improve diagnostic accuracy. But so far, fully integrating AI into radiology workflows has fallen short of expectations.
“It’s still early for getting a definitive assessment,” said Dr. Topol, professor and executive vice president, Scripps Research. “But several recent studies of GenAI have not demonstrated the widely anticipated synergy between AI and physicians.”
"Current evidence suggests that neither fully integrated assistive approaches nor complete automation are optimal," Dr. Rajpurkar said. "Radiologists don't know when to trust AI and when to trust themselves. Add AI errors into the mix, and you get a perfect storm of uncertainty."
Implementing assistive AI has presented notable challenges, including cognitive biases that cause radiologists to disregard or over-rely on AI suggestions. Misaligned incentives, unclear workflows, liability concerns, and economic models that don't support AI integration have also slowed its adoption.
"After years of hype, AI penetration in U.S. radiology remains surprisingly low," Dr. Rajpurkar said. "This suggests we've been implementing AI like sprinkling digital fairy dust on broken workflows. The real opportunity isn't marginal accuracy gains, it's workflow transformation."
The authors propose a careful, measured approach to role separation—guided by rigorous clinical validation and real-world evidence—as the most pragmatic path forward. Their framework includes three models:
- AI-First Sequential Model—Where effective, AI processes the initial segment of the workflow (e.g., preparing clinical context from electronic health records), followed by the radiologist providing expert interpretation.
- Doctor-First Sequential Model—The radiologist initiates the diagnostic process while AI performs complementary tasks such as report generation and follow-up recommendations to enhance the workflow.
- Case Allocation Model—Cases are triaged based on complexity and clarity, with some managed entirely by AI, others by a radiologist, and the rest through a combination of both.
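The Case Allocation model above is, at its core, a triage rule. A minimal sketch of such a rule, with entirely hypothetical thresholds and inputs (the editorial does not specify any), might look like:

```python
def allocate_case(ai_confidence: float, complexity: float) -> str:
    """Triage a case under the Case Allocation model (illustrative only).

    Hypothetical rule: clear, high-confidence cases are handled by AI
    alone; complex or low-confidence cases go straight to the radiologist;
    everything in between receives combined AI + radiologist review.
    The threshold values below are placeholders, not validated settings.
    """
    if ai_confidence >= 0.95 and complexity <= 0.3:
        return "ai"
    if ai_confidence < 0.6 or complexity > 0.7:
        return "radiologist"
    return "combined"
```

In a real deployment, as the authors stress, such thresholds would have to come from rigorous clinical validation and be adapted per institution, not hard-coded.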
“Radiologists are stuck in the worst of both worlds—afraid to trust AI fully, but too reliant to ignore it,” Dr. Rajpurkar said. “Clear role separation breaks this cycle.”
The authors envision institutions implementing their framework through repeated interactions rather than strict, sequential processes.
“We’re providing a framework, but the real innovation will come from frontline radiologists adapting it to their specific needs,” Dr. Rajpurkar said. “Institutions will likely discover hybrid approaches we haven’t even imagined yet.”
For example, a trauma center might use the AI-First model to review chest X-rays overnight, then switch to a Doctor-First model when teaching residents. Under the Case Allocation model, an AI screening system may identify and ‘clear’ normal results, escalating only abnormal cases to the radiologist for review.
“The breakthrough moment comes when practices stop asking ‘Which model?’ and start asking ‘Which model when?’” he said. “That’s where the magic happens—adaptive workflows that respond to real-time clinical needs, not rigid theoretical constructs.”
Implementing their vision will require carefully designed pilot programs to test the models in real clinical environments, measuring accuracy, workflow efficiency, radiologist satisfaction and downstream outcomes.
“Results must be shared openly; the field desperately needs honest case studies,” Dr. Rajpurkar said. “Our framework gives radiologists not another promise of AI magic, but a concrete, practical roadmap for integration that acknowledges both the current limitations and the inevitable evolution of AI.”
The researchers also suggest establishing a clinical certification pathway for AI systems, something no single agency is equipped to handle alone.
“The Food & Drug Administration needs to maintain safety oversight, but clinical certification requires understanding real-world workflow integration, which goes beyond traditional regulatory scope,” Dr. Rajpurkar said. “We need new models, perhaps independent certification bodies with input from multiple stakeholders and consortia that bring together clinical expertise, technical knowledge and implementation experience.”
The researchers are awaiting the emergence of general medical AI systems capable of handling routine tasks, preparing cases, and drafting reports, all while learning the patterns of the practice.
“We're not there yet,” Dr. Rajpurkar said. “But when these systems can competently manage the breadth of tasks a senior medical resident handles, the entire conversation changes. That’s the inflection point we’re watching for.”
###
“Beyond Assistance: The Case for Role Separation in AI-Human Radiology Workflows.”
Radiology is edited by Linda Moy, M.D., New York University, New York, N.Y., and owned and published by the Radiological Society of North America, Inc. (https://pubs.rsna.org/journal/radiology)
RSNA is an association of radiologists, radiation oncologists, medical physicists and related scientists promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois. (RSNA.org)
For patient-friendly information on medical imaging, visit RadiologyInfo.org.
Journal
Radiology
Subject of Research
Not applicable
Article Title
Beyond Assistance: The Case for Role Separation in AI-Human Radiology Workflows
Article Publication Date
29-Jul-2025
Jack Burgess
BBC News

Pope Leo XIV has told the Vatican's first Mass for Catholic social media influencers that human dignity needs to be protected online as the world faces the "challenge" of artificial intelligence (AI).
"Nothing that comes from man and his creativity should be used to undermine the dignity of others," the Pope said in St Peter's Basilica.
He said the developing technology should be used for the "benefit of all humanity" during comments at the Vatican's Jubilee of Youth, a week-long gathering for young worshippers that is held every 25 years.
It is the latest in a string of interventions the Pope has made on the subject of AI since he was elected in May.
During Tuesday's speech, the Pope called on the world to protect "our ability to listen and speak" in a "new era".
"We have a duty to work together to develop a way of thinking, to develop a language, of our time, that gives voice to love," the Pope said.
He also urged social media influencers to seek out "those who suffer and need to know the Lord" with their content.
"Be agents of communion, capable of breaking down the logic of division and polarisation, of individualism and egocentrism," he added.

During his first Sunday address in May, Pope Leo XIV suggested that the development of AI, and other advances, meant the Church was necessary for the defence of human dignity and justice.
Pope Leo XIV, who studied maths at Philadelphia's Villanova University in 1977, is the first pontiff from the United States.
Born in Chicago in 1955 to parents of Spanish and Franco-Italian descent, Leo served as an altar boy and was ordained in 1982.
Although he moved to Peru three years later, he returned regularly to the US to serve as a priest and a prior in his home city.
He has Peruvian nationality and is fondly remembered as a figure who worked with marginalised communities and helped build bridges.
AI Warfare: Can India take the lead?
As autonomous drones execute precision strikes and algorithms command battlefield operations, the question isn't whether AI will dominate military strategy, but whether India will lead or lag in this critical transformation.
AI is rewriting future warfare rules. (Photo: Representational/Getty)
India Today Global Desk
Jul 29, 2025
The future of warfare is being rewritten by artificial intelligence, and India is positioning itself at the forefront of this digital revolution. As autonomous drones execute precision strikes and algorithms command battlefield operations, the question isn’t whether AI will dominate military strategy, but whether India will lead or lag in this critical transformation.
AI: The New Backbone of Military Operations
By 2030, AI is projected to become the backbone of global military operations, merging land, sea, air, space, and cyber warfare into a unified, intelligent theatre. Modern armies cannot remain relevant without embracing this technology, as AI systems now process more data in seconds than human generals can analyse in days.
India has recognised this paradigm shift. The Ministry of Defence has declared 2025 the “Year of Reforms”, with AI and robotics taking centre stage. This isn’t merely symbolic: over 75 AI-powered defence products have been indigenously developed, ranging from autonomous drones to smart surveillance and cyber defence platforms.
Building India’s AI-First Military Strategy
The creation of the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) signals India's serious commitment to AI integration. Each service arm (the Army, Navy, and Air Force) now operates dedicated AI working groups, with annual budgets allocated and comprehensive roadmaps established.
On the ground, this technology is already operational. In Kashmir, AI-powered drones patrol terrain too dangerous for human forces. Along the Line of Control, swarm drones provide area denial and predictive threat detection. The Avekshan system distinguishes between livestock and genuine threats, filtering false alarms while delivering real-time alerts.
Combat-Ready Innovations
India’s drone capabilities showcase its AI ambitions most clearly. Surveillance UAVs like Heron and Rustom sweep vast border zones with precision, while combat drones like Rudrastra execute strikes in hostile terrain. Swarm technology enables dozens of AI-powered drones to operate as a collective intelligence, jamming enemy radars and intercepting intrusions.
The Indrajaal defensive drone shield protects 4,000 square kilometres using AI-driven interception technology. The D4 Anti-Drone system, featuring 360-degree radar and laser tracking, has intercepted over 80% of rogue drones encountered.
Smart Borders and Strategic Defence
India’s extensive borders now feature “smart” defensive systems with laser walls, facial recognition, motion sensors, and real-time alerts. Project Himshakti utilises satellite data and AI modelling to predict potential cross-border movement routes, shifting focus from reaction to anticipation.
Beyond Government: A Growing Ecosystem
India’s AI defence ecosystem extends beyond government laboratories. Startups like ideaForge and DSRL produce battlefield-ready drones and surveillance tools, while the iDEX initiative fuels defence technology entrepreneurship. Microsoft’s $3 billion commitment to India’s AI infrastructure demonstrates international confidence in the country’s potential.
Global Ambitions and Challenges
With a $5 billion defence export target by 2025, India eyes international markets for its AI-enabled products. These technologies often have dual-use civilian applications in disaster response, logistics, and medical aid, expanding market opportunities and diplomatic influence.
However, challenges remain: procurement delays, fragmented frameworks, ethical considerations, and shortages of AI-literate personnel. The critical question is whether India can maintain momentum in a rapidly evolving global landscape where China, the United States, and Israel continue advancing their own AI warfare capabilities.
India stands at the threshold of military innovation, ready to become not just AI-ready, but AI-dominant in the algorithmic age of warfare.
- Ends
How to survive the explosion of AI slop
PNAS Nexus
Image: A deepfake in which the author inserted his own face (source in upper left) into an AI-generated image of an inmate in an orange jumpsuit. Credit: AI image created by Hany Farid.
In a Perspective, Hany Farid highlights the risk of manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms deepfakes can cause. Farid explains that visually discriminating the real from the fake has become increasingly difficult and summarizes his research on digital forensic techniques, used to determine whether images and videos have been manipulated.
Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voice has been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. In addition, the existence of deepfake technology means that malicious actors can cast doubt on legitimate images by simply claiming the images are made with AI.
So, what is to be done? Farid highlights a range of interventions to mitigate such harms, including legal requirements to mark AI content with metadata and imperceptible watermarks, limits on what prompts should be allowed by services, and systems to link user identities to created content. In addition, social media content moderators should ban harmful images and videos. Furthermore, Farid calls for digital media literacy to be part of the standard educational curriculum.
Farid summarizes the authentication techniques that can be used by experts to sort the real from the synthetic, and explores the policy landscape around harmful content. Finally, Farid asks researchers to stop and question if their research output can be misused and, if so, whether to take steps to prevent misuse or even abandon the project altogether. Just because something can be created does not mean it must be created.
Journal
PNAS Nexus
Article Title
Mitigating the harms of manipulated media: Confronting deepfakes and digital deception
Article Publication Date
29-Jul-2025
COI Statement
H.F. is the co-founder and Chief Science Officer at GetReal Labs, an advisor to the Content Authenticity Initiative, serves on the Board of Directors of the not for profit Cyber Civil Rights Initiative, and is a LinkedIn Scholar.
AI-Powered brain stimulation at home could enhance concentration, new research finds
University of Surrey
Image: Professor Roi Cohen Kadosh. Credit: University of Surrey.
A personalised brain stimulation system powered by artificial intelligence (AI) that can safely enhance concentration from home has been developed by researchers from the University of Surrey, the University of Oxford and Cognitive Neurotechnology. Designed to adapt to individual characteristics, the system could help people improve focus during study, work, or other mentally demanding tasks.
Published in npj Digital Medicine, the study is based on a patented approach that uses non-invasive brain stimulation alongside adaptive AI to maximise its impact. The technology uses transcranial random noise stimulation (tRNS) – a gentle and painless form of electrical brain stimulation – and an AI algorithm that learns to personalise stimulation based on individual features, including attention level and head size. By tailoring stimulation intensity to these characteristics, the system identified optimal settings without the need for expensive MRI scans, making the personalisation scalable and cost-effective.
The AI was trained using data from 103 people aged 18 to 35, who completed 290 home-based sessions using CE-marked (European Union standard) headgear and a tablet-based sustained attention task. The system was then evaluated in a double-blind study involving 37 new participants. Those who received personalised AI-guided stimulation showed significantly better performance than during standard or placebo stimulation. The strongest improvements were seen in individuals who initially showed lower levels of attention.
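The personalisation step described above (choosing a stimulation intensity from individual features such as baseline attention and head size) can be sketched as a simple scoring loop. The scoring rule below is a made-up stand-in for the trained model in the study; the feature names, ranges, and relationship are all assumptions for illustration only:

```python
def choose_intensity(attention: float, head_size_cm: float) -> float:
    """Pick a stimulation intensity (mA) for one user from a candidate grid.

    Illustrative only: the study's model learned its personalisation from
    290 home-based sessions; here we fake a 'predicted gain' function that
    peaks at a per-user target intensity.
    """
    candidates = [0.5 + 0.1 * i for i in range(16)]  # 0.5 .. 2.0 mA grid

    def predicted_gain(intensity: float) -> float:
        # Hypothetical relationship: lower baseline attention benefits from
        # stronger stimulation; a larger head attenuates delivered current.
        target = 1.0 + (1.0 - attention) * 0.8 + (head_size_cm - 56.0) * 0.01
        return -(intensity - target) ** 2  # peaks at the personalised target

    return max(candidates, key=predicted_gain)
```

The real system would also learn safety bounds from data, consistent with the study's finding that the AI avoided intensities that could impair performance.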
Professor Roi Cohen Kadosh, Head of Psychology at the University of Surrey, Founder of Cognite Neurotechnology Ltd and lead author of the study, said:
"Our modern world constantly competes for our attention. What is exciting about this work is that we have shown it is possible to safely and effectively enhance cognitive performance using a personalised system that people can use independently at home. This opens new possibilities for improving sustained attention, learning, and other cognitive abilities in a way that is accessible, adaptive, and scalable.
"Our work highlights the growing role of AI and wearable neurotechnology in enabling personalised, real-world cognitive enhancement, with potential applications across education, training, and future clinical use."
The study found no serious side effects and the frequency and severity of sensations during stimulation were no different from those experienced during placebo. The AI also helped avoid stimulation levels that could impair performance – something previous non-personalised methods could not achieve.
[ENDS]
Notes to editors
Professor Cohen Kadosh is available for interview; please contact mediarelations@surrey.ac.uk to arrange.
The full paper is available at: https://www.nature.com/articles/s41746-025-01744-6
Journal
npj Digital Medicine
Article Title
Personalized home based neurostimulation via AI optimization augments sustained attention
Article Publication Date
29-Jul-2025
Researchers create ‘virtual scientists’ to solve complex biological problems
AI-powered scientists
There may be a new artificial intelligence-driven tool to turbocharge scientific discovery: virtual labs.
Modeled after a well-established Stanford School of Medicine research group, the virtual lab is complete with an AI principal investigator and seasoned scientists.
“Good science happens when we have deep, interdisciplinary collaborations where people from different backgrounds work together, and often that’s one of the main bottlenecks and challenging parts of research,” said James Zou, PhD, associate professor of biomedical data science who led a study detailing the development of the virtual lab. “In parallel, we’ve seen this tremendous advance in AI agents, which, in a nutshell, are AI systems based on language models that are able to take more proactive actions.”
People often think of large language models, the type of AI harnessed in this study, as simple question-and-answer bots. “But these are systems that can retrieve data, use different tools, and communicate with each other and with us through human language,” Zou said. (The collaboration shown through these AI models is an example of agentic or agential AI, a structure of AI systems that work together to solve complex problems.)
The leap in capability gave Zou the idea to start training these models to mimic top-tier scientists in the same way that they think critically about a problem, research certain questions, pose different solutions based on a given area of expertise and bounce ideas off one another to develop a hypothesis worth testing. “There’s no shortage of challenges for the world’s scientists to solve,” said Zou. “The virtual lab could help expedite the development of solutions for a variety of problems.”
Already, Zou’s team has been able to demonstrate the AI lab’s potential after tasking the “team” to devise a better way to create a vaccine for SARS-CoV-2, the virus that causes COVID-19. And it took the AI lab only a few days.
A paper describing the findings of the study will be published July 29 in Nature. Zou and John Pak, PhD, a scientist at Chan Zuckerberg Biohub, are the senior authors of the paper. Kyle Swanson, a computer science graduate student at Stanford University, is the lead author.
Running a virtual lab
The virtual lab begins a research project just like any other human lab — with a problem to solve, presented by the lab’s leader. The human researcher gives the AI principal investigator, or AI PI, a scientific challenge, and the AI PI takes it from there.
“It’s the AI PI’s job to figure out the other agents and expertise needed to tackle the project,” Zou said. For the SARS-CoV-2 project, for instance, the PI agent created an immunology agent, a computational biology agent and a machine learning agent. And, in every project, no matter the topic, there’s one agent that assumes the role of critic. Its job is to poke holes, caution against common pitfalls and provide constructive criticism to other agents.
Zou and his team equipped the virtual scientists with tools and software systems, such as the protein modeling AI system AlphaFold, to better stimulate creative “thinking” skills. The agents even created their own wish list. “They would ask for access to certain tools, and we’d build it into the model to let them use it,” Zou said.
As research labs go, the virtual team runs a swift operation. Just like Zou’s research group, the virtual lab has regular meetings during which agents generate ideas and engage in a conversational back-and-forth. They also have one-on-one meetings, allowing lab members to meet with the PI agent individually to discuss ideas.
But unlike human meetings, these virtual gatherings take a few seconds or minutes. On top of that, AI scientists don’t get tired, and they don’t need snacks or bathroom breaks, so multiple meetings run in parallel.
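The orchestration pattern described above (a PI agent convening specialist agents plus a standing critic, with every exchange logged for human audit) can be sketched in a few lines. Everything here is a simplified stand-in: the role names are taken from the article, but the `speak` method merely fakes what would be a real language-model call:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    expertise: str
    transcript: list = field(default_factory=list)

    def speak(self, prompt: str) -> str:
        # Stand-in for an LLM call; a real system would send the role,
        # expertise, and meeting history to a language model here.
        reply = f"[{self.role}] thoughts on: {prompt}"
        self.transcript.append(reply)
        return reply

def run_meeting(pi: Agent, team: list, topic: str, rounds: int = 2) -> list:
    """Simulate the lab-meeting loop: the PI poses the agenda, each agent
    (including the critic) responds for a few rounds, and everything is
    logged so human researchers can audit and redirect the project."""
    log = [pi.speak(f"Agenda: {topic}")]
    for _ in range(rounds):
        for agent in team:
            log.append(agent.speak(topic))
    return log

pi = Agent("AI PI", "project leadership")
team = [Agent("Immunology", "nanobody biology"),
        Agent("ML", "protein modelling"),
        Agent("Critic", "error checking")]
log = run_meeting(pi, team, "nanobody design for SARS-CoV-2")
```

Because nothing here waits on a human, many such meetings can run in parallel, which is what makes the seconds-per-meeting pace the article describes possible.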
“By the time I’ve had my morning coffee, they’ve already had hundreds of research discussions,” Zou said during the RAISE Health Symposium, during which he presented on this work.
Moreover, the virtual lab is an independent operation. Aside from the initial prompt, the main guideline consistently given to the AI lab members is budget-related, barring any extravagant or outlandish ideas that aren’t feasible to validate in the physical lab. Not one prone to micromanagement — in the real or virtual world — Zou estimates that he or his lab members intervene about 1% of the time.
“I don’t want to tell the AI scientists exactly how they should do their work. That really limits their creativity,” Zou said. “I want them to come up with new solutions and ideas that are beyond what I would think about.”
But that doesn’t mean they’re not keeping a close eye on what’s going on — each meeting, exchange and interaction in the virtual lab is captured via a transcript, allowing human researchers to track progress and redirect the project if needed.
SARS-CoV-2 and beyond
Zou’s team put the virtual lab to the test by asking it to devise a new basis for a vaccine against recent COVID-19 variants. Instead of opting for the tried-and-true antibody (a molecule that recognizes and attaches to a foreign substance in the body), the AI team opted for a more unorthodox approach: nanobodies, a fragment of an antibody that’s smaller and simpler.
“From the beginning of their meetings the AI scientists decided that nanobodies would be a more promising strategy than antibodies — and they provided explanations. They said nanobodies are typically much smaller than antibodies, so that makes the machine learning scientist’s job much easier, because when you computationally model proteins, working with smaller molecules means you can have more confidence in modeling and designing them,” Zou said.
So far, it seems like the AI team is onto something. Pak’s team took the nanobody structural designs from the AI researchers and created them in his real-world lab. Not only did they find that the nanobody was experimentally feasible and stable, they also tested its ability to bind to one of the new SARS-CoV-2 variants — a key factor in determining the effectiveness of a new vaccine — and saw that it clung tightly to the virus, more so than existing antibodies designed in the lab.
They also measured off-target effects, or whether the nanobody errantly binds to something other than the targeted virus, and found it didn’t stray from the COVID-19 spike protein. “The other thing that’s promising about these nanobodies is that, in addition to binding well to the recent COVID strain, they’re also good at binding to the original strain from Wuhan from five years ago,” Zou said, referring to the nanobody’s potential to ground a broadly effective vaccine. Now, Zou and his team are analyzing the nanobody’s ability to help create a new vaccine. And as they do, they’re feeding the experimental data back to the AI lab to further hone the molecular designs.
The research team is eager to apply the virtual lab to other scientific questions, and they’ve recently developed agents that act as sophisticated data analysts, capable of reassessing previously published papers.
“The datasets that we collect in biology and medicine are very complex, and we’re just scratching the surface when we analyze those data,” Zou said. “Often the AI agents are able to come up with new findings beyond what the previous human researchers published on. I think that’s really exciting.”
This study was supported by the Knight-Hennessy Scholarship and the Stanford Bio-X Fellowship.
Stanford’s Human Centered AI Institute and Department of Biomedical Data Science also supported the work.
# # #
About Stanford Medicine
Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.
Journal
Nature
Method of Research
Data/statistical analysis
Subject of Research
Not applicable
Article Title
The Virtual Lab of AI Agents Designs New SARS-CoV-2 Nanobodies
Article Publication Date
29-Jul-2025
AI tools accelerate the race toward next-generation solar cells
Image: Workflow of the machine learning approach for predicting CBM, VBM, and bandgap properties in halide perovskites.
Credit: Bo Qu and Yucheng Ye from Peking University; Runyi Li from Peking University Shenzhen Graduate School.
A research team from Peking University and Peking University Shenzhen Graduate School has used artificial intelligence (AI) to quickly and accurately predict the properties of materials that could improve solar energy devices. Their algorithms predicted key properties such as the conduction band minimum (CBM), valence band maximum (VBM), and bandgap of halide perovskites.
This work provides valuable insights into the rational design of halide perovskites with tailored properties. These findings could accelerate the discovery of better-performing solar materials, paving the way for more affordable and efficient solar panels.
One of the most popular ways to produce electric energy is photovoltaic technology, which offers a sustainable and environmentally friendly way to generate electricity. Among the various materials studied for solar cell applications, halide perovskites have garnered significant attention due to their remarkable photovoltaic properties and their simple, low-cost fabrication. These materials, characterized by their ABX₃ crystal structure, can incorporate various organic and inorganic constituents, which influence optoelectronic properties such as bandgap, charge transport, and stability.
Recent advancements have pushed the power conversion efficiency (PCE) of perovskite solar cells to over 27%, with tandem configurations reaching above 30%, making them competitive with traditional silicon-based solar panels. Despite these successes, there are still challenges, including toxicity concerns related to lead content and issues with material stability. Addressing these challenges requires discovering new perovskite compositions with optimal properties, such as suitable bandgaps and energy level alignment, to improve performance and longevity.
A key aspect of optimizing perovskite performance lies in understanding and engineering the electronic band structure, particularly the bandgap, conduction band minimum (CBM), and valence band maximum (VBM). The bandgap determines the spectral range of sunlight the material can absorb, while the alignment of the CBM and VBM governs charge separation and transport efficiency. Precise control over these parameters is crucial for minimizing recombination losses and maximizing device efficiency.
Traditional experimental and theoretical approaches, like high-throughput screening and density functional theory (DFT) calculations, though effective, are often time-consuming and resource-intensive. Therefore, there is a growing need for efficient, data-driven strategies to accelerate the identification and design of promising perovskite materials. Machine learning (ML), with its ability to analyze large datasets and uncover complex patterns, has emerged as a powerful tool in this context, enabling rapid prediction of key properties and facilitating rational materials design for photovoltaic applications.
Data-driven ML is efficient, eco-friendly, and cost-effective, yet prior ML research has largely been limited to inorganic halide perovskites, lacking comprehensive predictions of CBM and VBM energy levels.
The Solution: A research team from Peking University and Peking University Shenzhen Graduate School built high-accuracy ML models to predict the CBM, VBM, and bandgaps of halide perovskites, applicable to both inorganic halides and organic-inorganic hybrid halides.
The XGB (Extreme Gradient Boosting) model achieved test set R² = 0.8298 (MAE = 0.151 eV) for CBM, R² = 0.8481 (MAE = 0.149 eV) for VBM, and R² = 0.8008 (MAE = 0.285 eV) for bandgaps computed with the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional. The XGB approach also delivered test set R² = 0.9316 and MAE = 0.102 eV on a larger set of bandgaps computed with the Perdew-Burke-Ernzerhof (PBE) functional. SHapley Additive exPlanations (SHAP) analysis of the optimal models identified the dominant chemical and structural features controlling these energy levels, providing practical design guidelines for tailoring halide-perovskite band structures.
The metrics R² and MAE are standard measures of predictive performance: R² indicates the proportion of variance in the data explained by the model, with values closer to 1 representing better fits, while MAE quantifies the average prediction error in electronvolts, with lower values reflecting higher accuracy. Overall, these results highlight the models’ strong ability to reliably predict complex electronic properties, facilitating accelerated discovery and rational design of new halide perovskite materials for photovoltaic applications.
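To make these two metrics concrete, here is a minimal sketch of how R² and MAE are computed for a set of predicted versus reference bandgap values. The bandgap numbers below are hypothetical toy data, not values from the study; only the formulas match the standard definitions of the metrics.

```python
# Toy illustration: computing R^2 and MAE for predicted vs. reference
# bandgaps (in eV). The data here is made up for demonstration only.

def r_squared(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mean_absolute_error(y_true, y_pred):
    # MAE = average absolute deviation between prediction and reference
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical bandgaps (eV): reference values vs. model predictions.
ref = [1.50, 2.30, 1.10, 3.00, 1.80]
pred = [1.55, 2.20, 1.25, 2.90, 1.85]

print(round(r_squared(ref, pred), 3))            # close to 1 = good fit
print(round(mean_absolute_error(ref, pred), 3))  # average error in eV
```

Read against the reported results, an R² of about 0.93 with an MAE of roughly 0.1 eV means the model explains most of the variance in the reference bandgaps while being off by only about a tenth of an electronvolt on average.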
The Future: A future direction is to leverage both the explainability of shallow ML models and the powerful learning capabilities of deep learning models to enable efficient and eco-friendly discovery of outstanding photovoltaic perovskite materials.
The Impact: This work developed a machine learning approach for predicting comprehensive energy band properties of halide perovskites, accelerating the discovery of suitable photovoltaic materials and guiding the rational design of high-efficiency solar cells.
The research has been recently published in the online edition of Materials Futures, a prominent international journal in the field of interdisciplinary materials science research.
Reference: Yucheng Ye, Runyi Li, Bo Qu, Hantao Wang, Yueli Liu, Zhijian Chen, Jian Zhang, Lixin Xiao. Machine learning for energy band prediction of halide perovskites[J]. Materials Futures. DOI: 10.1088/2752-5724/adeead
Journal
Materials Futures