Monday, December 01, 2025

Small changes make some AI systems more brain-like than others


For visual AI systems, selecting the right blueprint may accelerate learning



Johns Hopkins University





Artificial intelligence systems that are designed with a biologically inspired architecture can simulate human brain activity before ever being trained on any data, according to new research from Johns Hopkins University.

The findings, published in Nature Machine Intelligence, challenge conventional approaches to building AI by prioritizing architectural design over the kind of large-scale training that takes months, costs billions of dollars and consumes thousands of megawatt-hours of energy.

“The way that the AI field is moving right now is to throw a bunch of data at the models and build compute resources the size of small cities. That requires spending hundreds of billions of dollars. Meanwhile, humans learn to see using very little data,” said lead author Mick Bonner, assistant professor of cognitive science at Johns Hopkins University. “Evolution may have converged on this design for a good reason. Our work suggests that architectural designs that are more brain-like put the AI systems in a very advantageous starting point.”

Bonner and a team of scientists focused on three classes of network designs that AI developers commonly use as blueprints for building their AI systems: transformers, fully connected networks, and convolutional networks. 

The scientists repeatedly modified the three blueprints, or the AI architectures, to build dozens of unique artificial neural networks. Then, they exposed these new and untrained AI networks to images of objects, people, and animals and compared the models’ responses to the brain activity of humans and primates exposed to the same images.
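
The release does not describe the comparison pipeline in detail, but a standard approach in this literature is to extract activations from a network and test how well their geometry matches neural recordings, for example via representational similarity analysis (RSA). A minimal sketch of that idea, assuming PyTorch/torchvision and placeholder data (not the authors' actual code or stimuli):

```python
import torch
import numpy as np
from scipy.stats import spearmanr
from torchvision.models import alexnet  # a classic convolutional architecture

# Untrained network: random weights, no training data seen.
model = alexnet(weights=None).eval()

# Placeholder stimuli (N, 3, 224, 224) and hypothetical brain responses
# (N, n_voxels) recorded while subjects viewed the same N images.
images = torch.randn(32, 3, 224, 224)
brain_responses = np.random.randn(32, 500)

# Extract activations from an intermediate layer via a forward hook.
feats = {}
def hook(_, __, output):
    feats["layer"] = output.flatten(1).detach().numpy()
model.features[8].register_forward_hook(hook)
with torch.no_grad():
    model(images)

def rdm(x):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1 - np.corrcoef(x)

# RSA score: rank correlation between the model's and the brain's RDMs.
iu = np.triu_indices(len(images), k=1)
score, _ = spearmanr(rdm(feats["layer"])[iu], rdm(brain_responses)[iu])
print(f"untrained-model vs. brain RSA: {score:.3f}")
```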

When transformers and fully connected networks were modified by giving them many more artificial neurons, they showed little change. Tweaking the architectures of convolutional neural networks in a similar way, however, allowed the researchers to generate activity patterns in the AI that better simulated patterns in the human brain. 

The untrained convolutional neural networks rivaled conventional AI systems, which generally are exposed to millions or billions of images during training, the researchers said, suggesting that the architecture plays a more important role than researchers previously realized. 

“If training on massive data is really the crucial factor, then there should be no way of getting to brain-like AI systems through architectural modifications alone,” Bonner said. “This means that by starting with the right blueprint, and perhaps incorporating other insights from biology, we may be able to dramatically accelerate learning in AI systems.”

Next, the researchers are working on developing simple learning algorithms modeled after biology that could inform a new deep learning framework.

Asia PGI and partners unveil preview of PathGen: New AI-powered outbreak intelligence tool



Asia-led, “sovereign-by-design” platform built for secure, decentralised pathogen intelligence-sharing across borders aims to break data silos and shorten the “time to actionable insight” for outbreaks, from detection to control measures




Duke-NUS Medical School

Asia Pathogen Genomics Initiative and partners unveil preview of PathGen 

image: 

From left to right. Seated: Mr Ng Boon Heong, Executive Director & Chief Executive Officer, Temasek Foundation; Ms Ho Ching, Chairman, Temasek Trust; Mr Ong Ye Kung, Minister for Health and Coordinating Minister for Social Policies; Ms Jennie Chua, Chairman, Temasek Foundation; Mr Goh Yew Lin, Chair, Governing Board, Duke-NUS Medical School. 

Standing: Dr Lee Fook Kay, Head, Pandemic Preparedness, Temasek Foundation; Prof Vernon Lee, Chief Executive, Communicable Diseases Agency; Ms Zeng Xiaofan, Senior Program Officer, Gates Foundation; Prof Patrick Tan, Dean-designate, Duke-NUS Medical School; Prof Thomas Coffman, Dean, Duke-NUS Medical School; Prof Paul Pronyk, Director, Duke-NUS Centre for Outbreak Preparedness at the preview of PathGen // Image credit: Duke-NUS Medical School 





SINGAPORE, 1 December 2025 – The Asia Pathogen Genomics Initiative (Asia PGI) today offered the first public preview of PathGen, an AI-powered sense-making and decision-support platform for pathogen genomics and contextual data. Designed for public health practitioners, clinicians and industry, it can help detect emerging disease threats earlier, assess risks faster, and coordinate responses within and across borders, all without compromising countries’ ownership of their respective sovereign data. The objective is to strengthen health security across Asia and beyond, reducing the lives lost, livelihoods disrupted and economic impact of communicable diseases.

The preview demonstration, hosted by the Duke-NUS Centre for Outbreak Preparedness (COP) and Temasek Foundation, showcased how PathGen could integrate diverse data sources – pathogen genomics, clinical information, population data, climate, and mosquito habitat patterns – powered by the latest AI technology and foundation models to provide enhanced situational awareness and decision-making support through timely, high-quality, actionable insights. The result: faster decisions, taken with higher resolution and greater confidence – for example, on when to adjust treatment protocols, where to deploy vaccines, and how to allocate resources before outbreaks spiral out of control.

More than 100 attendees, comprising senior health officials from the region, as well as philanthropic, scientific and technology partners, were at the preview and discussed governance, strategy, and next steps for regional deployment. In a symbolic show of regional commitment, partners from Indonesia, Malaysia, Singapore, Thailand, and Vietnam placed their hands on the PathGen logo to affirm their pledge to co-create PathGen as a shared public good for regional health security, while partners from the Philippines joined virtually. Mr Ong Ye Kung, Singapore’s Minister for Health and Coordinating Minister for Social Policies, was the guest-of-honour at the PathGen preview.

PathGen is housed by Asia PGI, which is led by the Duke-NUS Centre for Outbreak Preparedness. A coordination and capacity development hub advancing pathogen genomics sequencing for early detection, control and elimination of infectious diseases in the region, Asia PGI convenes more than 50 government and academic partners across 15 countries, with Singapore as its nerve centre. Asia PGI and PathGen are propelled by three key catalytic funders – the Gates Foundation, Temasek Foundation, and Philanthropy Asia Alliance. Four development partners – Amazon Web Services (AWS), IXO, Sequentia Biotech, and Sydney ID at the University of Sydney – are contributing core technologies and expertise to bring PathGen from concept to practice.

Why now?

Traditional epidemiological surveillance reports what is happening (e.g., counts of disease cases and hospitalisations), while genomic surveillance has the potential to reveal who is being infected, where, and how infections emerge in populations. But today’s systems are often hindered by siloed, non-interoperable databases that do not share information with one another, limited contextual data, and data sovereignty barriers and policy constraints that slow responses and limit preparedness. Urgent action is needed to overcome these barriers as rapid population growth, unprecedented mobility, climate disruptions and antimicrobial resistance are driving more frequent and complex disease outbreaks across human and animal populations.

Recent AI breakthroughs, which underpin PathGen, allow better synthesis of genomic, clinical, population, environmental and mobility data to support timely clinical and public health decisions. A “federated”, “sovereign-by-design” platform such as PathGen shares only the analytics; the underlying raw data is never moved or centralised in one location and remains under the control of the respective country or owner, enabling cooperation without compromising data integrity or eroding trust.
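
PathGen's internals are not public, but the federated pattern described above is simple to illustrate: each country computes analytics locally and transmits only aggregates, never row-level records. A minimal, hypothetical sketch (all names and fields invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class VariantSummary:
    """The only object that crosses borders: aggregate counts, no raw records."""
    country: str
    variant_counts: dict[str, int]
    total_sequences: int

class CountryNode:
    """Holds sovereign raw data; exposes only aggregate analytics."""
    def __init__(self, country, raw_sequences):
        self.country = country
        self._raw = raw_sequences  # private: never transmitted

    def local_summary(self) -> VariantSummary:
        counts = {}
        for record in self._raw:
            counts[record["lineage"]] = counts.get(record["lineage"], 0) + 1
        return VariantSummary(self.country, counts, len(self._raw))

def regional_picture(summaries):
    """Central coordinator sees only the shared aggregates."""
    merged = {}
    for s in summaries:
        for lineage, n in s.variant_counts.items():
            merged[lineage] = merged.get(lineage, 0) + n
    return merged

nodes = [
    CountryNode("SG", [{"lineage": "XBB.1.5"}, {"lineage": "BA.2.86"}]),
    CountryNode("TH", [{"lineage": "XBB.1.5"}]),
]
print(regional_picture(n.local_summary() for n in nodes))
```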

Professor Paul Pronyk, Director of Duke-NUS’ Centre for Outbreak Preparedness said: “This proof of concept shows how AI and pathogen genomics can work together to provide actionable intelligence for clinicians and public health authorities. By sharing only essential insights, countries can respond faster to outbreaks while strengthening trust and sovereignty.”

Dr Lee Fook Kay, Head of Pandemic Preparedness, Temasek Foundation said, “Every delay between detecting a pathogen and making the right public health decision costs lives. Temasek Foundation is catalysing PathGen, as it can integrate genomic information with other relevant surveillance, population and environmental data sources into timely insights that health authorities can act upon. A shared intelligence system that protects sovereignty, cuts response time, and stops outbreaks before they become crises – that’s the future of health security and preparedness!”

Shaun Seow, CEO of Philanthropy Asia Alliance, added: “PathGen fills a critical gap with a decision-support platform built for Asia’s needs and complexities. It enables shared intelligence without compromising data sovereignty, helping us better prepare for the next pandemic. Through our Health for Human Potential Community, we’re proud to support this effort to strengthen public health resilience across the region.”

What’s next?

PathGen will advance from proof-of-concept towards a launch-ready platform over the next 18 months, with pilots from early 2026 and a staged roll-out through 2027.

These efforts will be supported through the Asia PGI network of country partners. Country-level engagement aims to help define priorities and technical needs; establish plans for secure in-country deployment; set governance and benefit-sharing arrangements; deliver core analytics and decision support tools with integration to national systems; build capacity for public health laboratories and implementation teams; and provide regular briefings and demonstrations to align partners on strategy, governance, and next steps.

Information on PathGen’s development will be updated on the PathGen website.

 

### 

 

About Duke-NUS Medical School

Duke-NUS is Singapore’s flagship graduate entry medical school, established in 2005 with a strategic, government-led partnership between two world-class institutions: Duke University School of Medicine and the National University of Singapore (NUS). Through an innovative curriculum, students at Duke-NUS are nurtured to become multi-faceted ‘Clinicians Plus’ poised to steer the healthcare and biomedical ecosystem in Singapore and beyond. A leader in ground-breaking research and translational innovation, Duke-NUS has gained international renown through its five signature research programmes and 10 centres. The enduring impact of its discoveries is amplified by its successful Academic Medicine partnership with Singapore Health Services (SingHealth), Singapore’s largest healthcare group. This strategic alliance has led to the creation of 15 Academic Clinical Programmes, which harness multi-disciplinary research and education to transform medicine and improve lives.   

For more information, please visit www.duke-nus.edu.sg

 

About Temasek Foundation

Temasek Foundation supports a diverse range of programmes that uplift lives and communities in Singapore and Asia. Temasek Foundation’s programmes are made possible through philanthropic endowments gifted by Temasek, as well as gifts and other contributions from other donors. These programmes strive to deliver positive outcomes for individuals and communities now, and for generations to come. Collectively, Temasek Foundation’s programmes strengthen social resilience; foster international exchange and catalyse regional capabilities; advance science; and protect the planet.

For more information, please visit www.temasekfoundation.org.sg

Schaeffler-NTU Corporate Lab to advance AI-enabled humanoid robotics




Nanyang Technological University

Launch of the Schaeffler-NTU Corporate Lab: Intelligent Mechatronics Hub 

image: 

[From left to right] Mr Maximilian Fiedler, Regional CEO Asia Pacific and Managing Director Schaeffler (Singapore); Mr Uwe Wagner, CTO, Schaeffler AG; Dr Tan See Leng, Minister for Manpower and Minister-in-charge of Energy and Science & Technology, Ministry of Trade and Industry; Prof Lam Khin Yong, Vice President (Industry), NTU and Prof Christian Wolfrum, Deputy President and Provost, NTU


Credit: NTU Singapore





Nanyang Technological University, Singapore (NTU Singapore) and the leading global Motion Technology company Schaeffler have officially launched the next phase of their corporate laboratory partnership to drive research and innovation in AI-enabled humanoid robotics.

Gracing the launch of the new Schaeffler-NTU Corporate Lab: Intelligent Mechatronics Hub today as Guest of Honour was Dr Tan See Leng, Minister for Manpower and Minister-in-charge of Energy and Science & Technology, Ministry of Trade and Industry.

Located on NTU Singapore’s campus, the new 900-square-metre facility will contribute to Singapore’s strategic goal of strengthening advanced manufacturing and robotics. It marks another milestone in the collaboration between NTU and Schaeffler, which started in 2017.

The corporate laboratory is supported by the National Research Foundation, Singapore (NRF) under the Research, Innovation and Enterprise (RIE) 2025 plan, and developed in partnership with the Singapore Economic Development Board (EDB).

It will focus on advancing collaborative robotics, autonomous mobile robot platforms and assistive robotic systems, targeting applications in manufacturing, logistics and healthcare.

The lab will also collaborate with researchers from other institutions of higher learning, further reinforcing Singapore’s position as a regional hub for intelligent automation and humanoid robotics innovation.

It is part of Schaeffler’s global Schaeffler Hub for Advanced Research (SHARE) network that collaborates with leading universities worldwide through its company-on-campus concept.

Uwe Wagner, Chief Technology Officer at Schaeffler AG, said: “The next phase of collaboration at the Schaeffler Hub for Advanced Research at NTU marks a significant milestone in our long-standing partnership and reinforces our commitment to pioneering innovation in robotics and artificial intelligence. With a focus on advancing technologies for humanoid robotics, this partnership represents a key step forward in our holistic agenda to drive progress in this future field. Drawing on expertise across our eight product families, Schaeffler is best equipped to shape the future of humanoid robotics. By working closely with leading researchers at NTU, we strive to accelerate development and deliver value that resonates far beyond the regional level.”

Prof Lam Khin Yong, NTU Vice President (Industry), added: “This expanded collaboration with Schaeffler reinforces NTU's position as a leading research university with strong multi-party partnerships between academia, industry, and public agencies. The corporate lab provides a platform for our researchers, doctoral candidates, and students to work on challenges in robotics alongside industry experts. We have also collaborated closely with Schaeffler engineers to develop robots that can co-work with humans, with advanced sensors improving sensitivity and safety, which has direct industrial impact. I’m confident that our innovations can boost the manufacturing sector and shape the future of autonomous and assistive robotics in Singapore and beyond.”

Cindy Koh, Executive Vice President, EDB, said: “Schaeffler's continued investments in Singapore have contributed important capabilities to our advanced manufacturing ecosystem, and created highly skilled research, engineering and corporate jobs. The expanded corporate lab builds on the success of Schaeffler's longstanding partnership with Singapore's research community and universities, helping to connect academic research with real-world industry applications. This aligns with Singapore’s strategic interest to increase adoption of robotics and embodied AI in advanced manufacturing and unlock new opportunities across industries.”

Since the collaboration began in 2017, the NTU–Schaeffler partnership has produced numerous innovations.

These include a touch-and-force visualisation technology that enhances the precision and safety of robots in industrial settings through real-time sensing, and a universal soft gripper that can handle objects with diverse geometries, stiffness levels and surface properties, boosting productivity and efficiency in manufacturing and supply chain applications.

Beyond research, the partnership supports talent development by training PhD, Master’s, and undergraduate students, providing them with hands-on experience through working alongside Schaeffler engineers and researchers on real-world projects. Many alumni of the programme have since assumed leadership positions in academia and industry, contributing to Singapore’s deep technology ecosystem and advanced manufacturing sectors.

SHARE at NTU will further enhance Schaeffler’s innovation footprint in Asia and support NTU’s continued drive for interdisciplinary research and industry collaboration to address some of the world’s most critical challenges.


How does AI think? KAIST achieves first visualization of the internal structure behind AI decision-making​




The Korea Advanced Institute of Science and Technology (KAIST)

How Does AI Think? KAIST Achieves First Visualization of the Internal Structure Behind AI Decision-Making​ 

image: 

(From left) PhD candidate Dahee Kwon, PhD candidate Sehyun Lee, Professor Jaesik Choi


Credit: KAIST




Although deep learning–based image recognition technology is rapidly advancing, it remains difficult to clearly explain the criteria an AI model uses internally to observe and judge images. In particular, techniques for analyzing how large-scale models combine various concepts (e.g., cat ears, car wheels) to reach a conclusion have long been recognized as a major unsolved challenge.

KAIST (President Kwang Hyung Lee) announced on the 26th of November that Professor Jaesik Choi’s research team at the Kim Jaechul Graduate School of AI has developed a new explainable AI (XAI) technology that visualizes the concept-formation process inside a model at the level of circuits, enabling humans to understand the basis on which AI makes decisions.

The study is evaluated as a significant step forward that allows researchers to structurally examine “how AI thinks.”

Inside deep learning models, there exist basic computational units called neurons, which function similarly to those in the human brain. Neurons detect small features within an image—such as the shape of an ear, a specific color, or an outline—and compute a value (signal) that is transmitted to the next layer.

In contrast, a circuit refers to a structure in which multiple neurons are connected to jointly recognize a single meaning (concept). For example, to recognize the concept of cat ear, neurons detecting outline shapes, neurons detecting triangular forms, and neurons detecting fur-color patterns must activate in sequence, forming a functional unit (circuit).

Up until now, most explanation techniques have taken a neuron-centric approach based on the idea that “a specific neuron detects a specific concept.” However, in reality, deep learning models form concepts through cooperative circuit structures involving many neurons. Based on this observation, the KAIST research team proposed a technique that expands the unit of concept representation from “neuron → circuit.”

The research team’s newly developed technology, Granular Concept Circuits (GCC), is a novel method that analyzes and visualizes how an image-classification model internally forms concepts at the circuit level.

GCC automatically traces circuits by computing Neuron Sensitivity and Semantic Flow. Neuron Sensitivity indicates how strongly a neuron responds to a particular feature, while Semantic Flow measures how strongly that feature is passed on to the next concept. Using these metrics, the system can visualize, step-by-step, how basic features such as color and texture are assembled into higher-level concepts.
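
The paper defines Neuron Sensitivity and Semantic Flow precisely; the sketch below only conveys the general flavour of such metrics using simple activation- and gradient-based proxies on a toy CNN. It is an illustration under assumed definitions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Toy two-layer CNN; the real GCC method analyzes large image classifiers.
conv1 = nn.Conv2d(3, 8, 3, padding=1)
conv2 = nn.Conv2d(8, 16, 3, padding=1)

x = torch.randn(1, 3, 32, 32, requires_grad=True)
a1 = torch.relu(conv1(x))   # upstream neurons (channels of conv1)
a1.retain_grad()
a2 = torch.relu(conv2(a1))  # downstream neurons (channels of conv2)

# Proxy for "neuron sensitivity": how strongly each downstream channel
# responds to this input overall.
sensitivity = a2.mean(dim=(0, 2, 3))  # one score per conv2 channel

# Proxy for "semantic flow" from upstream channel i to downstream channel j:
# gradient of channel j's mean activation w.r.t. channel i's activation map.
j = sensitivity.argmax()              # most responsive downstream unit
a2[0, j].mean().backward()
flow = a1.grad[0].abs().mean(dim=(1, 2))  # one score per conv1 channel

print("top downstream channel:", int(j))
print("strongest upstream contributors:", flow.topk(3).indices.tolist())
```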

The team conducted experiments in which specific circuits were temporarily disabled (ablation). As a result, when the circuit responsible for a concept was deactivated, the AI’s predictions actually changed.

In other words, the experiment directly demonstrated that the corresponding circuit indeed performs the function of recognizing that concept.
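
The ablation test itself is straightforward to picture: silence the activations of the neurons in a candidate circuit and check whether the model's prediction changes. A minimal sketch in PyTorch, with placeholder channel indices rather than circuits actually discovered by GCC:

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # pretrained weights would be used in practice
circuit_channels = [12, 45, 101]       # placeholder: neurons in one layer of a circuit

def ablate(module, inputs, output):
    output[:, circuit_channels] = 0.0  # silence the circuit's neurons
    return output

x = torch.randn(1, 3, 224, 224)        # placeholder image
with torch.no_grad():
    before = model(x).argmax(1).item()
    handle = model.layer3.register_forward_hook(ablate)
    after = model(x).argmax(1).item()  # same input, circuit disabled
    handle.remove()

print(f"prediction before ablation: {before}, after: {after}")
```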

This study is regarded as the first to reveal, at a fine-grained circuit level, the actual structural process by which concepts are formed inside complex deep learning models. Through this, the research suggests practical applicability across the entire explainable AI (XAI) domain—including strengthening transparency in AI decision-making, analyzing the causes of misclassification, detecting bias, improving model debugging and architecture, and enhancing safety and accountability.

The research team stated, “This technology shows the concept structures that AI forms internally in a way that humans can understand,” adding that “this study provides a scientific starting point for researching how AI thinks.”

Professor Jaesik Choi emphasized, “Unlike previous approaches that simplified complex models for explanation, this is the first approach to precisely interpret the model’s interior at the level of fine-grained circuits,” and added, “We demonstrated that the concepts learned by AI can be automatically traced and visualized.”

This study, with Ph.D. candidates Dahee Kwon and Sehyun Lee from KAIST Kim Jaechul Graduate School of AI as co–first authors, was presented on October 21 at the International Conference on Computer Vision (ICCV).
Paper title: Granular Concept Circuits: Toward a Fine-Grained Circuit Discovery for Concept Representations
Paper link: https://openaccess.thecvf.com/content/ICCV2025/papers/Kwon_Granular_Concept_Circuits_Toward_a_Fine-Grained_Circuit_Discovery_for_Concept_ICCV_2025_paper.pdf

This research was supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP) under the “Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation” project, the AI Research Hub Project, and the KAIST AI Graduate School Program, and was carried out with support from the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) at the KAIST Center for Applied Research in Artificial Intelligence.

 

 

New AI could teach the next generation of surgeons



Doctors too busy? AI offers med students real-time expert feedback



Johns Hopkins University

VIDEO: AI Could Teach the Next Generation of Surgeons 

video: 

Amid an increasingly acute surgeon shortage, artificial intelligence could help fill the gap, coaching medical students as they practice surgical techniques.

A new tool, developed at Johns Hopkins University and trained on videos of expert surgeons at work, offers students real-time personalized advice as they practice suturing. Initial trials suggest AI can be a powerful substitute teacher for more experienced students.


Credit: Johns Hopkins University





Amid an increasingly acute surgeon shortage, artificial intelligence could help fill the gap, coaching medical students as they practice surgical techniques.

A new tool, trained on videos of expert surgeons at work, offers students real-time personalized advice as they practice suturing. Initial trials suggest AI can be a powerful substitute teacher for more experienced students.

“We’re at a pivotal time. The provider shortage is ever increasing and we need to find new ways to provide more and better opportunities for practice. Right now, an attending surgeon who already is short on time needs to come in and watch students practice, and rate them, and give them detailed feedback—that just doesn’t scale,” said senior author Mathias Unberath, an expert in AI-assisted medicine who focuses on how people interact with AI. “The next best thing might be our explainable AI that shows students how their work deviates from expert surgeons.”

Developed at Johns Hopkins University, the pioneering technology was showcased and honored at the recent International Conference on Medical Image Computing and Computer Assisted Intervention.

Currently, many medical students watch videos of experts performing surgery and try to imitate what they see. There are even existing AI models that will rate students, but according to Unberath they fall short because they don’t tell students what they’re doing right or wrong.

“These models can tell you if you have high or low skill, but they struggle with telling you why,” he said. “If we want to enable meaningful self-training, we need to help learners understand what they need to focus on and why.”

The team’s model incorporates what’s known as “explainable AI,” an approach to AI that – in this example – will rate how well a student closes a wound and then also tell them precisely how to improve.

The team trained their model by tracking the hand movements of expert surgeons as they closed incisions. When students try the same task, the AI texts them immediately to tell them how they compared to an expert and how to refine their technique.
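
The release does not specify how the comparison is computed, but a common way to score a learner's hand motion against an expert's is time-series alignment such as dynamic time warping (DTW). A hypothetical sketch with synthetic trajectories:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, 2) hand trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Placeholder trajectories: (time, xy) hand positions during one suture throw.
expert = np.cumsum(np.random.randn(100, 2) * 0.01, axis=0)
student = expert + np.random.randn(100, 2) * 0.05  # noisier imitation

score = dtw_distance(student, expert)
print(f"deviation from expert trajectory (lower is better): {score:.2f}")
# Feedback could flag the trajectory segments with the largest alignment cost.
```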

“Learners want someone to tell them objectively how they did,” said first author Catalina Gomez, a Johns Hopkins PhD student in computer science. “We can calculate their performance before and after the intervention and see if they are moving closer to expert practice.”

The team performed a first-of-its-kind study to see if students learned better from the AI or by watching videos. They randomly assigned 12 medical students with suturing experience to train with one of the two methods.

All participants practiced closing an incision with stitches. Some got immediate AI feedback while others tried to compare what they did to a surgeon in a video. Then everyone tried suturing again.

Compared with students who watched videos, students coached by the AI, particularly those with more experience, learned much faster.

“In some individuals the AI feedback has a big effect,” Unberath said. “Beginner students still struggled with the task, but for students with a solid foundation in surgery, who are at the point where they can incorporate the advice, it had a great impact.”

Next the team plans to refine the model to make it easier to use. They hope to eventually create a version that students could use at home.

“We’d like to offer computer vision and AI technology that allows someone to practice in the comfort of their home with a suturing kit and a smart phone,” Unberath said. “This will help us scale up training in the medical fields. It’s really about how can we use this technology to solve problems.”

Authors include Lalithkumar Seenivasan, Xinrui Zou, Jeewoo Yoon, Sirui Chu, Ariel Leon, Patrick Kramer, Yu-Chun Ku, Jose L. Porras and Masaru Ishii, all of Johns Hopkins, and Alejandro Martin-Gomez of the University of Arkansas.






The team trained their model by tracking the hand movements of expert surgeons as they closed incisions. When students try the same task, the AI texts them immediately to tell them how they compared to an expert and how to refine their technique.

Credit: Johns Hopkins University


Can AI make us more creative? New study reveals surprising benefits of human-AI collaboration




Swansea University

Genetic Car Designer Game 

image: 

Study participants were tasked with designing a virtual car on the Genetic Car Designer Game. 


Credit: Dr Sean Walton, Swansea University




Artificial intelligence (AI) is often seen as a tool to automate tasks and replace humans, but new research from Swansea University challenges this view, showing that AI can also act as a creative, engaging and inspiring partner.

A team from the University’s Computer Science Department has conducted one of the largest studies to date on how humans collaborate with AI during design tasks. More than 800 participants took part in an online experiment using an AI-powered system that supported users as they designed virtual cars.

Unlike many AI tools that optimise solutions behind the scenes, this system used a technique called MAP-Elites to generate diverse visual design galleries. These galleries included a wide range of potential car designs, including high-performing examples, unusual ideas and some deliberately imperfect ones.
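
MAP-Elites is a well-known quality-diversity algorithm: instead of converging on a single optimum, it keeps the best solution found in each cell of a grid of behavioural niches, which is what yields a gallery spanning strong, unusual and deliberately weak designs. A toy sketch with a made-up two-parameter car encoding (not the study's actual system):

```python
import random

# Toy car encoding: (wheel_radius, chassis_length), both in [0, 1].
def random_car():
    return (random.random(), random.random())

def mutate(car):
    return tuple(min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in car)

def fitness(car):  # stand-in for a driving simulation
    wheel, chassis = car
    return 1.0 - abs(wheel - 0.6) - abs(chassis - 0.4)

def descriptor(car, bins=5):  # which niche the design occupies
    return tuple(min(bins - 1, int(g * bins)) for g in car)

archive = {}  # niche -> (fitness, car); one elite kept per niche
for _ in range(2000):
    parent = random.choice(list(archive.values()))[1] if archive else random_car()
    child = mutate(parent)
    niche, f = descriptor(child), fitness(child)
    if niche not in archive or f > archive[niche][0]:
        archive[niche] = (f, child)

# The "gallery" shown to users: one elite per niche, good and bad alike.
for niche, (f, car) in sorted(archive.items()):
    print(niche, round(f, 2), tuple(round(g, 2) for g in car))
```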

Turing Fellow Dr Sean Walton, Associate Professor of Computer Science and lead author of the study, explained: “People often think of AI as something that speeds up tasks or improves efficiency, but our findings suggest something far more interesting. When people were shown AI-generated design suggestions, they spent more time on the task, produced better designs and felt more involved. It was not just about efficiency. It was about creativity and collaboration.”

A key insight from the study, published in the ACM journal Transactions on Interactive Intelligent Systems, is that traditional ways of evaluating AI design tools may be too narrow. Metrics such as how often users click or copy suggestions fail to capture the emotional, cognitive and behavioural dimensions of engagement. The Swansea team argues for more holistic evaluation methods that consider how AI systems influence how people feel, think and explore.

Dr Walton added: “Our study highlights the importance of diversity in AI output. Participants responded most positively to galleries that included a wide variety of ideas, including bad ones! These helped them move beyond their initial assumptions and explore a broader design space. This structured diversity prevented early fixation and encouraged creative risk-taking.

“As AI becomes increasingly embedded in creative fields, from engineering and architecture to music and game design, understanding how humans and intelligent systems work together is essential. As the technology evolves, the question is not only what AI can do but how it can help us think, create and collaborate more effectively.”

Read “From Metrics to Meaning: Time to Rethink Evaluation in Human–AI Collaborative Design” in full.
