Wednesday, August 13, 2025

GOOD NEWS

Brain cells learn faster than machine learning, new research reveals





Cortical Labs





Melbourne, Australia - 12 August 2025 - Researchers have demonstrated that brain cells learn faster and carry out complex networking more effectively than machine learning, by comparing how a Synthetic Biological Intelligence (SBI) system known as ‘DishBrain’ and state-of-the-art reinforcement learning (RL) algorithms react to certain stimuli.

The study, ‘Dynamic Network Plasticity and Sample Efficiency in Biological Neural Cultures: A Comparative Study with Deep Reinforcement Learning’, is the first known study of its kind.

The research was led by Cortical Labs, the Melbourne-based startup which created the world’s first commercial biological computer, the CL1. The CL1, through which the research was conducted, fuses lab-cultivated neurons derived from human stem cells with silicon hardware to create a more advanced and sustainable form of AI, known as “Synthetic Biological Intelligence” (SBI).

The research investigated the complex network dynamics of in vitro neural systems using DishBrain, which integrates live neural cultures with high-density multi-electrode arrays in real-time, closed-loop game environments. By embedding spiking activity into lower-dimensional spaces, the study distinguished between ‘Rest’ and ‘Gameplay’ conditions, revealing underlying patterns crucial for real-time monitoring and manipulation. 

The analysis highlights dynamic changes in connectivity during Gameplay, underscoring the highly sample-efficient plasticity of these networks in response to stimuli. To explore whether this was meaningful in a broader context, researchers compared the learning efficiency of these biological systems with state-of-the-art deep RL algorithms such as DQN, A2C, and PPO in a Pong simulation. 

In doing so, the researchers were able to introduce a meaningful comparison between biological neural systems and deep RL, concluding that when samples are limited to a real-world time course, even these very simple biological cultures outperformed deep RL algorithms across various game performance characteristics, implying a higher sample efficiency.
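The sample-efficiency comparison described above can be sketched in miniature. The following is a hypothetical illustration only, not the study’s actual setup: a tabular Q-learning agent on a toy three-column ‘catch the ball’ task, evaluated under a fixed budget of interactions, the way one might measure reward earned per sample when samples are scarce.

```python
import random

def run_q_learning(episodes, alpha=0.5, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 'catch the ball' task:
    the paddle (action 0..2) must match the ball column (state 0..2).
    Returns the average reward earned over the fixed episode budget."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in range(3)}
    total_reward = 0
    for _ in range(episodes):
        ball = rng.randrange(3)  # state: which column the ball is in
        if rng.random() < epsilon:
            action = rng.randrange(3)  # explore
        else:
            action = max(range(3), key=lambda a: q[(ball, a)])  # exploit
        reward = 1 if action == ball else 0
        # one-step episodes: update toward the immediate reward only
        q[(ball, action)] += alpha * (reward - q[(ball, action)])
        total_reward += reward
    return total_reward / episodes

# Average reward under a fixed, real-world-scale sample budget
print(run_q_learning(500))
```

Sample efficiency here is simply reward per interaction under a capped budget; the paper’s comparison applies the same idea to DishBrain cultures versus DQN, A2C, and PPO agents in Pong.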

The research was conducted by Cortical Labs in conjunction with the Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia; IITB-Monash Research Academy, Mumbai, India; and the Wellcome Centre for Human Neuroimaging, University College London, United Kingdom.

Brett Kagan, Chief Scientific Officer at Cortical Labs, commented: “While substantial advances have been made across the field of AI in recent years, we believe actual intelligence isn’t artificial. We believe actual intelligence is biological. In this research, we set out to investigate whether elementary biological learning systems achieve performance levels that can compete with state-of-the-art deep RL algorithms. The results so far have been very encouraging. Understanding how neural activity is linked to information processing, intelligence and eventually behaviour is a core goal of neuroscience research - this paper is an important and exciting step in that journey. 

“This breakthrough was a critical proof point that led to the eventual creation of the CL1, the world’s first biological computer, to access these properties. However, this is the beginning of the journey, not the end. Through further research into Bioengineered Intelligence (BI) we believe we can unlock capabilities that far surpass anything demonstrated to date.”

Building on the original breakthrough and the launch of the CL1, Cortical Labs has published a second paper - ‘Two Roads Diverged: Pathways Towards Harnessing Intelligence in Neural Cell Cultures’ - proposing a novel approach to generating intelligent devices called Bioengineered Intelligence (BI).

Interest in using in vitro neural cell cultures embodied within structured information landscapes has rapidly grown. Whether for biomedical, basic science or information processing and intelligence applications, these systems hold significant potential. Currently, coordinated efforts have established the field of Organoid Intelligence (OI) as one pathway. 

However, specifically engineering neural circuits could be leveraged to give rise to another pathway, which the paper proposes to be Bioengineered Intelligence (BI). The research paper examines the opportunities and prevailing challenges of OI and BI, proposing a framework for conceptualising these different approaches using in vitro neural cell cultures for information processing and intelligence. 

In doing so, BI is formalised as a distinct innovative pathway that can progress in parallel with OI. Ultimately, it is proposed that while significant steps forward could be achieved with either pathway, the juxtaposition of results from each method will maximise progress in the most exciting, yet ethically sustainable, direction. 

"Our goal was to go beyond anecdotal demonstrations of biological learning and provide rigorous, quantitative evidence that living neural networks exhibit rapid and adaptive reorganization in response to stimuli—capabilities that remain out of reach for even the most advanced deep reinforcement learning systems,” added Cortical Labs’ Forough Habibollahi. “While artificial agents often require millions of training steps to show improvement, these neural cultures adapt much faster, reorganizing their activity in response to feedback. By analyzing how their electrical signals evolved over time, we found clear patterns of learning and dynamic connectivity changes that mirror key principles of real brain function, demonstrating the potential of biological systems as fast, efficient learners."

Cortical Labs’ Moein Khajehnejad added: “By converting high-dimensional spiking activity into interpretable, low-dimensional representations, we were able to uncover the internal plasticity and network reconfiguration patterns that accompany learning in biological neural cultures. These were not just statistical differences; they were real, functional reorganizations that paralleled improvements in task performance over time.

“What makes this study truly groundbreaking is that it’s the first to establish a head-to-head benchmark between synthetic biological systems and deep RL under equivalent sampling constraints. When opportunities to learn are limited, a condition closer to how animals and humans actually learn, these biological systems not only adapt faster but do so more efficiently and robustly. That’s an exciting and humbling result for the fields of AI and neuroscience alike.”

Support for Cortical Labs:
“This study strengthens the case for Bioengineered Intelligence as a powerful, adaptive substrate for computation. Bioengineered Intelligence could reshape how we think about machines - and minds. This work hints at living systems that can outlearn machines.” - Adeel Razi, Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University, Clayton, Australia.

Professor Mirella Dottori, Head of Stem Cell and Neural Modelling Lab, School of Medical, Indigenous and Health Sciences, University of Wollongong, added: “Cortical Labs’ research studies are paving the way forward in an emerging, and exciting new frontier for neuroscience, whereby in vitro neural models are being developed and used to tackle some of the most complex aspects of brain function - learning and memory - both major constituents of intelligence. The CL1 technology sets up a much-needed platform for neuroscience research in understanding brain function; however, the innovation is that it can provide a measure of ‘intelligence’ whereby neuronal functionality is determined in an interactive, dynamic approach. This is a significant step for the field. Of further significance, this technology can be applied in the longer term to study how neuronal networks and function differ in neurocognitive diseases and disorders.”

Hideaki Yamamoto, Associate Professor at the Research Institute of Electrical Communication, Tohoku University, commented: “These synthetic biological systems will certainly provide a new approach to understanding the physical substrate of brain computation. Furthermore, they may open a new class of computing, especially in tasks that the brain excels at. The CL1 will be a strong platform for putting this vision into action. When I first met the team three years ago, they had just started discussing the idea of building their own MEA system. That they have developed the CL1 and brought it to commercialisation in such a short time is deeply impressive.”

ENDS

Notes to editors

ARTICLE TITLE: Dynamic Network Plasticity and Sample Efficiency in Biological Neural Cultures: A Comparative Study with Deep Reinforcement Learning

JOURNAL: Cyborg and Bionic Systems: A Science Partner Journal

DOI: https://doi.org/10.34133/cbsystems.0336

METHOD OF RESEARCH: Experimental study

SUBJECT OF RESEARCH: Learning, synthetic biology, intelligence

ARTICLE PUBLICATION DATE: 4th August, 2025

 

ARTICLE TITLE:

Two Roads Diverged: Pathways Towards Harnessing Intelligence in Neural Cell Cultures

JOURNAL: Cell Biomaterials

DOI: https://doi.org/10.1016/j.celbio.2025.100156

METHOD OF RESEARCH: Hypothesis and Theory

SUBJECT OF RESEARCH: Intelligence, biomaterials, synthetic biology

ARTICLE PUBLICATION DATE: 8th August, 2025

 

ARTICLE TITLE: The CL1 as a platform technology to leverage biological neural system functions

JOURNAL: Nature Reviews Bioengineering

DOI: https://doi.org/10.1038/s44222-025-00340-3 

METHOD OF RESEARCH: New Technology

SUBJECT OF RESEARCH: Biotechnology, Intelligence, New Technology

ARTICLE PUBLICATION DATE: 7th July, 2025

 

About Cortical Labs

Cortical Labs is an Australian biological computing startup that merges live human neurons with computing systems to revolutionise computing. Cortical Labs combines synthetic biology with computing devices to develop a class of AI, known as “Synthetic Biological Intelligence” (SBI). Cortical Labs grows clusters of lab-cultivated neurons from human stem cells, which are then connected to silicon hardware to create the CL1, a biological computer that runs software known as the Biological Intelligence Operating System (biOS).

AI web browser assistants raise serious privacy concerns



University College London





Popular generative AI web browser assistants are collecting and sharing sensitive user data, such as medical records and social security numbers, without adequate safeguards, finds a new study led by researchers from UCL and Mediterranea University of Reggio Calabria.

The study, which will be presented and published as part of the USENIX Security Symposium, is the first large-scale analysis of generative AI browser assistants and privacy. It uncovered widespread tracking, profiling, and personalisation practices that pose serious privacy concerns, with the authors calling for greater transparency and user control over data collection and sharing practices.

The researchers analysed nine of the most popular generative AI browser extensions, such as ChatGPT for Google, Merlin, and Copilot (not to be confused with the Microsoft app of the same name). These tools, which must be downloaded and installed before use, are designed to enhance web browsing with AI-powered features like summarisation and search assistance, but were found to collect extensive personal data from users’ web activity.

Analysis revealed that several assistants transmitted full webpage content – including any information visible on screen – to their servers. One assistant, Merlin, even captured form inputs such as online banking details or health data.

Extensions like Sider and TinaMind shared user questions and information that could identify them (such as their IP address) with platforms like Google Analytics, enabling potential cross-site tracking and ad targeting.

ChatGPT for Google, Copilot, Monica, and Sider demonstrated the ability to infer user attributes such as age, gender, income, and interests, and used this information to personalise responses, even across different browsing sessions.

Only one assistant, Perplexity, did not show any evidence of profiling or personalisation.

Dr Anna Maria Mandalari, senior author of the study from UCL Electronic & Electrical Engineering, said: “Though many people are aware that search engines and social media platforms collect information about them for targeted advertising, these AI browser assistants operate with unprecedented access to users’ online behaviour in areas of their online life that should remain private. While they offer convenience, our findings show they often do so at the cost of user privacy, without transparency or consent and sometimes in breach of privacy legislation or the company’s own terms of service.

“This data collection and sharing is not trivial. Besides the selling or sharing of data with third parties, in a world where massive data hacks are frequent, there’s no way of knowing what’s happening with your browsing data once it has been gathered.”

For the study, the researchers simulated real-world browsing scenarios by creating the persona of a ‘rich, millennial male from California’, which they used to interact with the browser assistants while completing common online tasks.

These included activities in the public (logged-out) space, such as reading online news, shopping on Amazon, or watching YouTube videos.

They also included activities in the private (logged-in) space, such as accessing a university health portal, logging into a dating service or accessing pornography. The researchers assumed that users would not want this activity to be tracked due to the data being personal and sensitive.

During the simulation, the researchers intercepted and decrypted traffic between the browser assistants, their servers and third-party trackers, allowing them to analyse what data was flowing in and out in real time. They also tested whether the assistants could infer and remember user characteristics based on browsing behaviour, by asking them to summarise webpages and then asking follow-up questions, such as ‘what was the purpose of the current medical visit?’ after accessing an online health portal, to see if they had retained personal data.
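A minimal sketch of the kind of payload screening such an audit involves, assuming purely illustrative regex patterns (the study’s actual analysis pipeline is not described here, and real audits use far more robust detectors):

```python
import re

# Illustrative patterns only; hypothetical category names, not the study's taxonomy.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # crude email match
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),   # potential identifier
}

def flag_sensitive(payload: str) -> dict:
    """Report which sensitive-data categories appear in an intercepted payload."""
    return {name: bool(p.search(payload)) for name, p in SENSITIVE_PATTERNS.items()}

sample = "form=ssn%3A 123-45-6789&user=jane@example.com"
print(flag_sensitive(sample))
```

Applied to decrypted request bodies, such flags show whether form inputs or page content containing personal data left the browser, which is the core finding reported for assistants like Merlin.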

The experiments revealed that some assistants, including Merlin and Sider, did not stop recording activity when the user switched to the private space, as they are meant to do.

The authors say the study highlights the urgent need for regulatory oversight of AI browser assistants in order to protect users’ personal data. Some assistants were found to violate US data protection laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) by collecting protected health and educational information.

The study was conducted in the US, so compliance with UK/EU data laws such as GDPR was not assessed, but the authors say the observed practices would likely constitute violations in the EU and UK as well, given that privacy regulations there are more stringent.

The authors recommend that developers adopt privacy-by-design principles, such as local processing or explicit user consent for data collection.

Dr Aurelio Canino, an author of the study from UCL Electronic & Electrical Engineering and Mediterranea University of Reggio Calabria, said: “As generative AI becomes more embedded in our digital lives, we must ensure that privacy is not sacrificed for convenience. Our work lays the foundation for future regulation and transparency in this rapidly evolving space.”

Now you see me, now you don’t: how subtle ‘sponsored content’ on social media tricks us into viewing ads


Scientists find that people mostly avoid social media ads when they see them, but many ads blend in seamlessly



Frontiers





How many ads do you see on social media? It might be more than you realize. Scientists studying how ads work on Instagram-style social media have found that people are not as good at spotting them as they think. If people recognized ads, they usually ignored them - but some, designed to blend in with your friends’ posts, flew under the radar.

“We wanted to understand how ads are really experienced in daily scrolling — beyond what people say they notice, to what they actually process,” said Maike Hübner, PhD candidate at the University of Twente, corresponding author of the article in Frontiers in Psychology. “It’s not that people are worse at spotting ads. It’s that platforms have made ads better at blending in. We scroll on autopilot, and that’s when ads slip through. We may even engage with ads on purpose, because they’re designed to reflect the trends or products our friends are talking about and of course we want to keep up. That’s what makes them especially hard to resist.”


The scientists wanted to test how much time people spent looking at sponsored versus organic posts, how they looked at different areas of these different posts, and how they behaved after realizing they were looking at sponsored content. They randomly assigned 152 participants, all of whom were regular Instagram users, to one of three mocked-up social media feeds, each of which was made up of 29 posts — eight ads and 21 organic posts. 

They were asked to imagine that the feed was their own and to scroll through it as they would normally. Using eye-tracking software, the scientists measured fixations - the number of times a participant’s gaze stopped on different features of a post - and dwell time, how long those fixations lasted. A low dwell time suggests that someone merely noticed a feature, while a high dwell time might indicate they were paying attention. After each session, the scientists interviewed the participants about their experience.
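The fixation and dwell-time metrics described above reduce to a simple aggregation per area of interest. A toy sketch, using an invented gaze log (the areas of interest and durations are hypothetical, not the study’s data):

```python
# Hypothetical gaze log: (area_of_interest, fixation_duration_ms), one tuple
# per fixation, in the order the participant's gaze landed on each feature.
fixations = [
    ("disclosure", 80), ("image", 420), ("caption", 310),
    ("image", 260), ("cta_button", 530), ("disclosure", 95),
]

def summarise(fixations):
    """Count fixations and total dwell time (ms) per area of interest."""
    stats = {}
    for aoi, duration in fixations:
        count, dwell = stats.get(aoi, (0, 0))
        stats[aoi] = (count + 1, dwell + duration)
    return stats

for aoi, (count, dwell) in summarise(fixations).items():
    print(f"{aoi}: {count} fixations, {dwell} ms dwell")
```

In this invented log, the call-to-action button draws far more dwell time than the disclosure label, the same asymmetry the study reports for real participants.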

Although people did notice disclosures when they were visible, the eye-tracking data suggested that participants paid more attention to calls to action — like a link to sign up for something — which could indicate that this is how they recognize ads. Participants were also quick to recognize an ad by the profile name or verification badge of a brand’s official account, or glossy visuals, which caused participants to express distrust. 

“People picked up on design details like logos, polished images, or 'shop now' buttons before they noticed an actual disclosure,” said Hübner. “On brand posts, that label is right under the username at the top, while on influencer content or reels, it might be hidden in a hashtag or buried in the ‘read more’ section.”

Although the scientists found that the ads often went unnoticed, if people realized that the content wasn’t organic, many of them stopped engaging with the post. Dwell time dropped immediately.

#ad

This was less likely to happen to ads that blended in better, with less polished visuals and a tone and format more typical of organic content. If ad cues like disclosures or call-to-action buttons weren’t noticed right away, they got similar levels of engagement to organic posts. 

“Many participants were shocked to learn how many ads they had missed. Some felt tricked, others didn’t mind — and that last group might be the most worrying,” said Hübner. “When we stop noticing or caring that something is an ad, the boundary between persuasion and information becomes very thin.”

The scientists say these findings show that transparency goes well beyond just labelling ads. Understanding how people really process ads should lead to a rethink of platform design and regulation to make sure that people know when they’re looking at advertising. 

However, this was a lab-based study with simulated feeds, and it’s possible that studies on different cultures, age groups, or types of social media might get different results. It’s also possible that ads are even harder to recognize under real-life conditions.

“Even in a neutral, non-personalized feed, participants struggled to tell ads apart from regular content,” Hübner pointed out. “In their own feeds, which are shaped around their interests, habits, and social circles, it might be even harder to spot ads, because they feel more familiar and trustworthy.”

 

DFG funding: more reliable evaluations of statistical methods




Ludwig-Maximilians-Universität München





The German Research Foundation (DFG) is funding LMU scientist Anne-Laure Boulesteix: How can we improve research on statistical methods, asks the biostatistician, and thereby also improve their use?

The German Research Foundation (DFG) has awarded LMU biostatistician Prof. Anne-Laure Boulesteix a Reinhart Koselleck grant worth 750,000 euros. This award has been given out annually since 2002 to just a handful of established scientists at research institutions throughout Germany. It supports exceptionally innovative and promising projects in all areas of scientific endeavor. Not least, Reinhart Koselleck Projects give researchers more freedom to pursue what the DFG calls, “in a positive sense, higher-risk projects.”

Anne-Laure Boulesteix is a professor at the Institute for Medical Information Processing, Biometry and Epidemiology at the Faculty of Medicine and an associate member of the Department of Statistics at LMU, where she leads the research group for biometry with a focus on molecular medicine. She is also a PI at the Munich Center for Machine Learning and a founding member and former Scientific Board member of the LMU Open Science Center.

Her new project, “Design, interpretation, and reporting of empirical evaluations of statistical methods,” which the DFG is now funding, is situated at the interface of statistics/biometry and metascience. Methodological statistical research – that is to say, the development and investigation of statistical methods – is susceptible to the same problems that are known to impair the reliability of empirical studies in other scientific domains such as medicine, including poor study design and inadequate reporting. These problems can result in overly optimistic conclusions or in difficulties translating research results into practice.

The overarching goal of Boulesteix’s new project is therefore to strengthen the validity and utility of methodological research and literature by helping to improve the methodology for comparing statistical methods. In pursuing this metascientific approach, Boulesteix plans to draw on a variety of methods, including literature reviews, case studies, simulation studies, and Delphi surveys. The work undertaken in the project should also improve research quality in the long run in areas of empirical research such as medicine.
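As a toy illustration of the kind of simulation study mentioned above (invented for this piece, not drawn from the project itself), the following Monte Carlo experiment compares two estimators of a location parameter under outlier contamination, the basic template for empirically evaluating competing statistical methods:

```python
import random
import statistics

def simulate(n_reps=2000, n=30, contamination=0.1, seed=1):
    """Monte Carlo comparison of two location estimators (true value 0):
    mean squared error of the sample mean vs. the sample median when a
    fraction of observations comes from a heavy-tailed outlier distribution."""
    rng = random.Random(seed)
    se_mean = se_median = 0.0
    for _ in range(n_reps):
        sample = [
            rng.gauss(0, 10) if rng.random() < contamination else rng.gauss(0, 1)
            for _ in range(n)
        ]
        se_mean += statistics.fmean(sample) ** 2
        se_median += statistics.median(sample) ** 2
    return se_mean / n_reps, se_median / n_reps

mse_mean, mse_median = simulate()
print(f"MSE(mean)={mse_mean:.4f}  MSE(median)={mse_median:.4f}")
```

Design choices like the contamination rate, sample size, and number of repetitions are exactly the parameters whose reporting and justification Boulesteix’s project aims to put on firmer methodological ground.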