Tuesday, October 07, 2025

 

Scientists agree chemicals can affect behavior, but industry scientists are more reluctant about safety testing



New research exposes divide between industry scientists and academic scientists over testing chemicals for behavioral impacts



University of Portsmouth

Image: Results from the study 'Perceptions about the use of Behavioral (Eco)Toxicology to protect human health and the environment', published in Integrated Environmental Assessment and Management.

Credit: University of Portsmouth




  • Survey of 166 international experts highlights concerns about protecting human and wildlife health from environmental pollutants

  • Less than a third of industry scientists support including behavioural tests in chemical safety assessments, compared to 80 per cent of academics and 91 per cent of government scientists

  • Despite almost all scientists (97 per cent) agreeing that chemicals can affect wildlife behaviour, most testing is done by universities rather than chemical companies, leaving gaps in safety assessment

Peer-reviewed, survey

An international study led by the University of Portsmouth has revealed reluctance among industry scientists to test chemicals for their effects on human and wildlife behaviour, despite growing evidence linking environmental pollutants to neurological disorders and behavioural changes.

The researchers surveyed 166 scientists across 27 countries working in environmental toxicology and behavioural ecology. They found that whilst 97 per cent of experts agree that contaminants can impact wildlife behaviour and 84 per cent believe they can affect human behaviour, there remains a stark divide between sectors on how to address these risks.

Industry scientists were consistently more sceptical about the reliability and necessity of behavioural testing compared to their academic and government counterparts, raising questions about potential conflicts of interest in chemical safety assessment.

The findings, published in Integrated Environmental Assessment and Management, revealed 76 per cent of academics and 68 per cent of government scientists considered behavioural experiments reliable, compared to just 30 per cent from industry. 

When asked whether regulatory authorities should consider behavioural tests when assessing chemical safety, 80 per cent of academics and 91 per cent of government scientists agreed, but less than a third (30 per cent) of industry respondents supported this approach.

The connection between chemical exposure and behavioural changes is far from new. The English language shows evidence of these links in historical phrases like “mad as a hatter” - referring to hat-makers who suffered neurological damage from mercury poisoning - and “crazy as a painter,” describing the erratic behaviour of artists exposed to lead-based paints.

Today's concerns centre on whether modern pollution could be contributing to rising rates of dementia, Alzheimer's disease, autism, and even criminal behaviour. Recent studies have linked air pollution to neurological disorders, including Parkinson's and Huntington’s disease, whilst research continues to examine the role of environmental contaminants in neurodevelopmental conditions.

Professor Alex Ford from the University of Portsmouth's Institute of Marine Sciences, who led the research, expressed concern about industry attitudes: “What worries me is that industry appears apprehensive that testing chemicals for their behavioural effects will lead to increased costs and potentially uncover effects they'd rather not have to address. When we're talking about protecting human health and wildlife, surely using the most sensitive, and thereby most protective, data should take priority over profit margins.”

While the study found that industry respondents were significantly more likely to question the reliability and relevance of behavioural testing, the pharmaceutical industry extensively uses behavioural tests in drug development and there are regulations governing behavioural impairment from substances like alcohol and cannabis.

Recent studies have shown a 34-fold increase in research papers on behavioural effects in environmental toxicology since 2000, yet there’s still reluctance to incorporate these harm measurements into regulatory frameworks. 

“Our previous research shows that whilst European law doesn't prevent regulators from introducing behavioural tests for chemicals, there are very few official testing requirements in place,” explained Marlene Ågerstrand, co-author and researcher at Stockholm University.

“This means that most studies examining how chemicals affect behaviour are carried out by university researchers rather than chemical companies, resulting in incomplete coverage of potentially harmful substances.”

The new study builds on award-winning research from 2021, when Professor Ford and international colleagues won two best paper awards for their work on chemical behavioural studies.

The researchers want behavioural testing to become a standard part of chemical safety checks, with consistent testing methods and better cooperation between industry, government and academic scientists.

“The overwhelming majority of scientists - including those in industry - agree that contaminants can affect behaviour,” said Professor Ford. “The question now is whether we have the collective will to act on that knowledge to better protect human health and the environment.”

The study surveyed scientists from academia (47 per cent), government agencies (21 per cent), and industry/consultancy (27 per cent), with the remainder working in environmental NGOs and research institutions.

It was a collaboration between researchers from the University of Portsmouth in England, Stockholm University, the Swedish University of Agricultural Sciences, the German Environment Agency (UBA), the Australian Environment Protection Agency, the US EPA, Monash University in Australia, and Baylor University in the USA.

Should regulatory authorities consider behavioural tests when assessing chemical safety?

A graph from the paper ‘Perceptions about the use of Behavioral (Eco)Toxicology to protect human health and the environment’, published in Integrated Environmental Assessment and Management.

Credit: University of Portsmouth

 

Scientists create ChatGPT-like AI model for neuroscience to build one of the most detailed mouse brain maps to date




Artificial intelligence reveals undiscovered regions of the brain from large-scale spatial transcriptomics data




Allen Institute

Image: AI-produced rendering of mouse brain regionalization overlaid with network motifs, symbolizing the fusion of artificial intelligence and neuroanatomical discovery.

Credit: University of California, San Francisco






Seattle, WASH.—October 7, 2025—In a powerful fusion of AI and neuroscience, researchers at the University of California, San Francisco (UCSF) and Allen Institute designed an AI model that has created one of the most detailed maps of the mouse brain to date, featuring 1,300 regions/subregions. This new map includes previously uncharted subregions of the brain, opening new avenues for neuroscience exploration. The findings were published today in Nature Communications. They offer an unprecedented level of detail and advance our understanding of the brain by allowing researchers to link specific functions, behaviors, and disease states to smaller, more precise cellular regions—providing a roadmap for new hypotheses and experiments about the roles these areas play.  

“It’s like going from a map showing only continents and countries to one showing states and cities,” said Bosiljka Tasic, Ph.D., director of molecular genetics at the Allen Institute and one of the study authors. “This new, detailed brain parcellation solely based on data, and not human expert annotation, reveals previously uncharted subregions of the mouse brain. And based on decades of neuroscience, new regions correspond to specialized brain functions to be discovered.” 

At the heart of this breakthrough is CellTransformer, a powerful AI model that can automatically identify important subregions of the brain from massive spatial transcriptomics datasets. Spatial transcriptomics reveals where certain brain cell types are positioned in the brain but does not reveal regions of the brain based on their composition. Now, CellTransformer allows scientists to define brain regions and subdivisions based on calculations of shared cellular neighborhoods, much like sketching a city’s borders based on the types of buildings within it. 

“Our model is built on the same powerful technology as AI tools like ChatGPT. Both are built on a ‘transformer’ framework which excels at understanding context,” said Reza Abbasi-Asl, Ph.D., associate professor of neurology and bioengineering at UCSF and senior author of the study. “While transformers are often applied to analyze the relationship between words in a sentence, we use CellTransformer to analyze the relationship between cells that are nearby in space. It learns to predict a cell's molecular features based on its local neighborhood, allowing it to build up a detailed map of the overall tissue organization.” 
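
The neighbourhood-prediction idea described above can be sketched in a few lines. This is an illustrative toy only, not the published CellTransformer architecture (the real model is a trained deep network; the weights and feature counts here are made up): each cell's spatial neighbours act as keys and values in one scaled dot-product attention step, and their attention-weighted pooling yields a predicted molecular profile for the centre cell.

```python
import math
import random

random.seed(0)

# Hypothetical setup: 5 neighbouring cells, each described by 6 expression features.
N_GENES = 6
neighbours = [[random.gauss(0, 1) for _ in range(N_GENES)] for _ in range(5)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Use the neighbourhood mean as the query; each neighbour is a key and a value.
query = [sum(col) / len(neighbours) for col in zip(*neighbours)]

# Scaled dot-product attention scores, then a softmax over the neighbours.
scores = [dot(nb, query) / math.sqrt(N_GENES) for nb in neighbours]
m = max(scores)
weights = [math.exp(s - m) for s in scores]
total = sum(weights)
weights = [w / total for w in weights]

# Attention-weighted pooling: the predicted profile for the centre cell.
predicted = [sum(w * nb[g] for w, nb in zip(weights, neighbours))
             for g in range(N_GENES)]
```

Repeating this prediction across millions of cells is what lets the model group locations with similar cellular neighbourhoods into candidate brain regions.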

This model successfully replicates known regions of the brain, such as the hippocampus; but more importantly, it can also discover previously uncatalogued, finer-grained subregions in poorly understood brain regions, such as the midbrain reticular nucleus, which plays a complex role in movement initiation and release. 

 

What Makes this Brain Map Distinct from Others 

This new brain map depicts brain regions rather than cell types; and unlike previous brain maps, CellTransformer’s is entirely data-driven, meaning its boundaries are defined by cellular and molecular data rather than human interpretation. With 1,300 regions and subregions, it also represents one of the most granular and complex data-driven brain maps of any animal to date. 

 

Role of the Allen Institute’s Common Coordinates Framework (CCF) 

The Allen Institute’s Common Coordinate Framework (CCF) served as the essential gold standard for validating CellTransformer’s accuracy. “By comparing the brain regions automatically identified by CellTransformer to the CCF, we were able to show that our data-driven method was identifying areas aligned with known expert-defined anatomical structures,” said Alex Lee, a PhD candidate at UCSF and first author of the study. “Seeing that our model produces results so similar to CCF, which is such a well-characterized and high-quality resource for the field, was reassuring. The high level of agreement with the CCF provided a critical benchmark, giving confidence that the new subregions discovered by CellTransformer may also be biologically meaningful. We are hoping to explore and validate the results with further computational and experimental studies." 

The potential of this research to unlock critical insights reaches beyond neuroscience. CellTransformer’s powerful AI capabilities are tissue agnostic: They can be used on other organ systems and tissues, including cancerous tissue, where large-scale spatial transcriptomics data is available to better understand the biology of health and disease and fuel the discovery of new treatments and therapies. 

 

About the Allen Institute
The Allen Institute is an independent, 501(c)(3) nonprofit research organization founded by philanthropist and visionary, the late Paul G. Allen. The Allen Institute is dedicated to answering some of the biggest questions in bioscience and accelerating research worldwide. The Institute is a recognized leader in large-scale research with a commitment to an open science model. Its research institutes and programs include the Allen Institute for Brain Science, the Allen Institute for Cell Science, the Allen Institute for Immunology, and the Allen Institute for Neural Dynamics. In 2016, the Allen Institute expanded its reach with the launch of The Paul G. Allen Frontiers Group, which identifies pioneers with new ideas to expand the boundaries of knowledge and make the world better. For more information, visit alleninstitute.org

Three-dimensional representation of regions/subregions in the mouse brain map created by CellTransformer. Fewer regions are shown for visual clarity.

Examples from the 1,300 regions/subregions in the mouse brain map created by CellTransformer.

Credit: University of California, San Francisco


 

2023 ocean heatwave ‘unprecedented but not unexpected’




University of Exeter




The June 2023 heatwave in northern European seas was “unprecedented but not unexpected”, new research shows.

During the heatwave, temperatures in the shallow seas around the UK (including the North Sea and Celtic Sea) reached 2.9°C above the June average for 16 days.

While unprecedented since observations began, the study warns that rapid climate change means there is now about a 10% chance of a marine heatwave of this scale occurring each year.

The June 2023 marine heatwave significantly disrupted phytoplankton blooms. Although its full impact on marine ecosystems remains to be assessed, such heatwaves can stress marine species and increase concentrations of bacteria that can harm humans.

The study was carried out by the University of Exeter, the Met Office and Cefas.

“Our findings show that marine heatwaves are a problem now – not just a risk from future climate change,” said Dr Jamie Atkins, who led the study during his PhD at Exeter, and is now at Utrecht University.

“The unprecedented nature of the June 2023 event put European marine heatwaves firmly in the public consciousness.

“However, our study shows that – in today’s climate – such events should not be unexpected.”

Co-author Professor Adam Scaife, of the University of Exeter and Head of Long Range Forecasting at the Met Office, said: “This is another example of how steady climate warming is leading to an exponential increase in the occurrence of extreme events.”

The study used a large number of climate model simulations to assess the likelihood of heatwaves at the June 2023 level or above.

It focussed on two locations:

  • In the Celtic Sea – off the south coast of Ireland – the annual chance of such a heatwave rose from 3.8% in 1993 to 13.8% now.
  • In the central North Sea, the chance rose from 0.7% in 1993 to 9.8% now.
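
The annual probabilities quoted above translate into stark odds over longer horizons. As a simple arithmetic sketch (assuming, for illustration, that years are independent), the chance of at least one such event within a decade follows from the per-year probability:

```python
def chance_within(annual_p: float, years: int) -> float:
    """Probability of at least one event occurring in the given number of years,
    treating each year as an independent trial with probability annual_p."""
    return 1 - (1 - annual_p) ** years

# Celtic Sea figures from the study: 13.8% per year today vs 3.8% in 1993.
print(round(chance_within(0.138, 10), 2))   # roughly 0.77 over a decade today
print(round(chance_within(0.038, 10), 2))   # roughly 0.32 over a decade in 1993
```

In other words, at today's annual probability, a heatwave of the June 2023 scale in the Celtic Sea becomes more likely than not within a decade.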

Previous research showed that the June 2023 marine heatwave also contributed to record-breaking temperatures and increased rainfall over the British Isles.

Explaining this, Dr Atkins said: “Warmer seas provide a source of heat off the coast, contributing to higher temperatures on land.

“Additionally, warmer air carries more moisture – and when that cools it leads to increased rainfall.”

The team say more research is now needed to investigate the impacts of marine heatwaves in the north-west European shelf seas.

Dr Atkins’ work was funded by the Natural Environment Research Council (NERC) via the GW4+ Doctoral Training Partnership.

The paper, published in the journal Communications Earth & Environment, is entitled: “Recent European marine heatwaves are unprecedented but not unexpected.”

 

Johns Hopkins researchers develop AI to predict risk of US car crashes


AI-based model can help traffic engineers to predict future sites of possible crashes.


Johns Hopkins University





In a significant step towards improving road safety, Johns Hopkins University researchers have developed an AI-based tool that can identify the risk factors contributing to car crashes across the United States and accurately predict future incidents.  

The tool, called SafeTraffic Copilot, aims to provide experts with both crash analyses and crash predictions to reduce the rising number of fatalities and injuries that happen on U.S. roads each year. 

The work, led by Johns Hopkins University researchers, is published in Nature Communications. 

“Car crashes in the U.S. continue to increase, despite decades of countermeasures, and these are complex events affected by numerous variables, like weather, traffic patterns, and driver behavior,” said senior author Hao (Frank) Yang, a professor of civil and systems engineering. “With SafeTraffic Copilot, our goal is to simplify this complexity and provide infrastructure designers and policymakers with data-based insights to mitigate crashes.” 

The team uses a type of AI known as large language models (LLMs), which are designed to process, understand, and learn from vast amounts of data. SafeTraffic Copilot was trained using text (e.g., descriptions of road conditions), numerical values (e.g., blood alcohol levels), satellite images, and on-site photography. The team’s model can also evaluate both individual and combined risk factors, offering a more detailed understanding of how these elements interact to influence crashes.  

By design, SafeTraffic Copilot incorporates a continuous learning loop so that prediction performance improves as more crash-related data is entered into the model, making it even more accurate over time. Even more importantly, by using LLMs, researchers can quantify the trustworthiness of the prediction—in other words, they can say a given prediction will be 70% accurate in a real-world scenario. 
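
One common way such stated confidences can be checked against reality is a calibration table. This is an illustrative sketch with made-up example data, not the paper's method: group predictions by the confidence the model attached to them, then compare with how often those predictions actually turned out correct.

```python
from collections import defaultdict

# Hypothetical (stated_confidence, was_correct) pairs for past predictions.
predictions = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
               (0.9, True), (0.9, True), (0.9, True), (0.9, False)]

# Bucket outcomes by the confidence the model reported.
buckets = defaultdict(list)
for conf, correct in predictions:
    buckets[conf].append(correct)

# A well-calibrated model's observed accuracy should track its stated confidence.
for conf in sorted(buckets):
    hits = buckets[conf]
    observed = sum(hits) / len(hits)
    print(f"stated {conf:.0%} -> observed {observed:.0%} over {len(hits)} cases")
```

When the observed accuracy in each bucket matches the stated confidence, statements like "this prediction will be 70% accurate" carry real meaning for decision-makers.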

“By reframing crash prediction as a reasoning task and using LLMs to integrate written and visual data, the stakeholders can move from coarse, aggregate statistics to a fine-tuned understanding of what causes specific crashes,” Yang said. 

The model gives policymakers and transportation designers a trustworthy and interpretable tool to identify combinations of factors that elevate crash risk. The data can then be used to execute evidence-based interventions and more effective infrastructure planning to save lives and reduce injuries.  

The researchers see the model as a copilot for human decision-making. 

“Rather than replacing humans, LLMs should serve as copilots—processing information, identifying patterns, and quantifying risks—while humans remain the final decision-makers,” Yang said.  

SafeTraffic Copilot has the potential to be a blueprint for responsibly integrating AI-based models into high-stakes fields, like public health and human safety. However, because LLMs operate as large black-box models, users do not know how predictions are generated, which deters their use in high-risk decision-making scenarios.  

The team plans to continue their research to better understand how AI models can be used responsibly in those settings.  

“The central focus of our ongoing research is to find the best way to combine the strengths of humans and LLMs so that decisions in high-stakes domains are not only data-driven, but also transparent, accountable, and aligned with societal values,” he added. 

Study authors include Hongru Du, assistant professor at the University of Virginia, and Johns Hopkins doctoral candidates Yang Zhao, Pu Wang, and Yibo Zhao. 

 

  ### 

  
