Thursday, December 18, 2025

Vision-impaired individuals estimate the arrival time of approaching vehicles surprisingly well






New international research shows how people with age-related macular degeneration use visual and auditory cues to judge the arrival time of approaching vehicles



Johannes Gutenberg Universitaet Mainz

Image: Screenshots from the VR scenes shown to participants in the TTC-AMD study: the first video frame of an approaching car (top panel) and the last frame before the car's disappearance.

Credit: © P.R. DeLucia et al., PLOS ONE, 2025 / CC BY 4.0





People with central vision loss can judge the motion of vehicles almost as accurately as people with normal vision, a new international study shows. Despite age-related macular degeneration (AMD), they estimated the moment an approaching car would reach them with accuracy comparable to that of a group with normal vision. These are the findings of research conducted by Johannes Gutenberg University Mainz (JGU) in collaboration with Rice University in Texas, USA, and other American and French researchers. The study, recently published in the open-access journal PLOS One, compared older adults with AMD to a control group with normal vision in virtual-reality traffic scenarios.

The new study built on the authors' earlier work, which investigated arrival-time judgments in normally sighted participants using virtual-reality methods. This time, the team wanted to understand whether people with impaired vision rely more heavily on sound and whether having both sight and sound provides an advantage over vision alone. "There are few studies that look specifically at collision judgments in people with visual impairments," explained Professor Patricia DeLucia, a perceptual and human factors psychologist at Rice University, "even though tasks like crossing a street or navigating busy environments depend on this ability."

Decisions based on vision and sound

The study's experimental design used a virtual roadway scene in which a vehicle approached the observer from a pedestrian's viewpoint. The virtual-reality system provided realistic simulations of vehicle sound, implemented by Daniel Oberfeld-Twistel, Professor of Experimental Psychology at Mainz University. Visual and auditory information were varied systematically: the scene was presented either visually, auditorily, or with both modalities available simultaneously. Participants were asked to press a button at the moment they believed the vehicle would reach them. Using data-analysis strategies developed at JGU, the study provides a detailed analysis of the perceptual cues associated with participants' arrival-time judgments and examines how features such as optical size, optical expansion, and sound intensity contributed to their estimates.
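The optical-expansion cue mentioned here is often formalized in perception research as "tau": for an object approaching at constant speed, time to contact can be estimated as the object's current optical angle divided by the rate at which that angle grows. The sketch below is purely illustrative (it is not the study's analysis code, and the car dimensions and speeds are assumed values):

```python
import math

def optical_angle(width_m, distance_m):
    """Visual angle (radians) subtended by an object of a given width at a distance."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def tau_estimate(width_m, distance_m, speed_mps, dt=0.01):
    """Estimate time-to-contact from optical expansion (the tau cue):
    current optical angle divided by its rate of change over a short interval."""
    theta_now = optical_angle(width_m, distance_m)
    theta_next = optical_angle(width_m, distance_m - speed_mps * dt)
    theta_dot = (theta_next - theta_now) / dt
    return theta_now / theta_dot

# A 1.8 m wide car, 50 m away, approaching at 10 m/s:
# the true time to contact is 50 / 10 = 5 s, and tau approximates it
# without requiring the observer to know distance or speed separately.
print(round(tau_estimate(1.8, 50.0, 10.0), 2))
```

The appeal of this cue is that it can be read directly off the retinal image; the heuristic "apparent size" cue the AMD group leaned on instead would correspond to using `optical_angle` alone, which conflates a large far car with a small near one.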

"Thanks to our advanced audiovisual simulation system and customized data analysis, we gained an almost microscopic insight into how pedestrians use auditory and visual information to estimate the arrival time of an approaching vehicle," said Professor Daniel Oberfeld-Twistel. "This goes beyond what we knew from previous studies."

Surprisingly, the group with AMD in both eyes performed very similarly to the group who had normal vision when estimating the time the vehicle would reach them. The team observed that, under purely visual conditions, older adults with AMD tended to rely somewhat more on pictorial or heuristic cues – such as the apparent size of the vehicle – compared to normally sighted participants. However, when both visual and auditory information were available, the two groups still showed comparable accuracy, and there was no clear advantage of combining sight and sound over vision alone.

No evidence yet for safe navigation in real traffic

"Our results indicate that even reduced central vision still provides useful information for judging approaching objects," explained Oberfeld-Twistel. "People with age-related macular degeneration continue to benefit from their residual vision instead of relying solely on auditory cues." He pointed out, however, that the study used deliberately simplified virtual-reality scenes with just a single approaching vehicle.

"Future work will therefore need to examine whether the findings hold in more complex environments, for example with multiple vehicles or when the vehicles are accelerating," Patricia DeLucia added. Such research could help guide developments in mobility, rehabilitation, and traffic safety.

In addition to Johannes Gutenberg University Mainz and Rice University, the research team included collaborators from the University of Iowa, Lamar University, Retina Consultants of Texas, the Davies Institute for Speech and Hearing, and the University of Toulouse. This work was supported by the National Eye Institute of the National Institutes of Health under award number R01EY030961. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

 

Publications:
P. R. DeLucia, D. Oberfeld-Twistel, J. K. Kearney, M. Cloutier, A. M. Jilla, A. Zhou et al., Visual, auditory, and audiovisual time-to-collision estimation among participants with age-related macular degeneration compared to a normal-vision group: The TTC-AMD study, PLOS One, 4 December 2025,
DOI: 10.1371/journal.pone.0337549
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0337549

D. Oberfeld, M. Wessels, D. Büttner, Overestimated time-to-collision for quiet vehicles: Evidence from a study using a novel audiovisual virtual-reality system for traffic scenarios, Accident Analysis & Prevention 175, 106778, 22 July 2022,
DOI: 10.1016/j.aap.2022.106778
https://www.sciencedirect.com/science/article/pii/S0001457522002135

 


Call your pop-pop: Unlocking conversations between generations





Washington University in St. Louis




By Leah Shaffer

Researchers at Washington University in St. Louis are investigating the conversations that happen between grandparents and grandchildren in the St. Louis area.

The work builds on the St. Louis Personality and Aging Network (SPAN) study, which started in 2007 with a group of about 1,600 participants in middle age and now follows 500 of them as they enter the grandparent years.

Although there’s evidence that intergenerational connections can benefit both young and elderly people, little work has been done looking at the content and quality of those connections. How do these conversations stack up in different cultures and genders, and how do they compare to previous generations? Mary Cox, a graduate student in psychological and brain sciences in Arts & Sciences, was also curious about how the vast cultural and technological changes of the 21st century factored into these conversations.

“Grandparents are more accessible (thanks to technology) even as people moved further away and generations don’t co-habitate anymore,” Cox said.

“Part of this project was to create a survey that was capturing what the grandparenting process was like,” said Patrick Hill, Cox’s adviser and a professor of psychological and brain sciences.

 “Despite how important grandparenting is, this is one of the first studies to really ask what’s going on in these conversations,” Hill added.

The results, now published in the journal Research in Human Development, focused on examining the topics most frequently discussed with grandchildren and whether those main topics differed based on race (comparing Black and white grandparents) or gender (grandmothers versus grandfathers). Researchers also looked at topics associated with the grandparents’ sense of social contribution: whether they have positive feelings about the future and how they help shape the future.

The study also asked participants to compare their conversations with grandchildren to what they talked about with their own grandparents (or whether they talked at all). That's where changing cultural and technological norms had the most notable effect. People live longer and have access to unprecedented communication technologies, so it was no surprise that this generation of grandparents talks with grandchildren far more than previous generations did.

No surprises were found with gender dynamics either: grandmothers tend to speak more with grandchildren than grandfathers, particularly on topics related to jobs, friends, social change and racism. This could be due to demographics, as women typically live longer than men, but women also tend to embrace the role of caretaker of family culture and history.

“Women are the keepers of these narratives and stories in their family,” Cox said. 

The study also investigated cultural dynamics between white and Black families. Again, not surprisingly, Black participants discussed race, racism and identity more frequently than white grandparents. “The talk,” about how to survive in a world with institutional racism, is common in Black households, but not necessarily only coming from parents. Grandparents, as well as other elders in the community, play a role in passing on this knowledge and experience, according to the analysis.

But there is room for nuance here, Cox said. Just because their initial sweep of survey data shows a difference in discussions about racism doesn’t mean that white grandparents aren’t talking about social issues. They may define terms like “political” differently, and further research will sort that out.

The next step will involve digging deeper into those details and getting the grandchildren’s point of view.

“We only have one side of the story right now,” Hill said. “What we don’t know right now is how the grandchildren are thinking of these relationships.”

Cox said they want to understand how grandparents “shape younger generations’ view of the world and the way they interact with the world around them.”

Next steps will also include analyzing the directions these relationships take; do grandchildren more commonly reach out to grandparents, or vice versa? Researchers also will explore how that dynamic shapes the grandchild’s life outcomes longer term, she said.

The bottom line: there is work ahead to understand the benefits and impacts of being in community with older adults.

 “The grandparenting role does seem to be salient in people’s lives, as this study is showing,” Hill said.

The research highlights the importance of both the older adult and the grandchild investing in these conversations, because they benefit the well-being of both parties, Cox said. All types of conversations are valuable, she stressed, even if not in person. Digital communication was the most common way the generations talked to each other.

“It’s just as beneficial to give older adults a call or give them a text,” Cox said.


Cox MA, Beatty-Wright JF, Wolk MW, Hill PL. Intergenerational Conversations and Social Well-Being: How Race and Gender Shape Grandparent-Grandchild Discussions. Research in Human Development. 1–15. https://doi.org/10.1080/15427609.2025.2586919

This research was supported by National Institutes of Health Grants [R01-AG045231], [R01 MH077840], and [R01-AG061162].

 

“AI advisor” helps self-driving labs share control



Inspired by investment software, a novel approach helps AI and humans work together to guide robots in the creation and optimization of next-generation materials



University of Chicago

Image: UChicago Pritzker School of Molecular Engineering Asst. Prof. Jie Xu, who has a joint appointment with Argonne National Laboratory, and Argonne staff scientist Henry Chan look at Polybot, a "self-driving lab" that uses AI to help researchers guide the materials discovery process.

Credit: Photo by Yukun Wu




“Self-driving” or “autonomous” labs are an emerging technology in which artificial intelligence guides the discovery process, helping design experiments and refine decision strategies.

While these labs have generated heated debate about whether humans or machines should lead scientific research, a new paper from Argonne National Laboratory and the University of Chicago Pritzker School of Molecular Engineering (UChicago PME) has proposed a novel answer: Both.

In a paper published today in Nature Chemical Engineering, the team led by UChicago PME Asst. Prof. Jie Xu, who has a joint appointment at Argonne, outlined an “AI advisor” model that helps humans and machines share the driver’s seat in self-driving labs. 

Inspired by the software used to help investors trade stocks, the model leverages AI’s data-processing prowess but keeps decisions in the hands of experienced researchers accustomed to making real-time choices using limited datasets.

“The advisor will perform real-time data analysis and monitor the progress of the self-driving lab’s autonomous discovery journey. If the advisor observes a decline in performance, the advisor is going to prompt the human researchers to see if they want to switch the strategy, refine the design space or so on,” said Xu. “Compared to the traditional self-driving lab where we stick with one decision strategy from the beginning to the end, this makes the entire decision workflow adaptive and boosts the performance significantly.”
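The adaptive loop Xu describes (monitor progress, detect a performance decline, prompt the researchers) can be outlined in a few lines. This is a hypothetical simplification, not the paper's implementation; the function name, window size, and tolerance are illustrative assumptions:

```python
def advisor_check(scores, window=5, tolerance=1e-3):
    """Return True when the most recent `window` experiment scores show no
    improvement over earlier results, signaling that the advisor should
    prompt the human researchers to reconsider the decision strategy."""
    if len(scores) < 2 * window:
        return False  # not enough history to judge a trend yet
    recent_best = max(scores[-window:])
    earlier_best = max(scores[:-window])
    return recent_best <= earlier_best + tolerance

# Example: early gains, then a plateau - the advisor flags the stall
# but the decision to switch strategy stays with the researchers.
history = [0.20, 0.35, 0.50, 0.58, 0.62, 0.61, 0.60, 0.61, 0.60, 0.61]
if advisor_check(history):
    print("Advisor: progress has stalled - consider switching the decision "
          "strategy or refining the design space.")
```

The key design choice mirrored here is that the advisor only raises a flag; it never changes the strategy itself, which keeps low-data, high-judgment decisions with the experienced researchers.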

Co-corresponding author Henry Chan, a staff scientist at the Nanoscience and Technology division at Argonne, said the goal is not to put either AI or humans in charge, but to have each focus on what they do best. 

“People have been focusing a lot on self-improving AI—AI that can modify its own algorithm, generate its own data set, retrain itself and all that,” Chan said. “But here we're taking a cooperative approach where humans can play a role in the process also. We want to facilitate the collaboration between human and AI to achieve co-discovery.”

Putting the AI advisor to work

The team applied the advisor model to an electronic-materials challenge, using the self-driving lab Polybot, located in Argonne’s Center for Nanoscale Materials, to study and design an electronic material called a mixed ion-electron conducting polymer (MIECP).

The MIECP created through this merger of machine and human intelligence showed a 150% increase in mixed conducting performance over MIECPs created through the previous cutting-edge technique.

It also helped identify two factors key to increasing the material’s volumetric capacitance – a larger crystalline lamellar spacing and a higher specific surface area. This advance in pure science will help future researchers better design MIECPs, said UChicago PME Assoc. Prof. Sihong Wang, also a co-corresponding author on the new paper.

“For material science research, there are two intercorrelated goals,” Wang said. “One is to improve the material’s performance or develop new performance. But to enable that, you need the second goal: a deep understanding about how different material design strategies, parameters and processing conditions will influence that performance. By making the entire space of the structure variation much larger, this AI model has helped to achieve two goals at the same time.”

“While AI is excellent at this form of data analysis, it falters at decision-making when there are few data points to guide it,” Xu said. This is where experienced human researchers excel. 

“The methodology that we use for this study offers a generalizable framework that can be adopted by other self-driving labs,” Xu said. “But basically, we cannot remove humans from the lengthy design, fabrication and test-analysis loop. We promote human-machine collaboration to boost discovery together.”

The team next looks to improve the communication not from the AI, but to it, helping humans and software better advance science together.

“Currently, the interaction is mostly one-way. Information is coming from the AI advisor, then humans take optional actions,” Chan said. “In the future, we want a tighter integration between AI and humans, where the AI can learn from human actions and modify the way it thinks in subsequent iterations, modeling the way of human decision-making.”

Citation: “Adaptive AI decision interface for autonomous electronic material discovery,” Dai et al, Nature Chemical Engineering, December 18, 2025. DOI: 10.1038/s44286-025-00318-3

 

Most Americans still get nicotine wrong




Rutgers University




Nicotine is the drug that keeps people coming back to cigarettes, but it is not the substance that causes the serious health effects seen in people who use tobacco. It is the tar and toxic chemical mix in tobacco and tobacco smoke that causes cancer, lung disease and 490,000 deaths in the U.S. each year.

Researchers have known for decades that many Americans incorrectly believe nicotine is inherently deadly, but different studies have reached different conclusions about how prevalent the misconception is.

Now, new work from Rutgers Health explains why previous studies have disagreed and may suggest strategies for reducing misconceptions and tobacco-related harms.

The study in Nicotine & Tobacco Research presented survey takers with differently worded questions about the dangers of nicotine and found that such differences could push the percentage of people answering correctly from 10% to 80%.

“The headline is that there are widespread misperceptions about nicotine’s role in health harms from smoking, and those misperceptions have been growing over time,” said Andrea Villanti, deputy director of the Rutgers Institute for Nicotine and Tobacco Studies and lead author of the study. “Our work shows that you can also move those numbers by changing how you ask the question.”

Nicotine is not harmless. The addictive substance can affect the heart and blood vessels. But authoritative reviews have not identified nicotine as a carcinogen in tobacco smoke, and the major health risks of cigarettes come from inhaling smoke filled with cancer-causing chemicals. 

Noncombustible products such as nicotine replacement therapies and some smokeless products deliver nicotine with fewer toxic compounds than cigarettes because they don’t burn tobacco.

Misperceptions about nicotine matter in part because they can discourage people who smoke from using tools that could help them quit. Earlier research by Villanti and others has found that people who wrongly believe nicotine causes cancer are less likely to use nicotine patches, gum or lozenges or to switch completely from cigarettes to less harmful products.

For the latest study, the Rutgers team embedded a randomized experiment in the Rutgers Omnibus Survey, a quarterly online survey that tracks tobacco and nicotine use. In August 2022, 2,526 adults aged 18 to 45 were randomly assigned to one of 10 questions about nicotine and cancer, drawn from national surveys and new items the researchers designed. After answering, each person typed a short explanation of how they had arrived at that answer.

When the team used wording similar to national surveys that ask whether nicotine is responsible for “most of the cancer caused by smoking,” about 44% of respondents answered correctly. When the question was more direct – for example, “Nicotine is a cause of cancer” – that figure dropped to about 23% to 24%. 

One novel statement, “Just the nicotine in cigarettes causes cancer,” was correctly rejected by 81% of respondents – but some people may have disagreed with it for the wrong reason, believing that many chemicals in smoke, including nicotine, cause cancer.

The open-ended explanations showed how the public tries to make sense of nicotine. Some respondents emphasized that exposure to smoke and other chemicals, not nicotine alone, causes cancer. Others said nicotine directly causes cancer, and a third group said nicotine causes cancer only because it keeps people addicted to smoking. 

These misunderstandings have high stakes. The Food and Drug Administration has proposed a product standard that would require cigarette makers to reduce nicotine in cigarettes to nonaddictive levels, a move intended to make quitting easier and keep young people from getting hooked in the first place. 

Villanti, who is also a professor at the Rutgers School of Public Health, said that if people continue to see nicotine as the primary danger, they could read “low nicotine” on cigarette packs as “low risk” and keep smoking, even though the smoke would still be just as harmful.

“We did not come out of this with a single best question to use in future studies,” Villanti said. “But if we want to design better messages and better policies around nicotine, we need to be clear on what people actually believe – and how much room there is to move them toward an accurate understanding of nicotine.”