Saturday, January 24, 2026

 

Computing with real-life traffic: novel approach cuts AI's energy usage



Advanced Institute for Materials Research (AIMR), Tohoku University


Figure 1: Computing with Real-World and Social Dynamics: A Smart City Perspective. Credit: Hiroyasu Ando et al.





What if traffic could compute? This may sound strange, but researchers at Tohoku University's WPI-AIMR have unveiled a bold new idea: using road traffic itself as a computer.

Researchers at the Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, have proposed a novel artificial intelligence (AI) framework that treats road traffic itself as a computing resource. The approach, called Harvested Reservoir Computing (HRC), taps into the natural dynamics of traffic flow, opening a path toward energy-efficient AI systems that reuse the dynamics already present in our environment instead of relying solely on power-hungry dedicated hardware - turning everyday motion into computational power.

In recent years, machine learning and deep learning have been widely applied to traffic forecasting, demand prediction, and various forms of social infrastructure management. However, these approaches typically require massive computational power and consume large amounts of energy. Reservoir computing (RC) and its extension to real-world physical systems - physical reservoir computing (PRC) - have attracted attention as promising alternatives.

Building on this concept, Professor Hiroyasu Ando and colleagues propose HRC, a framework that "harvests" complex physical dynamics present in the natural and social environment and uses them directly for computation. As a proof of concept, the team systematically evaluated the performance of Road Traffic Reservoir Computing (RTRC), which exploits traffic flow on road networks as a computational reservoir.
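The principle behind RC is easy to see in code: the dynamical system itself is never trained, and only a lightweight linear readout is fitted to its observed states. The sketch below is a minimal illustration of that idea, not the authors' published method; the randomly coupled "reservoir" merely stands in for a real road network, and the inflow signal, network size, and all variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a physical reservoir: N "road segments" whose states evolve
# under fixed, randomly coupled dynamics driven by an inflow signal u.
# In RTRC the real road network would play this role; nothing here is trained.
N, T = 200, 2000
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable
w_in = rng.normal(size=N)

u = np.sin(0.1 * np.arange(T)) + 0.05 * rng.normal(size=T)  # hypothetical inflow
x = np.zeros((T, N))  # observed reservoir states ("sensor readings")
for t in range(1, T):
    x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

# The only trained component: a linear ridge-regression readout mapping the
# current reservoir state to the input one step ahead (a simple forecast task).
X, y = x[:-1], u[1:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

pred = X @ w_out
print("one-step forecast NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```

Because only the readout weights are fitted - a single least-squares solve - the heavy nonlinear processing comes for free from the reservoir's dynamics; harvesting an existing physical system such as traffic flow pushes even that part of the computation out of silicon.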

Combining controlled traffic experiments using 1/27-scale autonomous miniature cars with numerical simulations of grid-shaped urban road networks, the researchers discovered a striking feature: prediction accuracy is not highest under free-flow or heavily congested conditions. Instead, it peaks just before congestion begins, at a critical, medium-density state where traffic dynamics are most diverse and informative. In this regime, the traffic system naturally processes incoming information, allowing accurate forecasts of future traffic states with minimal computational overhead.
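This sweet spot just before congestion echoes a well-known property of reservoir computing: performance tends to peak when the dynamics sit between order and instability. As a hedged illustration only - the coupling strength rho below stands in for traffic density, which is our analogy and not the paper's experimental protocol - one can sweep the toy reservoir's coupling and watch a delayed-recall task succeed only in the intermediate regime.

```python
import numpy as np

def recall_nmse(rho: float, delay: int = 10, N: int = 200, T: int = 2000) -> float:
    """Error of a linear readout asked to recover the input from `delay` steps
    ago, for a toy reservoir whose coupling strength is scaled to `rho`."""
    rng = np.random.default_rng(1)
    W = rng.normal(size=(N, N)) / np.sqrt(N)
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.normal(size=N)
    u = rng.uniform(-1.0, 1.0, size=T)
    x = np.zeros((T, N))
    for t in range(1, T):
        x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])
    X, y = x[delay:], u[:-delay]  # target: the input `delay` steps in the past
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    return float(np.mean((X @ w_out - y) ** 2) / np.var(y))

# Typically: high error for weak coupling (too little memory), a dip near the
# stability boundary, and degradation again once the dynamics turn unstable.
for rho in (0.3, 0.6, 0.9, 1.2, 1.5):
    print(f"rho={rho:.1f}  recall NMSE={recall_nmse(rho):.3f}")
```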

Importantly, this method requires no new specialized hardware. By reusing existing traffic sensors and observational data, it has the potential to support high-precision traffic prediction and adaptive signal control while significantly reducing energy consumption compared with conventional AI approaches.

The study suggests that social infrastructure such as roads can be reinterpreted as "large-scale, continuously operating computers." Beyond traffic management, the concept may enable future applications in smart mobility, urban planning, and energy management, where environmental dynamics are leveraged as part of the computational process.

"These results demonstrate that computation does not have to be confined to silicon chips," says Ando. "By recognizing and harnessing the rich dynamics already present in our environment, we may build AI systems that are both powerful and sustainable."

The research also contributes a new perspective to the development of AI foundation technologies: rather than endlessly scaling up hardware, it may be possible to scale intelligence by integrating physical systems and data in innovative ways.

The findings were published online in Scientific Reports on November 27, 2025.

About the World Premier International Research Center Initiative (WPI)

The WPI program was launched in 2007 by Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) to foster globally visible research centers boasting the highest standards and outstanding research environments. Numbering more than a dozen and operating at institutions throughout the country, these centers are given a high degree of autonomy, allowing them to engage in innovative modes of management and research. The program is administered by the Japan Society for the Promotion of Science (JSPS).

See the latest research news from the centers at the WPI News Portal: https://www.eurekalert.org/newsportal/WPI

Main WPI program site: www.jsps.go.jp/english/e-toplevel

Advanced Institute for Materials Research (AIMR)

Tohoku University

Establishing a World-Leading Research Center for Materials Science

AIMR aims to contribute to society as a world-leading research center for materials science and to push the frontiers of research. To this end, the institute gathers excellent researchers in the fields of physics, chemistry, materials science, engineering, and mathematics and provides a world-class research environment.

Malicious AI swarms pose emergent threats to democracy



Summary author: Walter Beckwith


American Association for the Advancement of Science (AAAS)




In a Policy Forum, Daniel Schroeder and colleagues discuss the risks of malicious “Artificial Intelligence (AI) swarms,” which enable a new class of large-scale, coordinated disinformation campaigns that pose significant risks to democracy. Manipulation of public opinion has long relied on rhetoric and propaganda. However, modern AI systems have created powerful new tools for shaping human beliefs and behavior on a societal scale. Large language models (LLMs) and autonomous agents can now generate vast amounts of persuasive, human-like content. When combined into collaborative AI swarms – collections of AI-driven personas that retain memory and identity – these systems can mimic social dynamics and easily infiltrate online communities, making false narratives appear credible and widely shared.

According to the authors, unlike earlier labor-intensive influence operations run by humans, AI systems can operate cheaply, consistently, and at tremendous scale, transforming once-isolated disinformation efforts into persistent, adaptive campaigns that pose serious risks to democratic processes worldwide. Here, Schroeder et al. discuss the technology underpinning these malicious systems and identify pathways through which they can harm democratic discourse through widely used digital platforms.

The authors argue that defense against these systems must be layered and pragmatic, aiming not for total prevention of their use, which is highly unlikely, but for raising the cost, risk, and visibility of manipulation. Because such efforts would require global coordination outside of corporate and governmental interests, Schroeder et al. propose a distributed “AI Influence Observatory,” consisting of a network of academic groups, nongovernmental organizations, and other civil institutions to guide independent oversight and action. “Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable,” write the authors. “By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance.”



To make AI more fair, tame complexity



Biases in AI models can be reduced by better reflecting the complexities of the real world



University of Texas at Austin





In April 2025, OpenAI’s popular ChatGPT hit a milestone of one billion weekly active users, as artificial intelligence continued its explosion in popularity.

But with that popularity has come a dark side. Biases in AI’s models and algorithms can actively harm some of its users and promote social injustice. Documented biases have led to different medical treatments due to patients’ demographics and corporate hiring tools that discriminate against female and Black candidates.

New research from Texas McCombs points to a previously unexplored source of AI bias: complexity. It also suggests some ways to correct for it.

“There’s a complex set of issues that the algorithm has to deal with, and it’s infeasible to deal with those issues well,” says Hüseyin Tanriverdi, associate professor of information, risk, and operations management. “Bias could be an artifact of that complexity rather than other explanations that people have offered.”

With John-Patrick Akinyemi, a McCombs Ph.D. candidate in IROM, Tanriverdi studied a set of 363 algorithms that researchers and journalists had identified as biased. The algorithms came from a repository called AI Algorithmic and Automation Incidents and Controversies.

The researchers compared each problematic algorithm with one that was similar in nature but had not been called out for bias. They examined not only the algorithms but also the organizations that created and used them.

Prior research has assumed that bias can be reduced by making algorithms more accurate. But that assumption, Tanriverdi found, did not tell the whole story. He found three additional factors, all related to a similar problem: not properly modeling for complexity.

Ground truth. Some algorithms are asked to make decisions when there’s no established ground truth: the reference against which the algorithm’s outcomes are evaluated. An algorithm might be asked to guess the age of a bone from an X-ray image, even though in medical practice, there’s no established way for doctors to do so.

In other cases, AI may mistakenly treat opinions as objective truths — for example, when social media users are evenly split on whether a post constitutes hate speech or protected free speech.

AI should only automate decisions for which ground truth is clear, Tanriverdi says. “If there is not a well-established ground truth, then the likelihood that bias will emerge significantly increases.”

Real-world complexity. AI models inevitably simplify the situations they describe. Problems can arise when they miss important components of reality.

Tanriverdi points to a case in which Arkansas replaced home visits by nurses with automated rulings on Medicaid benefits. It had the effect of cutting off disabled people from assistance with eating and showering.

“If a nurse goes and walks around to the house, they will be able to understand more about what kind of support this person needs,” he says. “But algorithms were using only a subset of those variables, because data was not available on everything.

“Because of omission of the relevant variables in the model, that model was no longer a good enough representation of reality.”

Stakeholder involvement. When a model serving a diverse population is designed mostly by members of a single demographic, it becomes more susceptible to bias. One way to counter this risk is to ensure that all stakeholder groups have a voice in the development process.

By involving stakeholders who may have conflicting goals and expectations, an organization can determine whether it’s possible to meet them all. If it’s not, Tanriverdi says, “It may be feasible to reach compromise solutions that everyone is OK with.”

The research concludes that taming AI bias involves much more than making algorithms more accurate. Developers need to open up their black boxes to account for real-world complexities, input from diverse groups, and ground truths.

“The factors we focus on have a direct effect on the fairness outcome,” Tanriverdi says. “These are the missing pieces that data scientists seem to be ignoring.”

“Algorithmic Social Injustice: Antecedents and Mitigations” is published in MIS Quarterly.

 

Generative AI use and depressive symptoms among US adults



JAMA Network




About The Study: This survey study found that artificial intelligence (AI) use was significantly associated with greater depressive symptoms, with the magnitude of differences varying by age group. Further work is needed to understand whether these associations are causal and to explain the heterogeneous effects.


Corresponding Author: To contact the corresponding author, Roy H. Perlis, MD, MSc, email rperlis@mgb.org.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2025.54820)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Embed this link to provide your readers free access to the full-text article 

 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2025.54820?guestAccessKey=1b34668e-afe8-4888-aa3d-dd05b3b83eff&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=012126

About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication.