Thursday, August 07, 2025

 

Unprecedented heat in North China: how soil moisture amplified 2023's record heatwave




Institute of Atmospheric Physics, Chinese Academy of Sciences
Image: Beijing commuters endure scorching heat as they brave the sun-drenched streets. Credit: Kexin Gui




This summer, much of North China has endured widespread temperatures above 35°C. Even Harbin in Northeast China, a high-latitude city that usually offers a refuge from summer heat, saw temperatures soar past 35°C in late June and July. As climate change accelerates, extreme heat events will become increasingly frequent.

Just two years earlier, in late June 2023, North China sweltered under a searing three-day heatwave that arrived weeks earlier than usual and broke temperature records stretching back six decades. Daily highs soared past 40°C in some areas, triggering heat-related illnesses, straining the region’s power grid, and threatening crops during the growing season. For millions living in this vital agricultural and industrial heartland, the scorching conditions were a stark reminder of the mounting risks posed by climate extremes.

A new study published in Earth’s Future by researchers Kexin Gui and Tianjun Zhou of the Institute of Atmospheric Physics, Chinese Academy of Sciences, has pinpointed the dual drivers behind this unprecedented heat: large-scale atmospheric circulation and an unusually strong soil moisture feedback. Using advanced climate analysis techniques, the team found that while an anomalous high-pressure system accounted for nearly 70% of the heatwave’s intensity, the early-season drought and dry soils added another 40%, amplifying the heatwave’s severity far beyond what would have occurred otherwise.

“Dry soils, caused by the lowest rainfall in over four decades, acted like a giant amplifier,” explained lead author Kexin Gui. “With little moisture left to evaporate, the land surface heated up rapidly, pushing temperatures to extremes rarely seen in North China’s early summer.”
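The mechanism Gui describes is the familiar land-atmosphere feedback: the sun's energy at the surface is split between evaporating water (latent heat) and directly warming the air (sensible heat), so drier soils shift the balance toward heating. A minimal sketch with hypothetical numbers, not values from the study, and ignoring ground heat flux for simplicity:

```python
# Minimal, hypothetical illustration of the soil-moisture feedback described above.
# Net surface radiation is split between latent heat (evaporation) and sensible
# heat (direct warming of the air). Numbers are illustrative, not from the study.

def surface_heating(net_radiation_wm2, evaporative_fraction):
    """Split net radiation (W/m^2) into latent and sensible heat fluxes."""
    latent = net_radiation_wm2 * evaporative_fraction   # energy spent evaporating soil water
    sensible = net_radiation_wm2 - latent                # energy that warms the near-surface air
    return latent, sensible

print(surface_heating(500, evaporative_fraction=0.6))   # moist soil: (300.0, 200.0)
print(surface_heating(500, evaporative_fraction=0.1))   # drought-dried soil: (50.0, 450.0)
```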

The study warns that such conditions may become more common under climate change. Model projections suggest that heatwaves as intense as the 2023 event could become the new normal by the end of the century, although the influence of soil moisture feedback on extreme heat may weaken in the long term because of projected increases in soil moisture.

“Heatwaves of this magnitude put enormous pressure on energy systems, agriculture, and public health,” said Dr. Zhou. “Understanding how soil moisture and atmospheric processes interact is crucial for better predicting and mitigating future extreme weather events.”

The findings underscore the urgency of climate adaptation strategies in North China, where increasingly frequent heat extremes pose a significant threat to livelihoods and ecosystems.

 

AI chatbots can run with medical misinformation, study finds, highlighting the need for stronger safeguards



Researchers suggest ways to reduce risk of chatbots repeating false medical details


The Mount Sinai Hospital / Mount Sinai School of Medicine





New York, NY [August 6, 2025] — A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that widely used AI chatbots are highly vulnerable to repeating and elaborating on false medical information, revealing a critical need for stronger safeguards before these tools can be trusted in health care.

The researchers also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves. Their findings were detailed in the August 2 online issue of Communications Medicine [https://doi.org/10.1038/s43856-025-01021-3].

As more doctors and patients turn to AI for support, the investigators wanted to understand whether chatbots would blindly repeat incorrect medical details embedded in a user’s question, and whether a brief prompt could help steer them toward safer, more accurate responses.

“What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental,” says lead author Mahmud Omar, MD, who is an independent consultant with the research team. “They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference.”

The team created fictional patient scenarios, each containing one fabricated medical term such as a made-up disease, symptom, or test, and submitted them to leading large language models. In the first round, the chatbots reviewed the scenarios with no extra guidance provided. In the second round, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate.

Without that warning, the chatbots routinely elaborated on the fake medical detail, confidently generating explanations about conditions or treatments that do not exist. But with the added prompt, those errors were reduced significantly.
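A minimal sketch of this kind of two-round, fake-term probe follows. The query_model function is a placeholder rather than any specific chatbot API, and both the fabricated diagnosis and the wording of the one-line caution are illustrative, not the study's:

```python
# Two-round "fake-term" probe: the same fictional scenario is submitted once
# with no guidance and once preceded by a one-line warning prompt.
# query_model is a placeholder for whichever chatbot is being tested.

CAUTION = ("Note: some details in the question below may be inaccurate or fabricated. "
           "Flag anything you cannot verify rather than explaining it.")

def fake_term_probe(query_model, scenario_with_fake_term):
    unguarded = query_model(scenario_with_fake_term)                    # round 1: no extra guidance
    guarded = query_model(CAUTION + "\n\n" + scenario_with_fake_term)   # round 2: warning prepended
    return unguarded, guarded

# Illustrative scenario containing one fabricated medical term ("Halvorsen's
# tubular syndrome" is invented here for the example, not taken from the study).
scenario = ("A 54-year-old patient has been diagnosed with Halvorsen's tubular syndrome. "
            "What is the standard first-line treatment?")

# The two responses can then be scored for whether the model elaborated on the
# non-existent condition or flagged it as unverifiable.
```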

“Our goal was to see whether a chatbot would run with false information if it was slipped into a medical question, and the answer is yes,” says co-corresponding senior author Eyal Klang, MD, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “Even a single made-up term could trigger a detailed, decisive response based entirely on fiction. But we also found that the simple, well-timed safety reminder built into the prompt made an important difference, cutting those errors nearly in half. That tells us these tools can be made safer, but only if we take prompt design and built-in safeguards seriously.”

The team plans to apply the same approach to real, de-identified patient records and test more advanced safety prompts and retrieval tools. They hope their “fake-term” method can serve as a simple yet powerful tool for hospitals, tech developers, and regulators to stress-test AI systems before clinical use.

“Our study shines a light on a blind spot in how current AI tools handle misinformation, especially in health care,” says co-corresponding senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai and the Chief AI Officer for the Mount Sinai Health System. “It underscores a critical vulnerability in how today’s AI systems deal with misinformation in health settings. A single misleading phrase can prompt a confident yet entirely wrong answer. The solution isn’t to abandon AI in medicine, but to engineer tools that can spot dubious input, respond with caution, and ensure human oversight remains central. We’re not there yet, but with deliberate safety measures, it’s an achievable goal.”

The paper is titled “Large Language Models Demonstrate Widespread Hallucinations for Clinical Decision Support: A Multiple Model Assurance Analysis.”

The study’s authors, as listed in the journal, are Mahmud Omar, Vera Sorin, Jeremy D. Collins, David Reich, Robert Freeman, Alexander Charney, Nicholas Gavin, Lisa Stump, Nicola Luigi Bragazzi, Girish N. Nadkarni, and Eyal Klang.

This work was supported in part through the computational and data resources and staff expertise provided by Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai and supported by the Clinical and Translational Science Awards (CTSA) grant UL1TR004419 from the National Center for Advancing Translational Sciences. The research was also supported by the Office of Research Infrastructure of the National Institutes of Health under award numbers S10OD026880 and S10OD030463.

-####-

About Mount Sinai's Windreich Department of AI and Human Health  

Led by Girish N. Nadkarni, MD, MPH—an international authority on the safe, effective, and ethical use of AI in health care—Mount Sinai’s Windreich Department of AI and Human Health is the first of its kind at a U.S. medical school, pioneering transformative advancements at the intersection of artificial intelligence and human health. 

The Department is committed to leveraging AI in a responsible, effective, ethical, and safe manner to transform research, clinical care, education, and operations. By bringing together world-class AI expertise, cutting-edge infrastructure, and unparalleled computational power, the department is advancing breakthroughs in multi-scale, multimodal data integration while streamlining pathways for rapid testing and translation into practice. 

The Department benefits from dynamic collaborations across Mount Sinai, including with the Hasso Plattner Institute for Digital Health at Mount Sinai—a partnership between the Hasso Plattner Institute for Digital Engineering in Potsdam, Germany, and the Mount Sinai Health System—which complements its mission by advancing data-driven approaches to improve patient care and health outcomes. 

At the heart of this innovation is the renowned Icahn School of Medicine at Mount Sinai, which serves as a central hub for learning and collaboration. This unique integration enables dynamic partnerships across institutes, academic departments, hospitals, and outpatient centers, driving progress in disease prevention, improving treatments for complex illnesses, and elevating quality of life on a global scale. 

In 2024, the Department's innovative NutriScan AI application, developed by the Mount Sinai Health System Clinical Data Science team in partnership with Department faculty, earned Mount Sinai Health System the prestigious Hearst Health Prize. NutriScan is designed to facilitate faster identification and treatment of malnutrition in hospitalized patients. This machine learning tool improves malnutrition diagnosis rates and resource utilization, demonstrating the impactful application of AI in health care. 

For more information on Mount Sinai's Windreich Department of AI and Human Health, visit: ai.mssm.edu 

 

About the Hasso Plattner Institute at Mount Sinai 

At the Hasso Plattner Institute for Digital Health at Mount Sinai, the tools of data science, biomedical and digital engineering, and medical expertise are used to improve and extend lives. The Institute represents a collaboration between the Hasso Plattner Institute for Digital Engineering in Potsdam, Germany, and the Mount Sinai Health System.  

Under the leadership of Girish Nadkarni, MD, MPH, who directs the Institute, and Professor Lothar Wieler, a globally recognized expert in public health and digital transformation, they jointly oversee the partnership, driving innovations that positively impact patient lives while transforming how people think about personal health and health systems. 

The Hasso Plattner Institute for Digital Health at Mount Sinai receives generous support from the Hasso Plattner Foundation. Current research programs and machine learning efforts focus on improving the ability to diagnose and treat patients. 

 

About the Icahn School of Medicine at Mount Sinai

The Icahn School of Medicine at Mount Sinai is internationally renowned for its outstanding research, educational, and clinical care programs. It is the sole academic partner for the seven member hospitals* of the Mount Sinai Health System, one of the largest academic health systems in the United States, providing care to New York City’s large and diverse patient population.  

The Icahn School of Medicine at Mount Sinai offers highly competitive MD, PhD, MD-PhD, and master’s degree programs, with enrollment of more than 1,200 students. It has the largest graduate medical education program in the country, with more than 2,600 clinical residents and fellows training throughout the Health System. Its Graduate School of Biomedical Sciences offers 13 degree-granting programs, conducts innovative basic and translational research, and trains more than 560 postdoctoral research fellows. 

Ranked 11th nationwide in National Institutes of Health (NIH) funding, the Icahn School of Medicine at Mount Sinai is in the 99th percentile in research dollars per investigator, according to the Association of American Medical Colleges. More than 4,500 scientists, educators, and clinicians work within and across dozens of academic departments and multidisciplinary institutes with an emphasis on translational research and therapeutics. Through Mount Sinai Innovation Partners (MSIP), the Health System facilitates the real-world application and commercialization of medical breakthroughs made at Mount Sinai.

------------------------------------------------------- 

* Mount Sinai Health System member hospitals: The Mount Sinai Hospital; Mount Sinai Brooklyn; Mount Sinai Morningside; Mount Sinai Queens; Mount Sinai South Nassau; Mount Sinai West; and New York Eye and Ear Infirmary of Mount Sinai

How AI might be narrowing our worldview and what regulators can do about it





The Hebrew University of Jerusalem




A new study highlights that generative AI systems, especially large language models like ChatGPT, tend to produce standardized, mainstream content, which can subtly narrow users’ worldviews and suppress diverse and nuanced perspectives. This isn't just a technical issue; it has real social consequences, from eroding cultural diversity to undermining collective memory and weakening democratic discourse. Existing AI governance frameworks, focused on principles like transparency or data security, don’t go far enough to address this “narrowing world” effect. To fill that gap, the article introduces “multiplicity” as a new principle for AI regulation, urging developers to design AI systems that expose users to a broader range of narratives, support diverse alternatives, and encourage critical engagement so that AI can enrich, rather than limit, the human experience.

[Hebrew University] As artificial intelligence (AI) tools like ChatGPT become part of our everyday lives, from providing general information to helping with homework, one legal expert is raising a red flag: Are these tools quietly narrowing the way we see the world?

In a new article published in the Indiana Law Journal, Prof. Michal Shur-Ofry of the Hebrew University of Jerusalem, a Visiting Faculty Fellow at the NYU Information Law Institute, warns that the tendency of our most advanced AI systems to produce generic, mainstream content could come at a cost.

“If everyone is getting the same kind of mainstream answers from AI, it may limit the variety of voices, narratives, and cultures we’re exposed to,” Prof. Shur-Ofry explains. “Over time, this can narrow our own world of thinkable-thoughts.”

The article explores how large language models (LLMs), the AI systems that generate text, tend to respond with the most popular content, even when asked questions that have multiple possible answers. One example in the study involved asking ChatGPT about important figures of the 19th century. The answers, which included figures like Lincoln, Darwin, and Queen Victoria, were plausible, but often predictable, Anglo-centric, and repetitive. Likewise, when asked to name the best television series, the model’s answers centered on a short tail of Anglo-American hits, leaving out the rich world of series that are not in English.

The reason lies in the way the models are built: they learn from massive digital datasets that are mostly in English, and rely on statistical frequency to generate their answers. This means that the most common names, narratives, and perspectives surface again and again in the outputs they generate. While this might make AI responses helpful, it also means that less common information, including the cultures of small communities whose languages are not English, will often be left out. And because the outputs of LLMs become training material for future generations of LLMs, over time the “universe” these models project to us will become increasingly concentrated.

According to Prof. Shur-Ofry, this can have serious consequences. It can reduce cultural diversity, undermine social tolerance, harm democratic discourse, and adversely affect collective memory – the way communities remember their shared past.

So what’s the solution?

Prof. Shur-Ofry proposes a new legal and ethical principle in AI governance: multiplicity. This means AI systems should be designed to expose users to, or at least alert them to the existence of, different options, content, and narratives, not just one “most popular” answer.

She also stresses the need for AI literacy, so that everyone will have a basic understanding of how LLMs work and why their outputs are likely to lean toward the popular and mainstream. This, she says, will “encourage people to ask follow-up questions, compare answers, and think critically about the information they’re receiving.” It will help them see AI not as a single source of truth but as a tool, and to “push back” to extract information that reflects the richness of human experience.

The article suggests two practical steps to bring this idea to life:

  1. Build multiplicity into AI tools: for example, through a feature that lets users easily raise the model’s “temperature”, a parameter that increases the diversity of generated content (see the sketch after this list), or by clearly notifying users that other possible answers exist.
  2. Cultivate an ecosystem that supports a variety of AI systems, so users can easily get a “second opinion” by consulting different platforms.
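A minimal sketch of why raising the temperature setting mentioned in step 1 increases diversity. The token scores below are hypothetical and not tied to any particular model; sampling simply turns scores into probabilities through a temperature-scaled softmax:

```python
# Hypothetical illustration: how the sampling "temperature" spreads probability
# away from the single most popular answer. Scores are made up for the example.
import numpy as np

def sampling_probs(scores, temperature):
    scaled = np.array(scores) / temperature
    exp = np.exp(scaled - scaled.max())      # subtract max for numerical stability
    return exp / exp.sum()

scores = [5.0, 3.0, 2.5, 2.0]                # one "mainstream" answer, three alternatives

print(sampling_probs(scores, temperature=0.5))  # low temperature: mainstream answer takes ~97% of the probability
print(sampling_probs(scores, temperature=2.0))  # high temperature: probability spreads across the alternatives
```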

In a follow-on collaboration with Dr. Yonatan Belinkov and Adir Rahamim from the Technion’s Computer Science department, and Bar Horowitz-Amsalem from the Hebrew University, Shur-Ofry and her collaborators are working to implement these ideas and to develop straightforward ways to increase the output diversity of LLMs.

“If we want AI to serve society, not just efficiency, we have to make room for complexity, nuance and diversity,” she says. “That’s what multiplicity is about, protecting the full spectrum of human experience in an AI-driven world.”

Cicadas sing in perfect sync with pre-dawn light




University of Cambridge





Cicadas coordinate their early morning choruses with remarkable precision, timing their singing to a specific level of light during the pre-dawn hours.

In a study published in the journal Physical Review E, researchers have found that these insects begin their loud daily serenades when the sun is precisely 3.8 degrees below the horizon, during the period of early morning light known as civil twilight.

The research, carried out by scientists from India, the UK and Israel, analysed several weeks of field recordings taken at two locations near Bangalore in India. Using tools from physics typically applied to the study of phase transitions in materials, the team uncovered a regularity in how cicadas respond to subtle changes in light.

“We’ve long known that animals respond to sunrise and seasonal light changes,” said co-author Professor Raymond Goldstein, from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “But this is the first time we’ve been able to quantify how precisely cicadas tune in to a very specific light intensity — and it’s astonishing.”

The crescendo of cicada song — familiar to anyone who has woken up early on a spring or summer morning — takes only about 60 seconds to build, the researchers found. Each day, the midpoint of that build-up occurs at nearly the same solar angle, regardless of the exact time of sunrise.

In practical terms, that means cicadas begin singing when the light on the ground has reached a specific threshold, varying by just 25% during that brief transition.
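To relate that solar angle to a clock time, the standard spherical-astronomy relation for solar elevation can be used. The sketch below uses an illustrative latitude near Bangalore and an equinox-like declination; none of these numbers come from the study:

```python
# Standard relation between latitude, solar declination, hour angle and the
# sun's elevation: sin(elev) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(hour_angle).
# The inputs below are illustrative, not values used in the study.
import math

def solar_elevation_deg(latitude_deg, declination_deg, hour_angle_deg):
    lat, dec, ha = map(math.radians, (latitude_deg, declination_deg, hour_angle_deg))
    sin_elev = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_elev))

# Roughly Bangalore's latitude (~13 N), equinox-like declination, minutes before
# local sunrise (one degree of hour angle corresponds to four minutes of time).
for hour_angle in (-94, -92, -90):
    print(hour_angle, round(solar_elevation_deg(13.0, 0.0, hour_angle), 1))
# The elevation crosses -3.8 degrees at an hour angle near -94, i.e. roughly a
# quarter of an hour before the sun reaches the horizon.
```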

To explain this level of precision, the team developed a mathematical model inspired by magnetic materials, in which individual units, or spins, align with an external field and with each other. Similarly, their model proposes that cicadas make decisions based both on ambient light and the sounds of nearby insects, like individuals in an audience who start clapping when others do.
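A toy simulation in the spirit of that analogy (a sketch of the general idea, not the study's actual model, with arbitrary parameters): each cicada is a unit that switches on when rising ambient light plus the fraction of neighbours already singing crosses a threshold.

```python
# Toy spin-style sketch of collective chorus onset (not the study's model).
# Each cicada is silent (0) or singing (1); it switches on when ambient light
# (the "external field") plus a coupling to the fraction of singers exceeds
# a threshold. All parameters are arbitrary illustration values.
import random

def chorus_onset(n=200, coupling=3.0, threshold=1.0, steps=100):
    singing = [0] * n
    history = []
    for t in range(steps):
        light = t / steps                              # light rising through twilight (arbitrary units)
        frac_singing = sum(singing) / n
        for i in range(n):
            drive = light + coupling * frac_singing + random.gauss(0, 0.05)
            if drive > threshold:
                singing[i] = 1                         # once singing, a cicada keeps singing
        history.append(sum(singing) / n)
    return history

fraction = chorus_onset()
print([round(f, 2) for f in fraction[::10]])
# The coupling term makes the chorus switch on abruptly once a few individuals
# start, mimicking the rapid build-up described above.
```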

“This kind of collective decision-making shows how local interactions between individuals can produce surprisingly coordinated group behaviour,” said co-author Professor Nir Gov from the Weizmann Institute, who is currently on sabbatical in Cambridge.

The field recordings were made by Bangalore-based engineer Rakesh Khanna, who carries out cicada research as a passion project. Khanna collaborated with Goldstein and Dr Adriana Pesci at Cambridge’s Department of Applied Mathematics and Theoretical Physics.

“Rakesh’s observations have paved the way to a quantitative understanding of this fascinating type of collective behaviour,” said Goldstein. “There’s still much to learn, but this study offers key insights into how groups make decisions based on shared environmental cues.”

The study was partly supported by the Complex Systems Fund at the University of Cambridge. Raymond Goldstein is the Alan Turing Professor of Complex Physical Systems and a Fellow of Churchill College, Cambridge.