Wednesday, August 27, 2025

RISK INSURANCE ANALYSIS

Rising temperatures intensify "supercell thunderstorms" in Europe





University of Bern

Image: A supercell thunderstorm over Lake Maggiore, photographed from Locarno Monti. (Credit: © MeteoSwiss, Luca Panziera)





Supercell thunderstorms are among the most impactful weather events in Europe. They typically occur in summer and are characterized by a rotating updraft of warm, humid air that brings strong winds, large hail and heavy rain. The impact is significant and often leads to property damage, agricultural losses, traffic chaos and even threats to human safety.

A collaboration between the Institute of Geography, the Oeschger Center for Climate Change Research and the Mobiliar Lab for Natural Risks at the University of Bern and the Institute for Atmospheric and Climate Science at ETH Zurich has enabled a detailed simulation of these storms. Their high-resolution digital storm map represents individual storm cells precisely, going beyond what was previously possible. The study, published in Science Advances, shows that the Alpine region and parts of Central and Eastern Europe can expect a significant increase in storm activity: up to 50% more storms on the northern side of the Alps under a temperature increase of 3 degrees Celsius relative to pre-industrial values.

Simulations in line with reality

While European supercell thunderstorms are tracked via weather radar, differences between the countries' radar networks make a comprehensive analysis difficult. "This makes cross-border storm detection more difficult," explains corresponding author Monika Feldmann from the Mobiliar Lab for Natural Risks and the Oeschger Center for Climate Change Research at the University of Bern. For the first time, a new type of climate model, developed as part of the scClim project, simulates supercell thunderstorms at a resolution of 2.2 kilometers.

The team carried out an eleven-year simulation and compared it with real storm data from 2016 to 2021. "Our simulation largely reflects reality, although it captures slightly fewer storms," notes Feldmann. This is to be expected, as the model only captures storms larger than 2.2 kilometers and lasting longer than an hour, leaving out smaller, shorter-lived events.
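That detection floor can be pictured with a minimal Python sketch. The storm catalog below is made up for illustration; only the 2.2-kilometer and one-hour thresholds come from the study.

    # Hypothetical radar catalog: (cell diameter in km, lifetime in hours).
    storms = [
        (1.5, 0.5), (3.0, 1.5), (2.0, 2.0), (5.5, 1.2), (8.0, 0.8), (4.0, 3.0),
    ]

    MIN_DIAMETER_KM = 2.2   # the model's grid spacing
    MIN_LIFETIME_H = 1.0    # tracking threshold used in the comparison

    # The model can only represent cells above both thresholds, so it is
    # expected to report slightly fewer storms than the radar observes.
    resolvable = [s for s in storms
                  if s[0] > MIN_DIAMETER_KM and s[1] > MIN_LIFETIME_H]

    print(f"observed: {len(storms)}, resolvable by the model: {len(resolvable)}")
    # -> observed: 6, resolvable by the model: 3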

Alpine region: a constant "thunderstorm hotspot"

The simulation underlines the Alps as a "hotspot" for supercell thunderstorms, as Feldmann points out. The simulation shows around 38 supercell thunderstorms per season on the northern side of the Alps and 61 on the southern slopes. With an increase of 3 degrees Celsius, these storms will continue to be concentrated in the Alpine region, with up to 52% more storms north of the Alps and 36% more in the south. In contrast, the Iberian Peninsula and southwest France could see a decrease. Overall, an 11% increase in supercell thunderstorms is expected across Europe. "These regional differences illustrate the diverse effects of climate change in Europe," explains Feldmann.
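Applying those percentages to the simulated baselines gives a sense of scale; the arithmetic below simply restates the figures quoted above.

    # Projected seasonal supercell counts at +3 °C, from the quoted figures.
    north_today, south_today = 38, 61     # supercells per season (simulation)
    north_future = north_today * 1.52     # up to 52% more north of the Alps
    south_future = south_today * 1.36     # up to 36% more in the south
    print(f"north: ~{north_future:.0f}/season, south: ~{south_future:.0f}/season")
    # -> north: ~58/season, south: ~83/season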

Few storms, big impacts

This project improves the accuracy of forecasts of supercell thunderstorms. Despite their rarity, these storms account for a significant proportion of thunderstorm-related hazards and financial losses. "The inclusion of supercell thunderstorms in weather risk assessments and disaster strategies is crucial," emphasizes Feldmann. The rise of these storms poses growing challenges to society, increasing potential damage to infrastructure, agriculture and private property, and raising risks to the public. "Understanding the conditions that favor these storms is key to better preparedness."

 

UC to launch center focused on ethical AI



New center will unite humanities scholars with the public to guide AI’s ethical, transparent use




University of Cincinnati





A nearly $500,000 federal grant was awarded to the University of Cincinnati to establish the Center for Explainable, Ethical, and Trustworthy AI (CEET), a first-of-its-kind hub for humanities-based research and public engagement on the societal dimensions of artificial intelligence.

The National Endowment for the Humanities (NEH) awarded $498,430 in funding for the center, which will launch with a five-member interdisciplinary team led by Director Andre Curtis-Trudel, PhD, assistant professor of philosophy. The team will include representatives from the philosophy, English and philosophy/physics disciplines — part of a hiring initiative led by James Mack, dean of UC’s College of Arts and Sciences.

Director Curtis-Trudel says the initiative reflects Mack’s vision to position UC as a leader in AI-focused humanities research in Ohio and the Midwest.  

“AI impacts all human beings, and it is our responsibility as humanists to ensure we use it properly,” says Mack. 

While AI development is often driven by technical fields, the center will address questions humanists are uniquely equipped to explore — such as how AI should be used, what makes it trustworthy and how it affects society, says Curtis-Trudel. 

“With the support of UC leadership and the Office of Research we were able to present the NEH with a proposal that draws on humanities insights to promote the public good,” says Curtis-Trudel.  

Focus of effort 

The center will provide a forum for research on AI explainability, ethics and trustworthiness — and translate this scholarship into programs and resources for the broader public. UC’s existing strengths in public engagement, outreach and cross-disciplinary collaboration make it an ideal home for the new center, which will operate through two core units: a research unit and an engagement unit, each with a distinct but complementary mission. 

The research unit will:  

  • Support three interdisciplinary research projects focused on the themes of explainable, ethical and trustworthy AI. 

  • Convene a regular speaker series bringing together scholars, practitioners and the public. 

  • Host an annual conference and produce collaborative, interdisciplinary outputs — including journal articles, edited collections and monographs. 

The engagement unit will:  

  • Collaborate with the Cincinnati Ethics Center to develop K–12 lesson plans, activities and educational materials on AI ethics.

  • Host events focused on AI in K–12 education with the Cincinnati Summer Language Institute.

  • Partner with the Institute for Research in Sensing to host public conversations about the role of AI in society.

  • Partner with the Cincinnati-based Gaskins Foundation to launch an annual AI Ethics Summer Camp for high school students.  

Goals

A primary goal of the center is to establish a pipeline from humanities-based AI research to impactful, real-world applications — ensuring that ethical considerations, transparency and public trust are embedded in AI systems from the ground up. 

In addition to the NEH funding, UC’s College of Arts and Sciences has committed roughly $165,000 to support the center, which will also pursue further matching funds.  

 

Humanoid robots are advancing but face a massive ‘data gap’



In two new papers, UC Berkeley roboticist Ken Goldberg explains why robots are not gaining real-world skills as quickly as AI chatbots are gaining language fluency.





University of California - Berkeley






AI chatbots have advanced rapidly over the past few years, so much so that people are now using them as personal assistants, customer service representatives and even therapists.

The large language models (LLMs) that power these chatbots were created using machine learning algorithms trained on the vast troves of text data found on the internet. And their success has many tech leaders, including Elon Musk and NVIDIA CEO Jensen Huang, claiming that a similar approach will yield humanoid robots capable of performing surgery, replacing factory workers or serving as in-home butlers within a few short years.  

But robotics experts disagree, says UC Berkeley roboticist Ken Goldberg.

In the first of two new papers published online today (Aug. 27) in the journal Science Robotics, Goldberg describes how what he calls the “100,000-year data gap” will prevent robots from gaining real-world skills as quickly as AI chatbots are gaining language fluency. In the second, leading roboticists from MIT, Georgia Tech and ETH Zurich summarize the heated debate over whether the future of the field lies in collecting more data to train humanoid robots or in relying on “good old-fashioned engineering” to program robots to complete real-world tasks.

UC Berkeley News spoke with Goldberg about the “humanoid hype,” the emerging paradigm shift in the robotics field and whether AI really is on the cusp of taking everyone’s jobs. 

UC Berkeley News: Recently, tech leaders like Elon Musk have made claims about the future of humanoid robots, such as that robots will outshine human surgeons within the next five years. Do you agree with these claims?

Goldberg: No. I agree that robots are advancing quickly, but not that quickly. I think of it as hype because it's so far ahead of the robotic capabilities that researchers in the field are familiar with.

We're all very familiar with ChatGPT and all the amazing things it's doing for vision and language, but most researchers are very nervous about the analogy that most people have, which is that now that we've solved all these problems, we're ready to solve [humanoid robots], and it's going to happen next year. 

I'm not saying it's not going to happen, but I'm saying it’s not going to happen in the next two years, or five years or even 10 years. We're just trying to reset expectations so that it doesn't create a bubble that could lead to a big backlash.

What are the limitations that will prevent us from having humanoid robots performing surgery or serving as personal butlers in the near future? What do they still really struggle with?

The big one is dexterity, the ability to manipulate objects. Things like being able to pick up a wine glass or change a light bulb. No robot can do that.  

It's a paradox — we call it Moravec's paradox — because humans do this effortlessly, and so we think that robots should be able to do it, too. AI systems can play complex games like chess and Go better than humans, so it's understandable that people think, "Well, why can't they just pick up a glass?" It seems much easier than playing Go. But the fact is that picking up a glass requires that you have a very good perception of where the glass is in space, move your fingertips to that exact location and close your fingertips appropriately around the object. It turns out that’s still extremely difficult. 

In your new paper, you discuss what you call the 100,000-year “data gap.” What is the data gap, and how does it contribute to this disparity between the language abilities of AI chatbots and the real-world dexterity of humanoid robots?

To calculate this data gap, I looked at how much text data exists on the internet and calculated how long it would take a human to sit down and read it all. I found it would take about 100,000 years. That’s the amount of text used to train LLMs.

We don’t have anywhere near that amount of data to train robots, and 100,000 years is just the amount of text that we have to train language models. We believe that training robots is much more complex, so we’ll need much more data.
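That estimate is easy to reproduce as a back-of-envelope calculation. The sketch below uses assumed round numbers for corpus size and reading speed, since the interview does not give the exact inputs Goldberg used.

    # Assumed inputs; chosen to be in the ballpark of modern LLM training sets.
    corpus_words = 13e12       # order of magnitude of LLM training text, in words
    words_per_minute = 250     # typical adult reading speed, reading non-stop

    minutes = corpus_words / words_per_minute
    years = minutes / (60 * 24 * 365)
    print(f"~{years:,.0f} years of continuous reading")   # -> ~98,935 years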

Some people think we can get the data from videos of humans — for instance, from YouTube — but looking at pictures of humans doing things doesn't tell you the actual detailed motions that the humans are performing, and going from 2D to 3D is generally very hard. So that doesn't solve it. 

Another approach is to create data by running simulations of robot motions, and that actually does work pretty well for robots running and performing acrobatics. You can generate lots of data by having robots in simulation do backflips, and in some cases that transfers into real robots. 

But for dexterity — where the robot is actually doing something useful, like the tasks of a construction worker, plumber, electrician, kitchen worker or someone in a factory doing things with their hands — that has been very elusive, and simulation doesn't seem to work. 

Currently people have been doing this thing called teleoperation, where humans operate a robot like a puppet so it can perform tasks. There are warehouses in China and the U.S. where humans are being paid to do this, but it's very tedious. And every eight hours of work gives you just eight more hours of data. It’s going to take a long time to get to 100,000 years. 
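The scale of that bottleneck can be made concrete with the same kind of arithmetic; the workforce size below is an assumption for illustration, not a figure from the interview.

    # Teleoperation yields data in real time: one hour worked, one hour of data.
    YEARS_OF_DATA_NEEDED = 100_000           # the "data gap" target
    target_hours = YEARS_OF_DATA_NEEDED * 365 * 24

    operators = 10_000                       # assumed teleoperation workforce
    hours_per_operator_per_day = 8           # one shift per operator

    collected_per_year = operators * hours_per_operator_per_day * 365
    years_needed = target_hours / collected_per_year
    print(f"~{years_needed:.0f} years with {operators:,} full-time operators")
    # -> ~30 years with 10,000 full-time operators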

Do roboticists believe it is possible to advance the field without first creating all this data?

I believe that robotics is undergoing a paradigm shift, which is when science makes a big change — like going from physics to quantum physics — and the change is so massive that the field gets broken into two camps, and they battle it out for years. And we're in the midst of that kind of debate in robotics.

Most roboticists still believe in what I call good old-fashioned engineering, which is pretty much everything that we teach in engineering school: physics, math and models of the environment. 

But there is a new dogma that claims that robots don’t need any of those old tools and methods. They say that data is all we need to get us to fully functional humanoid robots.

This new wave is very inspiring. There is a lot of money behind it and a lot of younger-generation students and faculty members are in this new camp. Most newspapers, Elon Musk, Jensen Huang and many investors are completely sold on the new wave, but in the research field there’s a raging debate between the old and new approaches to building robots.  

What do you see as the way forward?

I've been advocating that engineering, math and science are still important because they allow us to get these robots functional so that they can collect the data that we need. 

This is a way to bootstrap the data collection process. For example, you could get a robot to perform a task well enough that people will buy it, and then collect data as it works. 

Waymo, Google’s self-driving car company, is doing that. They're collecting data every day from real robot cars and their cars are getting better and better over time.

That’s also the story behind Ambi Robotics, which makes robots that sort packages. As they work in real warehouses, they collect data and improve over time.   

In the past, there was a lot of fear that robotic automation would steal blue-collar factory jobs, and we’ve seen that happen to some extent. But with the rise of chatbots, now the discussion has shifted to the possibility of LLMs taking over white-collar jobs and creative professions. How do you think AI and robots will impact what jobs are available in the future? 

To my mind as a roboticist, the blue-collar jobs, the trades, are very safe. I don't think we're going to see robots doing those jobs for a long time. 

But there are certain jobs — those that involve routinely filling out forms, such as intake at a hospital — that will be more automated. 

One example that’s very subtle is customer service. When you have a problem, like your flight got canceled, and you call the airline and a robot answers, you just get more frustrated. Many companies want to replace customer service jobs with robots, but the one thing a computer can’t say to you is, “I know how you feel.”

Another example is radiologists. Some claim that AI can read X-rays better than human doctors.  But do you want a robot to inform you that you have cancer?

The fear that robots will run amok and steal our jobs has been around for centuries, but I’m confident that humans have many good years ahead — and most researchers agree.  

This interview has been edited for length and clarity. 

 

Study explores how teacher training and reading programs affect literacy in Mozambique




University of Illinois College of Agricultural, Consumer and Environmental Sciences
Image: Children participating in out-of-school reading activities in Mozambique. (Credit: College of ACES)





URBANA, Ill. – Literacy rates in Sub-Saharan Africa remain low, despite increased primary school enrollment. In rural Mozambique, only 3% of children possess grade-level reading skills. Poor learning outcomes in lower grades are a barrier to further expanding school enrollment at higher grade levels. A new study from the University of Illinois Urbana-Champaign explores whether a teacher training program and reading camps could improve literacy levels among elementary school children.

“There is a learning crisis in Sub-Saharan Africa. Around 80 to 90% of the kids who are attending primary school are not proficient in reading. Mozambique is a particularly critical case, because the country has very low levels of literacy and overall human capital, even compared with the region in general,” said study co-author Catalina Herrera-Almanza, assistant professor in the Department of Agricultural and Consumer Economics, part of the College of Agricultural, Consumer and Environmental Sciences (ACES) at Illinois. She conducted the research in cooperation with the International Food Policy Research Institute (IFPRI) and colleagues at Universidade Eduardo Mondlane and Universidade Pedagógica de Maputo in Mozambique.

Teacher training has been shown to improve student outcomes, but comprehensive training programs are costly and difficult to conduct at a large scale; therefore, the researchers aimed to evaluate the impact of lighter programs that are more affordable and easier to scale up.

“We cooperated with World Vision, a non-governmental organization that runs educational programs in Mozambique. They were interested in exploring if a ‘light touch’ teacher training intervention could be effective, and whether its effects would be enhanced if it was supplemented with reading camps outside of the school,” Herrera-Almanza said.

The study included 160 elementary schools in Mozambique’s rural Nampula province, randomly assigned to one of three conditions. In the first group, teachers in grades one through three received five days of training in the Unlock Literacy program — which focuses on core reading skills — as well as teaching materials in Portuguese and Emakhuwa, the local language. The second treatment group received the Unlock Literacy training, and students were encouraged to attend reading camps conducted weekly by community volunteers outside of school. The third group served as a control group that received no assistance.    

The researchers evaluated children’s performance on the Early Grade Reading Assessment (EGRA), comparing test scores between each intervention group and the control group after two years of the program. 
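As a rough sketch of that comparison (not the study's actual estimation code, which also accounted for the school-level randomization), the Python snippet below computes the difference in mean EGRA scores between each arm and the control arm on simulated data.

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated EGRA scores for the three arms (illustrative, not study data).
    control  = rng.normal(20.0, 10.0, size=500).clip(min=0)
    training = rng.normal(20.5, 10.0, size=500).clip(min=0)  # teacher training only
    combined = rng.normal(21.0, 10.0, size=500).clip(min=0)  # training + reading camps

    def effect_vs_control(arm):
        """Difference in mean scores and its (unclustered) standard error."""
        diff = arm.mean() - control.mean()
        se = np.sqrt(arm.var(ddof=1) / arm.size + control.var(ddof=1) / control.size)
        return diff, se

    for name, arm in [("training only", training), ("training + camps", combined)]:
        diff, se = effect_vs_control(arm)
        print(f"{name}: effect {diff:+.2f} points (SE {se:.2f})")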

Furthermore, they conducted surveys with teachers, principals, school administrators, and a subset of 10 students from each school.

Overall, they found very little effect of either training program on reading scores. For both the light-touch training only and the combined program, there was a small positive effect for the lowest-scoring children, as fewer students received scores of zero on some reading measures. 

Herrera-Almanza says one reason for the results could be lack of compliance with the training. The surveys showed that teachers only completed two of the five days in the program, on average.

“It’s possible the lack of incentives for teachers, and a lack of supervision, resulted in low interest to attend. These teachers also move around a lot and typically stay at a school only for a few years, and training may not be a priority for them,” Herrera-Almanza noted.

The reading camps had better turnout. They were implemented in most of the communities and about half of the students participated regularly. The camps were run by a volunteer, often a high school student, but they were supported by a teacher at the school, who helped with teaching and encouraged students to attend. World Vision also provided print materials. The idea was to encourage children to read through games, activities, and stories.  

The researchers concluded that light-touch training is not sufficient to make a substantial difference in literacy levels.

“It’s difficult to disentangle the results because the teacher training was not implemented as we expected. There is also the fact that Mozambique has very low literacy levels, so it is harder to move the needle,” Herrera-Almanza stated.

“We do find weak effects for the bottom part of the distribution with the combination of teacher training and out-of-classroom support, and that is valuable. But our findings indicate that more intensive school and community interventions are required to meaningfully improve learning outcomes.”

The paper, “The effect of teacher training and community literacy programming on teacher and student outcomes,” is published in the Journal of Development Economics [DOI: 10.1016/j.jdeveco.2025.103578].

The research was supported by World Vision, funded by the US Department of Agriculture (USDA) grant no. FFE-656-2019/018-00-IFPRI.

 

Dark ages: Genomic analysis shows how cavefish lost their eyes



In a new study, Yale researchers used genomic analysis to show when cavefishes lost their eyes, which provides a method for dating cave systems.





Yale University





Small, colorless, and blind, amblyopsid cavefishes inhabit subterranean waters throughout the eastern United States. In a new study, Yale researchers reveal insights into just how these distinctive cave dwellers evolved — and provide a unique method for dating the underground ecosystems where they reside.  

In an analysis of the genomes of all known amblyopsid species, the researchers found that the different species colonized cave systems independently of each other and separately evolved similar traits — such as the loss of eyes and pigment — as they adapted to their dark cave environments. 

Their findings are published in the journal Molecular Biology and Evolution.

By studying the genetic mutations that caused the fishes’ eyes to degenerate, the researchers developed a sort of mutational clock that allowed them to estimate when each species began losing its eyes. They found that vision-related genes of the oldest cavefish species, the Ozark cavefish (Troglichthys rosae), began degenerating up to 11 million years ago. 

The technique provides a minimum age for the caves the fishes colonized, since the cavefish must already have been living in subterranean waters when their eyesight began to degenerate, the researchers said. 

“The ancient subterranean ecosystems of eastern North America are very challenging to date using traditional geochronological cave-dating techniques, which are unreliable beyond an upper limit of about 3 to 5 million years,” said Chase Brownstein, a student in Yale’s Graduate School of Arts and Sciences, in the Department of Ecology & Evolutionary Biology, and the study’s co-lead author. “Determining the ages of cave-adapted fish lineages allows us to infer the minimum age of the caves they inhabit because the fishes wouldn’t have started losing their eyes while living in broad daylight. In this case we estimate a minimum age of some caves of over 11 million years.” 

Maxime Policarpo of the Max Planck Institute for Biological Intelligence and the University of Basel is the co-lead author. 

For the study, the researchers reconstructed a time-calibrated evolutionary tree for amblyopsids, which belong to an ancient, species-poor order of freshwater fishes called Percopsiformes, using the fossil record as well as genomic data and high-resolution scans of all relevant living species. 

All the cavefish species have similar anatomies, including elongated bodies and flattened skulls, and their pelvic fins have either been lost or severely reduced. The swampfish (Chologaster cornuta), a sister lineage to the cavefishes that inhabits murky surface waters, also has a flattened skull, an elongated body, and no pelvic fins. While the swampfish retains sight and pigment, the bones around its eyes are softened; in cavefishes, those bones disappear entirely. This suggests that cavefishes evolved from a common ancestor that was already equipped to inhabit low-light environments, Brownstein said. 

To understand when the cavefish began populating caves — something impossible to discern from the branches of an evolutionary tree — the researchers studied the fishes’ genomes, examining 88 vision-related genes for mutations. The analysis revealed that the various cavefish lineages had completely different sets of genetic mutations involved in the loss of vision. This, they said, suggests that separate species colonized caves and adapted to those subterranean ecosystems independently of each other. 

From there, the researchers developed a method for calculating the number of generations that have passed since cavefish species began adapting to life in caves by losing the functional copies of vision-related genes. 
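The logic of such a mutational clock can be sketched in a few lines of Python; every number below is a placeholder chosen for illustration, not an estimate from the study. Once selection on a gene relaxes, disabling mutations accumulate at roughly the neutral rate, so the count of observed loss-of-function hits scales with the generations elapsed.

    # Placeholder inputs, chosen only to illustrate the clock's logic.
    lof_mutations = 40          # loss-of-function hits observed across vision genes
    genes = 88                  # vision-related genes examined in the study
    lof_rate_per_gene = 2e-7    # assumed per-gene, per-generation LOF mutation rate
    generation_time_yrs = 2.0   # assumed cavefish generation time, in years

    # Under relaxed selection, elapsed generations are roughly
    # (observed hits) / (per-generation LOF rate summed over genes).
    generations = lof_mutations / (lof_rate_per_gene * genes)
    years = generations * generation_time_yrs
    print(f"~{generations:,.0f} generations, ~{years/1e6:.1f} million years")
    # -> ~2,272,727 generations, ~4.5 million years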

Their analysis suggests that cave adaptations arose between 2.25 and 11.3 million years ago in the Ozark cavefish and, for the other cavefish lineages, between 342,000 and 1.70 million years ago at minimum and between 1.7 and 8.7 million years ago at maximum. The findings support the conclusion that at least four amblyopsid lineages independently colonized caves after evolving from surface-dwelling ancestors, the researchers said. 

The maximum ages exceed the ranges of traditional cave-dating methods, which include isotope analysis of cosmogenic nuclides produced within rocks and soils by cosmic rays, the researchers noted.

The findings also suggest potential implications for human health, said Thomas Near, professor of ecology and evolutionary biology in Yale’s Faculty of Arts and Sciences (FAS), and senior author of the study. 

“A number of the mutations we see in the cavefish genomes that lead to degeneration of the eyes are similar to mutations that cause ocular diseases in humans,” said Near, who is also the Bingham Oceanographic Curator of Ichthyology at the Yale Peabody Museum. “There is the possibility for translational medicine: by studying this natural system in cavefishes, we can glean insights into the genomic mechanisms of eye diseases in humans.”

The other co-authors are Richard C. Harrington of the South Carolina Department of Natural Resources, Eva A. Hoffman of the American Museum of Natural History, Maya F. Stokes of Florida State University, and Didier Casane of Paris-Cité University.