Friday, December 12, 2025

 

New window insulation blocks heat, but not your view




University of Colorado at Boulder
Seeing clearly 


Abram Fluckiger holds up a sample panel made of five sandwiched layers of a new, nearly transparent insulation material called MOCHI, which was designed by CU Boulder researchers in physics professor Ivan Smalyukh’s lab.


Credit: Photo by Glenn J. Asakawa/CU Boulder





Physicists at the University of Colorado Boulder have designed a new material for insulating windows that could improve the energy efficiency of buildings worldwide—and it works a bit like a high-tech version of Bubble Wrap. 

The team’s material, called Mesoporous Optically Clear Heat Insulator, or MOCHI, comes in large slabs or thin sheets that can be applied to the inside of any window. So far, the team makes the material only in the lab, and it isn’t available to consumers. But the researchers say MOCHI is long-lasting and almost completely transparent.

That means it won’t disrupt your view, unlike many insulating materials on the market today.

“To block heat exchange, you can put a lot of insulation in your walls, but windows need to be transparent,” said Ivan Smalyukh, senior author of the study and a professor of physics at CU Boulder. “Finding insulators that are transparent is really challenging.”

He and his colleagues published their results Dec. 11 in the journal Science.

Buildings, from single-family homes to office skyscrapers, consume about 40% of all energy generated worldwide. They also leak, losing heat to the outdoors on cold days and absorbing heat when the temperature rises. 

Smalyukh and his colleagues aim to slow down that exchange. 

The group’s MOCHI material is a silicone gel with a twist: The gel traps air through a network of tiny pores that are many times thinner than the width of a human hair. Those tiny air bubbles are so good at blocking heat that you can use a MOCHI sheet just 5 millimeters thick to hold a flame in the palm of your hand.

“No matter what the temperatures are outside, we want people to be able to have comfortable temperatures inside without having to waste energy,” said Smalyukh, a fellow at the Renewable And Sustainable Energy Institute (RASEI) at CU Boulder.

Bubble magic 

Smalyukh said the secret to MOCHI comes down to precisely controlling those pockets of air.

The team’s new invention is similar to aerogels, a class of insulating material that is in widespread use today. (NASA uses aerogels inside its Mars rovers to keep electronics warm.)

Like MOCHI, aerogels trap countless pockets of air. But those bubbles tend to be distributed randomly throughout aerogels and often reflect light rather than let it pass through. As a result, these materials often look cloudy, which is why they’re sometimes called “frozen smoke.”

In the new research, Smalyukh and his colleagues wanted to take a different approach to insulation.

To make MOCHI, the group mixes special molecules known as surfactants into a liquid solution. These molecules naturally clump together to form thin threads in a process not unlike how oil and vinegar separate in salad dressing. Next, molecules of silicone in the same solution begin to stick to the outside of those threads.

Through a series of steps, the researchers then replace the clumps of surfactant molecules with air. That leaves silicone surrounding a network of incredibly small pipes filled with air, which Smalyukh compares to a “plumber’s nightmare.”

In all, air makes up more than 90% of the volume of the MOCHI material.

Trapping heat

Smalyukh said that heat passes through a gas in a process something like a game of pool: Heat energizes molecules and atoms in the gas, which then bang into other molecules and atoms, transferring the energy. 

The bubbles in the MOCHI material are so small, however, that the gas molecules inside can’t bang into each other, effectively keeping heat from flowing through.

“The molecules don’t have a chance to collide freely with each other and exchange energy,” Smalyukh said. “Instead, they bump into the walls of the pores.”
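
That size threshold can be checked with textbook kinetic theory. The short Python sketch below estimates the mean free path of air, the average distance a molecule travels between collisions, at room conditions; the pore size used for comparison is an illustrative assumption, not a measurement from the study.

```python
import math

# Back-of-the-envelope kinetic theory for the mechanism described above.
# The pore size below is an assumed, illustrative value, not a number
# reported by the CU Boulder team.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0            # room temperature, K
p = 101325.0         # atmospheric pressure, Pa
d = 3.7e-10          # effective diameter of an air molecule, m

# Mean free path of a gas: lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
mean_free_path = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"mean free path of air: {mean_free_path * 1e9:.0f} nm")  # roughly 66 nm

# If a pore is narrower than the mean free path, molecules hit the pore
# walls before they can collide with one another, so gas conduction stalls.
pore_size = 20e-9    # hypothetical 20-nanometer pore
print("pore narrower than mean free path:", pore_size < mean_free_path)
```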

At the same time, the MOCHI material reflects only about 0.2% of incoming light.

The researchers see a lot of uses for this clear-but-insulating material. Engineers could design a device that uses MOCHI to trap the heat from sunlight, converting it into cheap and sustainable energy. 

“Even when it’s a somewhat cloudy day, you could still harness a lot of energy and then use it to heat your water and your building interior,” Smalyukh said.

You probably won’t see these products on the market soon. Currently, the team relies on a time-intensive process to produce MOCHI in the lab. But Smalyukh believes the manufacturing process can be streamlined. The ingredients his team uses to make MOCHI are also relatively inexpensive, which the physicist said bodes well for turning this material into a commercial product.  

For now, the future for MOCHI, like the view through a window coated in this insulating material, looks bright. 


Co-authors of the new study include Amit Bhardwaj, Blaise Fleury, Eldho Abraham and Taewoo Lee, postdoctoral research associates in the Department of Physics at CU Boulder. Bohdan Senyuk, Jan Bart ten Hove and Vladyslav Cherpak, former postdoctoral researchers at CU Boulder, also served as co-authors.

Shakshi Bhardwaj holds up blocks of different sizes of a new, nearly transparent insulation material called MOCHI, which was designed by CU Boulder researchers in physics professor Ivan Smalyukh’s lab.

Credit

Photo by Glenn J. Asakawa/CU Boulder

Eldho Abraham, left, and Taewoo Lee, right, hold up a new window insulation material called MOCHI affixed to a thin sheet of plastic, which was designed by CU Boulder researchers in physics professor Ivan Smalyukh’s lab.

Credit

Photo by Glenn J. Asakawa/CU Boulder

 

Researchers pitch strategies to identify potential fraudulent participants in online qualitative research



A Rutgers Health researcher notes that certain red flags can make misleading respondents easier to recognize




Rutgers University






Recruiting participants for injury and violence-related studies can be challenging. Online qualitative data collection can increase accessibility for some participants, expand a study’s reach to potential participants, offer convenience and extend a sense of safety.

 

But the data can be marred by fraudulent responses.

 

As online data collection has increased since the COVID-19 pandemic, widely available online platforms and sophisticated bots can expose studies to would-be fraudulent participants who can jeopardize the research. Fraudulent participants are artificial bots or human respondents who don’t meet study criteria and who attempt to, or do, participate in data collection.

 

A Rutgers Health–led study, published in BMJ Open Quality, examines potential challenges associated with online qualitative data collection and how to prevent fraudulent responses.

 

Building on past studies examining the presence of fraudulent participants in online research studies, the researchers looked at the impact upon the field of injury and violence prevention.

 

Distinguishing fraudulent participants from real participants may present a challenge, and highlighting certain red flags can make these anomalies easier to recognize and remove, the researchers said. They reviewed past research on strategies that are used and highlighted a recent research project as a case study to outline ways to prevent and detect potential fraudulent participants.

 

“The presence of bots or humans attempting to engage in fraudulent research participation is a potential reality that researchers should be aware of, work to prevent where possible, and mitigate when detected to preserve research integrity and data quality,” said Devon Ziminski, a postdoctoral fellow at the New Jersey Gun Violence Research Center, and lead author of the study.

 

The paper outlines various strategies researchers can use to prevent potential fraudulent responses, including developing an outreach and recruitment plan, using a short screener survey and using community-engaged research methods for qualitative research.
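
As a rough illustration of what such a screener might automate, the Python sketch below flags a few of the red flags discussed in the fraud-detection literature, such as implausibly fast completion times and duplicate submission sources. The field names and thresholds are hypothetical and are not the checks used in the Rutgers study.

```python
from dataclasses import dataclass

# Hypothetical screener checks, for illustration only -- not the protocol
# from the BMJ Open Quality paper. Field names and thresholds are invented.

@dataclass
class ScreenerResponse:
    email: str
    ip_address: str
    completion_seconds: float   # how long the screener took
    reported_age: int
    reported_zip: str
    ip_zip_prefix: str          # coarse location inferred from the IP address

def red_flags(resp, seen_ips):
    """Return automated red flags; flagged responses would go to a human
    reviewer rather than being rejected outright."""
    flags = []
    if resp.completion_seconds < 30:                  # implausibly fast
        flags.append("completed too quickly")
    if resp.ip_address in seen_ips:                   # duplicate source
        flags.append("duplicate IP address")
    if not resp.reported_zip.startswith(resp.ip_zip_prefix):
        flags.append("reported location conflicts with IP location")
    if not 18 <= resp.reported_age <= 100:            # outside eligibility
        flags.append("implausible age")
    seen_ips.add(resp.ip_address)
    return flags

# Example: a response finished in 12 seconds gets flagged for human review.
resp = ScreenerResponse("a@example.com", "203.0.113.5", 12.0, 34, "08901", "089")
print(red_flags(resp, set()))  # ['completed too quickly']
```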

 

Esprene Liddell-Quinty, a research consultant at the University of Washington Firearm Injury & Policy Research Program and a former postdoctoral researcher at the New Jersey Gun Violence Research Center, co-authored the study.

 

AI can pick up cultural values by mimicking how kids learn



University of Washington






Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won’t work equally well for people from different cultures.

But a new University of Washington study suggests that AI could learn cultural values by observing human behavior. Researchers had AI systems observe people from two cultural groups playing a video game. On average, participants in one group behaved more altruistically. The AI assigned to each group learned that group’s degree of altruism and was able to apply that value to a novel scenario beyond the one it was trained on.

The team published its findings Dec. 9 in PLOS One.

“We shouldn’t hard code a universal set of values into AI systems, because many cultures have their own values,” said senior author Rajesh Rao, a UW professor in the Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology. “So we wanted to find out if an AI system can learn values the way children do, by observing people in their culture and absorbing their values.”

As inspiration, the team looked to previous UW research showing that 19-month-old children raised in Latino and Asian households were more prone to altruism than those from other cultures. 

In the AI study, the team recruited 190 adults who identified as white and 110 who identified as Latino. Each group was assigned an AI agent, a system that can function autonomously. 

These agents were trained with a method called inverse reinforcement learning, or IRL. In the more common AI training method, reinforcement learning, or RL, a system is given a goal and gets rewarded based on how well it works toward that goal. In IRL, the AI system observes the behavior of a human or another AI agent, and infers the goal and underlying rewards. So a robot trained to play tennis with RL would be rewarded when it scores points, while a robot trained with IRL would watch professionals playing tennis and learn to emulate them by inferring goals such as scoring points.
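
As a toy illustration of that inference step, the Python sketch below fits a single “altruism reward” parameter so that a softmax choice model best explains a set of observed help-or-don’t-help decisions. This is a drastically simplified stand-in for the study’s actual IRL training, and all of the numbers are hypothetical.

```python
import numpy as np

# A toy version of the inference step described above -- illustrative only,
# not the study's actual training setup. We observe binary choices
# (1 = help the disadvantaged player at a personal cost, 0 = keep cooking)
# and fit a single "altruism reward" r so that a softmax (Boltzmann) policy,
# p(help) = sigmoid(r), makes the observed behavior most likely.

def fit_altruism_reward(choices, lr=0.1, steps=2000):
    """Gradient ascent on the log-likelihood of the observed choices."""
    r = 0.0
    for _ in range(steps):
        p_help = 1.0 / (1.0 + np.exp(-r))
        r += lr * np.mean(choices - p_help)   # d(log-likelihood)/dr
    return r

group_a = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # hypothetical: helps often
group_b = np.array([0, 0, 1, 0, 0, 0, 1, 0])  # hypothetical: helps rarely

print("inferred altruism reward, group A:", fit_altruism_reward(group_a))  # positive
print("inferred altruism reward, group B:", fit_altruism_reward(group_b))  # negative
```

An agent that maximizes the higher inferred reward will choose to help more often, and can carry that learned value into new tasks, which is the kind of generalization the UW team later tested with a donation scenario.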

This IRL approach more closely aligns with how humans develop. 

“Parents don’t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others,” said co-author Andrew Meltzoff, a UW professor of psychology and co-director of the Institute for Learning & Brain Sciences (I-LABS). “Kids learn almost by osmosis how people act in a community or culture. The human values they learn are more ‘caught’ than ‘taught.’”

In the study, the AI agents were given the data of the participants playing a modified version of the video game Overcooked, in which players work to cook and deliver as much onion soup as possible. Players could see into another kitchen where a second player had to walk further to accomplish the same tasks, putting them at an obvious disadvantage. Participants didn’t know that the second player was a bot programmed to ask the human players for help. Participants could choose to give away onions to help the bot but at the personal cost of delivering less soup. 

Researchers found that overall the people in the Latino group chose to help more than those in the white group, and the AI agents learned the altruistic values of the group they were trained on. When playing the game, the agent trained on Latino data gave away more onions than the other agent. 

To see if the AI agents had learned a general set of values for altruism, the team conducted a second experiment. In a separate scenario, the agents had to decide whether to donate a portion of their money to someone in need. Again, the agents trained on Latino data from Overcooked were more altruistic. 

“We think that our proof-of-concept demonstrations would scale as you increase the amount and variety of culture-specific data you feed to the AI agent. Using such an approach, an AI company could potentially fine-tune their model to learn a specific culture’s values before deploying their AI system in that culture,” Rao said. 

Additional research is needed to know how this type of IRL training would perform in real-world scenarios, with more cultural groups, competing sets of values, and more complicated problems.

“Creating culturally attuned AI is an essential question for society,” Meltzoff said. “How do we create systems that can take the perspectives of others into account and become civic minded?”

Nigini Oliveira, a UW postdoctoral scholar in the Allen School, and Jasmine Li, a software engineer at Microsoft who completed this research as a UW student, were co-lead authors. Other co-authors include Koosha Khalvati, a scientist at the Allen Institute who completed this research as a UW doctoral student; Rodolfo Cortes Barragan, an assistant professor at San Diego State University, who completed this research as a postdoctoral scholar at UW; and Katharina Reinecke, a professor in the Allen School and director of the Center for Globally Beneficial AI at UW.

For more information, contact Rao at rao@cs.washington.edu.