Wednesday, June 11, 2025

 

Window-sized device taps the air for safe drinking water



MIT engineers have developed an atmospheric water harvester that produces fresh water anywhere — even Death Valley, California.




Massachusetts Institute of Technology



Today, 2.2 billion people in the world lack access to safe drinking water. In the United States, more than 46 million people experience water insecurity, living with either no running water or water that is unsafe to drink. The increasing need for drinking water is stretching traditional resources such as rivers, lakes, and reservoirs. 

To improve access to safe and affordable drinking water, MIT engineers are tapping into an unconventional source: the air. The Earth’s atmosphere contains millions of billions of gallons of water in the form of vapor. If this vapor can be efficiently captured and condensed, it could supply clean drinking water in places where traditional water resources are inaccessible. 

With that goal in mind, the MIT team has developed and tested a new atmospheric water harvester and shown that it efficiently captures water vapor and produces safe drinking water across a range of relative humidities, including dry desert air. 

The new device is a black, window-sized vertical panel, made from a water-absorbent hydrogel material, enclosed in a glass chamber coated with a cooling layer. The hydrogel resembles black bubble wrap, with small dome-shaped structures that swell when the hydrogel soaks up water vapor. When the captured vapor evaporates, the domes shrink back down in an origami-like transformation. The evaporated vapor then condenses on the glass, where it can flow down and out through a tube as clean, drinkable water.

The system runs entirely on its own, without a power source, unlike other designs that require batteries, solar panels, or electricity from the grid. The team ran the device for over a week in Death Valley, California — the driest region in North America. Even in very low-humidity conditions, the device squeezed drinking water from the air at rates of up to 160 milliliters (about two-thirds of a cup) per day. 

The team estimates that multiple vertical panels, set up in a small array, could passively supply a household with drinking water, even in arid desert environments. What’s more, the system’s water production should increase with humidity, supplying drinking water in temperate and tropical climates.

“We have built a meter-scale device that we hope to deploy in resource-limited regions, where even a solar cell is not very accessible,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor of Mechanical Engineering and Civil and Environmental Engineering at MIT. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact.”

Zhao and his colleagues present the details of the new water harvesting design in a paper appearing in the journal Nature Water. The study’s lead author is former MIT postdoc “Will” Chang Liu, who is currently an assistant professor at the National University of Singapore (NUS). MIT co-authors include Xiao-Yun Yan, Shucong Li, and Bolei Deng, along with collaborators from multiple other institutions.

Carrying capacity

Hydrogels are soft, porous materials that are made mainly from water and a microscopic network of interconnecting polymer fibers. Zhao’s group at MIT has primarily explored the use of hydrogels in biomedical applications, including adhesive coatings for medical implants, soft and flexible electrodes, and noninvasive imaging stickers.

“Through our work with soft materials, one property we know very well is the way hydrogel is very good at absorbing water from air,” Zhao says. 

Researchers are exploring a number of ways to harvest water vapor for drinking water. Among the most efficient so far are devices made from metal-organic frameworks, or MOFs — ultra-porous materials that have also been shown to capture water from dry desert air. But the MOFs do not swell or stretch when absorbing water, and are limited in vapor-carrying capacity. 

Water from air

The group’s new hydrogel-based water harvester addresses another key problem in similar designs. Other groups have designed water harvesters out of micro- or nano-porous hydrogels. But the water produced from these designs can be salty, requiring additional filtering. Salt is a naturally absorbent material, and researchers embed salts — typically, lithium chloride — in hydrogel to increase the material’s water absorption. The drawback, however, is that this salt can leak out with the water when it is eventually collected. 

The team’s new design significantly limits salt leakage. Within the hydrogel itself, they included an extra ingredient: glycerol, a liquid compound that naturally stabilizes salt, keeping it within the gel rather than letting it crystallize and leak out with the water. The hydrogel itself has a microstructure that lacks nanoscale pores, which further prevents salt from escaping the material. The salt levels in the water they collected were below the standard threshold for safe drinking water, and significantly below the levels produced by many other hydrogel-based designs. 

In addition to tuning the hydrogel’s composition, the researchers made improvements to its form. Rather than keeping the gel as a flat sheet, they molded it into a pattern of small domes resembling bubble wrap, which increase the gel’s surface area and the amount of water vapor it can absorb.

The researchers fabricated a half-square-meter of hydrogel and encased the material in a window-like glass chamber. They coated the exterior of the chamber with a special polymer film, which helps to cool the glass and stimulates any water vapor in the hydrogel to evaporate and condense onto the glass. They installed a simple tubing system to collect the water as it flows down the glass. 

In November 2023, the team traveled to Death Valley, California, and set up the device as a vertical panel. Over seven days, they took measurements as the hydrogel absorbed water vapor during the night (the time of day when water vapor in the desert is highest). In the daytime, with help from the sun, the harvested water evaporated out from the hydrogel and condensed onto the glass. 

Over this period, the device worked across a range of humidities, from 21 to 88 percent, and produced between 57 and 161.5 milliliters of drinking water per day. Even in the driest conditions, the device harvested more water than other passive and some actively powered designs. 
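
As a rough back-of-the-envelope check on the household-scale idea, the Python sketch below estimates how many of these half-square-meter panels would be needed per person at the Death Valley yields reported above. The 2-liter-per-person daily drinking-water figure is an illustrative assumption, not from the study, and the article notes that production should rise with humidity, so the count would be lower in temperate or tropical climates.

```python
import math

# Reported Death Valley yields for one half-square-meter panel (mL/day)
panel_yield_low_ml = 57.0
panel_yield_high_ml = 161.5

# Illustrative assumption (not from the study): ~2 L of drinking water
# per person per day.
per_person_need_ml = 2000.0

panels_worst = math.ceil(per_person_need_ml / panel_yield_low_ml)
panels_best = math.ceil(per_person_need_ml / panel_yield_high_ml)

print(f"Panels per person, driest conditions: {panels_worst}")     # ~36
print(f"Panels per person, most humid conditions: {panels_best}")  # ~13
```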

“This is just a proof-of-concept design, and there are a lot of things we can optimize,” Liu says. “For instance, we could have a multipanel design. And we’re working on a next generation of the material to further improve its intrinsic properties.”

“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” says Zhao, who has plans to further test the panels in many resource-limited regions. “Then you could have many panels together, collecting water all the time, at household scale.”

This work was supported, in part, by the MIT J-WAFS Water and Food Seed Grant, the MIT-Chinese University of Hong Kong collaborative research program, and the UM6P-MIT collaborative research program.

###

Written by Jennifer Chu, MIT News

 

Sniffing out hunger: a nose-to-brain connection linked to appetite



How the smell of food triggers brain cells that make mice feel less hungry




Max Planck Institute for Biology of Ageing





No more hunger after cooking? A newly identified network of nerve cells is responsible, a research group at the Max Planck Institute for Metabolism Research has discovered in mice. The team found a direct connection from the nose to a group of nerve cells in the brain that are activated by the smell of food and, when activated, trigger a feeling of fullness. This was not the case in obese mice. The discovery suggests that advice about smelling food before a meal may need to differ depending on a person's weight.

The researchers used brain scans to investigate which regions of the mice's brains respond to food odours, and identified a new group of nerve cells in the medial septum of the brain. These nerve cells respond to food in two phases: when a mouse smells food, they fire and create a sensation of fullness. This happens within a few seconds, because the nerve cells are directly connected to the olfactory bulb. They react to a variety of food smells, but not to other odours. Once the mice started to eat, the nerve cells were inhibited. Overall, the mice ate less when these nerve cells were active before eating.

“We think this mechanism helps mice in the wild protect themselves from predators. By eating for shorter periods, they reduce their chances of being caught,” explains Janice Bulk, the first author of the study.

Excess weight disturbs perception

In obese mice, the same group of nerve cells was not activated when the mice could smell food. The mice did not feel fuller and did not eat less overall. The authors point out that it is already known that obesity disrupts the olfactory system, including neuronal activity in the olfactory bulb. The newly identified group of nerve cells could also be affected by obesity.

And in humans?

The human brain contains the same group of nerve cells as the mouse, but it is not yet known whether they also respond to food odours. Studies by other research groups have shown that smelling some specific odors before a meal can reduce people’s appetite. In contrast, other studies have shown that overweight persons eat significantly more in the same situation.

"Our findings highlight how crucial it is to consider the sense of smell in appetite regulation and in the development of obesity.  Our study shows how much our daily-lives’ eating habits are influenced by the smell of food. Since we discovered that the pathway only reduces appetite in lean mice, but not in obese mice, our study opens up a new way to help prevent overeating in obesity”, says Sophie Steculorum, the head of the study and research group leader at the Max Planck Institute for Metabolism Research.

 

First-of-its-kind technology helps man with ALS ‘speak’ in real time



Previous system similar to texting. New system enables more natural conversation



University of California - Davis Health

Image: The participant, who is enrolled in the BrainGate2 clinical trial at UC Davis Health, communicates through a computer using an investigational brain-computer interface (BCI). (Credit: UC Davis Health)





(Sacramento, Calif.) — Researchers at the University of California, Davis, have developed an investigational brain-computer interface that holds promise for restoring the voices of people who have lost the ability to speak due to neurological conditions.

In a new study published in the scientific journal Nature, the researchers demonstrate how this new technology can instantaneously translate brain activity into voice as a person tries to speak — effectively creating a digital vocal tract.

The system allowed the study participant, who has amyotrophic lateral sclerosis (ALS), to “speak” through a computer with his family in real time, change his intonation and “sing” simple melodies.

“Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It’s a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,” said Sergey Stavisky, senior author of the paper and an assistant professor in the UC Davis Department of Neurological Surgery. Stavisky co-directs the UC Davis Neuroprosthetics Lab.

“With instantaneous voice synthesis, neuroprosthesis users will be able to be more included in a conversation. For example, they can interrupt, and people are less likely to interrupt them accidentally,” Stavisky said.

Decoding brain signals at heart of new technology

The man is enrolled in the BrainGate2 clinical trial at UC Davis Health. His ability to communicate through a computer has been made possible with an investigational brain-computer interface (BCI). It consists of four microelectrode arrays surgically implanted into the region of the brain responsible for producing speech.

These devices record the activity of neurons in the brain and send it to computers that interpret the signals to reconstruct voice.

“The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak,” said Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC Davis. “Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice.”

Instantaneous, expressive speech with BCI shows promise

The brain-computer interface translated the study participant’s neural signals into audible speech played through a speaker almost instantly, with a delay of about one-fortieth of a second. This is similar to the delay a person experiences between speaking and hearing the sound of their own voice.

The technology also allowed the participant to say new words (words not already known to the system) and to make interjections. He was able to modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence.

The participant also took steps toward varying pitch by singing simple, short melodies.

His BCI-synthesized voice was often intelligible: Listeners could understand almost 60% of the synthesized words correctly (as opposed to 4% when he was not using the BCI).

Real-time speech helped by algorithms

The process of instantaneously translating brain activity into synthesized speech is helped by advanced artificial intelligence algorithms.

The algorithms for the new system were trained with data collected while the participant was asked to try to speak sentences shown to him on a computer screen. This gave the researchers information about what he was trying to say.

The neural activity showed the firing patterns of hundreds of neurons. The researchers aligned those patterns with the speech sounds the participant was trying to produce at that moment in time. This helped the algorithm learn to accurately reconstruct the participant’s voice from just his neural signals.
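
The Nature paper describes the team's actual decoder; the snippet below is only a minimal, illustrative sketch of the general recipe outlined in this section: align frames of neural activity with the speech the participant was trying to produce, then learn a mapping from one to the other. All data, dimensions, and the choice of a simple ridge regression here are assumptions made for illustration, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins for the real recordings: firing rates from 256 neural
# channels and 20-dimensional acoustic features, one pair per time frame.
n_frames, n_channels, n_acoustic = 5000, 256, 20
neural = rng.poisson(lam=3.0, size=(n_frames, n_channels)).astype(float)

# Pretend the acoustic features are an unknown linear function of the
# neural activity plus noise; the decoder's job is to recover that mapping.
true_map = rng.normal(size=(n_channels, n_acoustic))
acoustic = neural @ true_map + rng.normal(scale=5.0, size=(n_frames, n_acoustic))

# Train on aligned (neural, acoustic) frame pairs, as described above.
decoder = Ridge(alpha=1.0).fit(neural[:4000], acoustic[:4000])

# At run time, each new frame of neural activity is decoded immediately,
# which is what makes low-latency voice synthesis possible.
predicted = decoder.predict(neural[4000:])
print("Held-out R^2:", decoder.score(neural[4000:], acoustic[4000:]))
```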

Clinical trial offers hope

“Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,” said David Brandman, co-director of the UC Davis Neuroprosthetics Lab and the neurosurgeon who performed the participant’s implant.

“The results of this research provide hope for people who want to talk but can’t. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis.”

Brandman is an assistant professor in the Department of Neurological Surgery and is the site-responsible principal investigator of the BrainGate2 clinical trial.

Limitations

The researchers note that although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants, including those who have speech loss from other causes, such as stroke.

The BrainGate2 trial is enrolling participants. To learn more about the study, visit braingate.org or contact braingate@ucdavis.edu.

Caution: Investigational device, limited by federal law to investigational use.  

A complete list of coauthors and funders is available in the article. 

Resources



The new BCI system allowed the study participant, who has ALS, to “speak” through a computer with his family in real time, change his intonation and “sing” simple melodies.

Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC Davis, operating the BCI system.


Three-dimensional model of brain and microelectrode arrays.

Credit: UC Regents

AI Energy Demands Reshape Global Power Grids

  • The rise of AI is causing a significant increase in data center energy demands, transforming them into major electricity consumers globally.

  • Technology companies are actively pursuing renewable energy sources and flexible energy management strategies to support the growing data center sector and ensure grid stability.

  • Countries worldwide are adapting their regulations and infrastructure to accommodate the data center boom, recognizing its economic potential while addressing challenges related to energy supply and sustainability.



The rise of artificial intelligence (AI) has turned data centers from minor electricity consumers into a major strain on global power grids. As chatbots become a permanent fixture of work and daily life, every query consumes power, propelling demand to record heights. US data centers consumed 50 terawatt-hours (TWh) of power a decade ago; that figure has risen to 140 TWh today, accounting for 3.5% of the country’s total electricity consumption.
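
Those two figures imply a total that is easy to sanity-check; the minimal arithmetic sketch below is derived only from the numbers quoted above.

```python
data_center_twh = 140      # reported US data center consumption today
share_of_total = 0.035     # reported share of national electricity use

total_us_twh = data_center_twh / share_of_total
print(f"Implied total US electricity consumption: {total_us_twh:.0f} TWh/year")
# -> roughly 4,000 TWh/year
```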

As a natural consequence of their energy consumption, technology companies are increasingly fitting the mold of large industrial energy consumers, signing power purchase agreements (PPAs) to ensure a secure and continuous supply of energy for their operations. Amazon, for example, has become the world's largest corporate buyer of renewable energy, signing more than 500 PPAs across 27 countries – a tally on par with some European nations.

To fuel the rapidly expanding US data center sector, technology leaders such as Amazon, Google and Microsoft are actively pursuing a secure and sustainable power supply. With the US hosting more than 50 gigawatts (GW) of data center capacity in 2024, tech companies are leaving no stone unturned as they evaluate a range of options – from solar photovoltaic (PV) and battery storage to gas and nuclear power. While renewables and batteries are advancing, small modular reactor (SMR) technologies, which could provide baseload supply and flexibility, still need to prove their commercial viability. Both approaches have merit, but one thing is clear: the US data center boom is coming, and it urgently needs power by any means necessary.

This also extends beyond the US. According to Rystad Energy’s research, global data center electricity consumption is projected to more than double by 2030. By 2040, power demand could soar to 1,800 TWh – enough to power about 150 million US homes for a year – as major tech companies continue to expand their processing capacity.
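
The household comparison can be checked the same way, assuming an average US home uses on the order of 12,000 kWh of electricity per year (an assumed figure, not stated in the article):

```python
projected_demand_twh = 1800        # projected global data center demand by 2040
avg_home_kwh_per_year = 12_000     # assumed average US household consumption

homes_powered_millions = projected_demand_twh * 1e9 / avg_home_kwh_per_year / 1e6
print(f"Homes powered for a year: {homes_powered_millions:.0f} million")  # ~150
```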

While this ramp-up may seem threatening, the sizeable energy demands of data centers – if managed effectively – could help stabilize local power grids.

To evaluate the stress on power grids at a granular level, it is crucial to understand the intricacies of training AI models and their batch-processing nature. These models gather and process data infrequently, allowing data centers to manage their energy use effectively. This is done through power-capping, which limits the maximum power that processing units can consume and reduces energy consumption, while only marginally raising the time taken to complete tasks.
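
To illustrate why power-capping pays off, the sketch below works through the energy arithmetic with hypothetical numbers (the 70% cap and 10% slowdown are assumptions, not figures from the article): energy is power multiplied by time, so a cap saves energy whenever the slowdown is proportionally smaller than the power reduction.

```python
def training_energy_kwh(power_kw: float, hours: float) -> float:
    """Energy consumed by a training job running at a fixed power draw."""
    return power_kw * hours

baseline = training_energy_kwh(power_kw=700.0, hours=100.0)

# Hypothetical cap: 70% of peak power, with the job taking ~10% longer.
capped = training_energy_kwh(power_kw=700.0 * 0.70, hours=100.0 * 1.10)

saving = 1 - capped / baseline
print(f"Energy saving from capping: {saving:.0%}")  # ~23%
```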

Meanwhile, AI model training can be paused and resumed to support energy-efficient scheduling, which can be short-term or long-term. Short-term scheduling shifts workloads to times when renewable energy sources – such as solar power during the day – are plentiful and power prices are low. Long-term scheduling involves planning for different seasons, running more processes in the summer when energy costs are lower, and scaling back in the winter when prices rise. These strategies can also be adapted in real-time to optimize power use by moving workloads to off-peak hours, helping to balance the energy grid.
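
A minimal sketch of the short-term scheduling idea, using hypothetical hourly prices and assuming the training workload can be paused and resumed freely: pick the cheapest hours of the day in which to run a job that needs a fixed number of compute-hours.

```python
# Hypothetical day-ahead electricity prices in $/MWh, one value per hour.
hourly_prices = [90, 85, 80, 78, 75, 74, 80, 95, 110, 120, 105, 70,
                 55, 50, 48, 52, 65, 100, 130, 140, 125, 110, 100, 95]

def cheapest_hours(prices: list[float], hours_needed: int) -> list[int]:
    """Return the indices of the cheapest hours in which to run a pausable job."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

schedule = cheapest_hours(hourly_prices, hours_needed=6)
print("Run training during hours:", schedule)
# Midday hours with cheap (e.g. solar-heavy) power are selected first.
```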

With this in mind, big tech companies are searching globally for suitable locations to build energy-intensive data centers, and countries such as Norway stand out as ideal candidates. Norway has historically offered low power prices, along with a high share of clean hydropower and a cold climate that naturally cools the heat generated by data centers.

Norway's ability to provide flexible energy to the European grid is becoming increasingly important, especially as the transmission grid struggles to meet high demand during peak times. Many people and political groups are concerned that data centers could drive up electricity prices for households during these times. However, with appropriate regulations, data centers could use energy flexibly, acting as reliable buyers for power producers when there is excess supply. This elastic demand could help stabilize overall consumption, optimize grid utilization, and reduce price volatility.

However, increasing interest in Norway as a data center hub has ignited political debate, highlighting the need for discussions on how to move forward. Proposals that have surfaced include a licensing system that incorporates criteria for social benefit and the use of waste heat, along with potential measures to limit data storage.

Globally, the data center boom has sparked mixed responses. Ireland and the Netherlands are restricting new developments due to grid strain and other concerns, while other countries are seeking solutions to accommodate surging electricity demand. Singapore, for instance – recognizing the economic potential of data centers – has lifted previous restrictions and is exploring changes in legislation to accommodate the data center boom, in a bid to ensure that the city-state does not fall behind.

The willingness to explore unconventional options highlights the efforts nations are taking to capitalize on the data center market while attempting to address the inherent challenges, particularly regarding energy supply and their alignment with decarbonization goals.

Market liberalization adds another checkbox for tech giants, as current regulatory bottlenecks could hinder their future ambitions. Thailand is one country that fits the bill. While its grid and climate policies differ from Norway’s, the Southeast Asian nation is actively pushing for power sector deregulation.

By opening its market to competition and loosening government control, Thailand is creating a more attractive environment for private investment. This proactive approach has already garnered significant interest, including 47 data center projects that have raked in more than $5 billion in investments as of December 2024.

As data centers multiply, managing their electricity demand and expansion requires a patchwork of regulatory and development strategies. While this balancing act is essential, sound policy and infrastructure investments can play a contributing role. Strategic measures, including flexible energy consumption, real-time demand response and the integration of renewable power sources, can help alleviate these pressures.

By Rystad Energy

FINLAND

Karelian clears key hurdle for EU’s first diamond mine


Stream sediment sampling in Northern Ireland. (Image courtesy of Karelian Diamond Resources.)

Karelian Diamond Resources (LON: KDR) has registered its Lahtojoki mining concession in the Finnish land registry, advancing its plan to develop what could become the European Union’s first diamond mine.

This registration, handled by the Finnish mining authority TUKES, allows the company to proceed with further development plans for the Lahtojoki diamond deposit.

TUKES had previously approved the concession and is also responsible for issuing the mining certificate.

Karelian noted that a hearing on compensation matters related to the project has been postponed until fall 2025, potentially impacting the timeline for full-scale operations.

The company says Lahtojoki hosts high-quality gem diamonds, including rare pink and coloured stones that can fetch up to 20 times more than typical colourless gems. It believes the diamondiferous kimberlite pipe could support a profitable, low-strip-ratio open-pit operation.

The Dublin-based company is also advancing other assets in Finland that contain nickel, copper and platinum group elements, and is continuing exploration at a site in the Kuhmo region, where it aims to trace the source of a rare green diamond it found in 2022.