Wednesday, September 01, 2021

When walked on, these wooden floors harvest enough energy to turn on a lightbulb

This graphical abstract shows how footsteps on functionalized wood floors can be used to power small devices. Credit: Sun et al./Matter

Researchers from Switzerland are tapping into an unexpected energy source right under our feet: wooden floorings. Their nanogenerator, presented September 1 in the journal Matter, enables wood to generate energy from our footfalls. They also improved the wood used in their nanogenerator with a combination of a silicone coating and embedded nanocrystals, resulting in a device that was 80 times more efficient—enough to power LED lightbulbs and small electronics.

The team began by transforming wood into a nanogenerator by sandwiching two pieces of functionalized wood between electrodes. Like a shirt-clinging sock fresh out of the dryer, the wood pieces become electrically charged through periodic contacts and separations when stepped on, a phenomenon called the triboelectric effect. The electrons can transfer from one object to another, generating electricity. However, there's one problem with making a nanogenerator out of wood.

"Wood is basically triboneutral," says senior author Guido Panzarasa, group leader in the professorship of Wood Materials Science located at Eidgenössische Technische Hochschule (ETH) Zürich and Swiss Federal Laboratories for Materials Science and Technology (Empa) Dübendorf. "It means that wood has no real tendency to acquire or to lose electrons." This limits the material's ability to generate electricity, "so the challenge is making wood that is able to attract and lose electrons," Panzarasa explains.

To boost wood's triboelectric properties, the scientists coated one piece of the wood with polydimethylsiloxane (PDMS), a silicone that gains electrons upon contact, while functionalizing the other piece of wood with in-situ-grown nanocrystals called zeolitic imidazolate framework-8 (ZIF-8). ZIF-8, a hybrid network of metal ions and organic molecules, has a higher tendency to lose electrons. They also tested different types of wood to determine whether certain species or the direction in which wood is cut could influence its triboelectric properties by serving as a better scaffold for the coating.

A prototype of how energy could be captured from footsteps on a functionalized wood nanogenerator to light a bulb. Credit: Sun et al./Matter

The researchers found that a triboelectric nanogenerator made with radially cut spruce, a common wood for construction in Europe, performed the best. Together, the treatments boosted the triboelectric nanogenerator's performance: it generated 80 times more electricity than natural wood. The device's electricity output was also stable under steady forces for up to 1,500 cycles.

"Our focus was to demonstrate the possibility of modifying wood with relatively environmentally friendly procedures to make it triboelectric," says Panzarasa. "Spruce is cheap and available and has favorable mechanical properties. The functionalization approach is quite simple, and it can be scalable on an industrial level. It's only a matter of engineering."The researchers found that a wood floor prototype with a surface area slightly smaller than a piece of paper can produce enough energy to drive household LED lamps and small electronic devices such as calculators. They successfully lit up a lightbulb with the prototype when a human adult walked upon it, turning footsteps into electricity.

Besides being efficient, sustainable, and scalable, the newly developed nanogenerator also preserves the features that make wood useful for interior design, including its mechanical robustness and warm colors. The researchers say that these features might help promote the use of wood nanogenerators as green energy sources in smart buildings. They also say that wood construction could help mitigate climate change by sequestering CO2 from the environment throughout the material's lifespan.

The next step for Panzarasa and his team is to further optimize the nanogenerator with chemical coatings that are more eco-friendly and easier to implement. "Even though we initially focused on basic research, eventually, the research that we do should lead to applications in the real world," says Panzarasa. "The ultimate goal is to understand the potentialities of wood beyond those already known and to enable wood with new properties for future sustainable smart buildings."

More information: Sun et al., "Functionalized wood with tunable tribo-polarity for efficient triboelectric nanogenerators," Matter (2021). DOI: 10.1016/j.matt.2021.07.022, www.cell.com/matter/fulltext/S2590-2385(21)00393-3
Journal information: Matter. Provided by Cell Press

Making the case for hydrogen in a zero-carbon economy

Credit: DOI: 10.1016/j.apenergy.2021.117314

As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

"As we move to more and more renewable penetration, this intermittency will make a greater impact on the ," says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That's because grid operators will increasingly resort to fossil-fuel-based "peaker"  that compensate for the intermittency of the variable renewable  (VRE) sources of sun and wind. "If we're to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources," Gençer says.

Low- and zero-carbon alternatives to greenhouse-gas-emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen-fired power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that's useful for policymakers, investors, and utilities engaged in the clean energy transition.

Now, Gençer and Drake D. Hernandez SM '21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic model, based on a detailed inventory of California's power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

"Our study's essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries—even today, when the costs of hydrogen production, transmission, and storage are very high," says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, "If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition."

Adding up the costs

California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

"We looked at all the peaker plants in California," recounts Gençer. "We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries." The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.

Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: "There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them," says Gençer.

But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. "The most difficult thing is finding cost assumptions for new technologies," says Hernandez. "You can't do this through a literature review, so we had many conversations with equipment manufacturers and plant operators."

The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: "Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid," says Hernandez. In addition, notes Gençer, "While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants."

A tool for energy investors

When he began this project, Gençer admits he "wasn't hopeful" about hydrogen replacing natural gas in peaker plants. "It was kind of shocking to see in our different scenarios that there was a place for hydrogen." That's because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won't take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. "With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies," says Gençer.

The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

"As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward," says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

A study group member of MITEI's soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, "Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary."

More information: Drake D. Hernandez et al, Techno-economic analysis of balancing California's power system on a seasonal basis: Hydrogen vs. lithium-ion batteries, Applied Energy (2021). DOI: 10.1016/j.apenergy.2021.117314
Provided by Massachusetts Institute of Technology 
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

A carbon-neutral response to rising electricity demand

An expansion of hydropower could contribute significantly to the energy transition. But plans for new reservoirs, such as the one at the Trift glacier in the canton of Bern (pictured in 2007), face political opposition. Credit: Wikimedia Commons/Thisisbossi

Many everyday activities rely on electricity. As we look to 2050, this dependence is set to increase, with demand for electricity in Switzerland likely to rise by around 50 percent. The increased demand can only be met by transforming the energy system.

Switzerland has set itself the goal of ending its greenhouse gas emissions by 2050. With this net zero target, the country hopes to play its part in limiting global warming to less than 1.5°C. The implications of this target for future electricity requirements—and the potential contributions of geothermal energy and hydropower in particular—have been the subject of ETH-led research at 25 Swiss scientific institutions, industrial companies and federal authorities as part of the Swiss Competence Center for Energy Research—Supply of Electricity (SCCER-SoE). Although this project initially focused on renewables as a substitute for nuclear power, it ultimately took on a much broader scope. After all, the energy system of the future will not only need to deliver more power, but do so with negative emissions wherever possible. This requires much more comprehensive and, above all, more integrated solutions.

The electricity mix of the future

Led by the SCCER-SoE, a total of eight competence centers used scenarios to model the future composition of electricity supply and demand. The increased demand for electricity by 2050 will be largely driven by electrification in two areas: transport and heating.

To meet this rising demand and, above all, to compensate for the elimination of nuclear power plants, the supply of renewable energies will need to almost double by 2050. The greatest potential lies in photovoltaics. "However, this potential can only be utilized in full if we also take measures to compensate for the fluctuations of this form of energy," says Peter Burgherr from the Paul Scherrer Institute. Photovoltaics are poorly suited to delivering sufficient power in the winter months, and they produce a surplus of energy in the middle of the day in the summer months, which can tax the power grid.

Photovoltaics, hydropower and geothermal energy, combined with pumped storage, CO2 capture and long-term underground storage, form the backbone of a climate-neutral electricity supply in 2050. Credit: SCCER-SoE

To better cope with the irregular supply of electricity, it is imperative that we also make better use of the potential offered by other renewables such as wind, hydropower, biomass and geothermal energy. Surplus energy from photovoltaic systems could be stored in batteries temporarily, used for pumped storage plants, or converted into heat or hydrogen.

This is where hydropower comes into play. As the most important domestic energy source in Switzerland, both now and in the future, it not only contributes directly to the electricity supply but is also taking on an important role as a form of energy storage. But Robert Boes, head of the Laboratory of Hydraulics, Hydrology and Glaciology and a professor at ETH Zurich, qualifies this potential: "A significant expansion of hydropower in the next few decades is unrealistic given the stringent environmental protection requirements, profitability which is low or non-existent, and poor public acceptance of such projects." Even under optimistic assumptions, that means additional electricity imports and domestic gas-fired power stations will still be required to meet demand.

In Switzerland, geothermal energy has the potential to contribute to future power generation and to provide a large proportion of the heat needed for heating purposes, hot water and certain industrial processes. And it is not only that water can be heated underground and then extracted—the subsurface can also be used to store water heated on the surface using surplus energy from photovoltaics or waste incineration plants, for example.

Not without negative emissions

As well as expanding its use of renewable energies, increasing the efficiency of existing technologies and implementing measures to minimize energy consumption, Switzerland will need to achieve negative emissions if it is to meet the net zero target. For example, these negative emissions could be achieved by capturing carbon dioxide directly from ambient air (direct air capture) or by burning biomass, capturing the resulting CO2, and placing it in long-term storage underground. Current findings suggest that the options for underground storage in Switzerland are not as extensive as originally hoped, and so there is a need for further exploration—along with research into storage options abroad.

The results from the SCCER-SoE's seven years of research indicate that the net zero target is technically achievable by 2050. "However, this will require extensive and coordinated adjustments in many different areas that affect the whole of society. We can't afford to waste any more time if we want to meet the stipulated climate goals by 2050," says Domenico Giardini, professor at ETH Zurich and Head of the SCCER-SoE.

Provided by ETH Zurich 

How much wildfire smoke is infiltrating our homes?

The Air Quality Index across the San Francisco Bay Area measured by crowdsourced PurpleAir sensors on Aug. 28, 2021, when wildfire smoke made air quality unhealthy in much of the region. On the whole, the AQI measured by outdoor sensors (right) was higher than indoor sensors (left), indicating that many residents are taking action to reduce smoke exposure indoors. Credit: UC Berkeley

Though overall air quality in the U.S. has improved dramatically in recent decades, smoke from catastrophic wildfires is now creating spells of extremely hazardous air pollution in the Western U.S. And, while many people have learned to reduce their exposure by staying inside, keeping windows closed and running air filtration systems on smoky days, data remains limited on how well these efforts are paying off.

In a new study, scientists from the University of California, Berkeley, used data from 1,400 indoor air sensors and even more outdoor air sensors included on the crowdsourced PurpleAir network to find out how well residents of the San Francisco and Los Angeles metropolitan areas were able to protect the air inside their homes on days when the air outside was hazardous.

They found that, by taking steps like closing up their houses and using filtration indoors, people were able to cut the infiltration of PM2.5 particulate matter into their homes by half on wildfire days compared to non-wildfire days.

"While the particulate matter indoors was still three times higher on wildfire days than on non-wildfire days, it was much lower than it would be if people hadn't closed up their buildings and added filtration," said study senior author Allen Goldstein, a professor of environmental engineering and of environmental science, policy and management at UC Berkeley. "This shows that when people have information about the smoke coming their way, they are acting to protect themselves, and they are doing it effectively."

While individuals can take steps to reduce smoke infiltration to their homes, the ability for air pollution to get inside can also depend heavily on the nature of the building itself. To study these effects, the researchers also used the real estate website Zillow to estimate the characteristics of buildings in the sensor network, including the structure's relative age, the type of building and the socio-economic status of the neighborhood.

Not surprisingly, they found that newly constructed homes and those that were built with central air conditioning were significantly better at keeping wildfire smoke out.

"One of the things that makes this study exciting is it shows what you can learn with crowdsourced data that the government was never collecting before," said study co-author Joshua Apte, an assistant professor of civil and environmental engineering and of public health at UC Berkeley. "The Environmental Protection Agency (EPA) has a mandate to measure outdoor air quality and not —that's just how our air quality regulations are set up. So, these kinds of crowdsourced data sets allow us to learn about how populations are affected indoors, where they spend most of their time."

Protecting the air of the great indoors

Like many residents of the Western U.S., both Goldstein and Apte regularly use websites like AirNow and PurpleAir to check how wildfire smoke is affecting the air quality where they live. Both have even installed their own PurpleAir sensors to track the concentrations of PM2.5 particulate matter inside their homes.

So, when UC Berkeley graduate student Yutong Liang wrote a term paper using data from the PurpleAir network to study the impact of wildfire smoke on indoor air quality, Goldstein and Apte thought the work was worth expanding to a full research study.

"Our friends and neighbors and colleagues were all looking at this real-time data from PurpleAir to find out when smoke was affecting their area, and using that information to decide how to behave," Goldstein said. "We wanted to use actual data from this network to find out how effective that behavior was at protecting them."

The analysis compared indoor and outdoor sensor data collected during August and September of 2020, when both San Francisco and Los Angeles experienced a number of "fire days," which the researchers defined as days when the average PM2.5 measured by the EPA exceeded 35 µg/m³. This value corresponds to an Air Quality Index (AQI) of approximately 100, which represents the boundary between PM2.5 levels that the EPA considers "moderate" and those that are considered "unhealthy for sensitive groups."
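For readers curious how a PM2.5 concentration maps onto the AQI scale, the EPA uses a piecewise-linear interpolation between fixed breakpoints. The sketch below uses the 24-hour PM2.5 breakpoint table in effect around the time of the study (the table has since been revised), so treat the exact numbers as illustrative.

```python
# Piecewise-linear PM2.5 -> AQI conversion (EPA 24-hour breakpoints, circa 2020).
# Illustrative only; breakpoint values may differ from the current EPA table.

PM25_BREAKPOINTS = [
    # (conc_lo, conc_hi, aqi_lo, aqi_hi)
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_to_aqi(conc):
    """Interpolate the AQI within the breakpoint row that brackets conc."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    return 500  # beyond the top of the scale

print(pm25_to_aqi(35.0))   # ~99: right at the study's "fire day" threshold
print(pm25_to_aqi(100.0))  # well into the "unhealthy" range
```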

While scientists are still working to puzzle out just what types of chemical compounds are found in this particulate matter from wildfire smoke, a growing body of research now suggests that it may be even worse for human health than other types of PM2.5 air pollution.

"Wildfires create thousands of different organic chemicals as particulate matter and gases that can cause respiratory and cardiovascular problems in people," said Liang, who is lead author of the study. "The Goldstein research group is trying to identify these unique compounds, as well as the ways that they react to transform the composition of wildfire smoke over time."

To protect indoor air during fire season, the research team suggests closing up your home before smoke arrives in your area and investing in an air filtration system. If you can't afford a commercial air filter—or they are all sold out—you can also build your own for less than $50 using a box fan, a MERV-rated furnace filter and some tape.

"There are lots of very informative Twitter threads about how to build a good DIY system, and if you are willing to go a little crazy—spend $90 instead of $50—you can build an even better design," Apte said. "Every air quality researcher I know has played around with these because they are so satisfying and simple and fun, and they work."

Where you put the filters also matters. If you only have one, Apte suggests putting it in your bedroom and leaving the door closed while you sleep, to keep the air in your bedroom as clean as possible.

Finally, the researchers suggest cooking as little as possible during smoky days. Cooking can generate surprising amounts of both particulate matter and gases, neither of which can be easily ventilated out of the house without inviting wildfire smoke in.

"Air filters can help remove particulate matter from cooking, but running a kitchen or bathroom exhaust fan during smoke events can actually pull PM2.5-laden air from outdoors to indoors," Goldstein said.

In the future, the researchers hope to find ways to sample the indoor air quality of a more diverse array of households. Because PurpleAir sensors cost at least $200 apiece, households that contribute data to the network tend to be affluent, and the Zillow estimates show that the average price of homes in the network is about 20% higher than median property values in their areas.

"One thing that we're deeply interested in is understanding what happens to people in indoor environments, because that's where people spend most of their time, and there's still an awful lot we don't know about indoor pollution exposure," Apte said. "I think that these new methods of sensing the indoor environment are going to allow us to grapple a lot more with questions of environmental justice and find out more about who gets to breathe cleaner air indoors."

More information: Yutong Liang et al, Wildfire smoke impacts on indoor air quality assessed using crowdsourced data in California, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2106478118
Journal information: Proceedings of the National Academy of Sciences 
Provided by University of California - Berkeley 

Parks are about promoting everyone's public health, not just boosting homeowners' property value

Public park. Credit: CC0 Public Domain

The COVID-19 pandemic put a lot of attention on the role of parks and green spaces—particularly in large cities. But not all of this attention has been positive.

Although the pandemic has clarified the beneficial role of parks in promoting health and well-being in urban communities, it has also highlighted inequities in accessing parks and green spaces, problems with a culture of enforcement and led to a series of policy responses that were heavily criticized.

Recently, police in Halifax clashed with protesters and violently evicted people staying at Peace and Friendship Park and Spring Garden Road Library. Earlier this summer, Toronto police forcibly evicted residents living in encampments at Lamport Stadium, Trinity Bellwoods Park and Alexandra Park. Under the guise of "remediation," these violent evictions were described as reasonable, firm and compassionate by Mayor John Tory despite the clashes with protesters, use of pepper spray and numerous injuries and arrests.

These actions have been repeatedly justified as protective measures, but activists and even city councilors have spoken out against the use of violence in these responses.

Although these evictions have created a buzz, they reflect a trend in public policy that has been developing for some time and changing the way we see, use and value parks in our cities. They also highlight some of the limitations in our thinking about how parks can serve as a public health resource for our communities.

Multiple visions of urban parks

The idea of parks as a public health resource was central in the early vision of parks. In the Progressive Era (1896–1916), an interest in health and hygiene motivated the development of parks so there could be clean and sanitary spaces for outdoor play in the overcrowded conditions of growing industrial towns.

However, other visions and motivations have long driven urban park development. City boosters and beautification societies invested in parks as a way to create civic landmarks and spaces of esthetic and natural beauty for residents to enjoy as leisure. Middle-class social reformers saw parks as spaces for the social improvement of the working class through organized sport and physical education. Public parks have long been valued as spaces for urban amusement and entertainment.

Our vision of urban parks—particularly in large cities like Toronto—is also affected by broader economic conditions, local development agendas and gentrification.

Parks advocacy groups actively promoted the health benefits of parks as a strategy to promote park investment in cities during periods of chronic underfunding. For some advocates of urban development, parks tend to be positioned as a leisure resource for homeowners and a source of property value. This is reflective of broader social trends where private wealth is valued over public goods.

The logics of park policy

Park-related policy is generally established at the municipal level. Examining policy implementation and enforcement helps us understand the motivations that guide policy development.

The events that unfolded in Trinity Bellwoods Park during the pandemic illuminate some of the gaps between the rhetoric of parks as a health-promoting resource and the realities of park use.

We see these gaps in how the City of Toronto responded to concerns related to the spread of COVID-19 in parks. It did so by revising park by-laws to mandate physical distancing. These rules were broken when thousands of people congregated in Trinity Bellwoods in May 2020.

The city then expanded its efforts with additional enforcement and signage, including painting white circles on park grass. This response was designed to enable leisurely park use, despite widely acknowledged rule-breaking (like alcohol consumption and a lack of physical distancing).

The pandemic also created a health crisis in the city's shelter system where it became impossible to follow physical distancing rules. As a result, between 300 and 400 residents moved to city parks, with the aim of reducing their risk of becoming infected with the virus. The city responded to this health-promoting action with forced evictions.

These different responses illustrate the limited way in which we think about parks in relation to health. The responses show that we think of parks as health resources only when we define health promotion in terms of individual engagement in leisure-based health-promoting activities.

Private interest over public health

The World Health Organization suggests health is created by caring for oneself and others, by being able to make decisions and have control over one's life circumstances, and by ensuring that the society one lives in creates conditions that allow the attainment of health by all its members.

When we think about health in this way, we can see how parks might serve as a health resource.

Historically, Toronto engaged parks differently in times of crisis. Forest and open air schools were created in parks for children diagnosed with tuberculosis in the early 1900s. These schools saw increased use again during the Spanish Flu. At the time, people saw parks as more than just a place to exercise or socialize.

In order for parks to become health-promoting resources, cities must use a broader vision of health to guide park policy-making. This vision might consider parks not just as a place for healthy leisure-based activity, but also as a resource that can be put to use to address other significant health concerns, particularly for those most vulnerable. Future parks health policy must reimagine parks as more than a contributor to property value.

Provided by The Conversation 
This article is republished from The Conversation under a Creative Commons license. Read the original article.


Hurricane Ida turned into a monster thanks to a giant warm patch in the Gulf of Mexico

A computer animation reflects the temperature change as eddies spin off from the Loop Current and Gulf Stream along the U.S. Coast. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio

As Hurricane Ida headed into the Gulf of Mexico, a team of scientists was closely watching a giant, slowly swirling pool of warm water directly ahead in its path.

That warm pool, an eddy, was a warning sign. It was around 125 miles (200 kilometers) across. And it was about to give Ida the power boost that in the span of less than 24 hours would turn it from a weak hurricane into the dangerous Category 4 storm that slammed into Louisiana just outside New Orleans on Aug. 29, 2021.

Nick Shay, an oceanographer at the University of Miami's Rosenstiel School of Marine and Atmospheric Sciences, was one of those scientists. He explains how these eddies, part of what's known as the Loop Current, help storms rapidly intensify into monster hurricanes.

How do these eddies form?

The Loop Current is a key component of a large gyre, a circular current, rotating clockwise in the North Atlantic Ocean. Its strength is related to the flow of warm water from the tropics and Caribbean Sea into the Gulf of Mexico and out again through the Florida Straits, between Florida and Cuba. From there, it forms the core of the Gulf Stream, which flows northward along the Eastern Seaboard.

In the Gulf, this current can start to shed large warm eddies when it gets north of about the latitude of Fort Myers, Florida. At any given time, there can be as many as three warm eddies in the Gulf. The problem comes when these eddies form during hurricane season. That can spell disaster for coastal communities around the Gulf.

Subtropical water has a different temperature and salinity than Gulf common water, so its eddies are easy to identify. They have warm water at the surface and temperatures of 78 degrees Fahrenheit (26 C) or more in water layers extending about 400 or 500 feet deep (about 120 to 150 meters). Since the strong salinity difference inhibits mixing and cooling of these layers, the warm eddies retain a considerable amount of heat.

When heat at the ocean surface is over about 78 F (26 C), hurricanes can form and intensify. The eddy that Ida passed over had surface temperatures over 86 F (30 C).
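A common way to quantify that kind of deep warmth is to integrate the heat stored above the 26 C threshold over depth, often called tropical cyclone heat potential. The sketch below does this for a hypothetical temperature profile shaped loosely like a warm Gulf eddy; the profile and constants are illustrative, not measurements from the flights described later in this article.

```python
# Sketch of ocean heat content above the 26 °C threshold (tropical cyclone
# heat potential). The temperature profile below is hypothetical.

RHO = 1025.0  # seawater density, kg/m^3
CP = 4000.0   # specific heat of seawater, J/(kg*K)

def heat_potential(depths_m, temps_c, threshold_c=26.0):
    """Integrate rho*cp*(T - threshold) over depth, stopping where the
    profile drops to the threshold. Returns kJ per cm^2 of ocean surface
    (trapezoidal approximation)."""
    total_j_per_m2 = 0.0
    for i in range(len(depths_m) - 1):
        z0, z1 = depths_m[i], depths_m[i + 1]
        t0, t1 = temps_c[i], temps_c[i + 1]
        if t0 <= threshold_c:
            break
        if t1 < threshold_c:
            # Interpolate the depth where the profile crosses the threshold.
            z1 = z0 + (z1 - z0) * (t0 - threshold_c) / (t0 - t1)
            t1 = threshold_c
        mean_excess = (t0 + t1) / 2.0 - threshold_c
        total_j_per_m2 += RHO * CP * mean_excess * (z1 - z0)
    return total_j_per_m2 / 1e3 / 1e4  # J/m^2 -> kJ/cm^2

# Hypothetical warm-eddy profile: 30 °C at the surface, 26 °C near 150 m.
depths = [0, 50, 100, 150, 200]
temps = [30.0, 29.0, 27.5, 26.0, 24.0]
print(f"Heat potential: {heat_potential(depths, temps):.0f} kJ/cm^2")
```

The deeper the warm layer extends, the larger this number becomes, which is why a storm passing over an eddy like the one in Ida's path can keep drawing heat even as its winds churn the surface.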

The Loop Current runs from the tropics through the Caribbean and into the Gulf of Mexico, then joins the Gulf Stream moving up the East Coast. Credit: NASA/Goddard Space Flight Center Scientific Visualization Studio

How did you know this eddy was going to be a problem?

We monitor ocean heat content from space each day and keep an eye on the ocean dynamics, especially during the summer months. Keep in mind that warm eddies in the wintertime can also energize atmospheric frontal systems, such as the "storm of the century" that caused snowstorms across the Deep South in 1993.

To gauge the risk this heat pool posed for Hurricane Ida, we flew aircraft over the eddy and dropped measuring devices, including what are known as expendables. An expendable parachutes down to the surface and releases a probe that descends about 1,300 to 5,000 feet (400 to 1,500 meters) below the surface. It then sends back data about the temperature and salinity.

This eddy had heat down to about 480 feet (around 150 meters) below the surface. Even if the storm's wind caused some mixing with cooler water at the surface, that deeper water wasn't going to mix all the way down. The eddy was going to stay warm and continue to provide heat and moisture.

That meant Ida was about to get an enormous supply of fuel.

When warm water extends deep like that, we start to see the atmospheric pressure drop. The transfer of moisture from the ocean to the atmosphere, and with it latent heat, is sustained over the warm eddies because the eddies are not cooling significantly. As this release of latent heat continues, the central pressure continues to decrease. Eventually the surface winds feel the larger horizontal pressure changes across the storm and begin to speed up.

That's what we saw the day before Hurricane Ida made landfall. The storm was beginning to sense that really warm water in the eddy. As the pressure keeps going down, storms get stronger and more well defined.

When I went to bed at midnight that night, the wind speeds were about 105 miles per hour. When I woke up a few hours later and checked the National Hurricane Center's update, it was 145 miles per hour, and Ida had become a major hurricane.

Ida's route to Louisiana passed through very warm water. The scale, in meters, shows the maximum depth at which temperatures were 78 degrees Fahrenheit (26 C) or greater. Credit: University of Miami, CC BY-ND

Is rapid intensification a new development?

We've known about this effect on hurricanes for years, but it's taken quite a while for meteorologists to pay more attention to the upper ocean heat content and its impact on rapid intensification.

In 1995, Hurricane Opal was a minimal tropical storm meandering in the Gulf. Unknown to forecasters at the time, a big warm eddy was in the center of the Gulf, moving about as fast as Miami traffic in rush hour, with warm water down to about 150 meters. All the meteorologists saw in the satellite data was the surface temperature, so when Opal rapidly intensified on its way to eventually hitting the Florida Panhandle, it caught a lot of people by surprise.

Today, meteorologists keep a closer eye on where the pools of heat are. Not every storm has all the right conditions. Too much wind shear can tear apart a storm, but when the atmospheric conditions and ocean temperatures are extremely favorable, you can get this big change.

Hurricanes Katrina and Rita, both in 2005, had pretty much the same signature as Ida. They went over a warm eddy that was just getting ready to be shed from the Loop Current.

Hurricane Michael in 2018 didn't go over an eddy, but it went over the eddy's filament—like a tail—as it was separating from the Loop Current. Each of these storms intensified quickly before hitting land.

Of course, these warm eddies are most common right during hurricane season. You'll occasionally see this happen along the Atlantic Coast, too, but the Gulf of Mexico and the Northwest Caribbean are more contained, so when a storm intensifies there, someone is going to get hit. When it intensifies close to the coast, like Ida did, it can be disastrous for coastal inhabitants.

How hurricanes draw fuel from warm water. Credit: NOAA

What does climate change have to do with it?

We know global warming is occurring, and we know that surface temperatures are warming in the Gulf of Mexico and elsewhere. When it comes to rapid intensification, however, my view is that a lot of these thermodynamics are local. How great a role climate change plays remains unclear.

This is an area of fertile research. We have been monitoring the Gulf's ocean heat content for more than two decades. By comparing the temperature measurements we took during Ida and other hurricanes with satellite and other atmospheric data, scientists can better understand the role the oceans play in the rapid intensification of storms.

Once we have these profiles, scientists can fine-tune the computer model simulations used in forecasts to provide more detailed and accurate warnings in the future.

Provided by The Conversation 
This article is republished from The Conversation under a Creative Commons license. Read the original article.