Sunday, December 08, 2024

 

So you want to build a solar or wind farm? Here’s how to decide where


MIT engineers show how detailed mapping of weather conditions and energy demand can guide the optimal siting of renewable energy installations

"Decarbonized Energy System Planning with High-Resolution Spatial Representation of Renewables Lowers Cost"


Massachusetts Institute of Technology




Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning that draws on fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations, and can lead to more efficient and economically viable operations.

The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing availability of clean power when it’s needed, the researchers found.

The study, which will appear in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland. 

Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of power entering the grid, she says. By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”
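To make the idea concrete, here is a small, purely illustrative Python sketch (not drawn from the study's own model, and with invented numbers): a midday-peaking solar profile and a night-peaking wind profile are strongly anti-correlated, and blending them tracks a flat demand profile much more closely than relying on either resource alone.

```python
# Toy illustration of resource complementarity (synthetic numbers, not study data).
import numpy as np

hours = np.arange(24)

# Hypothetical hourly capacity factors: solar peaks at midday,
# while this particular wind site happens to blow hardest at night.
solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # zero overnight
wind = 0.5 + 0.4 * np.cos(hours / 24 * 2 * np.pi)             # peaks around midnight

# A flat demand profile; each resource is scaled to the same average output.
demand = np.full(24, 0.5)

def rms_mismatch(supply):
    """Root-mean-square gap between hourly supply and demand over the day."""
    return np.sqrt(np.mean((supply - demand) ** 2))

solar_only = solar * demand.mean() / solar.mean()
wind_only = wind * demand.mean() / wind.mean()
blend = 0.5 * solar_only + 0.5 * wind_only   # half the capacity of each

print(f"correlation(solar, wind)  : {np.corrcoef(solar, wind)[0, 1]:+.2f}")
print(f"RMS mismatch, solar only  : {rms_mismatch(solar_only):.3f}")
print(f"RMS mismatch, wind only   : {rms_mismatch(wind_only):.3f}")
print(f"RMS mismatch, 50/50 blend : {rms_mismatch(blend):.3f}")
```

In this toy case the two profiles are almost perfectly out of phase, so the blend smooths out most of the hour-to-hour gap that either resource would leave on its own.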

Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling at a resolution finer than 10 kilometers (about 6 miles). “It’s a way of determining where should we exactly build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution. To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

By comparing the results of siting based on a typical method vs. their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says. “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour, and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”
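As a rough illustration of that objective (a minimal sketch with synthetic data, not the study's actual formulation), one can choose non-negative capacities for a handful of candidate sites so that their combined hourly output tracks demand as closely as possible, rather than simply siting everything where average output is highest:

```python
# Minimal sketch: size capacities at candidate sites to minimize the hourly
# supply-demand mismatch (synthetic profiles, not the study's model).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
hours, n_sites = 168, 12                      # one synthetic week, a dozen candidate sites
t = np.arange(hours)

demand = 1.0 + 0.3 * np.sin(2 * np.pi * (t % 24 - 15) / 24)   # evening-peaking demand

# Hypothetical capacity-factor profiles: each site peaks at a different hour of day.
phase = rng.uniform(0, 24, n_sites)
profiles = np.clip(
    0.4 + 0.4 * np.sin(2 * np.pi * (t[:, None] % 24 - phase[None, :]) / 24)
    + 0.1 * rng.normal(size=(hours, n_sites)),
    0, 1,
)

# Non-negative least squares: capacities x >= 0 minimizing ||profiles @ x - demand||.
capacity, _ = nnls(profiles, demand)

# Contrast case: put all capacity at the single site with the best average output.
greedy = np.zeros(n_sites)
greedy[profiles.mean(axis=0).argmax()] = demand.mean() / profiles.mean(axis=0).max()

def rms_gap(x):
    return np.sqrt(np.mean((profiles @ x - demand) ** 2))

print("sites actually built:", int((capacity > 1e-6).sum()), "of", n_sites)
print(f"RMS supply-demand gap, coordinated siting : {rms_gap(capacity):.3f}")
print(f"RMS supply-demand gap, best-average site  : {rms_gap(greedy):.3f}")
```

The study's actual planning model also represents storage, transmission, and costs, all of which this sketch leaves out; the point here is only the shift in objective from maximizing output to matching demand.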

In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day. 

These insights were revealed through the researchers’ integration of high-resolution weather data and energy system optimization. When planning with lower-resolution weather data, which was generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants, and consequently the total system cost was much higher. The high-resolution modeling enhanced the complementarity between wind and solar farms because it better represented the variability of the renewable resources.
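A toy example of why resolution matters (again with invented numbers, not the study's data): if two nearby sites inside one coarse grid cell, on the order of 30 kilometers across, are windy at different times of day, the coarse cell sees only their average, and the planner loses the ability to weight one site against the other to follow demand.

```python
# Toy sketch of how coarse weather data can hide complementarity (synthetic numbers).
import numpy as np

hours = np.arange(24)
day_site = 0.5 + 0.35 * np.sin(2 * np.pi * (hours - 9) / 24)    # windier mid-afternoon
night_site = 0.5 - 0.35 * np.sin(2 * np.pi * (hours - 9) / 24)  # windier overnight

# A coarse grid cell sees only the average of the two sites, which is nearly flat.
coarse_cell = 0.5 * (day_site + night_site)

demand = 0.5 + 0.1 * np.sin(2 * np.pi * (hours - 12) / 24)       # evening-leaning demand

def rms_gap(supply):
    return np.sqrt(np.mean((supply - demand) ** 2))

# With fine-resolution data a planner can weight the two sites to follow demand;
# with coarse data both "sites" look identical, so no such choice exists.
best_fine = min(
    rms_gap(w * day_site + (1 - w) * night_site) for w in np.linspace(0, 1, 101)
)
print(f"RMS gap, coarse-cell profile      : {rms_gap(coarse_cell):.3f}")
print(f"RMS gap, best fine-resolution mix : {best_fine:.3f}")
```

In this toy case the coarse cell's profile is flat, so the day/night contrast between the two sites, and the chance to exploit it, simply disappears from the planner's view.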

The researchers say their framework is very flexible and can be easily adapted to any region to account for local geophysical and other conditions. In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” The work shows that using such high-resolution data coupled with a carefully formulated energy planning model “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

One thing that was surprising about the findings, says Amin, who is a principal investigator in the Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place in a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says.

In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”

The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.

###

Written by David L. Chandler, MIT News Office

Impact studies should include high-sensitivity climate models



University of Reading





High-sensitivity climate models should not be excluded when predicting future regional climate impacts, because the level of warming measured globally is not the only indicator of regional changes, a new study suggests.

Some models which scientists use to predict future changes in Earth's climate show faster global warming than others, leading to temperature projections that are considered unlikely. Some experts suggest that these more sensitive (or ‘hotter’) models should be omitted when studying future climate impacts.  

New research published today (Thursday, 5 December) in Earth’s Future shows no clear correlation between the rate of global warming and some important regional climate drivers. Instead, how the behaviour of regional weather patterns controls impacts must also be considered.

Dr Ranjini Swaminathan, lead author at the University of Reading and National Centre for Earth Observation, said: "We should not exclude climate models from impact assessments based on their climate sensitivity as this could lead to ignoring future outcomes that are potentially serious and realistic.

“What happens globally doesn't always match what happens locally and we show that no universal correlation exists between climate sensitivity and regional climate drivers. For example, we see a general increase in the number of drought events in the future, but we don’t see a statistically significant correlation between the change in the number of drought events and climate sensitivity. This is because the magnitude of global warming is just one of many factors influencing drought and is often not the most important. 

“Our results contradict suggestions that models showing higher warming should be excluded from studies about future climate impacts.”  

Preparing communities 

The researchers studied how different models predict three major climate impacts: heavy rains that cause flooding, droughts that affect farming and water supplies, and conditions that increase the risk of wildfire. They looked at these across different parts of the world, including the Amazon rainforest, Australia, East Asia, and parts of Africa and India. 

They discovered that how much global warming a model predicts isn't the main factor determining local impacts; regional factors matter too. If models are selected based only on how much global warming they predict, important and physically plausible regional climate impacts could be missed. This could lead to an inaccurate portrayal of the risks that governments and communities need to consider as they adapt to climate change.
