Friday, February 26, 2021

New model describes wave behavior in straits, predicts killer waves

SKOLKOVO INSTITUTE OF SCIENCE AND TECHNOLOGY (SKOLTECH)

Research News

A Skoltech researcher has developed a theoretical model of wave formation in straits and channels that accounts for nonlinear effects in the presence of a coastline. This research can improve wave prediction, making maritime travel safer and protecting coastline infrastructure. The paper was published in the journal Ocean Dynamics.

Predicting surface weather at sea has always been a challenging task with very high stakes; for instance, over 4,000 people died due to rough seas during Operation Overlord at Normandy in June 1944, an Allied incursion in which poor forecasting significantly altered the course of the operation. Current wave forecasting models used, for example, by NOAA in the US, are imperfect, but they have many tunable parameters to ensure a reasonably good prediction.

However, as Andrei Pushkarev, senior research scientist at Skoltech and the Lebedev Physical Institute of the Russian Academy of Sciences, notes in his paper, coastlines complicate the situation: he writes that "wave forecasting in the English Channel nowadays is still almost as hard as it was in 1944." His research suggests that wave behavior in channels or straits differs significantly from that in open seas.

"Coastlines create inhomogeneity - a gradient of the wave energy distribution between its zero value at the boundary and non-zero value off-shore. This gradient launches wave advection, and its mutual interplay with nonlinear wave interaction creates peculiar effects of generating waves orthogonal to the wind," Pushkarev says.

The specific conditions of channels allow for a precise solution of the Hasselmann equation describing wave behavior, the equation that current operational models only approximate because solving it directly in the general case is still beyond the reach of modern computers. Pushkarev's theoretical modeling of wave formation in an English Channel-like strait showed that the development of turbulence did not match predictions from conventional models: the turbulence structure was significantly different due to nonlinear interactions and wave advection. Since the phenomenon has some similarities to laser radiation, the researchers call it the Nonlinear Ocean Wave Amplifier, or NOWA.
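
For reference, the Hasselmann (kinetic) equation that operational models approximate is usually written, in its standard textbook form (given here for orientation, not quoted from the paper), as a balance for the wave action spectral density N(k, x, t):

    \partial N / \partial t + \nabla_k \omega \cdot \nabla_x N = S_{nl} + S_{in} + S_{diss}

The left-hand side describes advection of wave action at the group velocity; the right-hand side collects the four-wave nonlinear interaction term, the wind input and the dissipation. It is the multidimensional collision-type integral in S_{nl} that makes a direct numerical solution so expensive and forces operational models to approximate it.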

"The strait shores play the role of the semi-reflecting mirrors for generated waves, which makes the situation similar to conventional lasers, with the nonlinear wave media playing the role of active resonator, in some sense similar to the conventional lasers. The power of the radiation excited orthogonally to the wind grows significantly with the growth of the reflection coefficient of the strait boundaries. In a sense, we are dealing with some sort of nonlinear laser," Pushkarev notes.

"This model, exploiting the exact version of the Hasselmann Equation, shows that existing operational wave weather forecasting models miss the described effect, considering it rather a numerical artifact," he adds.

The researcher says that this laser-like effect of wave generation orthogonally to the wind can be observed not only in straits, but also in the open seas with specific inhomogeneous winds, where spatial wind turning points create conditions similar to those observed in the presence of shores.

The new research holds promise in explaining the nature of seiches, peculiar standing waves in semi-enclosed bodies of water that present a big problem for ships in ports. But it also suggests that a correct description of turbulence in the presence of coastlines will allow for the prediction of rogue waves, seemingly unpredictable surface waves that are extremely dangerous even to large vessels.

###

Skoltech is a private international university located in Russia. Established in 2011 in collaboration with the Massachusetts Institute of Technology (MIT), Skoltech is cultivating a new generation of leaders in the fields of science, technology and business, is conducting research in breakthrough fields, and is promoting technological innovation with the goal of solving critical problems that face Russia and the world. Skoltech is focusing on six priority areas: data science and artificial intelligence, life sciences, advanced materials and modern design methods, energy efficiency, photonics and quantum technologies, and advanced research. Web: https://www.skoltech.ru/.


A cat of all trades

UNIVERSITY OF COPENHAGEN - FACULTY OF SCIENCE

Research News

IMAGE: Leopard. Credit: Photo by Brutus Östling.

Large carnivores are generally sensitive to ecosystem changes because their specialized diet and position at the top of the trophic pyramid are associated with small population sizes. This in turn leads to lower genetic diversity in top predators compared to animals lower down the food chain. Genetic diversity is very important for a species' ability to survive and adapt to future changes.

Extraordinary genetic diversity in an extraordinary cat

In this study, the researchers sequenced the complete genomes of 53 African leopards and compared them to those of Amur leopards and other big cat species. To their surprise, the researchers found that the genetic diversity of African leopards is extremely high: almost four times higher than the Amur leopard's, twice as high as the lion's and almost five times higher than the cheetah's.

Patrícia Pečnerová, postdoc at the Department of Biology and one of the shared first authors, explains:

- "The exceptional genetic diversity is likely a result of the leopard's ability to avoid population crashes and reductions. We found that during hundreds of thousands of years, African leopard populations remained large. We think that this reflects the versatility of the species, feeding on a wider variety of prey than any of the other large predators."

Roaming every habitat in Africa - but for how long?

The high genetic diversity of the African leopards is not the only surprise hidden in the leopard genomes. They also show fewer genetic barriers within the continent of Africa than other mammal species.

Kristian Hanghøj, one of the corresponding authors of the study explains:

- "We believe that during evolutionary history the leopard roamed the African continent more freely than almost any other mammal species, exchanging genetic material throughout. It has a unique ability to succeed in almost any habitat and climate and neither rain forests, nor deserts seem to have blocked the movements of leopards over the millennia."

The surprising findings demonstrate how the ecology of a species - such as how 'picky' it is about habitat and prey - can influence its genomic variation. The exceptionally high genetic diversity could give the African leopard an advantage in coping with environmental challenges, including climate change and habitat destruction.

However, habitat fragmentation is a severe threat to all large animals, even those as adept at coping with human presence as the leopard. The human-made changes to natural habitats are occurring at a pace that is likely too fast for almost any wild animal species to adapt to, and previous studies have shown that African leopards have lost 48-67% of their natural habitat over the last 300 years.

- "To place our results into a greater context, we emphasize that the African leopard is also facing severe threats to its survival, despite being sort of an evolutionary success", finishes Patrícia Pečnerová.

FOREVER CHEMICALS

Just published: Compilation of research on PFAS in the environment

SOCIETY OF ENVIRONMENTAL TOXICOLOGY AND CHEMISTRY

Research News

Per- and polyfluoroalkyl substances (PFAS) are a class of man-made chemical compounds and a current and emerging concern for environmental health. PFAS have unique characteristics - resistance to heat, water, oil and stains - that make them useful in a variety of industrial applications and popular in consumer goods. Many PFAS are stable and long-lasting in the environment, earning them the name "forever chemicals." Industrial use of some of these compounds has been halted; however, many derivatives are still in commerce and more are under development. PFAS are now found in many compartments of the environment.

To manage PFAS in the environment, much research has been directed at understanding their sources, fate and transport in the environment, and their potential effects on humans and wildlife. Recently, Environmental Toxicology and Chemistry (ET&C) published a special issue dedicated to PFAS, with 32 articles providing a valuable summary of risk assessment approaches for PFAS, which environmental managers and regulatory bodies need in order to set appropriate drinking water standards and health advisory guidelines.

The published research illustrates that PFAS are ubiquitous and raise more questions than answers about their potential toxicity to humans and wildlife. The articles in the issue report that PFAS were found near defense bases, urban environments, treatment plants and waste disposal sites but also in remote, less inhabited areas. PFAS were detected in breeding kittiwakes in Svalbard, Norway, and ducks in Australian estuaries. They were found in hens' eggs, soil, tadpoles, zebrafish, house crickets; the list goes on. The breadth of the published research illustrates that PFAS have dispersed in every medium in the environment (soil, water and wildlife).

In some of the studies, the presence of PFAS was related to a nearby source while in others it was not determined. For example, the authors of the article that investigated PFAS in Australian ducks did find a correlation between local sources of PFAS and bioaccumulation in ducks and noted that "Human health risk assessment showed that only ducks inhabiting wetlands near local sources of PFAS were likely to pose a risk to consumers," and continued, "Management of food consumption from these locations is an effective measure to limit exposure." In another study published in this same issue, long-chain PFAS were found in eight fish species across ten European glacial lakes in the Alps region, and while correlated to urban areas, could not be attributed to a specific, nearby industry source.

The issue illustrates that there are a tremendous number of PFAS substances, and it is a challenge for environmental managers and regulatory bodies to devise an approach to identify, understand and manage them all. The series provided a thorough review of the state of the science of PFAS risk assessment and also identified the data gaps and the work needed to fill them in order to devise an effective approach to managing PFAS.

###

Indian agriculture: Groundwater depletion could reduce winter cropped acreage significantly in the years ahead

UNIVERSITY OF MICHIGAN

Research News


India is the world's second-largest producer of wheat and rice and is home to more than 600 million farmers.

The country has achieved impressive food-production gains since the 1960s, due in part to an increased reliance on irrigation wells, which allowed Indian farmers to expand production into the mostly dry winter and summer seasons.

But those gains have come at a cost: The country that produces 10% of the world's crops is now the world's largest consumer of groundwater, and aquifers are rapidly becoming depleted across much of India.

Indian government officials have suggested that switching from groundwater-depleting wells to irrigation canals, which divert surface water from lakes and rivers, is one way to overcome projected shortfalls.

But in a study scheduled for publication Feb. 24 in the journal Science Advances, a University of Michigan researcher and her colleagues conclude that a switch to canal irrigation will not fully compensate for the expected loss of groundwater in Indian agriculture.

The authors estimate that if Indian farmers lose all access to groundwater in overexploited regions, and if that irrigation water is not replaced with water from other sources, then winter cropped acreage could be reduced by up to 20% nationwide. However, that scenario seems highly unlikely and was included in the study only as an upper-bound estimate.

It seems more likely that any future groundwater shortfalls would be at least partially offset by increases in canal irrigation. But even if all Indian regions currently using depleted groundwater switch to canal irrigation, winter cropped acreage could still decline by 7% nationwide and by 24% in the most severely affected locations, according to the researchers.

"Our results highlight the critical importance of groundwater for Indian agriculture and rural livelihoods, and we were able to show that simply providing canal irrigation as a substitute irrigation source will likely not be enough to maintain current production levels in the face of groundwater depletion," said study lead author Meha Jain of the University of Michigan.

The study analyzed high-resolution satellite imagery and village-level census data and focused on winter cropped acreage. While nearly all Indian farmers plant crops during the monsoon to take advantage of seasonal rains, winter agriculture is mainly reliant on groundwater irrigation and now accounts for 44% of the country's annual cropped acreage for food grains.

"These findings suggest that other adaptation strategies, in addition to canal expansion, are needed to cope with ongoing groundwater losses," said Jain, an assistant professor at the U-M School for Environment and Sustainability.

The possibilities include switching from winter rice to less water-intensive cereals, increased adoption of sprinklers and drip irrigation to conserve water in the fields, and policies to increase the efficiency of irrigation canals.

While groundwater depletion is becoming a global threat to food security, and the extent of current and projected groundwater depletion is well documented, the potential impacts on food production remain poorly quantified.

The study by Jain and colleagues is the first to use high-resolution empirical data, including census data about the irrigation methods used in more than 500,000 Indian villages, to estimate the crop production losses that may occur when overexploited groundwater is lost.

The proliferation of deep (>100 feet) irrigation wells called tube wells since the 1960s has enabled Indian farmers to increase the number of seasons when crops are planted in a given year. This increase in "cropping intensity" is credited for much of the country's food-production gains.

The researchers used satellite data to measure Indian winter cropped area, a key determinant of cropping intensity. They then linked the satellite data to census information about the three main types of irrigation infrastructure in India: shallow "dug wells," deeper tube wells and canals that divert surface water.

Linking the two datasets allowed them to determine the relative efficacy of each irrigation method. That, in turn, enabled them to estimate potential future acreage losses and the ability of canal expansion to fill the gap.
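
To make that linkage concrete, the sketch below shows, in Python, the general kind of calculation involved: relate winter cropped area to the mix of irrigation sources, then ask what the fitted relationship implies if depleted groundwater irrigation is switched to canals. It is a hedged illustration only; the data are synthetic, the coefficients are invented, and it is not the authors' statistical model.

import numpy as np

# Illustrative sketch only: synthetic data and invented coefficients,
# not the authors' model.
rng = np.random.default_rng(0)
n_villages = 1000  # hypothetical sample size

# Hypothetical village-level shares of area irrigated by each source.
share_tube_well = rng.uniform(0, 0.8, n_villages)
share_canal = rng.uniform(0, 1 - share_tube_well)
share_dug_well = rng.uniform(0, 1 - share_tube_well - share_canal)

# Hypothetical "observed" winter cropped fraction, with tube wells assumed
# somewhat more effective than canals (all coefficients are made up).
winter_cropped = (0.60 * share_tube_well + 0.45 * share_canal
                  + 0.30 * share_dug_well
                  + rng.normal(0, 0.05, n_villages))

# Estimate the relative efficacy of each irrigation source.
X = np.column_stack([share_tube_well, share_canal, share_dug_well])
coef, *_ = np.linalg.lstsq(X, winter_cropped, rcond=None)

# Counterfactual: depleted tube-well irrigation is replaced by canals.
X_switch = X.copy()
X_switch[:, 1] += X_switch[:, 0]  # canal share absorbs the tube-well share
X_switch[:, 0] = 0
loss = 1 - (X_switch @ coef).sum() / (X @ coef).sum()
print(f"Projected decline in winter cropped area: {loss:.1%}")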

The study's worst-case scenario found that winter cropped area could decrease by up to 20% nationwide and by 68% in the most severely affected regions, if farmers lose all access to groundwater and if that irrigation water is not replaced from another source. The expected losses would largely occur in northwest and central India, according to the study.

The researchers also found that increased distance from existing irrigation canals is strongly associated with decreased acreage planted with winter crops. In the future, a greater reliance on canals could increase inequities related to irrigation access, according to the authors.

"This suggests that while canals may be a viable form of irrigation for those who live near canals, they may lead to more unequal access to irrigation across villages compared to wells, with negative impacts for those who live farther from canals," the authors wrote.

In addition, the lakes and rivers that feed irrigation canals rise and fall in response to rainfall variability, unlike deep groundwater wells. So, a greater reliance on canal irrigation in the future would result in increased sensitivity to year-to-year precipitation fluctuations, as well as any long-term trends due to human-caused climate change.

"Understanding the complex relationship between food security and water availability is crucial as we prepare for future rainfall variability due to global climate change," said co-author Gillian Galford of the University of Vermont.

###

The authors of the Science Advances study, in addition to Jain and Galford, are Ram Fishman of Tel Aviv University; Pinki Mondal of the University of Delaware; Nishan Bhattarai of the U-M School for Environment and Sustainability; Shahid Naeem, Upmanu Lall and Ruth DeFries of Columbia University; and Balwinder-Singh of the International Maize and Wheat Improvement Center in New Delhi.

The work was funded by a NASA New Investigator Award to Jain and two NASA Land Cover and Land Use Change grants, one awarded to R.S. DeFries and one to Jain.

Study: Groundwater depletion will reduce cropping intensity in India


A tangled food web

Trophic levels are an 'insufficient' measure of sustainability for today's aquaculture policy

UNIVERSITY OF CALIFORNIA - SANTA BARBARA

Research News

Born in food web ecology, the concept of trophic levels -- the hierarchy of who eats who in the natural world -- is an elegant way to understand how biomass and energy move through a natural system. It's only natural that the idea found its way into the realm of aquaculture, where marine and freshwater farmers try to maximize their product with efficient inputs.

"It's often used as a measure of how sustainable it is to harvest or consume that species," said Rich Cottrell(link is external), a postdoctoral researcher at UC Santa Barbara's National Center for Ecological Analysis & Synthesis (NCEAS). As plants (level 1) become food to plant eaters (level 2), who in turn are consumed by carnivores (level 3) and so on, the amount of energy required to support the same weight of organisms increases, he explained. As a result, species at levels 4 or 5, such as tuna, require far more energy per pound than would species in the lower trophic levels. It's the same reason vegetarian diets are often considered to be more sustainable than meat-eating ones.

"In the same manner, trophic level measures are now being recommended in policy settings for use as an indicator of the sustainability of fish farming, or aquaculture," Cottrell said. The lower the trophic level, the more sustainable the species is considered to be, and so policy often calls for more farming of low-trophic species.

However, argue Cottrell and fellow aquaculture experts in a paper published in the journal Reviews in Aquaculture, at this time of increasing complexity and sophistication in the aquaculture sector, the same simplicity that makes trophic levels attractive as a measure of sustainability also makes them insufficient.

The reasons for that largely have to do with how today's farmed fish are fed.

"Most of the fish and invertebrates that we farm for food are produced using human-made feeds," Cottrell explained. "But these feeds are constantly changing, and so the meaning of farmed trophic levels is changing through time." For instance, he pointed out, salmon are considered to be at a higher trophic level because their naturally carnivorous diets would require large amounts of fishmeal and oil, but advances in feed and manufacturing have reduced the proportion of fish-based ingredients to 10-15% in modern salmon diets. Meanwhile, herbivorous species such as carp and tilapia have been found to respond favorably to small amounts of fishmeal in their feed.

"In reality, they're now farmed at similar trophic levels," Cottrell said. "The line between 'low' and 'high' trophic levels will continue to blur with innovation."

The trophic level concept misses still another important aspect of aquaculture sustainability in the realm of feed and resource efficiency, or how efficiently the farmed animals convert what they are fed into edible food.

"This is not well explained by trophic level," Cottrell said, adding that despite their high trophic placement, many carnivorous farmed fish could be more feed-efficient than their naturally carnivorous counterparts. And because aquaculture is increasingly turning to agriculture to provide replacements for fishmeal and oil, the promise of sustainability might be an empty one.

"Replacing fish-based ingredients with crops has led to a dramatic reduction in the trophic level of fed aquaculture species, but we know very little about how sustainable it is to increase pressure on global agricultural systems," he said.

As the global aquaculture sector strives to meet the growing demand for farmed seafood, the researchers say it's time to rethink the use of trophic levels as a rule for and measure of sustainability. Stipulating low trophic level aquaculture recommendations may not be successful in promoting greater sustainability, Cottrell said. Boosting the supply of mussels, for instance, may not fulfill increasing demand for shrimp or salmon.

"It behooves us to find a way to ensure that for high-demand products, we produce these in the most environmentally efficient and socially responsible way possible," he said. "Trophic levels will not get us there."

Fortunately, there are efforts at more nuanced sustainability assessments, such as voluntary certifications through the Aquaculture Stewardship Council or Best Aquaculture Practices, which examine the impacts of aquaculture at the farm level and through supply chains.

"Greater support for these programs and incentives for producers from various regions and production systems to join them would be a far more robust way to strengthen the sustainability of the aquaculture sector going forward," Cottrell said.

###

Research for this paper was also conducted by Halley E. Froehlich, Caitlin D. Kuempel, Benjamin S. Halpern and David R. Williams at UCSB; Marc Metian at the International Atomic Energy Agency, Monaco; Julia Blanchard and Kirsty L. Nash at the University of Tasmania; Nis Sand Jacobsen at the Technical University of Denmark; Peter B. McIntyre at Cornell University; Lex Bouwman at Utrecht University; Jessica Gephart at American University; Daniel D. Moran at the Norwegian University of Science and Technology and Max Troell at the Royal Swedish Academy of Sciences.


What might sheep and driverless cars have in common? Following the herd

Researchers show how social component of moral decision-making can influence programming of autonomous vehicles and other technologies

UNIVERSITY OF SOUTHERN CALIFORNIA

Research News

Psychologists have long found that people behave differently on their own than they do when they learn of their peers' actions. A new study by computer scientists found that when individuals in an experiment about autonomous vehicles were informed that their peers were more likely to sacrifice their own safety - programming their vehicle to hit a wall rather than hit pedestrians at risk - the percentage of individuals willing to sacrifice their own safety increased by approximately two-thirds.

As computer scientists train machines to act as people's agents in all sorts of situations, the study's authors indicate that the social component of decision-making is often overlooked. This could be of great consequence, note the paper's authors, who show that the trolley problem - long the go-to scenario for moral psychologists - is problematic. The problem, the authors indicate, fails to capture the complexity of how humans actually make decisions.

Jonathan Gratch, one of the paper's authors, the principal investigator for this project and a computer scientist at the USC Institute for Creative Technologies, says existing models assume that in high-stakes life-and-death decisions, people think differently than they actually do. He indicates that there are no moral absolutes in human decision-making but rather that "it is more nuanced."

The researchers conducted four separate simulation experiments to understand how people might process and act on the moral dilemmas they would face as the operator of a driverless car. The first three experiments focused on how people behave when faced with risk to themselves and others in a negative scenario in which the vehicle would have to be programmed either to hit a wall or to hit five pedestrians. The authors show that participants used the severity of injury to themselves and the risk to others as guideposts for decision-making: the higher the risk to pedestrians, the more likely people were to sacrifice their own health. In addition, the risk to pedestrians did not have to be as high as the risk to themselves for operators of the autonomous vehicle to sacrifice their own well-being.

In the fourth experiment, the researchers added a social dimension, telling participants what peers had opted to do in the same situation. In one simulation, knowing that peers had chosen to risk their own health changed participants' responses: the share willing to risk their health rose from 30 percent to 50 percent. But this can go both ways, cautions Gratch. "Technically there are two forces at work. When people realize their peers don't care, this pulls people down to selfishness. When they realize they care, this pulls them up."

The research has implications for autonomous vehicles including drones and boats, as well as robots that are programmed by humans. The authors suggest that it is important for manufacturers to understand how humans actually make decisions in life-or-death situations. They also argue that transparency about how machines are programmed, along with the ability for human drivers to change settings before such situations arise, is important for the public, and that legislators should be aware of how vehicles might be programmed. Lastly, given the human susceptibility to conform to social norms, the authors believe that public health campaigns describing how peers have programmed their autonomous vehicles for self-sacrifice might influence future owners to change their own vehicle settings to better protect others from injury and to choose self-sacrifice.

###

The authors of this study are Celso M. de Melo of the US Army Research Laboratory, Stacy Marsella of Northeastern University, and Jonathan Gratch of the USC Institute for Creative Technologies.

Study uncovers flaws in process for maintaining state voter rolls

YALE UNIVERSITY

Research News

States regularly use administrative records, such as motor-vehicle data, to determine whether people have moved and to prune their voter rolls accordingly. A Yale-led study of this process in Wisconsin shows that a significant percentage of registered voters are incorrectly identified as having changed addresses, potentially endangering their right to vote.

The study, published in the journal Science Advances, found that at least 4% of people listed as suspected "movers" cast ballots in 2018 elections using addresses that were wrongly flagged as out of date. Minority voters were twice as likely as white voters to cast their ballot with their original address of registration after the state marked them as having moved, the study showed.

The findings suggest that states should more clearly communicate the processes they use to update voter-registration files and that a more robust effort is required to confirm whether individuals have moved before they are removed from the voter rolls, said Yale political scientist Gregory A. Huber, the study's lead author.

"The process of maintaining states' voter-registration files cries out for greater transparency," said Huber, the Forst Family Professor of Political Science in the Faculty of Arts & Sciences. "Our work shows that significant numbers of people are at risk of being disenfranchised, particularly those from minority groups.

"Unfortunately, we don't know enough about the process used to prune voter rolls nationwide to understand why mistakes occur and how to prevent them."

Regularly updating voter rolls prevents registration files from becoming bloated with individuals who have died, moved away, or are otherwise no longer eligible to vote. When these rolls swell with ineligible voters, it raises concerns about potential fraud (although there is little evidence it causes unlawful voting, Huber says) and creates headaches for political campaigns, which rely on accurate registration records to reach potential voters.

Americans are not obligated to inform local election officials when they move to a new address, but federal law mandates that states identify changes in residence among registered voters. To better accomplish this task, 30 states, including Wisconsin, and the District of Columbia have formed the Electronic Registration Information Center (ERIC), a non-profit organization that assists them in improving the accuracy of their voter rolls.

ERIC uses various administrative records, including motor vehicle data, change of address information from the U.S. Postal Service, and the Social Security Administration's master death file, to flag registrations that are potentially out of date. It provides states a "movers list" of people who likely have changed residences. The states contact listed individuals, often by sending them postcards they can use to confirm their address. If people do not return the postcards, their registration can be inactivated, starting the process for removal.
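
The list-matching logic being described can be pictured with the short, hedged sketch below; it is not ERIC's actual system, and the record fields and function names are hypothetical.

from dataclasses import dataclass

@dataclass
class Registration:
    voter_id: str
    name: str
    address: str

@dataclass
class AddressRecord:
    # e.g., a motor-vehicle record or a USPS change-of-address entry
    name: str
    address: str

def flag_suspected_movers(rolls, admin_records):
    """Flag registrations whose name appears in administrative records
    at a different address than the one on the voter roll."""
    latest_address = {rec.name: rec.address for rec in admin_records}
    return [r for r in rolls
            if r.name in latest_address and latest_address[r.name] != r.address]

def inactivate_nonresponders(suspected, confirmed_ids):
    """Registrations stay active if the voter returned the postcard
    confirming their address; the rest can be inactivated."""
    return [r for r in suspected if r.voter_id not in confirmed_ids]

As the study shows, matching on administrative records like this inevitably produces some false positives, which is why the postcard confirmation step, and what happens to people who never return the postcard, matters so much.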

Federal privacy protections and ERIC's agreements with member states prohibit the organization from disclosing who is marked as having moved and on what basis they were flagged as such, making it difficult to examine its process. However, after submitting a Wisconsin Freedom of Information Act request, Huber and his co-authors obtained special "movers poll books" from the state which list all people who were marked as suspected movers and who did not respond to the postcard notification. Individuals in the books who showed up to vote in 2018 signed their names in these books, providing evidence that they voted at addresses that had been flagged as invalid.

The researchers collected movers poll books from a representative sample of election wards and matched their contents against voting records for 2018 local, state, and federal elections. They found that at least 9,000 people -- about 4% of those listed in the poll books -- voted in 2018 using the address of registration that ERIC had marked as invalid. Minority voters were twice as likely to be incorrectly identified as having moved.

The study likely undercounts the number of registered voters incorrectly listed as having moved, the researchers said, explaining that a significant number of people who did not respond to the postcard might have nonetheless renewed their voting registration before the poll books were published. In addition, the study examined low-turnout elections, making it likely that many people wrongly listed in the poll books weren't covered in the analysis because they didn't vote, Huber said.

The researchers are not suggesting that ERIC intentionally targeted minorities.

"There's no malice here," Huber said. "ERIC wants to help states, but relying on administrative records inevitably produces mistakes for any number of reasons. This makes the process used to validate having moved, such as mailed postcards, even more important. Without more information, we can't be certain why the process disparately affects minorities."

A potential reason for the disparity is that minorities are more likely than whites to live in apartment buildings and large households, which may increase the risk of errors in administrative records, the researchers suggest. In addition, residents of apartment buildings may be less likely to confirm their address using the postcard, since mail service can be spottier in multi-unit buildings than in single-family homes.

Huber credits Wisconsin for taking steps to protect people's voting rights.

"The poll books are a great way to identify mistakes and prevent people from being disenfranchised," he said. "The state also has same day voter registration, which is another safety valve that doesn't exist in many states. We suggest that states expend more effort on contacting people at risk of losing their registration."

###

The study's co-authors are Marc Meredith of the University of Pennsylvania, Yale Law School graduate Michael Morse, and Katie Steele of the University of Pennsylvania.

Imaging space debris in high resolution

SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS

Research News

IMAGE: From left to right: space debris modeled as a cluster of six reflective objects, an image developed of the debris without accounting for the objects' rotation, and an image developed... Credit: Figure courtesy of Matan Leibovich, George Papanicolaou, and Chrysoula Tsogka.

Litter is not only a problem on Earth. According to NASA, there are currently millions of pieces of space junk in the range of altitudes from 200 to 2,000 kilometers above the Earth's surface, which is known as low Earth orbit (LEO). Most of the junk consists of objects created by humans, like pieces of old spacecraft or defunct satellites. This space debris can reach speeds of up to 18,000 miles per hour, posing a major danger to the 2,612 satellites that currently operate in LEO. Without effective tools for tracking space debris, parts of LEO may even become too hazardous for satellites.

In a paper publishing today in the SIAM Journal on Imaging Sciences, Matan Leibovich (New York University), George Papanicolaou (Stanford University), and Chrysoula Tsogka (University of California, Merced) introduce a new method for taking high-resolution images of fast-moving and rotating objects in space, such as satellites or debris in LEO. They created an imaging process that first utilizes a novel algorithm to estimate the speed and angle at which an object in space is rotating, then applies those estimates to develop a high-resolution picture of the target.

Leibovich, Papanicolaou, and Tsogka used a theoretical model of a space imaging system to construct and test their imaging process. The model depicts a piece of fast-moving debris as a cluster of very small, highly reflective objects that represent the strongly reflective edges of an item in orbit, such as the solar panels on a satellite. The cluster of reflectors all move together with the same speed and direction and rotate about a common center. In the model, multiple sources of radiation on the Earth's surface--such as the ground control stations of global navigation satellite systems--emit pulses that are reflected by target pieces of space debris. A distributed set of receivers then detects and records the signals that bounce off the targets.

The model focuses on sources that produce radiation in the X-band, or from frequencies of 8 to 12 gigahertz. "It is well known that resolution can be improved by using higher frequencies, such as the X-band," Tsogka said. "Higher frequencies, however, also result in distortions to the image due to ambient fluctuations from atmospheric effects." Signals are distorted by turbulent air as they travel from the target to receivers, which can make the imaging of objects in LEO quite challenging. The first step of the authors' imaging process was thus to correlate the data taken at different receivers, which can help reduce the effects of these distortions.
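
The correlation step can be sketched in a few lines of Python; the array sizes and the synthetic "recordings" below are invented for illustration, and this is not the authors' code.

import numpy as np

rng = np.random.default_rng(1)
n_receivers, n_samples = 8, 4096
dt = 1e-9  # sampling interval in seconds (hypothetical)
t = np.arange(n_samples) * dt

def pulse(delay):
    """A narrow Gaussian echo arriving at the given delay."""
    return np.exp(-((t - delay) / 5e-9) ** 2)

# Hypothetical traces: a common reflected pulse arriving with a slightly
# different delay at each receiver, plus independent noise.
delays = 1e-6 + rng.uniform(0, 50e-9, n_receivers)
traces = np.array([pulse(d) for d in delays])
traces += 0.1 * rng.standard_normal((n_receivers, n_samples))

# Cross-correlate every pair of receivers (efficiently, via the FFT).
spectra = np.fft.rfft(traces, axis=1)
cross_corr = np.fft.irfft(spectra[:, None, :] * np.conj(spectra[None, :, :]),
                          n=n_samples, axis=2)
# cross_corr[i, j] holds the correlation of receiver i with receiver j; the
# peak lag encodes a differential travel time, which is less sensitive to
# atmospheric distortion than the raw traces themselves.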

The diameter of the area encompassed by the receivers is called the physical aperture of the imaging system -- in the model, this is about 200 kilometers. Under normal imaging conditions, the physical aperture's size determines the resolution of the resulting image; a larger aperture begets a sharper picture. However, the quick movement of the imaging target relative to the receivers can create an inverse synthetic aperture, in which the signals that were detected at multiple receivers as the target moved throughout their field of view are synthesized coherently. This configuration can effectively improve the resolution, as if the imaging system had a wider aperture than the physical one.
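
As a rough order-of-magnitude illustration using the standard diffraction-limited relation (the worked numbers below are not from the paper beyond the stated band and aperture), an aperture of size a imaging at range L with wavelength \lambda resolves cross-range features of about

    \delta x \approx \lambda L / a

so for \lambda \approx 3 cm (X-band) and a target at L \approx 500 km seen by the 200-kilometer physical aperture, \delta x \approx 0.03 \times 500/200 \approx 0.08 m. An inverse synthetic aperture that coherently tracks the target over an angular span \Delta\theta behaves like a larger aperture, with cross-range resolution of roughly \lambda / (2 \Delta\theta).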

Objects in LEO can spin on timescales that range from a full rotation every few seconds to every few hundred seconds, which complicates the imaging process. It is thus important to know--or at least be able to estimate--some details about the rotation before developing the image. The authors therefore needed to estimate the parameters related to the object's rotation before synthesizing the data from different receivers. Though simply checking all of the possible parameters to see which ones yield the sharpest image is technically feasible, doing so would require a lot of computational power. Instead of employing this brute force approach, the authors developed a new algorithm that can analyze the imaging data to estimate the object's rotation speed and the direction of its axis.
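
The brute-force option mentioned above - trying every candidate rotation and keeping whichever yields the sharpest image - can be written as a short grid search. The authors' estimation algorithm is designed precisely to avoid this cost, and the imaging routine and sharpness metric below are placeholders, not their code.

import numpy as np
from itertools import product

def sharpness(image: np.ndarray) -> float:
    """A simple sharpness proxy: energy concentration (higher = sharper)."""
    p = np.abs(image) ** 2
    p /= p.sum()
    return float((p ** 2).sum())

def estimate_rotation_brute_force(data, form_image, rotation_rates, axis_angles):
    """Try every candidate rotation rate and axis angle, form an image for
    each, and keep the parameters that give the sharpest result.
    `form_image` is a placeholder for whatever imaging routine is used."""
    best = None
    for omega, angle in product(rotation_rates, axis_angles):
        score = sharpness(form_image(data, rotation_rate=omega, axis_angle=angle))
        if best is None or score > best[0]:
            best = (score, omega, angle)
    return best[1], best[2]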

After accounting for the rotation, the next step in the authors' imaging process was to analyze the data to develop a picture of the space debris that would hopefully be as accurate and well-resolved as possible. One method that researchers often employ for this type of imaging of fast-moving objects is the single-point migration of cross correlations. Though atmospheric fluctuations do not usually significantly impair this technique, it does not have a very high resolution. A different, commonly-used imaging approach called Kirchhoff migration can achieve a high resolution, as it benefits from the inverse synthetic aperture configuration; however, the trade-off is that it is degraded by atmospheric fluctuations. With the goal of creating an imaging scheme that is not too heavily affected by atmospheric fluctuations but still maintains a high resolution, the authors proposed a third approach: an algorithm whose result they call a rank-1 image. "The introduction of the rank-1 image and its resolution analysis for fast-moving and rotating objects is the most novel part of this study," Leibovich said.
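
The paper's rank-1 construction is not spelled out in this summary, but the terminology itself can be illustrated generically: the best rank-1 approximation of a correlation matrix comes from its leading singular triplet. The snippet below shows only that generic operation, not the authors' algorithm.

import numpy as np

def rank1_approximation(correlation_matrix: np.ndarray) -> np.ndarray:
    """Best rank-1 approximation (in the least-squares sense) of a matrix,
    built from its leading singular value and vectors."""
    U, s, Vh = np.linalg.svd(correlation_matrix)
    return s[0] * np.outer(U[:, 0], Vh[0, :])

The full construction and its resolution analysis are given in the source article referenced at the end of this item.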

To compare the performance of the three imaging schemes, the authors gave simulated data of a rotating object in LEO to each one and compared the images that they produced. Excitingly, the rank-1 image was much more accurate and well-resolved than the result of single-point migration. It also had similar qualities to the output of the Kirchhoff migration technique. But this result was not entirely surprising, given the problem's configuration. "It is important to note that the rank-1 image benefits from the rotation of the object," Papanicolaou said. Though a rotating object generates more complex data, one can actually incorporate this additional information into the image processing technique to improve its resolution. Rotation at certain angles can also increase the size of the synthetic aperture, which significantly improves the resolution for the Kirchhoff migration and rank-1 images.

Further simulations revealed that the rank-1 image is not easily muddled by errors in the new algorithm for the estimation of rotation parameters. It is also more robust to atmospheric effects than the Kirchhoff migration image. If receivers capture data for a full rotation of the object, the rank-1 image can even achieve optimal imaging resolution. Due to its good performance, this new imaging method could improve the accuracy of imaging LEO satellites and space debris. "Overall, this study shed light on a new method for imaging fast-moving and rotating objects in space," Tsogka said. "This is of great importance for ensuring the safety of the LEO band, which is the backbone of global remote sensing."

###

Source article: Leibovich, M., Papanicolaou, G., & Tsogka, C. (2021). Correlation Based Imaging for Rotating Satellites. SIAM J. Imaging Sci., 14(1), 271-303.

Social dilemma follows 2018 eruption of Kilauea volcano

UNIVERSITY OF HAWAII AT MANOA

Research News

IMAGE: Fissure 8 erupts in lava hazard zone 1, back-lighting a front gate, a mailbox and utility lines. May 5, 2018. Credit: Bruce Houghton.

The unprecedented cost of the 2018 Kilauea eruption in Hawai'i reflects the intersection of distinct physical and social phenomena: infrequent, highly destructive eruptions, and atypically high population growth, according to a new study published in Nature Communications and led by University of Hawai'i at Mānoa researchers.

It has long been recognized that areas in Puna, Hawai'i, are at high risk from lava flows. This ensured that land values were lower in Puna, which lies within the three highest-risk lava hazard zones (1, 2 and 3), and those lower values actively promoted rapid population growth.

"Low prices on beautiful land and a scarcity of recent eruptions led to unavoidable consequences--more people and more development," said Bruce Houghton, the lead author of the study and Gordan Macdonald Professor of Volcanology in the UH Mānoa School of Ocean and Earth Science and Technology (SOEST). "Ultimately this drastically increased the value of what was at risk in 2018, relative to earlier eruptions of Ki?lauea."

Kilauea is one of the most active volcanoes on Earth and has one of the earliest, most comprehensive volcanic monitoring systems. Its recent history has been dominated by activity at the summit caldera and at one of its two lines of vents, the Eastern Rift Zone. Between 1967 and 2018, volcanic activity was dominated by eruptions from the upper part of the Eastern Rift Zone. In contrast, no damaging eruptions occurred after 1961 in the more heavily populated Puna district from the vents within the lower portion of the Eastern Rift Zone.

The UH team assessed trends in population growth in Pāhoa-Kalapana, Hilo and Puna using census data, and compared median cost of land and household income in these areas.

Valuable lessons regarding the complex interplay of science, policy, and public behavior emerged from the 2018 disaster.

"Steep population growth occurred during the absence of any locally sourced eruptions between 1961 and 2018, and set the scene for the unprecedented levels of infra-structural damage during the 2018 Lower Eastern Rift Zone eruption," said Wendy Cockshell, co-author on the paper and technical assistant at the National Disaster Preparedness Training Center (NDPTC) at UH Mānoa.

If population growth resumes in lava hazard zones 1 and 2, there will be increased risk in the most dangerous areas on this exceptionally active volcano, translating into a high cost of damage in future eruptions.

"Our funded research supports the principle of the initiatives by local and federal government to provide buy-out funding to land owners affected by the 2018 eruption to able them to relocate outside of these hazardous areas," said Houghton.


IMAGE: Within lava hazard zone 1 a new cone is formed and a river of lava flows through properties down into lava hazard zone 2. Credit: Bruce Houghton.

IMAGE: Houses may survive but can be surrounded by lava without access to electrical or water supplies. Roads now lead to nowhere. (Note Fissure 8 fountain in the background.) Credit: Bruce Houghton.




New sustainable building simulation method points to the future of design

CORNELL UNIVERSITY

Research News

ITHACA, N.Y. - A team from Cornell University's Environmental Systems Lab, led by recent graduate Allison Bernett, has put forth a new framework for injecting as much information as possible into the pre-design and early design phases of a project, potentially saving architects and design teams time and money down the road.

"(Our framework) allows designers to understand the full environmental impact of their building," said Bernett, corresponding author of "Sustainability Evaluation for Early Design (SEED) Framework for Energy Use, Embodied Carbon, Cost, and Daylighting Assessment" which published Jan. 10 in the Journal of Building Performance Simulation.

Principal investigators are Timur Dogan, assistant professor of architecture in the College of Architecture, Art and Planning; and Katharina Kral, a licensed architect and lecturer in the Department of Architecture.

"How we look at this is, there's the cost of change in the design process, and then the opportunity of impact," Dogan said. "In the very beginning, changing something doesn't cost anything, but if you're a month into the project, changing something is really expensive, because now you have to rehire consultants and redesign things.

"And then the other thing is the potential of impact," he said. "In the very beginning, just with a simple nudge in the right direction, you can change a project from being an energy hog to something that's very sustainable, and integrates well into the environment."

In 2018, according to the International Energy Agency, the construction sector accounted for 39% of energy and process-related greenhouse gas emissions. That included 11% originating from the manufacturing of building materials and products.

The Sustainability Evaluation for Early Design (SEED) Framework is a decision-making tool that can dynamically and concurrently simulate several variables: building energy performance; embodied carbon (carbon emissions generated by construction and materials); construction cost; and daylighting (the use of natural light to illuminate indoor spaces).

The framework will allow architects and design teams to rapidly trial and rank tens of thousands of design iterations, using as few as four inputs.
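
The enumerate-and-rank workflow can be pictured with the hedged sketch below; the four inputs, the evaluation functions and all numbers are invented placeholders, not the SEED implementation or its ClimateStudio/Karamba3D interfaces.

from itertools import product

# Hypothetical early-design inputs (names and values invented for illustration).
window_to_wall_ratios = [0.2, 0.4, 0.6]
orientations_deg = [0, 90, 180, 270]
structural_systems = ["timber", "steel", "concrete"]
envelope_r_values = [15, 25, 35]

def evaluate(option):
    """Placeholder for the coupled simulations (energy use, embodied carbon,
    cost, daylighting) that the framework would run per design option."""
    wwr, orientation, structure, r_value = option
    energy = 100 - 0.5 * r_value + 20 * wwr                 # kWh/m2/yr, made up
    carbon = {"timber": 200, "steel": 350, "concrete": 400}[structure]
    cost = 1500 + 10 * r_value + 300 * wwr                  # $/m2, made up
    daylight = 40 + 80 * wwr - abs(orientation - 180) / 9   # % of area, made up
    return energy, carbon, cost, daylight

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly better
    on at least one (energy, carbon and cost minimized; daylight maximized)."""
    no_worse = (a[0] <= b[0], a[1] <= b[1], a[2] <= b[2], a[3] >= b[3])
    better = (a[0] < b[0], a[1] < b[1], a[2] < b[2], a[3] > b[3])
    return all(no_worse) and any(better)

options = list(product(window_to_wall_ratios, orientations_deg,
                       structural_systems, envelope_r_values))
scored = [(opt, evaluate(opt)) for opt in options]
pareto = [(opt, s) for opt, s in scored
          if not any(dominates(other, s) for _, other in scored)]
print(f"{len(options)} options evaluated; {len(pareto)} are non-dominated")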

Using publicly available data and a suite of available design simulation programs - including Rhino/Grasshopper (a CAD program); ClimateStudio, developed by Dogan, for daylight simulation and building energy modeling; and engineering software Karamba3D - Bernett and the team tested SEED in a case study of a hypothetical mid-sized office building modeled in Boston, Washington, D.C., and Phoenix.

The SEED Framework generated thousands of design options based on variables specific to the three cities in the case study, offering designers the flexibility of many options early in the process, before changing course would get too expensive.

"The idea is, you run this analysis," Dogan said, "and you get a few options that already make a lot of sense, and some options that you can completely forget about. ... [It] always comes down to this lack of information in the decision-making process.

"In that sense, the construction industry is super inefficient," he said. "There's too many players who don't know the full picture and then make decisions that are not always rational. This framework that Allison worked on is geared to help bring the information to the table. Every stakeholder in the design process can then form their own opinion about design goal priorities."

SEED's greatest asset, Bernett said, is amassing a tranche of data on multiple factors in one place, and involving architects early in the design and pre-design phases.

"It takes a lot of time to gather all that data, and we have that prepackaged. So there's definitely a hunger for that," said Bernett, who presented the SEED Framework in September 2019 at the International Building Performance Simulation Conference, in Rome.

"Right now, we rely heavily on energy modelers and consultants to do this work," she said. "And if we can involve architects more readily and more early on, I think that we're going to see a lot of improvement and cost-effectiveness to these early design decisions."

In addition to the publicly available design simulations, the team used AutoFrame, a new procedure developed by Kral for automatically computing structural systems. AutoFrame helps improve the precision of embodied carbon assessments and daylight simulations.

The Cornell Atkinson Center for Sustainability's Small Grants Program provided pivotal support for this work, Bernett said.

"That funding really gave it the push it needed," she said. "It allowed me to present a first iteration [of SEED] at the conference in Rome, and then to really flesh out the research more after that."

###