Saturday, July 01, 2023

Early birds of the future: earlier, but still too late?


Peer-Reviewed Publication

NETHERLANDS INSTITUTE OF ECOLOGY (NIOO-KNAW)

Birds in the box
IMAGE: Four key phases inside the nest box: laying eggs (top left), two days after the eggs have hatched (top right), a week after hatching (bottom left), and two weeks later, when the young are about to leave the nest.
CREDIT: Melanie Lindner/NIOO-KNAW

Birds need to adapt to climate change, but evolution is a slow process. Model species such as the great tit are an indispensable yardstick for our ability to predict the impact of climate change on nature. Using innovative methods, a team from the Netherlands Institute of Ecology (NIOO-KNAW) took a sneak peek into the birds’ future.

Can species keep up with the climate change yet to come? How fast can they adapt? We need to understand this before we can properly predict the effects of climate change on nature at large. “It’s important to know", stresses research leader Marcel Visser, "because climate change and evolution need to keep a relatively even pace for species to keep up.”

Back to the future

“That's why we set out to study great tits from the future", explains Visser. "In the coming decades, natural selection will produce birds with a particular genetic make-up. To predict the extent to which these birds can respond to natural selection, we sped up evolution through artificial selection of genetically early and late birds in aviaries. We then took the eggs to our long-term population in De Hoge Veluwe national park, to see how their offspring did compared to wild great tits.”

“In the forest, the earliest birds did in fact lay their eggs earlier than great tits selected for laying eggs late”, adds researcher Melanie Lindner. “So we were able to successfully select them for laying eggs early or late in spring. But the earliest birds didn’t lay their eggs significantly earlier than the wild great tits breeding in the forest, while the ones we selected for laying their eggs late did have a significantly later lay date."

In the end, these early birds didn’t breed any more successfully than the ones in the wild. "Genetic adaptation towards early lay dates turns out to be an extremely slow process." The results of the study have been published in the scientific journal Science Advances, with Lindner as first author.

Ecological relationship problems

Climate change is giving insectivorous songbirds such as the great tit a bad case of ecological relationship problems. Their timing no longer matches that of the insects their young feed on: the ‘caterpillar peak’ in the forest and the moment young great tits hatch no longer coincide. Consequently, they miss out on the biggest, juiciest caterpillars – with the most nutritious proteins. Changing their timing could be a solution, but until now it remained unclear how much earlier the birds could actually lay their eggs.

So what will happen to great tits in the coming decade? “What we’re seeing now is that climate change is simply going too fast for them”, says Marcel Visser. “They won’t be able to adapt sufficiently. In the bleakest climate scenarios, in particular, the birds will fall behind more and more.” As a result, the number of great tits fledging will decline.

Camouflage

So why aren’t we witnessing a decline in great tit population numbers yet? “Right now, the population effects are buffered by density-dependent processes", says Visser. The real impact of climate change is currently being ‘camouflaged’: out of every ten chicks, eight or nine would normally die in the first year as a result of predation, disease, food shortages, competition or just bad luck. But if three of them die before fledging because of climate change, the chances of the remaining seven will improve. There’ll be more food to go around, and less competition. "But there’s a limit.”

“There’s also a great deal of variation from year to year in terms of the weather, which makes it harder to measure the impact on the birds in the field. In the years of the study, the mismatch between wild great tits and their food – the caterpillars of the winter moth – happened to be surprisingly small.”

The extensive, multi-year study was funded by a major European grant (ERC Advanced) that was awarded to Visser for five years of research. “This was part of it, and a lot of people contributed: PhD candidates, postdocs, research assistants and students. It’s been an ambitious experiment, which is unlikely to be repeated any time soon by our team or by anyone else.”

Now for the entire Veluwe…

So what else do we need to find out? Visser has some suggestions: “Now that we have looked at the impact of climate change on the relatively straightforward food chain of oak tree, winter moth and great tit, it’s time to see if we can include a larger number of the species that together make up a food network. Like the entire Veluwe for instance…”

That's precisely one of the goals of the large LTER-LIFE programme, which starts this July: developing 'digital twins' of ecosystems to help predict how global change affects nature. The Veluwe natural area will be the first case.

__________________________________________________________________________________________

With more than 200 staff members and students, the Netherlands Institute of Ecology (NIOO-KNAW) is one of the largest research institutes of the Royal Netherlands Academy of Arts and Sciences (KNAW). The institute specialises in water and land ecology with three major themes: biodiversity, climate change and the sustainable use of land and water. The institute is located in an innovative and sustainable research building in Wageningen, the Netherlands. NIOO has an impressive research history that stretches back more than 65 years and spans the entire country, and beyond. www.nioo.knaw.nl/en

 

How computers and artificial intelligence evolve together


A review of compiler technologies for deep learning co-design

Peer-Reviewed Publication

INTELLIGENT COMPUTING

The Buddy Compiler
IMAGE: The Buddy Compiler framework, a work in progress, will include a benchmark framework, a domain-specific architecture framework, a co-design module, and a compiler-as-a-service platform in addition to a compiler framework.
CREDIT: Hongbin Zhang et al.

Co-design, that is, designing software and hardware simultaneously, is one way of attempting to meet the computing-power needs of today’s artificial intelligence applications. Compilers, which translate instructions from one representation to another, are a key piece of the puzzle. A group of researchers at the Chinese Academy of Sciences summarized existing compiler technologies in deep learning co-design and proposed their own framework, the Buddy Compiler.

The group’s review paper was published June 19 in Intelligent Computing, a Science Partner Journal.

Although others have summarized optimizations, hardware architectures, co-design approaches, and compilation techniques, no one has discussed deep learning systems from the perspective of compilation technologies for co-design. The researchers studied deep learning from this angle because they believe that “compilation technologies can bring more opportunities to co-design and thus can better achieve the performance and power requirements of deep learning systems.”

The review covers five topics:

  • The history of deep learning and co-design
  • Deep learning and co-design now
  • Compilation technologies for deep learning co-design
  • Current problems and future trends
  • The Buddy Compiler

The history of deep learning and co-design

Since the 1950s, neural networks have gone through many rises and falls, leading up to today's explosive growth of deep learning applications and research. Co-design began in the 1990s and has since been adopted in various fields, progressing from manual work to computer-aided design and ultimately becoming a complex process involving modeling, simulation, optimization, synthesis, and testing. Since 2020, a network model called the transformer has seen great success: ChatGPT is a chatbot built using a “generative pre-trained transformer.” Current AI applications like ChatGPT are reaching a new performance bottleneck that will require hardware-software co-design again.

Deep learning and co-design now

The breakthrough of deep learning comes from the use of numerous layers and a huge number of parameters, which significantly increase the computational demands of training and inference. As a result, it is challenging to achieve reasonable execution times through software-level optimization alone. To address this, both industry and academia have turned to domain-specific hardware solutions, aiming to achieve the required performance through a collaborative effort between hardware and software, known as hardware-software co-design. Recently, a comprehensive system has emerged, comprising deep learning frameworks, high-performance libraries, domain-specific compilers, programming models, hardware toolflows, and co-design techniques. These components collectively contribute to enhancing the efficiency and effectiveness of deep learning systems.

Compilation technologies for deep learning co-design

There are two popular ecosystems that are used to build compilers for deep learning: the tensor virtual machine, known as TVM, and the multi-level intermediate representation, known as MLIR. These ecosystems employ distinct strategies, with TVM serving as an end-to-end deep learning compiler and MLIR acting as a compiler infrastructure. Meanwhile, in the realm of hardware architectures customized for deep learning workloads, there are two primary types: streaming architecture and computational engine architecture. Hardware design toolflows associated with these architectures are also embracing new compilation techniques to drive advancements and innovations. The combination of deep learning compilers and hardware compilation techniques brings new opportunities for deep learning co-design.
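
As a concrete illustration of what an end-to-end deep learning compiler such as TVM does, the sketch below expresses a tiny one-layer network in TVM's Relay intermediate representation, compiles it for a generic CPU target, and runs it. This example is not taken from the review; it is a minimal sketch, and the exact API (Relay versus newer front ends, module names, build options) varies between TVM releases. An MLIR-based flow would instead lower the model step by step through progressively lower-level dialects.

import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Express a tiny network (dense layer + ReLU) directly in Relay IR.
data = relay.var("data", shape=(1, 64), dtype="float32")
weight = relay.var("weight", shape=(32, 64), dtype="float32")
net = relay.nn.relu(relay.nn.dense(data, weight))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], net))

# Compile the module for a generic CPU; the same IR could instead target a GPU
# or a custom accelerator, which is where hardware-software co-design comes in.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Execute the compiled artifact with random inputs.
dev = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("data", np.random.rand(1, 64).astype("float32"))
runtime.set_input("weight", np.random.rand(32, 64).astype("float32"))
runtime.run()
print(runtime.get_output(0).numpy().shape)  # expected: (1, 32)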

Current problems and future trends

With performance requirements increasing too fast for processor development to keep up, effective co-design is critical. The problem with co-design is that there is no single way to go about it, no unified co-design framework or abstraction. If several layers of abstraction are required, efficiency decreases. It is labor-intensive to customize compilers for specific domains. Unifying ecosystems are forming, but underlying causes of fragmentation remain. The solution to these problems would be a modular extensible unifying framework.

The Buddy Compiler

The contributors to the Buddy Compiler project are “committed to building a scalable and flexible hardware and software co-design ecosystem.” The ecosystem’s modules will include a compiler framework, a compiler-as-a-service platform, a benchmark framework, a domain-specific architecture framework, and a co-design module. The latter two modules are still in progress.

The authors predict continued development of compilation ecosystems that will help unify the work being done in the rapidly developing and somewhat fragmented field of deep learning.

The authors of the review are Hongbin Zhang, Mingjie Xing, Yanjun Wu, and Chen Zhao of the Institute of Software, Chinese Academy of Sciences.

International research team discovers Gulf Stream thermal fronts controlling North Atlantic subtropical mode water formation



Peer-Reviewed Publication

SCIENCE CHINA PRESS

Schematic of the dynamics of North Atlantic subtropical mode water formation controlled by Gulf Stream fronts
IMAGE: The Gulf Stream thermal front leads to excessive ocean latent heat release, primarily due to higher surface wind speed and a sharper air-sea humidity contrast over its warm flank. The cumulative, extensive latent heat loss favors the deepening of the mixed layer, which enlarges the outcropping of the corresponding isopycnals, leading to considerable transformation of lighter water masses into STMW.
CREDIT: ©Science China Press

Subtropical mode water (STMW) is a vertically homogeneous thermocline water mass that serves as a silo for heat, carbon, and oxygen in the ocean interior and provides memory of climate variability for climate prediction. Understanding the physics governing STMW formation is thus of broad scientific significance and has received much attention. Traditionally, STMW has been considered to be constructed by basin-scale atmospheric forcing. Owing to the sparse sampling of observations and the coarse resolution of climate models, little is known about the role of oceanic thermal fronts in STMW production.

Focusing on the North Atlantic Ocean, which contains the thickest and volumetrically largest STMW in the global ocean, the team found for the first time that the feedback of the sharp surface thermal fronts shaped by the Gulf Stream on the overlying atmosphere is essential for STMW formation. By comparing twin simulations conducted with a state-of-the-art eddy-resolving coupled global climate model, they found that suppressing local frontal-scale ocean-to-atmosphere (FOA) feedback reduces STMW formation by almost half. The difference is attributable to a vast increase in surface outcropping associated with the cumulative excessive latent heat release, primarily due to the higher wind speeds and greater air-sea humidity contrast driven by the Gulf Stream fronts (Figure 1).
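
Although the release does not state it, the mechanism can be summarized with the standard bulk aerodynamic formula for the latent heat flux, a textbook relation rather than a result of the paper:

Q_E = \rho_a \, L_v \, C_E \, U \, (q_s - q_a)

where \rho_a is air density, L_v the latent heat of vaporization, C_E a turbulent transfer coefficient, U the surface wind speed, and q_s - q_a the humidity difference between the sea surface and the overlying air. Over the warm flank of the Gulf Stream front, both U and q_s - q_a are enhanced, so the ocean loses more latent heat, the mixed layer deepens, and more lighter water is transformed into STMW.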

Furthermore, the crucial role of the FOA feedback is attested by a multi-model, multi-resolution ensemble of the latest global coupled models participating in CMIP6, in which the observed STMW is better reproduced at finer model resolution, owing to a more realistic representation of the FOA feedback (Figure 2). “This is an important finding that incorporates the missing piece for accurate STMW modelling and provides an effective solution to the common severe underestimation of STMW in earth system models,” Dr. Gan says.

“This study lasted over two years, starting in 2020, and I really enjoyed being part of it. It is an exciting new result that highlights the importance of frontal-scale air-sea interactions in the climate system,” Dr. Yu says.

###

See the article:

North Atlantic subtropical mode water formation controlled by Gulf Stream fronts

https://doi.org/10.1093/nsr/nwad133

Protection of biodiversity and ecosystems: we are still far from the European targets


Peer-Reviewed Publication

UNIVERSITÀ DI BOLOGNA

Strictly protected areas across the EU
IMAGE: Strictly protected areas (classified by the IUCN as integral reserves, wilderness areas and national parks) across the EU.
CREDIT: University of Bologna

The goal of fully protecting 10% of the EU's land area is ambitious for European countries that have been profoundly shaped by millennia of human transformation. A recently published study, coordinated by the University of Bologna, presents the first European-level analysis of strictly protected areas (classified by the IUCN as integral reserves, wilderness areas and national parks) across the EU, examining how extensive integral protection is across biogeographical regions, countries and elevation gradients.

"We have discovered – explains Prof. Roberto Cazzolla Gatti, conservation biologist at the Department of Biological, Geological and Environmental Sciences (BiGeA) of the University of Bologna and first author of the study – that the current strictly protected area in the EU 27 is extremely unbalanced between biogeographical regions, countries and altitude bands (for example, we find very few strictly protected areas in the plains and at low elevations) and, with very rare exceptions (only Luxembourg and Sweden are above the threshold identified by the EU, with Finland very close), most countries are much below the 10% strict protection goal. It will therefore be necessary to work to get closer to the conservation objectives set by the EU 2030 Strategy for biodiversity through rigorous international cooperation between countries and the commitment of individual states to identify national areas to be allocated for protection."

The conservation of global biodiversity is one of the most urgent objectives for the coming decades. Habitat destruction, degradation and fragmentation across 70% of the earth's surface are the main causes of biodiversity loss and are triggering the sixth mass extinction. In Europe, not a single contiguous area of more than ten thousand square kilometers remains free of human impacts. However, areas of high wildness and relatively intact ecosystems still exist, mainly within protected areas.

In May 2020, the “European Biodiversity Strategy for 2030” was signed: an ambitious plan to protect biodiversity and reverse the degradation of ecosystems. With this strategy, the European Union aims to expand the network of protected areas to 30% of its territory, with strict protection applied to 10% of the land and sea surface of all EU countries. Achieving this strict protection target is fundamental to the long-term conservation of ecosystem processes and the maintenance of high levels of biodiversity persistence.

"According to the European Commission - said Prof. Alessandro Chiarucci, director of the BiGeA Department of the University of Bologna and coordinator of this research project - strictly protected areas are areas that are completely and legally preserved, designated for conservation or restoration the integrity of natural areas rich in biodiversity and natural environmental processes. This definition gives a clear idea of which should be considered strictly protected in the EU context. Within these areas all industrial, extractive and destructive uses and activities that disturb species and ecosystems such as mining, deforestation, aquaculture and construction, they are usually not allowed. Rigorously protected areas must be seen as places where ecological and evolutionary processes are left substantially undisturbed to ensure the persistence of biodiversity. Therefore, it is necessary that in these areas human activities are limited and well controlled, allowing the natural development of natural processes. Management actions may be permitted to support or enhance natural processes, as well as the restoration or conservation of habitats and species for which the area has been designated to be protected."

The study also notes that the current scenario is very probably even worse than the one presented, since the management of some protected areas, such as the peripheral areas of national parks, does not always correspond to integral protection. In fact, some national parks, although classified as strictly protected, allow a wide range of anthropogenic activities in some of their areas (for example forestry, agriculture, hunting or grazing of domestic animals), hindering the conservation of some ecosystem processes. The authors of the study highlight how important it is to preserve large spaces without (or with very limited) human disturbance to ensure real ecological connectivity.

"Therefore – concludes Prof. Cazzolla Gatti – it would be necessary to identify potential areas to expand integral protection with low economic and social costs, including, for example, areas with a high biodiversity value, but low population and land use. However, considering that in Europe most of the territory has been profoundly modified by man, strictly protected areas should also include territories that currently have a lower protection status, such as those in the Natura 2000 network, and which can recover their biodiversity value through restoration and rewilding."

To achieve the objectives set by the EU 2030 Strategy for Biodiversity, it will first be necessary to identify, in each member country, a sufficient area, amounting to 10% of its territory, to place under full protection. Until now, a biogeographic and ecological analysis of the coverage of strictly protected areas in the EU had been lacking, and this has limited the definition of large-scale conservation policies. This study represents a further contribution towards greater conservation of European biodiversity.

 

Is early childhood education contributing to socioeconomic disparities?


New study of French school children finds differences in engagement opportunities linked to social class

Peer-Reviewed Publication

NEW YORK UNIVERSITY





Students from middle- and upper-class backgrounds are more likely to participate in classroom discussions than are equally capable students from working-class backgrounds, finds a new study of preschool children in France by an international team of researchers. The work also shows that these differences may shape how students are perceived by their peers.

 

The results, which appear in the Journal of Experimental Psychology: General, shed new light on the persistent and early emerging disparities in education linked to socioeconomic status (SES).

 

“While preschool attendance has been shown to be beneficial for low-SES students’ achievement, our results suggest that early childhood education is not currently maximizing its potential as an equalizing force,” says lead author Sébastien Goudeau, an assistant professor at Université de Poitiers.

 

“Early schooling contexts provide unequal opportunities for engagement to children in ways linked to their socioeconomic status, which could serve to maintain or even exacerbate social class disparities in achievement,” adds Andrei Cimpian, a professor in New York University’s Department of Psychology and one of the paper’s authors. “These and other findings call for redesigning aspects of early childhood in ways that foster engagement among all students, regardless of their social class.”

 

Previous research has primarily focused on deficits in low-SES parents’ knowledge, practices, or resources to explain disparities found in early childhood education. The new Journal of Experimental Psychology: General study examined how schooling itself at this age might be shortchanging children from lower-income backgrounds. 

 

In doing so, the study’s authors, who also included researchers from Northwestern University’s Kellogg School of Management and Stanford University’s Department of Psychology, examined students’ behavioral engagement during whole-class discussions—a core part of the preschool curriculum in Europe and North America.

 

One study included nearly 100 preschoolers, who were anonymous to the researchers, across four classrooms of Grande Section—the last year in French preschools before first grade—in France’s Nouvelle-Aquitaine region. The classrooms selected had a high degree of SES variability among the students as determined by their parents’ occupation. The researchers videotaped whole-classroom discussions—ranging from eight to 19 in each classroom—and recorded the frequency and duration of each child’s participation.

 

The results showed that low-SES students spoke less frequently and for less time compared to high-SES students. Notably, these differences were not accounted for by SES differences in oral language proficiency, indicating that low-SES students did not talk less because they lacked the proficiency to do so.

 

In a second study, the authors sought to understand how preschool children perceive differences among their peers in their levels of school engagement. To do so, they drew a new group of Grande Section participants from the same region; it included nearly 100 preschool students across five classrooms. 

 

To determine the children’s perceptions of their classmates, the researchers posed scenarios involving fictional students aimed at surfacing the students’ views of the types of students who are called upon and who speak longer than others. For instance, “When the teacher asks the class a question, several children raise their hands. However, the teacher calls on [Theodore/Zélie] more often than other children.” After each scenario, children were asked to explain the protagonist’s behavior: for instance, “Why do you think [Theodore/Zélie] is called on more often than other children?”

 

The research team then coded the open-ended responses the children provided, looking in particular for whether children mentioned inherent factors having to do with the protagonist’s own characteristics (e.g., “because she/he is smart,” “because she/he has a lot to tell”) or extrinsic factors having to do with the protagonist’s background or the classroom context (e.g., “because the teacher likes her/him,” “because the other children are disobedient”).

 

For each scenario, after the open-ended explanation question, children were also asked to evaluate the fictional student along the two fundamental dimensions of social judgments: competence and warmth. These included perceived intelligence (“Do you think [the fictional child] is more intelligent than the other children, or less intelligent than the other children?”) as well as how they thought the teacher viewed the fictional student (“Do you think the teacher likes [the child] more than the other children, or less than the other children?”). These comparisons were made with the fictional student’s classmates in mind.

 

Overall, the fictional child who made frequent and longer contributions to classroom discussions was perceived as possessing more positive characteristics than other children in their class. 

 

“Preschoolers explained differences in engagement during whole-class discussions as a consequence of children’s inherent characteristics, including their competence and warmth,” explains Cimpian. “These results suggest that the patterns of school engagement typical of middle- and high-SES students increase the extent to which they are valued by their preschool peers and—conversely—may undermine low-SES students’ psychological experiences.”

 

 

For job applicants with a criminal record, showcasing the right credentials can make a difference


Peer-Reviewed Publication

AMERICAN SOCIETY OF CRIMINOLOGY




Employment is believed to reduce the likelihood of criminal recidivism, but a criminal record is a significant barrier to employment. People with a criminal record are more likely to be unemployed or underemployed, or to have a job that does not match their skills or interests. In a new study, researchers asked business managers to make hypothetical hiring decisions about males with a criminal conviction, changing the characteristics of the applicants to identify their effect on managers’ decisions.

The study found that applicants with a criminal record were unlikely to be hired when compared with applicants without a record, but that some credentials—such as more education, certain references, and more years of experience—changed managers’ decisions. In fact, some credentials, such as a recommendation by a college professor, a GED, or a college degree, made the applicant with a criminal record more likely to be hired than a similar applicant without a criminal record who lacked those credentials.

The study, by researchers at the University of South Florida (USF), appears in Criminology, a publication of the American Society of Criminology.

“Having a criminal record is very costly in the labor market, but this cost can be superseded by specific credentials that likely signal an applicant’s reliability, which can be provided by existing programs and institutions,” says Mateus Rennó Santos, assistant professor of criminology at USF, who led the study.

Using a nationwide sample of nearly 600 hiring managers in 2021, researchers catalogued responses about hypothetical hiring decisions between two male applicants for entry-level jobs. The main difference between the applicants was a prior criminal conviction for drug possession with intent to distribute. The authors randomly manipulated the education, references, wages, or experience of the applicant with the criminal record to identify which factors could offset the existence of the record in terms of the applicant’s probability of being hired.

When credentials were the same, the applicant with a criminal record was consistently much less likely to be hired. However, that applicant was more likely to be hired if he had at least one year of relevant experience, a GED or college degree, or references from a former employer or a university professor. Incomplete degrees, references from criminal justice professionals (e.g., a prison reentry program supervisor, a probation or parole officer), or wage discounts did not make the applicant with the record more likely to be hired than a similar applicant without a criminal record.

With respect to experience, the study found no difference in effect on employability between experience obtained in or out of a correctional facility. This suggests that there is little need to hide or gloss over jobs inside prison if a potential employer is already aware of the applicant’s criminal record. In addition, increasing an applicant’s experience from nothing to one year was very helpful to employability, but any increase after the first year had little benefit to being hired for an entry-level position.

The study also found that managers who had criminal records were more likely to hire applicants with records, which speaks to potential empathy in the hiring process. In addition, managers in public-facing industries, especially those serving vulnerable populations (e.g., education, health care), were less likely to select applicants with criminal records than were managers in occupations such as manufacturing and transportation.

Finally, the study investigated managers’ justifications for their hiring choices, which included their desire to help people with a criminal record, their belief in redeemability, the expected benefits of hiring a candidate with better credentials, and the positive impressions signaled by certain credentials (e.g., greater commitment or skill). When deciding against the candidate with a criminal record, managers often said they wanted to minimize risk to their business, employers, or clients; worried about having someone with a drug conviction in the workplace; or dismissed the benefits of improved credentials for their particular business.

“In mitigating the cost of a criminal record for employment, hiring managers identified several ways to boost employability, most of which take advantage of interventions already available at many correctional institutions and re-entry programs,” notes Chae M. Jaynes, assistant professor of criminology at USF, who coauthored the study. “Not only can these factors be addressed individually, but they can be combined in single programs to increase the likelihood of employability for formerly incarcerated individuals.”

The study’s findings have practical implications, say the authors, including:

  • Correctional institutions are increasingly partnering with universities to offer incarcerated people opportunities to obtain college credits; such initiatives would be most beneficial if they focused on degree completion, which can provide a clearer signal of employability.
  • Professors considering becoming involved with prison education and re-entry initiatives should consider the value they can bring to the employability of individuals with criminal records, both in terms of skills and by lending their credibility through a recommendation.
  • Correctional institutions and re-entry programs should ensure that incarcerated individuals are offered the opportunity to work before their entry into the labor market, advise re-entering individuals that working while incarcerated is valued work experience, and discuss ways to showcase this experience on job applications.

Among the study’s limitations, the authors say their findings are specific to the scenarios they established and do not necessarily generalize to complex hiring settings with multiple applicants (e.g., to females, people with records for violent crimes, managers hiring for higher-level jobs). Also, because the study was done when many employers were having difficulty finding workers, managers may have been more open to hiring people with a criminal record.

“Putting our findings into practice can help justice-involved individuals in search of opportunities, as well as their communities, and the employers who are willing to hire them,” suggests Danielle Thomas, a doctoral student in criminology at USF, who coauthored the study.

The study was supported by USF’s College of Behavioral & Community Sciences.

 

Study shows significant decline of snow cover in the Northern hemisphere over the last half century


Snow cover plays a major role in global energy balance, continental thermal stability, and regional temperatures

Peer-Reviewed Publication

UNIVERSITY OF CALIFORNIA - SANTA CRUZ

IMAGE: Ice floes in the Arctic.
CREDIT: Mike Dunn, NOAA Climate Program Office, NABOS 2006 expedition

In the face of the ongoing climate crisis, scientists from many fields are directing their expertise at understanding how different climate systems have changed and will continue to do so as climate change progresses. Robert Lund, professor and department chair of statistics at the UC Santa Cruz Baskin School of Engineering, collaborated on a new study that uses rigorous mathematical models and statistical methods and finds declining snow cover in many parts of the northern hemisphere over the last half century.

Understanding snow cover trends is important because of the role that snow plays in the global energy balance. Snow’s high albedo – its ability to reflect light – and its insulating characteristics affect surface temperatures on a regional scale and thermal stability on a continent-wide scale.

In the new study, published in the Journal of Hydrometeorology, researchers analyzed snow cover data gathered from weekly satellite flyovers between 1967 (when satellites became more common) and 2021, dividing the Northern hemisphere into grid sections for analysis. Of the grids the researchers determined had reliable data, snow cover is declining in nearly twice as many as it is advancing.

“In the Arctic regions, snow is going away more often than not – I think climatologists sort of suspected this,” Lund said. “But it's also going away at the southern boundaries of the continents.”

In a study that took about four years to complete, the researchers show that snow presence in the Arctic and southern latitudes of the Northern hemisphere is generally decreasing, while some areas such as Eastern Canada are seeing an increase in snow cover. This could be due to increasing temperatures in areas that are typically very cold but still below freezing, allowing the atmosphere to hold more water, which then falls as snow.

Lund believes this is the first truly dependable analysis of snow cover trends in the Northern hemisphere, owing to the rigor of the researchers’ statistical methods. It is often challenging for non-statisticians to extract trends from this type of satellite data, which comes as a sequence of 0s and 1s indicating whether snow was present during a given week. The researchers also had to take correlation into account when looking at trends, as the presence of snow cover one week greatly affects the likelihood of snow cover the following week. These two factors were handled with a Markov chain-based model, from which accurate uncertainty estimates of the trends could be computed. The researchers found hundreds of grids where snow cover was declining with at least 97.5% certainty.
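
The press release does not spell out the model, but the core idea of treating each grid cell's weekly snow record as a two-state (snow/no-snow) Markov chain can be sketched in a few lines of Python. Everything below, including the function name and the synthetic data, is purely illustrative; the authors' actual model additionally handles seasonality and produces formal trend and uncertainty estimates.

import numpy as np

def fit_transition_matrix(snow):
    """Estimate the 2x2 transition matrix of a 0/1 weekly snow series.

    Returns P with P[i, j] = Pr(next week is state j | this week is state i).
    """
    snow = np.asarray(snow, dtype=int)
    counts = np.zeros((2, 2))
    for this_week, next_week in zip(snow[:-1], snow[1:]):
        counts[this_week, next_week] += 1
    # Row-normalize; the max() guard keeps rows with no observations defined.
    return counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

# Synthetic 54-year weekly record with a mild downward drift in snow probability.
rng = np.random.default_rng(0)
n_weeks = 54 * 52
series = (rng.random(n_weeks) < np.linspace(0.55, 0.45, n_weeks)).astype(int)

# Compare the long-run snow probability implied by the fitted chain in the two
# halves of the record; for a two-state chain it equals p01 / (p01 + p10).
for label, half in [("early half", series[: n_weeks // 2]),
                    ("late half", series[n_weeks // 2:])]:
    P = fit_transition_matrix(half)
    stationary_snow_prob = P[0, 1] / (P[0, 1] + P[1, 0])
    print(label, "stationary snow probability:", round(stationary_snow_prob, 3))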

However, they also found that some of the satellite data gathered in mountainous regions was unreliable, showing no snow in the winter and several weeks of snow in the summer. This was likely due to a flaw in the algorithm that processed the satellite data to determine whether snow was present.

“The reason this study took a lot of work is because the satellite data is so doggone poor,” Lund said. “Whatever the meteorologists did to estimate snow from the pictures in some of the mountainous regions just didn't work, so we had to take all the grids in the Northern hemisphere, and figure out whether the data was even trustworthy or not.”

By determining which satellite data is unreliable, this study can also serve as a resource for other scientists who may want to evaluate this snow cover data for their own research.

Lund collaborated on this study with Jiajie Kong, a Ph.D. candidate at UCSC; Yisu Jia, assistant professor of math and statistics at the University of North Florida; Jamie Dyer, professor of meteorology and climatology at Mississippi State University; Jonathan Woody, associate professor of statistics at Mississippi State University; and J. S. Marron, professor of statistics and operations research at the University of North Carolina at Chapel Hill. This research was supported by funding from the National Science Foundation.