Wednesday, April 03, 2024

New method reveals hidden activity of life below ground



Peer-Reviewed Publication

BIGELOW LABORATORY FOR OCEAN SCIENCES

Death Valley Borehole 

IMAGE: The Desert Research Institute team extracting samples from the borehole at Death Valley.

CREDIT: Duane Moser, Desert Research Institute






A team of scientists led by researchers at Bigelow Laboratory for Ocean Sciences has developed an innovative method to link the genetics and function of individual microbes living without oxygen deep below Earth’s surface. Measuring both of these attributes — and, more importantly, linking them together — has long been a challenge in microbiology but is critical for understanding the role of microbial communities in global processes like the carbon cycle.

The new approach, developed at Bigelow Laboratory’s Single Cell Genomics Center, enabled researchers to discover that one species of sulfate-consuming bacterium was not only the most abundant but also the most active organism in a groundwater aquifer beneath Death Valley, almost half a mile below the surface. The findings, published in the Proceedings of the National Academy of Sciences, show how this method can be a powerful tool for measuring how active different organisms are in these extreme environments.

“Previously, we had to assume that all cells were operating at the same rate, but now we can see that there is a wide range of activity levels between individual members of the microbial communities,” said Research Scientist and lead author on the paper Melody Lindsay. “That helps us understand what these microbial communities are capable of and how that might influence global biogeochemical cycles.”

The recent study is a part of a larger project linking the genetic code of microbes — the blueprint of what they’re capable of — to what they’re actually doing at any given moment.

Funded by NSF’s EPSCoR program, the “Genomes to Phenomes” project is a joint venture between Bigelow Laboratory, the Desert Research Institute, and the University of New Hampshire. It combines recent advances in single-cell genetic sequencing with a creative approach that applies flow cytometry to estimate the rates of processes, such as respiration, happening within individual cells.

Flow cytometry, a method for analyzing individual environmental microbes that was adapted at Bigelow Laboratory from the biomedical sciences, allowed the researchers to quickly sort out living microbes in the aquifer water samples. Those microbes were stained with a specially designed compound that lights up under the flow cytometry laser when certain chemical reactions are happening within the cell. The relationship between how much the cell fluoresces under the laser and the rate of those reactions was worked out experimentally with lab-grown cultures of cells by student interns at Bigelow Laboratory and then applied to the Death Valley samples.
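That final step, converting per-cell fluorescence into a metabolic rate, amounts to fitting a calibration curve on culture data and then applying it to the sorted environmental cells. The sketch below is purely illustrative, with made-up numbers and a simple linear fit; the actual calibration in the study may use different units and a different functional form.

```python
import numpy as np

# Hypothetical calibration data: per-cell fluorescence (arbitrary units)
# measured for lab-grown cultures whose metabolic rates were determined
# independently (e.g. substrate consumed per cell per day).
fluorescence  = np.array([120.0, 340.0, 610.0, 980.0, 1500.0])
measured_rate = np.array([0.8, 2.1, 4.0, 6.3, 9.8])

# Fit a linear calibration: rate ~ a * fluorescence + b.
a, b = np.polyfit(fluorescence, measured_rate, deg=1)

# Apply the calibration to fluorescence values recorded for cells
# sorted from an environmental (e.g. aquifer) sample.
sample_fluorescence = np.array([95.0, 410.0, 220.0])
estimated_rates = a * sample_fluorescence + b
print(estimated_rates)  # estimated per-cell activity for the sorted cells
```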

Once the active cells were measured and isolated, the team sequenced their individual genomes. The researchers also used meta-transcriptomics, a method for determining which genes are being actively expressed, and radioisotope tracers, a more traditional method for measuring activity within a microbial community. This was done both to “double check” their results and to get even more information on the links between what these microbes are genetically capable of and what they’re actually doing.

The Single Cell Genomics Center is the only analytical facility in the world offering this new technique to researchers.

“This study was an exciting opportunity for our research team and the SCGC to help improve our understanding of the immense, enigmatic microbial ecosystems underground,” said Bigelow Laboratory Senior Research Scientist Ramunas Stepanauskas, the director of SCGC and principal investigator of the project.

This new study builds on the first demonstration of this approach for quantifying the activity of individual cells. In late 2022, the team published findings on microbes in seawater, showing that a small fraction of microorganisms is responsible for consuming most of the oxygen in the ocean. With this new paper, the team is expanding that method to show it can be used in low biomass environments with microbes that don’t rely on oxygen. In the samples drawn from the subsurface aquifer in California, for example, the scientists estimated that there were hundreds of cells per milliliter of water, compared to millions of cells in a typical milliliter of surface water.

“We started out with oxygen-respiring organisms in the ocean because they’re a little more active, a little easier to sort, and easier to grow in the lab,” Lindsay said. “But aerobic respiration is just one process that is possible in microbiology, so we wanted to branch out beyond that.”

The results confirmed that the bacterium Candidatus Desulforudis audaxviator was not only the most abundant microbe in this environment, but also the most active, reducing sulfate for energy. The overall activity rates the team measured were low compared to the seawater samples from the previous study, but there were large differences between how active individual microbes were.

The research team is now working to apply their method to measure other anaerobic reactions, such as nitrate reduction, and to new environments, including sediments along Maine’s coast. A related project funded by NASA is also enabling Lindsay and her colleagues to test the method in the deep subsurface below the ocean.

“Right now, we’re getting all of these point measurements around the world, and they do help us better understand what microbes are up to, but we need to scale it up,” Lindsay said. “So, we’re thinking about how to apply this method in new places, even potentially on other planets, in expanded ways.”

 

Gloom and doom warnings about climate change do not work



If we want people to act to curb climate change, we have to find a way to motivate them



NORWEGIAN UNIVERSITY OF SCIENCE AND TECHNOLOGY





If you want to spread a message about climate change and global warming, you need to adapt the message according to your intended audience and what you want to achieve.

Researchers have now developed an app to help those who want to spread a message on climate issues – be they researchers, politicians, other decision makers or legislators – generate as much support as possible.

Huge survey involving 63 countries

Some 59,000 people participated in surveys as part of the work on creating the app, and Norway was among the 63 countries involved. (You can read about what works best in Norway later in the article.)

“The research team created this app that can help raise climate awareness and climate action globally. It is important to highlight messages that research shows are effective,” says Isabel Richter, Associate Professor at the Norwegian University of Science and Technology’s (NTNU) Department of Psychology.

In total, nearly 250 researchers were involved in the work of testing out different climate messages and tactics. Richter was part of the research team along with colleagues Senior Researcher Stepan Vesely and Professor Christian Klöckner, also from NTNU's Department of Psychology.

Previous studies have concentrated on checking attitudes towards individual measures. These might include recycling, use of public transport and energy-saving measures in the home. However, this study looked at a number of different variations. It also received answers from people all around the world, not only from Western, industrialised countries.

The researchers collected data between July 2022 and May 2023, so the figures are very recent. Both the app and the method behind it have now been presented in the Science Advances journal.

Multiple variations

The researchers exposed people to different variations of climate messages and tasks related to climate change. They then investigated their attitudes towards the different climate measures and other types of responses.

To measure how effective the methods were, they checked how willing the participants were to support different points of view and measures regarding climate change. For example, participants were asked whether they saw climate change as a serious threat, whether they supported a carbon tax on fossil energy, or whether they would plant trees themselves as part of the solution.

The researchers also tested whether participants were willing to share climate messages on social media, such as a message about eating less meat in order to mitigate climate change.

Here are some of the results:

  • INTIMIDATION: “Climate change poses a serious threat to humanity”.

All tactics increased the likelihood of people sharing the climate message on social media, and this doom and gloom messaging style was the most effective, at least globally. However, sharing requires little effort from the person doing it. In some countries, scare tactics reduced support for reforestation, a real measure that requires more effort but may actually work. Scare tactics also reinforced the negative attitudes of people who are already climate sceptics.

  • KNOWLEDGE: “99% of climate experts believe the planet is getting warmer and that climate change is primarily due to human activity”.

Some messages produce different results in different countries. This message, which appeals to the recipient’s sense of knowledge, increased support for climate measures in Romania by 9 per cent. In Canada, however, it reduced support by 5 per cent.

  • EMOTIONS: Writing a letter to a child who is close to you about the climate measures we are taking today to make the planet a liveable place in 2055.

This tactic increased support for climate measures in Nigeria, Russia, Ghana, Brazil and the United States by between 5 and 10 per cent. However, in countries such as India, Serbia and the United Arab Emirates, it had little effect, or even reduced support slightly.

Other variations the researchers tested included presenting climate measures that have already been successfully implemented in the past, or portraying climate measures as patriotic or popular choices. Participants were also asked to imagine writing a letter to their future self telling them what type of climate measures they should have taken.

86 per cent believe climate change is a threat

Attitudes varied widely from country to country and depended on both demographics and beliefs. The researchers also divided people into groups according to their nationality, political ideology, age, gender, education, and income.

The results showed that 86 per cent of the participants believed that climate change poses a threat.

More than 70 per cent were supporters of systematic and collective measures to address climate change.

No point in using scare tactics in Norway

Gloom and doom messages about climate change do not work in Norway.

“Writing a letter to future generations is most effective in increasing political support for climate measures, and in increasing the belief that climate change is a problem. The second most effective measure is to say that almost all climate experts agree,” Klöckner said.

Dire warnings and writing a letter to your future self were the least effective measures in Norway.

“All the alternatives made people in Norway less inclined to share a climate message on social media,” adds Richter – in other words, the complete opposite of the results seen globally.

However, people in Norway are quite eager to do something themselves, like planting trees. Here, it is most effective to focus on moral responsibility, the fact that many people acknowledge that climate change is a problem, and also that there is consensus among climate experts.

“The way that I choose to interpret it is that people in Norway like to do something concrete instead of just sharing things on social media,” says Associate Professor Richter.

Significant Norwegian contribution

Researchers from New York University and the University of Vienna led the study, but NTNU’s contribution was also significant.

“We were involved from the very beginning, developing possible interventions. We assessed intervention proposals from other partners, improved them in collaboration with the group and helped determine which interventions should actually be implemented,” says Vesely.

Vesely and Klöckner led and funded the collection of data in Norway.

Richter has good contacts in a number of African countries, whose involvement is not always easy to secure in these types of studies. Among other things, she co-funded and participated in the collection of data in Kenya.

Approximately 50 per cent of the Norwegian funding came from the Norwegian School of Economics (NHH). NHH also organised data collection through Ipsos.

Messages need to be adapted

Some activists believe that scare tactics are precisely what is needed to get people to take action themselves. Others are of the opinion that such messaging is depressing, demoralising and counterproductive. The study supports both of these hypotheses; which one applies depends on what you want to achieve.

Scare tactics work if your main focus is on getting people to post about their support on social media, but the venting of anger and frustration on Facebook, TikTok or X doesn’t necessarily help the environment. If you want to gather support for things that may actually work, you need to use other means.

It is quite easy to get people to do things that do not require much effort, such as sharing a message on social media.

“Sharing something on social media can in itself feel like taking action. People may feel like ‘Now that I have done something, I can get on with my life’. This is behaviour with a very low threshold,” says Associate Professor Richter.

However, based on the results from around the world, none of the methods made people more willing to plant more trees for the sake of the environment – a measure that means people have to put an effort in themselves.

“The findings show that spreading a climate message depends on people’s attitudes towards climate change in the first place. Legislators and campaigners must adapt their messaging to the public,” says Madalina Vlasceanu, Assistant Professor at New York University and one of the people who led the research project.

Reference: Madalina Vlasceanu et al. Addressing climate change with behavioral science: A global intervention tournament in 63 countries. Science Advances 10, eadj5778 (2024). DOI: 10.1126/sciadv.adj5778

 

SwRI awarded $6.4 million to advance cognitive electronic warfare


SwRI engineers developing EW algorithm to ‘think like a human’


Business Announcement

SOUTHWEST RESEARCH INSTITUTE

Cognitive electronic warfare

IMAGE: Under a $6.4 million Air Force contract, SwRI is exploring algorithms to advance the Air Force’s cognitive electronic warfare capabilities. Cognitive EW systems accurately detect and respond to unknown enemy radar threats in real time to disrupt or deceive enemy air defenses such as surface-to-air missiles or anti-aircraft artillery, as illustrated by the U.S. Air Force F-16 EW training exercise pictured.

CREDIT: U.S. Air Force/Jake Melampy





SAN ANTONIO — April 2, 2024 — The United States Air Force is tapping into the extensive expertise of Southwest Research Institute’s Advanced Electronic Warfare team with a $6.4 million contract to explore cognitive electronic warfare (EW) algorithms, which accurately detect and respond to unknown enemy radar threats in real time. SwRI engineers are conducting research to develop a reliable algorithm that will advance the Air Force’s cognitive EW capabilities and protect aircrews.

“How do we get to the point where the EW system is thinking like a human?” said SwRI Staff Engineer David Brown, who is leading the project. “A pilot can fly into an area and not know what’s there, but by analyzing the environment and signals, the pilot can choose a proper response to a threat. We are developing an algorithm that can analyze its environment the same way. It will sift through information with the reliability of a human, but with higher accuracy and faster reaction times.”

Traditional electronic warfare processes involve gathering intelligence before flying into an area. Aircraft operators have advance knowledge of which adversaries they might encounter, and that information is preloaded into the aircraft’s electronic warfare system. The system alerts pilots when it detects threats and automatically protects the aircraft. While this method is effective at tracking known threat signals, the system is incapable of identifying never-before-seen threats. That’s where cognitive EW could prove a stronger, faster and more accurate tool to protect the warfighter and enhance capabilities with allies. 

“In the past, we’ve relied on a library of signals that allows you to recognize and compare familiar signals. We’re asking, ‘does this match anything I already know about?’” said SwRI Research Engineer Damon Plyler, who is part of the development team. “That library is the limit of our information. Through cognitive EW and a machine learning approach, we can instantly identify new signals that are not already in the library.”

To develop this autonomous EW system, SwRI engineers are using a two-phased approach. The first phase, feature extraction, uses artificial intelligence and machine learning processes to extract specific features of threatening radar signals. Those data are used in the second phase to group millions of pulses, highlighting signal lethality and vulnerabilities.
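To picture the second phase, the grouping step can be thought of as unsupervised clustering over a table of extracted pulse features. The snippet below is an illustrative stand-in, not SwRI's algorithm: it invents three feature columns (carrier frequency, pulse width, pulse repetition interval) for two simulated emitters and applies an off-the-shelf density-based clusterer, which has the useful property of not needing to know in advance how many emitters are present.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Hypothetical feature table: one row per intercepted pulse. In practice
# these columns would come from the feature-extraction phase, not from
# random numbers.
emitter_a = np.column_stack([
    rng.normal(9.4, 0.02, 1500),   # carrier frequency (GHz)
    rng.normal(1.2, 0.05, 1500),   # pulse width (microseconds)
    rng.normal(0.80, 0.01, 1500),  # pulse repetition interval (ms)
])
emitter_b = np.column_stack([
    rng.normal(9.7, 0.02, 1500),
    rng.normal(0.6, 0.05, 1500),
    rng.normal(0.45, 0.01, 1500),
])
pulses = np.vstack([emitter_a, emitter_b])

# Group pulses into candidate emitters without specifying how many exist.
labels = DBSCAN(eps=0.5, min_samples=20).fit_predict(
    StandardScaler().fit_transform(pulses))

n_emitters = len(set(labels) - {-1})
print(f"{n_emitters} emitter clusters, {np.sum(labels == -1)} unclustered pulses")
```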

SwRI engineers are implementing feature extraction algorithms on advanced platforms, including neuromorphic processing hardware. Neuromorphic computing systems use spiking neural networks to emulate how the human brain retains “memories,” making processing faster, more accurate and efficient.

“We are working to provide the Air Force with efficient and resilient cognitive EW solutions,” said SwRI’s Dr. Steven Harbour, who is applying his doctorate in neuroscience to lead the development of neuromorphic systems. “We are implementing neuromorphics in hardware to be used for the first time in an operational combat environment. It puts us well ahead of our adversaries. To the best of our knowledge, we are the first in the world to do this.”
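For readers unfamiliar with spiking neural networks, the basic unit they emulate can be sketched in a few lines. The leaky integrate-and-fire model below is a textbook simplification, not the neuron model in SwRI's hardware: the membrane potential leaks toward a resting value, integrates incoming current, and emits a discrete spike when it crosses a threshold, which is what lets neuromorphic processors compute with sparse events rather than continuous values.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays toward
    v_rest, integrates the input current, and fires when it reaches v_thresh."""
    v, spike_times = v_rest, []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + current) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset  # reset after the spike
    return spike_times

# A constant drive above threshold produces a regular spike train.
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 second of simulated time")
```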

SwRI’s cognitive EW work began as a multiyear, multimillion-dollar internal research and development (IR&D) project. Through its IR&D program, the Institute invests in future-focused concepts to advance technology for government and industry clients. The contract between SwRI and USAF commenced on October 1, 2023, and will run through March 2025. Upon completion of the project, SwRI will present the Air Force with effective research toward advancing the field of cognitive EW.

SwRI develops advanced electronic warfare systems for air and ground applications.  For more information, visit https://www.swri.org/industry/advanced-electronic-warfare-department/advanced-electronic-warfare-solutions.


Sandia pumps $140B into the US economy through technology development

Yes, that’s billion with a B


DOE/SANDIA NATIONAL LABORATORIES

20-year economic impact 

IMAGE: Sandia National Laboratories’ Cooperative Research and Development Agreements and patent license agreements have resulted in a $140 billion economic impact since the year 2000.

CREDIT: Sandia National Labs





ALBUQUERQUE, N.M. — To say that the technology and products Sandia National Laboratories researchers have helped imagine, innovate and industrialize have had a massive impact on the country would be an understatement.

Two studies commissioned by Sandia and the National Nuclear Security Administration show Sandia’s work has had an overall economic impact of $140 billion since the year 2000. That’s a significant figure, especially considering it spans just 20 years, less than a third of Sandia’s 75-year existence.

“I am very proud of how Sandia excels in fulfilling its technology transfer mission to deliver economic impact to the U.S.,” said Senior Manager of Business Development Mary Monson. “Sandia’s contribution of $140 billion over 20 years makes it a leader among Department of Energy laboratories.”

Impact in dollars and cents

The two studies, conducted by TechLink, a federally funded technology transfer center at Montana State University, show:

  • $140 billion in overall economic impact
  • $72.2 billion in new product sales
  • $22.52 billion in new product sales to the government
  • $14.1 billion in new tax revenue
  • 607,246 jobs created

The studies examined Sandia’s Cooperative Research and Development Agreements and patent license agreements between 2000 and 2023, assessing how they translated into actual products, sales, tax revenue and jobs. This benefits the national economy, improves the nuclear weapons stockpile for the United States and international allies and supports NNSA missions.

“The Nuclear Security Enterprise is a treasure trove of commercializable technologies, and Sandia National Laboratories stands apart in its efforts to bring products to market,” said acting Director of NNSA’s Office of Technology and Partnerships, Jahleel Hudson.

Cooperative Research and Development Agreements and Patent License Agreements

Cooperative Research and Development Agreements and Patent License Agreements are two ways Sandia helps move its technology to industry.

CRADAs are unique contractual vehicles that enable federal labs and outside parties to jointly develop new technologies, leveraging each other’s expertise and resources.

PLAs are legally binding contracts that give licensees, usually for-profit corporations, the right to make, use and sell federal government-owned inventions.

Sandia has excelled in these agreements. According to the FY22 DOE Office of Technology Transitions annual data call, Sandia had:

  • The highest number of patent disclosures and filings and copyright assertions
  • The second highest number of active patent licenses
  • The third highest number of CRADA projects in the DOE enterprise

DOE Boost

In recent years, Sandia has accelerated its efforts to move innovation to industry through the creation of new programs. One such program is the DOE Boost Platform, led by Sandia and external partner FedTech, which partners with 13 other DOE labs. The goal is to bring national labs, startups, academia and entrepreneurs together to find solutions to community-based energy challenges, especially in rural areas.

Sandia is already negotiating three licenses under this program, including one with a team of New Mexicans and Alaskans.

“They are looking to use Sandia’s algae technology in Santa Fe’s municipal water system,” Monson said. “It highlights the fact that communities want to work with people and entrepreneurs from their community to focus on the challenges they’re facing. It’s kind of like the whole impetus behind Boost.”

C4 Partnering Model

Another successful program is the C4 Partnering Model. The four C’s stand for Co-locate, Collaborate, Community and Communicate. It works with seven lab partners and regional partners to develop a regional commercialization ecosystem for manufacturing clean energy technologies, focused on collaboration and rapid technology development.

S2EDGE

A third program is the Semiconductor Sector S2-EDGE program. EDGE stands for Expedite and Develop Game-Changing Exemplar. In collaboration with four other labs, it aims to harness industry use-inspired input to move innovations in semiconductor technologies and next-generation microelectronics from the lab to the market.

Monson says these programs are just part of the reason Sandia’s tech transfer numbers are so positive.

“We are committed to bringing new programs to our ecosystem that researchers can take advantage of to move labs-developed technologies to the market,” she said.

Success Stories – Goodyear

This massive economic impact has been fueled by some long-lasting partnerships, including one with Goodyear. In the early 1990s, the tire company turned to Sandia for its expertise in advanced computational mechanics, simulation and structural dynamics to improve tire performance, reduce product development times and improve manufacturing methods.

Thirty years later, that partnership continues. One of its latest projects is a virtual way to develop a quieter tire without the need for physical testing. Over the years, the partnership has also resulted in the development of the Assurance TripleTred, a visually distinctive all-weather tire, and the development of Virtual Flat Track, which uses computer simulations to test a virtual tire on a virtual test track simulating actual road conditions.

All of this technology has roots in Sandia’s nuclear deterrence mission.

Sandia National Laboratories and the Goodyear Tire & Rubber Co. have developed a virtual way to test how noisy a tire will be on the road before any physical testing.

CREDIT: Goodyear

PNM

Sandia is also involved in ongoing work with the Public Service Company of New Mexico. In 2020, the two formalized a CRADA to help PNM develop a more resilient, clean electric grid and meet a 2040 carbon emissions-free target.

The partners are developing new analytical approaches, capabilities and technologies to help PNM use distributed and centralized sources of clean energy. Sandia and PNM will explore different types of storage, such as batteries, thermal storage and hydrogen that can provide grid resilience. Together, the partners can work to test solutions in real-world scenarios.

hCMOS

Last year, Sandia celebrated its partnership with an Albuquerque-based startup, Advanced hCMOS systems, to commercialize the world’s fastest burst-mode X-ray camera created at Sandia. The company was co-founded by former Sandia employees Liam Claus and Marcos Sanchez, who took advantage of the Entrepreneurial Separation for Technology Transfer program to start their own business.

While the technology is used as part of Sandia’s nuclear power and fusion work, there are many other applications, including astrophysics research, laser design and materials science. Researchers are looking to use it for, among other things, developing stronger glass screens for cell phones. They need to understand how glass fractures. In order to do that, they must see it as it happens at extremely high speeds.

The company is already serving government customers and is working on ramping up production through a DOE Phase I Small Business Innovation Research Grant and a TRGR Technology Readiness Initiative project.

75 years of innovation

With 75 years of innovation behind it, Sandia is looking to make an even bigger impact in the years ahead.

Commercialization of lab-developed technologies has long been part of Sandia’s mission, but it is also a reminder that this national lab is made up of some of the brightest minds in the country. Through these partnerships and programs, this work can make its way into the real world and affect everyday lives, something Sandia and NNSA are proud of.

“Over the last 20 years, Sandia’s economic impact has generated billions of dollars, created half a million high-paying jobs and positioned the United States as a leader in technological innovation,” Hudson said. “NNSA’s Technology and Partnerships Office values Sandia’s extraordinary achievements and is delighted to have commissioned the study to recognize this success.”

For more information on Sandia technology and its impacts, visit 70 ways Sandia has changed the nation.


Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.


Sandia National Laboratories’ Quinn Looker inspects sensors used in the ultrafast X-ray imaging camera.

CREDIT: Sandia National Laboratories

 

Computer scientists show the way: AI models need not be SO power hungry



UNIVERSITY OF COPENHAGEN - FACULTY OF SCIENCE
Scatter plot 

IMAGE: Each dot in this figure is a convolutional neural network model, with energy consumption on the horizontal axis and performance on the vertical axis. Conventionally, models are selected based only on their performance, without taking their energy consumption into account, resulting in models in the red ellipse. This work enables practitioners to select models from the green ellipse, which give a good trade-off between effectiveness and efficiency.

CREDIT: Figure from the scientific article (https://ieeexplore.ieee.org/document/10448303)




The fact that colossal amounts of energy are needed to run a Google search, talk to Siri, ask ChatGPT to get something done, or use AI in any other way has gradually become common knowledge. One study estimates that by 2027, AI servers will consume as much energy as Argentina or Sweden. Indeed, a single ChatGPT prompt is estimated to consume, on average, as much energy as forty mobile phone charges. But the research community and the industry have yet to make the development of energy-efficient, and thus more climate-friendly, AI models their focus, computer science researchers at the University of Copenhagen point out.

"Today, developers are narrowly focused on building AI models that are effective in terms of the accuracy of their results. It's like saying that a car is effective because it gets you to your destination quickly, without considering the amount of fuel it uses. As a result, AI models are often inefficient in terms of energy consumption," says Assistant Professor Raghavendra Selvan from the Department of Computer Science, whose research looks in to possibilities for reducing AI’s carbon footprint.

But the new study, of which he and computer science student Pedram Bakhtiarifard are two of the authors, demonstrates that a great deal of CO2e emissions can easily be avoided without compromising the precision of an AI model. Doing so requires keeping climate costs in mind from the design and training phases of AI models onwards.

"If you put together a model that is energy efficient from the get-go, you reduce the carbon footprint in each phase of the model's 'life cycle'. This applies both to the model’s training, which is a particularly energy-intensive process that often takes weeks or months, as well as to its application," says Selvan.

Recipe book for the AI industry

In their study, the researchers calculated how much energy it takes to train more than 400,000 AI models of the convolutional neural network type – without actually training all of them. Among other things, convolutional neural networks are used to analyse medical imagery, for language translation, and for object and face recognition – a function you might know from the camera app on your smartphone.

Based on the calculations, the researchers present a benchmark collection of AI models that use less energy to solve a given task but perform at approximately the same level. The study shows that by opting for other types of models or by adjusting models, energy savings of 70-80% can be achieved during the training and deployment phases, with a decrease in performance of only 1% or less. And according to the researchers, this is a conservative estimate.

"Consider our results as a recipe book for the AI professionals. The recipes don’t just describe the performance of different algorithms, but how energy efficient they are. And that by swapping one ingredient with another in the design of a model, one can often achieve the same result. So now, the practitioners can choose the model they want based on both performance and energy consumption, and without needing to train each model first," says Pedram Bakhtiarifard, who continues:

"Oftentimes, many models are trained before finding the one that is suspected of being the most suitable for solving a particular task. This makes the development of AI extremely energy-intensive. Therefore, it would be more climate-friendly to choose the right model from the outset, while choosing one that does not consume too much power during the training phase."

The researchers stress that in some fields, like self-driving cars or certain areas of medicine, model precision can be critical for safety, and here it is important not to compromise on performance. However, this shouldn’t deter the pursuit of high energy efficiency in other domains.

"AI has amazing potential. But if we are to ensure sustainable and responsible AI development, we need a more holistic approach that not only has model performance in mind, but also climate impact. Here, we show that it is possible to find a better trade-off. When AI models are developed for different tasks, energy efficiency ought to be a fixed criterion – just as it is standard in many other industries," concludes Raghavendra Selvan.

The “recipe book” put together in this work is available as an open-source dataset for other researchers to experiment with. The information about all 423,000 architectures is published on GitHub, where AI practitioners can access it using simple Python scripts.
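As a sketch of how such a table might be used, the snippet below keeps every architecture within one percentage point of the best accuracy and then picks the one with the lowest estimated training energy. The column names and numbers are invented for illustration; the published dataset has its own schema and far more entries.

```python
import pandas as pd

# Hypothetical slice of a benchmark table: one row per architecture with
# its estimated training energy (kWh) and validation accuracy.
df = pd.DataFrame({
    "model":      ["A", "B", "C", "D", "E"],
    "energy_kwh": [12.0, 4.1, 3.8, 9.5, 4.5],
    "accuracy":   [0.912, 0.905, 0.871, 0.910, 0.908],
})

best_acc = df["accuracy"].max()

# Accept at most a one-percentage-point drop in accuracy,
# then choose the cheapest model to train.
candidates = df[df["accuracy"] >= best_acc - 0.01]
choice = candidates.sort_values("energy_kwh").iloc[0]
print(choice["model"], choice["energy_kwh"], choice["accuracy"])  # B 4.1 0.905
```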

 

 

[BOX:] EQUALS 46 YEARS OF A DANE’S ENERGY CONSUMPTION

The UCPH researchers estimated how much energy it takes to train 429,000 of the AI subtype models known as convolutional neural networks in this dataset. Among other things, these are used for object detection, language translation and medical image analysis.

It is estimated that training the 429,000 neural networks the study looked at would alone require 263,000 kWh. This equals the amount of energy that an average Danish citizen consumes over 46 years, and it would take a single computer about 100 years to do the training. The authors did not actually train these models themselves but estimated the figures using another AI model, thereby saving 99% of the energy it would otherwise have taken.

 

 

[BOX:] WHY IS AI’S CARBON FOOTPRINT SO BIG?

Training AI models consumes a lot of energy, and thereby emits a lot of CO2e. This is due to the intensive computations performed while training a model, typically run on powerful computers. This is especially true for large models, like the language model behind ChatGPT. AI tasks are often processed in data centers, which demand significant amounts of power to keep computers running and cool. The energy source for these centers, which may rely on fossil fuels, influences their carbon footprint.

 

 

[BOX:] ABOUT THE STUDY

  • The scientific article about the study will be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP-2024).
  • The authors of the article are Pedram Bakhtiarifard, Christian Igel and Raghavendra Selvan from the University of Copenhagen’s Department of Computer Science.

 

A new 'deep learning' model predicts water and energy demands in agriculture with great accuracy


Researchers develop a model based on the ‘Transformer’ architecture to guide irrigation communities' decision-making


UNIVERSITY OF CÓRDOBA

Researchers Emilio Camacho, Juan Antonio Rodríguez and Rafael González 

IMAGE: Researchers Emilio Camacho, Juan Antonio Rodríguez and Rafael González, from the Agronomy Department at the University of Córdoba.

CREDIT: University of Córdoba




Water scarcity and the high cost of energy are the main problems facing irrigation communities, the organizations that manage water and make it available to agriculture. In a context of drought, with a deregulated and changing electricity market, knowing when crops will be irrigated, and with how much water, would allow those who manage these communities to overcome uncertainty when making decisions and, therefore, to pursue objectives like economic savings, environmental sustainability and efficiency. For this, data science and Artificial Intelligence are important resources.

Researchers from the Hydraulics and Irrigation group, part of the María de Maeztu Unit of Excellence in the Agronomy Department at the University of Córdoba (DAUCO), are working to apply this cutting-edge technology to the field of precision agriculture. An example of this is the HOPE project, focused on the development of a holistic precision irrigation model that also involves the application of AI to guide decision-making. Within the framework of this effort, prediction models have been developed that furnish irrigation communities with rigorous estimates of the amount of water that growers will need to meet their crops' needs. The latest model developed, and the most accurate to date, makes it possible to predict the actual demand for irrigation water one week ahead with a margin of error of less than 2%, thus making effective management of resources possible without reducing its users' autonomy.

According to researchers Rafael González, Emilio Camacho and Juan Antonio Rodríguez, this advance represents another step in the line of digitization applied to irrigation pursued by the AGR 228 "Hydraulics and Irrigation" research group. They have now applied the revolutionary Transformer deep learning architecture to the field of precision irrigation. Since its appearance in 2017, this architecture has been adopted across many sectors and underpins Artificial Intelligence milestones such as ChatGPT. The Transformer stands out for its ability to establish long-term relationships in sequential data through what are known as 'attention mechanisms.' In the case of irrigation, this architecture allows a lot of information to be processed simultaneously, delegating the selection and extraction of the information needed for an optimal prediction to the artificial neural network.
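To make the idea concrete, a minimal version of such a forecaster could look like the sketch below, which is not the authors' model: it assumes a window of past daily observations (demand plus weather covariates) as input and a seven-day demand forecast as output, with arbitrary layer sizes. In practice the covariates, window length and training procedure would follow the data described in the next paragraph.

```python
import torch
import torch.nn as nn

class DemandForecaster(nn.Module):
    """Illustrative Transformer encoder mapping a window of past daily
    observations (water demand plus weather covariates) to the next
    seven days of irrigation demand. Sizes are arbitrary."""

    def __init__(self, n_features=8, d_model=64, horizon=7):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):              # x: (batch, past_days, n_features)
        h = self.encoder(self.input_proj(x))   # self-attention over the window
        return self.head(h[:, -1])     # forecast: (batch, horizon)

# A batch of 16 windows, each covering 30 days of 8 daily features.
model = DemandForecaster()
forecast = model(torch.randn(16, 30, 8))
print(forecast.shape)  # torch.Size([16, 7])
```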

Daily data from the irrigation campaigns from 2015 to 2022 in the Zujar Canal's Community of Irrigators, in Don Benito (Badajoz), were used to validate the results of this model. In total, more than 1,800 water consumption measurements were used to train the model, combined with data on temperature, precipitation, solar radiation, evapotranspiration, wind speed, humidity and crop types.

This has reduced the margin of error of previous models from 20% to just 2%. Applied to integrated decision-making support systems, the model can be very useful for managers of irrigation communities, offering an accurate forecast of the daily demand for irrigation water over the next seven days, both in contexts of water scarcity and high energy prices and as part of a commitment to sustainable resource management.