Thursday, December 12, 2024

  

Researchers design a new, more economical and sustainable material that uses sunlight to decontaminate the air



University of Córdoba

Image: The research team that carried out the work. Credit: University of Córdoba






Nitrogen oxides (NOx) are a group of gases comprising nitric oxide (NO) and nitrogen dioxide (NO2). They are produced, above all, by the burning of fossil fuels. Because of their harmful effects on human health and the environment, in recent years they have been in the scientific community's crosshairs. A research team at the Chemical Institute for Energy and the Environment (IQUEMA) at the University of Córdoba has developed a photocatalytic material capable of effectively reducing these gases, achieving results similar to those of other materials developed to date, but through a more economical and sustainable process.


Photocatalysis, or how light can decontaminate cities


There are chemical reactions that can be favored or accelerated in the presence of light. In the case of nitrogen oxides, light energy, in the presence of a material that functions as a catalyst, makes it possible to oxidize the nitrogen oxides in the atmosphere and convert them into nitrates and nitrites.
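
The release does not detail the material's surface chemistry, but photocatalytic NOx abatement is commonly summarized in the literature with a generic scheme along these lines (an illustrative sketch, not the mechanism reported for this composite):

    photocatalyst + light  →  e⁻ + h⁺        (light generates electron–hole pairs)
    h⁺ + H₂O               →  •OH + H⁺       (holes produce hydroxyl radicals)
    NO + •OH               →  HNO₂           (nitric oxide is oxidized)
    HNO₂ + •OH             →  NO₂ + H₂O
    NO₂ + •OH              →  HNO₃           (retained at the surface as nitrite/nitrate)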


The first author of this research paper, Laura Marín, explained that, unlike other photocatalytic reactions, which only operate under ultraviolet light, this new material boasts the advantage of working effectively with visible light, which is much more abundant and makes up most of the solar spectrum, allowing greater use to be made of the sun's energy. 


To achieve this, the research team synthesized a new compound by combining two different types of materials: carbon nitride (which allows the reaction to be activated in the presence of visible light) and layered double hydroxides, which have the capacity to catalyze the reaction, in addition to being economical and easy to produce at scale. 


Professor Ivana Pavlovic, one of the researchers who participated in the study, explained that the new process is capable of converting 65% of nitrogen oxides under visible light irradiation, a percentage very similar to that achieved by other photocatalysts, but with the advantage that this new system uses minerals such as magnesium and aluminum, which are "cheaper, abundant in nature, and benign, compared to other photocatalysts used to date, which contain cadmium, lead or graphene," the researcher pointed out.


Professor of Inorganic Chemistry and IQUEMA Director Luis Sánchez explained that the work thus represents an important step towards the large-scale development of a system able to decontaminate the air under real-world conditions, reducing one of the most common pollutant gases in cities, whose long-term effects can cause serious health problems.
 

Deep-sea hydrothermal vent bacteria hold key to understanding nitrous oxide reduction

Peer-Reviewed Publication

Hokkaido University

Image: Nitrosophilus labii HRV44T is a thermophilic chemolithoautotroph isolated from a deep-sea hydrothermal vent in the Okinawa Trough, Japan. It grows using hydrogen as an electron donor and N2O as an electron acceptor. Credit: Muneyuki Fukushi, Hokkaido University

Scientists unearth a clue to the molecular mechanisms involved in N2O reduction by deep-sea hydrothermal vent bacteria.

Nitrous oxide (N2O) is the third most important greenhouse gas after carbon dioxide and methane. In the atmosphere it can also be broken down to form ozone-depleting substances. Atmospheric concentrations of N2O have increased since the preindustrial era, making N2O reduction a global challenge.

The only known biological sink of N2O in the biosphere is microbial denitrification. Denitrification is a series of reduction reactions starting with nitrate and ending with the reduction of N2O to nitrogen gas, with no greenhouse effect. This reaction is unique to microorganisms possessing N2O reductase (N2OR; NosZ), highlighting the importance of identifying the molecular mechanisms mediating high N2O reduction activity. 
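
For reference, the canonical denitrification pathway and the enzymes classically assigned to each step (standard microbiology, not findings specific to HRV44T) run as follows:

    NO₃⁻  →  NO₂⁻  →  NO  →  N₂O  →  N₂
    with the four reductions catalysed, in order, by nitrate reductase (Nar/Nap),
    nitrite reductase (Nir), nitric oxide reductase (Nor), and nitrous oxide
    reductase (N2OR; NosZ), the last of which is the enzyme discussed here.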

Researchers at Hokkaido University, in collaboration with colleagues at the Institute of Physical and Chemical Research (RIKEN) and the University of Washington, investigated the molecular mechanisms underlying N2O reduction in the microbial species Nitrosophilus labii HRV44T, which Hokkaido University researchers isolated from a deep-sea hydrothermal vent in 2020. The team recently published their results in the journal iScience.

The research team developed a method that enabled them to analyze time-series gene expression at a genome-wide level, called the transcriptome, using RNA extracted from very few cells.

“Time series transcriptomic analysis of HRV44T in response to N2O was more challenging than expected,” said corresponding author Sayaka Mino, Assistant Professor at the Faculty of Fisheries Sciences, Hokkaido University. “We have performed transcriptomic analysis using methods often used in microbial studies, but we failed to capture the gene expression dynamics over short time scales because we could not get enough RNA from just a few cells. The method demonstrated in the current study requires only 1 ng of messenger RNA (mRNA), making it useful for analysis at low cell densities, from which RNA extraction is difficult.”

The time series transcriptomic profiling of HRV44T demonstrated that N2O is not a critical inducer of denitrification gene expression, including nos genes, which are expressed under anaerobic conditions even in the absence of nitrogen oxides as electron acceptors.

“We hypothesize that this feature may contribute to efficient energy metabolisms in deep-sea hydrothermal environments where alternative electron acceptors are occasionally depleted”, said Robert M. Morris, Associate Professor at the University of Washington.

Jiro Tsuchiya, the first author and a JSPS research fellow DC2 at Hokkaido University, and colleagues conducted a statistical analysis of time series data. “Our findings suggest that the denitrification gene nosZ is negatively regulated by transcriptional regulators that typically function as transcriptional activators in response to environmental changes. Although we still need to investigate this result, our study extends the understanding of the regulatory mechanisms controlling gene expression in N2O-reducers and may help increase their ability to respire N2O”, said Tsuchiya.

Deep-sea hydrothermal environments have steep chemical and physical gradients, making them hotspots for bioresources. This study demonstrates the potential of microorganisms in these environments to contribute to N2O mitigation and thereby help combat climate change. The search for microbial resources with high greenhouse gas reduction efficiency, the optimization of their abilities, and the elucidation of molecular mechanisms specific to these microorganisms will all contribute to developing technologies for microbial environmental remediation.

Image: Strain HRV44T rapidly respires N2O, forming bubbles at the gas-liquid interface. Credit: Jiro Tsuchiya, Hokkaido University

DEI

Diversity and inclusion accelerate the pace of innovation in robotics



Max Planck Institute for Intelligent Systems

Image: The authors of the study (from left to right): Alex Fratzl, Daniela Macari, Ksenia Keplinger, Christoph Keplinger. Credit: MPI-IS, W. Scheible




Stuttgart – The field of robotics is highly interdisciplinary, encompassing disciplines such as mechanical and electrical engineering, materials science, computer science, neuroscience and biology. The robotics community in itself is a champion of academic diversity. If this academic diversity is paired with workforce diversity – incorporating members of different ethnicities, genders, socioeconomic statuses, ages, life experiences, parental statuses or disabilities – and inclusive leadership, it drives even more disruptive innovation and creativity in the sciences. Hence, promoting diversity and inclusion within research teams is not merely a moral imperative; it is a catalyst for facilitating cutting-edge research and accelerating progress in the field of robotics.

Drawing on the literature, a comprehensive citation analysis, and expert interviews, a team of roboticists and behavioral scientists from the Max Planck Institute for Intelligent Systems in Stuttgart and colleagues derives seven main benefits of workforce diversity and inclusive leadership for robotics research. On December 11, 2024, the team published a viewpoint article in Science Robotics which outlines these benefits and additionally serves as a leadership guide for fellow roboticists who wish to accelerate the pace of innovation within their own teams.

“In this article, we highlight existing scientific literature, analyze citation metrics of robotics papers over the past 25 years, reflect on our personal experiences and observations from working in a diverse and inclusive environment, and share insights from interviews with ten established research leaders in robotics”, says Daniela Macari, who is a doctoral researcher in the Robotic Materials Department at MPI-IS and first author of the article.

The authors identified seven main benefits of diverse and inclusive teams:

  1. Analyses of publications across various fields show that diverse teams publish a higher number of papers and have more citations per paper. The newly published analysis of robotics papers over 25 years reveals that publications with at least 25% women authors receive significantly more citations and are more likely to rank among the most cited (a hypothetical sketch of this kind of analysis appears after this list).
  2. Diverse teams are better equipped to tackle complex and multifaceted issues from multiple angles, using a broader pool of methods and considering a wider array of potential solutions.
  3. Having a diverse team composition sparks unconventional ideas, ultimately driving disruptive innovation and breakthroughs in robotics.
  4. Scientific discoveries made by diverse teams are more likely to address the needs of a wider segment of society, resulting in technologies with greater societal relevance.
  5. Research teams that reflect the diversity of robotic technology users are better at identifying and mitigating biases in technology and are more likely to consider ethical implications from multiple perspectives.
  6. Promoting diversity and inclusive leadership enhances employee satisfaction and helps attract and retain talented researchers, thus keeping academic organizations at the forefront of innovation.
  7. Ensuring diverse representation in robotics research not only addresses historical imbalances and systemic inequities but also promotes fairness and equal opportunity for all—regardless of their background and based on their individual potential to advance robotic technology for the benefit of humanity.
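
As a purely hypothetical illustration of the kind of citation comparison summarized in point 1, the analysis could be set up roughly as follows; the file name, column names, and grouping logic are assumptions made for the sketch, not the authors' actual pipeline.

    # Hypothetical sketch: compare citations of robotics papers by author-gender mix.
    # Assumes a CSV with one row per paper and the columns
    # 'year', 'citations', 'n_authors', 'n_women_authors' (not the study's real data).
    import pandas as pd

    papers = pd.read_csv("robotics_papers.csv")

    # Flag papers whose author list is at least 25% women.
    papers["women_share"] = papers["n_women_authors"] / papers["n_authors"]
    papers["at_least_25pct_women"] = papers["women_share"] >= 0.25

    # Mean citations per paper for each group, year by year.
    summary = (papers
               .groupby(["year", "at_least_25pct_women"])["citations"]
               .mean()
               .unstack("at_least_25pct_women"))
    print(summary)

    # Share of papers with >=25% women authors among the most cited (top 5% by citations).
    threshold = papers["citations"].quantile(0.95)
    top = papers[papers["citations"] >= threshold]
    print(top["at_least_25pct_women"].mean())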

If robotics teams around the world embrace a diverse and inclusive environment and foster a sense of belonging and psychological safety, they may achieve higher levels of motivation and commitment to their work, resulting in increased productivity, more disruptive innovation, and maybe even most importantly – less bias in technology.

“Moreover, fostering such an environment, embracing diversity and inclusion within their teams, offers leaders the opportunity to grow into more effective and impactful leaders”, says Dr. Ksenia Keplinger, leader of the research group Organizational Leadership and Diversity at MPI-IS.

“Leading diverse and inclusive research teams challenges us to understand different perspectives and backgrounds, to customize our mentorship style to different group members, and to even adapt our research agendas to embrace new research thrusts aligned with team members’ skills and interests. While this requires constant effort and commitment, it yields long-term benefits in productivity and disruptive innovation for our teams”, adds Prof. Christoph Keplinger, Director of the Robotic Materials Department at MPI-IS.

The leadership guide the authors propose includes measures such as broadening recruitment pools, fostering a culture of inclusion, ensuring wide accessibility to resources, providing role models, and strengthening mentorship and allyship, among others.

  

Image: Team diversity paired with inclusive leadership facilitates cutting-edge research and drives broad applicability. Credit: MPI-IS

Reference:

Daniela Macari*, Alex Fratzl, Ksenia Keplinger*, Christoph Keplinger*: Accelerating the pace of innovation in robotics by fostering diversity and inclusive leadership. Science Robotics, Vol. 9, Issue 97, 11 December 2024. DOI: 10.1126/scirobotics.adt1958

*Corresponding authors

 SCI-FI-TEK

Data science improves the predictive accuracy of fusion plasma performance



Multi-fidelity modeling linking theory, simulation, and experiment



National Institutes of Natural Sciences

Image: Multi-fidelity information fusion. Theoretical and simulation estimates of turbulent transport (high-dimensional data that depend on plasma conditions such as density, temperature, and magnetic field) are used as low-fidelity data, while experimentally observed plasma confinement performance data are used as high-fidelity data. By incorporating correlations between the low- and high-fidelity data, multi-fidelity modeling compensates for the scarcity of high-fidelity data and enhances the predictive accuracy of plasma confinement performance. Credit: National Institute for Fusion Science




Fusion energy research is being pursued around the world as a means of solving energy problems. Magnetic confinement fusion reactors aim to extract fusion energy by confining extremely hot plasma in strong magnetic fields. Their development is a comprehensive engineering project involving many advanced technologies, such as superconducting magnets, reduced-activation materials, and beam and wave heating devices. In addition, predicting and controlling the confined plasma, in which numerous charged particles and electromagnetic fields interact in complex ways, is an interesting research subject from a physics perspective.

 

To understand the transport of energy and particles in confined plasmas, theoretical studies, numerical simulations on supercomputers, and experimental measurements of plasma turbulence are being conducted. Physics-based numerical simulations can predict turbulent transport in plasmas and agree with experimental observations to some extent, but deviations from experiment sometimes appear, so the quantitative reliability of the predictions remains an issue. Empirical prediction models based on experimental data have also been developed, yet it is uncertain whether models built only on data from existing devices can be extrapolated to future experimental devices. Theory/simulation and experimental data thus each have advantages and disadvantages, and neither alone can fully compensate for the weaknesses of the other. Given plenty of sufficiently accurate data, turbulent transport models can be created through machine learning, for example with neural networks. However, for future nuclear fusion burning plasmas that have not yet been realized, the available data are often lacking, either in quantitative accuracy or in the amount needed to cover the parameter range of interest.

 

To solve this problem, we adopted the concept of multi-fidelity modeling, which improves predictions built on a limited amount of highly accurate (high-fidelity) data by supplementing it with less accurate but far more plentiful low-fidelity data. This study introduces a multi-fidelity data fusion method called nonlinear auto-regressive Gaussian process regression (NARGP) into turbulent transport modeling for plasmas. In a conventional regression problem, a single pair of input and output data is given as a set, and a regression model is built from those pairs. A multi-fidelity problem, by contrast, has multiple outputs with different fidelities for the same input. The idea of NARGP is to express the prediction of the high-fidelity data as a function of both the input and the low-fidelity data. The method is demonstrated to improve the prediction accuracy of plasma turbulent transport models in cases such as (i) integration of low- and high-resolution simulation data, (ii) prediction of a turbulent diffusion coefficient from an experimental fusion plasma data set, and (iii) integration of simplified theoretical models with turbulence simulation data. By incorporating the physical-model-based predictability of theory and simulation as low-fidelity data, the shortage of the quantitative experimental data we want to predict, treated as the high-fidelity type, can be compensated for, improving prediction accuracy. These results have been published in Scientific Reports, a journal of the Nature Portfolio.
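
A minimal sketch of the NARGP idea described above, written with scikit-learn Gaussian processes; the toy functions, data sizes, and kernels are illustrative assumptions, not the code or data used in the study, and for simplicity the low-fidelity posterior mean is propagated instead of full posterior samples.

    # Minimal NARGP-style multi-fidelity sketch (illustrative only).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    # Toy 1-D problem: a cheap low-fidelity model and a scarce high-fidelity "truth".
    def low_fidelity(x):                 # stand-in for a simplified transport estimate
        return np.sin(8.0 * x)

    def high_fidelity(x):                # stand-in for the measured confinement quantity
        return (x - 0.5) * np.sin(8.0 * x) + 0.1 * x

    x_lo = np.linspace(0.0, 1.0, 40).reshape(-1, 1)    # plentiful low-fidelity inputs
    x_hi = rng.uniform(0.0, 1.0, 6).reshape(-1, 1)     # only a few high-fidelity inputs

    # Step 1: fit a GP to the low-fidelity data alone.
    gp_lo = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp_lo.fit(x_lo, low_fidelity(x_lo).ravel())

    # Step 2 (the NARGP idea): regress the high-fidelity output on the input
    # AND the low-fidelity prediction at that input.
    aug_hi = np.hstack([x_hi, gp_lo.predict(x_hi).reshape(-1, 1)])
    gp_hi = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp_hi.fit(aug_hi, high_fidelity(x_hi).ravel())

    # Prediction at new inputs: pass them through the low-fidelity GP first.
    x_new = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    aug_new = np.hstack([x_new, gp_lo.predict(x_new).reshape(-1, 1)])
    y_pred, y_std = gp_hi.predict(aug_new, return_std=True)

In the study's setting, the input would correspond to plasma parameters, the low-fidelity output to the theory or simulation estimate of turbulent transport, and the high-fidelity output to the experimentally observed confinement quantity.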

 

Until now, turbulent transport modeling research has been dominated by two approaches: one pursuing predictions based on physical models from theory and simulation and the other constructing empirical models to fit existing experimental data. The present research paves the way to a new method that combines the best of both approaches: the predictability of theory and simulation based on physical models, and the quantitative information obtained from experimental data. By doing so, we are attempting to realize a prediction method for future nuclear fusion burning plasmas that combines the knowledge of simulations with the accuracy of experimental data.

 

The multi-fidelity modeling approach can be applied to various combinations of multi-fidelity data, including simulation and experimental data, simplified theory and simulation, and low- and high-accuracy simulations. It is therefore expected to be useful not only in fusion plasma research but also in other fields, as a general method for constructing fast and accurate prediction models from a small amount of high-precision data. It will contribute to the performance prediction and design optimization of fusion reactors and to the development of new technologies in other fields.

 AU CONTRAIRE

Spanish physicists disagree with the British Sleep Society and defend the time change in the United Kingdom




University of Seville




The seasonal time change synchronises the start of human activity with morning light (dawn), allowing more daytime leisure in summer afternoons.  This is the focus of the article that Jorge Mira Pérez and José María Martín Olalla, professors at the University of Santiago de Compostela (USC) and the University of Seville (US), have just published in the Journal of Sleep Research. In the article, they analyse the naturalness and usefulness of the seasonal time change in response to a position statement issued by the British Sleep Society (BSS) that calls for the end of the time change in the United Kingdom and the permanent adoption of winter time.

The researchers review the history of the seasonal time change in the UK, highlighting its almost uninterrupted application since 1916, which makes it an optimal case for describing the practice and its effects. They point out that for more than a century the time change has provided a natural experiment in adapting the working day to the seasons, allowing an extra hour of daytime leisure in the summer evenings.

 

"If the population had perceived a chronic misalignment during daylight saving time, they would have counteracted it by changing their habits".

Based on time-use surveys, the authors point out that the collective acceptance of the time change is demonstrated by the fact that, in 100 years, British society has neither eliminated nor counteracted the change by seasonally adjusting its timetables. Referring to a typical working day in the UK, the study recalls that "since 1916 the British have preferred a seasonal adjustment with 9 to 5 in winter and 8 to 4 in summer, which thanks to the time change remains 9 to 5; with the advantage of a constant social reference throughout the year (9 to 5) and, at the same time, a seasonal adaptation". The authors add that the predictions of the original proponents of the practice seem to have been fulfilled: people appreciate starting their working day closer to dawn and thus being able to enjoy more daytime leisure time during the summer evenings. "If the British population had perceived a chronic misalignment during the summer time months, they would have counteracted it by changing their habits".

Martín-Olalla and Mira point out that the BSS subscribes to the rationale for the time change in its position statement: morning light plays a crucial role in our daily activation. The BSS emphasises this role in winter to rule out permanent daylight saving time because of the morning darkness it would cause in winter. The nuance that the BSS and other similar societies overlook is that the sun rises earlier in summer, which encourages an earlier start to human activity: in the UK, in the summer, the sun rises at least four hours earlier than in winter. Martín-Olalla and Mira point out that the function of the seasonal time change is to adapt work activity to the morning light of each season.

The authors conclude by pointing out that in the current discussion on the seasonal time change the polls show a majority in favour of summer time over winter time. "This is another indication that the 1916 seasonal proposal continues to be accepted, now by today's generations. It's like an outcry: we love our current summer time schedules, please don't move them back."  

 

 

Technique to forecast where the next big quake will start

New Zealand fault study yields global insights 

Peer-Reviewed Publication

University of California - Riverside

Image: Geologist Tim Little measuring curved scratches on the Alpine Fault. Credit: Nic Barth/UCR

Scientists have a new method for studying faults that could improve earthquake forecasts, shedding light on where quakes start, how they spread, and where the biggest impacts might be.

A paper in the journal Geology describes the method, which helps determine the origins and directions of past earthquake ruptures — information valuable for modeling future earthquake scenarios on major faults.

By studying subtle curved scratches left on the fault plane after an earthquake, similar to the tire marks left after a drag race, scientists can determine the direction from which past earthquakes arrived at that location. 

“Fault planes accumulate these curved scratch marks, which until now we didn’t know to look for or how to interpret,” explained UC Riverside geologist and paper first author Nic Barth. 

Curved scratches have been observed on fault surfaces following several historic ruptures including the 2019 Ridgecrest earthquakes in California. Computer modeling was used to confirm that the shape of the curvature indicates the direction the earthquake came from. 

This study is the first to demonstrate that this method can be applied to fingerprint the locations of prehistoric earthquakes. It can be applied to faults worldwide, helping to forecast the effects of possible future earthquakes and improve hazard assessments globally.

“The scratches indicate the direction and origin of a past earthquake, potentially giving us clues about where a future quake might start and where it will go. This is key for California, where anticipating the direction of a quake on faults like San Andreas or San Jacinto could mean a more accurate forecast of its impact,” Barth said.

Where an earthquake starts and where it goes can have a big influence on the intensity of shaking and the amount of time before people feel it. For example, scientists have shown that a large earthquake originating on the San Andreas fault near the Salton Sea that propagates to the north will direct more damaging energy into the Los Angeles region than a closer San Andreas earthquake that travels away from LA.

More optimistically, such an earthquake that starts further away could allow cellular alert systems to give Angelenos a warning of about a minute before the shaking arrives, which could save lives.

New Zealand’s Alpine Fault is known for its regular timing of large earthquakes, which makes it a more straightforward choice for studying fault behavior. The fault is known to rupture at almost metronomic intervals of about 250 years.  

This study provides two valuable insights for the Alpine Fault. First, the most recent quake, in 1717, traveled from south to north, a scenario that has been modeled to produce much greater shaking in populated areas. Second, it establishes that large earthquakes can start at either end of the fault, which was not previously known.

“We can now take the techniques and expertise we have developed on the Alpine Fault to examine faults in the rest of the world. Because there is a high probability of a large earthquake occurring in Southern California in the near-term, looking for these curved marks on the San Andreas fault is an obvious goal,” Barth said.

Ultimately, Barth and his team hope that earthquake scientists around the world will start applying this new technique to unravel the past history of their faults. Barth is particularly enthusiastic about applying this technique across California’s fault network, including the notorious San Andreas Fault, to improve predictions and preparedness for one of the most earthquake-prone regions in the United States. 

“There is no doubt that this new knowledge will enhance our understanding and modeling of earthquake behavior in California and globally,” he said.


Image: Examples of curved scratches documented in this study. Credit: Nic Barth/UCR


Image: Lead author Nic Barth at the Alpine Fault. The Australian Plate is to the left, the Pacific Plate is to the right. Credit: Jesse Kearse/Kyoto University