Thursday, May 20, 2021

The Aqueduct of Constantinople: Managing the longest water channel of the ancient world

Double water channels may have been used to maintain the system while enabling constant operation

JOHANNES GUTENBERG UNIVERSITAET MAINZ

Research News

IMAGE: THE TWO-STORY KURŞUNLUGERME BRIDGE, PART OF THE AQUEDUCT SYSTEM OF CONSTANTINOPLE: TWO WATER CHANNELS PASSED OVER THIS BRIDGE - ONE ABOVE THE OTHER.

CREDIT: PHOTO/©: JIM CROW

Aqueducts are among the most impressive examples of the art of construction in the Roman Empire. Even today, they still provide us with new insights into aesthetic, practical, and technical aspects of construction and use. Scientists at Johannes Gutenberg University Mainz (JGU) investigated the longest aqueduct of the ancient world, the 426-kilometer-long Aqueduct of Valens supplying Constantinople, and gained new insights into how this structure was maintained over the centuries. It appears that the channels had been cleaned of carbonate deposits just a few decades before the system was abandoned.

The late Roman aqueduct provided water for the population of Constantinople

The Roman Empire was ahead of its time in many ways, with a strong commitment to building infrastructure for its citizens that we still find fascinating today. This includes architecturally inspiring temples, theaters, and amphitheaters, but also a dense road network and impressive harbors and mines. "However, the most ground-breaking technical achievement of the Roman Empire lies in its water management, particularly its long-distance aqueducts that delivered water to cities, baths, and mines," said Dr. Gül Sürmelihindi from the Geoarchaeology group at Mainz University. Aqueducts were not a Roman invention, but in Roman hands long-distance aqueducts were developed further and spread widely throughout one of the largest empires in history.

Almost every city in the Roman Empire had an ample supply of fresh running water, in some cases with a larger volume than is available today. "These aqueducts are mostly known for their impressive bridges, such as the Pont du Gard in southern France, which are still standing today after two millennia. But they are most impressive for the way construction problems were solved, which would be daunting even for modern engineers," said JGU Professor Cees Passchier. More than 2,000 long-distance Roman aqueducts are known to date, and many more are awaiting discovery. The study undertaken by Dr. Gül Sürmelihindi and her research team focuses on the most spectacular late Roman aqueduct, the water supply lines of Constantinople, present-day Istanbul in Turkey.

Carbonate deposits provide insights into Byzantine water management

In AD 324, the Roman Emperor Constantine the Great made Constantinople the new capital of the Roman Empire. Although the city lies at the geopolitically important crossroads of land routes and seaways, the supply of fresh water was a problem. A new aqueduct was therefore built to supply Constantinople from springs 60 kilometers to the west. As the city grew, the system was expanded in the 5th century to reach springs as far as 120 kilometers from the city in a straight line. This gave the aqueduct a total length of at least 426 kilometers, making it the longest of the ancient world. It consisted of vaulted masonry channels large enough to walk through, built of stone and concrete, 90 large bridges, and many tunnels up to 5 kilometers long.


CAPTION

The 426-kilometer-long aqueduct system of Constantinople

CREDIT

ill./©: Cees Passchier

Sürmelihindi and her team studied carbonate deposits from this aqueduct, i.e., the limescale that formed in the running water, which can be used to obtain important information about water management and the palaeoenvironment at that time. The researchers found that the entire aqueduct system only contained thin carbonate deposits, representing about 27 years of use. From the annals of the city, however, it is known that the aqueduct system worked for more than 700 years, until at least the 12th century. "This means the entire aqueduct must have been maintained and cleaned of deposits during the Byzantine Empire, even shortly before it ceased working," explained Sürmelihindi. Carbonate deposits can block the entire water supply and have to be removed from time to time.
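The dating logic can be illustrated with a minimal sketch in Python (illustrative only; the round numbers below are taken from the figures quoted in this release, not from the measured data):

# Illustrative sketch: annual carbonate couplets record years of uninterrupted
# flow, so a thin deposit implies that older deposits were removed by cleaning.
annual_layers_counted = 27        # roughly the number of couplets reported here
documented_lifetime_years = 700   # minimum documented operation of the aqueduct

recorded_fraction = annual_layers_counted / documented_lifetime_years
print(f"Deposits record ~{annual_layers_counted} years, "
      f"about {recorded_fraction:.0%} of the documented lifetime;")
print("the rest of the carbonate must have been removed during maintenance.")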

Double construction over 50 kilometers was likely built for maintenance

Although the aqueduct is late Roman in origin, the carbonate found in the channel dates from the Byzantine Middle Ages. This made the researchers think about possible cleaning and maintenance strategies, because cleaning and repairing a channel 426 kilometers long implies that it cannot be used for weeks or months, while the city's population depends on its water supply. They then found that 50 kilometers of the central part of the water system were built as a double channel, with one aqueduct channel above the other, crossing on two-story bridges. "It is very likely that this system was set up to allow for cleaning and maintenance operations," said Passchier. "It would have been a costly but practical solution."

Unfortunately for the research team, it is no longer possible to study the exact operation of the system. One of the most imposing bridges, that of Ballıgerme, was blown up with dynamite in 2020 by treasure hunters who erroneously believed they could find gold in the ruins.



CAPTION

The Ballıgerme Bridge, part of the aqueduct system of Constantinople, which was destroyed by treasure hunters.

CREDIT

photo/©: Jim Crow

Images:

https://download.uni-mainz.de/presse/09_geowiss_tektonik_aquaedukt_valens_01.jpg

The 426-kilometer-long aqueduct system of Constantinople ill./©: Cees Passchier

https://download.uni-mainz.de/presse/09_geowiss_tektonik_aquaedukt_valens_02.jpg

The Ballıgerme Bridge, part of the aqueduct system of Constantinople, which was destroyed by treasure hunters. photo/©: Jim Crow

https://download.uni-mainz.de/presse/09_geowiss_tektonik_aquaedukt_valens_03.jpg

The two-story Kurşunlugerme Bridge, part of the aqueduct system of Constantinople: Two water channels passed over this bridge - one above the other. photo/©: Jim Crow

https://download.uni-mainz.de/presse/09_geowiss_tektonik_aquaedukt_valens_04.jpg

Carbonate deposit from the aqueduct system of Constantinople showing around 25 annual layers photo/©: Cees Passchier

https://download.uni-mainz.de/presse/09_geowiss_tektonik_aquaedukt_valens_05.jpg

Dr. Gül Sürmelihindi in the main water channel of the 426-kilometer-long aqueduct system of Constantinople photo/©: Cees Passchier

Related links:

https://www.geosciences.uni-mainz.de/tectonics-structural-geology-group/ - Tectonics and Structural Geology group at the JGU Institute of Geosciences ;

https://www.geosciences.uni-mainz.de/geoarchaeology/ - Geoarchaeology group at the JGU Institute of Geosciences

https://www.geosciences.uni-mainz.de/ - JGU Institute of Geosciences

Read more:

https://www.uni-mainz.de/presse/aktuell/12510_ENG_HTML.php - press release "The hydraulics of the world's first industrial plant: a unique construction in the Barbegal water mills" (13 Nov. 2020)

Pepsin-degradable bionylon plastics from itaconic and amino acids

JAPAN ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY

Research News

IMAGE: DEVELOPMENT STRATEGY FOR PEPSIN-DEGRADABLE BIONYLONS FROM ITACONIC ACID AND LEUCINE.

CREDIT: IMAGE COURTESY: TATSUO KANEKO AND MOHAMMAD ASIF ALI FROM JAPAN ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY.

Point:

  • Novel chiral diacid monomers were synthesized.
  • Chirally interactive BioNylons were prepared.
  • BioNylons showed higher thermal/mechanical performance than conventional nylons.
  • BioNylons disintegrated and degraded in the presence of pepsin.

Summary:

Marine plastic waste has become a more serious problem year by year. One of the worst issues is that ocean creatures are endangered by mistakenly swallowing plastic debris. Conventional biodegradable plastics can be broken down by digestive enzymes, but their performance is too low for practical use. In this study, researchers from JAIST used bio-derived resources such as itaconic acid and amino acids to synthesize high-performance BioNylons that can be degraded by pepsin.

Ishikawa, Japan - Currently available conventional nylons such as Nylon 6, Nylon 66, and Nylon 11 are nondegradable. BioNylons derived from itaconic acid have shown higher performance than conventional nylons and are degradable in soil, but their degradability by digestive enzymes had not been confirmed.

To tackle these issues, a team of researchers from the Japan Advanced Institute of Science and Technology (JAIST) is investigating the synthesis of new BioNylons that can be degraded by the enzyme pepsin. Their latest study, published in Advanced Sustainable Systems (Wiley-VCH) in April 2021, was led by Professor Tatsuo Kaneko and Dr. Mohammad Asif Ali.

In this study, BioNylons were synthesized from chemically developed novel chiral dicarboxylic acids derived from renewable itaconic acid and amino acids (D- or L-leucine). The BioNylons were then prepared via melt polycondensation of hexamethylenediamine with the chirally interactive heterocyclic diacid monomers, as shown in Figure 1. The chiral interactions arise from the diastereomeric mixture of the racemic pyrrolidone ring and the chiral leucine units. As a result, the polyamides showed a glass transition temperature, Tg, of approximately 117 °C and a melting temperature, Tm, of approximately 213 °C, higher than those of conventional BioNylon 11 (Tg of approximately 57 °C). The BioNylons also showed high Young's moduli, E, of 2.2-3.8 GPa and mechanical strengths, σ, of 86-108 MPa. Such materials could be used for fishing nets, ropes, parachutes, and packaging materials as a substitute for conventional nylons. The BioNylons containing peptide linkages were enzymatically degraded by pepsin, a digestive enzyme found in the mammalian stomach, which suggests that pepsin degradation could translate into biodegradation in the stomachs of marine mammals. Such molecular design of high-performance nylons through control of chirality could help establish a sustainable, carbon-negative society and save energy through weight reduction.

###

This research was carried out with the support of the Environment Research and Technology Development Fund (1-2005) of the Environmental Restoration and Conservation Agency (ERCA); Principal Investigator: Prof. Tatsuo Kaneko.

New marine sulfur cycle model after the Snowball Earth glaciation

SCIENCE CHINA PRESS

Research News

IMAGE: A COMPILATION OF PYRITE SULFUR ISOTOPE DATA SHOWING GLOBAL OCCURRENCES OF SUPERHEAVY PYRITE IN THE CRYOGENIAN INTERGLACIAL PERIOD. AFTER THE STURTIAN GLACIATION, MID-DEPTH SEAWATER COLUMN WAS SULFIDIC WHICH WAS SUSTAINED...

CREDIT: ©SCIENCE CHINA PRESS

The Sturtian Snowball Earth glaciation (717-660 million years ago) represents the most severe icehouse climate in Earth's history. Geological evidence indicates that, during this glaciation, ice sheets extended to low latitudes, and model simulations suggest a globally frozen ocean as well as a prolonged shutdown of the hydrological cycle. The Snowball Earth hypothesis posits that the Sturtian global glaciation was triggered directly by intense continental weathering that scavenged atmospheric CO2, while the globally frozen condition was terminated by extremely high atmospheric CO2 levels (~350 times the present atmospheric level) that accumulated from synglacial volcanic eruptions over tens of millions of years. The deglaciation was abrupt, lasting hundreds to thousands of years, and the sharp transition to hothouse conditions was accompanied by extremely high weathering rates and followed by perturbations of the marine sulfur cycle.

An unusual perturbation of the marine sulfur cycle after the Sturtian glaciation is hinted at by the worldwide precipitation of isotopically superheavy sedimentary pyrite (FeS2) in the interglacial sediments. In the classic sulfur cycle framework, pyrite, the predominant sulfide mineral in sediments, is always depleted in 34S compared with seawater sulfate, because sulfate-reducing microbes preferentially utilize 32S-enriched sulfate to generate sulfide. However, a compilation of pyrite sulfur isotope data shows extremely high values (up to +70‰, clearly higher than coeval seawater sulfate values) in the aftermath of the Sturtian glaciation. Although superheavy pyrite has also been reported from other geological periods, the Cryogenian interglacial interval after the Sturtian glaciation is the only time when superheavy pyrite formed on a global scale for ~10 million years. The traditional sulfur cycle model does not satisfactorily explain this long-lived, global occurrence of superheavy pyrite.

Dr. Lang and his colleagues proposed a novel sulfur cycle model that incorporates volatile organosulfur compounds (VOSC) to explain the global occurrence of superheavy pyrite after the Sturtian glaciation. They carried out detailed petrographic observations and collected paired pyrite content and sulfur isotope data of superheavy pyrite from the Cryogenian interglacial deposits of the Datangpo Formation in South China. Both the petrographic and geochemical data indicate that the Cryogenian interglacial oceans were mainly sulfidic (anoxic and H2S-enriched). Under sulfidic conditions, VOSC could be pervasively generated via sulfide methylation. Because VOSC always have lower sulfur isotope values than seawater sulfate, continuous VOSC emission would elevate the sulfur isotope value of the residual sulfur pool in the sulfidic seawater, resulting in a vertical isotopic gradient in the water column and the precipitation of superheavy pyrite at or near the seafloor.
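The isotope argument can be made concrete with a minimal Rayleigh-distillation sketch in Python. This is not the authors' model: the initial δ34S value and the VOSC fractionation below are assumptions chosen only to show that continuously removing 34S-depleted sulfur drives the residual pool, and hence pyrite formed from it, toward very heavy values.

import numpy as np

# Rayleigh-type sketch (assumed numbers, for illustration only): VOSC removed
# from a sulfidic water column is depleted in 34S relative to the pool, so the
# residual dissolved sulfur becomes progressively heavier as emission continues.
delta0 = 20.0     # assumed initial delta34S of the sulfide pool (per mil)
epsilon = -30.0   # assumed fractionation of emitted VOSC relative to the pool (per mil)

f = np.linspace(1.0, 0.05, 50)                 # fraction of sulfur remaining in the pool
delta_residual = delta0 + epsilon * np.log(f)  # Rayleigh approximation

for frac, d in zip(f[::10], delta_residual[::10]):
    print(f"fraction remaining {frac:.2f} -> residual delta34S {d:+.1f} per mil")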

Their findings demonstrate that superheavy pyrite formation requires both high microbial sulfate reduction rates and high VOSC formation rates to maintain such an unusual perturbation of the marine sulfur cycle. As organic matter and sulfate are prerequisites for these reactions, the ~10-million-year occurrence of superheavy pyrite suggests continuously high primary productivity and intense continental chemical weathering after the Sturtian glaciation. These findings improve our understanding of the Snowball Earth event and the ancient marine sulfur cycle.

###

See the article:

Cracking the superheavy pyrite enigma: possible roles of volatile organosulfur compound emission. Lang et al. National Science Review. https://doi.org/10.1093/nsr/nwab034

The brain game: What causes engagement and addiction to video games?

Researchers have analyzed the objective and subjective aspects of game-playing experience based on analogies of physical models of motion

JAPAN ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY

Research News

IMAGE: THE GRAPH DEPICTS VARIOUS QUANTITIES OF THE MODEL PLOTTED AGAINST THE DIFFICULTY OF SOLVING UNCERTAINTY IN A GAME (M). REPRESENTATIVE EXAMPLES OF BOARD GAMES, GAMBLING GAMES, AND SPORTS ARE PLACED...

CREDIT: HIROYUKI IIDA

Ishikawa, Japan - History tells us that games are an inseparable facet of humanity, and mainly for good reasons. Advocates of video games laud their pros: they help players develop problem-solving skills, socialize, relieve stress, and exercise the mind and body--all at the same time! However, games also have a dark side: the potential for addiction. The explosive growth of the video game industry has spawned all sorts of games targeting different groups of people. This includes digital adaptations of popular board games like chess, but also extends to gambling-type games like online casinos and betting on horse races. While virtually all engaging forms of entertainment lend themselves to addictive behavior under specific circumstances, some video games are more commonly associated with addiction than others. But what exactly makes these games so potentially addictive?

This is a difficult question to answer because it deals directly with aspects of the human mind, and the inner workings of the mind are mostly a mystery. However, there may be a way to answer it by leveraging what we do know about the physical world and its laws. At the Japan Advanced Institute of Science and Technology (JAIST), Japan, Professor Hiroyuki Iida and colleagues have been pioneering a methodology called "motion in mind" that could help us understand what draws us towards games and makes us want to keep reaching for the console.

Their approach is centered around modelling the underlying mechanisms that operate in the mind when playing games through an analogy with actual physical models of motion. For example, the concepts of potential energy, forces, and momentum from classical mechanics are considered to be analogous to objective and/or subjective game-related aspects, including pacing of the game, randomness, and fairness. In their latest study, published in IEEE Access, Professor Iida and Assistant Professor Mohd Nor Akmal Khalid, also from JAIST, linked their "motion in mind" model with the concepts of engagement and addiction in various types of games from the perceived experience of the player and their behaviors.

The researchers employed an analogy of the law of conservation of energy, mass, and momentum to mathematically determine aspects of the game-playing experience in terms of gambling psychology and the subjective/objective perception of the game. Their conjectures and results were supported by a "unified conceptual model of engagement and addiction," previously derived from ethnographic and social science studies, which suggests that engagement and addiction are two sides of the same coin. By comparing and analyzing a variety of games, such as chess, Go, basketball, soccer, online casinos, and pachinko, among others, the researchers showed that their law of conservation model expanded upon this preconception, revealing new measures for engagement while also unveiling some of the mechanisms that underlie addiction.

Their approach also provides a clearer view of how the perceived (subjective) difficulty of solving uncertainties during a given game can differ from and even outweigh the real (objective) one, and how this affects our behavior and reactions. "Our findings are valuable for understanding the dynamics of information in different game mechanics that have an impact on the player's state of mind. In other words, this helps us establish the relationship between the game-playing process and the associated psychological feeling," explains Professor Iida. Such insight will help developers make game content more engaging, healthy, and personalized, both in the short and long term.

Further studies should make the tailoring of game experiences much easier, as Professor Iida remarks: "Our work is a stepping-stone for linking behavioral psychology and game-playing experiences, and soon we will be able to mechanistically manipulate and adapt the notions of engagement and addiction towards specific needs and use-cases." Let's hope these findings can take us one step closer to curbing game addiction while keeping the fun intact!

###

About Japan Advanced Institute of Science and Technology, Japan

Founded in 1990 in Ishikawa prefecture, the Japan Advanced Institute of Science and Technology (JAIST) was the first independent national graduate school in Japan. Now, after 30 years of steady progress, JAIST has become one of Japan's top-ranking universities. JAIST has multiple satellite campuses and strives to foster capable leaders with a state-of-the-art education system where diversity is key; about 40% of its alumni are international students. The university has a unique style of graduate education based on a carefully designed coursework-oriented curriculum to ensure that its students have a solid foundation on which to carry out cutting-edge research. JAIST also works closely with both local and overseas communities by promoting industry-academia collaborative research.

About Professor Hiroyuki Iida from Japan Advanced Institute of Science and Technology, Japan

Dr. Hiroyuki Iida received his Ph.D. in 1994 on Heuristic Theories on Game-Tree Search from the Tokyo University of Agriculture and Technology, Japan. Since 2005, he has been a Professor at JAIST, where he is also Trustee and Vice President of Educational and Student Affairs. He is the head of the Iida laboratory and has published over 300 papers, presentations, and books. His current research interests include artificial intelligence, game informatics, game theory, mathematical modeling, search algorithms, game-refinement theory, game tree search, and entertainment science.

Funding information

This study was funded by a grant from the Japan Society for the Promotion of Science in the framework of the Grant-in-Aid for Challenging Exploratory Research (Grant Number 19K22893).

Model bias corrections for reliable projection of extreme El Niño frequency change

SCIENCE CHINA PRESS

Research News

IMAGE: FIGURE 2. TROPICAL PACIFIC MEAN-STATE CHANGE DURING 2011-2098, AFTER REMOVING THE IMPACTS OF 13 COMMON BIASES IN THE CMIP5 ORIGINAL PROJECTIONS, IN (A) SST, (B) CONVECTION (OMEGA, UPWARD NEGATIVE), AND...

CREDIT: ©SCIENCE CHINA PRESS

A reliable projection of how the frequency of extreme El Niño events will change in a future warmer climate is critical for managing socio-economic activities and human health, strategic policy decisions, environmental and ecosystem management, and disaster mitigation in many parts of the world. Unfortunately, long-standing common biases in CMIP5 models, despite enormous efforts in numerical model development over the past decades, make it hard to achieve a reliable projection of the change in extreme El Niño frequency. While increasing attention has been paid to estimating the possible impacts of model biases, it is not yet fully understood whether, and by how much, the models' common biases affect the projection of extreme El Niño frequency change in the coming decades. This is an urgent question to be solved.

According to the original projection of the CMIP5 models, the frequency of extreme El Niño events, defined by Niño3 convection, would double in the future. However, Prof. Luo and his research team find that models which produce a centennial easterly wind trend in the tropical Pacific during the 20th century project a weak increase, or even a decrease, in extreme El Niño frequency in the 21st century. Since the centennial easterly trend is systematically underestimated in all CMIP5 models compared to the historical record, a reasonable question is whether this common bias might lead to an over-estimated increase in extreme El Niño frequency in the models' original projections (Figure 1a).

CAPTION

Figure 1. Inter-model correlation between the projected change in the extreme El Niño frequency during 2011-2098 and the simulated zonal wind trend during 1901-2010 (a), and the change in mean-state convection (represented by omega) over Niño3 region (b). Inter-model correlation between the Niño3 mean-state convection change and the Pacific east-minus-west SST gradient change (c).

CREDIT

©Science China Press

Based on their results, the change in the frequency of extreme El Niño, defined here by the total convection in the eastern equatorial Pacific (i.e., the sum of the mean-state and anomalous vertical velocity in the Niño3 region), is mostly determined by the mean-state change in Niño3 convection (Figure 1b). In addition, the change in mean-state convection in the Niño3 region is largely controlled by the change in the east-minus-west sea surface temperature (SST) gradient that drives the tropical Pacific Walker circulation (Figure 1c). Therefore, the change in extreme El Niño frequency defined by the total convection in the Niño3 region boils down to the change in the tropical Pacific east-minus-west SST gradient (i.e., an "El Niño-like" or "La Niña-like" change).
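This chain of reasoning can be illustrated with a toy calculation in Python (purely illustrative, with assumed numbers rather than CMIP5 output): if extreme events are counted as years in which the total Niño3 omega (mean state plus anomaly) is negative, then shifting the mean state alone changes the event count even when the interannual anomalies stay the same.

import numpy as np

# Toy illustration, not the CMIP5 analysis: count "extreme" years as those in
# which mean-state omega plus a random interannual anomaly turns negative
# (i.e., net upward motion / deep convection over the Nino3 region).
rng = np.random.default_rng(0)
anomaly = rng.normal(0.0, 1.0, 10000)    # assumed standardized omega anomalies

for mean_state in (1.5, 1.0, 0.5):       # hypothetical mean-state omega values
    total = mean_state + anomaly
    freq = 100.0 * np.mean(total < 0.0)  # events per 100 years
    print(f"mean-state omega {mean_state:+.1f}: ~{freq:.1f} extreme events per 100 yr")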

By identifying the systematic impacts of 13 common biases of CMIP5 models in simulating the tropical climate over the past century, they find that the future change in the tropical Pacific east-minus-west SST gradient was significantly over-estimated in the original projections. In stark contrast to the original "El Niño-like" SST warming projected by the CMIP5 models, the Pacific SST change, after removing the systematic impacts of the models' 13 common biases, shows that the strongest SST warming would occur in the tropical western Pacific rather than in the east (i.e., a "La Niña-like" SST warming change), coupled with stronger trade winds across the Pacific and suppressed convection in the eastern Pacific (Figure 2).

As mentioned above, the change in the frequency of the so-defined extreme El Niño is determined by the change in the Pacific mean state. Therefore, by carefully removing the impacts of the models' common biases on the mean-state changes, Luo and colleagues find that the extreme El Niño frequency would remain almost unchanged in the future (Figure 3).

CAPTION

Figure 3. El Niño frequency change in the 21st century. Red, green, and gray dots indicate extreme El Niño events (i.e., negative Niño3 omega), moderate El Niño events (i.e., positive omega but with a standardized Niño3 SST anomaly greater than 0.5), and non-El Niño years, respectively. Results are based on (a) the historical simulations during 1901-2010 and (b, c) the original and corrected projections of the extreme El Niño frequency in the RCP4.5 scenario during 2011-2098. All frequencies are calculated per 100 years. (d) Histogram of the extreme El Niño frequency in each magnitude bin of the Niño3 SST anomaly (interval: 0.5 standard deviations). The 95% confidence interval of the extreme El Niño frequency is estimated by a bootstrap test. (e) Frequency of extreme and moderate El Niño, defined by the Niño3 SST anomaly in boreal winter being greater than 1.5 standard deviations (red line) and between 0.5 and 1.5 standard deviations (green line), respectively. The frequencies of extreme and moderate El Niño events (per 100 years) in the historical simulations and future projections are given as gray and blue numbers, respectively. (f) As in (e), but for results defined by the Niño3 omega anomaly in boreal winter.

CREDIT

©Science China Press

In summary, this finding highlights that the impacts of the models' common biases can be large enough to reverse the originally projected change in the tropical Pacific mean-state climate, which in turn strongly affects the projection of future extreme El Niño frequency change. It thus sheds new light on the importance of model bias correction for obtaining reliable projections of future climate change. More importantly, this finding suggests that much more effort should be devoted to improving climate models and reducing major systematic biases in the coming years and decades.

###

See the article:

Over-projected Pacific warming and extreme El Niño frequency change due to CMIP5 common biases. Tao Tang, Jing-Jia Luo, Ke Peng et al. National Science Review. https://doi.org/10.1093/nsr/nwab056

Strong quake, small tsunami

GEOMAR scientists publish unique data set on the northern Chilean subduction zone

HELMHOLTZ CENTRE FOR OCEAN RESEARCH KIEL (GEOMAR)

Research News

IMAGE: FOR A TOTAL OF TWO YEARS, 15 OCEAN BOTTOM SEISMOMETERS OFF NORTHERN CHILE RECORDED AFTERSHOCKS FROM THE 2014 IQUIQUE EARTHQUAKE.

CREDIT: JAN STEFFEN/GEOMAR

Northern Chile is an ideal natural laboratory to study the origin of earthquakes. Here, the Pacific Nazca plate slides underneath the South American continental plate at a speed of about 65 millimetres per year. This process, known as subduction, builds up strain between the two plates, and scientists thus expected a mega-earthquake here sooner or later, like the last one in 1877. But although northern Chile is one of the focal points of global earthquake research, a comprehensive data set on the structure of the marine subsurface was lacking - until nature itself stepped in to help.
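A rough back-of-envelope sketch in Python (illustrative only, assuming the plate interface had been fully locked since the 1877 event, which is a simplification rather than a finding of the study) shows why a large earthquake was anticipated here: at the stated convergence rate, more than a century of locked subduction accumulates several metres of slip deficit.

# Back-of-envelope slip-deficit estimate; full locking since 1877 is an
# assumption made for illustration, not a result from the study.
convergence_mm_per_year = 65              # Nazca-South America convergence rate
years_since_last_megaquake = 2014 - 1877  # time since the last mega-earthquake

slip_deficit_m = convergence_mm_per_year * years_since_last_megaquake / 1000.0
print(f"~{slip_deficit_m:.1f} m of potential slip accumulated between 1877 and 2014")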

On 1 April 2014, a segment of the subduction zone finally ruptured northwest of the city of Iquique. The earthquake, with a moment magnitude of 8.1, released at least part of the accumulated stress. Subsequent seismic measurements off the coast of Chile, together with seafloor mapping and land-based data, provided a hitherto unique insight into the architecture of the plate boundary. "Among other things, this allows us to explain why a relatively severe quake like the one in 2014 only triggered a relatively weak tsunami," says Florian Petersen from GEOMAR Helmholtz Centre for Ocean Research Kiel. He is the lead author of the study, which has now been published in the journal Geophysical Research Letters.

As early as December 2014, just eight months after the main earthquake, the Kiel team deployed 15 seismic measuring devices specially developed for the deep sea off the coast of Chile. "The logistical and administrative hurdles for deploying these ocean-bottom seismometers are considerable, and eight months of preparation time is very short. However, since the investigations are crucial to better understanding the hazard potential of the plate margin off northern Chile, even the Chilean Navy eventually supported us by making its patrol boat COMANDANTE TORO available," reports project leader and co-author Dr. Dietrich Lange from GEOMAR.

At the end of 2015, these ocean bottom seismometers (OBS) were recovered by the German research vessel SONNE. The team on board serviced the devices, read out the data and placed the OBS on the seabed again. It was not until November 2016 that the American research vessel MARCUS G. LANGSETH finally recovered them. "Together with data from land, we have obtained a seismic data set of the earthquake region over 24 months, in which we can find the signals of numerous aftershocks. This is unique so far," explains Florian Petersen, for whom the study is part of his doctoral thesis.

The evaluation of the long-term measurements, in which colleagues from the Universidad de Chile and Oregon State University (USA) were also involved, showed that an unexpectedly large number of aftershocks were located between the actual earthquake rupture zone and the deep-sea trench. "But what surprised us even more was that many aftershocks were quite shallow. They occurred in the overlying South American continental plate and not along the plate boundary of the dipping Nazca plate," Petersen says.

Over many earthquake cycles, these aftershocks can strongly disturb and rupture the seaward edge of the continental plate. Resulting gaps fill with pore fluids. As a result, the authors conclude, the energy of the quakes can only propagate downwards, but not to the deep-sea trench off the coast of Chile. "Therefore, there were no large, sudden shifts of the seafloor during the 2014 earthquake and the tsunami was fortunately relatively small", says Florian Petersen.

The question still remains whether the Iquique earthquake of 2014 was already the expected major quake in the region or whether it only released some of the stress that had built up since 1877. "The region remains very exciting for us. The current results were only possible due to the close cooperation of several nations and the use of research vessels from Germany, Chile and the USA. This shows the immense effort that is required to study marine natural hazards. However, this is critical for a detailed assessment of the risk to the coastal cities in northern Chile, so everyone was dedicated to the task," says co-author Prof. Dr. Heidrun Kopp from GEOMAR.

###

Coral reef restorations can be optimized to reduce flood risk

New practical guidelines for reef restoration will benefit coral ecosystems while also providing coastal communities with effective protection from flood risk

FRONTIERS

Research News

New guidelines for coral reef restoration aiming to reduce the risk of flooding in tropical coastal communities have been set out in a new study that simulated the behavior of ocean waves travelling over and beyond a range of coral reef structures. Published in Frontiers in Marine Science, these guidelines hope to optimize restoration efforts not only for the benefit of the ecosystem, but also to protect the coast and people living on it.

"Our research reveals that shallow, energetic areas such as the upper fore reef and middle reef flat, typically characterized by physically-robust coral species, should be targeted for restoration to reduce coastal flooding," says Floortje Roelvink, lead author on the paper and researcher at Deltares, a Dutch research institute. "This will benefit both coral ecosystems and human coastal populations that rely on them for tourism, fisheries, and recreation."

Important structures

Coral reefs help to sustain the economy of 500 million people in tropical coastal communities and can offer protection from wave-driven flooding and coastal erosion, especially in the face of climate change. Reef restoration, which involves coral planting and reef management to improve the health, abundance, and biodiversity of the ecosystem, has been suggested as a way of reducing flood risk.

"Our research can help guide the design of coral reef restorations to best increase the resiliency of coastal communities from flooding," said Curt Storlazzi, U.S. Geological Survey research geologist and project lead. "Such information can increase the efficiency of coral restoration efforts, assisting a range of stakeholders in not only coral reef conservation and management, but also coastal hazard risk reduction."

"Although we know that coral reefs can efficiently attenuate ocean wave energy and reduce coastal flooding, knowledge of specifically where to locate and design coral reef restorations on specific types of reefs is lacking," explains Ap van Dongeren, coastal morphology specialist at Deltares and project co-lead. "We were keen to fill this knowledge gap because the costs and practical constraints of reef recovery efforts necessitate an approach to design and restoration that produces the most benefit for all."

Reef by design

To first understand the range of naturally occurring reef shapes, such as fringing reefs, straight sloping reefs, convex reefs and reefs with an offshore shelf, the researchers analyzed a database of over 30,000 coral reef profiles across the U.S., including those in the Mariana, Hawaii, and Virgin Islands. Using these reef profiles, they numerically "designed" reef restorations to be both feasible from an operational and ecological perspective and to have an expected beneficial impact on coastal flooding.

The researchers established that reef restorations should not be placed too deep, because of operational constraints and limits on wave reduction efficiency. Nor should they be too shallow, to prevent the restored reefs from drying out and degrading due to thermal intolerance.

Different types of coral restorations were also investigated - "green", entailing solely outplanting corals, or "gray-green hybrid" restorations, entailing emplacement of structures (such as ReefBalls) and then outplanting corals on top of them. The team then used a numerical model to simulate waves travelling over both the restored and unrestored coral reef profiles to see how far those waves ran up the coast, providing an indication of the effect of the different reef restorations on coastal flooding.

"We hope this study will motivate others to continue and expand on this research, among others by conducting field and laboratory experiments to validate our findings," concludes Roelvink.

###

THE SCIENCE OF STAR TREK

How to thermally cloak an object

Theoretical method can make objects invisible to a thermal camera, or mimic a different object

UNIVERSITY OF UTAH

Research News

Can you feel the heat? To a thermal camera, which measures infrared radiation, the heat that we can feel is visible, like the heat of a traveler in an airport with a fever or the cold of a leaky window or door in the winter.

In a paper published in Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, an international group of applied mathematicians and physicists, including Fernando Guevara Vasquez and Trent DeGiovanni from the University of Utah, report a theoretical way of mimicking thermal objects or making objects invisible to thermal measurements. And it doesn't require a Romulan cloaking device or Harry Potter's invisibility cloak. The research is funded by the National Science Foundation.

The method allows for fine-tuning of heat transfer even in situations where the temperature changes in time, the researchers say. One application could be to isolate a part that generates heat in a circuit (say, a power supply) to keep it from interfering with heat sensitive parts (say, a thermal camera). Another application could be in industrial processes that require accurate temperature control in both time and space, for example controlling the cooling of a material so that it crystallizes in a particular manner.

Watch a visualization of how the method cloaks a kite-shaped object here.


Or watch how it works for a Homer Simpson-shaped object here.

Cloaking or invisibility devices have long been elements of fictional stories, but in recent years scientists and engineers have explored how to bring science fiction into reality. One approach, using metamaterials, bends light in such a way as to render an object invisible.

Just as our eyes see objects if they emit or reflect light, a thermal camera can see an object if it emits or reflects infrared radiation. In mathematical terms, an object could become invisible to a thermal camera if heat sources placed around it could mimic heat transfer as if the object wasn't there.

The novelty in the team's approach is that they use heat pumps rather than specially crafted materials to hide the objects. A simple household example of a heat pump is a refrigerator: to cool groceries it pumps heat from the interior to the exterior. Using heat pumps is much more flexible than using carefully crafted materials, Guevara says. For example, the researchers can make one object or source appear as a completely different object or source. "So at least from the perspective of thermal measurements," Guevara says, "they can make an apple appear as an orange."

The researchers carried out the mathematical work needed to show that, with a ring of heat pumps around an object, it's possible to thermally hide an object or mimic the heat signature of a different object.
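The principle rests on the linearity of heat transfer: the field produced by a ring of controlled heat pumps can be superposed on the object's own field so that the two cancel where a thermal camera would look. The sketch below is a simplified steady-state illustration of that idea in Python, not the authors' formulation; the hidden object is modelled as a single point heat source and the pump strengths are found by least squares.

import numpy as np

# Simplified steady-state sketch of active thermal cloaking by superposition.
# A "hidden" unit heat source at the origin is masked by a ring of controlled
# sources (heat pumps) whose strengths are chosen so that the combined
# temperature anomaly nearly vanishes on an outer measurement circle.
k = 1.0  # thermal conductivity (arbitrary units)

def greens(points, sources):
    # 2D free-space steady-state Green's function, G(r) = -ln(r) / (2*pi*k)
    d = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
    return -np.log(d) / (2.0 * np.pi * k)

obj_pos = np.zeros((1, 2))          # hidden object modelled as a point source
obj_strength = np.array([1.0])

n_pumps = 12                        # ring of controllable heat pumps at radius 1
ang = 2.0 * np.pi * np.arange(n_pumps) / n_pumps
pump_pos = np.stack([np.cos(ang), np.sin(ang)], axis=1)

n_meas = 60                         # measurement circle at radius 3
ang_m = 2.0 * np.pi * np.arange(n_meas) / n_meas
meas_pos = 3.0 * np.stack([np.cos(ang_m), np.sin(ang_m)], axis=1)

# Choose pump strengths so their field cancels the object's field at the
# measurement points (least-squares solution of a linear system).
A = greens(meas_pos, pump_pos)
b = -greens(meas_pos, obj_pos) @ obj_strength
pump_strengths, *_ = np.linalg.lstsq(A, b, rcond=None)

residual = A @ pump_strengths + greens(meas_pos, obj_pos) @ obj_strength
print("max |temperature anomaly| on the measurement circle:", np.max(np.abs(residual)))

In the actual study the fields vary in time and a known probing source is assumed, as described below, but the same superposition idea underlies the approach.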

The work remains theoretical, Guevara says, and the simulations assume a "probing" point source of heat that would reflect or bend around the object - the thermal equivalent of a flashlight in a dark room.

The temperature of that probing source must be known ahead of time, a drawback of the work. However, the approach is within reach of current technology using small heat pumps called Peltier elements, which transport heat by passing an electrical current across a metal-metal junction. Peltier elements are already widely used in consumer and industrial applications.

The researchers envision their work could be used to accurately control the temperature of an object in space and time, which has applications in protecting electronic circuits. The results, the researchers say, could also be applied to accurate drug delivery, since the mathematics of heat transfer and diffusion are similar to those of the transfer and diffusion of medications. And, they add, the mathematics of how light behaves in diffuse media such as fog could lead to applications in visual cloaking as well.

Find a preprint of the study here.

After publication, find the full study here.

In addition to Guevara and DeGiovanni, Maxence Cassier, CNRS researcher at the Fresnel Institute in Marseille, France, and Sébastien Guenneau, CNRS researcher at UMI 2004 Abraham de Moivre-CNRS, Imperial College London, UK, co-authored the study.

###