Tuesday, April 25, 2023

Punctuation in literature of major languages is intriguingly mathematical

Peer-Reviewed Publication

THE HENRYK NIEWODNICZANSKI INSTITUTE OF NUCLEAR PHYSICS POLISH ACADEMY OF SCIENCES

Hazard functions in seven major Western languages 

IMAGE: Hazard functions represent the probability of using a punctuation mark as a function of the length of the sequence without these marks. In terms of punctuation, the most ‘cross-linguistic’ is German (green chart).

CREDIT: SOURCE: IFJ PAN

A moment's hesitation... Yes, a full stop here – but shouldn’t there be a comma there? Or would a hyphen be better? Punctuation can be a nuisance; it is often simply neglected. Wrong! The most recent statistical analyses paint a different picture: punctuation seems to “grow out” of the foundations shared by all the (examined) languages, and its features are far from trivial.

To many, punctuation appears as a necessary evil, to be happily ignored whenever possible. Recent analyses of literature written in the world's current major languages require us to alter this opinion. In fact, the same statistical features of punctuation usage patterns have been observed in several hundred works written in seven, mainly Western, languages. Punctuation, all ten representatives of which can be found in the introduction to this text, turns out to be a universal and indispensable complement to the mathematical perfection of every language studied. Such a remarkable conclusion about the role of mere commas, exclamation marks or full stops comes from an article by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, published in the journal Chaos, Solitons & Fractals.

“The present analyses are an extension of our earlier results on the multifractal features of sentence length variation in works of world literature. After all, what is sentence length? It is nothing more than the distance to the next specific punctuation mark – the full stop. So now we have taken all punctuation marks under a statistical magnifying glass, and we have also looked at what happens to punctuation during translation,” says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

Two sets of texts were studied. The main analyses, concerning punctuation within each language, were carried out on 240 highly popular literary works written in seven major Western languages: English (44), German (34), French (32), Italian (32), Spanish (32), Polish (34) and Russian (32). The selection of languages was based on two criteria: the researchers required that no fewer than 50 million people speak the language in question, and that works written in it should have been awarded no fewer than five Nobel Prizes for Literature. In addition, for the statistical validity of the results, each book had to contain at least 1,500 word sequences separated by punctuation marks. A separate collection was prepared to observe the stability of punctuation in translation. It contained 14 works, each of which was available in each of the languages studied (two of the 98 language versions, however, were omitted due to their unavailability). Together, the two collections included works by such writers as Conrad, Dickens, Doyle, Hemingway, Kipling, Orwell, Salinger, Woolf, Grass, Kafka, Mann, Nietzsche, Goethe, La Fayette, Dumas, Hugo, Proust, Verne, Eco, Cervantes, Sienkiewicz and Reymont.

The attention of the Cracow researchers was primarily drawn to the statistical distribution of the distances between consecutive punctuation marks. It soon became evident that in all the languages studied, this distribution was best described by one of the precisely defined variants of the Weibull distribution. A curve of this type has a characteristic shape: it grows rapidly at first and then, after reaching its maximum, descends somewhat more slowly, approaching zero with small and constantly decreasing dynamics. The Weibull distribution is usually used to describe survival phenomena (e.g. population as a function of age), but also various physical processes, such as the increasing fatigue of materials.
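The quantity being modelled here is straightforward to measure: tokenize a text and record how many words fall between consecutive punctuation marks. A minimal sketch of that measurement (the tokenizer and the set of marks are simplified assumptions, not the paper's exact preprocessing):

```python
import re

MARKS = ".,;:!?"  # simplified set of punctuation marks (assumption)

def inter_punctuation_distances(text):
    """Return the lengths of the word sequences between consecutive
    punctuation marks - the samples whose distribution the study
    describes with a Weibull curve."""
    tokens = re.findall(r"[\w']+|[" + re.escape(MARKS) + r"]", text)
    distances, run = [], 0
    for tok in tokens:
        if tok in MARKS:
            distances.append(run)
            run = 0
        else:
            run += 1
    return distances

print(inter_punctuation_distances("One two three, four five. Six!"))
```

The resulting list of lengths could then be fitted with a Weibull distribution using standard tools such as `scipy.stats.weibull_min.fit`.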

“The concordance of the distribution of word sequence lengths between punctuation marks with the functional form of the Weibull distribution was better the more types of punctuation marks we included in the analyses; for all marks the concordance turned out to be almost complete. At the same time, some differences in the distributions are apparent between the different languages, but these merely amount to the selection of slightly different values for the distribution parameters, specific to the language in question. Punctuation thus seems to be an integral part of all the languages studied,” notes Prof. Drozdz, only to add after a moment with some amusement: “...and since the Weibull distribution is concerned with phenomena such as survival, it can be said with not too much tongue-in-cheek that punctuation has in its nature a literally embedded struggle for survival.”

The next stage of the analyses consisted of determining the hazard function. In the case of punctuation, it describes how the conditional probability of success – i.e. the probability of the next punctuation mark appearing – changes if no such mark has yet appeared in the analysed sequence. The results here are clear: the language characterised by the lowest propensity to use punctuation is English, with Spanish not far behind; the Slavic languages proved to be the most punctuation-dependent. The hazard function curves for punctuation marks in six of the languages studied followed a similar pattern; they differed mainly in vertical shift.
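The hazard function has a simple empirical counterpart: among all word sequences that have reached length k without a punctuation mark, what fraction end exactly at k? A stdlib sketch of that discrete estimator (our own illustration, not the estimator used in the paper):

```python
from collections import Counter

def empirical_hazard(distances):
    """Discrete hazard: P(mark at length k | no mark before k),
    estimated from observed inter-punctuation distances."""
    counts = Counter(distances)
    at_risk = len(distances)  # sequences still 'alive' at length k
    hazard = []
    for k in range(max(distances) + 1):
        if at_risk == 0:
            break
        hazard.append(counts[k] / at_risk)
        at_risk -= counts[k]
    return hazard

print(empirical_hazard([1, 1, 2, 3]))
```

A low, flat curve (as reported for English) means long unpunctuated stretches remain likely; a steeply rising curve (as in the Slavic languages) means the pressure to punctuate grows quickly with sequence length.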

German proved to be the exception. Its hazard function is the only one that intersects most of the curves constructed for the other languages. German punctuation thus seems to combine the punctuation features of many languages, making it a kind of Esperanto of punctuation. This observation dovetails with the next analysis, which examined whether the punctuation features of original literary works survive in their translations. As expected, the language that most faithfully transfers punctuation from the source language to the target language turned out to be German.

In spoken communication, pauses can be justified by human physiology, such as the need to catch one's breath or to take a moment to structure what is to be said next in one's mind. And in written communication?

“Creating a sentence by adding one word after another while ensuring that the message is clear and unambiguous is a bit like tightening the string of a bow: it is easy at first, but becomes more demanding with each passing moment. If there are no ordering elements in the text (and this is the role of punctuation), the difficulty of interpretation increases as the string of words lengthens. A bow that is too tight can break, and a sentence that is too long can become unintelligible. Therefore, the author is faced with the necessity of 'freeing the arrow', i.e. closing a passage of text with some sort of punctuation mark. This observation applies to all the languages analysed, so we are dealing with what could be called a linguistic law,” states Dr Tomasz Stanisz (IFJ PAN), first author of the article in question.

Finally, it is worth noting that the invention of punctuation is relatively recent – punctuation marks did not occur at all in old texts. The emergence of optimal punctuation patterns in modern written languages can therefore be interpreted as the result of their evolutionary advancement. However, the excessive need for punctuation is not necessarily a sign of such sophistication. English and Spanish, contemporarily the most universal languages, appear, in the light of the above studies, to be less strict about the frequency of punctuation use. It is likely that these languages are so formalised in terms of sentence construction that there is less room for ambiguity that would need to be resolved with punctuation marks.

The Henryk Niewodniczański Institute of Nuclear Physics (IFJ PAN) is currently one of the largest research institutes of the Polish Academy of Sciences. A wide range of research carried out at IFJ PAN covers basic and applied studies, from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly publication output of IFJ PAN includes over 600 scientific papers in high-impact international journals. Each year the Institute hosts about 20 international and national scientific conferences. One of the most important facilities of the Institute is the Cyclotron Centre Bronowice (CCB), which is an infrastructure unique in Central Europe, serving as a clinical and research centre in the field of medical and nuclear physics. In addition, IFJ PAN runs four accredited research and measurement laboratories. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium: "Matter-Energy-Future", which in the years 2012-2017 enjoyed the status of the Leading National Research Centre (KNOW) in physics. In 2017, the European Commission granted the Institute the HR Excellence in Research award. As a result of the categorization of the Ministry of Education and Science, the Institute has been classified into the A+ category (the highest scientific category in Poland) in the field of physical sciences.

SCIENTIFIC PUBLICATIONS:

“Universal versus system-specific features of punctuation usage patterns in major Western languages”

T. Stanisz, S. Drożdż, J. Kwapień

Chaos, Solitons & Fractals, 168, 113183, 2023

DOI: https://doi.org/10.1016/j.chaos.2023.113183

LINKS:

http://www.ifj.edu.pl/

The website of the Institute of Nuclear Physics, Polish Academy of Sciences.

http://press.ifj.edu.pl/

Press releases of the Institute of Nuclear Physics, Polish Academy of Sciences.

IMAGES:

IFJ230419b_fot01s.jpg                                 

HR: http://press.ifj.edu.pl/news/2023/04/19/IFJ230419b_fot01.jpg

Hazard functions represent the probability of using a punctuation mark as a function of the length of the sequence without these marks. In terms of punctuation, the most ‘cross-linguistic’ is German (green chart). (Source: IFJ PAN)

Music for sleeping and music for studying share surprising similarities, study finds

A recent study on Spotify data reveals which types of music are used to help listeners focus while studying and fall asleep

Peer-Reviewed Publication

AARHUS UNIVERSITY

What type of music do you use while studying? What type of music would you use to fall asleep at night? Have you ever wondered why you choose certain types of music? It turns out that the music used for both these situations is actually pretty similar.

According to a recent study from Aarhus University, which analyzed data from the streaming service Spotify, the music people listen to while studying and the music they use for sleeping share more similarities with each other than with music in general.

The researchers used both qualitative and quantitative analysis to compare the two types of music based on tracks, genres and audio features.

The study found that people use similar types of music to accompany these tasks.

“Our study suggests that music used for studying and music used for sleeping share many characteristics in terms of tracks, genres and audio features. This similarity highlights the potential of music to create a pleasant but not too disturbing atmosphere, enabling individuals to focus when studying and to relax when falling asleep,” says Rebecca Jane Scarratt, PhD student at the Center for Music in the Brain at Aarhus University.

Relaxing effects on the brain

The researchers analyzed many playlists seemingly used for studying or for relaxation before bedtime and found that these two types of music shared similar characteristics, such as slow tempo and repetitive patterns.

Among the most common genres found in both datasets were pop, lo-fi, classical and ambient music.

According to the study, the similarities could be attributed to their calming and relaxing effects on the brain. The slow tempo and repetitive patterns of the music help to lower heart rate and reduce stress, creating a conducive environment for both studying and sleeping.

The researchers also used statistical methods to compare the audio features between different datasets and determine whether there were significant differences.

They found significant differences between the sleep and general datasets in “Loudness”, “Energy” and “Valence”, which refers to the emotional tone or mood of a piece of music. The same was the case between the study and general datasets, but there was no large difference between the study and sleep datasets.
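The article does not name the exact statistical test used, so purely as a generic illustration of how such dataset comparisons can be run, here is a stdlib permutation test on the difference of mean values of one audio feature; all numbers below are invented:

```python
import random

def permutation_test(a, b, n_iter=2000, seed=0):
    """Two-sided permutation test for a difference in means.
    Returns an approximate p-value."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

# Invented loudness values (dB) for a sleep playlist vs. general music
sleep_loudness = [-18.2, -17.5, -19.1, -16.8, -18.9]
general_loudness = [-7.1, -6.4, -8.0, -5.9, -7.5]
print(permutation_test(sleep_loudness, general_loudness))
```

With clearly separated groups like these the p-value comes out small, mirroring the kind of "significant difference in Loudness" result the study reports.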

According to Rebecca, the findings are the beginning of a new research trend that compares music used for different activities and could lead to a better understanding of how music affects our cognitive and emotional states. The findings shed light on the difference between how music is assumed to be used in theory and how it is actually used in practice.

“While more research is needed to fully understand the relationship between music and cognitive processes, our study provides a starting point for exploring the impact of music on our cognitive and emotional states, and how it may enhance our daily lives,” says Rebecca Jane Scarratt.


Behind the research result

Study type: Empirical big data

External financing: Center for Music in the Brain is funded by the Danish National Research Foundation (DNRF117). 

Conflict of interest: The authors declare no conflicting interests.

Link to publication: https://doi.org/10.1038/s41598-023-31692-8

Swedish quantum computer applied to chemistry for the first time


Peer-Reviewed Publication

CHALMERS UNIVERSITY OF TECHNOLOGY

Martin Rahm 

IMAGE: Martin Rahm, Associate Professor in Theoretical Chemistry at the Department of Chemistry and Chemical Engineering, Chalmers University of Technology.

CREDIT: JOHAN BODELL/CHALMERS

There are high expectations that quantum computers may deliver revolutionary new possibilities for simulating chemical processes. This could have a major impact on everything from the development of new pharmaceuticals to new materials. Researchers at Chalmers University have now, for the first time in Sweden, used a quantum computer to undertake calculations within a real-life case in chemistry.

“Quantum computers could in theory be used to handle cases where electrons and atomic nuclei move in more complicated ways. If we can learn to utilise their full potential, we should be able to advance the boundaries of what is possible to calculate and understand,” says Martin Rahm, Associate Professor in Theoretical Chemistry at the Department of Chemistry and Chemical Engineering, who has led the study.

Within the field of quantum chemistry, the laws of quantum mechanics are used to understand which chemical reactions are possible, which structures and materials can be developed, and what characteristics they have. Such studies are normally undertaken with the help of supercomputers built with conventional logic circuits. There is, however, a limit to which calculations conventional computers can handle. Because the laws of quantum mechanics describe the behaviour of nature on a subatomic level, many researchers believe that a quantum computer should be better equipped to perform molecular calculations than a conventional computer.

“Most things in this world are inherently chemical. For example, our energy carriers, within biology as well as in old or new cars, are made up of electrons and atomic nuclei arranged in different ways in molecules and materials. Some of the problems we solve in the field of quantum chemistry are to calculate which of these arrangements are more likely or advantageous, along with their characteristics,” says Martin Rahm.

A new method minimises errors in the quantum chemical calculations

There is still a way to go before quantum computers can achieve what the researchers are aiming for. This field of research is still young and the small model calculations that are run are complicated by noise from the quantum computer’s surroundings. However, Martin Rahm and his colleagues have now found a method that they see as an important step forward. The method is called Reference-State Error Mitigation (REM) and works by correcting for the errors that occur due to noise by utilising the calculations from both a quantum computer and a conventional computer.

“The study is a proof-of-concept that our method can improve the quality of quantum-chemical calculations. It is a useful tool that we will use to improve our calculations on quantum computers moving forward,” says Martin Rahm.

The principle behind the method is first to consider a reference state by describing and solving the same problem on both a conventional and a quantum computer. This reference state represents a simpler description of a molecule than the original problem intended for the quantum computer. A conventional computer can solve this simpler version of the problem quickly. By comparing the results from both computers, an estimate can be made of the amount of error caused by noise. The difference between the two computers’ solutions for the reference problem can then be used to correct the solution for the original, more complex, problem when it is run on the quantum processor.

By combining this new method with data from Chalmers’ quantum computer Särimner*, the researchers have succeeded in calculating the intrinsic energy of small example molecules such as hydrogen and lithium hydride. Equivalent calculations can be carried out more quickly on a conventional computer, but the new method represents an important development and is the first demonstration of a quantum chemical calculation on a quantum computer in Sweden.
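The correction described above is, at its core, simple arithmetic once the three energies are in hand. A toy sketch of the idea (function and variable names are ours, and the numbers are invented; the real method operates on quantum-chemical energies from the quantum and conventional calculations):

```python
def reference_state_error_mitigation(e_target_noisy, e_ref_noisy, e_ref_exact):
    """Estimate the noise-induced energy shift on a classically solvable
    reference state, then subtract that shift from the noisy result for
    the harder target problem (assumes both are shifted similarly)."""
    noise_shift = e_ref_noisy - e_ref_exact
    return e_target_noisy - noise_shift

# Invented numbers: noise raised the reference energy by 0.2 units,
# so the same shift is subtracted from the noisy target result.
corrected = reference_state_error_mitigation(-1.0, -0.9, -1.1)
print(corrected)
```

The key design assumption is that the hardware noise shifts the reference calculation and the full calculation by a comparable amount, so the classically computable difference serves as a stand-in for the unknown error.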

“We see good possibilities for further development of the method to allow calculations of larger and more complex molecules, when the next generation of quantum computers are ready,” says Martin Rahm.

Quantum computer built at Chalmers

The research has been conducted in close collaboration with colleagues at the Department of Microtechnology and Nanoscience. They have built the quantum computers that are used in the study, and helped perform the sensitive measurements that are needed for the chemical calculations.

“It is only by using real quantum algorithms that we can understand how our hardware really works and how we can improve it. Chemical calculations are one of the first areas where we believe that quantum computers will be useful, so our collaboration with Martin Rahm’s group is especially valuable,” says Jonas Bylander, Associate Professor in Quantum Technology at the Department of Microtechnology and Nanoscience.

The quantum computer at Chalmers with the outer shielding of the dilution refrigerator removed.

CREDIT

Chalmers/Anna-Lena Lundqvist

More about the research

Read the article Reference-State Error Mitigation: A Strategy for High Accuracy Quantum Computation of Chemistry in the Journal of Chemical Theory and Computation.

The article is written by Phalgun Lolur, Mårten Skogh, Werner Dobrautz, Christopher Warren, Janka Biznárová, Amr Osman, Giovanna Tancredi, Göran Wendin, Jonas Bylander, and Martin Rahm. The researchers are active at Chalmers University of Technology.

The research has been conducted in cooperation with the Wallenberg Centre for Quantum Technology (WACQT) and the EU-project OpensuperQ. OpensuperQ connects universities and companies in 10 European countries with the aim of building a quantum computer, and its extension will contribute further funding to researchers at Chalmers for their work with quantum chemical calculations.

*Särimner is the name of a quantum processor with five qubits, or quantum bits, built by Chalmers within the framework of the Wallenberg Center for Quantum Technology (WACQT). Its name is borrowed from Nordic mythology, in which the pig Särimner was butchered and eaten every day, only to be resurrected. Särimner has now been replaced by a larger computer with 25 qubits and the goal for WACQT is to build a quantum computer with 100 qubits that can solve problems far beyond the capacity of today’s best conventional super-computers.

 

Is Deep Learning a necessary ingredient for Artificial Intelligence?

Deep learning appears to be a key magical ingredient for the realization of many artificial intelligence tasks. However, these tasks can be efficiently realized by the use of simpler shallow architectures

Peer-Reviewed Publication

BAR-ILAN UNIVERSITY

Is Deep Learning a necessary ingredient for Artificial Intelligence? 

IMAGE: Scheme of deep machine learning consisting of many layers (left) vs. shallow brain learning consisting of a few layers with enlarged width (right). For more detail see https://www.nature.com/articles/s41598-023-32559-8

CREDIT: PROF. IDO KANTER, BAR-ILAN UNIVERSITY

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to address more complex classification tasks, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. This is the essential component of current implementations of deep learning algorithms. It improves the performance of analytical and physical tasks without human intervention, and lies behind everyday automation products such as the emerging technologies for self-driving cars and autonomous chatbots.

The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity. “A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”

"We've shown that efficient learning on an artificial shallow architecture can achieve the same classification success rates that previously were achieved by deep learning architectures consisting of many layers and filters, but with less computational complexity,” said Yarden Tzach, a PhD student and contributor to this work.  “However, the efficient realization of shallow architectures requires a shift in the properties of advanced GPU technology, and future dedicated hardware developments,” he added.
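One way to make the deep-versus-shallow trade-off concrete is to compare parameter budgets: a shallow network can match the parameter count of a deep one by enlarging its layer width. A small counting sketch (the layer widths are hypothetical illustrations, not architectures from the paper):

```python
def dense_param_count(widths):
    """Weights plus biases of a fully connected feedforward network
    whose layer sizes are given in order, input layer first."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(widths, widths[1:]))

deep = [784, 64, 64, 64, 64, 64, 10]   # many narrow layers
shallow = [784, 85, 10]                # few layers, enlarged width
print(dense_param_count(deep), dense_param_count(shallow))
```

Both invented architectures land near the same parameter budget (about 67,500), which is the sense in which a wide shallow network can be a like-for-like competitor to a deep one; the open question raised by the work is whether it can also match classification accuracy at lower computational complexity.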

The efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic tree learning, which is based on previous experimental research by Prof. Kanter on sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, such as different spike waveforms, refractory periods and maximal transmission rates (see also a video on dendritic learning: https://vimeo.com/702894966).

For years, brain dynamics and machine learning development were researched independently; recently, however, brain dynamics has been revealed as a source of new types of efficient artificial intelligence.

How bee-friendly is the forest?


Peer-Reviewed Publication

UNIVERSITY OF WÜRZBURG

Bee collecting 

IMAGE: A honeybee (Apis mellifera) collects honeydew on a fir tree. The study shows that the beech-dominated Steigerwald provides insufficient food resources for honeybees.

CREDIT: INGO ARNDT

Bees are generally associated with flowering meadows rather than with dense forests. Woodland, however, is considered the original habitat of the western honeybee (Apis mellifera), as it offers nesting sites in the form of tree cavities. Researchers at the Julius-Maximilians-Universität Würzburg (JMU) have now investigated the extent to which contemporary deciduous forests are suitable as foraging habitats for the busy insects.

For this purpose, Benjamin Rutschmann and Patrick Kohl installed twelve normal-sized honeybee colonies in observation hives across the Steigerwald; the proportion of forest in the surroundings varied for each colony. The two scientists conduct research at JMU in the Chair of Animal Ecology and Tropical Biology (Zoology III), which is headed by Professor Ingolf Steffan-Dewenter. He was also involved in the study, which has now appeared in the Journal of Applied Ecology.

Eavesdropping on bee dances

Honeybees communicate via the so-called waggle dance. The team analyzed a total of 2,022 of these dances, video-recorded across the bees’ main foraging season from March to August. Because the bees communicate the approximate location of a food source to their conspecifics during these dances, the scientists were able to draw conclusions about foraging distances and habitat preferences. The surprising result was that the bees used the forest far less than expected based on its contribution to land cover. Colonies that lived deep in the forest often had to travel long distances to find food.

"Especially in late summer, the supply of pollen in the forest was insufficient or not guaranteed at all, even though this is an especially critical time for the bee colonies and their brood," says Rutschmann. One of the main reasons for this, he says, is the beech, which makes up more than 40 percent of the tree population in the Steigerwald: "Beech forests are dark, and not much grows on the ground. Hardly any plants can cope with the light conditions in beech forests after the canopy closes, so the diverse herb layer that would be so important for bees is missing," according to the biologist.

Bees need more diverse forests

Honeydew or flowering tree species, such as linden, black locust, and chestnut, or shrubs such as blackberry and raspberry, do provide bees with an important source of carbohydrates and, in some cases, pollen as a source of protein during short periods of the year; however, bees need a balanced food supply throughout the season. "For a more bee-friendly environment, forests should be diversified with insect-pollinated trees – cherry, linden, maple, willow, horse chestnut or sweet chestnut," Rutschmann advises. Allowing secondary succession in forest gaps, the natural return of flora and fauna typical of a site, could help.

As if the lack of food were not enough of a problem, wild honeybee colonies in managed forests are also hampered by the low availability of tree hollows.

In a possible next step, the comparison with other European forest areas with different tree species composition and management could be investigated: "Especially the comparison with protected areas, where greater disturbances occur, would be interesting," says Benjamin Rutschmann. More natural disturbance and less optimization for timber production should not only increase floral diversity in the forest, but also improve the chances of survival of wild-living honeybee colonies.

Not only honeybees benefit

So, honeybees need a more diverse forest as a habitat. Once established, they also contribute significantly to biodiversity conservation in return. After all, the overwhelming majority of plants depend on cross-pollination. The honeybee, in turn, is one of the most important pollinators, along with numerous other species of wild bees.

A more diverse forest benefits not only the honeybee, but ultimately the forest itself – a diverse ecosystem is a healthy ecosystem and less susceptible to pest infestation, for example. "Converting forests to species-rich mixed deciduous forests not only promotes biodiversity, but also adaptation to future climate conditions," emphasizes Ingolf Steffan-Dewenter.

Wild garlic (Allium ursinum) blooming in the forest in springtime. Diverse vegetation is essential for the survival of honeybees.


Gloomy outlook: little sunlight penetrates through the dense canopy of the beech forest.

CREDIT

Ingo Arndt

Predicting regional organic carbon in deep soils

Peer-Reviewed Publication

SCIENCE CHINA PRESS

The performance of the currently reported seven depth distribution functions in fitting deep soil organic carbon (SOC) 

IMAGE: Box chart showing the determination coefficient (R²) and root mean square error (RMSE) of seven depth distribution functions in fitting SOC concentration along soil profiles on China’s Loess Plateau (a, c) and other regions (southeast China, southern Kenya and southeast Brazil together) (b, d). The upper, intermediate and lower edges of each box indicate the 75% quartile, median and 25% quartile of the R² or RMSE, respectively. The grey points are the R² or RMSE of each measured profile. The x-axis shows the seven depth distribution functions. NEF: negative exponential function; EDF: exponential decay function; PF: power function; LF: logarithmic function; TEF: type III exponential function; FIPF: first-degree inverse polynomial function; REF: revised exponential function.

CREDIT: ©SCIENCE CHINA PRESS

Field sampling combined with laboratory analysis is the most commonly used approach to obtaining deep soil organic carbon (SOC) data and has been widely applied for more than a century. This approach provides the most accurate measurement of deep SOC concentration but is highly time-consuming and labor-intensive, and is not practical at large spatial scales. Alternatively, mathematical functions that predict SOC in deep soils offer a quick technique for regional assessment. The depth distribution function, which describes the vertical distribution of SOC with soil depth, has been used to estimate deep SOC concentration in various regions and ecosystems. This method requires SOC data collected from multiple layers down to a depth of at least 100 cm to obtain the parameters of the function. Additionally, the fits of the various functions have rarely been compared, leading to considerable arbitrariness in selecting a depth distribution function and to poor goodness of fit of the selected function to the measured data. Moreover, application of this method has mainly been confined to the site scale. These drawbacks of the currently used approaches restrict the accurate estimation of deep SOC at regional or larger spatial scales.

Jingjing Wang et al. compiled regional SOC datasets from their own measurements and from the International Soil Reference and Information Centre (ISRIC) Soil Information System database. The datasets were used to compare the seven currently used depth distribution functions in fitting the vertical distribution patterns of SOC, in order to select the optimal function. The team then developed an approach for predicting deep SOC at the regional scale by analyzing the relationships between the optimal function’s parameters and soil properties from the 0–40 cm topsoil layer.

The team demonstrated that the negative exponential function can effectively simulate the vertical distribution pattern of SOC along soil profiles in different regions; the parameters (i.e., Ce and k) of the negative exponential function were linearly correlated with SOC in topsoils (0–40 cm) at the regional scale. Combining the negative exponential function with parameters derived from these linear prediction relationships, the authors developed a quick approach to predicting SOC concentration in deep soils (down to 500 cm) at the regional scale. The approach was demonstrated to perform well in various regions.
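The prediction chain described above can be sketched in a few lines. This assumes the common reading of a negative exponential depth function, SOC(z) = Ce·exp(−k·z), and the linear coefficients and the topsoil value below are placeholders, not the paper's fitted numbers:

```python
import math

def soc_at_depth(z_cm, ce, k):
    """Negative exponential depth distribution: SOC(z) = Ce * exp(-k * z)."""
    return ce * math.exp(-k * z_cm)

def params_from_topsoil(topsoil_soc, a_ce=0.5, b_ce=1.2, a_k=0.001, b_k=0.0008):
    """Linear relations tying Ce and k to measured 0-40 cm topsoil SOC.
    Coefficients are hypothetical; the study fits them per region."""
    return a_ce + b_ce * topsoil_soc, a_k + b_k * topsoil_soc

ce, k = params_from_topsoil(6.0)          # invented topsoil SOC value
print(round(soc_at_depth(300, ce, k), 3))  # predicted SOC at 300 cm depth
```

The practical appeal is that only the cheaply sampled topsoil layer is measured; the two regional regressions then supply the function parameters needed to extrapolate the whole profile down to 500 cm.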

See the article:

Wang J, Wei X, Jia X, Huang M, Liu Z, Yao Y, Shao M. 2023. An empirical approach to predict regional organic carbon in deep soils. Science China Earth Sciences, 66(3): 583–593, https://doi.org/10.1007/s11430-022-1032-2

Greenhouse gas release from permafrost is influenced by mineral binding processes

Peer-Reviewed Publication

UNIVERSITY OF COLOGNE

Drilling work in permafrost 

IMAGE: A PIECE OF THE DRILLED PERMAFROST CORE EXTRACTED DURING DRILLING OPERATIONS ON THE NEW SIBERIAN ISLANDS IN THE NORTH OF EASTERN SIBERIA. view more 

CREDIT: LUTZ SCHIRRMEISTER, ALFRED WEGENER INSTITUTE

About a quarter of the organic carbon contained in ice-rich Arctic permafrost is more difficult for microorganisms to utilize. The reason for this is a strong binding of the organic material originating from dead plant remains to mineral soil particles. That is the result of a study conducted by a research group led by Professor Dr Janet Rethemeyer and Dr Jannik Martens at the University of Cologne’s Institute of Geology and Mineralogy. Accurate predictions of the release of greenhouse gases from permafrost deposits are therefore more complex than previously assumed.

The results of the joint project, which was funded by the German Federal Ministry of Education and Research (BMBF), are published in the article ‘Stabilization of mineral-associated organic carbon in Pleistocene permafrost’ in the journal Nature Communications.

The Arctic is warming dramatically fast compared to other parts of the world. Much of it is covered by permafrost, which contains large amounts of carbon, almost twice as much as the atmosphere. This carbon comes from plants that have grown over thousands of years, decomposed in the soil and then become ‘frozen’. Due to strongly rising temperatures in the Arctic, this gigantic freezer is thawing fast. The old carbon stored in it can now be degraded by microorganisms, releasing carbon dioxide and methane into the atmosphere. These greenhouse gases accelerate global warming. The warmer it gets, the more greenhouse gases are in turn released from the permafrost, causing temperatures to rise further and frozen soils and sediments to thaw even faster. “There is a feedback of carbon in permafrost with climate, the strength of which depends largely on those factors that influence microbial degradation,” said Janet Rethemeyer.

In the joint research project, scientists from the Institute of Zoology at the University of Cologne, the University of Tübingen, the Technical University of Munich and the Alfred-Wegener-Institute in Potsdam studied long permafrost cores from the Siberian Arctic. The cores come from very ice-rich, fine-grained sediments – similar to loess in our latitudes – that were deposited in large areas of Siberia and Alaska during the last ice age. The cores, up to 12 metres long, comprise sediments deposited over a period of up to 55,000 years.

The analyses of the permafrost cores show that a significant part (25-35%) of the carbon is associated with the mineral particles and thus harder for microorganisms to access. “Predictions of interactions between thawing permafrost and climate are very complicated because the microbial degradability of the organic material in the sediments has varied greatly over the last 55,000 years. This is due to the different climatic conditions during this long period of deposition,” Janet Rethemeyer explained. Warmer and wetter conditions resulted in weaker binding of carbon to the mineral particles, while a colder and drier climate led to stronger binding, primarily to iron oxides. Stronger binding to iron oxides means that the decomposition rates of old plant material are lower, as Professor Dr Michael Bonkowski from the Institute of Zoology, Department of Terrestrial Ecology at the University of Cologne has shown in laboratory experiments.

“These new findings can make a significant contribution to making computer models for forecasting greenhouse gas emissions from thawing permafrost more reliable,” said Jannik Martens, who is currently conducting research at Columbia University in New York.

Drought thresholds that impact vegetation growth reveal the critical points for nonlinear vegetation response to soil moisture stress

Peer-Reviewed Publication

SCIENCE CHINA PRESS

In a new study, a group from Peking University, China, presents a novel data-driven method that identifies, at all locations, the onset and extent of vegetation suppression under increasing levels of drought.

The drought threshold at which damage starts to occur is identified from simultaneous data streams of soil moisture content and satellite measurements of plant and tree “greenness”. Specifically, vegetation lushness during the growing period is based on the Normalized Difference Vegetation Index (NDVI), the kernel NDVI, the near-infrared reflectance of vegetation (NIRv) and solar-induced chlorophyll fluorescence (SIF), which are measures of vegetation greenness and productivity.

The researchers find that vegetation behaves nonlinearly as soil moisture stress rises. They discover an inflection point that clearly delimits two distinct phases in the response of vegetation growth to increasing drought stress. In the first phase, vegetation is stable and resilient to soil moisture fluctuations because soil moisture is plentiful. In the second phase, vegetation growth decreases rapidly as drought intensifies. “Using our framework, we detect well-defined thresholds of soil moisture beyond which vegetation changes from highly resilient to highly vulnerable as soil water stress intensifies,” says Dr. Xiangyi Li, first author of the work.
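The two-phase response described above can be illustrated with a simple breakpoint search. The sketch below is not the authors' framework; it assumes a flat (stable) greenness above some soil-moisture threshold and a linearly declining greenness below it, and grid-searches the threshold that minimizes the combined residual error of the two segments. All data and function names are hypothetical.

```python
def segment_mean_sse(ys):
    """Residual sum of squares around the segment mean (stable phase)."""
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def segment_line_sse(xs, ys):
    """Residual sum of squares of an OLS line (declining phase)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:
        return segment_mean_sse(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def find_drought_threshold(soil_moisture, greenness):
    """Grid-search the soil-moisture breakpoint separating a declining
    dry phase (fit with a line) from a stable wet phase (fit with a mean)."""
    pairs = sorted(zip(soil_moisture, greenness))
    best_sse, best_t = float("inf"), None
    for i in range(2, len(pairs) - 2):  # keep a few points in each segment
        t = pairs[i][0]
        dry = [(x, y) for x, y in pairs if x <= t]
        wet = [(x, y) for x, y in pairs if x > t]
        sse = (segment_line_sse([p[0] for p in dry], [p[1] for p in dry])
               + segment_mean_sse([p[1] for p in wet]))
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t

# Synthetic observations: greenness plateaus once soil moisture is ample
sm_obs = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60]
ndvi_obs = [0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.80, 0.80, 0.80, 0.80, 0.80, 0.80]

threshold = find_drought_threshold(sm_obs, ndvi_obs)
```

The study works with multiple greenness proxies (NDVI, kNDVI, NIRv, SIF) and a more sophisticated statistical treatment; this sketch only conveys the idea of locating the inflection point from paired soil-moisture and greenness data.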

The team shows that drought thresholds vary geographically, with more forested areas having lower thresholds and therefore being less sensitive to emerging drought than less forested regions. The threshold representation, based purely on data, reveals that even state-of-the-art vegetation models often fail to capture the extent to which drought can lower vegetation health. Conversely, current models are overly sensitive to imposed drought conditions in some humid areas with high forest cover. "Our data-driven, parameter-sparse representation of drought impacts is a much-needed way to benchmark ecological models", adds Xiangyi.

Arguably, the physical components of climate models have been developed over a longer period and are more reliable. The researchers therefore merge estimates of future meteorological change, including drought, with their observationally constrained descriptions of vegetation response to water stress. Combining these lines of evidence reveals hotspots in East Asia, Europe, the Amazon, southern Australia, and eastern and southern Africa where the risk of drought-induced vegetation damage will increase substantially by the end of the 21st century under a “business-as-usual” emissions scenario.

The analysis was led by Prof. Shilong Piao and Dr. Xiangyi Li from the College of Urban and Environmental Sciences, Peking University, and supported by scientists from Germany, the UK, Spain and the US.

###

See the article:

Global variations in critical drought thresholds that impact vegetation

https://doi.org/10.1093/nsr/nwad049