Friday, March 19, 2021

 

Roof-tiles in imperial China: Creating Ximing Temple's lotus-pattern tile ends

Researchers from Kanazawa University and the Chinese Academy of Social Sciences cast light on the production of roof tiles during the Tang dynasty through a study of variations in lotus-pattern tile ends recovered from the Ximing Temple in Xi'an

KANAZAWA UNIVERSITY

Research News

IMAGE

IMAGE: BASIC INFORMATION ABOUT TILE ENDS AND IMBRICES. THE FIGURE SHOWS THE STRUCTURE OF A TILE END AND HOW TILE ENDS AND IMBRICES ARE USED.

CREDIT: KANAZAWA UNIVERSITY

Kanazawa, Japan -- Any visitor to China will have noticed the spectacular roofs on buildings dating from imperial times. However, the question of how these roof tiles were produced has attracted relatively little attention from archaeologists. Now, a team of researchers has conducted a major study of tile ends unearthed at the Ximing Temple in Xi'an, yielding exciting insights into their production.

In a study published in Archaeological Research in Asia, researchers from Kanazawa University and the Chinese Academy of Social Sciences have revealed the significance of minute variations in the tile ends used in the roof of the famous Ximing Temple in Xi'an, built during the Tang dynasty (618-907 AD) when Xi'an (then known as Chang'an) was the imperial capital.

The researchers conducted an investigation of 449 tile ends with lotus patterns from various periods during the Tang dynasty that had been recovered from the Ximing Temple. "We were interested in the variations in the tile ends, both those within the conscious control of the artisans who made the tiles, such as whether to use simple or complex lotus patterns, and those outside their control, such as the marks left by the deterioration of the molds used to make the tiles," says lead author of the study Meng Lyu.

"We discovered that the degree of minor variation in the tile ends increases significantly in the later samples," adds author Guoqiang Gong. "This suggests to us that there was a shift away from the centralized manufacturing of imperial building materials during the Early Tang period toward one in which small private artisans played an important role in the Late Tang period."

Intriguingly, the study has revealed traces of the coming together of two distinct cultural traditions. "We found that there were, in fact, two separate production systems at work to make the tile ends," notes author Chunlin Li. "One produced tile ends with compound petal patterns and curved incisions, whereas the other made tile ends with simple petal patterns and scratched incisions." These two styles may ultimately have their origins in an earlier historical period, when the Northern Wei dynasty was divided into two regimes on either side of the Taihang mountain range.

This study demonstrates that studying the roof tiles of China's grand imperial buildings can reveal a great deal about the circumstances of their production and yield insights into larger historical questions.

CAPTION

Shaping stage in the tile-end production process. The shaping stage most likely followed this sequence: 1. Design; 2. Making the first-level mold; 3. Making the ceramic second-level mold; 4. Making the tile end. The use of two different levels of mold enabled artisans to produce the required numbers of tile ends over a relatively short period of time.

CREDIT

Kanazawa University



CAPTION

Incisions on the back surfaces of tile ends and patterns on the front surfaces. Artisans firmly joined tile ends to imbrices through a process which left obvious traces on the back surfaces of the tile ends. Tile ends with simple petals mostly contain thin, radially oriented scratched incisions (Fig. 3.1), while those with compound petals usually contain wide, triangular-shaped curved incisions (Fig. 3.2). The correlation between pattern and processing technique identifies two production systems at the Ximing Temple workshop.

CREDIT

Kanazawa University


Consumption of added sugar doubles fat production

UNIVERSITY OF ZURICH

Research News

Sugar is added to many common foodstuffs, and people in Switzerland consume more than 100 grams of it every day. The high calorie content of sugar causes excess weight and obesity, along with the diseases associated with them. But does too much sugar have any other harmful effects if consumed regularly? And if so, which sugars in particular?

Even moderate amounts of sugar increase fat synthesis

Researchers at the University of Zurich (UZH) and the University Hospital Zurich (USZ) have been investigating these questions. Compared to previous studies, which mainly examined the consumption of very high amounts of sugar, their results show that even moderate amounts lead to a change in the metabolism of test participants. "Eighty grams of sugar daily, which is equivalent to about 0.8 liters of a normal soft drink, boosts fat production in the liver. And the overactive fat production continues for a longer period of time, even if no more sugar is consumed," says study leader Philipp Gerber of the Department of Endocrinology, Diabetology and Clinical Nutrition.

Ninety-four healthy young men took part in the study. Every day for a period of seven weeks, they consumed a drink sweetened with one of several types of sugar, while a control group did not. The drinks contained either fructose, glucose or sucrose (table sugar, a combination of fructose and glucose). The researchers then used tracers (labeled substances that can be traced as they move through the body) to analyze the effect of the sugary drinks on lipid metabolism.

Fructose and sucrose double fat production beyond food intake

Overall, the participants did not consume more calories than before the study, as the sugary drink increased satiety and they therefore reduced their calorie intake from other sources. Nevertheless, the researchers observed that fructose has a negative effect: "The body's own fat production in the liver was twice as high in the fructose group as in the glucose group or the control group - and this was still the case more than twelve hours after the last meal or sugar consumption," says Gerber. Particularly surprising was that the sugar we most commonly consume, sucrose, boosted fat synthesis slightly more than the same amount of fructose. Until now, it was thought that fructose was most likely to cause such changes.

Development of fatty liver or diabetes more likely

Increased fat production in the liver is a significant first step in the development of common diseases such as fatty liver and type-2 diabetes. From a health perspective, the World Health Organization recommends limiting daily sugar consumption to around 50 grams or, even better, 25 grams. "But we are far off that mark in Switzerland," says Philipp Gerber. "Our results are a critical step in researching the harmful effects of added sugars and will be very significant for future dietary recommendations."

###





UK variant spread rapidly in care homes in England

The UK variant of SARS-CoV-2 spread rapidly in care homes in England in November and December last year, broadly reflecting its spread in the general population, according to a study by UCL researchers

NOW SPREADING ACROSS ALBERTA, CANADA, AND USA

UNIVERSITY COLLEGE LONDON

Research News

The UK variant of SARS-CoV-2 spread rapidly in care homes in England in November and December last year, broadly reflecting its spread in the general population, according to a study by UCL researchers.

The study, published as a letter in the New England Journal of Medicine, looked at positive PCR tests of care home staff and residents between October and December. It found that, among the samples it had access to, the proportion of infections caused by the new variant rose from 12% in the week beginning 23 November to 60% of positive cases just two weeks later, in the week beginning 7 December.

In the south east of England, where the variant was most dominant, the proportion increased from 55% to 80% over the same period. In London, where the variant spread fastest, the proportion increased from 20% to 66%.

The researchers said the timing of infections suggested the new variant may have been passed from staff to residents, with positive cases among older people occurring later.

Senior author Dr Laura Shallcross (UCL Institute of Health Informatics) said: "Our findings suggest the UK variant spread just as quickly in care homes as it did in the general population. This shows the importance of public health measures to reduce transmission in the country as a whole."

Lead author Dr Maria Krutikov (UCL Institute of Health Informatics) said: "Our results are consistent with national trends, suggesting that the UK variant was present in care homes from early on, although our sample did not fully represent all care homes in England. As we carried out this work in December, we were able to inform public health decisions at the time.

"To see how viruses like Covid-19 are changing and to respond quickly and appropriately, it is really important we have an advanced surveillance system, with gene sequencing that can identify new variants as early as possible."

For the study, researchers analysed 4,442 positive PCR samples from care home staff and residents in England. These were all the positive tests of staff and residents processed from October to December at the Lighthouse laboratory in Milton Keynes, one of the UK's biggest coronavirus testing labs. Staff in care homes are tested every week, while residents are tested monthly.

PCR tests for SARS-CoV-2 are designed to detect three parts of the virus - the S gene, the N gene, and ORF1ab. The UK variant, known as B.1.1.7, has changes in its S gene, or spike gene, which mean the tests do not detect this particular target.

This means researchers were able to identify the proportion of infections caused by the new variant by looking at the samples in which the other two targets, the N gene and ORF1ab, were detected, but not the S gene.

They also compared Ct values, which indicate how much virus is present (a lower Ct value means more viral material), to check that samples did not miss the S gene simply because they were "weaker" positive tests with less viral material.
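In data-analysis terms, this S-gene "dropout" rule reduces to a simple filter-and-count. The following is a minimal sketch of that logic, assuming a hypothetical table of per-target Ct values; the column names, Ct threshold, and data are illustrative assumptions, not the study's actual pipeline.

```python
import pandas as pd

# Illustrative only: a hypothetical table of per-target Ct values (column names
# are assumptions, not the study's data format). NaN/None means "not detected".
samples = pd.DataFrame({
    "week":      ["2020-11-23", "2020-11-23", "2020-12-07", "2020-12-07"],
    "ct_orf1ab": [22.0, 28.5, 21.5, 24.3],
    "ct_n":      [23.1, 29.0, 22.0, 25.0],
    "ct_s":      [22.8, None, None, None],  # missing S gene is consistent with B.1.1.7
})

CT_STRONG = 30  # assumed cut-off: only classify dropout in "strong" positives

detected = samples[["ct_orf1ab", "ct_n"]].notna().all(axis=1)
strong = samples[["ct_orf1ab", "ct_n"]].lt(CT_STRONG).all(axis=1)

strong_positives = samples[detected & strong].copy()
strong_positives["sgtf"] = strong_positives["ct_s"].isna()  # S-gene target failure

# Weekly proportion of strong positives consistent with the new variant:
print(strong_positives.groupby("week")["sgtf"].mean())
```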

Their analysis showed that in late November, the proportion of infections associated with B.1.1.7 increased sharply in several regions of England. In London, this was from 20% (week beginning 23 November) to 66% (week beginning 7 December). In the east of England, it rose from 35% to 64% over the same period, while in the south east the increase was from 55% to 80%. The data was predominantly drawn from London, the south east and east of England and the Midlands, with fewer positive test samples from the north of England and the south west.

Most samples were from people aged under 65, as staff are tested much more frequently than residents. However, among samples from those aged over 65, the proportion of infections caused by the new variant rose from 14% in the week beginning 23 November to 76% in the week beginning 7 December. (The number of total positive samples was low - just 21 and 157 respectively.)

The research was conducted as part of the Vivaldi study looking at Covid-19 infections in care homes. It received support and funding from the Department of Health and Social Care.

###

Peer reviewed study / observational / people and cells

A new way to measure human wellbeing towards sustainability

INTERNATIONAL INSTITUTE FOR APPLIED SYSTEMS ANALYSIS

Research News

From science to implementation: How do we know if humankind is moving in the right direction towards global sustainability? The ambitious aim of the SDGs is a global call to action to end poverty, protect the planet, and ensure all people enjoy peace and prosperity by 2030. To monitor progress towards these goals, a set of over 220 indicators is used, but there is a danger that one can no longer see the forest for the trees. A single comprehensive indicator to assess overall progress is needed. In a new paper published in the Proceedings of the National Academy of Sciences (PNAS), IIASA researchers and colleagues from the University of Vienna, the Vienna Institute of Demography (Austrian Academy of Sciences), and Bocconi University present a bespoke indicator based on life expectancy and benchmarks of objective and subjective wellbeing: the Years of Good Life (YoGL) indicator.

"Many existing indicators of wellbeing do not consider the basic fact that being alive is a prerequisite for enjoying any quality of life. In addition, they often disregard the length of a life. Life expectancy has long been used as a very comprehensive indicator of human development, with avoiding premature death being a universally shared aspiration. However, mere survival is not enough to enjoy life and its qualities," explains lead author Wolfgang Lutz, Founding Director of the Wittgenstein Centre for Demography and Global Human Capital, a collaborative center of the Austrian Academy of Sciences (Vienna Institute of Demography), International Institute for Applied Systems Analysis, and University of Vienna. "The Years of Good Life indicator only counts a year as a good year if individuals are simultaneously not living in absolute poverty, free from cognitive and physical limitations, and report to be generally satisfied with their lives."

The results show that YoGL differs substantially between countries. While in most developed countries, 20-year-old women can expect to have more than 50 years of good life left (with a record of 58 years in Sweden), women in the least developed countries can expect less than 15 years (with a record low of 10 years for women in Yemen). While life expectancy is higher for women than for men in every country, female Years of Good Life are lower than those of males in most developing countries. This reveals a significant gender inequality in objective living conditions and subjective life satisfaction in most of these countries.

The paper - funded by an Advanced Grant to Lutz from the European Research Council - presents a first step in the great challenge of comprehensively assessing sustainable human wellbeing in a way that also considers feedbacks from environmental change. Unlike many other indicators, YoGL is not restricted to the national level but can be assessed for flexibly defined sub-populations and over long time horizons, because it has substantive meaning in its absolute value. It also has the potential to become a broadly used "currency" for measuring the benefits of certain actions, complementing assessments based on purely monetary units. For example, the social costs of carbon could potentially be evaluated in terms of Years of Good Life lost among future generations, rather than only in dollar terms - making it a key indicator for measuring sustainable progress in an integrated and tangible way. Applying the same logic to the recent COVID-19 pandemic, study coauthor Erich Striessnig adds that YoGL also represents a major improvement over conventional indicators in assessing the long-term success of intervention measures.

"If we used YoGL as a currency to measure the long-term impacts of the ongoing crisis rather than GDP per capita or life expectancy, we would not only account for the material losses and the lost life years, but also for the losses in physical and cognitive wellbeing, as well as for the losses incurred by the younger generations in terms of their human capital resulting from school closures. Lack of consistent data that is needed to calculate YoGL does of course remain an issue. Political decision makers should, however, aim for improved data availability to make better informed decisions based on indicators such as YoGL," Striessnig concludes.

###

Reference

Lutz, W., Striessnig, E., Dimitrova, A., Ghislandi, S., Lijadi, A., Reiter, C., Spitzer, S., & Yildiz, D. (2021). Years of Good Life (YoGL) is a wellbeing indicator designed to serve research on sustainability. Proceedings of the National Academy of Sciences (PNAS). DOI: 10.1073/pnas.1907351118

About IIASA:

The International Institute for Applied Systems Analysis (IIASA) is an international scientific institute that conducts research into the critical issues of global environmental, economic, technological, and social change that we face in the twenty-first century. Our findings provide valuable options to policymakers to shape the future of our changing world. IIASA is independent and funded by prestigious research funding agencies in Africa, the Americas, Asia, and Europe.
http://www.iiasa.ac.at

 

Study finds American mink to be main limiting factor of European mink

ESTONIAN RESEARCH COUNCIL

Research News

IMAGE

IMAGE: RADIOTRACKING

CREDIT: MADIS PÕDRA

The disappearance of species from their natural habitats is a growing problem, which unfortunately means that the need for intensive management, including ex situ conservation and translocations, is also growing. For a translocation to be successful, risk factors must be removed from the area. In the course of reestablishment, it is important to assess how well captive-bred animals adapt to the wild in order to improve release strategies and methods.

The doctoral thesis of Madis Põdra focused on the translocation of captive-bred European mink. It evaluated both how efficiently the released animals adapted and the influence of the American mink, the main threat to the species. This was achieved by analysing the spread of the invasive species in Spain. The translocation of the European mink was assessed in two regions - the Salburua wetland in northern Spain and Hiiumaa in Estonia. In Salburua, the abundance of the American mink was reduced before releasing the European mink; in Hiiumaa, the alien species was removed entirely. Twenty-seven European minks were released in the Salburua wetland (2008-2010) and 172 in Hiiumaa (2000-2003). To monitor the adaptation of the released animals, radio-tracking and live trapping were used. The researchers studied the minks' survival, causes of death, movements, and dietary acclimatization.

"My thesis confirms that the American mink is the main obstacle in reintroducing the European mink," explains Põdra. "If we want to reintroduce the European mink successfully, the alien species must be removed entirely. Captive-bred European mink are capable of adapting and surviving in the wild. The first month or month and a half is the most critical stage: at that time, the death rate of released animals is relatively high. Later, their behaviour starts to resemble that of wild minks."

This doctoral thesis is particularly interesting because the researchers managed to evaluate the efficiency of adaptation in considerable detail. Similar translocations have been studied on numerous occasions, but the majority of studies focus on the survival of the released animals, often leaving the question of 'why' unanswered. In Spain, Madis Põdra showed that the American mink has a significant influence on the translocation of the European mink even when the abundance of the alien species is low and the European mink has been well prepared for life in the wild. It is known that the European mink competes with the American mink for habitats, but with his research, Madis Põdra showed that the American mink can also prey directly on the native species. The results obtained in Hiiumaa showed that captive-bred specimens are capable of adapting to life in the wild, but the process is influenced by multiple factors, such as the sex of the released animals and their living conditions in captivity. In addition, released European mink tended to move into unsuitable habitats and had difficulties catching prey. This indirectly affects their survival - although larger predators are the proximate causes of death, the ultimate cause may be a syndrome of maladaptations.



CAPTION

Madis Põdra, a doctoral student at the School of Natural Sciences and Health of Tallinn University.

CREDIT

Madis Põdra

The supervisors of the doctoral thesis are Tiit Maran, visiting lecturer from the Estonian University of Life Sciences, and Tiiu Koff, visiting professor and research track associate professor at Tallinn University. The opponents are Professor Asko Lõhmus from the University of Tartu and John G. Ewen, senior research fellow at the London Institute of Zoology.

The dissertation is available in the ETERA digital environment of the TU Academic Library: https://www.etera.ee/zoom/110294/view?page=1&p=separate&tool=info&view=0,0,2067,2835

 

Militarization negatively influences green growth

This was concluded by economists who studied the indicators of 21 OECD countries from 1980 to 2016

URAL FEDERAL UNIVERSITY

Research News

IMAGE

IMAGE: LAND VEHICLES, AIRCRAFT, AND SEA-VESSELS CONSUME A GARGANTUAN AMOUNT OF FOSSIL FUELS.

CREDIT: URFU / ILYA SAFAROV.

Military expenditures are highly counterproductive to green economic growth, according to a recent study conducted by a UrFU economist in collaboration with an international research team. Sustainable economic development, or green growth, requires cleaner energy and green technology that can mitigate the negative externalities (e.g., carbon emissions) of economic growth. The study utilized various macroeconomic indicators for 21 OECD countries over the years 1980-2016. This empirical study, focusing on the dynamic impact of innovation, militarization and renewable energy on the green economy, is published in the journal Environmental Science and Pollution Research.
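The release does not give the econometric specification, but the relationship it describes - green growth modeled against militarization, renewable energy, and innovation across 21 countries over 1980-2016 - can be sketched as a simple panel regression. Everything below (the file name, variable names, and the plain fixed-effects estimator) is an assumption for illustration, not the authors' actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per country-year (names assumed).
# Expected columns: country, year, green_growth, milex, renewables, patents.
df = pd.read_csv("oecd_panel_1980_2016.csv")

# A plain two-way fixed-effects sketch with country-clustered standard errors;
# the study's dynamic specification would be more elaborate than this.
model = smf.ols(
    "green_growth ~ milex + renewables + patents + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

print(model.params[["milex", "renewables", "patents"]])
```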

On the one hand, the military industry's land vehicles, aircraft, and sea vessels consume a gargantuan amount of fossil fuels. About 75% of global non-renewable energy consumption (coal, gas, oil) is attributable to military activities, the economists claim. According to the BP report (without division by sectors), the five main consumers of oil, gas and coal in 2019 were China (120.64 EJ), the United States (78.81 EJ), India (31.01 EJ), Russia (26.2 EJ), and Japan (16.33 EJ).

On the other hand, militarization is one of the main sources of air and environmental pollution.

"Although there is a discrepancy in the environmental damages across the nations, the opulent countries invariably resume causing a challenge to the global ecosystem compared to the impoverishment of counterparts. For example, the Pentagon is the glaring example of a paramount consumer of non-renewable resources. The US maintains hundreds of military bases in sixty countries exclusively. Accordingly, recent armed forces' equipment consistently becomes extra capital, more resource-intensive, and waste-generative as they have a substantial dependency on fossil fuels. In the act of assessing, supporting, and maintaining an arsenal of weapons, a substantial amount of toxic substances is released which is known to cause harm to the land and water adjacent to the military bases and the surrounding communities," - says Sohag Kazi, co-author, senior researcher at the Department of Econometrics and Statistics, Ural Federal University.

Economists are not calling for abandoning militarization. Their suggestion is not to increase the annual funding for the military-industrial complex and to use renewable energy sources for military needs. The researchers argue that switching from non-renewable energy to renewable energy in the production process would not significantly affect output but would reduce carbon emissions.

"It is highly unlikely that governments would reduce the budget allocated for the defense purchases in developed countries for various reasons. However, we have a cautionary remark regarding the operation and maintenance of the military expenditures on green growth. It is recommended that developed countries curtail their military expenditures and non-renewable energy usage, and instead conduct their military operations more cautiously, certainly by using of renewable energy technology, which should help to contribute to a better world," - says Sohag Kazi.

The study was conducted with the participation of economists from Ural Federal University (Russia), the University of Western Australia (Australia), Drexel University (USA), the University of Economics (Vietnam), and Universiti Teknologi MARA (Malaysia).

Military expenditure comprises all current and capital spending on the armed forces, including peacekeeping forces and defense establishments. It also includes government agencies engaged in defense projects, paramilitary forces when they are trained and equipped for military operations, and military space activities.



Imposter syndrome is common among high achievers in med school

A high percentage of medical students feel like "imposters" during their first year of medical school, a feeling that is associated with increased levels of distress.

THEY HAVE YET TO BE INITIATED INTO THE GATEKEEPERS LODGE OF THE BROTHERHOOD OF HIPPOCRATES

THOMAS JEFFERSON UNIVERSITY

Research News

PHILADELPHIA - Imposter syndrome is a considerable mental health challenge for many throughout higher education. It is often associated with depression, anxiety, low self-esteem, self-sabotage and other traits. Researchers at the Sidney Kimmel Medical College at Thomas Jefferson University wanted to learn to what extent incoming medical students displayed characteristics of imposter syndrome, and found that up to 87% of an incoming class reported a high or very high degree of imposter syndrome.

"Distress and mental health needs are critical issues among medical students," says Susan Rosenthal, MD, lead author of the study published in the journal Family Medicine. "This paper identifies how common imposter syndrome is, and the personality traits most associated with it, which gives us an avenue to address it."

Medical students nationwide report alarming rates of depression, anxiety and burnout. Identifying and intervening to support psychological well-being in these learners is a continuing challenge, especially among first year medical students.

Dr. Rosenthal and her colleagues examined imposter syndrome, which is defined as inappropriate feelings of inadequacy among high achievers, using a validated survey tool called the Clance Imposter Phenomenon (IP) Scale. Of the 257 students who completed the survey, 87% reported high or very high levels of imposter syndrome, and these students were likely to show an even higher degree of imposter syndrome at the end of their first year. The researchers also found that higher IP scores were associated with lower scores for self-compassion, sociability and self-esteem, and higher scores on neuroticism/anxiety. Therefore, a high IP score among entering students may be an indicator of future risk of experiencing psychological distress during medical school.

"Imposter syndrome is a malleable personality construct, and is therefore responsive to intervention," says Dr. Rosenthal, who is also the medical college's associate dean for Student Affairs. "Supportive feedback and collaborative learning, mentoring by faculty, academic support, individual counseling and group discussions with peers are all helpful. For many students, the most powerful first step in addressing and ameliorating imposter syndrome is normalizing this distorted and maladaptive self-perception through individual sessions with faculty and mentored small-group discussions with peers."

It is of interest to note that the students in this study, the medical college's Class of 2020, were exposed to the traditional medical school curriculum. The following year, Jefferson introduced an innovative new curriculum, called JeffMD, which emphasizes collaborative learning with a faculty mentor and a small group of students. Dr. Rosenthal and colleagues plan to compare rates of imposter syndrome in students exposed to the new curriculum; they hope, and will test whether, this change in the learning environment can ameliorate feelings of imposterism.

###

Article reference: Susan Rosenthal, Yvette Schlussel, Mary Bit Yaden, Jennifer DeSantis, Kathryn Trayes, Charles Pohl, Mohammadreza Hojat, "Persistent Impostor Phenomenon Is Associated With Distress in Medical Students," Family Medicine, DOI: 10.22454/FamMed.2021.799997, 2021.

Visa costs higher for people from poor countries

Research team including Goettingen University sheds light on global inequality in travel permit costs

UNIVERSITY OF GÖTTINGEN

Research News

IMAGE

IMAGE: MAP OF THE WORLD SHOWING AVERAGE NUMBER OF DAYS THAT SOMEONE HAS TO WORK TO BE ABLE TO AFFORD A TOURIST VISA

CREDIT: GLOBAL VISA COST DATASET

How much do people have to pay for a travel permit to another country? A research team from Göttingen, Paris, Pisa and Florence has investigated the costs around the world. What they found revealed a picture of great inequality. People from poorer countries often pay many times what Europeans would pay. The results have been published in the journal Political Geography.

Dr Emanuel Deutschmann from the Institute of Sociology at the University of Göttingen, together with Professor Ettore Recchi, Dr Lorenzo Gabrielli and Nodira Kholmatova (from Sciences Po Paris, CNR-ISTI Pisa and EUI Florence respectively), compiled a new dataset on visa costs for travel between countries worldwide. The analysis shows that, on average, people from North Africa and South Asia pay more than three times as much for tourist visas (just under 60 US dollars) as people from Western Europe (around 18 US dollars).

The inequality becomes even greater when the differences in wealth between countries are taken into account. While Europeans usually only have to work for a fraction of a day to be able to afford a travel permit, in some African and Asian countries the visa costs are equivalent to several weeks or even months of the average income.
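The "days of work" comparison above boils down to dividing the visa fee by an average daily income. A minimal sketch of that calculation, using made-up figures rather than values from the Global Visa Cost Dataset:

```python
# Days of average income needed to afford a tourist visa.
# Illustrative numbers only; daily income is approximated here as
# GNI per capita / 365, which may differ from the study's income measure.
examples = {
    # country_of_origin: (visa_fee_usd, gni_per_capita_usd)  -- made-up values
    "Country A (Western Europe)": (18, 50_000),
    "Country B (South Asia)":     (60, 2_000),
    "Country C (North Africa)":   (60, 3_500),
}

for origin, (fee, gni) in examples.items():
    days = fee / (gni / 365)
    print(f"{origin}: {days:.1f} days of average income")
```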

"Our dataset provides information about a dimension of global inequality that has, so far, received little attention," says Deutschmann. "While Article 13 of the Universal Declaration of Human Rights states that every person has the right to move freely and to leave any country, including their own, in reality there are barriers at many different levels which can obstruct global mobility, depending on where you come from. And our data clearly shows that these barriers include visa costs."

###

Original publication:
Recchi, E., E. Deutschmann, L. Gabrielli & N. Kholmatova. 2021. The Global Visa Cost Divide: How and Why the Price for Travel Permits Varies Worldwide. Political Geography 86: 1-14.
https://doi.org/10.1016/j.polgeo.2021.102350

Thursday, March 18, 2021

What happened to Mars's water? It is still trapped there

New data challenges the long-held theory that all of Mars's water escaped into space

CALIFORNIA INSTITUTE OF TECHNOLOGY

Research News

Billions of years ago, the Red Planet was far more blue; according to evidence still found on the surface, abundant water flowed across Mars, forming pools, lakes, and deep oceans. The question, then, is where did all that water go?

The answer: nowhere. According to new research from Caltech and JPL, a significant portion of Mars's water--between 30 and 99 percent--is trapped within minerals in the planet's crust. The research challenges the current theory that the Red Planet's water escaped into space.

The Caltech/JPL team found that around four billion years ago, Mars was home to enough water to have covered the whole planet in an ocean about 100 to 1,500 meters deep, a volume roughly equivalent to half of Earth's Atlantic Ocean. But, by a billion years later, the planet was as dry as it is today. Previously, scientists seeking to explain what happened to the flowing water on Mars had suggested that it escaped into space, a victim of Mars's low gravity. Though some water did indeed leave Mars this way, it now appears that such an escape cannot account for most of the water loss.

"Atmospheric escape doesn't fully explain the data that we have for how much water actually once existed on Mars," says Caltech PhD candidate Eva Scheller (MS '20), lead author of a paper on the research that was published by the journal Science on March 16 and presented the same day at the Lunar and Planetary Science Conference (LPSC). Scheller's co-authors are Bethany Ehlmann, professor of planetary science and associate director for the Keck Institute for Space Studies; Yuk Yung, professor of planetary science and JPL senior research scientist; Caltech graduate student Danica Adams; and Renyu Hu, JPL research scientist. Caltech manages JPL for NASA.

The team studied the quantity of water on Mars over time in all its forms (vapor, liquid, and ice) and the chemical composition of the planet's current atmosphere and crust through the analysis of meteorites as well as using data provided by Mars rovers and orbiters, looking in particular at the ratio of deuterium to hydrogen (D/H).

Water is made up of hydrogen and oxygen: H2O. Not all hydrogen atoms are created equal, however. There are two stable isotopes of hydrogen. The vast majority of hydrogen atoms have just one proton within the atomic nucleus, while a tiny fraction (about 0.02 percent) exists as deuterium, or so-called "heavy" hydrogen, which has a proton and a neutron in the nucleus.

The lighter-weight hydrogen (also known as protium) has an easier time escaping the planet's gravity into space than its heavier counterpart. Because of this, the escape of a planet's water via the upper atmosphere would leave a telltale signature on the ratio of deuterium to hydrogen in the planet's atmosphere: there would be an outsized portion of deuterium left behind.

However, the loss of water solely through the atmosphere cannot explain both the observed deuterium to hydrogen signal in the Martian atmosphere and large amounts of water in the past. Instead, the study proposes that a combination of two mechanisms--the trapping of water in minerals in the planet's crust and the loss of water to the atmosphere--can explain the observed deuterium-to-hydrogen signal within the Martian atmosphere.
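One way to see why the deuterium signal alone cannot pin down the water budget is a toy Rayleigh-fractionation calculation. The sketch below is purely illustrative, with an assumed fractionation factor; it is not the model used in the Science paper.

```python
# Toy illustration of the deuterium argument (not the authors' model).
# Assumption: atmospheric escape removes hydrogen with a constant fractionation
# factor alpha < 1 (deuterium escapes less readily), while crustal hydration
# removes water with essentially no change to the D/H ratio.

ALPHA = 0.25  # assumed effective fractionation factor for escape

def dh_enrichment(fraction_remaining: float, alpha: float = ALPHA) -> float:
    """Rayleigh distillation: R/R0 = f**(alpha - 1) for the remaining reservoir."""
    return fraction_remaining ** (alpha - 1)

# If escape were the ONLY sink, a given D/H enrichment would pin down how much
# water was lost to space:
for remaining in (0.5, 0.2, 0.1):
    print(f"{1 - remaining:.0%} lost to space -> D/H enriched "
          f"{dh_enrichment(remaining):.1f}x")

# Adding a non-fractionating crustal sink means the same measured enrichment is
# compatible with a much larger initial water inventory, since only the escaped
# fraction drives the deuterium excess.
```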

When water interacts with rock, chemical weathering forms clays and other hydrous minerals that contain water as part of their mineral structure. This process occurs on Earth as well as on Mars. Because Earth is tectonically active, old crust continually melts into the mantle and forms new crust at plate boundaries, recycling water and other molecules back into the atmosphere through volcanism. Mars, however, is mostly tectonically inactive, and so the "drying" of the surface, once it occurs, is permanent.

"Atmospheric escape clearly had a role in water loss, but findings from the last decade of Mars missions have pointed to the fact that there was this huge reservoir of ancient hydrated minerals whose formation certainly decreased water availability over time," says Ehlmann.

"All of this water was sequestered fairly early on, and then never cycled back out," Scheller says. The research, which relied on data from meteorites, telescopes, satellite observations, and samples analyzed by rovers on Mars, illustrates the importance of having multiple ways of probing the Red Planet, she says.

Ehlmann, Hu, and Yung previously collaborated on research that seeks to understand the habitability of Mars by tracing the history of carbon, since carbon dioxide is the principal constituent of the atmosphere. Next, the team plans to continue to use isotopic and mineral composition data to determine the fate of nitrogen and sulfur-bearing minerals. In addition, Scheller plans to continue examining the processes by which Mars's surface water was lost to the crust using laboratory experiments that simulate Martian weathering processes, as well as through observations of ancient crust by the Perseverance rover. Scheller and Ehlmann will also aid in Mars 2020 operations to collect rock samples for return to Earth that will allow the researchers and their colleagues to test these hypotheses about the drivers of climate change on Mars.

###

The paper, titled "Long-term Drying of Mars Caused by Sequestration of Ocean-scale Volumes of Water in the Crust," was published in Science on 16 March 2021. This work was supported by a NASA Habitable Worlds award, a NASA Earth and Space Science Fellowship (NESSF) award, and a NASA Future Investigator in NASA Earth and Space Science and Technology (FINESST) award.

Researchers find a better way to measure consciousness

UNIVERSITY OF WISCONSIN-MADISON

Research News

MADISON, Wis. -- Millions of people are administered general anesthesia each year in the United States alone, but it's not always easy to tell whether they are actually unconscious.

A small proportion of those patients regain some awareness during medical procedures, but a new study of the brain activity that represents consciousness could prevent that potential trauma. It may also help both people in comas and scientists struggling to define which parts of the brain can claim to be key to the conscious mind.

"What has been shown for 100 years in an unconscious state like sleep are these slow waves of electrical activity in the brain," says Yuri Saalmann, a University of Wisconsin-Madison psychology and neuroscience professor. "But those may not be the right signals to tap into. Under a number of conditions -- with different anesthetic drugs, in people that are suffering from a coma or with brain damage or other clinical situations -- there can be high-frequency activity as well."

UW-Madison researchers recorded electrical activity in about 1,000 neurons surrounding each of 100 sites throughout the brains of a pair of monkeys at the Wisconsin National Primate Research Center during several states of consciousness: under drug-induced anesthesia, light sleep, resting wakefulness, and roused from anesthesia into a waking state through electrical stimulation of a spot deep in the brain (a procedure the researchers described in 2020).

"With data across multiple brain regions and different states of consciousness, we could put together all these signs traditionally associated with consciousness -- including how fast or slow the rhythms of the brain are in different brain areas -- with more computational metrics that describe how complex the signals are and how the signals in different areas interact," says Michelle Redinbaugh, a graduate student in Saalman's lab and co-lead author of the study, published today in the journal Cell Systems.

To sift out the characteristics that best indicate whether the monkeys were conscious or unconscious, the researchers used machine learning. They handed their large pool of data over to a computer, told the computer which state of consciousness had produced each pattern of brain activity, and asked the computer which areas of the brain and patterns of electrical activity corresponded most strongly with consciousness.
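The release does not name the specific algorithm, but the step it describes - label each epoch of brain activity with its state of consciousness, then ask which brain areas and signal features predict that label - corresponds to a standard supervised-classification-with-feature-importance workflow. A minimal sketch under that assumption, with synthetic stand-in data rather than the study's recordings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per recording epoch, one column per
# brain-region metric (spectral power, signal complexity, inter-area coupling...).
rng = np.random.default_rng(0)
n_epochs, n_features = 400, 12
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)  # 1 = conscious state, 0 = unconscious

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Which features (i.e., which regions/metrics) carry the most information
# about the state of consciousness?
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("most informative feature columns:", ranking[:3])
```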

The results pointed away from the frontal cortex, the part of the brain typically monitored to safely maintain general anesthesia in human patients and the part most likely to exhibit the slow waves of activity long considered typical of unconsciousness.

"In the clinic now, they may put electrodes on the patient's forehead," says Mohsen Afrasiabi, the other lead author of the study and an assistant scientist in Saalmann's lab. "We propose that the back of the head is a more important place for those electrodes, because we've learned the back of the brain and the deep brain areas are more predictive of state of consciousness than the front."

And while both low- and high-frequency activity can be present in unconscious states, it's complexity that best indicates a waking mind.

"In an anesthetized or unconscious state, those probes in 100 different sites record a relatively small number of activity patterns," says Saalmann, whose work is supported by the National Institutes of Health.

A larger -- or more complex -- range of patterns was associated with the monkeys' awake state.

"You need more complexity to convey more information, which is why it's related to consciousness," Redinbaugh says. "If you have less complexity across these important brain areas, they can't convey very much information. You're looking at an unconscious brain."

More accurate monitoring of patients undergoing anesthesia is one possible outcome of the new findings, and the researchers are part of a collaboration supported by the National Science Foundation working on applying the knowledge of key brain areas.

"Beyond just detecting the state of consciousness, these ideas could improve therapeutic outcomes from people with consciousness disorders," Saalmann says. "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness."

###

This research was supported by grants from the National Institutes of Health (R01MH110311 and P51OD011106), the Binational Science Foundation, and the Wisconsin National Primate Research Center.