Thursday, August 06, 2020

Methanol synthesis: Insights into the structure of an enigmatic catalyst

The catalyst used to produce methanol had long eluded all attempts to clarify its surface structure
RUHR-UNIVERSITY BOCHUM
Image: Holger Ruland, Daniel Laudenschleger and Martin Muhler (left to right) collaborated on the study. Credit: RUB, Marquard
Methanol is one of the most important basic chemicals used, for example, to produce plastics or building materials. To render the production process even more efficient, it would be helpful to know more about the copper/zinc oxide/aluminium oxide catalyst deployed in methanol production. To date, however, it hasn't been possible to analyse the structure of its surface under reaction conditions. A team from Ruhr-Universität Bochum (RUB) and the Max Planck Institute for Chemical Energy Conversion (MPI CEC) has now succeeded in gaining insights into the structure of its active site. The researchers describe their findings in the journal Nature Communications, published on 4 August 2020.
In a first, the team showed that the zinc component of the active site is positively charged and that the catalyst has two distinct copper-based active sites. "The state of the zinc component at the active site has been the subject of controversial discussion since the catalyst was introduced in the 1960s. Based on our findings, we can now derive numerous ideas on how to optimise the catalyst in the future," outlines Professor Martin Muhler, Head of the Department of Industrial Chemistry at RUB and Max Planck Fellow at MPI CEC. For the project, he collaborated with Bochum-based researcher Dr. Daniel Laudenschleger and Mülheim-based researcher Dr. Holger Ruland.
Sustainable methanol production
The study was embedded in the Carbon-2-Chem project, the aim of which is to reduce CO2 emissions by utilising metallurgical gases produced during steel production for the manufacture of chemicals. In combination with electrolytically produced hydrogen, metallurgical gases could also serve as a starting material for sustainable methanol synthesis. As part of the Carbon-2-Chem project, the research team recently examined how impurities in metallurgical gases, such as those produced in coking plants or blast furnaces, affect the catalyst. This research ultimately paved the way for insights into the structure of the active site.
Active site deactivated for analysis
The researchers had identified nitrogen-containing molecules - ammonia and amines - as impurities that act as catalyst poisons. These impurities deactivated the catalyst, but not permanently: once they disappear, the catalyst recovers by itself. Using a unique research apparatus developed in-house - a continuously operated flow apparatus with an integrated high-pressure pulse unit - the researchers passed ammonia and amines over the catalyst surface, temporarily deactivating the active site that contains a zinc component. Despite this site being deactivated, another reaction still took place on the catalyst: the conversion of ethene to ethane. The researchers thus detected a second active site operating in parallel, which contains metallic copper but has no zinc component.
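For context, the overall chemistry involved (standard textbook stoichiometry, not spelled out in the release) is the hydrogenation of carbon oxides to methanol, attributed here to the zinc-containing site, and the hydrogenation of ethene to ethane, which served as the probe reaction for the zinc-free, metallic-copper site:

```latex
% Methanol synthesis over the Cu/ZnO/Al2O3 catalyst (textbook stoichiometry):
\[ \mathrm{CO_2 + 3\,H_2 \rightleftharpoons CH_3OH + H_2O} \qquad \mathrm{CO + 2\,H_2 \rightleftharpoons CH_3OH} \]
% Probe reaction observed on the zinc-free, metallic-copper site:
\[ \mathrm{C_2H_4 + H_2 \rightarrow C_2H_6} \]
```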
Since ammonia and the amines are bound to positively charged metal ions on the surface, it was evident that zinc, as part of the active site, carries a positive charge.
###

Dear Dr... how our email style reveals much about our personalities

UNIVERSITY OF BATH
A new theory from psychologists at the University of Bath argues that how we communicate online, including via email and social media, reveals much about our personality and character types.
In an open letter in the journal Molecular Autism, the researchers at Bath and Cardiff highlight clear differences in electronic communication styles between autistic and non-autistic people. And they say these findings have wider relevance about how we communicate online and for being respectful of others' communication styles.
By looking at the ways in which email style differed between the two groups, the researchers observed fewer social niceties and less preamble (e.g. 'I hope you are well') in emails from autistic people, yet a stronger and more polite observance of formal address (e.g. 'Dear Dr...').
In autistic people, they noticed considerable attention to detail, often demonstrated by participants correcting the researcher, by highlighting grammatical errors or broken hyperlinks. But autistic people were also more open to correcting themselves, for example if they found spelling mistakes in their previous emails. Non-autistic people rarely seemed to make these corrections, likely fearing they would appear rude or silly.
They also noted that many autistic people communicated in precise, though socially unconventional ways (for example referring to their arrival time for a meeting as 14:08 or describing a meeting point with map coordinates). Such interactions almost never occurred when emails were exchanged with non-autistic people.
The analysis, say the researchers, is important for all of us - not just those with autism - in thinking about how we might better adapt our own styles and be more respectful of others. The researchers say that the autistic email style is far from a weakness and that we could benefit from adopting a more direct, efficient, and precise autistic-like style in our emails.
Dr Punit Shah from the Department of Psychology at Bath explained: "There is no right or wrong way to email, but there are definitely different email styles and that can be revealing of a whole host of characteristics. Our work only looked at the differences between non-autistic and autistic people, but this topic has much wider relevance and application. In a world where we are increasingly reliant on email communication, how we communicate online really matters.
"Some people may bash off emails in seconds, with little care for polite preamble, formalities, or spelling. But we must try not to read too much into how something is said and focus more on its function. We should also be more willing to give people 'the benefit of the doubt' if they seem rude as we don't know about their social-communication differences, potentially related to autism, or other contextual factors that might have influenced their electronic communication, for example managing child care while emailing remotely from home.
"On the other hand, for some people with autism and many others in society more generally, writing emails to friends and colleagues, or posting to social media can be challenging. For some people, this can create a block where, for fear of an 'email faux pas', they become unresponsive online. This can be problematic, potentially leading to feelings of stress and anxiety.
"In our fast-paced online world we will hopefully become as tolerant and respectful of different electronic communication styles as we are of social differences in face-to-face communications."
To read the open letter 'Electronic communication in autism spectrum conditions' see https://molecularautism.biomedcentral.com/articles/10.1186/s13229-020-00329-2.

How the seafloor of the Antarctic Ocean is changing - and the climate is following suit

ALFRED WEGENER INSTITUTE, HELMHOLTZ CENTRE FOR POLAR AND MARINE RESEARCH
The glacial history of the Antarctic is currently one of the most important topics in climate research. Why? Because worsening climate change raises a key question: How did the ice masses of the southern continent react to changes between cold and warm phases in the past, and how will they do so in the future? A team of international experts, led by geophysicists from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), has now shed new light on nine pivotal intervals in the climate history of the Antarctic, spread over 34 million years, by reconstructing the depth of the Southern Ocean in each one. These new maps offer insights into, for example, the past courses of ocean currents, and show that, in past warm phases, the large ice sheets of East Antarctica reacted to climate change in a similar way to how ice sheets in West Antarctica are doing so today. The maps and the freely available article have just been released in the online journal Geochemistry, Geophysics, Geosystems, a publication of the American Geophysical Union.
The Southern Ocean is one of the most important pillars of the Earth's climate system. Its Antarctic Circumpolar Current, the most powerful current on the planet, links the Pacific, Atlantic and Indian Oceans, and has effectively isolated the Antarctic continent and its ice masses from the rest of the world for over 30 million years. Then and now, ocean currents can only flow where the water is sufficiently deep and there are no obstacles like land bridges, islands, underwater ridges and plateaus blocking their way. Accordingly, anyone seeking to understand the climate history and glacial history of the Antarctic needs to know exactly what the depth and surface structures of the Southern Ocean's floor looked like in the distant past.
Researchers around the globe can now find this information in new, high-resolution grid maps of the ocean floor and data-modelling approaches prepared by a team of international experts led by geoscientists from the AWI, which cover nine pivotal intervals in the climate history of the Antarctic. "In the course of the Earth's history, the geography of the Southern Ocean has constantly changed, as continental plates collided or drifted apart, ridges and seamounts formed, ice masses shoved deposited sediments across the continental shelves like bulldozers, and meltwater transported sediment from land to sea," says AWI geophysicist and co-author Dr Karsten Gohl. Each process changed the ocean's depth and, in some cases, the currents. The new grid maps clearly show how the surface structure of the ocean floor evolved over 34 million years - at a resolution of ca. 5 x 5 kilometres per pixel, making them 15 times more precise than previous models.
Dataset reflects the outcomes of 40 years of geoscientific research in the Antarctic
In order to reconstruct the past water depths, the experts gathered geoscientific field data from 40 years of Antarctic research, which they then combined in a computer model of the Southern Ocean's seafloor. The basis consisted of seismic profiles gathered during more than 150 geoscientific expeditions which, laid end to end, cover half a million kilometres. In seismic reflection surveying, sound waves are emitted that penetrate the seafloor to a depth of several kilometres. The reflected signal is used to produce an image of the stratified sediment layers below the surface - a bit like cutting a piece of cake, which reveals the individual layers. The experts then compared the identified layers with sediment cores from the corresponding regions, which allowed them to determine the ages of most layers. In a final step, they used a computer model to 'turn back time' and calculate which sediment deposits were already present in the Southern Ocean at specific intervals, and to what depths within the seafloor they extended in the respective epochs.
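Conceptually, the 'turning back time' step amounts to stripping away every sediment layer younger than the target interval and asking how deep the seafloor was before that material arrived. The following is a minimal sketch of that idea in Python, with hypothetical layer ages and thicknesses; it deliberately ignores effects the real reconstruction accounts for, such as sediment compaction, erosion and isostatic adjustment of the crust.

```python
# Minimal, illustrative 'backstripping' sketch with hypothetical numbers.
# Real paleobathymetry reconstructions also correct for compaction,
# erosion and isostatic rebound, which are ignored here.

# Sediment layers at one grid cell: (age of deposition in Ma, thickness in m)
layers = [
    (5, 300),   # youngest layer, deposited ~5 million years ago
    (14, 450),
    (24, 600),
    (34, 800),  # oldest layer, deposited ~34 million years ago
]

present_water_depth_m = 3200  # present-day water depth at this grid cell


def paleo_water_depth(target_age_ma):
    """Water depth at the target age, obtained by removing every layer
    deposited after that time from the present-day seafloor."""
    removed = sum(thickness for age, thickness in layers if age < target_age_ma)
    return present_water_depth_m + removed


for age in (34, 24, 14, 5, 0):
    print(f"{age:>2} Ma: ~{paleo_water_depth(age)} m water depth")
```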
Turning points in the climate history of the Antarctic
They applied this approach to nine key intervals in the Antarctic's climate history, including the warm phase of the early Pliocene, five million years ago, which is widely considered to be a potential template for our future climate. Back then the world was 2 to 3 degrees Celsius warmer on average than today, partly because the carbon dioxide concentration in the atmosphere was as high as 450 ppm (parts per million). The IPCC (Special Report on the Ocean and Cryosphere in a Changing Climate, 2019) has cited this concentration as the best-case scenario for the year 2100; in June 2019 the level was 415 ppm. During that warm phase, the Antarctic ice shelves now floating on the ocean had most likely completely collapsed. "Based on the sediment deposits we can tell, for example, that in extremely warm epochs like the Pliocene, the large ice sheets in East Antarctica reacted in a very similar way to what we're currently seeing in ice sheets in West Antarctica," reports Dr Katharina Hochmuth, the study's first author and a former AWI geophysicist, who is now conducting research at the University of Leicester, UK.
Accordingly, the new maps provide data on important climatic conditions that researchers around the world need in order to accurately simulate the development of ice masses in their ice-sheet and climate models, and to produce more reliable forecasts. Researchers can also download the corresponding datasets from the AWI's Earth system database PANGAEA.
In addition to researchers from the AWI, experts from the following institutions took part in the study: (1) All Russia Scientific Research Institute for Geology and Mineral Resources of the Ocean, St. Petersburg, Russia; (2) St. Petersburg State University, Russia; (3) University of Tasmania, Australia; (4) GNS Science, Lower Hutt, New Zealand; and (5) the National Institute of Oceanography and Applied Geophysics, Italy.
The grid maps depict the geography of the Southern Ocean in the following key intervals in the climate history and glacial history of the Antarctic:
    (1) 34 million years ago - transition from the Eocene to the early Oligocene; the first continental-scale ice sheet on the Antarctic continent;
    (2) 27 million years ago - the early Oligocene;
    (3) 24 million years ago - transition from the Oligocene to the Miocene;
    (4) 21 million years ago - the early Miocene;
    (5) 14 million years ago - the mid-Miocene, Miocene Climatic Optimum (mean global temperature ca. 4 degrees Celsius warmer than today; high carbon dioxide concentration in the atmosphere);
    (6) 10.5 million years ago - the late Miocene, major continental-scale glaciation;
    (7) 5 million years ago - the early Pliocene (mean global temperature ca. 2 - 3 degrees Celsius warmer than today; high carbon dioxide concentration in the atmosphere);
    (8) 2.65 million years ago - transition from the Pliocene to the Pleistocene;
    (9) 0.65 million years ago - the Pleistocene.
The data on sediment cores was gathered in geoscientific research projects conducted in connection with the Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP), Integrated Ocean Drilling Program, and International Ocean Discovery Program (IODP).
###
The study was released in the journal Geochemistry, Geophysics, Geosystems under the following title:
K. Hochmuth, K. Gohl, G. Leitchenkov, I. Sauermilch, J.M. Whittaker, G. Uenzelmann-Neben, B. Davy, L. De Santis: The evolving paleobathymetry of the circum-Antarctic Southern Ocean since 34 Ma - a key to understanding past cryosphere-ocean developments, Geochemistry, Geophysics, Geosystems, DOI: 10.1029/2020GC009122
The freely available datasets corresponding to the maps can be found at the PANGAEA data portal (http://www.pangaea.de) or downloaded at https://doi.org/10.1594/PANGAEA.918663.
CRIMINAL CAPITALISM PAYS 

Study reveals impact of powerful CEOs and money laundering on bank performance

UNIVERSITY OF EAST ANGLIA
Banks with powerful CEOs and smaller, less independent boards are more likely to take risks and be susceptible to money laundering, according to new research led by the University of East Anglia (UEA).
The study tested for a link between bank risk and enforcements issued by US regulators for money laundering in a sample of 960 publicly listed US banks during the period 2004-2015.
The results, published in the International Journal of Finance and Economics, show that money laundering enforcements are associated with an increase in bank risk across several measures of risk. In addition, the impact of money laundering is heightened by the presence of powerful CEOs and only partly mitigated by large and independent executive boards.
Researchers Dr Yurtsev Uymaz and Prof John Thornton at UEA, and Dr Yener Altunbas of Bangor University, conclude that banks with powerful CEOs warrant the particular attention of regulators engaged in anti-money laundering efforts, especially when boards of directors are small and not independent.
The study is believed to be the first to show that money laundering is also a significant driver of bank risk, alongside banks' business models and ownership structures, the regulatory and supervisory framework, and market competition.
Previously, banking research on the determinants of risk-taking had largely ignored the potential role of money laundering, which the authors say is surprising given that combatting money laundering is a major focus of US and other bank regulators concerned with the stability of the financial system.
For example, the US Office of the Comptroller of the Currency views money laundering as posing risks to the safety and soundness of the financial industry, and to the safety of the nation more generally, as terrorists employ money laundering to fund their operations.
The Financial Action Task Force, the global money laundering and terrorist financing watchdog, also cites changes in money demand and increased volatility of international capital flows and exchange rates due to unanticipated cross-border asset transfers as being among the potential adverse economic consequences of money laundering.
Lead author Dr Uymaz, of UEA's Norwich Business School, said: "It is important to understand all possible risks, given those from money laundering have been increased by the growth in volume of cross-border transactions that have made banks inherently more vulnerable.
"They are also impacted by the fact that regulators are continually revising rules as their focus expands from organized crime to terrorism, while governments have expanded their use of economic sanctions to target individual countries, entities, and even specific individuals as part of their foreign policies.
"Money laundering exposes banks to serious reputational, operational, and compliance risks that could result in significant financial costs, for example through fines and sanctions by regulators, claims against the bank, investigation costs, asset seizures and freezes, and loan losses. It also results in the diversion of valuable management time and operational resources to resolve money laundering-related problems.
"We show that board size and independence can mitigate but not fully offset the impact of money laundering on bank risk, and that powerful CEOs impact adversely on bank risk taking and accentuate the negative impact of money laundering on risk."
The authors use three measures of bank risk. The first is default risk, the assumption being that money laundering enforcements could lead to the failure of an individual bank because of reputational damage and/or the impact of severe financial penalties on bank capital.
The second measure is systematic risk, where, for example, money laundering in the banking sector could be so widespread that it cannot be diversified away within the sector.
The final one is a measure of systemic risk, which captures the reaction of individual banks to systemic events, for example if financial penalties and other costs associated with money laundering enforcements have debilitated the bank.
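One way to picture the empirical setup is a regression of a bank-level risk measure on an enforcement indicator, governance variables, and an interaction term for CEO power. The sketch below uses invented numbers and a deliberately simplified specification; it is not the authors' model or data, only an illustration of the kind of test described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-level data; the actual study covers 960 listed US banks, 2004-2015.
data = pd.DataFrame({
    "default_risk":       [0.8, 1.1, 0.6, 1.4, 0.9, 1.3, 0.7, 1.2, 1.0, 1.5],
    "aml_enforcement":    [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # 1 = money laundering enforcement issued
    "ceo_power":          [0.2, 0.9, 0.1, 0.8, 0.3, 0.7, 0.2, 0.6, 0.4, 0.9],
    "board_size":         [12, 7, 14, 8, 11, 6, 13, 9, 10, 7],
    "board_independence": [0.7, 0.4, 0.8, 0.3, 0.6, 0.4, 0.7, 0.5, 0.6, 0.3],
})

# Regress the risk measure on the enforcement indicator and governance variables;
# the interaction term asks whether powerful CEOs amplify the enforcement effect.
model = smf.ols(
    "default_risk ~ aml_enforcement * ceo_power + board_size + board_independence",
    data=data,
).fit()
print(model.summary())
```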
###
The study 'Money laundering and bank risk: evidence from US banks', Yurtsev Uymaz, John Thornton and Yener Altunbas, is published in the International Journal of Finance and Economics.

Between shark and ray: The evolutionary advantage of angel sharks

Threatened with extinction despite perfect adaptation
UNIVERSITY OF VIENNA
The general picture of a shark is that of a large, fast ocean predator. Some species, however, challenge this image - for example angel sharks. They have adapted to a life on the ocean floor, where they lie in wait for their prey. In order to be able to hide on or in the sediment, the body of angel sharks became flattened in the course of their evolution, making them very similar to rays, which are closely related to sharks.
Flattened body as indication for a successful lifestyle
The oldest known complete fossils of angel sharks are about 160 million years old and demonstrate that the flattened body was established early in their evolution. This also indicates that these extinct angel sharks already had a lifestyle similar to that of their extant relatives - and that this lifestyle was evidently very successful.
Angel sharks are found all over the world today, ranging from temperate to tropical seas, but most of these species are threatened. In order to understand the patterns and processes that led to their present low diversity and the possible consequences of their particular anatomy, a team led by researchers at the University of Vienna studied the body shapes of angel sharks since their origins using modern methods.
Today's species are very similar
For this purpose, the skulls of extinct species from the Late Jurassic (about 160 million years ago) and of present-day species were quantitatively analysed using X-ray and CT images as well as prepared skulls, employing geometric-morphometric approaches. In this way, the evolution of their shape could be compared independently of body size.
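Geometric morphometrics typically starts from landmark coordinates digitised on the scans, which are then superimposed so that differences in position, scale and orientation are removed and only shape remains. Below is a minimal sketch of that superimposition step using SciPy's generic Procrustes routine and made-up landmark coordinates; the published study uses its own landmark scheme and analysis pipeline.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 2D landmark configurations for two neurocrania
# (rows = landmarks, columns = x/y coordinates). Real analyses use many
# more landmarks, often in 3D, digitised from CT scans.
skull_a = np.array([[0.0, 0.0], [2.0, 0.1], [1.0, 1.5], [0.2, 1.0]])
skull_b = np.array([[1.0, 1.0], [5.1, 1.3], [3.0, 4.2], [1.4, 3.0]])  # larger, shifted configuration

# Procrustes superimposition removes differences in position, scale and
# orientation, leaving only shape differences (the 'disparity').
aligned_a, aligned_b, disparity = procrustes(skull_a, skull_b)
print(f"Shape disparity after removing size and orientation: {disparity:.4f}")
```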
The results show that early angel sharks differed more from one another in external shape, whereas modern species show comparatively low variation in shape. "Many of the living species are difficult to identify on the basis of their skeletal anatomy and shape, which could be problematic for species recognition," explains Faviel A. López-Romero.
Angel sharks are well adapted, but react slowly to environmental changes
It has been shown that in living species the individual parts of the skull skeleton are more closely integrated than in their extinct relatives. This led to a reduced variability in appearance during the evolution of angel sharks. "The effect of integrating different parts of the skull into individual, highly interdependent modules can lead to a limited ability to evolve in different forms, but at the same time increases the ability to successfully adapt to specific environmental conditions," explains Jürgen Kriwet.
In the case of the angel sharks, increasing geographical isolation resulted in the development of different species with very similar adaptations. "But modular integration also means that such animals are no longer able to react quickly to environmental changes, which increases their risk of extinction," concludes Jürgen Kriwet.
###
Publication in Scientific Reports:
Evolutionary trends of the conserved neurocranium shape in angel sharks (Squatiniformes, Elasmobranchii). López-Romero, F. A., Stumpf, S., Pfaff, C., Marramà, G., Johanson, Z. & Kriwet, J. in: Scientific Reports.
DOI: 10.1038/s41598-020-69525-7

More carbon in the ocean can lead to smaller fish

UNIVERSITY OF CONNECTICUT
As humans continue to send large quantities of carbon into the atmosphere, much of that carbon is absorbed by the ocean, and UConn researchers have found high CO2 concentrations in water can make fish grow smaller.
Researchers Christopher Murray PhD '19, now at the University of Washington, and UConn Associate Professor of Marine Sciences Hannes Baumann have published their findings in PLOS ONE.
"The ocean takes up quite a bit of CO2. Estimates are that it takes up about one-third to one-half of all CO2 emissions to date," says Murray. "It does a fantastic job of buffering the atmosphere but the consequence is ocean acidification."
Life relies on chemical reactions and even a slight change in pH can impede the normal physiological functions of some marine organisms; therefore, the ocean's buffering effect may be good for land-dwellers, but not so good for ocean inhabitants.
Baumann explains that in the study of ocean acidification (or OA), researchers have tended to assume fish are too mobile and tolerant of heightened CO2 levels to be adversely impacted.
"Fish are really active, robust animals with fantastic acid/base regulatory capacity," says Murray. "So when OA was emerging as a major ocean stressor, the assumption was that fish are going to be OK, [since] they are not like bivalves or sea urchins or some of the other animals showing early sensitivities."
The research needed for drawing such conclusions requires long-term studies that measure potential differences between test conditions. With fish, this is no easy task, says Baumann, largely due to logistical difficulties in rearing fish in laboratory settings.
"For instance, many previous experiments may not have seen the adverse effects on fish growth, because they incidentally have given fish larvae too much food. This is often done to keep these fragile little larvae alive, but the problem is that fish may eat their way out of trouble -- they overcompensate - so you come away from your experiment thinking that fish growth is no different under future ocean conditions," says Baumann.
In other words, if fish are consuming more calories because their bodies are working harder to cope with stressors like high CO2 levels, a large food ration would mask any growth deficits.
Additionally, previous studies that concluded fish are not impacted by high CO2 levels involved long-lived species of commercial interest. Baumann and Murray overcame this hurdle by using a small, shorter-lived fish called the Atlantic silverside so they could study the fish across its life cycle. They conducted several independent experiments over the course of three years. The fish were reared under controlled conditions from the moment the eggs were fertilized until they were about 4 months old to see if there were cumulative effects of living in higher CO2 conditions.
Murray explains, "We tested two CO2 levels, present-day levels and the maximum level of CO2 we would see in the ocean in 300 years under a worst-case emissions scenario. The caveat to that is that silversides spawn and develop as larvae and early juveniles in coastal systems that are prone to biochemical swings in CO2 and therefore the fish are well-adapted to these swings."
The maximum CO2 level applied in the experiments is one aspect that makes this research novel, says Murray,
"That is another important difference between our study and other studies that focus on long-term effects; almost all studies to date have used a lower CO2 level that corresponds with predictions for the global ocean at the end of this century, while we applied this maximum level. So it is not surprising that other studies that used longer-lived animals during relatively short durations have not really found any effects. We used levels that are relevant for the environment where our experimental species actually occurs."
Baumann and Murray hypothesized that there would be small, yet cumulative, effects to measure. They also expected fish living in sub-ideal temperatures would experience more stress related to the high CO2 concentrations and that female fish would experience the greatest growth deficits.
The researchers also used the opportunity to study whether the varying CO2 conditions affected sex determination in the population. Sex determination in Atlantic silversides depends on temperature, but the influence of seawater pH is unknown. In some freshwater fish, low pH conditions produce more males in the population. However, the researchers found no evidence that high CO2 levels affected sex differentiation, and the growth of males and females appeared to be equally affected by high CO2.
"What we found is a pretty consistent response in that if you rear these fish under ideal conditions and feed them pretty controlled amounts of food, not over-feeding them, high CO2 conditions do reduce their growth in measurable amounts," says Murray.
They found a growth deficit of between five and ten percent, which Murray says amounts to only a few millimeters overall, but the results are consistent. The fish living at less ideal temperatures and more CO2 experienced greater reductions in growth.
Murray concludes that by addressing potential shortcomings of previous studies, the data are clear: "Previous studies have probably underestimated the effects on fish growth. What our paper is demonstrating is that indeed if you expose these fish to high CO2 for a significant part of their life cycle, there is a measurable reduction in their growth. This is the most important finding of the paper."
###
This work was funded by the National Science Foundation grant number OCE #1536165. You can follow the researchers on Twitter @baumannlab1 and @CMurray187.

'Price of life' lowest in UK during COVID-19 pandemic, study finds

UNIVERSITY OF EXETER
The price the UK government was prepared to pay to save lives during the COVID-19 pandemic was far lower than in many other developed nations, a study has revealed.
In a cross-country comparison across nine nations - Belgium, the US, Germany, South Korea, Italy, Denmark, China, New Zealand and the UK - researchers used epidemiological modelling to calculate how many lives were lost through delaying lockdown, estimating that a UK lockdown date just three days earlier would have saved 20,000 lives.
They then linked those policy decisions to the financial cost lockdown had on GDP, resulting in a 'price of life' estimate - the amount of money governments were willing to pay to protect their citizens' lives, reflected in the economic activity sacrificed.
The price of life in the UK was among the lowest at around $100,000, and lower still once under-reporting of COVID-19 deaths is accounted for. In contrast countries that were quicker to go into lockdown, such as Germany, New Zealand and South Korea, put a price on life in excess of $1million.
"Price of life estimates are of critical importance given that government intervention has the ability to save life, yet trades off against other goods," said lead author Ben Balmford, from the University of Exeter Business School.
"Comparing across countries those who pursued an early lockdown strategy reveal themselves to be willing to pay a high price to save their citizen's lives, only rejecting prices above $1m.
"However, some countries, those which imposed lockdown relatively late-on in their respective pandemics, were clearly only willing to pay far less."
The study addressed why countries have suffered such huge variations in death tolls and established how the timing of lockdowns impacted on mortality rates, complementing the official Covid-19 statistics with excess mortality data and taking into account socio-economic and demographic factors such as age, population density and income inequality.
Modelling mortality across the countries before simulating changes in the date of lockdown, the researchers calculated that 20,000 lives in the UK would have been saved by imposing lockdown three days earlier.
Even further delays would have cost yet more lives: 32,000 extra people would have died had lockdown come in three days later than it did; while a delay of 12 days would have cost more than 200,000 extra lives.
Similarly high figures were observed in other countries that acted relatively late - such as Italy - highlighting how earlier governmental action would have saved many more lives.
Price of life was then calculated using estimates of the financial cost of lockdown on GDP, comparing IMF forecasts pre-lockdown to the most recent figures and teasing apart the amount of GDP loss that comes from the effects of lockdown policy, as opposed to other factors.
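In essence, the implied price of life is a ratio: the additional GDP a government would have sacrificed by locking down earlier, divided by the additional lives that the earlier lockdown would have saved. The sketch below illustrates the arithmetic; the 20,000 lives-saved estimate for a three-days-earlier UK lockdown comes from the study, while the GDP figure is a purely hypothetical placeholder.

```python
# Illustrative 'price of life' arithmetic. The GDP number below is a
# hypothetical placeholder; the study derives its estimates from
# epidemiological modelling combined with IMF GDP forecasts.

lives_saved_by_earlier_lockdown = 20_000  # UK estimate from the study (lockdown 3 days earlier)
extra_gdp_loss_usd = 2.0e9                # hypothetical extra GDP cost of locking down 3 days earlier

implied_price_of_life = extra_gdp_loss_usd / lives_saved_by_earlier_lockdown
print(f"Implied price of life: ${implied_price_of_life:,.0f}")  # -> $100,000 with these inputs
```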
Imposing lockdown earlier on in a country's outbreak means saving more lives, but at a higher cost to the economy. This means countries that delayed their lockdowns such as the UK, US and Italy are revealed to price the lives of their citizens relatively low (at around $100,000) whereas the price of life in Germany, a country very similar to the UK in terms of GDP per capita, was $1.03m - around an order of magnitude higher.
For those countries whose governments acted quickest - South Korea and New Zealand - and whose response to date has been deemed most successful, the price of life was $6.7m and $11.6m respectively.
"Seemingly, much like a bird in the hand, cash flowing through the market is worth much more than value passing through wellbeing, at least to some countries," said Balmford.
"By choosing not to impose lockdowns three days earlier, governments rejected saving more lives when the price was relatively high.
"The same logic reveals them to have accepted the implied price of life from a delay - they would rather bear the cost in terms of GDP than as further human lives lost."
###
The study is published in the journal Environmental and Resource Economics.

Study suggests optimal social networks of no more than 150 people

U.S. ARMY RESEARCH LABORATORY
RESEARCH TRIANGLE PARK, N.C. -- New rules of engagement on the battlefield will require a deep understanding of networks and how they operate, according to new Army research. Researchers confirmed a theory finding that networks of no more than 150 people are optimal for efficient information exchange.
"This is the beginning of a new way to address competition and conflict in today's complex world," said Dr. Bruce West, senior scientist, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "To increase the utility of the Army's evolving network structures in terms of robustness, resilience, adaptability and efficiency, requires a deeper understanding of how networks actually function, both ours and those of our adversary."
Researchers at ARO and the University of North Texas tested a theory proposed by British anthropologist Robin Dunbar in the 1990s, which suggested that 150 is the largest group size at which humans can maintain stable social relations. In the vicinity of this size, the social group becomes unstable and splinters into smaller groups.
"It takes a network to defeat a network," wrote retired Army Gen. Stanley McChrystal, in his book Team of Teams. He discusses understanding the implications of the theory, abstracting from battlefield experiences in Iraq battling the loosely networked but effective terrorist organization Al Qaeda.
Researchers published their findings in the peer-reviewed Proceedings of the National Academy of Sciences of the United States of America. In their study, they prove Dunbar's conjecture, demonstrating that networks of certain sizes have better information transport properties than others, and that networks of no more than 150 are optimal for internally sharing information.
"A fundamental property of a network is the relation between its functionality and size, which is why understanding the source of the Dunbar Number is important," said West, a co-author of the paper.
The researchers propose that the number 150 arises as a consequence of internal dynamics of a complex network self-organizing within a social system.
Based on that theory, the researchers also indicated that a peaceful demonstration can be turned into a mob by just a few agitators, with groups of around 150 being the most vulnerable to such disruption.
"The 150 optimum has been observed by Dunbar and others, but Dr. West and colleagues are the first to computationally capture the theorized process of information dynamics, which are fundamental to problem-solving, development of group factions, and formation of cohesive groups," said Dr. Lisa Troyer, who manages ARO's social and behavioral sciences research program. "This is an important leap forward by for social science theory and will likely lead to further research and insights on collective action."
Dunbar predicted that social groups have optimal sizes. He referred to these group sizes as nested layers with a scaling ratio of approximately three. Consequently, he identified a sequence of cognitively efficient social group sizes - 5, 15, 50, 150 and 500 - explaining that these layers were not equal in terms of strength of relationships.
"The layering sequence is interesting because each number in the sequence is within a factor of two of the empirical magnitudes of entity sizes in the U.S. Army, ranging from a squad of roughly 15 to a platoon of approximately three times the squad size, next to a company consisting of three platoons and followed by a brigade the size of roughly three companies and so on," West said. "This is the intuition on which armies have been hierarchically constructed by military leaders since the Roman Empire."
According to West, understanding how information flows within, is analyzed by, and is accepted or rejected from groups of various sizes is crucial in the training of teams. He said that this is not only true in the development of a single team, but is just as important for the training of teams to work together, to form teams-of-teams.
"The size of a team may be the determining factor in the potential success of a complex mission that depends on adaptability and collective problem solving," West said. "The same understanding can be applied to the reverse process, that of insinuating disinformation within an adversarial group. The size of the group may at times be more important than the form the lie takes for its acceptance and immediate transmission, witness the recent riots."
###
CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Army's corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the command's core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more lethal to win the nation's wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

Dozens of pesticides linked with mammary gland tumors in animal studies

Findings have implications for how federal agencies assess pesticides for breast cancer risk
SILENT SPRING INSTITUTE
In an analysis of how regulators review pesticides for their potential to cause cancer, researchers at Silent Spring Institute identified more than two dozen registered pesticides that were linked with mammary gland tumors in animal studies. The new findings raise concerns about how the US Environmental Protection Agency (EPA) approves pesticides for use and the role of certain pesticides in the development of breast cancer.
Several years ago, a resident on Cape Cod in Massachusetts contacted researchers at Silent Spring looking for information on an herbicide called triclopyr. Utility companies were looking to spray the chemical below power lines on the Cape to control vegetation.
"We know pesticides like DDT increase breast cancer risk, so we decided to look into it," says co-author Ruthann Rudel, an environmental toxicologist and director of research at Silent Spring. "After examining pesticide registration documents from EPA, we found two separate studies in which rodents developed mammary gland tumors after being exposed to triclopyr, yet for some reason regulators dismissed the information in their decision not to treat it as a carcinogen."
When manufacturers apply to register a pesticide, EPA reviews existing studies and based on those studies assigns the chemical a cancer classification--for instance, how likely or unlikely the chemical is to cause cancer. After reviewing triclopyr, Silent Spring researchers wondered if evidence of mammary tumors was being ignored for other pesticides as well.
Reporting in the journal Molecular and Cellular Endocrinology, Rudel and Silent Spring scientist Bethsaida Cardona reviewed more than 400 EPA pesticide documents summarizing the health effects of each registered pesticide. They found a total of 28 pesticides linked with mammary gland tumors, yet EPA acknowledged only nine of them as causing mammary tumors and dismissed the evidence entirely for the remaining 19.
Rudel and Cardona also found that many of the pesticides in their analysis behaved like endocrine disruptors, for instance, by interfering with estrogen and progesterone. "Breast cancer is highly influenced by reproductive hormones, which stimulate the proliferation of cells within the breast, making it more susceptible to tumors," says Rudel. "So, it's important that regulators consider this kind of evidence. If they don't, they risk exposing people to pesticides that are breast carcinogens."
Traditionally, toxicologists focus on whether a chemical causes DNA damage when determining its potential to cause cancer. But recent findings in cancer biology show there are many ways chemicals can trigger the development of cancer. For example, chemicals can suppress the immune system, cause chronic inflammation, or disrupt the body's system of hormones, all of which can lead to the growth of breast tumors and other types of tumors as well.
"In light of our findings, we hope EPA updates its guidelines for assessing mammary gland tumors by considering evidence that more completely captures the biology of breast cancer, such as the effects of endocrine disruptors," says Cardona.
Rudel and Cardona recommend that EPA re-evaluate five pesticides in particular--IPBC, triclopyr, malathion, atrazine and propylene oxide--due to their widespread use and the evidence uncovered in the new analysis. IPBC is a preservative in cosmetics; triclopyr is an agricultural herbicide that is also used to control vegetation growth along rights-of-way; malathion is a common residential and agricultural pesticide and is used in some lice treatments; atrazine is one of the most commonly-used herbicides in agriculture; and propylene oxide is used to preserve food, cosmetics, and pharmaceuticals, and has many similarities with ethylene oxide, a known human carcinogen.
The project is part of Silent Spring Institute's Safer Chemicals Program which is developing new cost-effective ways of screening chemicals for their effects on the breast. Knowledge generated by this effort will help government agencies regulate chemicals more effectively and assist companies in developing safer products.
###
Funding for this project was provided by the National Institute of Environmental Health Sciences (NIEHS) Breast Cancer and the Environment Research Program (award number U01ES026130), the Cedar Tree Foundation, and Silent Spring Institute's Innovation Fund. The project was also supported by an NIEHS T32 Transdisciplinary Training at the Intersection of Environmental Health and Social Science grant (award number 1T32ES023769-01A1).
Reference:
Cardona, B. and R.A. Rudel. 2020. US EPA's regulatory pesticide evaluations need clearer guidelines for considering mammary gland tumors and other mammary gland effects. Molecular and Cellular Endocrinology. DOI: 10.1016/j.mce.2020.110927
About Silent Spring Institute:
Silent Spring Institute, located in Newton, Mass., is the leading scientific research organization dedicated to uncovering the link between chemicals in our everyday environments and women's health, with a focus on breast cancer prevention. Founded in 1994, the institute is developing innovative tools to accelerate the transition to safer chemicals, while translating its science into policies that protect health. Visit us at http://www.silentspring.org and follow us on Twitter @SilentSpringIns.

Surprisingly dense exoplanet challenges planet formation theories

Small telescope and inexpensive diffuser key to results
ASSOCIATION OF UNIVERSITIES FOR RESEARCH IN ASTRONOMY (AURA)
Image: New detailed observations with NSF's NOIRLab facilities reveal a young exoplanet, orbiting a young star in the Hyades cluster, that is unusually dense for its size and age. Credit: NOIRLab/NSF/AURA/J. Pollard
New detailed observations with NSF's NOIRLab facilities reveal a young exoplanet, orbiting a young star in the Hyades cluster, that is unusually dense for its size and age. Weighing in at 25 Earth-masses, and slightly smaller than Neptune, this exoplanet's existence is at odds with the predictions of leading planet formation theories.
New observations of the exoplanet, known as K2-25b, made with the WIYN 0.9-meter Telescope at Kitt Peak National Observatory (KPNO), a Program of NSF's NOIRLab, the Hobby-Eberly Telescope at McDonald Observatory and other facilities, raise new questions about current theories of planet formation [1]. The exoplanet has been found to be unusually dense for its size and age -- raising the question of how it came to exist. Details of the findings appear in The Astronomical Journal.
Slightly smaller than Neptune, K2-25b orbits an M-dwarf star -- the most common type of star in the galaxy -- in 3.5 days. The planetary system is a member of the Hyades star cluster, a nearby cluster of young stars in the direction of the constellation Taurus. The system is approximately 600 million years old, and is located about 150 light-years from Earth.
Planets with sizes between those of Earth and Neptune are common companions to stars in the Milky Way, despite the fact that no such planets are found in our Solar System. Understanding how these "sub-Neptune" planets form and evolve is a frontier question in studies of exoplanets.
Astronomers predict that giant planets form by first assembling a modest rock-ice core of 5-10 times the mass of Earth and then enrobing themselves in a massive gaseous envelope hundreds of times the mass of Earth. The result is a gas giant like Jupiter. K2-25b breaks all the rules of this conventional picture: with a mass 25 times that of Earth and modest in size, K2-25b is nearly all core and very little gaseous envelope. These strange properties pose two puzzles for astronomers. First, how did K2-25b assemble such a large core, many times the 5-10 Earth-mass limit predicted by theory? [2] And second, with its high core mass -- and consequent strong gravitational pull -- how did it avoid accumulating a significant gaseous envelope?
The team studying K2-25b found the result surprising. "K2-25b is unusual," said Gudmundur Stefansson, a postdoctoral fellow at Princeton University, who led the research team. According to Stefansson, the exoplanet is smaller in size than Neptune but about 1.5 times more massive. "The planet is dense for its size and age, in contrast to other young, sub-Neptune-sized planets that orbit close to their host star," said Stefansson. "Usually these worlds are observed to have low densities -- and some even have extended evaporating atmospheres. K2-25b, with the measurements in hand, seems to have a dense core, either rocky or water-rich, with a thin envelope."
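The 'dense for its size' statement follows directly from the bulk-density formula rho = M / ((4/3) * pi * R^3). The sketch below plugs in the roughly 25 Earth masses quoted in the article together with an illustrative radius a little below Neptune's (the exact radius is not given in this release), and compares the result with Neptune's bulk density of about 1.6 g/cm^3.

```python
import math

EARTH_MASS_KG = 5.972e24
EARTH_RADIUS_M = 6.371e6

# From the article: K2-25b weighs in at ~25 Earth masses and is slightly
# smaller than Neptune. The radius below (~3.4 Earth radii) is an
# illustrative assumption, not a value quoted in the release.
mass_kg = 25 * EARTH_MASS_KG
radius_m = 3.4 * EARTH_RADIUS_M  # Neptune is roughly 3.9 Earth radii

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0  # kg/m^3 -> g/cm^3

print(f"K2-25b bulk density (illustrative): ~{density_g_cm3:.1f} g/cm^3")
print("Neptune's bulk density for comparison: ~1.6 g/cm^3")
```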
To explore the nature and origin of K2-25b, astronomers determined its mass and density. Although the exoplanet's size was initially measured with NASA's Kepler satellite, the size measurement was refined using high-precision measurements from the WIYN 0.9-meter Telescope at KPNO and the 3.5-meter telescope at Apache Point Observatory (APO) in New Mexico. The observations made with these two telescopes took advantage of a simple but effective technique that was developed as part of Stefansson's doctoral thesis. The technique uses a clever optical component called an Engineered Diffuser, which can be obtained off the shelf for around $500. It spreads out the light from the star to cover more pixels on the camera, allowing the brightness of the star during the planet's transit to be more accurately measured, and resulting in a higher-precision measurement of the size of the orbiting planet, among other parameters [3].
"The innovative diffuser allowed us to better define the shape of the transit and thereby further constrain the size, density and composition of the planet," said Jayadev Rajagopal, an astronomer at NOIRLab who was also involved in the study.
For its low cost, the diffuser delivers an outsized scientific return. "Smaller aperture telescopes, when equipped with state-of-the-art, but inexpensive, equipment can be platforms for high impact science programs," explains Rajagopal. "Very accurate photometry will be in demand for exploring host stars and planets in tandem with space missions and larger apertures from the ground, and this is an illustration of the role that a modest-sized 0.9-meter telescope can play in that effort."
Thanks to the observations with the diffusers available on the WIYN 0.9-meter and APO 3.5-meter telescopes, astronomers are now able to predict with greater precision when K2-25b will transit its host star. Whereas before transits could only be predicted with a timing precision of 30-40 minutes, they are now known with a precision of 20 seconds. The improvement is critical to planning follow-up observations with facilities such as the international Gemini Observatory and the James Webb Space Telescope[4].
Many of the authors of this study are also involved in another exoplanet-hunting project at KPNO: the NEID spectrometer on the WIYN 3.5-meter Telescope. NEID enables astronomers to measure the motion of nearby stars with extreme precision -- roughly three times better than the previous generation of state-of-the-art instruments -- allowing them to detect, determine the mass of, and characterize exoplanets as small as Earth.

Notes

[1] The planet was originally detected by Kepler in 2016. Detailed observations for this study were made using the Habitable-zone Planet Finder on the 11-meter Hobby-Eberly Telescope at McDonald Observatory.
[2] The prediction from theory is that once planets have formed a core of 5-10 Earth-masses they begin to accrete gas instead: very little rocky material is added after that.
[3] Diffusers were first used for exoplanet observations in 2017.
[4] GHOST, on Gemini South, will be used to carry out transit spectroscopy of exoplanets found by Kepler and TESS. Their target list includes the star K2-25.

More information

This research was presented in a paper to appear in The Astronomical Journal.
The team is composed of Gudmundur Stefansson (The Pennsylvania State University and Princeton University), Suvrath Mahadevan (The Pennsylvania State University), Marissa Maney (The Pennsylvania State University), Joe P. Ninan (The Pennsylvania State University), Paul Robertson (University of California, Irvine), Jayadev Rajagopal (NSF's NOIRLab), Flynn Haase (NSF's NOIRLab), Lori Allen (NSF's NOIRLab), Eric B. Ford (The Pennsylvania State University), Joshua Winn (Princeton), Angie Wolfgang (The Pennsylvania State University), Rebekah I. Dawson (The Pennsylvania State University), John Wisniewski (University of Oklahoma), Chad F. Bender (University of Arizona), Caleb Cañas (The Pennsylvania State University), William Cochran (The University of Texas at Austin), Scott A. Diddams (National Institute of Standards and Technology, and University of Colorado), Connor Fredrick (National Institute of Standards and Technology, and University of Colorado), Samuel Halverson (Jet Propulsion Laboratory), Fred Hearty (The Pennsylvania State University), Leslie Hebb (Hobart and William Smith Colleges), Shubham Kanodia (The Pennsylvania State University), Eric Levi (The Pennsylvania State University), Andrew J. Metcalf (Air Force Research Laboratory, National Institute of Standards and Technology, and University of Colorado), Andrew Monson (The Pennsylvania State University), Lawrence Ramsey (The Pennsylvania State University), Arpita Roy (California Institute of Technology), Christian Schwab (Macquarie University), Ryan Terrien (Carleton College), and Jason T. Wright (The Pennsylvania State University).
NSF's National Optical-Infrared Astronomy Research Laboratory (NOIRLab), the US center for ground-based optical-infrared astronomy, operates the international Gemini Observatory (a facility of NSF, NRC-Canada, ANID-Chile, MCTIC-Brazil, MINCyT-Argentina, and KASI-Republic of Korea), Kitt Peak National Observatory (KPNO), Cerro Tololo Inter-American Observatory (CTIO), the Community Science and Data Center (CSDC), and the Vera C. Rubin Observatory. It is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with NSF and is headquartered in Tucson, Arizona. The astronomical community is honored to have the opportunity to conduct astronomical research on Iolkam Du'ag (Kitt Peak) in Arizona, on Maunakea in Hawaiʻi, and on Cerro Tololo and Cerro Pachón in Chile. We recognize and acknowledge the very significant cultural role and reverence that these sites have to the Tohono O'odham Nation, to the Native Hawaiian community, and to the local communities in Chile, respectively.
The WIYN 0.9-meter Telescope is based on a partnership between the WIYN Consortium, led by the University of Wisconsin-Madison and Indiana University, and NSF's NOIRLab. Its operations are supported by an international group of universities.
