Tuesday, May 25, 2021

Clues from soured milk reveal how gold veins form

For decades scientists have been puzzled by the formation of rare hyper-enriched gold deposits

MCGILL UNIVERSITY

Research News

IMAGE: The McGill colloidal Au research team studies a mineralized (gold-bearing) vein underground at the Brucejack Mine.

CREDIT: Duncan McLeish

For decades scientists have been puzzled by the formation of rare hyper-enriched gold deposits in places like Ballarat in Australia, Serra Pelada in Brazil, and Red Lake in Ontario. While such deposits typically form over tens to hundreds of thousands of years, these "ultrahigh-grade" deposits can form in years, months, or even days. So how do they form so quickly?

Studying examples of these deposits from the Brucejack Mine in northwestern British Columbia, McGill Professor Anthony Williams-Jones of the Department of Earth and Planetary Sciences and PhD student Duncan McLeish have discovered that these gold deposits form much like soured milk. When milk goes sour, the butterfat particles clump together to form a jelly.

Q&A with Anthony Williams-Jones and Duncan McLeish

What did you set out to find?

Scientists have long known that gold deposits form when hot water flows through rocks, dissolving minute amounts of gold and concentrating it in cracks in the Earth's crust at levels invisible to the naked eye. In rare cases, the cracks are transformed into veins of solid gold centimetres thick. But how do fluids with such low concentrations of gold produce rare ultrahigh-grade gold deposits?

What did you discover?

Our findings solve the paradox of "ultrahigh-grade" or "bonanza" gold formation, which has frustrated scientists for over a century. The paradox of bonanza gold deposits is that there is simply not enough time for them to form; they should not exist, but they do!

As the concentration of gold in hot water is very low, very large volumes of fluid need to flow through the cracks in the Earth's crust to deposit mineable concentrations of gold. This process would require millions of years to fill a single centimetre-wide crack with gold, whereas these cracks typically seal in days, months, or years.

Using a powerful electron microscope to observe particles in thin slices of rock, we discovered that bonanza gold deposits form from a fluid much like milk. Milk consists of little butterfat particles that are suspended in water because they repel each other, like the matching poles of two magnets. When the milk goes sour, the surface charge breaks down and the particles clump together to form a jelly. It is the same with gold colloids, which consist of charged nanoparticles of gold that repel each other; when the charge breaks down, they "flocculate" to form a jelly. This jelly gets trapped in the cracks of rocks to form the ultrahigh-grade gold veins. The gold colloids are distinctively red and can be made in the lab, whereas solutions of dissolved gold are colourless.

Why are the results important?

We produced the first evidence for gold colloid formation and flocculation in nature and the first images of small veins of gold colloid particles and their flocculated aggregates at the nano-scale. These images document the process by which the cracks are filled with gold and, scaled up through the integration of millions of these small veins, reveal how bonanza veins are formed.

How will this discovery impact the mining industry?

Our results are important to the mineral exploration and mining industry in Canada and around the world. Now that we finally understand how bonanza deposits form, mineral exploration companies will be able to use our results to explore more effectively for bonanza and other gold deposits. Genetic studies of Canada's most fertile metallogenic districts - such as the one we have just completed at Brucejack - are needed to improve our understanding of how world-class mineral deposits form, and thereby to develop more effective exploration strategies.

What's next for this research?

We suspect that the colloidal processes that operated at Brucejack and other bonanza gold systems may also have operated to form more typical gold deposits. The challenge will be to find suitable material to test this hypothesis. At Brucejack, the next step will be to better understand the reasons why colloid formation and flocculation occurred on the scale observed and reconstruct the geological environment of these processes. We have also been preparing gold colloids in the lab in an attempt to simulate what we discovered at Brucejack.


CAPTION

McGill Professor Anthony (Willy) Williams-Jones and Pretium Resources Inc. geologist Joel Ashburner study a mineralized (gold-bearing) vein on surface at the Brucejack mine.

CREDIT

Duncan McLeish



About this study

"Colloidal transport and flocculation are the cause of the hyperenrichment of gold in nature" by Duncan F. McLeish, Anthony E. Williams-Jones, Olga V. Vasyukova, James R. Clark, and Warwick S. Board was published in Proceedings of the National Academy of Sciences of the United States of America.

DOI: https://doi.org/10.1073/pnas.2100689118


CAPTION

Ultra-high-grade (bonanza) occurrence of gold in exploration drill core from the Brucejack mine.

CREDIT

Pretium Resources Inc.




About McGill University

Founded in 1821, McGill University is home to exceptional students, faculty, and staff from across Canada and around the world. It is consistently ranked as one of the top universities, both nationally and internationally. It is a world-renowned institution of higher learning with research activities spanning two campuses, 11 faculties, 13 professional schools, 300 programs of study and over 40,000 students, including more than 10,200 graduate students.

McGill's commitment to sustainability reaches back several decades and spans scales from local to global. The sustainability declarations that we have signed affirm our role in helping to shape a future where people and the planet can flourish.

https://www.mcgill.ca/newsroom/

 

Study shows which North American mammals live most successfully alongside people

UNIVERSITY OF CALIFORNIA - SANTA CRUZ

Research News

A team of researchers led by scientists at UC Santa Cruz analyzed data from 3,212 camera traps to show how human disturbance could be shifting the makeup of mammal communities across North America.

The new study, published in the journal Global Change Biology, builds upon the team's prior work observing how wildlife in the Santa Cruz Mountains respond to human disturbance. Local observations, for example, have shown that species like pumas and bobcats are less likely to be active in areas where humans are present, while deer and wood rats become bolder and more active. But it's difficult to generalize findings like these across larger geographic areas because human-wildlife interactions are often regionally unique.

So, to get a continent-wide sense for which species of mammals might be best equipped to live alongside humans, the team combined their local camera trap data with that of researchers throughout the U.S., Canada, and Mexico. This allowed them to track 24 species across 61 regionally diverse camera trap projects to see which larger trends emerged.

"We've been very interested for a long time in how human disturbance influences wildlife, and we thought it would be interesting to see how wildlife in general are responding to similar anthropogenic pressures across North America," said Chris Wilmers, an environmental studies professor and director of the Santa Cruz Puma Project, who is the paper's senior author alongside lead author Justin Suraci.

The team was especially interested in understanding how mammals respond to different types of human disturbance and whether these responses were related to species' traits, like body size, diet, and the number of young they have. Overall, the paper found that 33 percent of mammal species responded negatively to humans, meaning they were less likely to occur in places with higher disturbance and were less active when present, while 58 percent of species were actually positively associated with disturbance.

To get a closer look at these trends, the team broke their results down by two different types of human disturbance. One was the footprint of human development: the things that people build, like roads, houses, and agricultural fields. Another was the mere presence of people, including activities like recreation and hunting, since fear of humans can change an animal's behavior and use of space.

In comparing continent-wide data from camera trap locations with varying levels of human development, researchers found that grizzly bears, lynx, wolves, and wolverines were generally less likely to be found in more developed areas and were less active when they did visit. Moose and martens were also less active in areas with a higher development footprint.

Meanwhile, raccoons and white-tailed deer were actually more likely to hang out in more developed areas and were more active in these spaces. Elk, mule deer, striped skunks, red foxes, bobcats, coyotes, and pumas weren't more likely to be found in developed landscapes, but they did tend to be more active in these areas.

Some of the species that frequent more developed areas may actually benefit from living in these places, but the study's lead author, Justin Suraci, a lead scientist at Conservation Science Partners and former postdoctoral researcher at UC Santa Cruz, says that's not necessarily the case. While raccoons can thrive in developed areas by finding food in our garbage cans and avoiding predators, higher levels of puma activity in these same places could mean something very different.

"It's not because these developed areas are really good for puma activity," Suraci said. "It's probably because the camera traps happened to be set in the one pathway that the poor puma can use when it's navigating its way through an otherwise very heavily developed landscape."

In other words, some animals in the study may be increasingly active or present on cameras near human development simply because there's such little remaining natural habitat.

Still, there were certain traits that emerged across species as clear advantages for making a living within the footprint of development. Overall, mammals that were smaller and faster-reproducing, with generalist diets, were the most positively associated with development. Researchers expected they might find similar results in comparing camera trap data by levels of human presence, but in fact, both positive and negative responses to human presence were observed for species across the spectrum of body sizes and diets.

Elk were less likely to stick around in places frequented by humans, and moose, mountain goats, and wolverines were less active in these habitats. On the other hand, bighorn sheep, black bears, and wolverines were more likely to be found in areas frequented by humans, while mule deer, bobcats, grey foxes, pumas, and wolves were more active.

One trend that may be influencing these findings is the growth of outdoor recreation, which increases levels of human presence in otherwise remote and wild landscapes. The study's results may indicate that most mammals are willing to tolerate some level of human recreation in order to remain in high quality habitats, and they could instead be increasing their nocturnal activity in order to avoid humans. Some animals may even take advantage of hiking trails and fire roads as easy movement pathways.

But the study also clearly identified a limit to how much human impact animals can withstand. Even among species that were more active or more likely to be present around humans or in developed areas, those effects peaked at low to intermediate levels of human disturbance, then began to decline beyond those thresholds. Red foxes were the only animals in the study that continued to be more active or present at medium to high levels of human disturbance.

Ultimately, most species have both something to lose and something to gain from being around humans, and understanding the cutoff at which the costs outweigh the benefits for each species will be important to maintaining suitable habitats that support diversity in mammal populations for the future. Suraci says this may prove to be the new paper's most important contribution.

"From a management perspective, I think the thresholds that we've started to identify are going to be really relevant," he said. "This can help us get a sense of how much available habitat is actually out there for recolonizing or reintroduced species and hopefully allow us to more effectively coexist with wildlife in human-dominated landscapes."

###

OU researcher identifies new mode of transmission for bacteria

UNIVERSITY OF OKLAHOMA

Research News

IMAGE: Campylobacter (stock image).

CREDIT: Stock image

OKLAHOMA CITY AND DENMARK - Campylobacter infection, one of the most common foodborne illnesses in the Western world, can also be spread through sexual contact, according to a new research discovery by an OU Hudson College of Public Health faculty member, working in conjunction with colleagues in Denmark.

The team's research has been published in Emerging Infectious Diseases, a journal published by the Centers for Disease Control and Prevention (CDC), and is the first known study to prove this mode of transmission for Campylobacter. During a time when COVID-19 has dominated news about infectious diseases, the research is a reminder that many other pathogens affect lives around the world every day. The study was led by infectious disease epidemiologist Katrin Kuhn, Ph.D., an assistant professor in the Department of Biostatistics and Epidemiology in the OU Hudson College of Public Health.

"This research is important for public health messaging and for physicians as they talk to their patients about risks associated with sexual contact," Kuhn said. "Although Campylobacter infection is usually not a serious disease, it causes diarrhea, which can result in people missing work, losing productivity or perhaps losing their job. It poses an additional risk for people with underlying health conditions."

Campylobacter infections usually occur when people eat chicken that has not been cooked thoroughly or when juices from uncooked poultry make their way into other food. Infections can also be caused by drinking unpasteurized milk or water that has been contaminated by the feces of infected animals. However, those didn't account for all cases of infection, Kuhn said, and she wondered if there was another route of transmission that remained unproven. An outbreak of Campylobacter infections in northern Europe among men who have sex with men prompted her to study that population of people in Denmark, where she was working when the research began.

The study results showed that the rate of Campylobacter infection was 14 times higher in men who have sex with men than the control subjects. Although the study focused on men who have sex with men, the results are relevant to people of any sexual orientation who engage in sexual behavior that may involve fecal-oral contact, Kuhn said.
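The "14 times higher" figure above is the kind of result an incidence rate ratio (IRR) expresses: the infection rate in the exposed group divided by the rate in the control group. A minimal illustration in Python, with invented counts chosen only to reproduce a ratio of 14 (these are not the study's actual data):

```python
# Hypothetical incidence rate ratio (IRR) calculation.
# Counts and person-years below are invented for illustration,
# not taken from the Emerging Infectious Diseases study.

def incidence_rate(cases: int, person_years: float) -> float:
    """Cases per person-year of follow-up."""
    return cases / person_years

exposed_rate = incidence_rate(cases=140, person_years=10_000)  # hypothetical MSM cohort
control_rate = incidence_rate(cases=10, person_years=10_000)   # hypothetical controls

irr = exposed_rate / control_rate
print(irr)  # 14.0
```

Because both rates share the same denominator here, the ratio reduces to a ratio of case counts; real cohort analyses additionally adjust for age and other confounders.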

Two other bacteria, Salmonella and Shigella, were used as comparisons in the study. Salmonella is spread primarily through infected foods, while Shigella can be transmitted through food or sexual contact. Salmonella has a high infectious dose, meaning people must ingest a significant amount of the bacteria before they become ill. However, Shigella and Campylobacter have low infectious doses, which makes transmission easier.

"That's an additional reason why we believe Campylobacter can be transmitted through sexual contact like Shigella is - because people can become infected when only small amounts of the bacteria are present," Kuhn said.

Campylobacter infections are probably more prevalent than the numbers show. For every one person who goes to the doctor and is diagnosed, epidemiologists estimate that 20 more people are infected, Kuhn said. Although treatment is usually required only for severe cases, complications can occur, especially in people who have compromised immune systems. In some cases, infection can result in reactive arthritis, in which the body's immune system attacks itself, causing pain in the joints. Infection can also lead to Guillain-Barré Syndrome, a serious nerve disorder that can cause paralysis.

"This is an interesting time because COVID-19 has made people more aware of the importance of monitoring infectious diseases in general, not only during a pandemic," she said. "There are many infections like the one caused by Campylobacter that make people sick. It's important that we spotlight the fact that these diseases exist and that we continue to conduct research on their effects and modes of transmission."

Before arriving at the OU Hudson College of Public Health, Kuhn served as a senior infectious disease epidemiologist at Statens Serum Institut in Denmark. Her work focused on food- and water-borne infections, and she was responsible for the national surveillance of Campylobacter and Shigella. She began this study while in Denmark and completed it after moving to Oklahoma. Statens Serum Institut is the Danish national institute for infectious diseases and the primary institute for surveillance of and research on infectious diseases in Denmark.

"A formal collaboration between OU Hudson College of Public Health and Statens Serum Institut will build a solid foundation for strengthening transatlantic research and, not least, improving the way that we monitor, understand and prevent infectious diseases in Oklahoma," Kuhn said.

###

OU HEALTH SCIENCES CENTER

One of nation's few academic health centers with seven professional colleges -- Allied Health, Dentistry, Medicine, Nursing, Pharmacy, Public Health and Graduate Studies -- the University of Oklahoma Health Sciences Center serves approximately 4,000 students in more than 70 undergraduate and graduate degree programs on campuses in Oklahoma City and Tulsa. For more information, visit ouhsc.edu.

OU HEALTH

OU Health -- along with its academic partner, the University of Oklahoma Health Sciences Center -- is the state's only comprehensive academic health system of hospitals, clinics and centers of excellence. With 11,000 employees and more than 1,300 physicians and advanced practice providers, OU Health is home to Oklahoma's largest physician network with a complete range of specialty care. OU Health serves Oklahoma and the region with the state's only freestanding children's hospital, the only National Cancer Institute-Designated OU Health Stephenson Cancer Center and Oklahoma's flagship hospital, which serves as the state's only Level 1 trauma center. Becker's Hospital Review named University of Oklahoma Medical Center one of the 100 Great Hospitals in America for 2020. OU Health's oncology program at Stephenson Cancer Center and University of Oklahoma Medical Center was named Oklahoma's top facility for cancer care by U.S. News & World Report in its 2020-21 rankings. OU Health was also ranked by U.S. News & World Report as high performing in these specialties: Colon Surgery, COPD and Congestive Heart Failure. OU Health's mission is to lead healthcare in patient care, education and research. To learn more, visit ouhealth.com.

Researchers develop advanced model to improve safety of next-generation reactors

The model can better predict the physical phenomenon inside of very-high-temperature pebble-bed reactors

TEXAS A&M UNIVERSITY

Research News

IMAGE: Pebble-bed reactors use passive natural circulation to cool down, making it theoretically impossible for a core meltdown to occur.

CREDIT: Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales/Texas A&M University Engineering

When one of the largest modern earthquakes struck Japan on March 11, 2011, the nuclear reactors at Fukushima-Daiichi automatically shut down, as designed. The emergency systems, which would have helped maintain the necessary cooling of the core, were destroyed by the subsequent tsunami. Because the reactor could no longer cool itself, the core overheated, resulting in a severe nuclear meltdown, the likes of which haven't been seen since the Chernobyl disaster in 1986.

Since then, reactor designs have improved dramatically in terms of safety, sustainability and efficiency. Unlike the light-water reactors at Fukushima, which had liquid coolant and uranium fuel, the current generation of reactors has a variety of coolant options, including molten-salt mixtures, supercritical water and even gases like helium.

Dr. Jean Ragusa and Dr. Mauricio Eduardo Tano Retamales from the Department of Nuclear Engineering at Texas A&M University have been studying a new fourth-generation reactor, pebble-bed reactors. Pebble-bed reactors use spherical fuel elements (known as pebbles) and a fluid coolant (usually a gas).

"There are about 40,000 fuel pebbles in such a reactor," said Ragusa. "Think of the reactor as a really big bucket with 40,000 tennis balls inside."

During an accident, as the gas in the reactor core begins to heat up, it rises and is replaced by cooler gas from below, a process known as natural convection cooling. Additionally, the fuel pebbles are made from pyrolytic carbon and tristructural isotropic (TRISO) particles, making them resistant to temperatures as high as 3,000 degrees Fahrenheit. As very-high-temperature reactors (VHTRs), pebble-bed reactors can be cooled by passive natural circulation, making it theoretically impossible for an accident like Fukushima to occur.

However, during normal operation, a high-speed flow cools the pebbles. This flow creates movement around and between the fuel pebbles, similar to the way a gust of wind changes the trajectory of a tennis ball. How do you account for the friction between the pebbles and the influence of that friction in the cooling process?

This is the question that Ragusa and Tano aimed to answer in their most recent publication in the journal Nuclear Technology titled "Coupled Computational Fluid Dynamics-Discrete Element Method Study of Bypass Flows in a Pebble-Bed Reactor."

"We solved for the location of these 'tennis balls' using the Discrete Element Method, where we account for the flow-induced motion and friction between all the tennis balls," said Tano. "The coupled model is then tested against thermal measurements in the SANA experiment."
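The Discrete Element Method described in the quote tracks each pebble as a rigid sphere and sums the contact and fluid forces on it at every time step. Below is a heavily simplified, hypothetical sketch in Python of a single DEM step, using a spring-dashpot contact model, a uniform drag term standing in for the CFD coupling, and explicit Euler integration; every parameter value is invented and bears no relation to the models in the Nuclear Technology paper:

```python
import numpy as np

def dem_step(pos, vel, dt=1e-4, radius=0.03, mass=0.2,
             k=1e4, damping=5.0, drag_coeff=0.5,
             fluid_vel=np.array([0.0, 0.0, 1.0])):
    """One explicit time step for N spherical pebbles.

    pos, vel: (N, 3) arrays of positions and velocities [SI units].
    All physical constants here are illustrative placeholders.
    """
    n = len(pos)
    force = np.zeros_like(pos)

    # Pairwise contact forces: a repulsive spring acts when spheres overlap,
    # plus a dashpot term that dissipates relative motion (a crude stand-in
    # for inter-pebble friction).
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0 and dist > 0:
                normal = d / dist
                f = -k * overlap * normal          # repulsion pushing i away from j
                f += damping * (vel[j] - vel[i])   # dissipative (friction-like) term
                force[i] += f
                force[j] -= f

    # Drag from the coolant flow acting on every pebble.
    force += drag_coeff * (fluid_vel - vel)

    # Explicit Euler integration.
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt
    return pos, vel
```

A production CFD-DEM code would resolve the local coolant velocity field around each pebble and use tangential friction and rolling-resistance models; here a single uniform fluid velocity stands in for that coupling.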

The SANA experiment, conducted in the early 1990s, measured how the heat-transfer mechanisms in a reactor interact when transmitting heat from the center of the cylinder to its outer wall. This experiment gave Tano and Ragusa a benchmark against which they could validate their models.

As a result, their teams developed a coupled Computational Fluid Dynamics-Discrete Element Method model for studying the flow over a pebble bed. This model can now be applied to all high-temperature pebble-bed reactors and is the first computational model of its kind. Very-high-accuracy tools such as this allow vendors to develop better reactors.

"The computational models we create help us more accurately assess different physical phenomena in the reactor," said Tano. "As a result, reactors can operate at a higher margin, theoretically producing more power while increasing the safety of the reactor. We do the same thing with our models for molten-salt reactors for the Department of Energy."

As artificial intelligence continues to advance, its applications to computational modeling and simulation grow. "We're in a very exciting time for the field," said Ragusa. "And we encourage any prospective students who are interested in computational modeling to reach out, because this field will hopefully be around for a long time."

###

Pandemic paleo: A wayward skull, at-home fossil analyses, a first for Antarctic amphibians

UNIVERSITY OF WASHINGTON

Research News

IMAGE: The four fossil specimens of Micropholis stowi excavated in the Transantarctic Mountains by University of Washington professor Christian Sidor's team and analyzed by UW postdoctoral researcher Bryan Gee.

CREDIT: Christian Sidor

Paleontologists had to adjust to stay safe during the COVID-19 pandemic. Many had to postpone fossil excavations, temporarily close museums and teach the next generation of fossil hunters virtually instead of in person.

But at least parts of the show could go on during the pandemic -- with some significant changes.

"For paleontologists, going into the field to look for fossils is where data collection begins, but it does not end there," said Christian Sidor, a University of Washington professor of biology and curator of vertebrate paleontology at the UW's Burke Museum of Natural History & Culture. "After you collect fossils, you have to bring them to the laboratory, clean them off and see what you've found."

Among other adaptations during the pandemic, Sidor and his UW colleagues have spent more time cleaning, preparing and analyzing fossils excavated before the pandemic, as well as managing new pandemic-related struggles -- such as a misplaced shipment of irreplaceable specimens.

For Sidor's team, a recent triumph came from an analysis -- led by UW postdoctoral researcher Bryan Gee -- of fossils of Micropholis stowi, a salamander-sized amphibian that lived in the Early Triassic, shortly after Earth's largest mass extinction approximately 252 million years ago, at the end of the Permian Period. Micropholis is a temnospondyl, a group of extinct amphibians known from fossil deposits around the globe. In a paper published May 21 in the Journal of Vertebrate Paleontology, Gee and Sidor report on the first occurrence of Micropholis in ancient Antarctica.

"Previously, Micropholis was only known from South African specimens," said Gee. "That isolation was considered fairly typical for amphibians in the Southern Hemisphere during the Early Triassic. Each region -- South Africa, Madagascar, Antarctica, Australia -- will have its own set of amphibian species. Now, we're seeing that Micropholis was more widespread than previously recognized."

Out of more than 30 Early Triassic amphibians in the Southern Hemisphere, Micropholis is now only the second found in more than one region, according to Gee. That is surprising given Earth's geography. In the Early Triassic, most of Earth's continents were connected as a part of a single, large landmass, Pangea. Places like South Africa and Antarctica were not as far apart as they are today, and may have had similar climates. Some scientists theorize that these closely placed regions could harbor different amphibian species as a consequence of the end-Permian mass extinction.

"It had been proposed that there were only small populations of survivors and low movement of species in the Early Triassic, which could have explained these regional differences," said Gee.

Finding Micropholis in two regions may indicate that this species was a "generalist" -- adaptable to many types of environments -- and could easily spread after the mass extinction.

Alternatively, it's possible that many other amphibians actually lived in multiple regions, like Micropholis, but paleontologists haven't found evidence yet. While some Southern Hemisphere regions like South Africa have been well sampled, others have not -- like Antarctica, which in the Early Triassic was relatively temperate, but is today largely covered by ice sheets.

Sidor's team collected skulls and other fragile body parts from four individuals of Micropholis during a 2017-2018 collection trip to the Transantarctic Mountains. In 2019, Gee agreed to come to the UW to lead the analysis of amphibian fossils from that trip after completing his doctoral degree at the University of Toronto. He completed his degree early in the pandemic and moved to Seattle during the second wave of COVID-19.

With social distancing measures in place on campus, Sidor delivered the fossils and a microscope to Gee's home, where he analyzed the specimens in his living room.

"Having access to the microscope was really the most essential piece of equipment, to be able to identify all the small-scale anatomical features that we need to definitively prove these were Micropholis fossils," said Gee.

On the same trip, Sidor's team collected another rare find: a well-preserved skull of a therocephalian, a group of extinct mammal relatives that lived in the Permian and Triassic periods. Therocephalians were a widespread group of both herbivores and carnivores.

"But the Antarctic record for these animals is very poor," said Sidor. "So this was a rare find."

It was a rare find that nearly went extinct again. Sidor shipped the therocephalian skull in October 2019 to Chicago's Field Museum, where it was cleaned and prepped by his longtime colleague Akiko Shinya.

"Not being able to travel to museums to do research, we've been shipping fossils to each other -- which we don't like to do, but sometimes we have to in order to keep the work going," said Sidor.

In early April, Shinya shipped the finished specimens overnight back to Sidor in Seattle, but the package did not show up at the projected time. As Sidor recounted on Twitter, the skull was apparently lost in a transfer facility in Indiana -- he feared for good. After several days, the package was found, and was promptly transported to Seattle and delivered safely to the UW.

"I was so relieved," said Sidor. "When I thought it was lost, I had been thinking about the insurance forms. How do you put a dollar value on a specimen that you needed an LC-130 Hercules to collect?"

The skull is undergoing analysis at the UW. As for the Antarctic Micropholis specimens, they'll soon receive a new home. Later this year, they'll go on display at the Burke Museum.

###

The research was funded by the National Science Foundation.


CAPTION

University of Washington postdoctoral researcher Bryan Gee's at-home set-up for fossil analyses during the pandemic. His dog, Bart, was also a pandemic adoption.

CREDIT

Bryan Gee



CAPTION

The prepared therocephalian fossil that was nearly lost in shipping from Chicago to Seattle.

CREDIT

Christian Sidor



New research examines why some firms prepare for natural disasters and others don't

Strategic Management Journal explores storm preparedness

STRATEGIC MANAGEMENT SOCIETY

Research News

Despite the increasing frequency and severity of floods, storms, wildfires and other natural hazards, some firms in disaster-prone areas prepare while others do not.

That issue was examined in a new study by Jennifer Oetzel, a professor at American University, and Chang Hoon Oh, the William & Judy Docking Professor of Strategy at the University of Kansas, published in the Strategic Management Journal (SMJ).

"Due to the increased frequency and severity of floods, storms, epidemics, wildfires and other natural hazards anticipated over the coming decades (according to the National Oceanic and Atmospheric Administration), there is growing pressure on managers and their firms to develop strategies for managing natural disaster risk," write the researchers.

"Preparing for future events that may never occur is challenging. Day-to-day events tend to crowd out long-term planning, but business continuity depends on managers anticipating and planning for large scale disasters. For these reasons, our goal in this study was to understand the antecedents associated with disaster preparation so that managers can better prepare for natural disasters."

They defined disaster preparedness as the acquisition of the skills and capabilities needed to reduce damage to a firm, to minimize disruption to the supply chain and to business activity more generally, and to save lives and protect employees.

Disaster preparedness can entail a wide variety of initiatives including conducting an assessment of firm vulnerability to natural disasters, establishing a natural disaster response plan, training employees about natural disaster preparedness, purchasing insurance, developing a business continuity plan, and arranging to move business operations temporarily to another location, among others.

Emergency preparedness pays off. A review conducted by the Wharton Risk Center that focused on floods suggested that every dollar spent on flood risk reduction saves, on average, five dollars through avoided and reduced losses. Yet despite the documented value of preparing, most firms fail to do so.

"Since not all firms located in disaster prone areas prepare for disasters, what are the antecedents to disaster preparation? To answer this question," write the authors, "we looked at several factors that are likely to affect whether or not businesses will prepare. The first factor is organizational experience with disaster, which can be a transformational and powerful motivator for change when managers see the value of disaster preparation and planning."

The mechanisms driving the relationship between experience and preparedness are multifaceted. Managers may fail to learn from past experiences if they do not consider a recently experienced disaster representative of future events. Even when managers learn from experience and see preparation as valuable, they may lack the organizational influence to translate that learning into decision-making.

Aside from experience, strategic decisions around disaster preparation are likely to be affected by managers' subjective judgments and/or knowledge about disaster risks. Depending upon the nature of their experience, managers may either over- or under-estimate disaster risk and thus over- or under-prepare.

Research has also shown that a willingness to learn from other organizations about how to manage natural disaster risk is important. External sources of information provide different perspectives and may help organizations avoid internal biases in decision-making.

"Another set of factors that are presumed to affect preparation are the characteristics of disasters, including their type, frequency, and impact," write the researchers. "Historical records and scientific data indicate whether or not a given location is subject to natural disasters and, if so, of what type.

"Natural scientists examining climate change trends are raising concerns, however, that past experiences may not be predictive of the future. In certain geographic areas (e.g., Houston, Texas), the frequency of major disasters may be increasing substantially, deviating significantly from the past."

In conducting two studies -- an international survey in 18 disaster-prone countries and a U.S. survey in New York City and Miami -- Oetzel and Oh found that managers are more likely to prepare when their companies have experienced prior disasters. The likelihood of preparedness is even higher when companies work with and learn from other organizations and stakeholders.

"Managers operating in locations characterized by high impact, low frequency disasters are more willing to learn from others," they wrote. "In contrast, managers in areas characterized by low impact, high frequency disasters, are more likely to prepare alone. Since effective disaster preparation typically entails working with, and learning from others, those companies that choose a go-it-alone strategy may misjudge disaster risk."

The SMJ is published by the Strategic Management Society (SMS), which comprises 3,000 academics, business practitioners, and consultants from 80 countries. The society focuses on the development and dissemination of insights on the strategic management process, as well as on fostering contacts and interchanges around the world.

###

Article available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/smj.3272


Darwin foreshadowed modern scientific theories

UNIVERSITY OF TENNESSEE AT KNOXVILLE

Research News

When Charles Darwin published Descent of Man 150 years ago, he launched scientific investigations on human origins and evolution. This week, three leading scientists in different, but related disciplines published "Modern theories of human evolution foreshadowed by Darwin's Descent of Man," in Science, in which they identify three insights from Darwin's opus on human evolution that modern science has reinforced.

"Working together was a challenge because of disciplinary boundaries and different perspectives, but we succeeded," said Sergey Gavrilets, lead author and professor in the Departments of Ecology and Evolutionary Biology and Mathematics at the University of Tennessee, Knoxville.

Their goal with this review summary was to apply the framework of modern speciation theory to human origins and summarize recent research to highlight the fact that Darwin's Descent of Man foreshadowed many recent scientific developments in the field.

They focused on the following three insights:

    1. We share many characteristics with our closest relatives, the anthropoid apes, including genetic, developmental, physiological, morphological, cognitive, and psychological traits.

    2. Humans have a talent for high-level cooperation reinforced by morality and social norms.

    3. We have greatly expanded the capacity for social learning that is already present in other primates.

"The paper's insights have important implication for understanding behavior of modern humans and for developing policies to solve some of the most pressing problems our society faces," Gavrilets said.

Gavrilets is director of the Center for the Dynamics of Social Complexity (DySoC) at UT, which promotes transdisciplinary research into the origins, evolution, and futures of human social complexity. This paper is one of the outcomes of activities from the Center. Other related outcomes include free online learning modules on cultural evolution and a series of online webinars about cultural evolution and human origins, which thousands of students and researchers worldwide have watched.

###

Co-authors are Peter Richerson, a cultural evolutionist with the Department of Environmental Science and Policy at the University of California, Davis, and Frans de Waal, a primatologist with Living Links, Yerkes National Primate Research Center at Emory University in Atlanta, Georgia.

The paper was sponsored by the UT National Institute for Mathematical and Biological Synthesis with an NSF award. Researchers also received support from the US Army Research Office, the Office of Naval Research, the John Templeton Foundation, and the NIH.

Making the gray cells happy

Neutrons show a connection between lithium concentrations in the brain and depression

TECHNICAL UNIVERSITY OF MUNICH (TUM)

Research News

IMAGE

IMAGE: WITH THE PGAA-INSTRUMENT AT THE RESEARCH NEUTRON SOURCE HEINZ MAIER-LEIBNITZ (FRM II) AT THE TECHNICAL UNIVERSITY OF MUNICH (TUM) JOSEF LICHTINGER EXAMINES THE LITHIUM DISTRIBUTION IN BRAIN SAMPLES. IN HIS... view more 

CREDIT: WENZEL SCHUERMANN / TUM

Depressive disorders are among the most frequent illnesses worldwide. The causes are complex and to date only partially understood. The trace element lithium appears to play a role. Using neutrons of the research neutron source at the Technical University of Munich (TUM), a research team has now proved that the distribution of lithium in the brains of depressive people is different from the distribution found in healthy humans.

Lithium is familiar to many of us from rechargeable batteries. Most people ingest lithium on a daily basis in drinking water. International studies have shown that a higher natural lithium content in drinking water coincides with a lower suicide rate among the population.

In much higher concentrations lithium salts have been used for decades to treat mania and depressive disturbances. However, the exact role lithium plays in the brain is still unknown.

Physicists and neuropathologists at the Technical University of Munich joined forensic medical experts at Ludwig-Maximilian-University of Munich (LMU) and an expert team from the Research Neutron Source Heinz Maier-Leibnitz (FRM II) to develop a method which can be used to precisely determine the distribution of lithium in the human brain. The team hopes to be able to draw conclusions for therapy as well as to gain a better understanding of the physiological processes involved in depression.

Neutrons detect the slightest traces of lithium

The scientists investigated the brain of a suicidal patient and compared it with the brains of two control persons. The investigation focused on the ratio of the lithium concentration in white brain matter to the concentration in the gray matter of the brain.

To determine where, and in what quantities, lithium is present in the brain, the researchers analyzed 150 samples from various brain regions - for example, those regions presumably responsible for processing feelings. At the Prompt Gamma-Ray Activation Analysis (PGAA) instrument of the FRM II, the researchers irradiated thin brain sections with neutrons.

"One lithium isotope is especially good at capturing neutrons; it then decays into a helium atom and a tritium atom," explains Dr. Roman Gernhäuser of the Central Technology Laboratory of the TUM Department of Physics. The two decay products are captured by detectors in front of and behind the sample and thus provide information on where exactly the lithium is located in the brain section.

Since the lithium concentration in the brain is usually very low, it is also very difficult to ascertain. "Until now it wasn't possible to detect such small traces of lithium in the brain in a spatially resolved manner," says Dr. Jutta Schöpfer of the LMU Munich Institute for Forensic Medicine. "One special aspect of the investigation using neutrons is that our samples are not destroyed. That means we can examine them several times over a longer period of time," Gernhäuser points out.

Significant difference between depressive patients and healthy persons

"We saw that there was significantly more lithium present in the white matter of the healthy person than in the gray matter. By contrast, the suicidal patient had a balanced distribution, without a measurable systematic difference," summarizes Dr. Roman Gernhäuser.

"Our results are fairly groundbreaking, because we were able for the first time to ascertain the distribution of lithium under physiological conditions," Schöpfer is glad to report. "Since we were able to ascertain trace quantities of the element in the brain without first administering medication and because the distribution is so clearly different, we assume that lithium indeed has an important function in the body."

Just a beginning

"Of course the fact that we were only able to investigate brain sections from three persons marks only a beginning," Gernhäuser admits. "However, in each case we were able to investigate many different brain regions which confirmed the systematic behavior."

"We would be able to find out much more with more patients, whose life stories would also be better known," says Gernhäuser, adding that it might then also be possible to answer the question as to whether the deviating lithium distribution in depressive persons is a cause or a result of the illness.

###

The research work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft; DFG). Scientists from the Institute for Forensic Medicine at Ludwig Maximilian University of Munich as well as from the TUM Institute of Pathology and TUM Department of Physics took part in the research.

The research neutron source Heinz Maier-Leibnitz (FRM II) provides neutrons and positrons for research, industry and medicine. Operating as a user facility for up to 1200 guest scientists per year, the Heinz Maier-Leibnitz Zentrum (MLZ) offers a unique suite of high-performance neutron scattering and positron instruments. The MLZ is a cooperation of the Technical University of Munich, the Forschungszentrum Jülich and the Helmholtz-Zentrum Hereon. It is funded by the German Federal Ministry of Education and Research, together with the Bavarian State Ministry of Science and the Arts and the partners of the cooperation.

Publication:

J. Schoepfer, R. Gernhäuser, S. Lichtinger, A. Stöver, M. Bendel, C. Delbridge, T. Widmann, S. Winkler & M. Graw

Position sensitive measurement of trace lithium in the brain with NIK (neutron-induced coincidence method) in suicide

Scientific Reports vol. 11, Art. no: 6823 (2021) - DOI: 10.1038/s41598-021-86377-x

Best predictor of arrest rates? The 'birth lottery of history'

Study finds social context of when one comes of age a bigger deal than socioeconomics

HARVARD UNIVERSITY

Research News

Social scientists have had a longstanding fixation on moral character, demographic information, and socioeconomic status when it comes to analyzing crime and arrest rates. The measures have become traditional markers used to quantify and predict criminalization, but they leave out a crucial indicator: what's going on in the changing world around their subjects.

An unprecedented longitudinal study, published today in the American Journal of Sociology, aims to make that story more complete and to show that, when it comes to arrests, it can come down to when someone is rather than who someone is, a theory the researchers refer to as the birth lottery of history.

Harvard sociologist Robert J. Sampson and Ph.D. candidate Roland Neil followed arrests in the lives of more than 1,000 Americans as they transitioned from adolescence into young adulthood over a 23-year span. This period saw some of the largest social changes in recent memory, and the results indicate how these changes -- which included the rise of mass incarceration, aggressive policing tactics, and the sudden mid-1990s drop in crime that became known as the "great American crime decline" -- helped shape how these adolescents and young adults came into contact with the criminal justice system.

"What we're attempting to do is to look at birth cohorts who were coming of age at different times during these social changes," said Sampson, Henry Ford II Professor of the Social Sciences. "The setting is roughly the last quarter-century or so. We focused on that because it's a time of great social change in the United States. Mass incarceration comes to the top of many people's minds, but we also saw a rise in violence before that and then a large decline in violence over most of the past 25 years. We saw tremendous changes in policing practices, and most recently, concerns about police brutality and police killings have risen."

What Sampson and Neil tried to do is link those changes to the experience of growing up as it bears on criminalization, particularly arrest -- the trigger generating a criminal record in the first place. The study sheds new light on the arrest patterns of people who came of age in different eras of the war on drugs, mass incarceration, and the plummeting violence that began in the 1990s.

The researchers based their work on a multi-cohort longitudinal study of 1,057 children who were originally enrolled in a National Institute of Justice study called the Project on Human Development in Chicago Neighborhoods, a study of how families, schools, and neighborhoods affect child and adolescent development. The oldest individuals tracked were born between 1980 and the mid-1980s and were ages nine, 12, and 15 at the start of the study. The youngest in the study were born in 1995. All participants were tracked from 1995 to 2018.

All participants in the study, originally all Chicagoans, were followed over the course of nearly twenty-five years as they came of age. They were selected randomly based on a representative sample reflecting the diversity of contemporary urban America. Blacks and Latinos each comprised over a third of the sample while white participants made up 20 percent. More than a third of the individuals came from immigrant families. The researchers also collected information through interviews with caretakers and the participants over multiple rounds of data collection. It allowed Sampson and Neil to dig deep into the characteristics of the participants, their families, and early-life neighborhood conditions.

They used data based on criminal history records that were collected through the end of 2018 for all participants, allowing them to study arrest over a 23-year span. The analysis showed large differences in patterns of arrest among the four age cohorts across substantial portions of their lives.

"We wanted to know not only if there were differences in arrest rates for the different cohorts, but why were there differences," Neil said. "Do these differences reflect fundamental differences in who these people were, or differences in what happened early on in their life? Or did they reflect differences in the larger context through which they were aging?"

The researchers found it was the latter. In many cases, for example, even people who shared the same kind of character traits, grew up in similar families, and came from similar economic backgrounds had much higher or lower chances of getting arrested depending on the years during which they were 17 to 23 years old, the peak ages for arrest.

For instance, younger cohorts (those born in the 1990s) came of age in a radically different and, in some ways, more peaceful world than the older cohorts, who were born in the 1980s. In fact, the chances of arrest for the older cohorts were nearly double -- 96 percent higher -- than those of the younger cohorts, according to the study.

"The explanation for this can't just be reduced to the usual suspects -- childhood experiences, family structure, demographics, social class, family upbringing -- or individual characteristics," Sampson said.

This is where the birth lottery of history comes in, meaning the fortune of when they were born factored into their chances of arrest. Analysis showed just how big a difference a few years of social change can make when it comes to arrest rates, by looking at what are often cited as the two leading explanations for crime: socioeconomic disadvantage and having low self-control.

Approximately 70 percent of children born in the 1980s to disadvantaged families were arrested by their mid-20s, while only about a quarter of disadvantaged children born in the mid-1990s were arrested by that same age. For participants from more advantaged backgrounds, the changes were more moderate. Looking at those same cohorts, the study found that those born in the 1980s with higher self-control had about the same arrest rates as those born in the 1990s with low self-control.
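The arithmetic behind these contrasts can be checked directly from the figures quoted in the article; the sketch below is illustrative only, using the study's rounded percentages, and shows that the cohort gap among disadvantaged children is even larger than the overall 96-percent figure.

```python
# Cohort contrasts computed from the figures quoted in the article
# (rounded percentages; illustrative arithmetic, not the study's model).
p_disadvantaged_1980s = 0.70  # arrested by mid-20s, born in the 1980s
p_disadvantaged_1990s = 0.25  # arrested by the same age, born mid-1990s

# Ratio between the disadvantaged cohorts: the 1980s group was arrested
# at nearly three times the rate of the 1990s group...
cohort_ratio = p_disadvantaged_1980s / p_disadvantaged_1990s

# ...an even larger gap than the overall "96 percent higher" chances
# (a ratio of 1.96) reported across all participants.
overall_ratio = 1.96

print(f"disadvantaged-cohort ratio: {cohort_ratio:.1f}x")  # 2.8x
print(f"overall cohort ratio:       {overall_ratio:.2f}x")
```

That the gap widens among disadvantaged families is consistent with the authors' point: the historical period one comes of age in, not individual traits alone, drives much of the difference in arrest risk.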

"We should really be looking at not what was virtuous or wrong with individuals of a particular cohort but rather looking at what's right or wrong with the larger social environment during the historical period in which they happen to come of age," Sampson said. "This study is showing that historical changes are built into those very criminal records."

Changing law enforcement patterns explained about half the cohort differences in criminalization, with disorderly conduct and drug arrests falling substantially in the period they studied. However, the researchers make clear that these differences were not driven by aggressive policing alone.

They believe behavioral changes caused by larger societal changes also led to lower arrests for younger cohorts. For example, from the mid-90s to 2018, parts of urban Chicago underwent revitalization, gentrification, repopulation, and saw an influx of immigrants. In more recent years the rise of technologies such as smartphones, video games, the internet, and social media have also transformed the lives of young people, potentially reducing time spent in risky situations for arrest.

"Put simply, our results show that when we are matters as much and perhaps more than who we are or even what we have done. To the extent that arrest is a result of substantial social changes in both criminal justice practices and societal norms that strongly differentiate the life experience of successive birth cohorts, independent of individual or family differences, the idea of an individual's propensity to crime needs reconsideration," Sampson said.

The study pointed out potential caveats, such as being limited to people originally from Chicago and covering only about 20 years of each person's life.

The researchers hope to expand their theory and the data they collected on cohort inequalities in criminalization. They plan on doing new interviews and continuing to add to the records they've built to dig into the data further.

###