Sunday, November 20, 2022

How is puberty onset developmentally programmed?

Peer-Reviewed Publication

CHINESE ACADEMY OF SCIENCES HEADQUARTERS


IMAGE: A schematic model summarizing the lineage-dependent and lineage-independent fate determination during neuronal fate specification in the hypothalamus.

CREDIT: IGDB

The hypothalamus, one of the most complex brain regions in the mammalian nervous system, contains an astonishing heterogeneity of neurons that regulate endocrine, autonomic and behavioral functions. It not only regulates food consumption, water intake, body temperature, circadian rhythm and sleep to maintain the survival of individual organisms, but also controls puberty onset and reproductive behavior to sustain the breeding population.

"What triggers puberty" is one of the 125 Big Questions posed in the 125th anniversary edition of Science magazine. Researchers led by Prof. WU Qingfeng from the Institute of Genetics and Developmental Biology of the Chinese Academy of Sciences have revealed that the developmental programming of puberty onset is dependent on TBX3. They have also uncovered new rules for lineage progression, which operate through neuronal differentiation during the development of the hypothalamus.   

Results were published in Science Advances on Nov. 16.

In this study, Prof. WU's group found that TBX3 defines a progenitor domain in the developing hypothalamus and serves as a fate determinant to sequentially control the establishment and maintenance of neuronal fate.

The neuroendocrine system consists of a heterogeneous collection of neuropeptidergic neurons in the brain, among which hypothalamic KNDy neurons represent an indispensable cell subtype controlling puberty onset. Although it has been proposed that hypothalamic neural progenitors and neuronal precursors along the lineage hierarchy adopt a cascade diversification strategy to generate extreme neuronal diversity, the cellular logic for specifying a subtype of neuroendocrine neurons has been unclear.

Previous genetic studies have shown that mutations in TBX3 cause ulnar-mammary syndrome (UMS), which is characterized by shortened forelimbs, defective mammary gland development and genital abnormalities. Notably, most UMS patients display delayed onset of puberty.

According to the researchers, at the organism level, genetic ablation of Tbx3 significantly delays the puberty onset of animals and disturbs the estrous cycle of female mice. At the cellular level, TBX3 plays an important role in the fate establishment and maintenance of hypothalamic KNDy neurons. In addition, at the molecular level, TBX3 regulates gene transcription via phase separation, thereby inducing neuropeptide expression in the hypothalamic neurons.

Importantly, multiple TBX3 mutants identified in UMS patients fail to form phase-separated condensates and cannot efficiently regulate neuropeptide expression, which provides a pathological mechanism underlying the delayed puberty in UMS patients.

Furthermore, Prof. WU aimed to answer how neuronal lineage progresses during hypothalamus development under physiological and pathological conditions. He and his colleagues used an unprecedented strategy of cell-type alignment by comparing single-cell datasets from lineage tracing and genetically manipulated mice, and revealed two lineage-independent rules—intralineage retention (ILR) and interlineage interaction (ILI)—that regulate lineage progression under pathological conditions.
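
The cell-type alignment idea can be illustrated with a toy example. The sketch below is not the authors’ pipeline; it assumes made-up average expression matrices and simply pairs each cluster in one single-cell dataset with its most correlated counterpart in the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 50

# Stand-in data: average expression per cluster (clusters x genes) for two datasets,
# e.g. lineage-traced control mice versus genetically manipulated mice.
control_profiles = rng.random((4, n_genes))
mutant_profiles = control_profiles[[2, 0, 3, 1]] + 0.05 * rng.random((4, n_genes))

def best_matches(query, reference):
    """For each query cluster, return the index of the most correlated reference cluster."""
    matches = []
    for profile in query:
        corrs = [np.corrcoef(profile, ref)[0, 1] for ref in reference]
        matches.append(int(np.argmax(corrs)))
    return matches

print("Mutant cluster -> best-matching control cluster:",
      best_matches(mutant_profiles, control_profiles))
# Clusters whose best match disappears or shifts after the genetic manipulation are
# candidates for altered lineage progression.
```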

Collectively, this study uncovers the cellular and molecular mechanisms underlying how TBX3 mutations interfere with the onset of puberty in UMS patients and reveals the rules of ILR and ILI during cell fate specification.

This work was funded by the National Key R&D Program of China, the National Natural Science Foundation of China, the Strategic Priority Research Program of CAS and the Beijing Municipal Science & Technology Commission.

Salt more important than cold polar temperatures in sea ice formation

Peer-Reviewed Publication

UNIVERSITY OF GOTHENBURG

When polar seas freeze and ice forms, it is not only due to cold air chilling the surface of the water. Even more important is that warm water is prevented from rising to the surface from the depths of the ocean, due to the much lower salinity of the surface water. Researchers from the University of Gothenburg and elsewhere have described this effect in a new scientific study.

Sea ice forms in the polar regions because it gets so cold in winter. However, cold water is denser than warm water, so the chilled surface water should sink rather than remain at the surface. That sinking would bring warmer water back up, preventing ice from forming in the ocean. Researchers from the University of Gothenburg and elsewhere now present an explanation for why this does not happen. The study is published in the journal Science Advances.

“The salinity of the surface water is lower, thanks to the supply of freshwater from melting ice at the poles and precipitation to the ocean. The difference in salinity between the surface water and the deeper water is an important factor for the formation of sea ice at low temperatures at the poles. Without the difference in salinity, the water would not have become stratified, leading to continuous mixing of sea water, thus preventing the formation of ice,” says Fabien Roquet, professor of physical oceanography at the University of Gothenburg.

Difference in salinity creates a “lid”

The surface of the water, with its lower salinity, creates a “lid” that prevents warm water from rising to the surface. Without that lid, the cold polar temperatures would not be sufficient to freeze continuously moving warmer water.

The strength of this salinity lid is due to the unique properties of seawater. In freshwater, water colder than 4 degrees Celsius has a lower density and therefore remains at the surface and freezes into ice, without mixing with water from greater depths. In the ocean, salt water instead reaches its highest density right at the freezing point, around minus 2 degrees Celsius. However, the density of seawater varies much less with temperature when the water is cold than when it is warm, which is very unusual for a fluid.
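
A rough back-of-the-envelope calculation makes the point concrete. The Python sketch below is not taken from the study; it uses approximate, hand-picked expansion and contraction coefficients (assumptions, not published values) to compare how much a small temperature contrast versus a small salinity contrast changes seawater density.

```python
# Minimal sketch with approximate coefficients (assumed values, not from the study).
RHO0 = 1028.0        # reference density of cold seawater, kg/m^3
ALPHA_COLD = 0.5e-4  # thermal expansion coefficient near the freezing point, 1/K (approx.)
ALPHA_WARM = 2.5e-4  # thermal expansion coefficient near 20 C, 1/K (approx.)
BETA = 7.8e-4        # haline contraction coefficient, 1/(g/kg) (approx.)

def density_change(d_temp, d_salinity, alpha):
    """Linearised change in density: warming lowers it, added salt raises it."""
    return RHO0 * (-alpha * d_temp + BETA * d_salinity)

d_temp, d_salinity = 2.0, 2.0   # compare a 2 K contrast with a 2 g/kg contrast
print("Near freezing: temperature effect %.2f, salinity effect %.2f kg/m^3"
      % (density_change(d_temp, 0, ALPHA_COLD), density_change(0, d_salinity, ALPHA_COLD)))
print("Warm water:    temperature effect %.2f, salinity effect %.2f kg/m^3"
      % (density_change(d_temp, 0, ALPHA_WARM), density_change(0, d_salinity, ALPHA_WARM)))
# Near the freezing point the salinity term dominates, so the fresher surface layer
# stays buoyant and acts as a lid; in warm water, temperature matters relatively more.
```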

Sea ice inhibits the greenhouse effect

“The closer you get to the poles, the more important salinity is for limiting the mixing and evening out of water temperature throughout the water,” says Fabien Roquet.

This discovery shows how important the special properties of the water molecule are for Earth’s climate. The exchange of heat between the ocean and atmosphere is affected not only by temperature differences, but also by the salinity of the ocean. Without this property, it would be impossible for sea ice to form to any great extent. Sea ice is itself an important factor in limiting the greenhouse effect, because it reflects sunlight away.

“With global warming, we see a decline in sea ice, which impairs the capacity of the polar seas to maintain the ‘lid’ of lower salinity that prevents carbon from rising into the atmosphere. But at the same time, warmer weather can lead to increased freshwater in the polar seas as glaciers melt and precipitation potentially increases. The difference in salinity can then increase, which may help to sustain sea ice formation. But it’s difficult to predict which effect will be dominant; we just have to wait and see,” says Fabien Roquet.

2022 Stockholm Prize Winner calls for a more enlightened U.S. prison system

Peer-Reviewed Publication

AMERICAN SOCIETY OF CRIMINOLOGY

An address by a 2022 winner of the prestigious Stockholm Prize in Criminology looks to the future of corrections by proposing a more enlightened U.S. prison system. The proposal frames prison reform as part of the Biden Administration’s support for improving infrastructure, calling for new prisons that are rational and evidence-based, improve offenders’ lives, and foster a new era of progress in corrections.

The address, delivered in Stockholm in the summer of 2022, appears in Criminology & Public Policy, a publication of the American Society of Criminology. It was written by Francis T. Cullen, distinguished research professor emeritus of criminal justice and senior research associate at the University of Cincinnati. Cullen won the Stockholm Prize for his work on the effectiveness of offender rehabilitation strategies. The prize was established under the aegis of the Swedish Ministry of Justice and with contributions from the Torsten Söderberg Foundation.

“The United States is at a turning point in the history of corrections,” explains Cullen, referring to the end of mass incarceration around 2010. “Prison reform is inhibited by the twin realities that states have limited budgets and prisons are rarely shuttered. We need new thinking to move beyond these restrictions.”

Cullen’s “Build Back Better Prison Experiment” would be funded by the U.S. government in line with the mission of the Biden Administration’s infrastructure plan and result in the building of 10 prisons, each funded at $1 billion. Proposals for the prisons would compete, with states partnering with criminologists and prison experts worldwide as well as with private enterprises to set forth a model for prisons; a panel commissioned by the U.S. Department of Justice would select the 10 winners.

Building on the book Enlightenment Now, by the Harvard University cognitive scientist Steven Pinker, Cullen argues that each prison proposal should be rooted in reason and science and should nourish the development of prisoners rather than diminish them. In each prison, every prisoner would have to be engaged every day in an activity (e.g., education, work, treatment) that improves human capital and decreases crime-related risks. A team of researchers commissioned by the Justice Department would conduct an ongoing process and outcome evaluation.

“We have inherited a physical design of prisons and a resulting society of captives that blunt our imagination of alternative possibilities,” says Cullen. “We need to develop new models of the prison that can be shown to be enlightened and then test them experimentally.”

Bias against older organ donors may be leading to smaller organ supply for transplants

Organ procurement organizations and transplant centers were about 5% less likely to choose organs from 70-year-old donors compared to those who were just a year younger

Peer-Reviewed Publication

MICHIGAN MEDICINE - UNIVERSITY OF MICHIGAN

IMAGE: When organ procurement organizations and transplant centers accept or select organs, they sometimes display a form of ageism known as left digit bias.

CREDIT: Justine Ross/Michigan Medicine

American transplant centers as well as organ procurement organizations, the groups responsible for recovering organs from deceased donors in the United States, were less likely to accept or select organs from donors who were 70 years old when they died compared to those who were 69, new research found.

This is an example of left digit bias, a common type of unconscious bias in which judgments are disproportionately influenced by the leftmost digit of a number; when that number is an age, the bias often amounts to ageism.

A previous study had revealed this type of bias is present when organizations decide whether or not to discard donor kidneys. Clare E. Jacobson, M.D., a general surgery resident at University of Michigan Health, was curious about whether the bias would still exist if the research encompassed all organ types.

“Donated organs are a lifesaving resource, but there are many more people on the waiting list than there are available organs,” said Jacobson, the co-first author of this study. “We were interested in looking at how we could make small changes to optimize our current supply of deceased donor organs, with the goal of both serving the patients on the waiting list and honoring the gift of life these donors are providing.”

Using data from the United Network for Organ Sharing, the nonprofit that manages the nation’s organ transplant system, Jacobson and her research team found that both OPOs and transplant centers were about 5% less likely to choose organs, regardless of their type, for those aged 70 versus 69. The results suggest that about 1 in 18 donors who are age 70 or older will have no organs selected for transplant, Jacobson said.
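
As a purely illustrative sketch (the numbers below are invented, not the study’s data), one way to look for left digit bias is to ask whether acceptance at age 70 drops by more than the smooth decline with age would predict:

```python
# Hypothetical acceptance rates by donor age (invented for illustration only).
rates = {67: 0.46, 68: 0.44, 69: 0.42, 70: 0.38, 71: 0.36, 72: 0.34}

# Expected rate at 70 if acceptance fell smoothly (linear trend over ages 67-69)
trend_per_year = (rates[69] - rates[67]) / 2
expected_70 = rates[69] + trend_per_year
observed_70 = rates[70]

excess_drop = expected_70 - observed_70
print(f"Expected at 70: {expected_70:.3f}, observed: {observed_70:.3f}")
print(f"Extra drop beyond the age trend: {excess_drop:.3f} "
      f"({excess_drop / expected_70:.1%} relative)")
# A drop at 70 that exceeds what the smooth age trend predicts is the signature of
# left digit bias; the published analysis of UNOS data controls for far more factors.
```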

“This demonstrated bias is not limited to a single transplant center, OPO or even step in the transplant process and is seen across organ types,” Jacobson said. “In our role as stewards of these gifted organs and for all patients waiting for a transplant, interventions must target every step in the transplantation process to overcome our prejudiced thinking.”

Of note: left digit bias was not significant when organ acceptance and selection rates were compared for donors who were 60 versus 59, the researchers found. This may be because, in younger organ donors, age is weighted less heavily than metrics like weight, bloodwork and other health problems in the donor, Jacobson said.

Additional authors include Craig S. Brown and Seth A. Waits, both of Michigan Medicine, and Kyle H. Sheetz of the University of California, San Francisco.

PETA science group co-edits special issue of journal on human-relevant toxicity testing

Peer-Reviewed Publication

PEOPLE FOR THE ETHICAL TREATMENT OF ANIMALS (PETA)

Lausanne, Switzerland — Scientists from PETA Science Consortium International e.V., international government agencies, and the chemical industry have coedited the Frontiers Research Topic “Chemical Testing Using New Approach Methodologies” and coauthored articles for In Vitro Toxicology, a section of the journal Frontiers in Toxicology. Frontiers, the third most cited publisher in the world, included the Research Topic as an opportunity to highlight diverse approaches to assessing chemical toxicity without the use of animals.


Altogether, the Research Topic includes nine articles about non-animal toxicity testing. The full issue can be accessed here.


Two articles in the Research Topic coauthored by the Science Consortium point out existing opportunities to use non-animal testing methods and provide examples of their use to meet regulatory testing requirements. The non-animal tests described have been used to replace animal tests that assess how chemicals affect the lungs, whether they are absorbed through the skin, and whether they cause an allergic skin reaction. The articles also highlight remaining impediments to the use of non-animal tests and opportunities to improve them and expand their acceptance. The other article coauthored by the Science Consortium is one of three in the Research Topic that focus on assessing the toxicity of inhaled chemicals on the human respiratory tract.


Before chemicals, pharmaceutical drugs, pesticides, and other products are bought and sold, regulatory agencies around the world require these substances to be tested for their potential to harm humans or the environment. Millions of animals are used each year in these tests, many of which are decades old, but innovative, animal-free tests can replace them and offer protection from harmful chemicals.


“We’re pleased to partner with forward-thinking scientists to create a Research Topic dedicated to non-animal toxicity testing,” says Science Consortium President Dr. Amy Clippinger. “With contributions from more than 70 authors from seven countries, the breadth of authorship shows the widespread interest in advancing non-animal toxicity testing approaches.”


For more information, please visit ThePSCI.eu.

E-visits tested as way to help people to quit smoking

Primary care is the best way to reach smokers, but primary care providers in rural areas – where people are more likely to smoke – are already overwhelmed. A study seeks to use e-visits to bridge the gap

Grant and Award Announcement

MEDICAL UNIVERSITY OF SOUTH CAROLINA

People who live in rural areas are both more likely to smoke and less likely to quit than people in nonrural areas, said MUSC Hollings Cancer Center researcher Jennifer Dahne, Ph.D.

Dahne, a clinical psychologist who focuses on cigarette smoking among vulnerable populations, wants to reach these people and help them to become former smokers – and along the way reduce their risks for a dozen kinds of cancer.

Dahne also serves as the co-director of remote and virtual trials for the South Carolina Clinical & Translational Research Institute based at MUSC. A remote trial, anchored by the primary care providers in their communities whom patients know and trust, looked like a promising way to help people in rural areas quit smoking.

She’s now leading a team of researchers and clinicians, including primary care providers in communities served by MUSC Health-Florence Division and MUSC Health-Lancaster Division, on a $4.6 million grant from the National Cancer Institute to test whether a proactive electronic visit would help to promote smoking cessation.

Florence Division Chief Medical Officer Rami Zebian, M.D., is excited about the opportunity to get smoking cessation help to more people.

“This is something that's very much needed in our areas,” he said. “The percentage of smokers in Florence and Marion is huge, much higher than Charleston.”

Tobacco has been integral to the Pee Dee economy since the late 19th century. Zebian said he talks to many people who say they put their kids through college on tobacco farming. But they also note that they see the long-term effects of smoking – heart disease, strokes and cancer. As a pulmonologist, Zebian deals with some of those effects, like chronic obstructive pulmonary disease and lung cancer. And he sees a pattern among his patients.

“Every single one of my patients who has quit smoking, they all tell me one thing. They say, ‘I wish I quit sooner.’”

Dahne said that primary care is one of the best places to reach people with smoking cessation aids.

“Every time you go to your doctor, they are screening you for smoking. So, we actually have pretty good data in our electronic health record to identify our patients who smoke. And we know that most adults who smoke in this country receive care for smoking cessation from their primary care providers, and that smokers are visiting primary care providers pretty regularly,” she said.

But primary care providers are swamped.

Dahne said primary care providers have told her team that, while they care deeply about helping their patients quit smoking, in-person appointments are so busy with managing hypertension medications, doing diabetic foot exams and everything else that talking about smoking cessation often falls by the wayside.

Edward McCutcheon, M.D., chief medical officer of the Lancaster Division, agreed that fitting it in is a challenge.

“There's just a tremendous number of priorities that have to be addressed in a single physician visit,” he said. “Trying to convince somebody to quit smoking in a matter of minutes just doesn't happen. Patients have become dependent on tobacco, and it’s something that has happened over years. Trying to discuss the physiology and the dependency on nicotine can be a challenge, and then trying to discuss treatment options and therapy requires a lot more time.”

Instead, during this trial, patients who are identified as smokers will receive messages through MyChart inviting them to take part in e-visits for smoking cessation. The e-visits were developed during a pilot trial that showed promising results.

Dahne explained that the e-visits will look like a questionnaire. Patients will answer questions about their smoking histories, how much they smoke and whether and how they’ve tried to quit in the past. An algorithm will then present a recommendation for a smoking cessation treatment for each participant. The algorithm prioritizes the most efficacious FDA-approved medications for smoking cessation, including varenicline, a medication better known by its brand name, Chantix. At that point, patients will have the option to agree with the recommendation or request a different approach.
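
The general shape of such an algorithm might look like the hypothetical sketch below. This is not the study’s actual decision logic, and the thresholds and field names are invented; it only illustrates how questionnaire answers could be turned into a draft recommendation for the provider to review.

```python
# Hypothetical sketch only; not the trial's actual algorithm.
def recommend_cessation_treatment(answers: dict) -> str:
    """Map e-visit questionnaire answers to a draft treatment recommendation."""
    varenicline_ok = not (answers.get("contraindication_varenicline")
                          or answers.get("prior_varenicline_failure")
                          or answers.get("prefers_nrt"))
    # Varenicline is tried first because the article says the algorithm prioritizes
    # the most efficacious FDA-approved medications.
    if varenicline_ok:
        return "varenicline, 12-week course (provider to confirm dosing)"
    # Otherwise fall back to nicotine replacement therapy (NRT).
    if answers.get("cigarettes_per_day", 0) >= 10:
        return "combination NRT: nicotine patch plus lozenge or gum"
    return "single-form NRT: nicotine lozenge or gum"

# Example e-visit responses (hypothetical):
draft = recommend_cessation_treatment({
    "cigarettes_per_day": 15,
    "prior_varenicline_failure": False,
    "contraindication_varenicline": False,
    "prefers_nrt": False,
})
print("Draft recommendation sent to the provider for review:", draft)
```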

The computer program doesn’t do the prescribing, though. A record of the entire e-visit will be sent to the patient’s primary care provider to review, and the provider can either prescribe the recommended medication or recommend a different plan based on the patient’s medical history. In addition to medications, all patients are provided with referrals for behavioral support to either the South Carolina Tobacco Quitline or to MUSC’s tobacco treatment program. Plus, Dahne said, the providers will be compensated for the e-visit review, so the program isn’t merely adding paperwork to their days.

At the same time that this clinical trial is running in rural areas, Dahne is running a similar trial with another grant at primary care practices affiliated with MUSC Health Charleston.

“I'm really excited to have these two grants running at the same time,” Dahne said. “We’re evaluating whether this proactive electronic visit approach works to help promote smoking cessation within these rural areas and the unique implementation barriers and facilitators within these rural clinical settings. My guess is that implementation factors are very much going to differ within our rural primary care clinics and clinics that are here as part of MUSC Charleston.”

Zebian said that quitting smoking can improve health and quality of life no matter a person’s age or health status.

“A lot of times people have misconceptions like, ‘Well, my lungs are already damaged. So what's the benefit of quitting now?’” he said. But even people with lung damage can avoid needing to use oxygen if they quit, and even people who need oxygen can add months or years to their lives by quitting. People with cancer – either lung cancer or some other type of cancer – will find their bodies respond better to treatment if they quit smoking.

“We all age. We all lose lung function,” he said. “But how fast we lose lung function is very different in people who are smokers and people who do not smoke.”

For help to quit smoking, contact the MUSC Health Tobacco Treatment Program at 843-792-9101.

About MUSC Hollings Cancer Center

MUSC Hollings Cancer Center is South Carolina’s only National Cancer Institute-designated cancer center, with the largest academic-based cancer research program in the state. The cancer center comprises more than 120 faculty cancer scientists and 20 academic departments. It has an annual research funding portfolio of more than $44 million and is dedicated to preventing and reducing the cancer burden across South Carolina. Hollings offers state-of-the-art diagnostic capabilities, therapies and surgical techniques within multidisciplinary clinics that include surgeons, medical oncologists, radiation therapists, radiologists, pathologists, psychologists and other specialists equipped for the full range of cancer care, including more than 200 clinical trials across South Carolina. For more information, visit hollingscancercenter.musc.edu.

Rockets to uncover electric circuit that powers the Northern Lights

Business Announcement

NASA/GODDARD SPACE FLIGHT CENTER

IMAGE: The auroral currents are one kind of field-aligned current, in which an electric current flows along Earth’s magnetic field lines. The auroral current carries charged particles (electrons and ions) from space into our atmosphere and back out to space.

CREDIT: NASA

A NASA-funded rocket mission is headed to space to measure the global electric circuit underlying the northern lights. For its second trip to space, the Aurora Current and Electrodynamics Structures II, or ACES II, instrument will launch from Andøya Space in Andenes, Norway. The launch window opens Nov. 16, 2022, at 6 p.m. local time.

High above us, electrons from space stream into our sky. As they wind down Earth’s magnetic field lines, they strike gases in our atmosphere, causing them to glow. From the ground, observers see effervescent ribbons of ruby and emerald: the aurora borealis and australis, or northern and southern lights.

But auroras are just one part of a much larger system. Like a lightbulb plugged into an outlet, they are powered by a larger electrical circuit connecting our planet to near-Earth space.

“It's these incoming high-energy electrons that produce the auroral display we're familiar with, but there's also part of the system that is unseen,” said Scott Bounds, a physicist at the University of Iowa and the principal investigator for the ACES II mission.

Just as charged particles flow in, a stream of charged particles flows from our atmosphere back out to space. Together, this inflow and outflow complete a global electrical circuit known as the auroral current.

One of the biggest mysteries about the auroral current is what happens at the “turnaround point,” where the inflow ends and the outflow begins. This turnaround is in the ionosphere, a layer of our atmosphere that begins some 40 miles overhead and extends into space, where charged particles and neutral gases coexist and interact.

The ionosphere is like a bustling border town where travelers from different lands, unfamiliar with each other’s customs, meet and exchange their wares. Those arriving from above are electrically charged particles from space. Accustomed to the wide-open pathways of space, they rarely collide with one another. Their electric charge keeps them tethered to Earth’s magnetic field lines, which they twirl around as they nosedive into our atmosphere or outwards into space.

Those arriving from lower altitudes are neutral gases from our air. They bump through dense crowds, bouncing back and forth hundreds of times a second. Without an electric charge, they move freely across magnetic field lines as they are carried about by the wind.

In the ionosphere, these two populations merge – colliding, combining with one another and separating again, and interacting in complex ways. It is a chaotic scene. And yet, this turbulent mixing in the ionosphere is what keeps the auroral current churning.

To date, most studies of the auroral current have only measured inflow and outflow from high above the ionosphere, making simplifying assumptions about what’s happening below. ACES II was designed to remedy that, taking a “snapshot” of the complete auroral current at one moment in time. The strategy is to fly two rockets: a “high-flyer” that will measure particles flowing in and out of our atmosphere, and a “low-flyer” that, at the same time, will see the dynamic exchange in the ionosphere that keeps it all flowing.

At the Andøya Space Center in Andenes, Norway, the auroral oval – the magnetic “ring” encircling Earth’s northern magnetic pole within which auroras form – passes overhead each night. Bounds and his team will wait until the auroral oval is overhead – their clue that the auroral current is flowing above them.

The team will then launch the high-flyer, aiming for a peak altitude of about 255 miles (410 km). Its goal is to see the streams of particles flowing into and out of our atmosphere. Roughly two minutes later, they will launch the low-flyer through the lower parts of the ionosphere, peaking at about 99 miles (159 km). Its goal is to capture the energy exchange happening at the turnaround point, where inflow turns into outflow. The trajectories of the two rockets are aligned in space and time, to ensure they are measuring different parts of the same current. Like all sounding rockets, both the high- and low-flyer will make their measurements and fall back to Earth a few minutes later.

The ACES instrument has flown once before, launching from the Poker Flat Research Range in Fairbanks, Alaska, in 2009. There, it flew through an active, turbulent aurora. It was like measuring the weather during a particularly stormy day.

“We got great results, but what we want to understand for this flight is the ‘average case,’” Bounds said. Andøya is located much closer to Earth’s magnetic north pole, meaning milder, more typical auroras that don’t spread as far south are more accessible.

If all goes as planned, ACES II will help scientists model the auroral current as a whole, including its trickiest part: our ionosphere.

“This is just a single case – it doesn't answer all questions,” Bounds said. “But it gets us a data point we need.”

Study shows how to boost early intervention for climate-related health risks

Best-laid plans need local buy-in and support to be truly effective

Peer-Reviewed Publication

DUKE UNIVERSITY

IMAGE: Duke University professor William Pan (standing, right) meets with leaders of a local community in Peru to discuss a new early warning system he’s developed to predict and help prevent malaria outbreaks.

CREDIT: Duke University, Nicholas School of the Environment

DURHAM, N.C.—Being able to predict where and when extreme weather and other environmental impacts of climate change will increase the risk of infectious disease outbreaks can help public health officials respond earlier and more effectively to control the spread and reduce its toll.

In fact, early warning systems designed to do just this have been developed in recent years to help control outbreaks of malaria, dengue fever and other diseases in parts of the Tropics. But their implementation has been undercut by funding uncertainties, overburdened local health systems, insufficient training for local health technicians, and a lack of buy-in from decision makers in government.

An analysis by an international team of researchers from 15 institutions evaluates these barriers to implementation and proposes new ways forward. The team published its recommendations Nov. 9 in The Lancet Planetary Health.

One key, the researchers say, is early engagement with crucial decision makers.

“We looked at five case studies and most of the barriers we identified likely could have been resolved by getting policymakers and community leaders on board right from the start,” said William Pan, the Elizabeth Brooks Reid and Whitelaw Reid Associate Professor of Population Studies at Duke University, who was co-lead author of the analysis.

In the case studies, scientists often placed a higher initial priority on getting their monitoring systems, disease-control protocols and local partners in place before turning their attention to briefing national or regional policymakers about it. In hindsight, the problem with this approach, Pan said, is that the local partners who were supposed to take over running the system once it was set up were not necessarily the government decision makers with the authority to commit ongoing financial or political support for it.

“You need to engage with these key decisions makers immediately so they understand what the benefits of the system will be and buy into it,” he said. Without this support, scaling up the system to a regional or national level can become an insurmountable challenge.

Training local health providers and technicians in environmental science also needs to be an immediate priority.

Local health systems, especially in poor or remote regions, are typically stretched thin and the people who do on-the-ground disease surveillance often have limited training in climate science or environmental epidemiology, Pan said. They’re trained in medicine or public health but not in how climate-sensitive environmental conditions such as extreme heat, drought, or flooding can drive disease outbreaks. So, they don’t know what to look for, how to project future impacts, or how or when to adjust their disease-control interventions in anticipation of them, he said.

“There’s currently no place in the world that has an early warning system for infectious diseases that integrates climate and environmental data with disease surveillance data,” Pan said. “Policymakers just haven’t picked up on it yet, even though it is what science tells us we need to be doing. It’s like trying to bake a cake with only half the ingredients.”

In the new paper, he and his coauthors use knowledge and tools from the emerging field of implementation science to propose a four-step, science-based framework for overcoming such barriers and enhancing the success of early warning systems.

First, you need to understand and anticipate potential barriers. Second, you need to engage key stakeholders at the outset and co-create the system with them. Third, you need to identify promising strategies to overcome barriers and test how they work in pilot studies under various conditions, using both quantitative data and qualitative measures, such as stakeholder interviews, to inform your analysis. Fourth, after implementing the strategies, you need to measure the outcomes of their implementation across a broad range of considerations, including acceptability, feasibility, sustainability and cost-effectiveness.

The new analysis is based on case studies of early warning systems for climate-related disease outbreaks or public health risks that have been recently developed in Peru, Barbados, Ethiopia, India, and the Federated States of Micronesia and the Marshall Islands.

Pan holds faculty appointments at Duke’s Nicholas School of the Environment and the Duke Global Health Institute. He co-led the new analysis with Gila Neta of the National Cancer Institute, part of the National Institutes of Health.

The paper was co-authored by researchers from the University of Washington; the Pan American Health Organization; the National Institute of Environmental Health Sciences; the Barcelona Supercomputing Center; the Catalan Institution for Research and Advanced Studies; the London School of Hygiene and Tropical Medicine; the University of Florida; the Inter-American Institute for Global Change Research; the Pacific Island Health Officers Association; the Energy and Resources Institute of New Delhi, India; the University of Oklahoma; the Barbados Ministry of Health and Wellness; and the University of Pittsburgh School of Public Health.

The team’s analysis was conducted with no new external funding.

CITATION: “Advancing Climate Change Health Adaptation Through Implementation Science,” Gila Neta, William Pan, Kristie Ebi, Daniel F. Buss, Trisha Castranio, Rachel Lowe, Sadie J. Ryan, Ann M. Stewart-Ibarra, Limb K. Hapairai, Meena Sehgal, Michael C. Wimberly, Leslie Rollock, Maureen Lichtveld and John Balbus; The Lancet Planetary Health, Nov. 9, 2022.

Where humans live, microplastics end up in rivers, SLU research finds

Peer-Reviewed Publication

SAINT LOUIS UNIVERSITY

A paper published in Environmental Pollution authored by Saint Louis University (SLU) scientists shows that human proximity is the best indicator of microplastics being found in the Meramec River in Missouri.

A team of researchers, led by Jason Knouft, Ph.D., professor of biology, principal investigator with the WATER Institute at SLU, and scientist at the National Great Rivers Research and Education Center, and Elizabeth Hasenmueller, Ph.D., associate professor of Earth and atmospheric sciences and associate director of the WATER Institute at SLU, studied levels of microplastics at 19 sites along the Meramec River, including locations downstream from a major metropolitan area as well as less populated rural areas.

“What we found was that the human factors basically told us where the microplastics were,” Hasenmueller said. “The distribution of microplastics in the watershed wasn't driven by river flow or sediment inputs. Instead, it was mostly related to how close the site was to inputs of wastewater or a city. Those sorts of things were the biggest predictors.”

Microplastics are generally characterized as plastic particles smaller than 5.0 millimeters and can be found across marine, terrestrial and freshwater environments. Because plastic is so durable and microplastics in freshwater systems pose potential dangers, Knouft, Hasenmueller and their team set out to determine how microplastics enter freshwater systems and which factors best predict where they will be found.

To locate microplastics in the freshwater system and measure the levels present, the researchers examined river sediments across the Meramec River watershed. The team also applied hydrologic modeling to estimate how river discharge, sediment load, land cover and wastewater discharge sites affect microplastic distribution.
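
The flavor of that analysis can be sketched with synthetic numbers. The values below are invented and the method is deliberately simplified (site-level correlations rather than the study’s hydrologic model); it only illustrates comparing human-related predictors against hydrologic ones.

```python
import numpy as np

# Hypothetical site-level data, invented for illustration only (not the study's data).
predictors = {
    "dist_to_wastewater_km": [ 1.0,  3.0,  8.0, 15.0, 25.0,  2.0],
    "urban_land_cover_pct":  [60.0, 45.0, 20.0, 10.0,  5.0, 55.0],
    "discharge_m3_per_s":    [40.0, 55.0, 35.0, 70.0, 30.0, 65.0],
    "sediment_load_mg_per_L": [120.0, 150.0, 90.0, 200.0, 80.0, 170.0],
}
microplastics = np.array([310.0, 240.0, 120.0, 70.0, 40.0, 290.0])  # particles per kg sediment

# Correlate each candidate predictor with the microplastic counts across sites.
for name, values in predictors.items():
    r = np.corrcoef(np.array(values), microplastics)[0, 1]
    print(f"{name:>24s}: r = {r:+.2f}")
# In this constructed example, wastewater proximity and urban cover track the counts
# closely while discharge and sediment load do not, echoing the study's qualitative
# finding that human factors were the strongest predictors.
```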

Over the course of their research, Knouft and Hasenmueller made several discoveries that were new yet not entirely surprising. The data showed that the best indicator of finding microplastics in the Meramec River was proximity to humans. Plastic is created and consumed by humans, so it makes sense that microplastics turn up at river sites close to people.

“Before we began, I kept an open mind,” Knouft said. “I approached it this way: whatever we find, it's not going to be surprising to me. If we found that these things are hammering the ecosystem, I would say, ‘Yeah, that makes sense.’ But if we had found they aren’t really doing anything, I’d say, ‘Yeah, that makes sense, because they’re these things that are just passing through.’”

Hasenmueller was not surprised that humans were the largest contributing factor to finding microplastics in the Meramec River, but she was surprised just how prevalent those microplastics were throughout the basin.

“We knew microplastics would be there, but the amount of plastic, it was just everywhere you looked,” Hasenmueller said. “I thought that there might be more of an influence of the stream’s discharge and sediment loads on the distribution than what we saw, which surprised me.”

Now, the attention turns to what can be done to prevent these microplastics from reaching freshwater systems. There are already small steps individuals can take to reduce their plastic consumption, such as ensuring plastics go to recycling plants, but researchers will also be looking ahead to determine large-scale remedies to protect our freshwater systems.

“I think the biggest solution to remedy the issue of microplastics is also one of the hardest solutions, which is reducing the amount of plastic that we use,” Hasenmueller said. “Nearly everything is plastic; our clothes have plastic in them, food and water are stored in plastic, and all of these different things in our everyday life are made of plastic. So having big companies reduce the amount of plastic could be impactful because there’s only so much we can do as consumers.”

SLU students contributed to this research effort, including co-first authors Teresa Baraza and Natalie Hernandez. Other authors on the paper include Chin-Lung Wu, Ph.D., from the department of biology at SLU, and Jack Sebok from Washington University in St. Louis.

This research was supported by The Nature Conservancy (061716-01).

Saint Louis University

Founded in 1818, Saint Louis University is one of the nation’s oldest and most prestigious Catholic institutions. Rooted in Jesuit values and its pioneering history as the first university west of the Mississippi River, SLU offers nearly 13,000 students a rigorous, transformative education of the whole person. At the core of the University’s diverse community of scholars is SLU’s service-focused mission, which challenges and prepares students to make the world a better, more just place.

WATER Institute at Saint Louis University

Located in the City of St. Louis and at the confluence of the Mississippi and Missouri Rivers, the Water Access, Technology, Environment and Resources (WATER) Institute at Saint Louis University is the first of its kind in the American Midwest. Inspired by the University’s Jesuit identity, the mission of the WATER Institute is to advance water innovation to serve humanity.

The ocean in a cup: Environmental DNA successfully captures marine biodiversity

Peer-Reviewed Publication

NATURAL HISTORY MUSEUM OF LOS ANGELES COUNTY

Los Angeles, CA (November 16, 2022) — Measuring marine biodiversity with “environmental DNA” — an application of gene sequencing to environmental biology — should permit rapid assessment of changes in marine life. That makes environmental DNA (eDNA) a critical tool for managing our response to climate change. But eDNA only works well if key implementation steps are followed, according to a new study of the Los Angeles and Long Beach area published in the journal PeerJ.

“What do we need to know to use eDNA in the coastal ocean, and can we make it work well in an important urban setting? Those are the questions that motivated us to launch this study,” said Regina Wetzer, Curator and Director of the Marine Biodiversity Center at the Natural History Museum of Los Angeles County (NHM).

Answering those questions involved contributions from a natural history museum, multiple academic institutions, environmental consultants, and government agencies — highlighting the challenges involved in using eDNA, but also the widespread interest in its use. 

eDNA uses genetic sequencing of samples from the environment (in this case, ocean water) to inventory biodiversity. “There are genes that differ enough between species that they can be used as identification markers. Every organism sheds DNA by dropping skin cells or other materials, so we can take a cup of seawater, sequence the DNA in it, and use that to inventory organisms in the area,” said Zack Gold, lead author of the study. 

The neighboring Port of Los Angeles and Port of Long Beach form one of the largest port complexes in the world and are a site of intense environmental interest. That made them an ideal place to test whether eDNA can serve as an effective tool for biodiversity assessment.

This study paired eDNA sampling and conventional ship-based trawl net sampling at seven sites in the port complex. At each site, researchers collected multiple eDNA samples, each about one liter of seawater, just before the trawl net was towed through the same area. That permitted a comparison between eDNA and traditional biodiversity assessment techniques: eDNA detected nearly all of the 17 species of fish found in the trawls, but also detected an additional 55 native fish species. Detecting those additional species through conventional sampling would require many more sampling trips at far greater expense.

“We were happy to see eDNA validated alongside ‘conventional’ sampling, but we were really excited to see the extra information that came from the eDNA,” said Dean Pentcheff, researcher and program manager of the Diversity Initiative for the Southern California Ocean (DISCO) at NHM. But getting that extra information depended on having a complete genetic reference library for all the fish in the area—a genetic sequence in an eDNA sample can only be resolved to a species if there is a reference sequence on file for that species. All the fish in the eDNA samples in this study were resolved only after the researchers added the last few fish references to the sequence library.
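
A stripped-down illustration of that dependence, using hypothetical barcode fragments and exact matching in place of real sequence classification, might look like this:

```python
# Simplified illustration (not the study's pipeline): eDNA reads can only be assigned
# to species that already have a barcode in the reference library, which is why
# completing the regional reference library mattered.
reference_library = {            # hypothetical barcode fragments, for illustration
    "ACGTTGCAAT": "Engraulis mordax (northern anchovy)",
    "TTGACCGGTA": "Paralichthys californicus (California halibut)",
    "GGCATTACGA": "Sardinops sagax (Pacific sardine)",
}

edna_reads = ["ACGTTGCAAT", "GGCATTACGA", "CCATGGTTAA", "ACGTTGCAAT"]

detected, unresolved = set(), 0
for read in edna_reads:
    species = reference_library.get(read)
    if species:
        detected.add(species)
    else:
        unresolved += 1      # no reference sequence on file, so the read cannot be identified

print("Species detected:", sorted(detected))
print("Reads left unidentified for lack of a reference:", unresolved)
# Real workflows use sequence alignment and classification rather than exact matching,
# but the dependence on a complete reference library is the same.
```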

The eDNA samples from different locations in the ports yielded different species inventories at a  statistically significant level. That answered an important question: can eDNA measure variability across an area as small as the port complex, or does seawater mix so thoroughly that local differences are completely blurred? This study demonstrated that eDNA in this ocean environment can expose differences between places as close as a few hundred meters apart. 

Based on this pilot project, the authors assembled a set of recommendations for managers considering eDNA as a tool for biodiversity assessments. The recommendations cover careful selection of the identifying genes and specific advice on how to clean up the sequence data from eDNA samples before searching for sequence matches. Because of the successful species resolution that resulted from building a full sequence reference library, a key recommendation is to create regional reference databases. 

“These samples of the environment are like time capsules that we’ll be able to exploit in the future,” said Adam Wall, Crustacea Collections Manager at NHM. That sentiment prompted another of the group’s recommendations: archive eDNA samples and sequence data for long-term use. As sequencing technology improves, additional information could come from the samples. As genetic data analysis techniques improve and genetic reference libraries are expanded, the sequence data can be analyzed again to get additional results beyond the fish inventories published in this study.

Co-authors on this study include researchers from a broad suite of institutions: Zachary Gold (now at the National Oceanic and Atmospheric Administration), Rachel S. Meyer (now at the University of California, Santa Cruz), Paul H. Barber, and Robert Wayne at the University of California, Los Angeles;  Teia M. Schweizer at Colorado State University; Emily E. Curd at Landmark College, Vermont; Regina Wetzer, Adam R. Wall, and N. Dean Pentcheff at the Natural History Museum of Los Angeles County;  Kevin Stolzenbach at Wood Environment and Infrastructure, Inc.; Kat Prickett at the Port of Los Angeles, and Justin Luedy at the Port of Long Beach. The project was supported by funding from the Port of Los Angeles and the Port of Long Beach. 

About the Natural History Museums of Los Angeles County
The Natural History Museums of Los Angeles County (NHMLAC) include the Natural History Museum in Exposition Park, La Brea Tar Pits in Hancock Park, and the William S. Hart Museum in Newhall. They operate under the collective vision to inspire wonder, discovery, and responsibility for our natural and cultural worlds. The museums hold one of the world’s most extensive and valuable collections of natural and cultural history—more than 35 million objects. Using these collections for groundbreaking scientific and historical research, the museums also incorporate them into on- and offsite nature and culture exploration in L.A. neighborhoods, and a slate of community science programs—creating indoor-outdoor visitor experiences that explore the past, present, and future. Visit NHMLAC.ORG for adventure, education, and entertainment opportunities.