Tuesday, July 14, 2020

Factors maximize impact of yoga, physical therapy on back pain in underserved population

Fear avoidance, pain medication use, and treatment expectations impact response to nonpharmaceutical treatments to relieve chronic low back pain
BOSTON MEDICAL CENTER
BOSTON - New research shows that people with chronic low back pain (cLBP) get better results from yoga and physical therapy than from reading evidence-based self-help materials. While this finding was consistent across many patient characteristics, a much larger effect was observed among those already taking pain medication for their condition and those who did not fear that exercise would make their back pain worse. The study, led by researchers at Boston Medical Center and published in Pain Medicine, also showed that individuals who expected to do well with yoga were more likely to have a meaningful improvement in function if they received yoga rather than physical therapy.
Studying a population of predominantly non-white and low-income patients, the clinical trial showed that, overall, 39 percent of participants responded to one of the three treatment options, with a greater response to yoga or physical therapy (42 percent) than to self-care (23 percent). There was no significant difference in the proportion of people responding to yoga versus physical therapy; both showed similar improvements in back-related physical function. Among participants who were also using pain medication to treat chronic lower back pain, many more responded to yoga (42 percent) or physical therapy (34 percent) than to self-care (11 percent).
This study highlights the effect that fear can have on patient outcomes. Among participants identified as having less fear around physical activity, 53 percent responded to yoga and 42 percent responded to physical therapy, compared with 13 percent for self-care. In contrast, among participants who started out with high fear avoidance around physical activity, the proportions of responders did not differ across the three treatment options.
"Adults living with chronic low back pain could benefit from a multi-disciplinary approach to treatment including yoga or physical therapy, especially when they are already using pain medication,' said Eric Roseen, DC, MSc, a chiropractic physician at Boston Medical Center.
The yoga intervention consisted of 12 group-based weekly 75-minute hatha yoga classes incorporating poses, relaxation and meditation exercises, yoga breathing and yoga philosophy. Thirty minutes of daily home practice was encouraged and supported with at-home yoga supplies. The physical therapy intervention consisted of 15 one-on-one 60-minute appointments over 12 weeks. During each appointment, the physical therapist utilized the Treatment-Based Classification Method and supervised aerobic exercise, while providing written instructions and supplies to continue exercises at home. The self-care intervention consisted of reading from a copy of The Back Pain Handbook, a comprehensive resource describing evidence-based self-management strategies for chronic lower back pain including stretching, strengthening, and the role of psychological and social factors. Participants received check-in calls regarding the reading every three weeks.
The study involved 299 participants with chronic lower back pain at a safety-net hospital and seven federally qualified community health centers, across 12 weeks of treatment. An exploratory analysis identified patient-level characteristics that predicted large improvements in physical function and/or modified the effectiveness of yoga, physical therapy, or self-care. The characteristics studied as predictors of improvement or as treatment effect modifiers came from the sociodemographic, general health, back-related, psychological, and treatment expectations domains.
"Focusing on a diverse population with an average income well below the US median, this research adds important data for an understudied and often underserved population," said Roseen, also an assistant professor of family medicine at Boston University School of Medicine. "Our findings of predictors are consistent with existing research, also showing that lower socioeconomic status, multiple comorbidities, depression, and smoking are all associated with poor response to treatment."
###
Funding for this study was provided by the National Center for Complementary and Integrative Health (1F32AT009272, 5R01AT005956) and the Boston University Clinical and Translational Science Institute Clinical Research Training Program (National Center for Advancing Translational Sciences, 1UL1TR001430).
About Boston Medical Center
Boston Medical Center is a private, not-for-profit, 514-bed, academic medical center that is the primary teaching affiliate of Boston University School of Medicine. It is the largest and busiest provider of trauma and emergency services in New England. Boston Medical Center offers specialized care for complex health problems and is a leading research institution, receiving more than $97 million in sponsored research funding in fiscal year 2018. It is the 15th largest funding recipient in the U.S. from the National Institutes of Health among independent hospitals. In 1997, BMC founded Boston Medical Center Health Plan, Inc., now one of the top ranked Medicaid MCOs in the country, as a non-profit managed care organization. Boston Medical Center and Boston University School of Medicine are partners in Boston HealthNet - 14 community health centers focused on providing exceptional health care to residents of Boston. For more information, please visit http://www.bmc.org.

Correlations identified between insurance coverage and states' voting patterns

'Red' states have the highest uninsured rate, study finds
CASE WESTERN RESERVE UNIVERSITY
Cleveland - Researchers at Case Western Reserve University reviewed national data from the U.S. Census Bureau and found associations between states' voting patterns in the 2016 presidential election and changes in the number of adults 18 to 64 years of age without health insurance coverage.
"Following the implementation of the Affordable Care Act (ACA), we observed sharp decreases in the number of uninsured Americans nationwide," said Uriel Kim, lead author on the study and an MD candidate at the Case Western Reserve University School of Medicine. "However, since the 2016 presidential election, these gains are reversing in so-called 'red' states, and 'purple' states that flipped from blue to red."
The paper, "State Voting Patterns in the 2016 Presidential Election and Uninsured Rates in Non-elderly Adults," was recently published in the Journal of General Internal Medicine.
Kim and colleagues at the medical school defined states based on voting patterns in the 2016 general election, categorizing them as Blue (21 states and Washington, D.C.), Red (24), or Purple (6)--states that switched from Blue to Red in the 2016 election. (No Red states switched to Blue.)
"The implementation of Medicaid expansion and the marketplaces has varied across states, at least partially explaining our study findings," Kim said. "For example, of the 14 states that have not expanded Medicaid, most are red or purple states. Additionally, while all Americans have access to the insurance marketplaces, the degree to which states invest in outreach and navigation programs for marketplace insurance generally varies along party lines."
In the years 2014 through 2016 (compared to 2013, before key provisions of the ACA were implemented), the data showed that the number of uninsured adults age 18 to 64 decreased by 15.8 million nationwide.
  • Blue states saw a decrease in the uninsured of over 7.6 million.
  • Purple states saw a decrease in the uninsured of nearly 3 million.
  • Red states saw a decrease in the uninsured of nearly 5.2 million.
  • While the number of uninsured Americans reached record lows in 2016, over 23.5 million remained uninsured.
From 2017 to 2018, following the presidential election, the number of uninsured individuals increased by more than 850,000 nationwide, reversing the positive trends.
  • Blue states saw a negligible decrease in the number of uninsured.
  • Purple states saw the number of uninsured grow by 240,000.
  • Red states saw the number of uninsured grow by 620,000.
  • Over 24.3 million were still uninsured by 2018, with the majority living in Red states.
Tables and the full study are available in the published paper. Data from 2019 and 2020 were not yet available for the researchers to review.
The ACA expanded coverage with two approaches: the expansion of Medicaid (in some states) to individuals with higher incomes and the creation of "marketplaces" (in all states) that allow individuals to purchase health insurance for themselves and their families. Individuals purchasing insurance on the marketplace receive sliding-scale subsidies based on their income.
The study's senior author, Siran Koroukian, an associate professor in the Department of Population and Quantitative Health Sciences in the medical school, added that the study highlights the importance of policies to enable health care access, which has particular relevance during the COVID-19 pandemic and the resulting economic fallout.
"Since the majority of Americans receive insurance through their employer," said Koroukian, "the rise in unemployment following COVID-19 could mean that millions of people could be left without any insurance coverage, especially in states with less robust Medicaid programs or insurance marketplaces. This is problematic when the ability to access care is essential."
###

5,000 years of history of domestic cats in Central Europe  

Interdisciplinary studies in paleogenetics and archaeozoology
NICOLAUS COPERNICUS UNIVERSITY IN TORUN
[Image: Perspektywiczna Cave, inside view during excavation. Credit: Magdalena Krajcarz]

A loner and a hunter with highly developed territorial instincts, a fierce carnivore, a disobedient individual: the cat. These traits would seem to make the species averse to domestication. Even so, we did it. Nowadays, about 500 million cats live in households all around the world; it is difficult even to estimate the number of homeless and feral ones.
Although the shared history of cats and people began 10,000 years ago, the origins of the relationship remain unclear. How did the domestication process unfold? When did the first domesticated cats appear in Central Europe? Where did they come from, and how? What was their role in the lives of the people of the time? The knowledge gaps are numerous; thus, archaeologists, archaeozoologists, biologists, anthropologists and other researchers around the world are cooperating to find answers. Scientists from the Institute of Archaeology at the Nicolaus Copernicus University in Torun have made significant contributions in this field, and an article discussing their research has been published in PNAS, the prestigious official journal of the National Academy of Sciences. The first author is Dr Magdalena Krajcarz, who has attempted to trace the ancestors of domestic cats in Neolithic Central Europe. By analyzing cat diet, she is trying to establish how closely cats cohabitated with people.

Winding paths of the domesticated cat
According to current knowledge, the deliberate creation of breeds, which involves selecting particular individuals and cross-breeding and reproducing them, took place relatively recently, in the 19th century. In medieval Poland, cats were not as popular as we might think. According to evidence gathered by researchers, semi-domesticated weasels, or even snakes, were used to protect grain stores against rodents. It was the people who settled in the towns founded in the second half of the 13th century who increased the popularity of cats.
This does not mean, however, that cats had no relations with people even earlier. The first well-documented remains of domesticated cats on the territory of Poland date back to the beginning of our era. The animals are believed to have spread across Central Europe mainly under the influence of the Roman Empire. Nonetheless, the earliest cat remains in the area date back to as early as 4,200-2,300 BC and document the first migrations of the Nubian cat, which originally inhabited the Near East and North Africa. This subspecies is considered the ancestor of domestic cats in Central Europe.
The Nubian cat is one of the wildcat subspecies (alongside the European wildcat, which is not the ancestor of the domestic cat even though the two can cross-breed) whose domestication began in the Fertile Crescent ca. 10,000-9,000 years ago. At archaeological sites in Anatolia, Syria and Israel, a variety of stone figurines representing these cats has been found. Apparently, cats stayed in the proximity of the first farmers, and the first human-cat relationships were in all probability initiated in the Neolithic Age. People gave up nomadism in favor of sedentary life and started to store food, which attracted rodents of many kinds. This could have drawn wild cats to an easily accessible food source, and the benefits turned out to be mutual. In all likelihood, though, cats remained rather neutral toward people.
Analyses of cat skeletons, together with depictions of the animals, suggest that cats reached Europe by migrating from the Near East through Anatolia, Cyprus, Crete and Greece to Ancient Rome, from where they were adopted by Celts and Germanic peoples.
[Image: Ulna bone of a Near Eastern cat from Perspektywiczna Cave.]
Cat diet vs the history of domestication
The role cats played in Late Neolithic Poland is not clear, since scientists have little evidence of these animals' presence. The remains found come from caves rather than from human settlements, which means the cats did not necessarily have to be buried by people. They could just as well have been prey to other predators, or they may simply have lived and died in caves. Nevertheless, researchers do not reject the hypothesis that people kept the animals to protect crops from rodents, benefiting from their hunting skills, and that the cats occasionally followed them to the caves which people of the time used as shelters.
Research performed by Dr Magdalena Krajcarz helps to resolve the mystery. In the article entitled "Ancestors of domestic cats in Neolithic Central Europe: Isotopic evidence of a synanthropic diet," published in PNAS, she provides an insight into cats' diet in order to determine how close human-cat relations were.
The study drew on the remains of six Neolithic cats with Near Eastern characteristics from four cave sites in the Kraków-Czestochowa Upland (southern Poland), near which farmer settlements were once located on fertile soils. In addition, the remains of four European wildcats from the same period and area were examined, along with three Pre-Neolithic cats and two cats from the Roman Period. The reference material additionally covered human and other animal remains.
The methodological basis was the analysis of stable carbon and nitrogen isotopes in bone collagen. Stable isotope analysis is a commonly applied tool in paleontology and animal ecology because the isotope composition of an animal's remains reflects the isotope composition of its food. According to Krajcarz, the method enables, for example, the identification of the feeding habits of particular fossil animal species. Conventional techniques for studying wild animal feeding habits involve analyzing food remnants in faeces or stomachs, which imposes significant limitations: not all remnants can be identified, the remnants reflect only the most recent feeding, and access to such fossil material is very poor.
Isotope analysis, by contrast, allows accurate chemical measurements and reconstructs the average diet over the whole lifespan of the animal. Above all, the method can examine the feeding habits of animals from the distant past. "All we have are bone tissue remnants, which have survived in an unaltered state, as the isotope composition of bones remains unchanged for thousands of years," says Dr Krajcarz. Put simply: Neolithic farmers applied fertilizers such as dung or plant ash, which left a distinctive isotopic signature in their crops; rodents fed on the collected crops, and cats ate the rodents. By examining stable isotopes, the researchers can therefore determine whether the cats of that time obtained food by taking advantage of human activity.
So, what conclusions did the researchers draw? According to the results, the Near Eastern cats were not fully dependent on humans. They made use of the food sources associated with human settlement, but could also find others in their habitat, either periodically benefiting from human activity or hunting on their own in forests. Thus, they maintained their independence.
As Dr Krajcarz explains, these findings confirm the hypothesis that the Near Eastern wildcats spread across Europe accompanying the first farmers, probably as commensal animals. The stable isotope results obtained for the Roman Period cats, however, resemble those of humans and dogs, which suggests that by then cats followed a similar diet, i.e., they benefited from human resources or were possibly fed by people. The development of farming also partially influenced the native European wildcat, even though it remained more oriented toward forest resources.

[Image: Neolithic cultural level inside Zarska Cave, where one of the earliest cat remains of the Near Eastern lineage was discovered.]

On the trail of cat history
Dr Magdalena Krajcarz and Prof. Daniel Makowiecki from the Institute of Archaeology at the Nicolaus Copernicus University are continuing their research on the history of domestic cats. Together with a team of paleogeneticists supervised by Dr Danijela Popović from the University of Warsaw, they are initiating a new research project, 5,000 Years of History of Domestic Cats in Central Europe: an Interdisciplinary Paleogenetic and Archaeozoological Study, funded by the National Science Centre. The project will be based on international cooperation with researchers from institutions in Belgium, Serbia, Lithuania, Slovakia, and the Czech Republic.
The main aim of the four-year project is to reconstruct the migration routes of domestic cats from the regions where they were domesticated to Europe, and to look for traces of selection, natural and/or human-controlled, on the cat genome. The research team plans to analyze hundreds of cat bone remains from archaeological and paleontological sites. The interdisciplinary project will employ conventional archaeozoological and paleontological morphometric methods as well as ancient DNA analysis and radiocarbon dating.
The researchers wish to trace the phenotypic and genetic changes in cats associated with domestication (aesthetic: size, coloration; behavioral: reduced aggression; physiological: adapting to digest anthropogenic food, e.g., milk, starch).
On the basis of the genomic data, they want to estimate the cross-breeding intensity of the Nubian cat and the European wildcat in order to check whether it increased together with the domestic cat population expansion.
###

The earliest cat on the Northern Silk Road

NATIONAL RESEARCH UNIVERSITY HIGHER SCHOOL OF ECONOMICS
[Image: Remains of the early medieval cat from Dzhankent (Kazakhstan). Credit: A. Haruda, 2020]
Dr. Irina Arzhantseva and Professor Heinrich Haerke from the Centre for Classical and Oriental Archaeology (IKVIA, Faculty of Humanities, HSE University) have been involved in the discovery of the earliest domestic cat yet found in northern Eurasia.
Since 2011, the abandoned town of Dzhankent, located near Kazalinsk and Baikonur (Kazakhstan), has been the object of international research and expeditions led by the two HSE archaeologists, together with Kazakh colleagues from Korkyt-Ata State University of Kyzylorda. Last year, the sharp-eyed archaeozoologist on the team, Dr. Ashleigh Haruda from Martin Luther University Halle-Wittenberg (Germany), while looking through the masses of animal bones from the excavation, spotted the bones of a feline and immediately realized the significance of the find.
She assembled an international and interdisciplinary team to study all aspects of this cat, and obtain all possible information from the largely complete skeleton. As a result, we now have an astonishingly detailed picture of a tomcat that lived and died in the late 8th century AD in a large village on the Syr-Darya river, not far from the Aral Sea (as it was then). X-rays, 3D imaging and close inspection of the bones revealed a number of serious fractures that had healed, meaning that humans must have looked after the animal while he was unable to hunt. In fact, he was looked after quite well: in spite of his disabilities, he reached an age well over one year, probably several years. Also, stable isotope analysis showed that this tomcat most likely fed on fish, an observation which would also fit the local environment.
[Image: Excavations at the citadel of Dzhankent at the spot where the cat remains were found.]
But even more intriguing is what this high-calibre scientific study says about the relationship between humans and pets at the time. We know from 10th century Arab geographers that Yengi-kent (as Dzhankent was called then) was a town where the ruler of the Turkic Oguz nomads had his winter quarters. But not only is this two centuries after the time when the tomcat lived here: we also know from ethnographic studies that nomads do not keep cats - or rather, cats may temporarily live in nomad camps, but they do not follow the movements of the nomads with their herds. Cats thrive on small rodents which are attracted by human food stores, mostly grain, and nomads do not have large grain stores; such stores are typical of villages and towns, and that is where the history of cats and cat-keeping started.
[Image: Cat in the modern village next to Dzhankent.]
So the presence of Dzhanik (as the archaeologists have begun to call him) at this place implies that this was a reasonably large settlement with a sedentary population even 200 years before it was surrounded by big walls and called a town. This fits the provisional ideas of the archaeologists about the origins of Dzhankent: the later town of the 10th century grew out of a large fishing village which, as early as the 7th/8th centuries, had trading links to the south, to the Iranian civilization of Khorezm on the Amu-Darya river. Khorezmian traders would have been interested in the location of Dzhankent on the Syr-Darya, the river which around that time became the route of the Northern Silk Road, connecting Central Asia (and ultimately, China) to the Volga, the Caspian and Black Seas and the Mediterranean. And it is along one of these trade routes that domestic cats must have reached Dzhankent, perhaps with a caravan or, more likely, on a river boat or sailing ship. For Dzhanik was not a captured, tame wildcat that had lived in the Aral Sea region: ancient DNA has proven that he was most likely a true representative of Felis catus L., the modern domestic cat species. And this makes him the earliest domestic mouser in Eurasia north of Central Asia and east of China, about 1,200 years ago.

COVID-19 may attack patients' central nervous system

University of Cincinnati researcher says depressed mood and anxiety may be symptoms of a COVID-19 impact on the brain
UNIVERSITY OF CINCINNATI
[Image: Ahmad Sedaghat, MD, PhD, shown in the University of Cincinnati Gardner Neuroscience Institute. Credit: Colleen Kelley/University of Cincinnati Creative + Brand]
Depressed mood or anxiety exhibited in COVID-19 patients may possibly be a sign the virus affects the central nervous system, according to an international study led by a University of Cincinnati College of Medicine researcher.
These two psychological symptoms were most closely associated with a loss of smell and taste rather than the more severe indicators of the novel coronavirus such as shortness of breath, cough or fever, according to the study.
"If you had asked me why would I be depressed or anxious when I am COVID positive, I would say it is because my symptoms are severe and I have shortness of breath or I can't breathe or I have symptoms such as cough or high fever," says Ahmad Sedaghat, MD, PhD, an associate professor and director of rhinology, allergy and anterior skull base surgery, in the UC College of Medicine's Department of Otolaryngology-Head and Neck Surgery.
"None of these symptoms that portended morbidity or mortality was associated with how depressed or anxious these patients were," explains Sedaghat, also a UC Health physician specializing in diseases of the nose and sinuses. "The only element of COVID-19 that was associated with depressed mood and anxiety was the severity of patients' loss of smell and taste. This is an unexpected and shocking result."
Sedaghat conducted a prospective, cross-sectional telephone questionnaire study which examined characteristics and symptoms of 114 patients who were diagnosed with COVID-19 over a six-week period at Kantonsspital Aarau in Aarau, Switzerland. Severity of the loss of smell or taste, nasal obstruction, excessive mucus production, fever, cough and shortness of breath during COVID-19 were assessed. The findings of the study are available online in The Laryngoscope.
First author of the study is Marlene M. Speth, MD, and other co-authors include Thirza Singer-Cornelius, MD; Michael Oberle, PhD; Isabelle Gengler, MD; and Steffi Brockmeier, MD.
At the time of enrollment in the study, when participants were experiencing COVID-19, 47.4% of participants reported at least several days of depressed mood per week, while 21.1% reported depressed mood nearly every day. In terms of severity, 44.7% of participants reported mild anxiety while 10.5% reported severe anxiety.
"The unexpected finding that the potentially least worrisome symptoms of COVID-19 may be causing the greatest degree of psychological distress could potentially tell us something about the disease," says Sedaghat. "We think our findings suggest the possibility that psychological distress in the form of depressed mood or anxiety may reflect the penetration of SARS-CoV-2, the virus that causes COVID-19, into the central nervous system."
Sedaghat says researchers have long thought that the olfactory tract may be the primary way that coronaviruses enter the central nervous system. There was evidence of this with SARS, or severe acute respiratory syndrome, a viral illness that first emerged in China in November 2002 and spread through international travel to 29 countries. Studies using mouse models of that virus have shown that the olfactory tract, or the pathway for communication of odors from the nose to the brain, was a gateway into the central nervous system and infection of the brain.
"These symptoms of psychological distress, such as depressed mood and anxiety are central nervous system symptoms if they are associated only with how diminished is your sense of smell," says Sedaghat. "This may indicate that the virus is infecting olfactory neurons, decreasing the sense of smell, and then using the olfactory tract to enter the central nervous symptom."
Infrequent but severe central nervous system symptoms of COVID-19, such as seizures or altered mental status, have been described, but depressed mood and anxiety may be considerably more common, if milder, central nervous system symptoms of COVID-19, explains Sedaghat.
"There may be more central nervous system penetration of the virus than we think based on the prevalence of olfaction-associated depressed mood and anxiety and this really opens up doors for future investigations to look at how the virus may interact with the central nervous system," says Sedaghat.
###
For the cross-sectional telephone questionnaire study: the two-item Patient Health Questionnaire (PHQ-2) and the two-item Generalized Anxiety Disorder questionnaire (GAD-2) were used to measure depressed mood and anxiety level, respectively, during COVID-19 and for participants' baseline pre-COVID-19 state.
Funding for the study came from Kantonsspital Aarau, Aarau, Switzerland.

Why hydration is so important when hiking in the heat of summer

ARIZONA STATE UNIVERSITY
You don't have to be an experienced trailblazer to know that if you choose to hike in the heat, you better be hydrated. Yet scientific literature on the subject reports that roughly 25% of heat-related illness cases are a result of a fluid imbalance, rather than heat exposure alone.
New research out of Arizona State University seeks to understand exactly what is going on in the body as it responds to heat stress, looking in particular at hydration levels, core temperature and sweat loss, in the hopes of developing interventions and best practices for those whose mountainous wanderlust just can't be quenched.
The findings of one such related study, recently published in the International Journal of Environmental Research and Public Health, show that compared to moderate weather conditions, hikers' performance during hot weather conditions was indeed impaired, resulting in slower hiking speeds and prolonged exposure to the elements, thus increasing their risk of heat-related illness.
Perhaps most telling, though, the research team found that most hikers did not bring enough fluid with them on their hike to compensate for their sweat loss. They also found that less aerobically fit participants were most negatively affected by heat stress and performed worse overall compared to their more aerobically fit counterparts.
"The current guidelines for hikers in general are very broad and geared more toward safety than quantifying the adequate amount of fluid they need," said ASU College of Health Solutions Assistant Professor Floris Wardenaar, corresponding author on the paper. "The guidelines also do not take into account fitness levels or the importance of incremental exposure to the heat, which can be affected by acclimatization to specific environments and weather conditions."
Former College of Health Solutions master's degree students Joshua Linsell and Emily Pelham are the first and second authors of the paper, followed by School of Geographical Sciences and Urban Planning Assistant Professor David Hondula and Wardenaar.
In their study, 12 participants -- seven women and five men in their 20s -- were asked to hike "A" Mountain on a moderate day (68 degrees Fahrenheit) and then again on a hot day (105 degrees Fahrenheit). They were told to prepare as they normally would, bringing however much fluid they thought they would need, and were asked to hike as quickly as possible without becoming uncomfortable. Each time, they hiked up and down the mountain four times, which adds up to roughly the same distance and incline as Camelback Mountain, one of the most popular hiking destinations in the Phoenix area that sees its fair share of heat-related illness cases.
Before their trek, participants' resting metabolism was recorded to estimate their energy production during the hike. Their weight, heart rate, core temperature and hydration status were measured before and after the hike, and their drinking behavior - how much or how little fluid they consumed - was monitored throughout.
Using that data, researchers were able to calculate participants' rate of sweat loss through their bodyweight reduction, which averaged out to about 1%, whether conditions were hot or moderate.
"The 1% bodyweight reduction had different reasons," Wardenaar said. "During hot conditions, participants' sweat rates were higher while drinking more, often resulting in consuming all of the fluid brought, whereas during moderate conditions, sweat rates were lower, but participants drank less. A 1% bodyweight loss is considered manageable and not likely to result in detrimental performance decline. My concern is that when people hike longer than 80 to 90 minutes in hot conditions that they will not bring enough fluid, resulting in larger bodyweight losses."
Overall, compared to moderate conditions, hot conditions significantly impaired hiking performance by 11%, reduced aerobic capacity by 7%, increased the rating of perceived exertion by 19% and elevated core temperature. On average, participants took about 20 minutes longer to complete the hike during hot conditions than during moderate conditions, which could substantially increase the chance of developing heat-related illness.
"Heat slows you down," Wardenaar explained. "This means that what you normally can hike in 75 minutes under moderate conditions may take up to 95 minutes in the heat. That is something that people should take into account, especially when their hike will substantially exceed the 90 minute cut-off."
Based on their findings, Wardenaar suggests preparing for a hike by familiarizing yourself with your personal hydration needs. To do so, weigh yourself before and after a hike and compare the weight you lost with 1% of your starting weight (your starting weight multiplied by 0.01). If you lost more than 1% of your starting bodyweight, you did not drink enough and should bring more fluid on your next hike. A worked example of this check is sketched below.
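The following minimal Python sketch illustrates that bodyweight check; the function name, the kilogram units, and the sample weights are illustrative assumptions rather than part of the study protocol.

def hydration_check(pre_hike_kg: float, post_hike_kg: float) -> None:
    """Compare post-hike weight loss against the ~1% bodyweight guideline."""
    loss_kg = pre_hike_kg - post_hike_kg       # fluid lost and not replaced by drinking
    threshold_kg = 0.01 * pre_hike_kg          # 1% of starting bodyweight
    loss_pct = 100.0 * loss_kg / pre_hike_kg
    print(f"Lost {loss_kg:.2f} kg ({loss_pct:.1f}% of starting bodyweight)")
    if loss_kg > threshold_kg:
        print("More than 1% lost: bring and drink more fluid on future hikes.")
    else:
        print("Within the ~1% range the study considered manageable.")

# Example: a 70.0 kg hiker who weighs 69.1 kg after the hike (about 1.3% loss)
hydration_check(70.0, 69.1)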
It's also important to be well-hydrated before you even get out on the trail, Wardenaar said. And avoid alcohol, as it can contribute to dehydration.
###

Converting female mosquitoes to non-biting males with implications for mosquito control

VIRGINIA TECH
[Image: James Biedler (left), a research scientist in Zhijian Tu's lab; Azadeh Aryan (middle), the first author on the paper and a research scientist in Zhijian Tu's lab; and Maria Sharakhova... Credit: Virginia Tech]
Virginia Tech researchers have proven that a single gene can convert female Aedes aegypti mosquitoes into fertile male mosquitoes and identified a gene needed for male mosquito flight.
Male mosquitoes do not bite and are unable to transmit pathogens to humans. Female mosquitoes, on the other hand, are able to bite.
Female Aedes aegypti mosquitoes require blood to produce eggs, making them the prime carriers of the pathogens that cause Zika and dengue fever in humans.
"The presence of a male-determining locus (M locus) establishes the male sex in Aedes aegypti and the M locus is only inherited by the male offspring, much like the human Y chromosome," said Zhijian Tu, a professor in the Department of Biochemistry in the College of Agriculture and Life Sciences.
"By inserting Nix, a previously discovered male-determining gene in the M locus of Aedes aegypti, into a chromosomal region that can be inherited by females, we showed that Nix alone was sufficient to convert females to fertile males. This may have implications for developing future mosquito control techniques."
These findings were published in the Proceedings of the National Academy of Sciences.
"We also discovered that a second gene, named myo-sex, was needed for male flight. This work sheds light into the molecular basis of the function of the M locus, which contains at least 30 genes," said Azadeh Aryan, a research scientist in Tu's lab and the first author on the paper.
Aryan and colleagues generated and characterized multiple transgenic mosquito lines that expressed an extra copy of the Nix gene under the control of its own promoter. Maria Sharakhova, an assistant professor of entomology in the College of Agriculture and Life Sciences, and Anastasia Naumencko, a former graduate research assistant, mapped the chromosomal insertion site of the extra copy of Nix.
The Virginia Tech team, in collaboration with Zach Adelman's lab in the Department of Entomology at Texas A&M University and Chunhong Mao of the Biocomplexity Institute & Initiative at the University of Virginia, found that the Nix transgene alone, even without the M locus, was sufficient to convert females into males with male-specific sexually dimorphic features and male-like gene expression.
"Nix-mediated sex conversion was found to be highly penetrant and stable over many generations in the laboratory, meaning that these characteristics will be inherited for generations to come," said Michelle Anderson, a former member of the Adelman and Tu labs and currently a senior research scientist at the Pirbright Institute in the United Kingdom.
Although the Nix gene was able to convert the females into males, the converted males could not fly as they did not inherit the myo-sex gene, which is also located in the M locus.
Knocking out myo-sex in wild-type males confirmed that the lack of myo-sex in the sex-converted males is the reason why they could not fly. Although flight is needed for mating, the sex-converted males were still able to father viable sex-converted progeny when presented with cold-anesthetized wild-type females.
"Nix has great potential for developing mosquito control strategies to reduce vector populations through female-to-male sex conversion, or to aid in the Sterile Insect Technique, which requires releasing only nonbiting males," said James Biedler, a research scientist in the Tu lab.
Genetic methods that rely on mating to control mosquitoes target only one specific species. In this case, the Tu team is targeting Aedes aegypti, a species that invaded the Americas a few hundred years ago and poses a threat to humans.
However, more research is needed before potentially useful transgenic lines can be generated for initial testing in laboratory cages. "One of the challenges is to produce transgenic lines that convert females into fertile, flying male mosquitoes by inserting both the Nix and myo-sex genes into their genome together," said Adelman.
As the Tu team looks to the near future, they wish to explore the mechanism by which the Nix gene activates the male developmental pathway. The team is also interested in learning about how it evolves within mosquito species of the same genus.
"We have found that the Nix gene is present in other Aedes mosquitoes. The question is: how did this gene and the sex-determining locus evolve in mosquitoes?" said Tu, who is also an affiliated faculty member of the Fralin Life Sciences Institute.
In addition to diving into the depths of the Nix gene in mosquitoes, researchers hope that these findings will inform future investigations into homomorphic sex chromosomes that are found in other insects, vertebrates, and plants.
###
Yumin Qi, a research scientist at Virginia Tech, and Justin Overcash, a former graduate student at both Virginia Tech and Texas A&M University, also contributed to this research.
Written by Kristin Rose Jutras and Kendall Daniels

For chimpanzees, salt and pepper hair not a marker of old age

New GW study finds there is significant variation in how chimpanzees experience pigment loss
GEORGE WASHINGTON UNIVERSITY
[Image: There is significant individual variation in how chimpanzees, like this one at Gombe National Park, experience pigment loss. Credit: Ian C. Gilby]
WASHINGTON (July 14, 2020)--Silver strands and graying hair are signs of aging in humans, but things aren't so simple for our closest ape relative--the chimpanzee. A new study published today in the journal PLOS ONE by researchers at the George Washington University found that graying hair is not indicative of a chimpanzee's age.
This research calls into question the significance of the graying phenotype in wild non-human species. While graying is among the most salient traits a chimpanzee has--the world's most famous chimpanzee was named David Greybeard--there is significant pigmentation variation among individuals. Chimpanzees gray until they reach midlife, after which graying plateaus as they continue to age, according to Elizabeth Tapanes, a Ph.D. candidate in the GW Department of Anthropology and lead author of the study.
"With humans, the pattern is pretty linear, and it's progressive. You gray more as you age. With chimps that's really not the pattern we found at all," Tapanes said. "Chimps reach this point where they're just a little salt and peppery, but they're never fully gray so you can't use it as a marker to age them."
The researchers gathered photos of two subspecies of wild and captive chimpanzees from their collaborators in the field to test this observation. They visually examined photos of the primates, evaluated how much visible gray hair they had and rated them accordingly. The researchers then analyzed that data, comparing it to the age of the individual chimpanzees at the time the photos were taken.
The researchers hypothesize there could be several reasons why chimpanzees did not evolve graying hair patterns similar to humans. Their signature dark pigmentation might be critical for thermoregulation or helping individuals identify one another.
Dr. Brenda Bradley, an associate professor of anthropology, is the senior author on the paper. This research dates back to an observation Dr. Bradley made while visiting a field site in Uganda five years ago. As she was learning the names of various wild chimpanzees, she found herself making assumptions about how old they were based on their pigmentation. On-site researchers told her that chimps did not go gray the same way humans do. Dr. Bradley was curious to learn if that observation could be quantified.
There has been little previous research on pigmentation loss in chimpanzees or any wild mammals, Dr. Bradley said. Most existing research on human graying is oriented around the cosmetic industry and clinical dermatology.
"There's a lot of work done on trying to understand physiology and maybe how to override it," Dr. Bradley said. "But very little work done on an evolutionary framework for why is this something that seems to be so prevalent in humans."
The researchers plan to build on their findings by looking at the pattern of gene expression in individual chimpanzee hairs. This will help determine whether changes are taking place at the genetic level that match changes the eye can see.
This study comes ahead of World Chimpanzee Day on July 14. GW's faculty and student researchers make contributions to our global understanding of chimpanzees and primates as part of the GW Center for the Advanced Study of Human Paleobiology. Through various labs, investigators study the evolution of social behavior in the chimpanzees and bonobos, the evolution of primate brain structure, and lead on-the-ground projects at the Gombe Stream Research Center in Tanzania. Dr. Bradley's lab is also currently working on research about color vision and hair variation in lemurs.
###

Ancient oyster shells provide historical insights

New research suggests ways to improve reef management and stabilize ecosystems
UNIVERSITY OF GEORGIA
An interdisciplinary team of scientists studying thousands of oyster shells along the Georgia coast, some as old as 4,500 years, has published new insights into how Native Americans sustained oyster harvests for thousands of years, observations that may lead to better management practices of oyster reefs today.
Their study, led by University of Georgia archaeologist Victor Thompson, was published July 10 in the journal Science Advances.
The new research argues that understanding the long-term stability of coastal ecosystems requires documenting past and present conditions of such environments, as well as considering their future. The findings highlight a remarkable stability of oyster reefs prior to the 20th century and have implications for oyster-reef restoration by serving as a guide for the selection of suitable oyster restoration sites in the future.
Shellfish, such as oysters, have long been a food staple for human populations around the world, including Native American communities along the coast of the southeastern United States. The eastern oyster Crassostrea virginica is a species studied frequently by biologists and marine ecologists because of the central role the species plays in coastal ecosystems.
Oysters are a keystone species whose reefs provide critical habitats for other estuarine organisms. Oyster populations, however, have declined dramatically worldwide over the last 100 years due to overexploitation, climate change and habitat degradation.
"Oyster reefs were an integral part of the Native American landscape and our study shows that their sustainability over long periods of time was likely due to the sophisticated cultural systems that governed harvesting practices," said Thompson, professor of anthropology in the Franklin College of Arts and Sciences and director of the UGA Laboratory of Archaeology.
According to Thompson, prior models used by archaeologists have not adequately accounted for the role Indigenous people played, not only in sustaining ecosystems but also in enhancing biodiversity.
"Our research shows that harvesting was done likely with an aim towards sustainability by Native American communities," he said. "Work here along the Georgia coast, along with colleagues working in the Pacific and in Amazonia, indicates that Indigenous peoples had a wealth of traditional ecological knowledge regarding these landscapes and actively managed them for thousands of years."
Changes in oyster shell size and abundance are widely used to examine human population pressures and the health of oyster reefs. The researchers measured nearly 40,000 oyster shells from 15 archaeological sites along the South Atlantic coast of the United States, spanning the Late Archaic (4,500-3,500 years Before Present) through Mississippian (1,150-370 years BP) periods, to provide a long-term record of oyster harvesting practices and to document oyster abundance and size across time.
The new findings show an increase in oyster size throughout time and a nonrandom pattern in their distributions across archaeological sites up and down the coastline that the authors believe is related to the varying environmental conditions found in different areas.
When the researchers compared their work to maps of the 19th-century oyster reef distributions, they found that the two were highly correlated. All of the data on oyster size and reef size suggested there was considerable stability in oyster productivity over time, even if some reefs were not quite as productive as others. This overall productivity changed, however, in the early 1900s, when industrial oyster canning devastated the reefs, leaving only a small percentage of them viable today.
"This work, which was partially supported by the Georgia Coastal Ecosystems Long Term Ecological Research project, demonstrates the importance of understanding the role that humans play in shaping the landscape, and that is something that is not always appreciated in ecological studies," said Merryl Alber, professor and director of the UGA Marine Institute on Sapelo Island, a site of excavations for this study.
###
The Georgia Coastal Ecosystems research project, established in 2000 with a grant from the National Science Foundation and renewed for the third time in 2019, studies long-term change in coastal ecosystems such as the saltwater marshes that characterize Georgia's coastline.

New lithium battery charges faster, reduces risk of device explosions

Researchers at Texas A&M University have invented a technology that can prevent lithium batteries from heating and failing
TEXAS A&M UNIVERSITY
[Image: A schematic showing a lithium battery with the new carbon nanotube architecture for the anode. Credit: Juran Noh/Texas A&M University College of Engineering]
Cell phone batteries often heat up and, at times, can burst into flames. In most cases, the culprit behind such incidents can be traced back to lithium batteries. Despite providing long-lasting electric currents that can keep devices powered up, lithium batteries can internally short circuit, heating up the device.
Researchers at Texas A&M University have invented a technology that can prevent lithium batteries from heating and failing. Their carbon nanotube design for the battery's conductive plate, called the anode, enables the safe storage of a large quantity of lithium ions, thereby reducing the risk of fire. Further, they said that their new anode architecture will help lithium batteries charge faster than current commercially available batteries.
"We have designed the next generation of anodes for lithium batteries that are efficient at producing large and sustained currents needed to quickly charge devices," said Juran Noh, a material sciences graduate student in Dr. Choongho Yu's laboratory in the J. Mike Walker '66 Department of Mechanical Engineering. "Also, this new architecture prevents lithium from accumulating outside the anode, which over time can cause unintended contact between the contents of the battery's two compartments, which is one of the major causes of device explosions."
Their results are published in the March issue of the journal Nano Letters.
When lithium batteries are in use, charged particles move between the battery's two compartments. Electrons given up by lithium atoms flow from one side of the battery to the other through the external circuit, while the lithium ions travel between the compartments inside the battery. When the battery is charged, the lithium ions and electrons return to their original compartment.
Hence, the properties of the anode, the electrical conductor that houses lithium ions within the battery, play a decisive role in the battery's performance. A commonly used anode material is graphite. In these anodes, lithium ions are inserted between layers of graphite. However, Noh said this design limits the amount of lithium ions that can be stored within the anode and even requires extra energy to pull the ions out of the graphite during charging.
These batteries also have a more insidious problem. Sometimes lithium ions do not evenly deposit on the anode. Instead, they accumulate on the anode's surface in chunks, forming tree-like structures, called dendrites. Over time, the dendrites grow and eventually pierce through the material that separates the battery's two compartments. This breach causes the battery to short circuit and can set the device ablaze. Growing dendrites also affect the battery's performance by consuming lithium ions, rendering them unavailable for generating a current.
Noh said another anode design involves using pure lithium metal instead of graphite. Compared to graphite anodes, those with lithium metal have a much higher energy content per unit mass or energy density. But they too can fail in the same catastrophic way due to the formation of dendrites.
To address this problem, Noh and her teammates designed anodes using highly conductive, lightweight materials called carbon nanotubes. These carbon nanotube scaffolds contain spaces or pores for lithium ions to enter and deposit. However, these structures do not bind to lithium ions favorably.
Hence, they made two other carbon nanotube anodes with slightly different surface chemistry -- one laced with an abundance of molecular groups that can bind to lithium ions and another that had the same molecular groups but in a smaller quantity. With these anodes, they built batteries to test the propensity to form dendrites.
As expected, the researchers found that scaffolds made with just carbon nanotubes did not bind to lithium ions well. Consequently, there was almost no dendrite formation, but the battery's ability to produce large currents was also compromised. On the other hand, scaffolds with an excess of binding molecules formed many dendrites, shortening the battery's lifetime.
However, the carbon nanotube anodes with an optimum quantity of the binding molecules prevented the formation of dendrites. In addition, a vast quantity of lithium ions could bind and spread along the scaffold's surface, thereby boosting the battery's ability to produce large, sustained currents.
"When the binding molecular groups are abundant, lithium metal clusters made from lithium ions end up just clogging the pores on the scaffolds," said Noh. "But when we had just the right amount of these binding molecules, we could 'unzip' the carbon nanotube scaffolds at just certain places, allowing lithium ions to come through and bind on to the entire surface of the scaffolds rather than accumulate on the outer surface of the anode and form dendrites."
Noh said that their top-performing anodes can handle currents five times higher than those of commercially available lithium batteries. She noted this feature is particularly useful for large-scale batteries, such as those used in electric cars, that require quick charging.
"Building lithium metal anodes that are safe and have long lifetimes has been a scientific challenge for many decades," said Noh. "The anodes we have developed overcome these hurdles and are an important, initial step toward commercial applications of lithium metal batteries."
###
Other contributors to this research include Jian Tan from the mechanical engineering department; and Digvijay Rajendra Yadav, Peng Wu and Dr. Kelvin Xie in the materials science and engineering department.