Wednesday, August 09, 2023

 

Good smells, bad smells: It’s all in the insect brain


Raman looks at natural and acquired preferences using locusts


Peer-Reviewed Publication

WASHINGTON UNIVERSITY IN ST. LOUIS

IMAGE: Barani Raman and his lab at the McKelvey School of Engineering studied the behavior of locusts and how the neurons in their brains responded to appealing and unappealing odors to learn more about how the brain encodes preferences and how it learns.

CREDIT: Raman Lab, Washington University in St. Louis




Everyone has scents that naturally appeal to them, such as vanilla or coffee, and scents that don’t appeal. What makes some smells appealing and others not?

Barani Raman, a professor of biomedical engineering at the McKelvey School of Engineering at Washington University in St. Louis, and Rishabh Chandak, who earned bachelor’s, master’s and doctoral degrees in biomedical engineering in 2016, 2021 and 2022, respectively, studied the behavior of locusts and how the neurons in their brains responded to appealing and unappealing odors to learn more about how the brain encodes preferences and how it learns.

The study provides insights into how an organism’s ability to learn is constrained by what it finds appealing or unappealing, as well as by the timing of the reward. Results of the research were published in Nature Communications Aug. 5.

Raman has used locusts for years to study the basic principles of the enigmatic sense of smell. While it is more of an aesthetic sense in humans, for insects, including locusts, the olfactory system is used to find food and mates and to sense predators. Neurons in their antennae convert chemical cues to electrical signals and relay them to the brain. This information is then processed by several neural circuits that convert these sensory signals to behavior.

Raman and Chandak set out to understand how neural signals are patterned to produce food-related behavior. Much as dogs and humans salivate at the prospect of food, locusts use sensory appendages close to their mouths, called palps, to grab food. The grabbing action is automatically triggered when certain odorants are encountered. The researchers termed odorants that triggered this innate behavior appetitive; those that did not produce the behavior were categorized as unappetitive.

Raman and Chandak, who earned the outstanding dissertation award in biomedical engineering, used 22 different odors to understand which odorants the locusts found appetitive and which they did not. The locusts’ favorite scents were those that smelled like grass (hexanol) and banana (isoamyl acetate), and their least favorites smelled like almond (benzaldehyde) and citrus (citral).

“We found that the locusts responded to some odors and not others, then we laid them out in a single behavioral dimension,” Raman said.

To understand what made some odorants more likable and others not, they exposed the hungry locusts to each of the scents for four seconds and measured their neural response. They found that the panel of odorants produced neural responses that nicely segregated depending on the behavior they generated. Both the neural responses during odor presentation and after its termination contained information regarding the behavioral prediction.

“There seemed to be a simple approach that we could use to predict what the behavior was going to be,” Raman said.
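The kind of simple predictive readout Raman describes can be illustrated in principle with a toy decoder: synthetic population responses for two odor classes, with a held-out response labeled by whichever class centroid it lies nearer to. All numbers and names here are hypothetical, and this is only a sketch of the general idea, not the authors’ actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 neurons, with responses to appetitive vs. unappetitive
# odors clustered around two class-specific population patterns, mimicking the
# segregation of neural responses by the behavior they generate.
n_neurons, n_trials = 40, 30
appetitive_mean = rng.normal(5.0, 1.0, n_neurons)
unappetitive_mean = rng.normal(2.0, 1.0, n_neurons)

def trials(mean, n):
    """Simulate n noisy trials around a class-specific response pattern."""
    return mean + rng.normal(0.0, 0.5, (n, len(mean)))

train_app = trials(appetitive_mean, n_trials)
train_unapp = trials(unappetitive_mean, n_trials)

# Nearest-centroid decoder: label a held-out response by the closer class mean.
c_app, c_unapp = train_app.mean(0), train_unapp.mean(0)

def predict(x):
    near_app = np.linalg.norm(x - c_app) < np.linalg.norm(x - c_unapp)
    return "appetitive" if near_app else "unappetitive"

test = trials(appetitive_mean, 10)
accuracy = np.mean([predict(x) == "appetitive" for x in test])
print(f"held-out accuracy: {accuracy:.2f}")
```

When the two response patterns segregate cleanly, even this minimal decoder predicts the behavioral category reliably, which is the intuition behind a "simple approach" to prediction.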

Interestingly, some of the locusts showed no response to any of the odors presented, so Raman and Chandak wanted to see if they could train them to respond. Much as Pavlov trained his dogs with a bell followed by a food reward, they presented each locust with an odorant and then gave it a snack of a piece of grass at different time points following the odor presentation. They found that locusts only associated appealing scents with a food reward. By delaying the reward, they found that locusts could be trained to delay their behavioral response.

“With the ON-training approach, we found that the locusts opened their palps immediately after the onset of the odor, stayed open during the presentation of the odor, then closed after the odor was stopped,” Raman said. “In contrast, the OFF-training approach resulted in the locusts opening their palps much slower, reaching the peak response after the odor was stopped.”

The researchers found that the timing of the reward during training was important. When they gave the reward four seconds after the odor ended, the locusts did not learn that the odor indicated a reward was coming. Even for the appealing scents, no learning was observed.

They found that training with unpleasant stimuli led locusts to respond more to the pleasant ones. To explain this paradoxical observation, Raman and Chandak developed a computational model based on the idea that there is a segregation of information relevant to behavior very early in the sensory input to the brain. This simple idea was sufficient to explain how innate and learned preference for odorants could be generated in the locust olfactory system.

“This all goes back to a philosophical question: How do we know what is positive and what is negative sensory experience?” Raman said. “All information received by our sensory apparatus, and their relevance to us, has to be represented by electrical activity in the brain. It appears that sorting information in this fashion happens as soon as the sensory signals enter the brain.”


Chandak R, Raman B. Neural manifolds for odor-driven innate and acquired appetitive preferences. Nature Communications, Aug. 5, 2023. DOI: 10.1038/s41467-023-40443-2.

This research was supported by the National Science Foundation (1453022, 1724218, 2021795) and the Office of Naval Research (N00014-19-1-2049, N00014-21-1-2343).

Originally published by the McKelvey School of Engineering website.

 

Cybersecurity project plans to connect researchers across the country


Grant and Award Announcement

TEXAS A&M UNIVERSITY

IMAGE: Dr. Narasimha Reddy

CREDIT: Texas A&M University



From fighter jets to automobiles, the manufacturing world is increasingly adopting digital instructions as technology advances. Mechanical parts can be designed on a computer and shipped over the network to a manufacturing machine that follows digital instructions to produce a specific part. The move into the digital world makes securing online information a national interest.

Dr. Narasimha Reddy, a professor in the Department of Electrical and Computer Engineering at Texas A&M University, recently received a National Science Foundation grant to research cybersecurity issues in digital manufacturing.

“The hope is that by getting ahead of the deployment of these digital manufacturing machines and finding solutions for the cybersecurity problems, we will make manufacturing more secure,” he said. “Since these machines need to receive instructions over the network, they can potentially be sent malicious packets to damage the machines. We’re looking at these issues related to the security of the machines.”

When a company produces parts for fighter jets using modern manufacturing processes, there is a risk that someone could break into the network and compromise the integrity of those parts. A national security issue arises if defective machinery ends up in these jets.

“An easier way to think about this is with 3D-manufactured automobile parts,” Reddy said. “Let's say when you go to the car manufacturer or dealership, you need a part such as an axle. They don’t hold the parts anymore in inventory, so they print it for you. This may especially be the case for old cars that are not around anymore. If those designs are compromised, you could potentially get into an accident in a car with a defective part. The idea is to prevent these designs and parts from being compromised.”
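One standard building block for protecting part designs sent over a network is a keyed integrity tag, so the machine refuses instructions that were modified in transit. The sketch below is a generic illustration of the threat Reddy describes, not the project's actual approach; the key, file contents and function names are all hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key provisioned to the machine

def sign_design(design_bytes: bytes) -> str:
    """Producer side: tag a part design with an HMAC before sending it."""
    return hmac.new(SECRET_KEY, design_bytes, hashlib.sha256).hexdigest()

def verify_design(design_bytes: bytes, tag: str) -> bool:
    """Machine side: refuse to manufacture unless the tag checks out."""
    expected = sign_design(design_bytes)
    return hmac.compare_digest(expected, tag)

design = b"G1 X10 Y10\n"  # stand-in for a real toolpath/design file
tag = sign_design(design)
print(verify_design(design, tag))                # untampered: accepted
print(verify_design(design + b"G1 X99\n", tag))  # modified in transit: rejected
```

A real deployment would also need key management and protection against replayed designs, which is part of why the grant treats this as a community-wide research problem rather than a solved one.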

Reddy aims to make manufacturers aware of potential issues so they can implement safety practices before deploying digital manufacturing machines. The idea is to get ahead of the problem before it becomes too commonplace. A website will also be built to open the lines of communication between manufacturers and researchers.

“This grant is about trying to get the people from the cybersecurity side and people from the manufacturing side to talk to each other to create a community that's going to be interested in solving the problems,” Reddy said. “Not only are we working together on research problems, but we're also trying to bring people of similar interests together through workshops, conferences and student design competitions. The intent is to create several activities that spark interest in this space.”

Working alongside Reddy for this project is Dr. Satish Bukkapatnam, co-principal investigator and Texas A&M industrial engineering professor. The team also includes Dr. Ramesh Karri and Dr. Nikhil Gupta from New York University, Dr. Nektarios Tsoutsos from the University of Delaware, Dr. Sidi Berri from The City University of New York and Dr. Annamalai Annamalai from Prairie View A&M.

By Katie Satterlee, Texas A&M Engineering

 

Mothers experiencing depression can still thrive as parents


UBC Okanagan researcher explores how external supports offset the risks to children’s health posed by maternal depression


Peer-Reviewed Publication

UNIVERSITY OF BRITISH COLUMBIA OKANAGAN CAMPUS




The proverb “It takes a village to raise a child” takes on new significance when a mother of a child is experiencing depression.

“Being a mother with depression carries increased risks for a child’s physical and psychological health,” says Dr. Sarah Dow-Fleisner, Assistant Professor in the School of Social Work and Director of the Centre for the Study of Services to Children and Families at UBC Okanagan. “But it’s not fated to be, especially if mothers have external supports.”

Dr. Dow-Fleisner’s findings, recently published in the Journal of Family Issues, have important implications for how social workers and clinical practitioners—as well as families and communities—can help.

While a lot of research focuses on the postpartum period during which the rate of depression among mothers is highest, Dr. Dow-Fleisner wanted to focus on depression occurring later in childhood. Her team used data from a large longitudinal US study to compare depressed and non-depressed mothers of nine-year-old children.

Her analyses revealed that mothers with depression were more likely to report parenting stress and less likely to view themselves as competent parents as compared to non-depressed mothers. They also reported engaging in more disciplinary tactics, including nonviolent tactics like taking away privileges as well as aggressive tactics like cursing or threatening the child. In terms of involvement, they were less likely to be involved at the child’s school, such as attending an open house. However, they were equally likely to be involved in home activities, such as helping with homework.

“Furthermore, mothers with depression reported fewer interpersonal supports and community resources than mothers without depression,” says Dr. Dow-Fleisner. “This is consistent with previous research.”

Interpersonal supports refer to both emotional and material help from others, such as a relative providing advice or emergency childcare. Community resources refer to safety and neighbourhood cohesion. Neighbourhood cohesion measures the willingness of neighbours to help and the shared values of the neighbourhood, among other social and trust factors.

“Notably, those mothers with depression who reported higher levels of support and cohesion felt less stressed and more competent in their parenting,” says Dr. Dow-Fleisner. “These positive perceptions translated to less psychological aggression-based discipline and more home and school involvement with their children.”

These findings fit with a resilience perspective, whereby mothers facing adversity like depression can still thrive as parents—especially when these protective factors are present.

“We want to help moms both address their depression and improve the child’s health and wellbeing—this is known as a two-generation approach,” says Dr. Dow-Fleisner. “As mothers may not seek out help for their depression alone, a child health check-up in a primary care setting is a good opportunity to screen for maternal depression and provide support in identifying interpersonal supports and community resources.”

Dr. Dow-Fleisner adds that supportive programs should go beyond addressing immediate parenting problems and instead build capacity. For example, a community-based parenting support group could help a mother to build a network of people who could provide material and emotional support as needed. Dr. Dow-Fleisner cites Mamas for Mamas as one such community-based group. Mamas for Mamas, with branches in Kelowna and Vancouver, builds community and provides material as well as other supports for mothers and other caregivers.

“Further funding of programs that empower mothers—including those experiencing mental health concerns—would go a long way in improving the health and wellbeing of children, mothers and families,” says Dr. Dow-Fleisner.

 

 

PRAXIS BY ANY OTHER NAME

Theory meets practice


Marine protected areas overwhelmingly manage with climate change in mind

Peer-Reviewed Publication

UNIVERSITY OF CALIFORNIA - SANTA BARBARA

IMAGE: The Galapagos Marine Reserve is one of many marine protected areas around the globe that safeguard biodiversity, cultural heritage and marine resources.

CREDIT: Lopazanski et al.




(Santa Barbara, Calif.) — Scientific findings don’t always translate neatly into actions, especially in conservation and resource management. The disconnect can leave academics and practitioners disheartened and a bit frustrated.

“We want conservation science to be informing real-world needs,” said Darcy Bradley, a senior ocean scientist at The Nature Conservancy and a former director of UC Santa Barbara’s Environmental Markets Lab.

“Most managers and practitioners also want to incorporate science into their work,” added Cori Lopazanski, a doctoral student at UCSB’s Bren School of Environmental Science & Management.

Lopazanski and Bradley were particularly curious how much science was finding its way into the management plans of marine protected areas, or MPAs. These are areas of the ocean set aside for conservation of biodiversity, cultural heritage and natural resources. The pair led a study investigating the management plans for 555 marine protected areas to clarify how the documents incorporated recommendations for climate resilience. The team found that many plans contain forward-looking strategies, even when they didn’t explicitly reference “climate change” or related terms. The heartening results appear in the journal Conservation Letters.

This is the first study to examine this question in detail on an international scale. The authors considered marine protected areas of various sizes, locations and layouts across 52 countries, with plans written in nine languages. Their list included practically any marine reserve that barred extractive activities at least somewhere within its borders, including the Channel Islands National Marine Sanctuary, just off the coast of Santa Barbara.

Previous studies mostly focused on the explicit language of management plans. This literal approach gave the appearance that marine protected areas weren’t being managed effectively for climate change. In contrast, Lopazanski, Bradley and their co-authors searched the plans for strategies that promote resilience.

The results appear worrying at first. Just over half of the plans in the study did not explicitly include strategies to tackle climate change impacts. In fact, about 22% didn’t mention climate change at all. “You could mistakenly draw the conclusion that we have a long way to go to really prepare the world’s MPAs for climate change,” Bradley stated. However, a more holistic review revealed a different picture.

Management plans overwhelmingly contained key principles for building resilience, even when they didn’t explicitly mention climate change. Roughly speaking, 94% outlined long-term objectives, 99% included threat-reduction strategies, 98% had monitoring programs, and 93% incorporated adaptive management.

Adaptive management evolves to keep up with changing circumstances. It’s a continual process of evaluating what is and is not working, and correcting course to keep on target. It begins with setting objectives for the area: conservation goals, species and communities of interest, etc. Managers then assess what’s happening in the area to develop strategies to meet these goals.

The objectives and assessment then inform the MPA’s design, including its size, shape and location. Once it’s established, monitoring can begin to track indicators for the objectives. With a clear goal and active observation, managers can implement strategies and interventions, such as addressing pollution, removing invasive species, and restoring habitat.

Adaptive management offers dynamic protection. “We don’t have a ton of evidence about which types of climate strategies are going to be most effective well into the future because climate change impacts are a moving target,” Bradley said. So she was thrilled to see how many management plans incorporated principles of adaptive management.

Managing with the future in mind is particularly important in our changing world. In a recent study, Lopazanski and her colleagues found that marine heatwaves impact ecological communities regardless of whether they are protected inside an MPA. The results raise the question of whether marine protected areas will remain effective conservation tools.

Lopazanski believes this critique misses the point. Marine protected areas will experience losses under climate change just like protected areas on land. That doesn’t mean these parks, reserves and sanctuaries aren’t worthwhile. “There are some things that marine protected areas do really well,” she said. They’re particularly effective at mitigating the impact of fishing and other extractive activities. That’s why MPAs have to be one part of a more comprehensive conservation and management plan for our ocean biodiversity and marine resources.

What’s more, large marine heatwaves are a relatively new phenomenon, and dealing with that uncertainty is part of designing an effective MPA. “It’s easy to criticize MPAs as a static strategy, ill-suited to deal with the dynamic nature of climate change,” Bradley said, “But a deeper look at the plans reveals that they are more dynamic than they appear.”

The authors compiled many different management strategies in the paper, highlighting some they think are underutilized. They also peppered the study with examples and lessons from different MPAs. They were particularly impressed by the management plan of the Greater Farallones National Marine Sanctuary, off the coast of San Francisco. Its comprehensive plan included diverse strategies that targeted different climate change impacts and challenges facing that specific region. “This study can be a resource for managers who are looking to make their MPAs more resilient,” Lopazanski said. 

In fact, utility was one of the study’s key aims. This research was a collaboration between academic scientists and conservation practitioners supported by the Arnhold UC Santa Barbara-Conservation International Collaborative. It was intentionally designed to gather information that would be immediately actionable and useful for real-world MPA management. A document to bring academics and managers just a bit closer together.

 

$4M NIH grant will test worksite sleep health coaching for Arizona firefighters


Researchers at the UArizona Mel and Enid Zuckerman College of Public Health seek to improve firefighter health by focusing on behavioral interventions to improve sleep and recovery.

Grant and Award Announcement

UNIVERSITY OF ARIZONA HEALTH SCIENCES



A $4 million award from the National Heart, Lung, and Blood Institute, a division of the National Institutes of Health, will allow researchers in the University of Arizona Mel and Enid Zuckerman College of Public Health to identify key factors for the successful implementation of workplace sleep coaching to improve sleep health in Arizona firefighters.

Almost half of career firefighters report short sleep and poor sleep quality, and about 37% of firefighters screen positive for sleep disorders like sleep apnea, insomnia or shift work disorder, according to research led by the Harvard Work Hours Health and Safety Group. Unfortunately, firefighters face unique barriers, including long working shifts and mandatory overtime, that can prevent them from using evidence-based interventions to improve sleep.

“Other studies have shown us that firefighters’ personal circumstances and shift schedules often dictate their sleep,” said principal investigator Patricia Haynes, PhD, CBSM, DBSM, whose previous research found that more recovery sleep in firefighters during off-days is associated with less stress and irritability.

Researchers will work with 20 fire agencies across Arizona to evaluate a flexible, personalized sleep health intervention that can be administered in real-world situations. The study team also aims to train fire service managers and promote the benefits of sleep and recovery within the fire service.                                                                 

“A sleep intervention is most likely to be successful and utilized if it is tailored to the firefighter and a firefighter lifestyle,” said Dr. Haynes, one of several faculty at the Zuckerman College of Public Health’s Center for Firefighter Health Collaborative Research whose research focuses on aspects of firefighter health.

Partnering on the research are various nonprofit and advisory stakeholder groups committed to the health of first responders, including the 100 Club of Arizona, Greater Tucson Fire Foundation, Arizona Fire Chiefs Association and the Professional Fire Fighters of Arizona.

“Dr. Haynes’ innovative research and programs to support mental health and sleep health for firefighters have had proven results, and her work has benefited so many first responders,” said Iman Hakim, MD, PhD, MPH, dean of the Zuckerman College of Public Health. “We are so proud of the work she and her team members do to improve health for firefighters in Arizona, and this research can be used to help fire departments around the country.”

In addition to Dr. Haynes, the research team includes: Ed Bedrick, PhD, professor in the Zuckerman College of Public Health; David Glickenstein, PhD, professor in the UArizona College of Science’s Department of Mathematics; Michael Grandner, PhD, MTR, CBSM, FAASM, associate professor in the UArizona College of Medicine – Tucson’s Department of Psychiatry; and Daniel Taylor, PhD, professor in the College of Science’s Department of Psychology. Additional collaborators from the Arizona State University College of Health Solutions include professor Matthew Buman, PhD, and research professor Dana Epstein, PhD, RN.

Chemical contamination on International Space Station is out of this world


Peer-Reviewed Publication

UNIVERSITY OF BIRMINGHAM




Concentrations of potentially harmful chemical compounds in dust collected from air filtration systems on the International Space Station (ISS) exceed those found in floor dust from many American homes, a new study reveals.

In the first study of its kind, scientists analysed a sample of dust from air filters within the ISS and found levels of organic contaminants which were higher than the median values found in US and Western European homes.

Publishing their results today in Environmental Science and Technology Letters, researchers from the University of Birmingham, UK, as well as the NASA Glenn Research Center, USA, say their findings could guide the design and construction of future spacecraft.

Contaminants found in the ‘space dust’ included polybrominated diphenyl ethers (PBDEs), hexabromocyclododecane (HBCDD), ‘novel’ brominated flame retardants (BFRs), organophosphate esters (OPEs), polycyclic aromatic hydrocarbons (PAH), perfluoroalkyl substances (PFAS), and polychlorinated biphenyls (PCBs).

BFRs and OPEs are used in many countries to meet fire safety regulations in consumer and commercial applications like electrical and electronic equipment, building insulation, furniture fabrics and foams.

PAH are present in hydrocarbon fuels and emitted from combustion processes, PCBs were used in building and window sealants and in electrical equipment as dielectric fluids, while PFAS have been used in applications like stain proofing agents for fabrics and clothing. However, their potential human health effects have led to some of them being banned or limited in use.

PCBs, some PFAS, HBCDD and the Penta- Octa-, and Deca-BDE commercial formulations of PBDEs, are classed as persistent organic pollutants (POPs) under the UNEP Stockholm Convention. In addition, some PAH are classified as human carcinogens, while some OPEs are under consideration for restriction by the European Chemicals Agency.

Co-author Professor Stuart Harrad, from the University of Birmingham, commented: “Our findings have implications for future space stations and habitats, where it may be possible to exclude many contaminant sources by careful material choices in the early stages of design and construction. 

“While concentrations of organic contaminants discovered in dust from the ISS often exceeded median values found in homes and other indoor environments across the US and western Europe, levels of these compounds were generally within the range found on earth.”

Researchers note that PBDE concentrations in the dust sample falling within the range of concentrations detected in US house dust may reflect use on the ISS of inorganic FRs like ammonium dihydrogen phosphate to make fabrics and webbing flame retardant. They believe that the use of commercially available ‘off-the-shelf’ items brought on board for the personal use of astronauts, such as cameras, MP3 players, tablet computers, medical devices, and clothing, are potential sources of many of the chemicals detected.

Air inside the ISS is constantly recirculated with 8-10 changes per hour. While CO2 and gaseous trace contaminant removal occurs, the degree to which this removes chemicals like BFRs is unknown. High levels of ionizing radiation can accelerate ageing of materials, including breakdown of plastic goods into micro and nanoplastics that become airborne in the microgravity environment. This may cause concentrations and relative abundance of PBDEs, HBCDD, NBFRs, OPEs, PAH, PFAS, and PCBs in ISS dust to differ notably from those in dust from terrestrial indoor microenvironments.

Scientists measured concentrations of a range of target chemicals in dust collected from the ISS. In a microgravity environment, particles float around according to ventilation system flow patterns, eventually depositing on surfaces and air intakes.

Screens covering the ISS HEPA filters accumulate this debris, requiring weekly vacuuming to maintain efficient filtration. Material in ISS vacuum bags comprises previously airborne particles, clothing lint, hair and other debris generally identified as spacecraft cabin dust. Some vacuum bags were returned to Earth for studies of this unique dust, with a small sample shipped to the University of Birmingham for analysis in the study.

ENDS

For more information, interview requests or an embargoed copy of the research paper, please contact Tony Moran, International Communications Manager, University of Birmingham on +44 (0)782 783 2312 or t.moran@bham.ac.uk. For out-of-hours enquiries, please call +44 (0) 7789 921 165.

Notes to Editors

  • The University of Birmingham is ranked amongst the world’s top 100 institutions. Its work brings people from across the world to Birmingham, including researchers, teachers and more than 8,000 international students from over 150 countries.
  • ‘Persistent Organic Contaminants in Dust from the International Space Station’ by Stuart Harrad, Mohamed Abou-Elwafa Abdallah, Daniel Drage, and Marit Meyer is published in Environmental Science and Technology Letters.

After 15 years, pulsar timing yields evidence of cosmic background gravitational waves


Groups report evidence that the cosmos is filled with a background of gravitational waves likely due to mergers of supermassive black hole binaries


 NEWS RELEASE 

UNIVERSITY OF CALIFORNIA - BERKELEY

IMAGE: Artist’s interpretation of an array of pulsars being affected by gravitational ripples produced by a supermassive black hole binary in a distant galaxy.

CREDIT: Aurore Simonnet for the NANOGrav Collaboration




The universe is humming with gravitational radiation — a very low-frequency rumble that rhythmically stretches and compresses spacetime and the matter embedded in it.

That is the conclusion of several groups of researchers from around the world who simultaneously published a slew of journal articles in June describing more than 15 years of observations of millisecond pulsars within our corner of the Milky Way galaxy. At least one group — the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) collaboration — has found compelling evidence that the precise rhythms of these pulsars are affected by the stretching and squeezing of spacetime by these long-wavelength gravitational waves.

"This is key evidence for gravitational waves at very low frequencies,” says Vanderbilt University’s Stephen Taylor, who co-led the search and is the current chair of the collaboration. “After years of work, NANOGrav is opening an entirely new window on the gravitational-wave universe."

Gravitational waves were first detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 2015. The short-wavelength fluctuations in spacetime were caused by the merger of smaller black holes, or occasionally neutron stars, all of them weighing in at less than a few hundred solar masses.

The question now is: Are the long-wavelength gravitational waves — with periods from years to decades — also produced by black holes?

In one paper from the NANOGrav consortium, published Aug. 1 in The Astrophysical Journal Letters (ApJ Letters), University of California, Berkeley, physicist Luke Zoltan Kelley and the NANOGrav team argued that the hum is likely produced by hundreds of thousands of pairs of supermassive black holes — each weighing billions of times the mass of our sun — that over the history of the universe have gotten close enough to one another to merge. The team produced simulations of supermassive black hole binary populations containing billions of sources and compared the predicted gravitational wave signatures with NANOGrav’s most recent observations.

The black holes' orbital dance prior to merging vibrates spacetime analogous to the way waltzing dancers rhythmically vibrate a dance floor. Such mergers over the 13.8-billion-year age of the universe produced gravitational waves that today overlap, like the ripples from a handful of pebbles tossed into a pond, to produce the background hum. Because the wavelengths of these gravitational waves are measured in light years, detecting them required a galaxy-sized array of antennas — a collection of millisecond pulsars.
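The "galaxy-sized array of antennas" works because an isotropic gravitational-wave background imprints a characteristic correlation between the timing deviations of any pair of pulsars, depending only on their angular separation on the sky: the Hellings–Downs curve that pulsar timing arrays like NANOGrav search for. The sketch below evaluates that predicted correlation; the formula is a standard general-relativity result, but this is an illustration, not NANOGrav's analysis code.

```python
import numpy as np

def hellings_downs(zeta_rad):
    """Expected correlation between timing residuals of two pulsars
    separated by angle zeta on the sky, for an isotropic GW background."""
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    # As x -> 0 (nearly coincident sight lines), x*ln(x) -> 0 and the
    # correlation approaches 0.5; suppress the log(0) warning at x = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(x > 0, 1.5 * x * np.log(x), 0.0)
    return 0.5 - 0.25 * x + term

angles_deg = [0, 60, 90, 120, 180]
for a, c in zip(angles_deg, hellings_downs(np.deg2rad(angles_deg))):
    print(f"{a:3d} deg: correlation = {c:+.3f}")
```

The curve starts at +0.5 for small separations, dips below zero near 90 degrees, and recovers to +0.25 at 180 degrees; seeing this angular pattern across many pulsar pairs is what distinguishes a gravitational-wave background from ordinary timing noise.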

"I guess the elephant in the room is we're still not 100% sure that it's produced by supermassive black hole binaries. That is definitely our best guess, and it's fully consistent with the data, but we're not positive," said Kelley, UC Berkeley assistant adjunct professor of astronomy. "If it is binaries, then that's the first time that we've actually confirmed that supermassive black hole binaries exist, which has been a huge puzzle for more than 50 years now."

"The signal we're seeing is from a cosmological population over space and over time, in 3D. A collection of many, many of these binaries collectively give us this background," said astrophysicist Chung-Pei Ma, the Judy Chandler Webb Professor in the Physical Sciences in the departments of astronomy and physics at UC Berkeley and a member of the NANOGrav collaboration.

Ma noted that while astronomers have identified a number of possible supermassive black hole binaries using radio, optical and X-ray observations, they can use gravitational waves as a new siren to guide them to where in the sky to search for electromagnetic waves and conduct detailed studies of black hole binaries.

Ma directs a project to study 100 of the closest supermassive black holes to Earth and is eager to find evidence of activity around one of them that suggests a binary pair so that NANOGrav can tune the pulsar timing array to probe that patch of the sky for gravitational waves. Supermassive black hole binaries likely emit gravitational waves for a couple of million years before they merge.

Other possible causes of the background gravitational waves include dark matter axions, black holes left over from the beginning of the universe — so-called primordial black holes — and cosmic strings. Another NANOGrav paper appearing in ApJ Letters today lays out constraints on these theories.

"Other groups have suggested that this comes from cosmic inflation or cosmic strings or other kinds of new physical processes which themselves are very exciting, but we think binaries are much more likely. To really be able to definitively say that this is coming from binaries, however, what we have to do is measure how much the gravitational wave signal varies across the sky. Binaries should produce far larger variations than alternative sources," Kelley said. "Now is really when the serious work and the excitement get started as we continue to build sensitivity. As we continue to make better measurements, our constraints on the supermassive black hole binary populations are just rapidly going to get better and better."

Galaxy mergers lead to black hole mergers

Most large galaxies are thought to have massive black holes at their centers, though they're hard to detect because the light they emit — ranging from X-rays to radio waves produced when stars and gas fall into the black hole — is typically blocked by surrounding gas and dust. Ma recently analyzed the motion of stars around the center of one large galaxy, M87, and refined estimates of its mass — 5.37 billion times the mass of the sun — even though the black hole itself is totally obscured.

Tantalizingly, the supermassive black hole at the center of M87 could be a binary black hole. But no one knows for sure.

"My question for M87, or even our galactic center, Sagittarius A*, is: Can you hide a second black hole near the main black hole we've been studying? And I think currently no one can rule that out," Ma said. "The smoking gun for this detection of gravitational waves being from binary supermassive black holes would have to come from future studies, where we hope to be able to see continuous wave detections from single binary sources."

Simulations of galaxy mergers suggest that binary supermassive black holes are common, since the central black holes of two merging galaxies should sink together toward the center of the larger merged galaxy. These black holes would begin to orbit one another, though the waves that NANOGrav can detect are only emitted when they get very close, Kelley said — something like 10 to 100 times the diameter of our solar system, or 1,000 to 10,000 times the Earth-sun distance, which is 93 million miles.
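Kepler's third law makes those scales concrete. A quick sketch, assuming a circular binary with an illustrative total mass of 2 billion suns, shows that separations of 1,000 to 10,000 times the Earth-sun distance correspond to orbital periods of roughly a year to a couple of decades:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # Earth-sun distance, m
YEAR = 3.156e7      # seconds

def orbital_period_years(total_mass_msun, separation_au):
    """Kepler's third law for a circular binary: P = 2*pi*sqrt(a^3 / (G*M))."""
    a = separation_au * AU
    m = total_mass_msun * M_SUN
    return 2 * math.pi * math.sqrt(a**3 / (G * m)) / YEAR

# Illustrative 2-billion-solar-mass binary:
p_close = orbital_period_years(2e9, 1_000)    # roughly 0.7 years
p_wide = orbital_period_years(2e9, 10_000)    # roughly 22 years
```

The gravitational waves come out at twice the orbital frequency, so periods like these bracket the years-to-decades waves that pulsar timing arrays can sense.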

But can interactions with gas and dust in the merged galaxy make the black holes spiral inward to get that close, making a merger inevitable?

"This has kind of been the biggest uncertainty in supermassive black hole binaries: How do you get them from just after galaxy merger down to where they're actually coalescing," Kelley said. "Galaxy mergers bring the two supermassive black holes together to about a kiloparsec or so — a distance of 3,200 light years, roughly the size of the nucleus of a galaxy. But they need to get down to five or six orders of magnitude smaller separations before they can actually produce gravitational waves."

"It could be that the two could just be stalled," Ma noted. "We call that the last parsec problem. If you had no other channel to shrink them, then we would not expect to see gravitational waves."

But the NANOGrav data suggest that most supermassive black hole binaries don't stall.

"The amplitude of the gravitational waves that we're seeing suggests that mergers are pretty effective, which means that a large fraction of supermassive black hole binaries are able to go from these large galaxy merger scales down to the very, very small subparsec scales," Kelley said.

NANOGrav was able to measure the background gravitational waves, thanks to the presence of millisecond pulsars — rapidly rotating neutron stars that sweep a bright beam of radio waves past Earth several hundred times per second. For unknown reasons, their pulsation rate is precise to within tenths of milliseconds. When the first such millisecond pulsar was found in 1982 by the late UC Berkeley astronomer Donald Backer, he quickly realized that these precision flashers could be used to detect the spacetime fluctuations produced by gravitational waves. He coined the term "pulsar timing array" to describe a set of pulsars scattered around us in the galaxy that could be used as a detector.

In 2007, Backer was one of the founders of NANOGrav, a collaboration that now involves more than 190 scientists from the U.S. and Canada. The plan was to monitor at least once each month a group of millisecond pulsars in our portion of the Milky Way galaxy and, after accounting for the effects of motion, look for correlated changes in the pulse rates that could be ascribed to long-wavelength gravitational waves traveling through the galaxy. The change in arrival time of a particular pulsar signal would be on the order of a millionth of a second, Kelley said.

"It's only the statistically coherent variations that really are the hallmark of gravitational waves," he said. "You see variations on millisecond, tens of millisecond scales all the time. That's just due to noise processes. But you need to dig deep down through that and look at these correlations to pick up signals that have amplitudes of about 100 nanoseconds or so."

The NANOGrav collaboration monitored 68 pulsars in all, some for 15 years, and employed 67 in the current analysis. The group publicly released their analysis programs, which are being used by groups in Europe (European Pulsar Timing Array), Australia (Parkes Pulsar Timing Array) and China (Chinese Pulsar Timing Array) to correlate signals from different, though sometimes overlapping, sets of pulsars than those used by NANOGrav.

The NANOGrav data allow several other inferences about the population of supermassive black hole binary mergers over the history of the universe, Kelley said. For one, the amplitude of the signal implies that the population skews toward higher masses. While known supermassive black holes max out at about 20 billion solar masses, many of those that created the background may have been bigger, perhaps even 40 or 60 billion solar masses. Alternatively, there may just be many more supermassive black hole binaries than we think.

"While the observed amplitude of the gravitational wave signal is broadly consistent with our expectations, it's definitely a bit on the high side," he said. "So we need to have some combination of relatively massive supermassive black holes, a very high occurrence rate of those black holes, and they probably need to be able to coalesce quite effectively to be able to produce these amplitudes that we see. Or maybe it's more like the masses are 20% larger than we thought, but also they merge twice as effectively, or some combination of parameters."

As more data comes in from more years of observations, the NANOGrav team expects to get more convincing evidence for a cosmic gravitational wave background and what's producing it, which could be a combination of sources. For now, astronomers are excited about the prospects for gravitational wave astronomy.

"This is very exciting as a new tool," Ma said. "This opens up a completely new window for supermassive black hole studies."

NANOGrav's data came from 15 years of observations by the Arecibo Observatory in Puerto Rico, a facility that collapsed and became unusable in 2020; the Green Bank Telescope in West Virginia; and the Very Large Array in New Mexico. Future NANOGrav results will incorporate data from the Canadian Hydrogen Intensity Mapping Experiment (CHIME) radio telescope, which was added to the project in 2019.

The NANOGrav collaboration receives support from National Science Foundation Physics Frontiers Center award numbers 1430284 and 2020265, the Gordon and Betty Moore Foundation, NSF AccelNet award number 2114721, a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, and the Canadian Institute for Advanced Research (CIFAR).


World’s largest study shows the more you walk, the lower your risk of death, even if you walk fewer than 5,000 steps



Peer-Reviewed Publication

EUROPEAN SOCIETY OF CARDIOLOGY




The number of steps you should walk every day to start seeing benefits to your health is lower than previously thought, according to the largest analysis to investigate this.


The study, published in the European Journal of Preventive Cardiology [1] today (Wednesday), found that walking at least 3,967 steps a day started to reduce the risk of dying from any cause, and 2,337 steps a day reduced the risk of dying from diseases of the heart and blood vessels (cardiovascular disease).


However, the new analysis of 226,889 people from 17 different studies around the world has shown that the more you walk, the greater the health benefits. The risk of dying from any cause or from cardiovascular disease decreases significantly with every 500 to 1,000 extra steps you walk. An increase of 1,000 steps a day was associated with a 15% reduction in the risk of dying from any cause, and an increase of 500 steps a day was associated with a 7% reduction in dying from cardiovascular disease.
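As a back-of-envelope illustration only, and not the study's fitted dose-response model: if the roughly 15% reduction per extra 1,000 daily steps is treated as a multiplicative hazard ratio, the benefit compounds:

```python
def approx_relative_risk(extra_steps, hr_per_1000=0.85):
    """Illustrative only: compound an assumed hazard ratio of 0.85 per
    extra 1,000 daily steps. This is a toy model, not the meta-analysis."""
    return hr_per_1000 ** (extra_steps / 1000.0)

# Under this toy model, 3,000 extra steps a day:
rr = approx_relative_risk(3000)   # ~0.61, i.e. ~39% lower risk
```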


The researchers, led by Maciej Banach, Professor of Cardiology at the Medical University of Lodz, Poland, and Adjunct Professor at the Ciccarone Center for the Prevention of Cardiovascular Disease, Johns Hopkins University School of Medicine, found that even if people walked as many as 20,000 steps a day, the health benefits continued to increase. They have not found an upper limit yet.


“Our study confirms that the more you walk, the better,” says Prof. Banach. “We found that this applied to both men and women, irrespective of age, and irrespective of whether you live in a temperate, sub-tropical or sub-polar region of the world, or a region with a mixture of climates. In addition, our analysis indicates that as little as 4,000 steps a day are needed to significantly reduce deaths from any cause, and even fewer to reduce deaths from cardiovascular disease.”


There is strong evidence that a sedentary lifestyle may contribute to an increase in cardiovascular disease and a shorter life. Studies have shown that insufficient physical activity affects more than a quarter of the world’s population. More women than men (32% versus 23%), and people in higher income countries compared to low-income countries (37% versus 16%) do not undertake a sufficient amount of physical activity. According to World Health Organization data, insufficient physical activity is the fourth most frequent cause of death in the world, with 3.2 million deaths a year related to physical inactivity. The COVID-19 pandemic also resulted in a reduction in physical activity, and activity levels have not recovered two years on from it.


Dr Ibadete Bytyçi from the University Clinical Centre of Kosovo, Pristina, Kosovo, senior author of the paper, says: “Until now, it’s not been clear what is the optimal number of steps, both in terms of the cut-off points over which we can start to see health benefits, and the upper limit, if any, and the role this plays in people’s health. However, I should emphasise that there were limited data available on step counts up to 20,000 a day, and so these results need to be confirmed in larger groups of people.”


This meta-analysis is the first not only to assess the effect of walking up to 20,000 steps a day, but also to look at whether there are any differences depending on age, sex or where in the world people live.


The studies analysed by the researchers followed up participants for a median of seven years. The mean (average) age was 64, and 49% of participants were female.


In people aged 60 years or older, the size of the reduction in risk of death was smaller than that seen in people aged younger than 60 years. In the older adults, there was a 42% reduction in risk seen in those who walked between 6,000 and 10,000 steps a day, while there was a 49% reduction in risk in younger adults who walked between 7,000 and 13,000 steps a day.


Prof. Banach says: “In a world where we have more and more advanced drugs to target specific conditions such as cardiovascular disease, I believe we should always emphasise that lifestyle changes, including diet and exercise, which was a main hero of our analysis, might be at least as, or even more effective in reducing cardiovascular risk and prolonging lives. We still need good studies to investigate whether these benefits may exist for intensive types of exertion, such as marathon running and iron man challenges, and in different populations of different ages, and with different associated health problems. However, it seems that, as with pharmacological treatments, we should always think about personalising lifestyle changes.”


Strengths of the meta-analysis include its size and that it was not restricted to looking at studies limited to a maximum of 16,000 steps a day. Limitations include that it was an observational study and so cannot prove that increased step counts cause the reduction in the risk of death, only that it is associated with it. The impact of step counts was not tested on people with different diseases; all the participants were generally healthy when they entered the studies analysed. The researchers were not able to account for differences in race and socioeconomic status, and the methods for counting steps were not identical in all the studies included in this meta-analysis.


(ends)


[1] “The association between daily step count and all-cause and cardiovascular mortality: a meta-analysis”, by Maciej Banach et al. European Journal of Preventive Cardiology. doi:10.1093/eurjpc/zwad229