Sunday, July 23, 2023

SPACE NEWS

Supernova Alert: We Will Soon See an Exploding Star in the Night Sky

 


SpaceVerse
Jul 21, 2023  #interstellar #astrophysics #universe

Get ready to witness a breathtaking celestial event as we unveil an extraordinary revelation in the night sky! Brace yourself for the imminent explosion of a massive star, a phenomenon known as a supernova. In this captivating video, we bring you the ultimate cosmic experience as we unravel the secrets and beauty behind this awe-inspiring celestial occurrence.

Prepare to marvel at the sheer magnitude of the imminent supernova in the night sky. Our knowledgeable astronomy experts have researched and scrutinized the celestial patterns, allowing them to predict this mesmerizing display of cosmic fireworks. As the universe constantly evolves, we are fortunate to be part of this rare moment that will forever etch itself into the tapestry of our memories.

Join us as we venture into the depths of outer space, where grandeur meets mystery. This awe-inspiring video not only promises a visual feast for stargazers and astronomers alike but also delves into the fascinating science and symbolism behind supernova eruptions. Gain a deeper understanding of the cataclysmic forces at play, as we explore the birth, life, and death of these colossal stars.

Immerse yourself in the ethereal beauty of the celestial canopy as it unveils the grand spectacle of a supernova explosion. Witness how the dying star, collapsing under immense gravitational pressure, culminates in a glorious burst of light and energy that illuminates vast stretches of the universe. Prepare to be captivated as we unveil the intricate dance of particles and the cosmic aftermath that follows such an extraordinary event.

Stay tuned for this groundbreaking revelation that promises to leave stargazers awestruck. Don't miss out on this rare opportunity to witness a supernova in all its magnificence. Subscribe to our channel and hit the notification bell to receive updates as we navigate the celestial wonders and deliver unparalleled cosmic adventures straight to your screen.





In new space race, scientists propose geoarchaeology can aid in preserving space heritage


Peer-Reviewed Publication

UNIVERSITY OF KANSAS




LAWRENCE, KANSAS — As a new space race heats up, two researchers from the Kansas Geological Survey at the University of Kansas and their colleagues have proposed a new scientific subfield: planetary geoarchaeology, the study of how cultural and natural processes on Earth’s moon, on Mars and across the solar system may be altering, preserving or destroying the material record of space exploration.

“Until recently, we might consider the material left behind during the space race of the mid-20th century as relatively safe,” said Justin Holcomb, postdoctoral researcher at the Kansas Geological Survey, based at the University of Kansas, and lead author on a new paper introducing the concept of planetary geoarchaeology in the journal Geoarchaeology. “However, the material record that currently exists on the moon is rapidly becoming at risk of being destroyed if proper attention isn’t paid during the new space era.”

Since the advent of space exploration, humans have launched more than 6,700 satellites and spacecraft from countries around the globe, according to the Union of Concerned Scientists. The United States alone accounts for more than 4,500 civil, commercial, governmental and military satellites.

“We’re trying to draw attention to the preservation, study and documentation of space heritage because I do think there’s a risk to this heritage on the moon,” Holcomb said. “The United States is trying to get boots on the moon again, and China is as well. We’ve already had at least four countries accidentally crash into the moon recently. There are a lot of accidental crashes and not a lot of protections right now.”

Holcomb began considering the idea of planetary geoarchaeology during the COVID-19 lockdown. Applying geoarchaeological tools and methods to the movement of people into space and the solar system is a natural extension of the study of human migration on Earth, the focus of the ODYSSEY Archaeological Research Program housed at KGS and directed by Holcomb’s co-author, Rolfe Mandel, KGS senior scientist and University Distinguished Professor in the Department of Anthropology.

“Human migration out of Africa may have occurred as early as 150,000 years ago, and space travel represents the latest stage of that journey,” Mandel said. “Although the ODYSSEY program is focused on documenting the earliest evidence of people in the Americas, the next frontier for similar research will be in space.”

How planetary geoarchaeologists will determine whether an item is worth preserving is an open question.

“We feel that all material currently existing on extraterrestrial surfaces is space heritage and worthy of protection,” Holcomb said. “However, some sites, such as the very first footprints on the moon at Tranquility Base or the first lander on Mars, Viking 1, represent the material footprint of a long history of migration.”

Beyond those “firsts,” sifting through the hundreds of thousands of bits of material currently in orbit or strewn across the surfaces of the moon and Mars — what many call “trash” but Holcomb and his colleagues regard as heritage — will require case-by-case decision making.

“We have to make those decisions all the time with archaeological sites today,” Holcomb said. “The moon has such a limited record now that it’s totally possible to protect all of it. Certainly, we need to protect space heritage related to the Apollo missions, but other countries, too, deserve to have their records protected.”

With resources for protecting space heritage limited, Holcomb and his colleagues advocate for developing systems to track materials left in space.

“We should begin tracking our material record as it continues to expand, both to preserve the earliest record but also to keep a check on our impact on extraterrestrial environments,” he said. “It’s our job as anthropologists and archaeologists to bring issues of heritage to the forefront.”

Beyond the moon, Holcomb wants to see planetary geoarchaeology extend to issues related to exploration and migration to Mars. He points to NASA’s Spirit rover as an example. The rover became stuck in Martian sand in 2009 and now risks being completely covered by encroaching sand dunes.

“As planetary geoarchaeologists, we can predict when the rover will be buried, talk about what will happen when it’s buried and make sure it’s well documented before it’s lost,” he said. “Planetary scientists are rightfully interested in successful missions, but they seldom think about the material left behind. That’s the way we can work with them.”

Holcomb believes geoarchaeologists should be included in future NASA missions to ensure the protection and safety of space heritage. Meanwhile, geoarchaeologists on Earth can lay the foundation for that work, including advocating for laws to protect and preserve space heritage, studying the effects extraterrestrial ecosystems have on items space missions leave behind and conducting international discussions regarding space heritage preservation and protection issues.

As for being part of a space mission himself?

“I’ll leave that to other geoarchaeologists,” Holcomb said. “There’s plenty to do down here, but I do hope to see an archaeologist in space before it’s all over.”

‘It almost doubled our workload’: AI is supposed to make jobs easier. These workers disagree

By Catherine Thorbecke, CNN
Updated Sat July 22, 2023

Maria Korneeva/Moment RF/Getty Images
CNN —

A new crop of artificial intelligence tools carries the promise of streamlining tasks, improving efficiency and boosting productivity in the workplace. But that hasn’t been Neil Clarke’s experience so far.

Clarke, an editor and publisher, said he recently had to temporarily shutter the online submission form for his science fiction and fantasy magazine, Clarkesworld, after his team was inundated with a deluge of “consistently bad” AI-generated submissions.

“They’re some of the worst stories we’ve seen, actually,” Clarke said of the hundreds of pieces of AI-produced content he and his team of humans now must manually parse through. “But it’s more of the problem of volume, not quality. The quantity is burying us.”

“It almost doubled our workload,” he added, describing the latest AI tools as “a thorn in our side for the last few months.” Clarke said that he anticipates his team is going to have to close submissions again. “It’s going to reach a point where we can’t handle it.”

Since ChatGPT launched late last year, many of the tech world’s most prominent figures have waxed poetic about how AI has the potential to boost productivity, help us all work less and create new and better jobs in the future. “In the next few years, the main impact of AI on work will be to help people do their jobs more efficiently,” Microsoft co-founder Bill Gates said in a blog post recently.

But as is often the case with tech, the long-term impact isn’t always clear or the same across industries and markets. Moreover, the road to a techno-utopia is often bumpy and plagued with unintended consequences, whether it’s lawyers fined for submitting fake court citations from ChatGPT or a small publication buried under an avalanche of computer-generated submissions.

Big Tech companies are now rushing to jump on the AI bandwagon, pledging significant investments into new AI-powered tools that promise to streamline work. These tools can help people quickly draft emails, make presentations and summarize large datasets or texts.

In a recent study, researchers at the Massachusetts Institute of Technology found that access to ChatGPT increased productivity for workers who were assigned tasks like writing cover letters, “delicate” emails and cost-benefit analyses. “I think what our study shows is that this kind of technology has important applications in white collar work. It’s a useful technology. But it’s still too early to tell if it will be good or bad, or how exactly it’s going to cause society to adjust,” Shakked Noy, a PhD student in MIT’s Department of Economics, who co-authored the paper, said in a statement.


Neil Clarke, editor of Clarkesworld Magazine. Credit: Lisa R. Clarke

Mathias Cormann, the secretary-general of the Organization for Economic Co-operation and Development, recently said the intergovernmental organization has found that AI can improve some aspects of job quality, but there are tradeoffs.

“Workers do report, though, that the intensity of their work has increased after the adoption of AI in their workplaces,” Cormann said in public remarks, pointing to the findings of a report released by the organization. The report also found that for non-AI specialists and non-managers, the use of AI had only a “minimal impact on wages so far” – meaning that for the average employee, the work is scaling up, but the pay isn’t.

Some workers feel like ‘guinea pigs’

Ivana Saula, the research director for the International Association of Machinists and Aerospace Workers, said that workers in her union have said they feel like “guinea pigs” as employers rush to roll out AI-powered tools on the job.

And it hasn’t always gone smoothly, Saula said. The implementation of these new tech tools has often led to more “residual tasks that a human still needs to do.” This can include picking up additional logistics tasks that a machine simply can’t do, Saula said, adding more time and pressure to the daily workflow.

The union represents a broad range of workers, including in air transportation, health care, public service, manufacturing and the nuclear industry, Saula said.

“It’s never just clean cut, where the machine can entirely replace the human,” Saula told CNN. “It can replace certain aspects of what a worker does, but there’s some tasks that are outstanding that get placed on whoever remains.”

Workers are also “saying that my workload is heavier” after the implementation of new AI tools, Saula said, and “the intensity at which I work is much faster because now it’s being set by the machine.” She added that the feedback they are getting from workers shows how important it is to “actually involve workers in the process of implementation.”

“Because there’s knowledge on the ground, on the frontlines, that employers need to be aware of,” she said. “And oftentimes, I think there’s disconnects between frontline workers and what happens on shop floors, and upper management, and not to mention CEOs.”

Perhaps nowhere are the pros and cons of AI for businesses as apparent as in the media industry. These tools offer the promise of accelerating if not automating copywriting, advertising and certain editorial work, but there have already been some notable blunders.

News outlet CNET had to issue “substantial” corrections earlier this year after experimenting with using an AI tool to write stories. And what was supposed to be a simple AI-written story on Star Wars published by Gizmodo earlier this month similarly required a correction and resulted in employee turmoil. But both outlets have signaled they will still move forward with using the technology to assist in newsrooms.

Others like Clarke, the publisher, have tried to combat the fallout from the rise of AI by relying on more AI. Clarke said he and his team turned to AI-powered detectors of AI-generated work to deal with the deluge of submissions but found these tools weren’t helpful because of how unreliably they flag “false positives and false negatives,” especially for writers whose second language is English.

“You listen to these AI experts, they go on about how these things are going to do amazing breakthroughs in different fields,” Clarke said. “But those aren’t the fields they’re currently working in.”

 

iEarth: An interdisciplinary framework in the era of big data and AI for sustainable development


Peer-Reviewed Publication

SCIENCE CHINA PRESS

Image: Conceptualized framework of Intelligent Earth (iEarth). Credit: Science China Press



The United Nations Sustainable Development Goals (SDGs) hold the key to humanity's future existence and growth. In a bid to optimize the implementation of these SDGs, Professor Peng Gong's team from the University of Hong Kong and Professor Huadong Guo's team from the Chinese Academy of Sciences have collaboratively introduced an innovative "iEarth" framework. This interdisciplinary framework is powered by Big Earth Data science and seeks to amalgamate various interdisciplinary methodologies and expertise. It aims to quantify the processes of Earth systems and human civilization, uncover the intricate interplay between natural ecosystems and human society, foster cross-disciplinary ideologies and solutions, and furnish explicit evidence and valuable scientific knowledge for sustainable development.

The inception of the iEarth concept springs from intelligent Mapping (iMap), and its further development is influenced by a spectrum of disciplinary and interdisciplinary studies. The team distinguishes four primary themes within the iEarth framework: iEarth data, iEarth science, iEarth analytics, and iEarth decision.

iEarth data comprises all data related to Earth systems, encapsulating natural systems and human societies. iEarth science delves into a multidisciplinary exploration of the natural system, human society, and their mutual interaction and feedback, focusing on the diverse traits of objects when interconnected. iEarth analytics presents a methodology inclusive of detection, prediction, assessment, and optimization for achieving SDGs by leveraging the "iEarth+" model, which is dedicated to transcending disciplinary boundaries and actively connecting Earth observations with other disciplines. iEarth decision supports the implementation of SDGs by monitoring progress, pinpointing drivers, simulating pathways, and performing cost-benefit evaluations. The holistic iEarth framework thus consolidates multi-source data, interdisciplinary knowledge, and advanced technology to establish a comprehensive data-science-analytics-decision support system for fostering sustainable environmental, social, and economic prosperity.
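The framework above is described at a conceptual level. Purely as an illustration of how its four themes might fit together in software, the toy sketch below models a data-to-decision pipeline in Python; every class, field and threshold in it is an assumption made for exposition and is not part of the published iEarth design.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative only: a toy pipeline mirroring the four iEarth themes described
# above (data -> science -> analytics -> decision). All names and structures
# here are assumptions for exposition, not the published framework.

@dataclass
class EarthObservation:
    indicator: str   # e.g. an SDG-related indicator such as "forest_cover"
    region: str
    year: int
    value: float

@dataclass
class IEarthPipeline:
    # iEarth data: multi-source records about natural systems and human society
    records: List[EarthObservation] = field(default_factory=list)
    # iEarth science/analytics: pluggable detection or prediction functions
    analytics: Dict[str, Callable[[List[EarthObservation]], float]] = field(default_factory=dict)

    def ingest(self, obs: EarthObservation) -> None:
        self.records.append(obs)

    def assess(self, indicator: str) -> Dict[str, float]:
        """Run every registered analytic on one indicator's records."""
        subset = [r for r in self.records if r.indicator == indicator]
        return {name: fn(subset) for name, fn in self.analytics.items()}

    def decide(self, indicator: str, target: float) -> str:
        """iEarth decision: compare an assessed trend against an SDG target."""
        trend = self.assess(indicator).get("trend", 0.0)
        return "on track" if trend >= target else "intervention needed"

# Example usage with a trivial trend analytic (latest minus earliest value).
pipeline = IEarthPipeline(analytics={
    "trend": lambda rs: (rs[-1].value - rs[0].value) if len(rs) > 1 else 0.0
})
pipeline.ingest(EarthObservation("forest_cover", "region_a", 2015, 0.42))
pipeline.ingest(EarthObservation("forest_cover", "region_a", 2020, 0.40))
print(pipeline.decide("forest_cover", target=0.0))  # -> "intervention needed"
```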

The 'intelligence' in the iEarth framework is characterized by its potential for active learning and knowledge synthesis through Big Earth Data models powered by Artificial Intelligence (AI). Consequently, the iEarth framework can also be seen as an AI model anchored on Big Earth Data. According to the team, the successful implementation of the iEarth framework necessitates significant investment in both hard and soft infrastructures.

With an aim to reinforce the vision and boost the capability of iEarth for sustainable development, the team has outlined key research directions, practical implications, and educational curricula. The ultimate objective is to shape and build an interdisciplinary and synergistic framework for research, practice, and education that helps in preserving our living planet.

See the article:

iEarth: an interdisciplinary framework in the era of big data and AI for sustainable development

https://doi.org/10.1093/nsr/nwad178

What if AI models like GPT-4 don't automatically improve over time?

Alistair Barr
Wed, July 19, 2023 

Stefani Reynolds/Getty Images

GPT-4 users have complained that the OpenAI model is getting 'dumber.'


AI researchers studied the model to find out if this was true.


Their findings, published on Tuesday, challenge the assumption that AI models automatically improve.

One of the bedrock assumptions of the current artificial intelligence boom is that AI models "learn" and improve over time. What if that doesn't actually happen?

This is what users of OpenAI's GPT-4, the world's most powerful AI model, have been experiencing lately. They have gone on Twitter and OpenAI's developer forum to complain about a host of performance issues.

After I reported on this, OpenAI responded that it hasn't "made GPT-4 dumber."

AI researchers decided to settle this debate once and for all by conducting a study. The results were published on Tuesday, and I can't wait any longer to tell you the conclusion: I was right.

"We find that the performance and behavior of both GPT-3.5 and GPT-4 vary significantly across these two releases and that their performance on some tasks have gotten substantially worse over time," the authors of the study wrote.

These are serious AI researchers. The main one is Matei Zaharia, the CTO of Databricks, one of the top AI data companies out there that was most recently valued at $38 billion.

You can read the rest of their findings here. What I'm most interested in is the new questions that these findings raise. Here's the most fascinating one.

"It is also an interesting question whether an LLM service like GPT4 is consistently getting 'better' over time," Zaharia and his research colleagues wrote in their paper.

AI is often discussed in terms of machine learning. The magic of this technology is that it can ingest new data and use it to get better over time, without human software engineers manually updating code. Again, this is the core idea that is driving today's AI frenzy and accompanying stock market surges.

If GPT-4 is getting worse, not better, this premise begins to feel shaky.
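One way to make that question concrete is to score successive model snapshots on the same frozen set of tasks, in the spirit of the study described above. The sketch below is a minimal, generic version of that idea; `query_model`, the snapshot labels and the two toy tasks are stand-ins invented for illustration, not the study's benchmark or any real API.

```python
# Minimal drift-tracking sketch: score each model snapshot on a frozen task set.
# `query_model` is a hypothetical stand-in for whatever client you use.

from typing import Callable, Dict, List, Tuple

Task = Tuple[str, str]  # (prompt, expected answer)

TASKS: List[Task] = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("Is 20019 a prime number? Answer yes or no.", "no"),
]

def accuracy(query_model: Callable[[str, str], str], snapshot: str) -> float:
    """Fraction of the fixed tasks a given model snapshot answers correctly."""
    correct = 0
    for prompt, expected in TASKS:
        answer = query_model(snapshot, prompt).strip().lower()
        if answer.startswith(expected):
            correct += 1
    return correct / len(TASKS)

def drift_report(query_model: Callable[[str, str], str],
                 snapshots: List[str]) -> Dict[str, float]:
    """Score every snapshot on the same frozen tasks so results are comparable."""
    return {snap: accuracy(query_model, snap) for snap in snapshots}

# Example with a fake model whose answers flip between releases:
def fake_model(snapshot: str, prompt: str) -> str:
    return "yes" if snapshot == "2023-03" else "no"

print(drift_report(fake_model, ["2023-03", "2023-06"]))
# {'2023-03': 0.5, '2023-06': 0.5} -- identical aggregate scores can still hide
# per-task answer flips, which is why task-level comparison matters too.
```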

The Microsoft factor

Microsoft has invested heavily in OpenAI, the creator of GPT-4. Microsoft is also baking this technology into its software, and charging users a lot for the new capabilities.

On Tuesday, the same day Zaharia & Co. published their paper, Microsoft unveiled pricing for Microsoft 365 Copilot, new AI-powered versions of popular cloud software such as Office 365. It costs an additional $30 a month per user, on top of what customers are already paying.

Microsoft's market value jumped more than $150 billion after this announcement, showing that Wall Street is betting on AI, and the impact the technology will have on the company's products.

This recent GPT-4 research paper provides a healthy dose of skepticism to the assumptions that are driving these wild swings in value.

Scientist Gary Marcus read Zaharia's study and highlighted how unstable LLMs are. So unstable that relying on them for high-end business products might not be a good idea.

"Who in their right mind would rely on a system that could be 97.6% correct on a task in March and 2.4% correct on same task in June?," he tweeted, citing one of the findings in the research paper. "Important results. Anyone planning to rely on LLMs, take note."

"Prediction: this instability will be LLMs' undoing," Marcus added. "They will never be as commercially successful as the VC community is imagining, and some architectural innovation that allows for greater stability will largely displace LLMs within the next 10 years."

Spokespeople from OpenAI and Microsoft didn't respond to a request for comment on Wednesday.

 

A defense against attacks on unmanned ground and aerial vehicles


UTA research team investigating ways to thwart cyberattacks


Grant and Award Announcement

UNIVERSITY OF TEXAS AT ARLINGTON

Image: Animesh Chakravarthy. Credit: UT Arlington




A University of Texas at Arlington engineering researcher is working on defenses that could thwart cyberattacks against networks of self-driving cars and unmanned aerial vehicles.

Animesh Chakravarthy, associate professor in the Department of Mechanical and Aerospace Engineering (MAE), is the principal investigator on an approximately $800,000 U.S. Department of Defense grant titled “Resilient Multi-Vehicle Networks.” MAE Professor Kamesh Subbarao, and Bill Beksi, assistant professor in the Department of Computer Science and Engineering, are co-principal investigators.

“If hackers find a way to affect 10 out of 100 self-driving cars in a given area, they might have an impact on all 100 cars because the 10 hacked cars would have a ripple effect on the other vehicles,” Chakravarthy said. “We have to make these networks of vehicles resilient to such attacks. This project is meant to detect occurrences as they happen, then provide countermeasures.”
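To see why a few compromised vehicles can degrade a whole network, consider a deliberately simple thought experiment: vehicles that naively average the speeds their neighbors report can all be pulled toward a false value broadcast by a handful of hacked nodes. The Python sketch below simulates that ripple effect under toy assumptions (a ring of 100 vehicles, 10 persistent liars, plain averaging); it is not the UTA team's model or countermeasure, just an illustration of the failure mode the grant targets.

```python
# Toy illustration of the "ripple effect" quoted above: honest vehicles that
# average their neighbors' reported speeds are dragged off course by a few
# compromised nodes. A simplified consensus sketch, not the project's methods.

import random

NUM_VEHICLES = 100
NUM_HACKED = 10
TRUE_SPEED = 30.0   # m/s the honest vehicles try to agree on
FALSE_SPEED = 0.0   # value persistently broadcast by compromised vehicles

random.seed(1)
hacked = set(random.sample(range(NUM_VEHICLES), NUM_HACKED))
speeds = [FALSE_SPEED if i in hacked else TRUE_SPEED for i in range(NUM_VEHICLES)]

# Ring topology: each vehicle hears only its two neighbors.
def neighbors(i: int) -> list[int]:
    return [(i - 1) % NUM_VEHICLES, (i + 1) % NUM_VEHICLES]

for step in range(50):
    new_speeds = []
    for i in range(NUM_VEHICLES):
        if i in hacked:
            new_speeds.append(FALSE_SPEED)  # attacker keeps lying
        else:
            heard = [speeds[j] for j in neighbors(i)] + [speeds[i]]
            new_speeds.append(sum(heard) / len(heard))  # naive averaging
    speeds = new_speeds

honest = [speeds[i] for i in range(NUM_VEHICLES) if i not in hacked]
print(f"average honest speed after 50 rounds: {sum(honest)/len(honest):.1f} m/s")
# Well below the true 30 m/s: ten persistent liars degrade all 100 vehicles.
```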

Chakravarthy and his colleagues also will attempt to determine costs associated with cyberattacks on automated vehicles, including how much time and money are wasted in traffic or in waiting for accidents to clear.

MAE Chair Erian Armanios said Chakravarthy’s research will be vital to the growth of unmanned vehicle networks.

“You need to ensure smooth and safe operations of those vehicle networks,” Armanios said. “The work of Chakravarthy, Subbarao and Beksi in this grant will achieve that.”

 

Dark SRF experiment at Fermilab demonstrates ultra-sensitivity for dark photon searches


Peer-Reviewed Publication

DOE/FERMI NATIONAL ACCELERATOR LABORATORY

Image: Standing around the Dark SRF experiment, from left to right, are SQMS Center director Anna Grassellino, SQMS science thrust leader Roni Harnik and SQMS technology thrust leader Alexander Romanenko. Credit: Reidar Hahn, Fermilab




Scientists working on the Dark SRF experiment at the U.S. Department of Energy’s Fermi National Accelerator Laboratory have demonstrated unprecedented sensitivity in an experimental setup used to search for theorized particles called dark photons.

Researchers trapped ordinary, massless photons in devices called superconducting radio frequency cavities to look for the transition of those photons into their hypothesized dark sector counterparts. The experiment has put the world’s best constraint on the existence of dark photons in a specific mass range, as recently published in Physical Review Letters.

“The dark photon is a copy similar to the photon we know and love, but with a few variations,” said Roni Harnik, a researcher at the Fermilab-hosted Superconducting Quantum Materials and Systems Center and co-author of this study.

Light that allows us to see the ordinary matter in our world is made of particles called photons. But ordinary matter only accounts for a small fraction of all matter. Our universe is filled with an unknown substance called dark matter, which comprises 85% of all matter. The Standard Model that describes the known particles and forces is incomplete.

In theorists’ simplest version, one undiscovered type of dark matter particle could account for all the dark matter in the universe. But many scientists suspect that the dark sector in the universe has many different particles and forces; some of them might have hidden interactions with ordinary matter particles and forces.

Just as the electron has copies that differ in some ways, including the muon and tau, the dark photon would be different from the regular photon and would have mass. Theoretically, once produced, photons and dark photons could transform into each other at a specific rate set by the dark photon’s properties.

Innovative use of SRF cavities

To look for dark photons, researchers perform a type of experiment called a light-shining-through-wall experiment. This approach uses two hollow, metallic cavities to detect the transformation of an ordinary photon into a dark matter photon. Scientists store ordinary photons in one cavity while leaving the other cavity empty. They then look for the emergence of photons in the empty cavity.

Fermilab researchers in the SQMS Center have years of expertise working with SRF cavities, which are used primarily in particle accelerators. SQMS Center researchers have now employed SRF cavities for other purposes, such as quantum computing and dark matter searches, due to their ability to store and harness electromagnetic energy with high efficiency.

“We were looking for other applications with superconducting radio frequency cavities, and I learned about these experiments where they use two copper cavities side-by-side to test for light shining through the wall,” said Alexander Romanenko, SQMS Center quantum technology thrust leader. “It was immediately clear to me that we could demonstrate greater sensitivity with SRF cavities than cavities used in previous experiments.”

This experiment marks the first demonstration of using SRF cavities to perform a light-shining-through-wall experiment.

The SRF cavities used by Romanenko and his collaborators are hollow chunks of niobium. When cooled to ultralow temperature, these cavities store photons, or packets of electromagnetic energy, very well. For the Dark SRF experiment, scientists cooled the SRF cavities in a bath of liquid helium to around 2 K, close to absolute zero.

At this temperature, electromagnetic energy flows effortlessly through niobium, which makes these cavities efficient at storing photons.

“We have been developing various schemes trying to handle the new opportunities and challenges brought in by these ultra-high-quality superconducting cavities for this light-shining-through-wall experiment,” said study co-author Zhen Liu, an SQMS Center physics and sensing team member from the University of Minnesota.

Researchers now can use SRF cavities with different resonance frequencies to cover various parts of the potential mass range for dark photons. This is because the peak sensitivity on the mass of the dark photon is directly related to the frequency of the regular photons stored in one of the SRF cavities.
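As a rough sense of scale, the energy of a photon in the stored mode is E = h f, so the cavity frequency sets the mass region being probed. The snippet below does that back-of-the-envelope conversion; the 1.3 GHz value (and the others) are representative SRF cavity frequencies assumed here for illustration, not the experiment's published sensitivity curve.

```python
# Back-of-the-envelope conversion between cavity frequency and photon energy,
# E = h * f. The listed frequencies are representative values assumed for
# illustration; the experiment's exact frequencies are in the published paper.

PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV * s

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of one photon at the given frequency, in electronvolts."""
    return PLANCK_EV_S * frequency_hz

for f_ghz in (0.5, 1.3, 2.6):
    e_ev = photon_energy_ev(f_ghz * 1e9)
    print(f"{f_ghz:>4} GHz  ->  {e_ev * 1e6:.1f} micro-eV photon energy")

# 1.3 GHz corresponds to roughly 5.4 micro-eV, which is why cavities with
# different resonance frequencies cover different parts of the possible
# dark photon mass range.
```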

“The team has done many follow-ups and cross-checks of the experiment,” said Liu, who worked on the data analysis and the verification design. “SRF cavities open many new search possibilities. The fact we covered new parameter regions for the dark photon’s mass shows their successfulness, competitiveness and great promise for the future.”

“The Dark SRF experiment has paved the way for a new class of experiments under exploration at the SQMS Center, where these very high Q cavities are employed as extremely sensitive detectors,” said Anna Grassellino, director of the SQMS Center and co-PI of the experiment. “From dark matter to gravitational wave searches, to fundamental tests of quantum mechanics, these world’s-highest-efficiency cavities will help us uncover hints of new physics.”

Fermi National Accelerator Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

 

The Lancet: People on ART with low but detectable levels of HIV viral load have almost zero risk of sexually transmitting the virus to others, in-depth review suggests


Peer-Reviewed Publication

THE LANCET



  • Systematic review of 8 studies in more than 7,700 serodiscordant couples in 25 countries finds people living with HIV with viral loads less than 1,000 copies/mL have almost zero risk of transmitting the virus to their sexual partners. Previous studies have not been able to confirm a lack of transmission risk above 200 copies/mL.
  • Systematic review also consolidates and reinforces previous studies that have found there is zero risk of transmitting the virus to sexual partners when people living with HIV have an undetectable viral load.
  • Of the more than 320 documented sexual HIV transmissions in the study, only two involved a partner with a viral load of less than 1,000 copies/mL. In both cases, viral load testing was performed at least 50 days before transmission took place.
  • At least 80% of transmissions involved a partner with HIV who had a viral load greater than 10,000 copies/mL.
  • The findings are published alongside a new WHO policy brief providing updated guidance for HIV treatment monitoring and accompanying supportive messaging regarding transmission risks.
  • Together the new study and policy brief advance global efforts to achieve undetectable viral loads by expanding antiretroviral therapy (ART) for all people living with HIV and reinforce the importance of decriminalising HIV and reducing stigma and discrimination for people living with HIV. 

People living with HIV who maintain low – but still detectable – levels of the virus and adhere to their antiretroviral regimen have almost zero risk of transmitting it to their sexual partners, according to an analysis published in The Lancet. The study’s findings will be presented at an official satellite session ahead of the 12th International AIDS Society Conference on HIV Science (IAS 2023). [1]

Findings from the systematic review indicate the risk of sexual transmission of HIV is almost zero at viral loads of less than 1,000 copies of the virus per millilitre of blood—also commonly referred to as having a suppressed viral load. The systematic review also confirms that people living with HIV who have an undetectable viral load (not detected by the test used) have zero risk of transmitting HIV to their sexual partners.

A new policy brief from the World Health Organization (WHO), published alongside the research paper, provides updated sexual transmission prevention and viral load testing guidance to policymakers, public health professionals, and people living with HIV based on this analysis. This guidance aims to further prevent the transmission of HIV and ultimately support global efforts to achieve undetectable viral loads through antiretroviral therapy for all people living with HIV and to prevent onward transmission to their sexual partner(s) and children. [2]

Previous research has shown people living with HIV with viral loads below 200 copies/mL have zero risk of sexually transmitting the virus. However, until now, the risk of transmission at viral loads between 200 and 1000 copies/mL was less well defined. 

The authors filled this knowledge gap by searching databases for all research studies published between January 2000 and November 2022 on sexual transmission of HIV at varying viral loads. In total, eight studies were included in the systematic review, providing data on 7,762 serodiscordant couples – in which one partner was living with HIV – across 25 countries. 

Lead author Laura Broyles, MD, of the Global Health Impact Group (Atlanta, USA), said: “These findings are important as they indicate that it is extremely rare for people who maintain low levels of HIV to transmit it to their sexual partners. Crucially, this conclusion can promote the expansion of alternative viral load testing modalities that are more feasible in resource-limited settings. Improving access to routine viral load testing could ultimately help people with HIV live healthier lives and reduce transmission of the virus.” [3]

Taking daily medicine to treat HIV – antiretroviral therapy, or ART – lowers the amount of the virus in the body, which preserves immune function, slows disease progression and reduces the morbidity and mortality associated with the virus. Without ART, people living with HIV can have a viral load of 30,000 to more than 500,000 copies/mL, depending on the stage of infection. [4]

While using lab-based plasma sample methods provides the most sensitive viral load test results, such tests are not feasible in many parts of the world. However, the new findings support the greater use of simpler testing approaches, such as using dried blood spot samples, as they are effective at categorising viral loads for necessary clinical decision-making. 

Of the 323 sexual transmissions of HIV detected across all eight studies, only two involved a partner with a viral load of less than 1000 copies/mL. In both cases, the viral load test was performed at least 50 days before transmission, suggesting individuals’ viral load may have risen in the period following the test. In studies that provided the full range of viral loads in partners with HIV, at least 80% of transmissions involved viral loads greater than 10,000 copies/mL. 

Co-author Dr Lara Vojnov, of WHO, said: “The ultimate goal of antiretroviral therapy for people living with HIV is to maintain undetectable viral loads, which will improve their own health and prevent transmission to their sexual partners and children. But these new findings are also significant as they indicate that the risk of sexual transmission of HIV at low viral loads is almost zero. This provides a powerful opportunity to help destigmatise HIV, promote the benefits of adhering to antiretroviral therapy, and support people living with HIV.” [3]

The authors acknowledge some limitations to their study. Some of the data analysed were imprecise due to variations across the studies in the definitions of ‘low viral load’, and in the timing and frequency of viral load testing and patient follow-up. Today, HIV treatment is recommended for everyone living with HIV and very large sample sizes would be needed to develop more precise estimates given the extremely low number of transmissions. 

Further, the findings do not apply to HIV transmission from mother to child, as the duration and intensity of exposure – during pregnancy, childbirth, and breastfeeding – is much higher. Differences also exist in the way the virus is passed from mother to child as compared with sexual transmission. Ensuring pregnant and breastfeeding women have undetectable viral loads throughout the entire exposure period is key to preventing new childhood HIV infections. 

Writing in a linked Comment, co-authors Linda-Gail Bekker, Philip Smith, and Ntobeko A B Ntusi (who were not involved in the study) said, “Laura N Broyles and colleagues’ systematic review in The Lancet further supports the almost zero risk for sexual transmission of HIV at levels less than 1000 copies per mL…This evidence is relevant for at least three important reasons. First, it highlights the need for viral load testing scale-up in all settings where people are living with HIV and taking ART…Second, as pointed out by Broyles and colleagues, these data are probably the best that we will ever have. Standard of care now requires that individuals are offered life-saving ART regardless of viral load…Third, and most importantly, this study provides strong support for the global undetectable equals untransmittable (U=U) campaign. This campaign seeks to popularise the concept that individuals with undetectable viral loads are not infectious to sexual partners, thereby reducing stigma and improving quality of life.”

NOTES TO EDITORS

This study was funded by the Bill & Melinda Gates Foundation. It was conducted by researchers from the Global Health Impact Group and the World Health Organization.

[1] What's new in WHO guidelines: innovations, treatment, integration and monitoring:  https://programme.ias2023.org/Programme/Session/4451
[2] The WHO policy brief will be available via the following link when the embargo lifts: https://apps.who.int/iris/handle/10665/360860. For embargoed access to the policy brief please contact mediainquiries@who.int
[3] Quote direct from author and cannot be found in the text of the Article.
[4] https://pubmed.ncbi.nlm.nih.gov/30401660/

The labels have been added to this press release as part of a project run by the Academy of Medical Sciences seeking to improve the communication of evidence. For more information, please see: http://www.sciencemediacentre.org/wp-content/uploads/2018/01/AMS-press-release-labelling-system-GUIDANCE.pdf. If you have any questions or feedback, please contact The Lancet press office: pressoffice@lancet.com
 

 

Bodybuilding supplement may help stave off Alzheimer’s


Peer-Reviewed Publication

RUSH UNIVERSITY MEDICAL CENTER




The secret to protecting your memory may be a staple of a bodybuilder’s diet. RUSH researchers recently discovered that a muscle-building supplement called beta-hydroxy beta-methylbutyrate, also called HMB, may help protect memory, reduce plaques and ultimately help prevent the progression of Alzheimer’s disease.

HMB is not a prescription drug or a steroid, but an over-the-counter supplement that is available in sports and fitness stores. Bodybuilders regularly use HMB to increase exercise-induced gains in muscle size and strength while improving exercise performance. HMB is considered safe even after long-term use, with no known side effects.

“This may be one of the safest and the easiest approaches to halt disease progression and protect memory in Alzheimer’s disease patients,” said Kalipada Pahan, PhD, the Floyd A. Davis, MD, Professor of Neurology and professor of neurological sciences, biochemistry and pharmacology at RUSH Medical College.

Studies in mice with Alzheimer’s disease have shown that HMB successfully reduces plaques and increases factors for neuronal growth to protect learning and memory, according to neurological researchers at RUSH.

“Understanding how the disease works is important to developing effective drugs to protect the brain and stop the progression of Alzheimer’s disease,” Pahan said.

Previous studies indicate that neurotrophic factors, a family of proteins that support the survival and function of neurons (the cells that carry messages between the body and the brain), are drastically decreased in the brains of people with Alzheimer’s disease.

“Our study found that after oral consumption, HMB enters into the brain to increase these beneficial proteins, restore neuronal connections and improve memory and learning in mice with Alzheimer’s-like pathology, such as plaques and tangles,” Pahan said.

The study findings indicate that HMB stimulates a nuclear hormone receptor called PPARα within the brain that regulates the transport of fatty acids, which is key to the success of HMB as a neuroprotective supplement.

“If mouse results with HMB are replicated in Alzheimer’s disease patients, it would open up a promising avenue of treatment of this devastating neurodegenerative disease,” Pahan said.

Alzheimer's disease is an irreversible, progressive brain disease that slowly destroys memory and thinking skills and, eventually, even the ability to carry out the simplest tasks. In most people with Alzheimer's, symptoms first appear after age 60. Alzheimer's disease is the most common cause of dementia among older people, affecting as many as 6 million Americans and more than 10% of people age 65 and older. About two-thirds of Americans with Alzheimer’s disease are women.

Results from the study, funded by the National Institutes of Health, were recently published in Cell Reports.

https://www.cell.com/cell-reports/fulltext/S2211-1247(23)00728-3

Other RUSH researchers involved in this study are Ramesh K. Paidi, PhD, Sumita Raha, PhD, and Avik Roy, PhD.

 

Researchers illuminate resilience of U.S. food supply chains


Peer-Reviewed Publication

UNIVERSITY OF ILLINOIS GRAINGER COLLEGE OF ENGINEERING




Researchers have identified a number of chokepoints in U.S. agricultural and food supply chains through a study that improves our understanding of agri-food supply chain security and may aid policies aimed at enhancing its resilience. The work is presented in the paper “Structural chokepoints determine the resilience of agri-food supply chains in the United States,” published in the July 20, 2023, issue of the journal Nature Food, by authors including CEE Associate Professor Megan Konar and CEE Ph.D. student Deniz Berfin Karakoc.

The agricultural and food systems of the United States are critical for ensuring the stability of both domestic and global food systems, so it is essential to understand the structural resilience of the country’s agri-food supply chains to threats, researchers write. Because the United States plays a key role in a highly integrated global food system, the resilience and security of the U.S. food supply chain has implications for global food security. Additionally, agricultural and food system security and resilience is increasingly recognized as a non-traditional defense objective in the national security community and is critical to the mission of U.S. national defense agencies. 

“We were inspired to perform this research due to the supply chain disruptions during the pandemic and in response to the Executive Order on America’s Supply Chains, which highlights the importance of supply chains for national security,” Konar says. “We hope this research can contribute to more resilient and secure food supply chains.” 

Chokepoints are locations that are critical for distributing agri-food commodities throughout the country. While much research on agri-food supply chains has been from the perspective of industrial firms with a focus on logistics, cost-savings and resilience, the researchers took a national and global security perspective due to growing threats such as pandemics, extreme weather events, climate shocks, and cyber and terrorist attacks. The researchers employed a complex network approach to determine the chokepoints within the agri-food supply chains in the continental U.S. for the years 2007, 2012 and 2017. They found that chokepoints were generally consistent over time. 
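The paper's specific methodology and data are not reproduced here, but one common way to flag chokepoints in any flow network is to compute a centrality measure, such as betweenness, on the commodity-flow graph. The sketch below illustrates that general idea on a made-up toy network using the networkx library; the node names, tonnages and choice of metric are assumptions for illustration, not the study's data or exact method.

```python
# Sketch of a common chokepoint heuristic: betweenness centrality on a toy
# commodity-flow graph. Nodes, flows and the metric are illustrative only.

import networkx as nx

# Toy directed network: edges are shipments (tons) between illustrative hubs.
flows = [
    ("farm_belt", "river_port", 500),
    ("farm_belt", "rail_hub", 300),
    ("river_port", "gulf_export", 450),
    ("rail_hub", "river_port", 200),
    ("rail_hub", "east_metro", 250),
    ("river_port", "east_metro", 100),
]

G = nx.DiGraph()
for origin, destination, tons in flows:
    # Use inverse tonnage as "distance" so heavy corridors count as short paths.
    G.add_edge(origin, destination, weight=1.0 / tons)

# Nodes that sit on many shortest (heaviest-corridor) paths are candidate
# chokepoints: removing them would disconnect or reroute a lot of flow.
centrality = nx.betweenness_centrality(G, weight="weight")
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:<12} betweenness = {score:.2f}")
```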

Co-authors also include Michael J. Puma of the Center for Climate Systems Research at Columbia University and Lav R. Varshney of the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign.