Wednesday, December 20, 2023

 

“Honey, I shrunk the cookbook” – New approach to vaccine development


Bioinformatics: Publication in Cell Systems

Peer-Reviewed Publication

HEINRICH-HEINE UNIVERSITY DUESSELDORF

Image: With the HOGVAX tool, so-called epitopes – short protein fragments of a pathogen that trigger an immune response – can be combined to create novel vaccines. The aim is to maximise population coverage. (Fig.: HHU/Sara Schulte)




Vaccine development aims at protecting as many people as possible from infections. Short protein fragments of pathogens, so-called epitopes, are seen as a promising new approach for vaccine development. In the scientific journal Cell Systems, bioinformaticians from Heinrich Heine University Düsseldorf (HHU) now present a method for identifying those epitopes that promise safe immunisation across the broadest possible population group. They have also computed vaccine candidates against the coronavirus SARS-CoV-2 using their HOGVAX tool.

During the coronavirus pandemic, so-called mRNA vaccines proved particularly successful and flexible. These vaccines target the so-called spike proteins – characteristic structures on the surface of the virus. The mRNA contains the sequence of the spike protein, which is produced in the body after vaccination and then trains the human immune system.

“Epitopes” – short fragments of pathogen proteins that are capable of triggering an immune response – are seen as an alternative method to mRNA and a promising approach for obtaining targeted immune responses quickly, cost-effectively and safely.

Everyone has a unique immune system: Depending on their infection history, the immune system is trained to handle and react to different proteins. “This is a fundamental problem of vaccines based on epitopes,” explains Professor Dr Gunnar Klau, holder of the Chair of Algorithmic Bioinformatics at HHU. Together with his PhD student Sara Schulte and Professor Dr Alexander Dilthey from the Institute of Medical Microbiology and Hospital Hygiene, he considered a new approach to developing such vaccines.

Professor Klau compares the problem with a chef who needs to create a new dish for a large event: “Some guests have allergies, while others do not like certain ingredients, so the chef needs to select ingredients that as many of the guests as possible can eat and will enjoy.”

Translated to vaccine development, this means that they are seeking epitopes that trigger a good immune response in as many people as possible. This is necessary because it is not possible to pack an unlimited number of protein fragments into a vaccine so that the various immune systems can seek out the sequences suitable for them – the carrier medium simply does not have sufficient capacity.

The team of three researchers took a special approach with their bioinformatic tool “HOGVAX”. Sara Schulte: “Instead of stringing the epitopes for the vaccine together end-to-end, we use identical sequences at the beginning and end of the epitopes so we can overlay them. The identical section, known as the ‘overlap’, is thus only represented once in the vaccine, which enables us to save a huge amount of space.” This in turn enables many more epitopes to be included in a vaccine.
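To illustrate the saving (a minimal sketch in Python, not the HOGVAX implementation itself; the two 9-letter peptide strings are invented for the example), overlaying two epitopes that share a suffix–prefix overlap stores the shared letters only once:

```python
def longest_overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def merge(a: str, b: str) -> str:
    """Concatenate a and b, storing their shared overlap only once."""
    return a + b[longest_overlap(a, b):]

# Two hypothetical 9-letter epitopes sharing a 5-letter overlap ("LNQLE")
e1, e2 = "LLDRLNQLE", "LNQLESKMS"
print(merge(e1, e2))                          # LLDRLNQLESKMS
print(len(e1) + len(e2), len(merge(e1, e2)))  # 18 letters end-to-end vs 13 overlaid
```

Here the two fragments together need 13 letters instead of 18, and the same trick applied across hundreds of epitopes frees up room for many more fragments within the same carrier capacity.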

In order to manage the epitopes and their longest overlaps efficiently, the researchers use a data structure known as a “hierarchical overlap graph” (for short: HOG). Klau: “To stay with the cooking analogy: HOG corresponds to a compressed or shrunk cookbook, from which the chef can now select the recipes that are suitable for all guests.”
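A much-simplified picture of the idea (an ordinary, non-hierarchical overlap graph over a few invented peptides; the actual hierarchical overlap graph used by HOGVAX is a more compact structure described in the paper) can be sketched as follows:

```python
from itertools import permutations

def overlap(a: str, b: str) -> int:
    """Longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

# Hypothetical example epitopes (not taken from the paper)
epitopes = ["LLDRLNQLE", "LNQLESKMS", "ESKMSGKGQ"]

# Edges of the overlap graph: (source epitope, target epitope, overlap length)
edges = []
for a, b in permutations(epitopes, 2):
    k = overlap(a, b)
    if k > 0:
        edges.append((a, b, k))

for a, b, k in edges:
    print(f"{a} -> {b}  (overlap {k})")
```

Choosing a path through such a graph corresponds to deciding which epitopes to include in the vaccine and in which order to overlay them.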

Professor Dilthey: “As a test, we applied HOGVAX to data for the SARS-CoV-2 virus and we were able to integrate significantly more epitopes than other tools. According to our calculations, we would be able to reach – and immunise – more than 98% of the world population.”

Sara Schulte comments on the further perspectives for their results: “In the future, we will work on adapting HOGVAX for use in cancer therapy. The aim here is to develop agents specifically designed for individual patients that attack tumour cells in a targeted manner.”

Original publication:

Sara C. Schulte, Alexander T. Dilthey, Gunnar W. Klau: HOGVAX: Exploiting Epitope Overlaps to Maximize Population Coverage in Vaccine Design with Application to SARS-CoV-2. Cell Systems 14, 1–9, December 20, 2023.

DOI: 10.1016/j.cels.2023.11.001


Image: Functional principle of the HOGVAX tool. (Fig.: HHU/Sara Schulte)


 

Cosmic lights in the forest


PRIYA, the largest-ever suite of supercomputer simulations of Lyman-α forest spectral data, illustrates the large-scale structure of the universe


Peer-Reviewed Publication

UNIVERSITY OF TEXAS AT AUSTIN

Image: TACC’s Frontera supercomputer helped astronomers develop PRIYA, the largest suite of hydrodynamic simulations yet made of large-scale structure in the universe. Example Lyman-α forest spectra from quasar light and corresponding gas density and temperature from simulations at redshift z = 4. The top panel shows high resolution, the bottom panel low resolution, and the middle panel the Lyman-α forest spectra. (Credit: DOI: 10.48550/arXiv.2309.03943)



Like celestial beacons, distant quasars produce the brightest light in the universe, emitting more light than our entire Milky Way galaxy. That light comes from matter ripped apart as it is swallowed by a supermassive black hole. It also carries information about cosmological parameters – the numerical constraints astronomers use to trace the evolution of the entire universe over the billions of years since the Big Bang.

Quasar light reveals clues about the large-scale structure of the universe as it shines through enormous clouds of neutral hydrogen gas formed shortly after the Big Bang on the scale of 20 million light years across or more.

Using quasar light data, the National Science Foundation (NSF)-funded Frontera supercomputer at the Texas Advanced Computing Center (TACC) helped astronomers develop PRIYA, the largest suite of hydrodynamic simulations yet made for simulating large-scale structure in the universe.

“We’ve created a new simulation model to compare with data from the real universe,” said Simeon Bird, an assistant professor of astronomy at the University of California, Riverside.

Bird and colleagues developed PRIYA, which takes optical light data from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey (SDSS). They published the work announcing PRIYA in October 2023 in the Journal of Cosmology and Astroparticle Physics (JCAP).

“We compare eBOSS data to a variety of simulation models with different cosmological parameters and different initial conditions to the universe, such as different matter densities,” Bird explained. “You find the one that works best and how far away from that one you can go without breaking the reasonable agreement between the data and simulations. This knowledge tells us how much matter there is in the universe, or how much structure there is in the universe.”
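Schematically, the "find the best fit and see how far you can move away from it" step can be pictured with a toy chi-squared scan (invented data and a single made-up "amplitude" parameter; the real PRIYA analysis uses an emulator built from many simulations and several cosmological parameters):

```python
import numpy as np

# Toy "observed" statistic with uncertainties (invented numbers)
data = np.array([1.02, 0.97, 1.05])
sigma = np.array([0.05, 0.05, 0.05])

def model(amplitude: float) -> np.ndarray:
    """Toy model: predictions scale with a single 'structure amplitude' parameter."""
    return amplitude * np.ones_like(data)

# Grid of candidate parameter values (a stand-in for running many simulations)
grid = np.linspace(0.8, 1.2, 401)
chi2 = np.array([np.sum(((data - model(a)) / sigma) ** 2) for a in grid])

best = grid[np.argmin(chi2)]
allowed = grid[chi2 <= chi2.min() + 1.0]   # ~68% interval for one parameter
print(f"best fit: {best:.3f}, allowed range: {allowed.min():.3f}-{allowed.max():.3f}")
```

The "allowed range" plays the role Bird describes: how far the parameter can move before the agreement between data and simulation breaks down.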

The PRIYA simulation suite is connected to large-scale cosmological simulations also co-developed by Bird, called ASTRID, which is used to study galaxy formation, the coalescence of supermassive black holes, and the re-ionization period early in the history of the universe. PRIYA goes a step further. It takes the galaxy information and the black hole formation rules found in ASTRID and changes the initial conditions.

“With these rules, we can take the model that we developed that matches galaxies and black holes, and then we change the initial conditions and compare it to the Lyman-α forest data of neutral hydrogen gas from eBOSS,” Bird said.

The ‘Lyman-α forest’ gets its name from the ‘forest’ of closely packed absorption lines on a graph of the quasar spectrum resulting from electron transitions between energy levels in atoms of neutral hydrogen. The ‘forest’ indicates the distribution, density, and temperature of enormous intergalactic neutral hydrogen clouds. What’s more, the lumpiness of the gas indicates the presence of dark matter, a hypothetical substance that cannot be seen yet is evident by its observed tug on galaxies.
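For context, the standard relation behind the forest (textbook physics, not a result of this study) is that neutral hydrogen at redshift z along the line of sight absorbs the quasar light at the redshifted Lyman-α wavelength:

```latex
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{Ly}\alpha},
\qquad \lambda_{\mathrm{Ly}\alpha} \approx 1215.67\ \text{\AA}
```

Each absorbing cloud sits at its own redshift, so each imprints its own line, and together the lines form the ‘forest’.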

PRIYA simulations have been used to refine cosmological parameters in work submitted to JCAP September 2023 and authored by Simeon Bird and his UC Riverside colleagues, M.A. Fernandez and Ming-Feng Ho.

Previous analyses of the neutrino mass parameters did not agree with data from the Cosmic Microwave Background (CMB) radiation, described as the afterglow of the Big Bang. Astronomers use CMB data from the Planck space observatory to place tight constraints on the mass of neutrinos. Neutrinos are the most abundant particles with mass in the universe, so pinpointing their mass value is important for cosmological models of large-scale structure in the universe.

“We made a new analysis with simulations that were a lot larger and better designed than anything before.  The earlier discrepancies with the Planck CMB data disappeared, and were replaced with another tension, similar to what is seen in other low redshift large-scale structure measurements,” Bird said. “The main result of the study is to confirm the σ8 tension between CMB measurements and weak lensing exists out to redshift 2, ten billion years ago.”

One well-constrained parameter from the PRIYA study is σ8, which measures the clumpiness of the matter traced by the neutral hydrogen gas on a scale of 8 megaparsecs, or about 26 million light years. “This indicates the number of clumps of dark matter that are floating around there,” Bird said.
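For readers who want the formal definition (the standard cosmology convention, stated here for context rather than taken from the press release; strictly the scale is 8 h⁻¹ megaparsecs, where h ≈ 0.7 is the dimensionless Hubble parameter), σ8 is the root-mean-square matter density fluctuation in spheres of that radius:

```latex
\sigma_8^2 = \frac{1}{2\pi^2} \int_0^\infty P(k)\, W^2(kR)\, k^2\, \mathrm{d}k,
\qquad R = 8\,h^{-1}\,\mathrm{Mpc}
```

where P(k) is the matter power spectrum and W is the spherical top-hat window function.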

Another parameter constrained was ns, the scalar spectral index. It is connected to how the clumpiness of dark matter varies with the size of the region analyzed. It indicates how fast the universe was expanding just moments after the Big Bang.
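In the usual parametrization (a standard convention; the quoted value is the well-known Planck measurement, not a PRIYA result), the primordial power spectrum is a power law in wavenumber k:

```latex
\mathcal{P}_{\mathcal{R}}(k) = A_s \left( \frac{k}{k_*} \right)^{\,n_s - 1}
```

Here n_s = 1 corresponds to a perfectly scale-invariant spectrum, while Planck finds n_s ≈ 0.965.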

“The scalar spectral index sets up how the universe behaves right at the beginning. The whole idea of PRIYA is to work out the initial conditions of the universe, and how the high energy physics of the universe behaves,” Bird said.

Supercomputers were needed for the PRIYA simulations, Bird explained, simply because they were so big.

“The memory requirements for PRIYA simulations are so big you cannot put them on anything other than a supercomputer,” Bird said.

TACC awarded Bird a Leadership Resource Allocation on the Frontera supercomputer. Additionally, analysis computations were performed using the resources of the UC Riverside High Performance Computer Cluster.

The PRIYA simulations on Frontera are some of the largest cosmological simulations yet made, needing over 100,000 core-hours to simulate a system of 3072^3 (about 29 billion) particles in a ‘box’ 120 megaparsecs on a side, or about 391 million light years across. PRIYA simulations consumed over 600,000 node hours on Frontera.
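As a quick sanity check on those figures (plain unit conversion, nothing new):

```latex
3072^3 \approx 2.9 \times 10^{10} \ \text{particles},
\qquad
120\ \mathrm{Mpc} \times 3.26\ \tfrac{\text{million light years}}{\mathrm{Mpc}} \approx 391\ \text{million light years}
```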

“Frontera was very important to the research because the supercomputer needed to be big enough that we could run one of these simulations fairly easily, and we needed to run a lot of them. Without something like Frontera, we wouldn't be able to solve them. It's not that it would take a long time – they just wouldn't be able to run at all,” Bird said.

In addition, TACC’s Ranch system provided long-term storage for PRIYA simulation data.

“Ranch is important, because now we can reuse PRIYA for other projects. This could double or triple our science impact,” Bird said.

“Our appetite for more compute power is insatiable," Bird concluded. "It's crazy that we're sitting here on this little planet observing most of the universe.”

  

Image: TACC’s Frontera, the fastest academic supercomputer in the US, is a strategic national capability computing system funded by the National Science Foundation. (Credit: TACC)

 

New brain-like transistor mimics human intelligence


Transistor performs energy-efficient associative learning at room temperature


Peer-Reviewed Publication

NORTHWESTERN UNIVERSITY

Image: An artistic interpretation of brain-like computing. (Credit: Xiaodong Yan/Northwestern University)




Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperature. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

The study will be published on Wednesday (Dec. 20) in the journal Nature.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data move back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the department of materials science and engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute for Nanotechnology. Hersam co-led the research with Qiong Ma of Boston College and Pablo Jarillo-Herrero of MIT.

Recent advances in artificial intelligence (AI) have motivated researchers to develop computers that operate more like the human brain. Conventional digital computing systems have separate processing and storage units, causing data-intensive tasks to devour large amounts of energy. With smart devices continuously collecting vast quantities of data, researchers are scrambling to uncover new ways to process it all without consuming an increasing amount of power. Currently, the memory resistor, or “memristor,” is the most well-developed technology that can perform combined processing and memory function. But memristors still suffer from energy-costly switching.

“For several decades, the paradigm in electronics has been to build everything out of transistors and use the same silicon architecture,” Hersam said. “Significant progress has been made by simply packing more and more transistors into integrated circuits. You cannot deny the success of that strategy, but it comes at the cost of high power consumption, especially in the current era of big data where digital computing is on track to overwhelm the grid. We have to rethink computing hardware, especially for AI and machine-learning tasks.”

To rethink this paradigm, Hersam and his team explored new advances in the physics of moiré patterns, a type of geometrical design that arises when two patterns are layered on top of one another. When two-dimensional materials are stacked, new properties emerge that do not exist in one layer alone. And when those layers are twisted to form a moiré pattern, unprecedented tunability of electronic properties becomes possible.

For the new device, the researchers combined two different types of atomically thin materials: bilayer graphene and hexagonal boron nitride. When stacked and purposefully twisted, the materials formed a moiré pattern. By rotating one layer relative to the other, the researchers could achieve different electronic properties in each graphene layer even though they are separated by only atomic-scale dimensions. With the right choice of twist, researchers harnessed moiré physics for neuromorphic functionality at room temperature.

“With twist as a new design parameter, the number of permutations is vast,” Hersam said. “Graphene and hexagonal boron nitride are very similar structurally but just different enough that you get exceptionally strong moiré effects.”

To test the transistor, Hersam and his team trained it to recognize similar — but not identical — patterns. Just earlier this month, Hersam introduced a new nanoelectronic device capable of analyzing and categorizing data in an energy-efficient manner, but his new synaptic transistor takes machine learning and AI one leap further.

“If AI is meant to mimic human thought, one of the lowest-level tasks would be to classify data, which is simply sorting into bins,” Hersam said. “Our goal is to advance AI technology in the direction of higher-level thinking. Real-world conditions are often more complicated than current AI algorithms can handle, so we tested our new devices under more complicated conditions to verify their advanced capabilities.”

First the researchers showed the device one pattern: 000 (three zeros in a row). Then, they asked the AI to identify similar patterns, such as 111 or 101. “If we trained it to detect 000 and then gave it 111 and 101, it knows 111 is more similar to 000 than 101,” Hersam explained. “000 and 111 are not exactly the same, but both are three digits in a row. Recognizing that similarity is a higher-level form of cognition known as associative learning.”
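To make the distinction concrete (an illustrative software analogy only, not a description of the transistor’s physics; the “uniform run” feature is a hypothetical choice for this example), note that a raw bit-by-bit comparison would actually call 101 the closer match to 000, while a higher-level feature such as “all digits identical” groups 111 with 000, which is the kind of associative judgement described above:

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def uniform(p: str) -> bool:
    """Higher-level feature: is the pattern a run of identical digits?"""
    return len(set(p)) == 1

trained, candidates = "000", ["111", "101"]

for c in candidates:
    print(c, "hamming distance:", hamming(trained, c),
          "shares 'uniform run' feature:", uniform(c) == uniform(trained))
# 111 differs from 000 in all three bits but shares the 'uniform run' feature;
# 101 differs in only two bits but does not.
```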

In experiments, the new synaptic transistor successfully recognized similar patterns, displaying its associative memory. Even when the researchers threw curveballs — like giving it incomplete patterns — it still successfully demonstrated associative learning.

“Current AI can be easy to confuse, which can cause major problems in certain contexts,” Hersam said. “Imagine if you are using a self-driving vehicle, and the weather conditions deteriorate. The vehicle might not be able to interpret the more complicated sensor data as well as a human driver could. But even when we gave our transistor imperfect input, it could still identify the correct response.”

The study, “Moiré synaptic transistor with room-temperature neuromorphic functionality,” was primarily supported by the National Science Foundation.

Evaluating the truthfulness of fake news through online searches increases the chances of believing misinformation

Surprising study results show limits of using recommended steps to debunk false content


Peer-Reviewed Publication

NEW YORK UNIVERSITY




Conventional wisdom suggests that searching online to evaluate the veracity of misinformation would reduce belief in it. But a new study by a team of researchers shows the opposite occurs: Searching to evaluate the truthfulness of false news articles actually increases the probability of believing misinformation.

The findings, which appear in the journal Nature, offer insights into the impact of search engines’ output on their users—a relatively under-studied area.

“Our study shows that the act of searching online to evaluate news increases belief in highly popular misinformation—and by notable amounts,” says Zeve Sanderson, founding executive director of New York University’s Center for Social Media and Politics (CSMaP) and one of the paper’s authors.

The reason for this outcome may be explained by search-engine outputs—in the study, the researchers found that this phenomenon is concentrated among individuals for whom search engines return lower-quality information.

“This points to the danger that ‘data voids’—areas of the information ecosystem that are dominated by low quality, or even outright false, news and information—may be playing a consequential role in the online search process, leading to low return of credible information or, more alarming, the appearance of non-credible information at the top of search results,” observes lead author Kevin Aslett, an assistant professor at the University of Central Florida and a faculty research affiliate at CSMaP. 

In the newly published Nature study, Aslett, Sanderson, and their colleagues studied the impact of using online search engines to evaluate false or misleading views—an approach encouraged by technology companies and government agencies, among others.

To do so, they recruited participants through both Qualtrics and Amazon’s Mechanical Turk—tools frequently used in running behavioral science studies—for a series of five experiments and with the aim of gauging the impact of a common behavior: searching online to evaluate news (SOTEN). 

The first four studies tested the following aspects of online search behavior and impact:

  • The effect of SOTEN on belief in both false or misleading and true news within two days of an article’s publication (false popular articles included stories on COVID-19 vaccines, the Trump impeachment proceedings, and climate events)
  • Whether the effect of SOTEN can change an individual’s evaluation after they had already assessed the veracity of a news story
  • The effect of SOTEN months after publication
  • The effect of SOTEN on recent news about a salient topic with significant news coverage—in the case of this study, news about the COVID-19 pandemic

A fifth study combined a survey with web-tracking data in order to identify the effect of exposure to both low- and high-quality search-engine results on belief in misinformation. By collecting search results using a custom web browser plug-in, the researchers could identify how the quality of these search results may affect users’ belief in the misinformation being evaluated.

The study’s source credibility ratings were determined by NewsGuard, a browser extension that rates news and other information sites in order to guide users in assessing the trustworthiness of the content they come across online. 

Across the five studies, the authors found that the act of searching online to evaluate news led to a statistically significant increase in belief in misinformation. This occurred whether it was shortly after the publication of misinformation or months later. This finding suggests that the passage of time—and ostensibly opportunities for fact checks to enter the information ecosystem—does not lessen the impact of SOTEN on increasing the likelihood of believing false news stories to be true. Moreover, the fifth study showed that this phenomenon is concentrated among individuals for whom search engines return lower-quality information.

“The findings highlight the need for media literacy programs to ground recommendations in empirically tested interventions, and for search engines to invest in solutions to the challenges identified by this research,” concludes Joshua A. Tucker, professor of politics and co-director of CSMaP, another of the paper’s authors.

The paper’s other authors included William Godel and Jonathan Nagler of NYU’s Center for Social Media and Politics, and Nathaniel Persily of Stanford Law School.

The study was supported by a grant from the National Science Foundation (2029610).

# # #
