Monday, February 13, 2023

Researchers use water treatment method to capture acids from agricultural waste

Peer-Reviewed Publication

PENN STATE

a new class of ion-exchange membrane wafer 

IMAGE: A PENN STATE-LED RESEARCH TEAM HAS INVENTED A NEW CLASS OF ION-EXCHANGE MEMBRANE WAFER ASSEMBLIES THAT SIGNIFICANTLY IMPROVES ELECTRODEIONIZATION’S ABILITY TO CAPTURE P-COUMARIC ACID FROM LIQUID MIXTURES. RESEARCHERS REDESIGNED THE RESIN WAFERS (SHOWN AT RIGHT) TO IMPROVE THE PROCESS. THE PAPER WAS SELECTED AS ACS SUSTAINABLE CHEMISTRY & ENGINEERING'S JANUARY 23 JOURNAL COVER, SHOWN ON THE COMPUTER SCREEN AT LEFT.

CREDIT: JEFF XU/PENN STATE

UNIVERSITY PARK, Pa. — Bound for the landfill, agricultural waste contains carbon sources that can be used to produce high-value compounds, such as p-coumaric acid, which is used in manufacturing pharmaceuticals. Electrodeionization, a separation method that uses ion-exchange membranes, is one way to capture the acids and other useful components. However, the method must be improved before it can capture large quantities at scale.

A Penn State-led research team has invented a new class of ion-exchange membrane wafer assemblies that significantly improves electrodeionization’s ability to capture p-coumaric acid from liquid mixtures while using less energy and saving money. The researchers published their results in ACS Sustainable Chemistry & Engineering. Their article also was selected for the journal’s Jan. 23 cover.

First commercialized to purify water, electrodeionization has been used to capture valuable components from waste streams in recent years. In the process, a liquid mixture stream is fed through a stack of several ion-exchange membranes and resin wafers, which resemble a sponge and are held together with a polymer adhesive. When electricity is applied, the ions in the liquid move through the stack, and p-coumaric acid separates into a concentrated process stream, where it can then be collected.

“To improve the process, we had to improve upon the resin wafer,” said corresponding author Chris Arges, Penn State associate professor of chemical engineering. “Previously, the membranes would sandwich the resin wafer sponge with a polyethylene adhesive, which is currently used in industry as resin ‘glue,’ but this led to poor contact between the membrane and resin wafer. We substituted the polyethylene with imidazolium ionomer, a type of polymer, and glued an imidazolium membrane on top of the resin wafer.”

By gluing the membrane to the wafer, the researchers reduced the amount of membrane needed by 30%, reducing the cost of the electrodeionization unit. The new design also reduced the interfacial resistance between the membrane and the wafer, as the same membrane and binder chemistries were glued together rather than sitting on top and below the sponge with air gaps. Reducing the resistance led to an increased rate of capturing p-coumaric acid, allowing researchers to use a smaller unit.

“We knew the new material was capturing more p-coumaric acid, but we were not sure why,” Arges said. “Our collaborator Revati Kumar ran simulations to find out why it worked better.”

Kumar, associate professor of chemistry at Louisiana State University, found the imidazolium increases the solubility of the p-coumaric acid and spurs faster diffusion within the material. 

“Multiplied together, solubility and diffusion equal permeability, or how fast we remove the acid as it travels across the membrane resin wafer network into the concentrate compartment,” Arges said. 
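The multiplication Arges describes can be sketched as a quick numerical check; every value below is an invented placeholder for illustration, not a measurement from the study:

```python
# Solution-diffusion relation quoted above: permeability (P) is the
# product of solubility (S) and diffusivity (D). All numbers here are
# invented placeholders, not values from the study.

def permeability(solubility: float, diffusivity: float) -> float:
    """P = S * D."""
    return solubility * diffusivity

baseline = permeability(solubility=1.0, diffusivity=1.0)
improved = permeability(solubility=2.0, diffusivity=3.0)  # both factors raised

# Because the two factors multiply, modest gains in each compound:
print(improved / baseline)  # 6.0
```

The point of the sketch is simply that improving solubility and diffusivity together multiplies, rather than adds, their effect on the overall transport rate.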

Arges compared permeability to the rate of travelers going through an airport security line. As more security checkpoints are added, more people can move through the line, increasing the line’s permeability. 

Increased permeability, therefore, decreases the chances of the p-coumaric acid binding to the membrane-resin wafer materials, known as fouling, instead of moving across the membrane. 

“The imidazolium membrane resin wafer assembly promotes the flow of p-coumaric acid through the membrane, which is a problem when other materials, like polyethylene, are used,” Arges said.

When benchmarked against the current resin wafer configuration, the new membrane configuration and materials result in a sevenfold increase in p-coumaric acid capture while using 70% less energy, according to researchers. The new assemblies also decrease the amount of membrane used in the process, resulting in significant cost savings.

Arges’ collaborators at Argonne National Laboratory filed for a patent for the novel membrane-wafer assembly technology. 

In addition to Arges and Kumar, the co-authors include Matthew Jordan, Hishara Keshani Gallage Dona and Dodangodage Ishara Senadheera, all of Louisiana State University; and Grzegorz Kokoszka and Yupo J. Lin, both of Argonne National Laboratory.

The United States Department of Energy supported this work.  

Recalls of fresh meat products may lower customer demand

Peer-Reviewed Publication

PENN STATE

UNIVERSITY PARK, Pa. — Fresh meats such as chicken and beef are staples of many Americans’ diets, but demand may take a hit after these products are recalled, according to new Penn State research.

The study, led by College of Agricultural Sciences researchers, found that both the number of recent recalls and the volume of food recalled have significant negative effects on the demand for fresh meat.

Additionally, the researchers found that large recalls caused by product contamination, recalls of products produced without the benefit of inspection or with an import violation, recalls demanded by government agencies, and Class I recalls are all more likely to cause a larger loss in meat demand.

Pei Zhou, doctoral candidate in energy, environmental and food economics and lead author of the study, said the results suggest different strategies companies can take to prevent food recalls, which then can benefit both consumers’ health and the companies.

“Government agencies and food companies could take action to prevent recalls by, for example, increasing the mandatory inspection of fresh meat products prior to food distribution,” Zhou said. “They could also reduce recall scales and respond quickly by developing standard regulations, guides and procedures for recalling — especially for frequently recalled foods.”

According to the researchers, food safety issues that threaten consumers’ health have increased in recent years. In 2011, the Centers for Disease Control and Prevention estimated that about 48 million Americans get sick, 128,000 are hospitalized and 3,000 die of foodborne diseases each year.

Zhou explained that while food recalls are important to help keep consumers safe, they also pose a significant financial blow to the company whose product is being recalled. For example, previous research estimated that the average cost of a food recall to a company is about $10 million.

The U.S. food safety system has two branches — with the recalls of most meat, poultry and some egg products being monitored by the U.S. Department of Agriculture and the recalls of other foods and beverages being supervised by the Food and Drug Administration. Because of that system, the researchers noted, not all consumers are exposed to the same types of information during a food recall.

“Consumers are exposed to various types of food recall information, and that information may change consumers’ perceived health risks and further affect their purchasing behaviors and food demand,” Zhou said. “We wanted first to explore the impact of food recall on consumer demand and then follow up by examining how consumers respond to the various types of recall information.”

For this study, the researchers focused on the fresh meat market — the largest U.S. agricultural sector, with meat production totaling 52 billion pounds and poultry production totaling 48 billion pounds in 2017.

The study examined information from the Nielsen Retail Scanner Data from 2012 to 2016, which covers more than half the total sales volume of U.S. grocery and drug stores and more than 30% of all U.S. mass merchandiser sales volume.

The researchers also pulled data on recalls from the USDA Food Safety and Inspection Service website, including the volume of food recalled, the specific products recalled, the classification of the recalls, what caused the recalls, how consumers learned about the recalls and any health consequences stemming from the recalled products.

After analyzing the data, the researchers found that while recalls overall lowered product demand, customers also responded to recalls differently, depending on the severity of the recall classification, what caused the recall and how customers learned about the recall.

For example, declines in demand were more than eight times greater after recalls initiated by government agencies than after recalls initiated by the manufacturers themselves.

The researchers said the study — recently published in the journal Food Policy — gives companies essential insight into the meat market and market strategies aimed at reducing the negative effect of food recalls on consumer demand.

“In general, reducing the number of food recalls by preventing food safety issues from occurring is the most straightforward way to avoid demand reduction,” Zhou said.

Yizao Liu, associate professor of agricultural economics, also contributed to this work.

New models shed light on life’s origin

The research reveals clues about the physical and chemical characteristics of Earth when life is thought to have emerged.

Peer-Reviewed Publication

UNIVERSITY OF ROCHESTER

The first signs of life emerged on Earth in the form of microbes about four billion years ago. While scientists are still determining exactly when and how these microbes appeared, it’s clear that the emergence of life is intricately intertwined with the chemical and physical characteristics of early Earth.

“It is reasonable to suspect that life could have started differently—or not at all—if the early chemical characteristics of our planet were different,” says Dustin Trail, an associate professor of earth and environmental sciences at the University of Rochester.

But what was Earth like billions of years ago, and what characteristics may have helped life to form? In a paper published in Science, Trail and Thomas McCollom, a research associate at the University of Colorado Boulder, reveal key information in the quest to find out. The research has important implications not only for discovering the origins of life but also in the search for life on other planets.

“We are now at an exciting time in which humankind is searching for life on other planets and moons, as well as in other planetary systems,” Trail says. “But we still do not know how—or even when, really—life started on our own planet. Research like ours helps identify specific conditions and chemical pathways that could have supported the emergence of life, work which is certain to factor prominently into the search for life outside of our planet.”

The importance of metals in the emergence of life

Research into life and its origins typically involves a variety of disciplines including genomics, the study of genes and their functions; proteomics, the study of proteins; and an emerging field called metallomics, which explores the important role of metals in performing cellular functions. As life evolved, the need for certain metals changed, but Trail and McCollom wanted to determine what metals may have been available when microbes first appeared billions of years ago.

“When hypotheses are proposed for different origin-of-life scenarios, scientists have generally assumed all metals were available because there weren’t studies that provided geologically robust constraints on metal concentrations of fluids for the earliest times of Earth’s history,” Trail says.

To address this shortcoming, Trail and McCollom studied the composition and characteristics of fluids in the lithosphere—the outer layer of Earth that includes the crust and upper mantle—billions of years ago. These lithospheric fluids are key pathways to transport dissolved parts of rocks and minerals between Earth’s interior and hydrothermal pools in its exterior where microbial life could have formed. While researchers cannot directly measure the metals that existed billions of years ago, by determining the properties of the fluids, they can infer what metals—and the concentrations of the metals—could feasibly have been transported between Earth’s interior and exterior during the time when life emerged on the planet.

Clues in billion-year-old minerals

Billion-year-old rocks and minerals are often the only direct sources of information about Earth’s earliest history. That’s because the rocks and minerals lock in information about the composition of Earth at the time they are formed.

The researchers conducted high-pressure, high-temperature experiments and applied these results to early-Earth zircons, a robust type of mineral collected at sites in Western Australia, to determine the oxygen pressure, chlorine content, and temperature of lithospheric fluids billions of years ago. They then input this information into computer models. The models allowed them to simulate the properties of the lithospheric fluids, and, in turn, simulate which metals could have travelled through the fluids to reach hydrothermal pools at Earth’s surface.

Understanding how life originated

The researchers were surprised by what the model simulations indicated. Many origin-of-life researchers, for instance, consider copper a likely component in the chemistry that could have led to life. But Trail and McCollom did not find evidence that copper would have been abundant under the constraints in their analysis.

One metal they did test that may have been available in high concentrations was manganese. While it is rarely considered in origin-of-life scenarios, today manganese helps the body form bones and assists enzymes in breaking down carbohydrates and cholesterol.

“Our research shows that metals like manganese may function as important links between the ‘solid’ Earth and emerging biological systems at Earth’s surface,” Trail says.

Trail says the research will help scientists studying the origin of life to input more concrete data into their experiments and models.

“Experiments designed with this information in mind will result in a better understanding of how life originated.”

Mechanical engineering meets electromagnetics to enable future technology

Researchers create compliant mechanism-enabled, reconfigurable antenna

Peer-Reviewed Publication

PENN STATE

antenna prototype 

IMAGE: RESEARCHERS ILLUSTRATED AND DESIGNED A CIRCULAR, IRIS-SHAPED PATCH ANTENNA PROTOTYPE USING COMMERCIAL ELECTROMAGNETIC SIMULATION SOFTWARE. THOUGH THE PROTOTYPE IS ONLY SLIGHTLY LARGER THAN A HUMAN PALM, THE TECHNOLOGY CAN BE SCALED TO THE INTEGRATED CIRCUIT LEVEL FOR HIGHER FREQUENCIES OR INCREASED IN SIZE FOR LOWER FREQUENCY APPLICATIONS, ACCORDING TO RESEARCHERS.

CREDIT: JEFF XU/PENN STATE

UNIVERSITY PARK, Pa. — Reconfigurable antennas — those that can tune properties like frequency or radiation beams in real time, from afar — are integral to future communication network systems, like 6G. But many current reconfigurable antenna designs fall short: they fail in high or low temperatures, have power limitations or require regular servicing.  

To address these limitations, electrical engineers in the Penn State College of Engineering combined electromagnets with a compliant mechanism, which is the same mechanical engineering concept behind binder clips or a bow and arrow. They published their proof-of-concept reconfigurable compliant mechanism-enabled patch antenna today (Feb. 13) in Nature Communications.

“Compliant mechanisms are engineering designs that incorporate elements of the materials themselves to create motion when force is applied, instead of traditional rigid body mechanisms that require hinges for motion,” said corresponding author Galestan Mackertich-Sengerdy, who is both a doctoral student and a full-time researcher in the college’s School of Electrical Engineering and Computer Science (EECS). “Compliant mechanism-enabled objects are engineered to bend repeatedly in a certain direction and to withstand harsh environments.”

When applied to a reconfigurable antenna, the compliant mechanism-enabled arms bend in a predictable way, which in turn changes the antenna's operating frequencies — without the use of hinges or bearings.  

“Just like a chameleon triggers the tiny bumps on its skin to move, which changes its color, a reconfigurable antenna can change its frequency from low to high and back, just by configuring its mechanical properties, enabled by the compliant mechanism,” said co-author Sawyer Campbell, associate research professor in EECS. 

The compliant mechanism-enabled designs supersede existing origami-based designs, named after the Japanese art of paper folding, which are reconfigurable but lack the same robustness, long-term reliability and high-power handling capability.

“Origami antenna designs are known for their compact folding and storage capabilities that can then be deployed later on in the application,” Mackertich-Sengerdy said. “But once these origami folded structures are deployed, they usually need a complex stiffening structure, so that they don’t warp or bend. If not carefully designed, these types of devices would suffer environmental and operational lifetime limitations in the field.” 

The team illustrated and designed a circular, iris-shaped patch antenna prototype using commercial electromagnetic simulation software. They then 3D printed it and tested it for fatigue failures as well as frequency and radiation pattern fidelity in Penn State’s anechoic chamber, a room insulated with electromagnetic wave-absorbing material that prevents signals from interfering with antenna testing. 

Though the prototype — designed to target a specific frequency for demonstration — is only slightly larger than a human palm, the technology can be scaled to the integrated circuit level for higher frequencies or increased in size for lower frequency applications, according to researchers.  

Compliant mechanism research has increased in popularity due to the rise of 3D printing, according to the researchers, which enables endless design variations. It was Mackertich-Sengerdy’s background in mechanical engineering that gave him the idea to apply this specific class of compliant mechanisms to electromagnetics. 

“The paper introduces compliant mechanisms as a new design paradigm for the entire electromagnetics community, and we anticipate it growing,” said co-author Douglas Werner, John L. and Genevieve H. McCain Chair Professor of EECS. “It could be the branching off point for an entirely new field of designs with exciting applications we haven’t dreamed of yet.”

The Penn State College of Engineering’s John L. and Genevieve H. McCain endowed chair professorship supported this work.  

Cinema has helped ‘entrench’ gender inequality in AI

Peer-Reviewed Publication

UNIVERSITY OF CAMBRIDGE

  • Study finds that just 8% of all depictions of AI professionals from a century of film are women – and half of these are shown as subordinate to men.
  • Cinema promotes AI as the product of lone male geniuses with god complexes, say researchers.
  • Cultural perceptions influence career choices and recruitment, they argue, with the AI industry suffering from severe gender imbalance, risking development of discriminatory technology. 

Cinematic depictions of the scientists behind artificial intelligence over the last century are so heavily skewed towards men that a dangerous “cultural stereotype” has been established – one that may contribute to the shortage of women now working in AI development.

Researchers from the University of Cambridge argue that such cultural tropes and a lack of female representation affects career aspirations and sector recruitment. Without enough women building AI there is a high risk of gender bias seeping into the algorithms set to define the future, they say.

The team from the University’s Leverhulme Centre for the Future of Intelligence (LCFI) whittled down over 1,400 films to the 142 most influential cinematic works featuring artificial intelligence between 1920 and 2020, and identified 116 characters they classed as “AI professionals”.

Of these, 92% of the AI scientists and engineers on screen were men; the women comprised a total of eight scientists and one CEO. That share of men is even higher than in the current AI workforce (78%). 
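The headline percentages follow directly from the corpus counts reported above, as a quick arithmetic check shows:

```python
# Arithmetic check of the corpus figures reported above:
# 116 on-screen AI professionals, of whom the women number
# eight scientists/engineers plus one CEO.
professionals = 116
women = 8 + 1

pct_women = 100 * women / professionals
pct_men = 100 * (professionals - women) / professionals

print(round(pct_women), round(pct_men))  # 8 92
```

Rounding 9/116 and 107/116 reproduces the study's "8% women" and "92% men" figures.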

Researchers argue that films such as Iron Man and Ex Machina promote cultural perceptions of AI as the product of lone male geniuses.

Of the meagre eight female AI scientists to come out of 100 years of cinema, four were still depicted as inferior or subservient to men. The first major film to put a female AI creator on screen did not come until the 1997 comedy Austin Powers: International Man of Mystery, with the over-the-top Frau Farbissina and her ‘Fembots’.

This dearth of on-screen depictions may be linked to a lack of women behind the camera. Depending on how the directors’ gender is counted, not a single influential film with an AI plotline was directed solely by a woman.* The study is published in the journal Public Understanding of Science, with an accompanying report released on the LCFI website.

“Gender inequality in the AI industry is systemic and pervasive,” said co-author Dr Kanta Dihal from LCFI at Cambridge. “Mainstream films are an enormously influential source and amplifier of the cultural stereotypes that help dictate who is suited to a career in AI.”

“Our cinematic stock-take shows that women are grossly underrepresented as AI scientists on screen. We need to be careful that these cultural stereotypes do not become a self-fulfilling prophecy as we enter the age of artificial intelligence.”

The researchers found that a third (37 individuals) of cinema’s AI scientists are presented as “geniuses” – and of these, just one is a woman. In fact, 14% of all AI professionals on film are portrayed as former child prodigies of some kind.

The LCFI team point to previous research showing that people across age groups associate exceptional intellectual ability with men – the “brilliance bias” – and argue that the stereotype of AI scientists as genius visionaries helps “entrench” beliefs that women are not suited for AI-related careers.

“Genius is not a neutral concept,” said co-author Dr Stephen Cave, director of LCFI. “Genius is an idea based in gendered and racialised notions of intelligence, historically shaped by a white male elite. Some influential technologists, such as Elon Musk, have deliberately cultivated ‘genius’ personas that are explicitly based on cinematic characters such as Iron Man.”

Dihal and Cave, along with their LCFI colleagues – and hosts of the Good Robot podcast – Dr Eleanor Drage and Dr Kerry McInerney, also catalogue the way in which cinema’s male scientists create human-like AI as a form of emotional compensation.

Some 22% of the male AI scientists or engineers throughout cinematic history create human-like AI to “fulfil their desires”: replacing lost loved ones, building ideal lovers, or creating AI copies of themselves.

“Cinema has long used narratives of artificial intelligence to perpetuate male fantasies, whether it’s the womb envy of a lone genius creating in his own image, or the god complex of returning the dead to life or constructing obedient women,” said LCFI co-author Dr Kerry McInerney.

All this is further exacerbated by the overwhelmingly “male milieu” of many AI movies, argue researchers – with AI often shown as a product of male-dominated corporations or the military.

The LCFI team argue that the current state of female representation in the AI industry is grim. Globally, only 22% of AI professionals are women (compared to 39% across all STEM** fields). Over 80% of all AI professors are men, with women comprising just 12% of authors at AI conferences.

“Women are often confined to lower-paid, lower-status roles such as software quality assurance, rather than prestigious sub-fields such as machine learning,” said LCFI co-author Dr Eleanor Drage.

“This is not just about inequality in one industry. The marginalisation of women could contribute to AI products that actively discriminate against women – as we have seen with past technologies. Given that science fiction shapes reality, this imbalance has the potential to be dangerous as well as unfair.”

While some may question whether on-screen representation truly influences the real world, the LCFI team point to research showing that nearly two-thirds (63%) of women in STEM say that Dr Dana Scully, the scientist protagonist on legendary TV show The X Files, served as an early role model.***   

The eight female AI scientists and engineers (and one CEO) from a century of cinema:

  • Quintessa, the female alien in Transformers: the Last Knight (2017)
  • Shuri in Avengers: Infinity War (2018)
  • Evelyn Caster in Transcendence (2014)
  • Ava in The Machine (2013)
  • Dr Brenda Bradford in Inspector Gadget (1999)
  • Dr Susan Calvin in I, Robot (2004)
  • Dr Dahlin in Ghost in the Shell (2017)
  • Frau Farbissina in Austin Powers: International Man of Mystery (1997)
  • Smiler, a female emoji in The Emoji Movie (2017)

Notes:

* Of the 142 influential AI films in the corpus, one, Captain Marvel, was co-directed by a man and a woman (Ryan Fleck and Anna Boden). Four were directed by the Wachowskis, who are transgender women. However, when the first three films in the corpus were made (the Matrix films, 1999-2003), both Wachowski siblings presented as male, and when Jupiter Ascending was made (2015), Lilly presented as male while Lana presented as female.

** STEM stands for Science, Technology, Engineering and Mathematics.

*** 21st Century Fox, Geena Davis Institute on Gender in Media, and J. Walter Thompson Intelligence (2018) The Scully Effect: I Want to Believe ... in STEM. Geena Davis Institute on Gender in Media. Available at: https://seejane.org/research-informs-empowers/the-scully-effect-i-want-to-believe-in-stem/

ChatGPT can (almost) pass the US Medical Licensing Exam

The AI software was able to achieve passing scores for the exam, which usually requires years of medical training

Peer-Reviewed Publication

PLOS

ChatGPT can score at or around the approximately 60 percent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that make coherent, internal sense and contain frequent insights, according to a study published February 9, 2023 in the open-access journal PLOS Digital Health by Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth.

ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet. Instead, it generates text using word relationships predicted by its internal processes.

Kung and colleagues tested ChatGPT’s performance on the USMLE, a highly standardized and regulated series of three exams (Steps 1, 2CK, and 3) required for medical licensure in the United States. Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry, to diagnostic reasoning, to bioethics.

After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release. 

After indeterminate responses were removed, ChatGPT scored between 52.4% and 75.0% across the three USMLE exams. The passing threshold each year is approximately 60%. ChatGPT also demonstrated 94.6% concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9% of its responses. Notably, ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8% on an older dataset of USMLE-style questions.
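The scoring procedure described above — drop indeterminate responses, then compute accuracy over what remains — can be sketched with made-up data (the responses below are invented, not from the study):

```python
# Sketch of the grading step described above, using invented data.
# Each response is "correct", "incorrect", or "indeterminate".
responses = ["correct", "indeterminate", "correct", "incorrect", "correct"]

# Indeterminate responses are removed before scoring.
scored = [r for r in responses if r != "indeterminate"]

# Accuracy is then computed over the remaining, determinate answers.
accuracy = scored.count("correct") / len(scored)
print(f"{accuracy:.1%}")  # 75.0% on this toy sample
```

Filtering before scoring means the denominator shrinks, which is why the reported range (52.4% to 75.0%) applies only to determinate answers.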

While the relatively small input size restricted the depth and range of analyses, the authors note their findings provide a glimpse of ChatGPT’s potential to enhance medical education, and eventually, clinical practice. For example, they add, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.

“Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation,” say the authors.

Author Dr Tiffany Kung added that ChatGPT's role in this research went beyond being the study subject: "ChatGPT contributed substantially to the writing of [our] manuscript... We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress...All of the co-authors valued ChatGPT's input."

############

In your coverage, please use this URL to provide access to the freely available article in PLOS Digital Health: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000198

Citation: Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. (2023) Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health 2(2): e0000198. https://doi.org/10.1371/journal.pdig.0000198

Author Countries: USA

Funding: The authors received no specific funding for this work.

Mushrooms magnify memory by boosting nerve growth

Researchers from The University of Queensland have discovered the active compound from an edible mushroom that boosts nerve growth and enhances memory

Peer-Reviewed Publication

UNIVERSITY OF QUEENSLAND

Lion's mane mushroom 

IMAGE: RESEARCHERS FOUND LION'S MANE MUSHROOM IMPROVED BRAIN CELL GROWTH AND MEMORY IN PRE-CLINICAL TRIALS.

CREDIT: UQ

Researchers from The University of Queensland have discovered the active compound from an edible mushroom that boosts nerve growth and enhances memory.

Professor Frederic Meunier from the Queensland Brain Institute said the team had identified new active compounds from the mushroom, Hericium erinaceus.

“Extracts from these so-called ‘lion’s mane’ mushrooms have been used in traditional medicine in Asian countries for centuries, but we wanted to scientifically determine their potential effect on brain cells,” Professor Meunier said.

“Pre-clinical testing found the lion’s mane mushroom had a significant impact on the growth of brain cells and improving memory.

“Laboratory tests measured the neurotrophic effects of compounds isolated from Hericium erinaceus on cultured brain cells, and surprisingly we found that the active compounds promote neuron projections, extending and connecting to other neurons.

“Using super-resolution microscopy, we found the mushroom extract and its active components largely increase the size of growth cones, which are particularly important for brain cells to sense their environment and establish new connections with other neurons in the brain.”

Co-author, UQ’s Dr Ramon Martinez-Marmol said the discovery had applications that could treat and protect against neurodegenerative cognitive disorders such as Alzheimer’s disease.

“Our idea was to identify bioactive compounds from natural sources that could reach the brain and regulate the growth of neurons, resulting in improved memory formation,” Dr Martinez-Marmol said.

Dr Dae Hee Lee from CNGBio Co, which has supported and collaborated on the research project, said the properties of lion’s mane mushrooms had been used to treat ailments and maintain health in traditional Chinese medicine since antiquity.

“This important research is unravelling the molecular mechanism of lion’s mane mushroom compounds and their effects on brain function, particularly memory,” Dr Lee said.

The study was published in the Journal of Neurochemistry.

UQ acknowledges the collaborative efforts of researchers from the Republic of Korea’s Gachon University and Chungbuk National University.

Researchers find substantial portion of U.S. public potentially interested in using genetic technologies to enhance offspring education

Survey conducted by team of researchers from Geisinger, University of Southern California, UCLA, National Bureau of Economic Research, and Harvard University also reports attitudes toward gene editing and SAT prep courses

Peer-Reviewed Publication

GEISINGER HEALTH SYSTEM

An article published today in the journal Science indicates that a substantial proportion of Americans are willing to use an essentially unregulated reproductive genetic technology to increase the chances of having a baby who is someday admitted to a top-100 ranked college.

Survey respondents with college degrees, as well as those under 35 years of age — prime child-bearing age — were more willing to use polygenic embryo screening in conjunction with in vitro fertilization (IVF) to do so, the study found.

Polygenic indexes (also called polygenic risk scores) can provide an estimate of disease risk — or other traits — based on an individual’s genes.  Private companies working with IVF clinics offer the service to patients who can select an embryo with a lower chance of developing diabetes, cancer, heart disease, inflammatory bowel disease, Alzheimer’s disease or schizophrenia as an adult.

Some patients have also reported uploading their embryos’ genomic data to online platforms that make predictions about non-medical traits, and the founder of one such company has not ruled out offering to screen for non-medical traits.

Noting how quickly new technologies can spread, researchers wanted to gauge public attitudes toward reproductive technologies and whether their willingness to use them was influenced by what others do.

Using a large, nationally representative sample, researchers asked respondents how likely they were to use polygenic screening, CRISPR-style gene editing, or standard SAT prep course training to increase the odds of their child getting into a top-100 ranked college, assuming that they were already using IVF and that all options were free and safe.

A majority of people (68%) said they were more likely than not to use SAT prep; substantial minorities were more likely than not to use gene editing (28%) and polygenic screening (38%) for this purpose. And people who were told that most people in a position to use each service choose to do so were more likely to say that they, too, would use it, suggesting the potential for a modest “bandwagon effect.”

These results suggest substantial—and likely growing—interest in using genetic technologies to try to influence offspring traits and outcomes, including to “enhance” social and behavioral outcomes like educational attainment.

The researchers argue that the time for a national conversation about possible regulation of polygenic embryo screening is now. They note that their survey about a complex technology that is only briefly described is not a substitute for the considered judgments that should emerge from such a sustained national conversation about the expected outcomes and risks of screening embryos for polygenic traits. It is not clear, for instance, whether the same people would still want to use that service if they were more fully informed about it.

Previous research by some of the same authors, published in the New England Journal of Medicine, described the limitations of polygenic screening, warning that patients and even IVF clinicians may form the mistaken impression that the technology is more effective and less risky than it is.

“Polygenic indexes are already only weak predictors for most individual adult outcomes, especially for social and behavioral traits, and there are several factors that lower their predictive power even more in the context of embryo selection,” said senior author Patrick Turley, Ph.D., assistant research professor of economics at the USC Dornsife College of Letters, Arts and Sciences. “Polygenic indexes are designed to work in a different setting than an IVF clinic. These weak predictors will perform even worse when used to select embryos.”

Assessments of the predictive power of polygenic indexes typically assume very similar environments for the generation from which the genetic information was collected and the generation born as a result of polygenic screening. An embryo selected via this technology may face a very different environment as an adult, which may lower predictive power. In addition, because biobanks disproportionately enroll people with predominantly European genetic ancestries, most of today’s polygenic indexes are less predictive for people of other genetic ancestries.

“There is—rightly—a lot of concern among scholars, including us, that companies and IVF clinics that use polygenic embryo screening could intentionally or unintentionally exaggerate its likely impact,” said Michelle N. Meyer, Ph.D., J.D., associate professor and chair of the Department of Bioethics and Decision Sciences at Geisinger and first author of the article. “But in this study, we stipulated a realistic effect—that each service would increase the odds of having a child who attends a top-100 college by 2 percentage points, from 3% to 5% odds—and lots of people are still interested.”

This research was supported by the National Institutes of Health, Open Philanthropy, and the Pershing Square Fund for Research on the Foundations of Human Behavior.

The authors are Michelle N. Meyer, Ph.D., J.D., Geisinger; Tammy Tan, National Bureau of Economic Research; Daniel J. Benjamin, Ph.D., UCLA; David Laibson, Ph.D., Harvard University; and Patrick Turley, Ph.D., USC Dornsife College of Letters, Arts and Sciences.