Tuesday, March 31, 2026

 

Choosing embryos based on genetic predictions raises new ethical and legal concerns



A new global review shows countries taking very different approaches to regulating polygenic embryo testing




Hokkaido University

A human embryo obtained through in vitro fertilization (IVF).

Credit: Dr. Yasuyuki Mio, MIO Fertility Clinic





For more than four decades, in vitro fertilization (IVF) has helped families have children. Scientists estimate that more than 10 million people worldwide have been born through IVF and related assisted reproductive technologies, according to the International Committee for Monitoring Assisted Reproductive Technologies.

As part of these procedures, prospective parents may choose to genetically test embryos before implantation in the uterus. This process, known as preimplantation genetic testing (PGT), was originally developed to identify serious inherited diseases caused by mutations in a single gene, such as cystic fibrosis or haemophilia, and to prevent them from being passed on to the next generation. However, advances in technology have significantly expanded the potential scope of preimplantation genetic testing.

In a new article published in Frontiers in Reproductive Health, Professor Tetsuya Ishii of Hokkaido University examines the emerging use of genomic testing to predict complex traits in embryos, such as intelligence or the risk of developing conditions like diabetes, heart disease, or Alzheimer’s disease later in life. As embryo testing moves beyond disease prevention toward the prediction of complex human social characteristics, Ishii argues that stronger oversight and clearer regulations are needed.

Many traits, such as intelligence and physical appearance, as well as the likelihood of developing common diseases like schizophrenia and cancer, are shaped by both our genes and our environment. Unlike monogenic diseases, which result from a mutation in a single gene, polygenic diseases and traits arise from the combined effects of many genes, each contributing subtly, alongside lifestyle and environmental factors.

In recent years, scientists have identified numerous genetic variants linked to these complex traits and can combine them into a single polygenic score, a statistical estimate of an individual’s genetic predisposition toward a particular trait or condition. With this advancement, preimplantation genetic testing has expanded from screening for monogenic diseases (PGT-M) to assessing polygenic conditions (PGT-PS), which often emerge later in life.
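To illustrate the arithmetic behind such scores, the sketch below shows how a polygenic score is commonly computed as a weighted sum of an individual's risk-allele counts. The variant identifiers, effect sizes, and genotype are invented for illustration and are not drawn from the study.

```python
# Illustrative sketch (not drawn from the study): a polygenic score is
# typically computed as a weighted sum of an individual's risk-allele counts,
# with the weights (effect sizes) estimated in large genome-wide studies.

# Hypothetical variants and per-allele effect sizes.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# Hypothetical embryo genotype: number of risk alleles (0, 1, or 2) per variant.
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def polygenic_score(genotype, effect_sizes):
    """Weighted sum of risk-allele counts; environmental factors are not modelled."""
    return sum(weight * genotype.get(variant, 0)
               for variant, weight in effect_sizes.items())

print(polygenic_score(genotype, effect_sizes))  # 0.54 in this toy example
```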

“However, predicting complex traits remains highly uncertain. Polygenic scores attempt to predict these complex traits using only an embryo’s genetic variants and data from large genetic studies, without accounting for environmental influences,” explains Ishii.

Countries around the world regulate polygenic embryo testing in very different ways. In the United States, polygenic embryo screening has been commercially available since 2019, and some fertility clinics routinely offer it to prospective parents who wish to select embryos. Surveys suggest that many Americans support using polygenic scores to reduce disease risk, and some are also open to using them for non-medical traits.

In contrast, several European countries have adopted stricter limits. Germany and Italy allow embryo testing only to prevent serious genetic diseases, while the United Kingdom currently does not permit the use of polygenic scores for embryo selection.

In many other countries, however, clear regulations have yet to be established. Without explicit rules, the use of polygenic scores in embryo selection could expand even as scientists continue to debate their clinical value.

The technology has also raised several ethical concerns. Prospective parents could develop unrealistic expectations about their future children based on genetic predictions that remain uncertain. “Because of environmental influences, parental behavior, the child’s autonomy, and many other factors, the use of polygenic scores cannot guarantee that a child will develop the predicted trait,” says Ishii.

Then there are broader societal concerns, including the potential stigmatization of certain traits, the risk of viewing children as products designed to meet parental expectations, and fears that the technology could revive ideas associated with eugenics.

The underlying challenge is the growing gap between expert opinion and public attitudes. While many physicians and geneticists remain cautious about using polygenic scores for embryo selection, surveys suggest that some prospective parents are more receptive to the technology.

Because polygenic embryo testing remains a rapidly developing field, Ishii argues that policymakers should adopt precautionary regulations while improving public understanding of what genetic predictions can and cannot reliably reveal. Clear guidelines, he suggests, will be essential as reproductive technologies continue to advance.


A schematic illustration of how preimplantation genetic testing after IVF allows parents to select embryos with desired traits.

Credit: Tetsuya Ishii


 

University of Tartu researchers discovered a new gene causing fetal developmental anomalies




Estonian Research Council
Laura Kasak, Lecturer of Human Genetics at the University of Tartu, investigated a rare family case that led to the discovery of a gene linked to fetal developmental anomalies.

Credit: Lilian Mõttus





The Human Genetics Research Group of the University of Tartu Faculty of Medicine has identified a gene whose defect may cause congenital heart malformations in the fetus. The MGRN1 gene has not previously been associated with early human development or with any disease. The discovery will help doctors better recognise similar cases in the future and improve the counselling and treatment offered to affected families. 

The finding emerged during the genetic investigation of one Estonian family. The family had experienced two pregnancies that were terminated after ultrasound scans revealed severe structural anomalies in the fetuses. Pathological examination in both cases demonstrated congenital heart malformations, and in one fetus, abnormal positioning of internal organs.  

At the same time, the family already had two healthy children. This unusual pattern is what brought the family’s genetic material to the laboratory of Laura Kasak, Lecturer of Human Genetics at the University of Tartu.

“In clinical practice, the fetuses and the parents had undergone genetic testing, but nothing unusual had been detected. This is not surprising, as the tests focused on known associations,” Kasak explained. “So we began searching for the cause of the anomalies elsewhere.” 

Because both parents were healthy yet the fetuses had similar developmental anomalies, Kasak suspected an autosomal recessive genetic condition. This means that the disorder manifests only when the child inherits a pathogenic variant from both parents. 
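A minimal sketch of this inheritance pattern, assuming one normal and one pathogenic allele in each healthy carrier parent, shows why one in four pregnancies is expected to be affected:

```python
# Minimal sketch of autosomal recessive inheritance: each healthy carrier
# parent has one normal allele ("A") and one pathogenic allele ("a"); a child
# is affected only if it inherits "a" from both parents.
from itertools import product

parent1 = ["A", "a"]
parent2 = ["A", "a"]

outcomes = list(product(parent1, parent2))  # one allele from each parent
affected = sum(1 for pair in outcomes if pair == ("a", "a"))

print(f"Expected affected fraction: {affected}/{len(outcomes)}")  # 1/4
```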

The analysis confirmed that both parents carry a rare recessive variant in the MGRN1 gene, which has not been studied in depth because it has never been linked to any specific human disorder. “It is even rarer for two carriers of the same variant to meet,” Kasak noted.  

The researchers’ hypothesis was further supported by genetic testing of the healthy children in the family: none of them carried two defective copies of the gene. Additional confirmation came from mouse-model studies, which showed that a defective MGRN1 gene leads to similar malformations and pregnancy loss in rodents. 

The discovery elicited mixed feelings among the researchers. “On one hand, it is very difficult emotionally, because you feel for the family and think how unfair such a chance event is. On the other hand, you are grateful for the scientific discovery, which may contribute to the early identification of similar situations in the future,” Kasak reflected.  

According to Kasak, further investigation of the gene and additional experiments are needed. The first priority, however, is to share the discovery with the scientific community so that others can recognise the association and expand on the opportunities it offers. 

The article describing the new finding, “MGRN1 is linked to recessive heart and laterality defects: the first genotype–phenotype report in humans,” was published in the Journal of Medical Genetics.

 

The Earth formed from local building blocks



ETH Zurich

This is roughly what the formation of the Earth in our solar system might have looked like. The birth of two planets (light brown dots) in a protoplanetary disc around the young star WISPIT 2.

Credit: ESO / Lawlor C et al.





Planetary scientists have long debated where the material that formed our Earth comes from. Despite Earth’s location in the inner Solar System, many have considered it likely that 6–40 per cent of this material came from the outer Solar System, i.e., beyond Jupiter.

For a long time, material from the outer Solar System was considered necessary to bring volatile components such as water to Earth. Accordingly, there must also have been an exchange of material between the outer and inner Solar Systems during the formation of the Earth. But is that really true? 

“We were truly astonished” 

Planetary scientists Paolo Sossi and Dan Bower, from ETH Zurich, compared existing data on the isotopic ratios of a wide range of meteorites, including those from Mars and the asteroid Vesta, with those of Earth. Isotopes are sibling atoms of the same element (same number of protons) that have a different mass (different number of neutrons).

The researchers analysed this data in a new way and arrived at a surprising conclusion: the material that makes up Earth originates entirely from the inner region of the Solar System. 

Material from the outer Solar System, by contrast, is likely to account for less than two per cent of Earth’s mass, or even nothing at all. The corresponding study has just been published in the journal Nature Astronomy. 

“Our calculations make it clear: the building material of the Earth originates from a single material reservoir,” says Sossi. His colleague Bower adds: “We were truly astonished to find that the Earth is composed entirely of material from the inner Solar System distinct from any combination of existing meteorites.” 

For their study, the ETH researchers used existing data on ten different isotopic systems from meteorites, and analysed them using a specialised statistical method. Previous studies have mostly considered only two isotopic systems. 
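The paper’s exact statistical procedure is not described here, but the general kind of calculation can be illustrated with a minimal two-endmember mixing estimate across several isotopic systems at once; the numbers below are invented for illustration and are not the authors’ data or method.

```python
# Illustrative two-endmember mixing estimate (invented numbers, not the
# authors' data or method): estimate the mass fraction f of outer Solar
# System (carbonaceous) material from isotope anomalies measured in several
# systems at once, assuming earth ~ f * outer + (1 - f) * inner.
import numpy as np

inner = np.array([0.10, -0.20, 0.05])   # non-carbonaceous endmember (3 systems)
outer = np.array([1.20, 0.90, -0.60])   # carbonaceous endmember
earth = np.array([0.11, -0.19, 0.04])   # measured planetary values

# Least-squares solution for f across all systems simultaneously.
delta_planet = earth - inner
delta_mix = outer - inner
f = np.dot(delta_planet, delta_mix) / np.dot(delta_mix, delta_mix)

print(f"Estimated outer Solar System fraction: {f:.3f}")  # close to zero here
```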

“Our studies are actually data science experiments,” says Sossi. “We carried out statistical calculations that are rarely used in geochemistry, even though they are a powerful tool.” 

Isotope signature reveals origin 

Isotopes in meteorites have long been used by researchers to determine the origin of celestial bodies, i.e. which part of the Solar System they come from. Historically, however, only the various isotopes of the element oxygen could be used to determine their provenance. 

It was not until the early 2010s that an American researcher discovered that other isotopes, such as those of chromium and titanium, could also be used for this purpose. This has enabled researchers to classify meteorites into two categories: non-carbonaceous ones, which form exclusively in the inner Solar System, and carbonaceous ones, which contain more water and carbon and originate in the outer Solar System. 

The new analysis reveals that the Earth is composed entirely of non-carbonaceous material. No evidence was found for the previously suspected exchange between the outer and inner Solar System reservoirs. 

The Earth therefore grew within a relatively static system, incorporating smaller neighbouring bodies as it did so. This also implies that most volatile elements, such as water, must have already been present in the inner Solar System. 

Jupiter acts as a material barrier 

But why are there two distinct material reservoirs in our Solar System? Researchers assume that our Solar System split into two reservoirs during its formation due to Jupiter’s rapid growth and size. The gravity of the gas giant tore a gap in the protoplanetary disc orbiting the young Sun. These discs are ring-shaped and consist of gas and dust; they are the birthplace of planets. Jupiter prevented material from the outer solar system from entering the inner region. However, the extent to which this barrier was permeable remained unclear until now. 

In their new analysis, the two ETH researchers demonstrate that almost no material from beyond Jupiter flowed towards Earth. “Our calculations are very robust and rely solely on the data itself, not on physical assumptions, as these are not yet fully understood,” Bower emphasises. The analysis also shows that Earth's material composition is similar to that of Vesta and Mars.  

The researchers also suspect that Venus and Mercury follow the same compositional trend. “Based on our analysis, we can theoretically predict the composition of these two planets,” says Paolo Sossi. However, he cannot verify this analytically, as no rock samples from Mercury and Venus, the two innermost planets in the Solar System, are currently available to the researchers. 

New light on the formation history 

“Our results shed new light on the formation history of our Earth and the other rocky planets,” says Sossi. 

Sossi and his team intend to follow up by investigating why there was sufficient water in the hot, inner Solar System to form the Earth’s oceans. Furthermore, they will examine whether these processes can be applied to exoplanetary systems.

“Until then, however, Dan and I will have to engage in many heated debates about the material composition of Earth and its neighbouring planets, because the scientific discourse over the building blocks of Earth is far from over, despite the new findings,” says Sossi.

Reference

Sossi PA, Bower DJ. Homogeneous accretion of the Earth in the inner Solar System, Nature Astronomy, 27 March 2026, DOI: 10.1038/s41550-026-02824-7

 

Shining Light on Lunar Darkness: The Network That Could End the Moon’s Power Cut





Higher Education Press
(a) A terrain-aware multi-site high-efficiency laser power beaming network on the lunar surface. (b) Distribution of received power for lunar mobile explorers before and after terrain-aware optimisation.

Credit: HIGHER EDUCATION PRESS





Harbin Institute of Technology researchers propose a new terrain-aware framework for jointly optimising coverage, connectivity, and cost, enabling the first system-level design of laser power-beaming networks for extreme exploration tasks in the Moon’s permanently shadowed regions

The Moon’s polar regions present one of the most alluring yet forbidding frontiers in human space exploration. Within the deep craters of the lunar south pole lie permanently shadowed regions (PSRs)—areas that have not seen sunlight for billions of years and which harbour valuable water ice deposits that could support future lunar bases. However, these same regions exist in perpetual darkness, with temperatures plunging below -230°C, making them inaccessible to traditional solar-powered equipment. While space agencies and commercial entities have proposed solutions ranging from fission reactors to orbital power stations, a fundamental question has remained unanswered: how can we design a practical, cost-effective energy delivery system that reliably powers exploration activities in these sun-forbidden zones?

A study published in Planet (Volume 2, Issue 1) by Professor Lifang Li and Pengzhen Guo’s team at the Harbin Institute of Technology offers a systematic research approach to this challenge. Their paper, titled “Optimal laser power beaming network for powering Lunar permanently shadowed regions: a coverage–connectivity–cost trade-off,” introduces a sophisticated terrain-aware network optimisation framework that advances laser power beaming from traditional single-link analysis to multi-station, system-level optimisation, offering a new perspective for future lunar energy infrastructure deployment. The work arrives at a critical juncture when multiple spacefaring nations are racing to establish a sustainable presence on the Moon, with NASA’s Artemis programme, China’s International Lunar Research Station, and various commercial ventures all targeting the south pole for permanent outposts.

The fundamental challenge of lunar polar exploration lies in its paradoxical energy geography. The crater rims receive nearly continuous sunlight, making them ideal locations for solar energy harvesting and power deployment, yet the scientifically valuable crater floors—where water ice accumulates—remain in permanent darkness. Previous technical efforts have largely been limited to terrain-constrained point-to-point transmission links. Researchers have demonstrated laser power transmission over terrestrial distances, developed efficient photovoltaic converters for laser light, and proposed orbital power relay constellations. What has been lacking is a systems-level understanding of how multiple power transmission nodes can work together as a coordinated network under the triple constraints of improving effective target-area coverage, enhancing regional connectivity, and controlling infrastructure costs.

The team has tackled this optimisation problem head-on, developing a mathematical framework that treats lunar power delivery as a network design challenge rather than a simple point-to-point transmission problem. Their approach begins with realistic geography, using high-resolution topographic data from NASA’s Lunar Orbiter Laser Altimeter (LOLA) and focusing on the region near Shackleton crater. The model incorporates terrain obstruction, local illumination conditions, beam diffraction divergence, pointing errors, and lunar dust attenuation, thereby establishing a comprehensive framework for lunar laser transmission and network deployment. It is important to note that the power supply nodes in this study are not simply fixed “laser stations”; instead, the system adopts a split architecture in which fixed support platforms are responsible for power acquisition and supply, while the laser emission units can be adjusted and repositioned locally to achieve more favourable transmission conditions. Based on this framework, the team simulated how multiple emission units could transmit laser energy to receivers mounted on rovers, hoppers, or in-situ resource utilisation equipment operating in permanently shadowed areas.
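As a rough illustration of the single-link physics that feeds into such a network model, the sketch below combines beam divergence, geometric capture, pointing loss, dust attenuation, and conversion efficiencies into a simple link budget. All parameter values are assumptions chosen for illustration, not figures from the paper.

```python
# Simplified, illustrative laser power-beaming link budget; every parameter
# value here is an assumption for illustration, not a figure from the paper.

P_in = 10_000.0            # electrical input power at the emitter [W]
eta_laser = 0.45           # electrical-to-optical conversion efficiency
tx_radius = 0.10           # beam radius at the transmitter [m]
divergence = 20e-6         # full-angle beam divergence [rad]
distance = 10_000.0        # transmitter-to-receiver range [m]
rx_radius = 1.0            # receiver aperture radius [m]
pointing_loss = 0.90       # fraction retained despite pointing error
dust_transmittance = 0.95  # attenuation by levitated lunar dust
eta_pv = 0.50              # laser photovoltaic conversion efficiency

# Beam radius grows roughly linearly with range for a diverging beam.
beam_radius = tx_radius + (divergence / 2.0) * distance

# Fraction of the beam intercepted by the receiver aperture (capped at 1).
capture = min(1.0, (rx_radius / beam_radius) ** 2)

P_delivered = (P_in * eta_laser * capture * pointing_loss
               * dust_transmittance * eta_pv)

print(f"Beam radius at receiver: {beam_radius:.2f} m")
print(f"Delivered electrical power: {P_delivered:.0f} W")
```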

The core innovation of the study lies in the first simultaneous optimisation of three key performance dimensions. Coverage ensures that more scientifically valuable PSRs can receive energy support when needed, whether for short rover traverses or long-term operation of fixed equipment. Connectivity is not simply about adding more isolated power-supply points, but about reducing fragmentation of the powered areas and creating a more continuous spatial structure, thereby lowering the risk that a mobile explorer will unintentionally leave the powered region during cross-regional movement and supporting sustained exploration tasks. Cost constraints recognise that every transmission unit, every square metre of receiver array, and every tonne of equipment delivered to the lunar surface carries a substantial price tag. By treating these three factors as interdependent variables rather than separate considerations, the team derived a terrain-aware optimised laser power-beaming network configuration that balances infrastructure scale and operational capability.
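The following sketch conveys the general flavour of such a trade-off, using a greedy site selection that weighs newly covered cells against a connectivity bonus under a fixed budget. The grid, weights, and coverage model are assumptions for illustration, not the authors’ formulation.

```python
# Illustrative greedy site-selection sketch for a coverage-connectivity-cost
# trade-off (grid, weights, and coverage model are assumptions, not the
# authors' formulation).

def covered_cells(site, terrain):
    """Hypothetical line-of-sight coverage: cells a site can illuminate."""
    # A real model would use LOLA topography and beam geometry here.
    return terrain.get(site, set())

def connectivity_gain(new_cells, already_covered):
    """Reward cells adjacent to the existing covered region (less fragmentation)."""
    return sum(1 for c in new_cells
               if any(abs(c[0] - d[0]) + abs(c[1] - d[1]) == 1
                      for d in already_covered))

def greedy_plan(candidate_sites, terrain, budget, site_cost,
                w_cov=1.0, w_conn=0.5):
    chosen, covered = [], set()
    while budget >= site_cost:
        best, best_score = None, 0.0
        for s in candidate_sites:
            if s in chosen:
                continue
            new = covered_cells(s, terrain) - covered
            score = w_cov * len(new) + w_conn * connectivity_gain(new, covered)
            if score > best_score:
                best, best_score = s, score
        if best is None:
            break
        chosen.append(best)
        covered |= covered_cells(best, terrain)
        budget -= site_cost
    return chosen, covered

# Toy example: two candidate rim sites, each seeing a few PSR grid cells.
terrain = {"rim_A": {(0, 0), (0, 1)}, "rim_B": {(0, 2), (1, 2)}}
sites, covered = greedy_plan(["rim_A", "rim_B"], terrain, budget=2, site_cost=1)
print(sites, sorted(covered))
```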

The study’s findings offer practical decision support for lunar base planning. The research shows that terrain-aware optimised deployment can significantly improve power coverage and regional connectivity in the south polar PSRs: the effective coverage ratio increases from 10.76% to 27.55%, while regional connectivity rises from 39.93% to 98.92%. Compared with the baseline scheme, which selects sites solely on the basis of local high-illumination conditions, the optimised configuration significantly improves overall network performance while keeping infrastructure requirements under control. More importantly, the team not only optimised the station selection, but also refined the local positioning of the laser emission units, enabling previously fragmented powered areas to be connected more effectively and providing more reliable sustained energy support for mobile exploration tasks on the lunar surface.

From a technical standpoint, the research advances laser power beaming beyond the laboratory demonstrations that have characterised the field to date. Recent experiments have shown that high-efficiency semiconductor lasers can maintain stable operation across the temperature extremes expected in lunar environments, while photovoltaic receivers have demonstrated conversion efficiencies that make laser power transmission economically viable. The HIT team’s contribution synthesises these technological building blocks into an architectural framework that provides lunar base mission planners with guidance on how emission units can be deployed, how different nodes can work together, and how overall system performance can be balanced across coverage, connectivity, and cost under complex lunar terrain conditions.

The broader significance of this work extends beyond the lunar context. As space exploration moves toward permanent human presence beyond Earth, the ability to deliver power wirelessly across challenging terrain will become increasingly essential. The same optimisation principles that the team has applied to lunar craters may also be transferable to Martian canyons, asteroid mining operations, or even terrestrial applications where conventional power infrastructure is impractical. The study establishes a methodological foundation for thinking about space power networks as integrated systems rather than isolated links—a perspective that will prove invaluable as humanity’s reach into the solar system expands.

The timing of this publication aligns with a surge of interest in lunar power solutions from multiple sectors. NASA has recently accelerated its Fission Surface Power programme, while commercial entities are proposing orbital power satellite networks and tower-based laser transmission systems. Each approach has its advocates, but all share a common need for the kind of systems-level thinking that the HIT team has now provided. By establishing rigorous optimisation criteria, this research enables apples-to-apples comparisons between different power delivery architectures and provides objective guidance for the difficult investment decisions that lie ahead.

Perhaps most encouragingly, the study demonstrates that laser power beaming networks exhibit clear engineering potential, while the relevant enabling technologies continue to mature. The required laser efficiencies have been demonstrated in laboratory settings; pointing and tracking systems have achieved the necessary precision for Earth-orbital applications; and photovoltaic receivers have been tested under simulated lunar conditions. What has been missing until now is the confidence that these components can be assembled into a system that reliably meets mission requirements at acceptable cost. The team has provided that confidence through rigorous analysis and optimisation.

As spacefaring nations prepare for the next decade of lunar exploration, the question is no longer whether we can deliver power to the Moon’s darkest places, but how to do so most effectively. This study by the Harbin Institute of Technology provides a systematic design approach, advancing laser power beaming from a single-link concept to a networked solution for mission planning. For the rovers, drilling systems, and life-support systems that may one day operate in the eternal twilight of lunar craters, reliable power supply will be an essential foundation for the continued advance of deep-space exploration.

 

Food: New approach combines safety and sustainability




Ludwig-Maximilians-Universität München





A recent lead article challenges fundamental assumptions in food safety and advocates for a risk-based approach – to enhance the sustainability and resilience of food systems.

Foodborne diseases cause about 600 million illnesses and around 420,000 deaths globally every year. But not every pathogen that is detected also poses a relevant risk to consumers. Increasingly sensitive detection methods, which can detect even minuscule amounts of pathogens and their toxins, are fueling a policy of “zero tolerance” that leads to food being discarded prematurely. Any detection of a pathogen is deemed unacceptable – regardless of the dose, the exposure, or the ability of a food to support microbial growth.

A recent lead article published in the journal Frontiers in Science questions these central principles of food safety. LMU professor Sophia Johler and her co-authors from Cornell University advocate shifting the focus away from “zero tolerance” to a risk-based assessment of foods – to enhance the sustainability and resilience of our food systems.

Away from the principle of zero tolerance

“Zero risk does not exist – and should also not be the goal,” emphasizes Johler. Efforts to make food that is already sufficiently safe to eat even safer have drastic consequences for the environment and for the availability of food, while adding only marginal value in terms of public health. What’s needed instead are evidence-based acceptable levels of protection.
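One way to make an “acceptable level of protection” concrete is a dose-response calculation of the kind used in quantitative microbial risk assessment. The sketch below uses the standard exponential dose-response form with illustrative parameter values that are not taken from the article.

```python
# Illustrative sketch of how a risk-based assessment ties contamination level
# to illness probability, using the exponential dose-response form commonly
# applied in quantitative microbial risk assessment (parameter values are
# assumptions, not from the article).
import math

def p_illness(dose_cfu, r=1e-4):
    """Probability of illness for an ingested dose, exponential model."""
    return 1.0 - math.exp(-r * dose_cfu)

serving_size_g = 100.0
for concentration_cfu_per_g in (0.01, 1.0, 100.0):
    dose = concentration_cfu_per_g * serving_size_g
    print(f"{concentration_cfu_per_g:>7.2f} CFU/g -> "
          f"risk per serving: {p_illness(dose):.2e}")
```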

Food production is responsible for around 30 percent of all global greenhouse gas emissions. The researchers emphasize that a shift in how the safety of a food item is assessed, moving away from zero tolerance and toward sufficiently safe food, can make a valuable contribution to reducing these emissions. “Food safety needs to be considered alongside sustainability and food security,” says Johler.

Assessment using AI

According to the authors, one approach that is highly promising is to integrate modern data-based models. Artificial intelligence, genomics and extensive system data could be used to assess risks more precisely and define acceptable levels of protection. “Data-driven models and artificial intelligence make it possible to assess highly complex, real-world risks with greater precision,” explains Johler. As part of this, food safety needs to be more closely aligned with sustainability, food security and priorities for society.

These insights provide important food for thought for policymakers, industry, and researchers: move away from zero tolerance and embrace balanced risk management based on scientific evidence.