Monday, December 29, 2025

PARACELSUS

Low-dose peanut therapy may help protect more kids with peanut allergy



The Hospital for Sick Children






Children with peanut allergies may not need large doses of peanut oral immunotherapy (OIT) to build protection against peanut, finds a new study led by The Hospital for Sick Children (SickKids) and the Montreal Children’s Hospital. Researchers found that a small dose can help children with their peanut allergy and reduce the risk of severe reactions from accidental exposures, with fewer side effects than the current standard treatment.

In Canada, peanut allergies affect almost two per cent of children and adults and increasingly contribute to hospital admissions. Peanut OIT is a method to increase the amount of peanut that a child can eat before experiencing a reaction, helping to protect children against accidental exposures. Children receiving peanut OIT eat a gradually increasing amount of peanut over time until they reach a “maintenance” dose that is eaten regularly, even after the treatment, to help keep up the benefits.

While peanut OIT can help children with peanut allergies build protection, current approaches use large doses that require lengthy treatment and close medical supervision, and treatment is often discontinued because children dislike the taste or experience side effects such as allergic reactions, including anaphylaxis.

The study is the first of its kind to compare a commonly used peanut OIT treatment to reduced doses in children, and provides evidence to support a significantly lower dose that could increase treatment accessibility and help protect more children with peanut allergy. 

A very little goes a long way 

To investigate the safety and effectiveness of a very low maintenance dose of peanut OIT, the study, published in the Journal of Allergy and Clinical Immunology – In Practice, randomly assigned 51 children with peanut allergy to three groups: low-dose treatment (30mg maintenance), standard-dose treatment (300mg maintenance) or avoidance (no peanut OIT).  

Both peanut OIT treatment groups experienced significant and similar increases in their allergic reaction threshold to peanuts, showing that eating even small amounts is better than avoidance when it comes to training the immune system to manage more peanut. 

“We were excited to find that peanut OIT maintenance doses can be much lower than previously thought and still contribute to positive outcomes,” says Dr. Julia Upton, Head of the Division of Immunology & Allergy, Project Investigator in the SickKids Research Institute, Co-Director of the SickKids Food Allergy and Anaphylaxis Program and co-first author. “The more options we have, the more we can support patients’ experience and provide meaningful, tailored care.” 

Children who were in the 30mg maintenance group had fewer adverse reactions than the 300mg maintenance group, and none withdrew from treatment.  

“This is a small enough dose that even children who do not like the taste can continue treatment,” says co-senior study author Dr. Thomas Eiwegger, Adjunct Scientist in the Translational Medicine program. “This is the first time we’ve compared standard doses to such a low dose, but the minimum maintenance dose to provide benefit may be even lower than 30mg.” 

The research team notes that some children and families may choose to remain on very low doses, while others may prefer to increase over time depending on their goals. This study marks an important step to further the development of safe and effective protocols for peanut OIT. Ultimately, the goal is to make peanut OIT accessible to more peanut-allergic children.  

“The study found that very small amounts of peanuts, that are associated with less reactions, could be used as effectively as large amounts for oral immunotherapy, making it safer and accessible to more Canadians, even those who are very sensitive to the allergen,” says Dr. Moshe Ben-Shoshan, co-senior author of the study, a paediatric allergy and immunology specialist at the Montreal Children's Hospital and Scientist in the Infectious Diseases and Immunity in Global Health Program at the Research Institute of the McGill University Health Centre.

This study was funded by the SickKids Food Allergy and Anaphylaxis Program, the Canadian Institutes of Health Research (CIHR), the Montreal Children’s Hospital Foundation and the US peanut advisory board.

Safety decision-making for autonomous vehicles integrating passenger physiological states by fNIRS



Beijing Institute of Technology Press Co., Ltd
Image: The overall diagram of this study. It contains two main parts: fNIRS risk detection and fNIRS-integrated reinforcement learning (RL). The first part covers real-time pre-processing, feature extraction, and classification models. The second part focuses on the proposed human-guided deep reinforcement learning (DRL) scheme; it contains a TD3 agent with some modifications, an intelligent driver model (IDM), and a human-guided DRL switching mechanism. This switching mechanism may accelerate the learning speed of TD3 based on passengers’ risk assessment using fNIRS.

Credit: Xiaofei Zhang, School of Vehicle and Mobility, Tsinghua University.





In recent years, several serious traffic accidents have exposed the shortcomings of current autonomous driving systems in making safe decisions. Traditional decision-making methods, due to functional deficiencies or machine performance limitations, struggle to address potentially risky behaviors, leading to a continued need for human intervention in complex driving scenarios. To address this, researchers have begun exploring the use of human physiological states as an information source to improve the safety decision-making of autonomous vehicles. “Functional near-infrared spectroscopy (fNIRS), as a non-invasive, real-time brain activity monitoring method, can provide cognitive information related to human risk perception and emotional states, and is thus considered a tool that can enhance autonomous driving systems,” said author Xiaofei Zhang, a professor at Tsinghua University. “Our study introduces an intelligent decision-making algorithm that analyzes passengers’ physiological states via fNIRS, aiming to improve the safety and decision-making efficiency of autonomous vehicles when facing risky scenarios.”

The research process can be divided into the following parts. First, an intelligent safety decision-making algorithm is presented that integrates passengers’ physiological states (detected via fNIRS) into the decision-making process of autonomous driving. The algorithm is based on the Twin-Delayed Deep Deterministic Policy Gradient (TD3) and incorporates passenger risk assessment information to help the system make safer decisions in risky scenarios. It uses a human-guided deep reinforcement learning mechanism to switch to a more conservative intelligent driver model (IDM) when passengers are detected to be in a risky state, thereby accelerating the learning process and improving the safety and comfort of the system. The experimental results show that this method outperforms traditional methods in convergence speed, safety and driving comfort, demonstrating its potential for application in autonomous driving systems.
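To make the switching idea concrete, the minimal sketch below shows how an fNIRS-triggered fallback to an Intelligent Driver Model could sit alongside a learned policy. This is not the authors' implementation: the td3_policy and fnirs_risk functions are placeholders, the IDM parameters are generic textbook values, and the "guided" flag on stored transitions is only a schematic stand-in for how human-guided experience might be fed back into training.

```python
# Minimal sketch (not the authors' code) of a human-guided switching mechanism:
# when an fNIRS-based classifier flags the passenger as perceiving risk, the
# longitudinal action comes from a conservative Intelligent Driver Model (IDM)
# instead of the learned (TD3-style) policy, and the transition is marked as
# guided so the agent can learn from it. All values below are assumptions.
import numpy as np


def idm_acceleration(v, v_lead, gap,
                     v_desired=25.0, T=1.8, a_max=1.5, b=2.0, s0=2.0, delta=4.0):
    """Standard IDM car-following acceleration (m/s^2), clipped to a plausible braking limit."""
    dv = v - v_lead
    s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(a_max * b))
    acc = a_max * (1.0 - (v / v_desired) ** delta - (s_star / max(gap, 0.1)) ** 2)
    return float(np.clip(acc, -6.0, a_max))


def td3_policy(state):
    """Placeholder for the learned actor network; returns an acceleration command."""
    return float(np.clip(0.1 * (25.0 - state["v"]), -3.0, 1.5))


def fnirs_risk(features, threshold=0.5):
    """Placeholder for the fNIRS classifier: maps extracted features to a risk flag."""
    risk_prob = 1.0 / (1.0 + np.exp(-features.mean()))  # stand-in for a trained model
    return risk_prob > threshold


def select_action(state, fnirs_features, replay_buffer):
    """Switch between the RL policy and the conservative IDM fallback."""
    if fnirs_risk(fnirs_features):
        action = idm_acceleration(state["v"], state["v_lead"], state["gap"])
        guided = True   # flag so guided transitions can steer learning
    else:
        action = td3_policy(state)
        guided = False
    replay_buffer.append((state, action, guided))
    return action


if __name__ == "__main__":
    buffer = []
    state = {"v": 20.0, "v_lead": 12.0, "gap": 15.0}   # ego speed, lead speed, gap (m)
    risky = np.array([0.8, 1.2, 0.5])                  # synthetic "high activation" fNIRS features
    calm = np.array([-1.0, -0.6, -1.4])
    print("risk flagged ->", round(select_action(state, risky, buffer), 2))
    print("no risk      ->", round(select_action(state, calm, buffer), 2))
```

In the paper's actual scheme the switch is driven by a trained fNIRS classifier and the guided experience is used to accelerate TD3 training; the sketch only illustrates where such a switch would sit in the control loop.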

 

This study demonstrates that integrating passenger physiological states detected by fNIRS into the decision-making algorithm for autonomous driving effectively enhances safety and comfort in risky scenarios. Compared to traditional methods, the algorithm shows superior performance in learning convergence speed, safety, and driving comfort. However, the experimental scenarios were relatively simplified, and the participants had a narrow age range and homogeneous background, which may limit the generalizability of the findings. Additionally, due to time constraints, the learning process of the algorithm did not fully explore the optimal strategy. “Future research will aim to validate the algorithm in more complex and realistic driving scenarios and further enhance the accuracy and robustness of driving risk assessment by integrating information from vehicle sensors,” said Xiaofei Zhang.

Authors of the paper include Xiaofei Zhang, Haoyi Zheng, Jun Li, Zongsheng Xie, Huamu Sun, and Hong Wang.

This work was supported by the National Natural Science Foundation of China (Projects 52072215, 52221005 and 12361105), the Beijing Natural Science Foundation (L243025), the National Key R&D Program of China (2022YFB2503003) and the State Key Laboratory of Intelligent Green Vehicle and Mobility.

The paper, “Safety Decision-Making for Autonomous Vehicles Integrating Passenger Physiological States by fNIRS,” was published in the journal Cyborg and Bionic Systems on May 13, 2025 (DOI: 10.34133/cbsystems.0205).

 

Decoding how pear trees are pruned: 3D insights pave the way for automated orchards



Nanjing Agricultural University The Academy of Science
Image: Figure 6. Determination of parameters from shoot point clouds. (a) Shoot length was calculated by summing the distances between neighboring points after skeletonization of the shoot point cloud. (b) Shoot angle was determined from the inclination of the minimum enclosing box surrounding the shoot. (c) Canopy volume was calculated using the minimum convex hull, which approximates the spatial boundary of the shoot.

Credit: The authors
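As a rough illustration of the three measurements described in this caption, the example below estimates shoot length, inclination angle, and occupied volume from a synthetic shoot point cloud. It is not the authors' method: where the paper uses skeletonization and a minimum enclosing box, this sketch substitutes the shoot's principal axis obtained via PCA, keeping only the convex-hull volume in common with the original approach, and it runs on made-up data.

```python
# Simplified stand-ins for the per-shoot parameters in Figure 6 (not the
# paper's pipeline): length as the extent along the shoot's principal axis,
# angle as the axis inclination from vertical, volume as the convex hull.
import numpy as np
from scipy.spatial import ConvexHull


def shoot_parameters(points):
    """Estimate length (m), inclination angle (deg) and hull volume (m^3) of one shoot."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                   # unit vector along the largest-variance direction
    proj = centered @ axis
    length = proj.max() - proj.min()               # crude substitute for skeleton-based length
    angle = np.degrees(np.arccos(np.clip(abs(axis[2]), 0.0, 1.0)))  # inclination from vertical
    volume = ConvexHull(points).volume             # space occupied by the shoot
    return length, angle, volume


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic shoot: 0.5 m long, tilted 20 degrees from vertical, ~4 mm scan noise.
    t = rng.uniform(0, 0.5, 400)
    direction = np.array([np.sin(np.deg2rad(20)), 0.0, np.cos(np.deg2rad(20))])
    shoot = t[:, None] * direction + rng.normal(0, 0.004, (t.size, 3))
    length, angle, volume = shoot_parameters(shoot)
    print(f"length ~ {length:.2f} m, angle ~ {angle:.1f} deg, hull volume ~ {volume * 1e3:.2f} L")
```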



By aligning three-dimensional point clouds of the same trees across consecutive growing seasons, the team was able to accurately identify which shoots were removed during pruning and how these decisions relate to annual shoot growth.

Pear trees are widely cultivated across temperate regions, and dormant pruning is essential for maintaining canopy structure, balancing vegetative and reproductive growth, and preventing problems such as poor light penetration or alternate bearing. However, pruning remains one of the most expensive orchard operations, accounting for a substantial share of annual production costs. While mechanized pruning tools exist, they often rely on non-selective cutting, which can reduce fruit yield and quality. Intelligent, selective pruning requires detailed knowledge of tree structure and shoot growth—information that has been difficult to obtain in complex canopies using conventional imaging or manual measurements.

A study (DOI: 10.1016/j.plaphe.2025.100136) published in Plant Phenomics on 13 November 2025 by Yue Mu’s team at Nanjing Agricultural University provides a critical foundation for intelligent and automated pruning systems, offering new opportunities to reduce labor dependence while improving precision and consistency in orchard management.

Using repeated 3D point cloud acquisitions of the same pear trees before and after pruning and across consecutive growing seasons, the researchers first applied a two-step alignment strategy—coarse registration followed by fine Iterative Closest Point (ICP) optimization—to precisely align paired point clouds, with alignment accuracy quantified by root mean square error (RMSE). After removing overlapping regions, shoots were segmented using density-based clustering, and shoot architectural parameters, including shoot number, length, and angle, were automatically extracted and validated against manual measurements.

The alignment results showed that average RMSE across whole trees was reduced to 0.032 m after registration, with no significant differences between cultivars but clear differences among tree architectures; the “2 + 1” architecture achieved the highest accuracy (RMSE = 0.025 m). Registration accuracy was also strongly influenced by the time interval between scans, with point clouds collected shortly before and after pruning showing significantly lower errors than those separated by a full year of natural growth. The extracted shoot parameters closely matched manual measurements, with strong correlations for shoot number, total shoot length, shoot angle, and individual shoot length (R² = 0.82–0.92), demonstrating the robustness of the method despite some segmentation errors in densely packed canopies.

Applying this pipeline to analyze pruning outcomes revealed that tree architecture, rather than cultivar, was the dominant factor shaping pruning characteristics, influencing pruned shoot number, individual shoot length, canopy volume, and shoot length density. Spatial overlap analysis further showed that most pruned shoots corresponded to annual (one-year-old) shoots, whose angle and length distributions closely matched those of pruned shoots, indicating that pruning decisions largely reflect annual shoot growth patterns. Quantitatively, annual shoots accounted for about 79% of pruned shoot number and over 92% of total pruning length, with the majority exhibiting large inclination angles. Overall, manual dormant pruning followed a highly consistent pattern across years, primarily targeting upright annual shoots through thinning, while a smaller proportion of perennial shoots was selectively removed to maintain canopy structure.
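As a rough illustration of the fine-alignment step described above, the sketch below runs a basic point-to-point ICP on synthetic point clouds and reports the resulting RMSE. It is not the authors' pipeline: the paper pairs a coarse registration with ICP on real tree scans, whereas this example assumes the clouds are already roughly aligned and uses generic numpy/scipy routines on a fabricated cloud.

```python
# Minimal point-to-point ICP sketch with RMSE reporting (not the paper's code).
# Assumes a coarse registration has already brought the two scans close together.
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c


def icp(source, target, max_iter=50, tol=1e-6):
    """Iteratively align source to target; return the aligned cloud and final RMSE."""
    tree = cKDTree(target)
    src = source.copy()
    prev_rmse = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                   # nearest target point per source point
        rmse = np.sqrt(np.mean(dist ** 2))
        if abs(prev_rmse - rmse) < tol:
            break
        prev_rmse = rmse
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    dist, _ = tree.query(src)
    return src, np.sqrt(np.mean(dist ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(0, 2, size=(2000, 3))        # stand-in for a "before pruning" scan
    angle = np.deg2rad(2)
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
    source = target @ Rz.T + np.array([0.02, -0.01, 0.01])   # slightly rotated/shifted copy
    aligned, rmse = icp(source, target)
    print(f"RMSE after ICP: {rmse:.4f} m")            # the paper reports ~0.032 m on real trees
```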

By quantifying what experienced pruners do intuitively, this research provides a practical foundation for automated pruning. Instead of attempting to evaluate every branch in a complex canopy, intelligent systems could focus primarily on identifying annual shoots with specific angle and length characteristics. This simplification could greatly accelerate the development of robotic pruning tools, reduce labor costs, and improve consistency in orchard management.

###

References

DOI

10.1016/j.plaphe.2025.100136

Original Source URL

https://doi.org/10.1016/j.plaphe.2025.100136

Funding information

This work was co-financed by the Major Science and Technology Projects of Xinjiang Uygur Autonomous Region (2024A02006-3), the Jiangsu Agricultural Science and Technology Innovation Fund (No. CX (22)2025 and No. CX (23)1011), and the National Natural Science Foundation of China (No. 32001980).

About Plant Phenomics

Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. Plant Phenomics should thus contribute to advancing plant sciences and agriculture, forestry, and horticulture by addressing key scientific challenges in the area of plant phenomics.



How Deepfakes Could Lead to Doomsday

America’s Nuclear Warning Systems Aren’t Ready for AI

STILL USING FLOPPY DISKS!


Erin D. Dumbacher
December 29, 2025
FOREIGN AFFAIRS

Since the dawn of the nuclear age, policymakers and strategists have tried to prevent a country from deploying nuclear weapons by mistake. But the potential for accidents remains as high as it was during the Cold War. In 1983, a Soviet early warning system erroneously indicated that a U.S. nuclear strike on the Soviet Union was underway; such a warning could have triggered a catastrophic Soviet counterattack. That fate was avoided only because the on-duty supervisor, Stanislav Petrov, determined that the alarm was false. Had he not, Soviet leadership would have had reason to fire the world’s most destructive weapons at the United States.

The rapid proliferation of artificial intelligence has exacerbated threats to nuclear stability. One fear is that a nuclear weapons state might delegate the decision to use nuclear weapons to machines. The United States, however, has introduced safeguards to ensure that humans continue to make the final call over whether to launch a strike. According to the 2022 National Defense Strategy, a human will remain “in the loop” for any decisions to use, or stop using, a nuclear weapon. And U.S. President Joe Biden and Chinese leader Xi Jinping agreed in twin statements that “there should be human control over the decision to use nuclear weapons.”

Yet AI poses another insidious risk to nuclear security. It makes it easier to create and spread deepfakes—convincingly altered videos, images, or audio that are used to generate false information about people or events. And these techniques are becoming ever more sophisticated. A few weeks after Russia’s 2022 invasion of Ukraine, a widely shared deepfake showed Ukrainian President Volodymyr Zelensky telling Ukrainians to set down their weapons; in 2023, a deepfake led people to falsely believe that Russian President Vladimir Putin had interrupted state television to declare a full-scale mobilization. In a more extreme scenario, a deepfake could convince the leader of a nuclear weapons state that a first strike from an adversary was underway, or an AI-supported intelligence platform could raise false alarms of a mobilization, or even a dirty bomb attack, by an adversary.

The Trump administration wants to harness AI for national security. In July, it released an action plan calling for AI to be used “aggressively” across the Department of Defense. In December, the department unveiled GenAI.mil, a platform with AI tools for employees. But as the administration embeds AI in national security infrastructure, it will be crucial for policymakers and systems designers to be careful about the role machines play in the early phases of nuclear decision-making. Until engineers can prevent problems inherent to AI, such as hallucinations and spoofing—in which large language models predict inaccurate patterns or facts—the U.S. government must ensure that humans continue to control nuclear early warning systems. Other nuclear weapons states should do the same.

CASCADING CRISES

Today, President Donald Trump uses a phone to access deepfakes; he sometimes reposts them on social media, as do many of his close advisers. As the lines become blurred between real and fake information, there is a growing possibility that such deepfakes could infect high-stakes national security decisions, including on nuclear weapons.


If misinformation can deceive the U.S. president for even a few minutes, it could spell disaster for the world. According to U.S. law, a president does not need to confer with anyone to order the use of nuclear weapons for either a retaliatory attack or a first strike. U.S. military officials stand at the ready to deploy the planes, submarines, and ground-based missiles that carry nuclear warheads. A U.S. intercontinental ballistic missile can reach its target within a half hour—and once such a missile is launched, no one can recall it.

Deepfakes could help create pretexts for war.

Both U.S. and Russian nuclear forces are prepared to “launch on warning,” meaning that they can be deployed as soon as enemy missiles are detected heading their way. That leaves just minutes for a leader to evaluate whether an adversary’s nuclear attack has begun. (Under current U.S. policy, the president has the option to delay a decision until after an adversary’s nuclear weapon strikes the United States.) If the U.S. early warning system detects a threat to the United States, U.S. officials will try to verify the attack using both classified and unclassified sources. They might look at satellite data for activity at known military facilities, monitor recent statements from foreign leaders, and check social media and foreign news sources for context and on-the-ground accounts. Military officers, civil servants, and political appointees must then decide which information to communicate up the chain and how it is presented.

AI-driven misinformation could spur cascading crises. If AI systems are used to interpret early warning data, they could hallucinate an attack that isn’t real—putting U.S. officials in a similar position to the one Petrov was in four decades ago. Because the internal logic of AI systems is opaque, humans are often left in the dark as to why AI came to a particular conclusion. Research shows that people with an average level of familiarity with AI tend to defer to machine outputs rather than checking for bias or false positives, even when it comes to national security. Without extensive training, tools, and operating processes that account for AI’s weaknesses, advisers to White House decision-makers might default to assuming—or at least to entertaining—the possibility that AI-generated content is accurate.

Deepfakes that are transmitted on open-source media are nearly as dangerous. After watching a deepfake video, an American leader might, for example, misinterpret Russian missile tests as the beginning of offensive strikes or mistake Chinese live-fire exercises as an attack on U.S. allies. Deepfakes could help create pretexts for war, gin up public support for a conflict, or sow confusion.

A CRITICAL EYE

In July, the Trump administration released an AI action plan that called for aggressive deployment of AI tools across the Department of Defense, the world’s largest bureaucracy. AI has proved useful in making parts of the military more efficient. Machine learning makes it easier to schedule maintenance of navy destroyers. AI technology embedded in autonomous munitions, such as drones, can allow soldiers to stand back from the frontlines. And AI translation tools help intelligence officers parse data on foreign countries. AI could even be helpful in some other standard intelligence collection tasks, such as identifying distinctions between pictures of bombers parked in airfields from one day to the next.

Implementing AI across military systems does not need to be all or nothing. There are areas that should be off-limits for AI, including nuclear early warning systems and command and control, in which the risks of hallucination and spoofing outweigh the benefits that AI-powered software could bring. The best AI systems are built on cross-checked and comprehensive datasets. Nuclear early warning systems lack both because there have not been any nuclear attacks since the ones on Hiroshima and Nagasaki. Any AI nuclear detection system would likely have to train on existing missile test and space tracking data plus synthetic data. Engineers would need to program defenses against hallucinations or inaccurate confidence assessments—significant technical hurdles.


It may be tempting to replace checks from highly trained staff with AI tools or to use AI to fuse various data sources to speed up analysis, but removing critical human eyes can lead to errors, bias, and misunderstandings. Just as the Department of Defense requires meaningful human control of autonomous drones, it should also require that each element of nuclear early warning and intelligence technology meet an even higher standard. AI data integration tools should not replace human operators who report on incoming ballistic missiles. Efforts to confirm early warning of a nuclear launch from satellite or radar data should remain only partially automated. And participants in critical national security conference calls should consider only verified and unaltered data.

In July 2025, the Department of Defense requested funds from Congress to add novel technologies to nuclear command, control, and communications. The U.S. government would be best served by limiting AI and automation integration to cybersecurity, business processes and analytics, and simple tasks, such as ensuring backup power turns on when needed.

A VINTAGE STRATEGY

Today, the danger of nuclear war is greater than it has been in decades. Russia has threatened to use nuclear weapons in Ukraine, China is rapidly expanding its arsenal, North Korea now has the ability to send ICBMs to the United States, and policies preventing proliferation are wavering. Against this backdrop, it is even more important to ensure that humans, not machines trained on poor or incomplete data, are judging the actions, intent, and aims of an adversary.

Intelligence agencies need to get better at tracking the provenance of AI-derived information and standardize how they relay to policymakers when data is augmented or synthetic. For example, when the National Geospatial-Intelligence Agency uses AI to generate intelligence, it adds a disclosure to the report if the content is machine-generated. Intelligence analysts, policymakers, and their staffs should be trained to bring additional skepticism and fact-checking to content that is not immediately verifiable, just as many businesses are now vigilant against cyber spear phishing. And intelligence agencies need the trust of policymakers, who might be more inclined to believe what their own eyes and devices tell them—true or false—than what an intelligence assessment renders.

Experts and technologists should keep working to find ways to label and slow fraudulent information, images, and videos flowing through social media, which can influence policymakers. But given the difficulty of policing open-source information, it is all the more important for classified information to be accurate.


AI can already deceive leaders into seeing an attack that isn’t there.

The Trump administration’s updates to U.S. nuclear posture in the National Defense Strategy ought to guard against the likely and unwieldy AI information risks to nuclear weapons by reaffirming that a machine will never make a nuclear launch decision without human control. As a first step, all nuclear weapons states should agree that only humans will make nuclear use decisions. Then they should improve channels for crisis communications. A hotline for dialogue exists between Washington and Moscow but not between Washington and Beijing.

U.S. nuclear policy and posture have changed little since the 1980s, when leaders worried the Soviet Union would attack out of the blue. Policymakers then could not have wrapped their heads around how much misinformation would be delivered to the personal devices of the people in charge of nuclear weapons today. Both the legislative and executive branches should reevaluate nuclear weapons posture policies built for the Cold War. Policymakers might, for example, require future presidents to confer with congressional leaders before they launch a nuclear first strike or require a period of time for intelligence professionals to validate the information on which the decision is being based. Because the United States has capable second-strike options, accuracy should take precedence over speed.

AI already has the potential to deceive key decision-makers and members of the nuclear chain of command into seeing an attack that isn’t there. In the past, only authentic dialogue and diplomacy averted misunderstandings among nuclear-armed states. Policies and practices should protect against the pernicious information risks that could ultimately lead to doomsday.


ERIN D. DUMBACHER is Stanton Nuclear Security Senior Fellow at the Council on Foreign Relations.