Thursday, March 26, 2026

 

New study argues AI is reopening the “end of history” and forcing a fundamental rethink of education



Research proposes educational reconfiguration as the key to rebuilding trust and legitimacy in the age of artificial intelligence




ECNU Review of Education





Since the widespread adoption of generative AI systems after 2022, societies worldwide have entered a period of intelligence transition. AI has reopened the “End of History” debate by creating new ideological alternatives, promoting competition between different governance models, and reshaping the foundations of national legitimacy.

A study, made available online on February 17, 2026 in ECNU Review of Education, reexamines Francis Fukuyama’s “End of History” thesis in light of recent AI breakthroughs. The research, led by Professor Yilei Shao from East China Normal University, employs a problem–analysis–solution structure to explain how AI alters political legitimacy, national capacity, and the human condition. Using interdisciplinary analysis, the study identifies three singular transformations driven by AI, explores three structural gaps, and proposes a dual-track educational reconfiguration to rebuild trust and institutional resilience.

“At precisely this moment, more than ever before, we need new forms of explanation, ability, legitimacy, and governance to fill the void of thought, trust, and policy,” said Prof. Shao. “This is the fundamental reason for the humanities and social sciences, and for education, to reconstitute themselves as the ‘new-quality infrastructure’ of the human–machine symbiotic society in front of us all.”

The study repositions education as the decisive mechanism for resolving the crisis of technological legitimacy, arguing that the most urgent task for education in the age of AI is to redefine its position and function within the social system.

The author calls for a deeper structural transformation, proposing that future education should focus on cultivating two essential capacities: civic literacy for the AI society and the ability for human–AI collaboration. “Education must now shoulder the fundamental task of guiding societies through legitimacy crises, rebuilding public trust, and cultivating a new civic literacy for the AI era,” Prof. Shao concluded.

 

***

 

Reference
DOI: 10.1177/20965311261422769

 

Funding information
This work was supported by the Shanghai Municipal Education Commission's “AI-Driven Scientific Research Paradigm Reform for Disciplinary Advancement Program” (Grant No. 2024AI01005).

 

Deepfake x-rays fool radiologists and AI




Radiological Society of North America
Image: Anatomy-matched real and GPT-4o-generated radiographs: (A) real and (B) GPT-4o-generated posteroanterior chest radiographs, (C) real and (D) GPT-4o-generated lateral cervical spine radiographs, (E) real and (F) GPT-4o-generated posteroanterior hand radiographs, and (G) real and (H) GPT-4o-generated lateral lumbar spine radiographs. The pairs demonstrate that GPT-4o can produce radiographically plausible images across different anatomic regions.

Credit: Radiological Society of North America (RSNA)





OAK BROOK, Ill. – Neither radiologists nor multimodal large language models (LLMs) are able to easily distinguish artificial intelligence (AI)-generated “deepfake” X-ray images from authentic ones, according to a study published today in Radiology, a journal of the Radiological Society of North America (RSNA). The findings highlight the potential risks associated with AI-generated X-ray images, along with the need for tools and training to protect the integrity of medical images and prepare health care professionals to detect deepfakes.

The term “deepfake” refers to a video, photo, image or audio recording that appears real but has been created or manipulated using AI.

“Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists, even when they were aware that AI-generated images were present,” said lead study author Mickael Tordjman, M.D., post-doctoral fellow, Icahn School of Medicine at Mount Sinai, New York. “This creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one. There is also a significant cybersecurity risk if hackers were to gain access to a hospital’s network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos by undermining the fundamental reliability of the digital medical record.”

Seventeen radiologists from 12 different centers in six countries (United States, France, Germany, Turkey, United Kingdom and United Arab Emirates) participated in the retrospective study. Their professional experience ranged from 0 to 40 years. Half of the 264 X-ray images in the study were authentic, and the other half were generated by AI. Radiologists were evaluated on two distinct image sets, with no overlap between the datasets. The first dataset included real and ChatGPT-generated images of multiple anatomical regions. The second dataset included chest X-ray images: half authentic and half created by RoentGen, an open-source generative AI diffusion model developed by Stanford Medicine researchers.

When radiologist readers were unaware of the study’s true purpose, yet asked after ranking the technical quality of each ChatGPT image if they noticed anything unusual, only 41% spontaneously identified AI-generated images. After being informed that the dataset contained synthetic images, the radiologists’ mean accuracy in differentiating the real and synthetic X-rays was 75%.

Individual radiologist performance in detecting the ChatGPT-generated images ranged from 58% to 92%. Similarly, the accuracy of four multimodal LLMs—GPT-4o (OpenAI), GPT-5 (OpenAI), Gemini 2.5 Pro (Google), and Llama 4 Maverick (Meta)—ranged from 57% to 85%. Even GPT-4o, the model used to create the deepfakes, could not detect all of them, though it identified considerably more than the Google and Meta models.

Radiologist accuracy in detecting the RoentGen synthetic chest X-rays ranged from 62% to 78%, and the LLMs’ performance ranged from 52% to 89%.

There was no correlation between a radiologist’s years of experience and their accuracy in detecting synthetic X-ray images. However, musculoskeletal radiologists demonstrated significantly higher accuracy than other radiology subspecialists.

The study identified common features of synthetic X-rays.

"Deepfake medical images often look too perfect,” Dr. Tordjman said. “Bones are overly smooth, spines unnaturally straight, lungs overly symmetrical, blood vessel patterns excessively uniform, and fractures appear unusually clean and consistent, often limited to one side of the bone." 

To help distinguish real from fake images and prevent tampering, the authors recommend advanced digital safeguards, such as invisible watermarks that embed ownership or identity data directly into the images, and technologist-linked cryptographic signatures attached automatically when the images are captured.
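As a minimal sketch of the capture-time signature idea (not the authors’ implementation; HMAC with a symmetric, device-provisioned key stands in for a full asymmetric, technologist-linked scheme, and all names here are hypothetical):

```python
# Minimal sketch: sign raw pixel data at capture time, verify later.
# HMAC with a device-held key is used for brevity; a real deployment
# would use asymmetric signatures bound to the technologist and modality.
import hashlib
import hmac

DEVICE_KEY = b"key-provisioned-to-this-x-ray-unit"  # hypothetical key

def sign_image(pixel_bytes: bytes) -> str:
    """Compute a signature over the raw pixel data at capture time."""
    return hmac.new(DEVICE_KEY, pixel_bytes, hashlib.sha256).hexdigest()

def verify_image(pixel_bytes: bytes, signature: str) -> bool:
    # Post-capture tampering, or a wholly synthetic image with no valid
    # signature, fails this check.
    return hmac.compare_digest(sign_image(pixel_bytes), signature)
```

Under a scheme like this, an image altered after capture, or injected without access to the capture device’s key, can be flagged before it reaches a radiologist.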

“We are potentially only seeing the tip of the iceberg,” Dr. Tordjman said. “The logical next step in this evolution is AI-generation of synthetic 3D images, such as CT and MRI. Establishing educational datasets and detection tools now is critical.”

The study’s authors have published a curated deepfake dataset with interactive quizzes for educational purposes.

Image: Examples of GPT-4o-generated radiographs of fractures: (A) posteroanterior radiograph of the hand, (B) posteroanterior radiograph of the lower leg, and (C) medial oblique radiograph of the foot. The images show fracture lines (arrow) that are unusually smooth, clean, and consistent and, in the case of B, unicortical. The presence of these idealized fracture lines, characterized by unnatural smoothness and incomplete cortical disruption, could serve as a primary diagnostic cue for identifying artificial intelligence–generated trauma images.

Credit: Radiological Society of North America (RSNA)


“The Rise of Deepfake Medical Imaging: Radiologists’ Diagnostic Accuracy in Detecting ChatGPT-generated Radiographs.” Collaborating with Dr. Tordjman were Murat Yuce, M.D., M.S., Amine Ammar, M.D., Mingqian Huang, M.D., Fadila Mihoubi Bouvier, M.D., Maxime Lacroix, M.D., Anis Meribout, M.D., Ian Bolger, M.S., Efe Ozkaya, Ph.D., Himanshu Joshi, Ph.D., Amine Geahchan, M.D., Rayane El Rahi, M.D., Haidara Almansour, M.D., Ashwin Singh Parihar, M.D., Carolyn Horst, M.D., Samet Ozturk, M.D., Muhammed Edip Isleyen, M.D., Gul Gizem Pamuk, M.D., Ahmet Tan Cimilli, M.D., Timothy Deyer, M.D., Arvin Calinghen, M.D., Enora Guillo, M.D., Rola Husain, M.D., Jean-Denis Laredo, M.D., Zahi A. Fayad, Ph.D., Xueyan Mei, Ph.D., and Bachir Taouli, M.D., M.H.A.

Radiology is edited by Suhny Abbara, M.D., FACR, MSCCT, Mayo Clinic, Jacksonville, Florida, and owned and published by the Radiological Society of North America, Inc. (https://pubs.rsna.org/journal/radiology)

RSNA is an association of radiologists, radiation oncologists, medical physicists and related scientists promoting excellence in patient care and health care delivery through education, research and technologic innovation. The Society is based in Oak Brook, Illinois. (RSNA.org)

For patient-friendly information on X-ray and AI in medical imaging, visit RadiologyInfo.org.

 

Not just faster but smarter: AI that explains its discoveries



Fritz Haber Institute of the Max Planck Society
Image: Schematic of the accelerated discovery loop.

Credit: © ACS Catal. 2026





A self-driving lab (SDL) integrates AI-based experiment planning with lab automation and robotics. In the race to develop better materials, AI and SDLs are often celebrated for one main reason: speed. These systems can rapidly test and optimize new materials, helping researchers find improved solutions in a fraction of the usual time. But critics have raised an important concern: if AI simply delivers better results without explaining why they work, is this still true scientific progress, and how can its reliability be controlled?
A new study published in ACS Catalysis by our Institute’s Theory Department, in collaboration with BASF and BasCat – UniCat BASF JointLab, shows that speed does not need to compromise understanding. The team developed an advanced AI-driven strategy that not only accelerates catalyst discovery but also reveals why the identified materials perform better. The approach was successfully demonstrated on a key industrial reaction: the conversion of propane into propylene, an essential building block of the chemical industry and the starting material for a wide range of everyday products, including plastics and synthetic fibers.

From “Black-Box” to “Gray-Box” AI

Most current AI-driven discovery approaches focus on identifying a single best material as quickly as possible. In doing so, they often act as “black boxes,” producing answers without explanations. While this can be useful for optimization, it leaves scientists with limited understanding of the underlying chemistry. Here, a different approach was taken: by carefully designing how the AI explores possible material combinations, the team achieved improved performance while simultaneously extracting meaningful chemical insights. This strategy, referred to as “gray-box,” makes the process more transparent and controllable.
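To make the idea concrete, here is a hedged sketch of such a gray-box loop (illustrative only, not the team’s code): the surrogate model is linear in promoter main effects and pairwise interactions, so after the optimization its fitted coefficients can be read off as chemistry. The promoter count, the synthetic “experiment,” and the acquisition rule are all assumptions for illustration.

```python
# Illustrative gray-box discovery loop: an interaction-aware linear
# surrogate both guides the search and stays chemically interpretable.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_promoters = 8                            # hypothetical promoter pool
true_main = rng.normal(size=n_promoters)   # hidden "ground truth" effects

def run_experiment(x):
    """Stand-in for the automated lab: noisy performance of combination x."""
    synergy = 0.8 * x[1] * x[4]            # one hidden pairwise synergy
    return float(x @ true_main + synergy + rng.normal(scale=0.1))

# Surrogate with main effects plus pairwise interaction terms only.
featurize = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
surrogate = BayesianRidge()

# Candidate pool: all on/off promoter combinations (2^8 = 256 here; the
# real design space in the study exceeds 10^13 combinations).
candidates = np.array(np.meshgrid(*[[0, 1]] * n_promoters))
candidates = candidates.T.reshape(-1, n_promoters).astype(float)

# Seed with a few random experiments, then iterate: fit, choose, measure.
X = [candidates[i] for i in rng.choice(len(candidates), size=5, replace=False)]
y = [run_experiment(x) for x in X]

for _ in range(20):                        # far fewer experiments than candidates
    surrogate.fit(featurize.fit_transform(np.array(X)), y)
    mu, std = surrogate.predict(featurize.transform(candidates), return_std=True)
    x_next = candidates[int(np.argmax(mu + std))]   # optimistic acquisition
    X.append(x_next)
    y.append(run_experiment(x_next))

# Gray-box readout: fitted coefficients expose which promoters, and which
# promoter pairs, drive performance.
for name, coef in zip(featurize.get_feature_names_out(), surrogate.coef_):
    if abs(coef) > 0.3:
        print(name, round(float(coef), 2))
```

The design choice doing the work here is the interaction-only surrogate: a black-box model might fit the data equally well, but it would not hand back per-promoter effects and synergies the way the fitted coefficients do.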

Understanding and Efficiency Combined

Beyond rapidly identifying a catalyst superior to the current industry reference, the approach translated the improved performance into a language understandable to chemists. It highlighted the effects of individual promoters contained in the identified catalyst, and in particular synergistic interactions between them that had been missed in previous traditional studies. At the same time, the method remained highly efficient: fewer than 50 experiments were needed to search a design space of more than 10¹³ (ten trillion) possible promoter combinations.

Overall, the study demonstrates that AI and automation in chemistry do not have to come at the expense of understanding. Thoughtfully designed, these technologies can transform materials development, moving from simply finding better solutions to truly understanding them. Ultimately, this positions AI as an agentic partner in scientific discovery rather than merely an efficient but opaque tool.

 

New approach finds privacy vulnerability and performance are intertwined in AI neural networks




North Carolina State University





Researchers have discovered that some of the elements of AI neural networks that contribute to data-privacy vulnerabilities are also key to the performance of those models. The researchers used this new information to develop a technique that better balances performance and privacy protection in these models.

The findings involve protecting neural networks against membership inference attacks (MIAs), which are techniques that allow attackers to determine whether a particular piece of data was used to train a specific AI model.

“MIAs can jeopardize the privacy of individuals whose data was part of the training dataset,” says Xingli Fang, first author of a paper on the work and a Ph.D. student at North Carolina State University. “For example, if an attacker has partial data from an individual, they could use an MIA to determine if an AI model was trained using data from that individual.”

“And if the individual’s data was used to train that model, the attacker could then infer the rest of the user’s information,” says Jung-Eun Kim, corresponding author of the paper and an assistant professor of computer science at NC State. “Basically, MIAs pose a privacy vulnerability.”
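As a rough illustration of the genre (a classic loss-threshold baseline, not the two state-of-the-art attacks evaluated in the paper), an MIA can be as simple as checking whether a model’s loss on a candidate sample is suspiciously low, since models tend to fit their training data more closely than unseen data. The names below are illustrative:

```python
# Minimal loss-threshold membership inference sketch (a classic baseline,
# not the attacks evaluated in the paper). Assumes a trained classifier
# exposing predict_proba, e.g., any scikit-learn model.
import numpy as np

def sample_loss(model, x, y_true):
    """Cross-entropy loss of the model on one labeled sample."""
    p = model.predict_proba(x.reshape(1, -1))[0, y_true]
    return -np.log(p + 1e-12)

def infer_membership(model, x, y_true, threshold):
    # Low loss suggests the sample was seen during training. The threshold
    # would be calibrated on known members and non-members.
    return sample_loss(model, x, y_true) < threshold
```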

To understand what the researchers learned, you have to understand “weight parameters.” Weight parameters are an important component of AI neural networks, such as large language models. Essentially, weight parameters serve as the synapses that link all of the neurons in the model together, and data inputs travel through these weight parameters as the model takes the data and produces an output.
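For a concrete sense of what these look like in code, the sketch below (illustrative, using PyTorch) builds a small two-layer network and counts its weight parameters; each nn.Linear layer’s weight matrix holds the “synapses” connecting one layer’s neurons to the next:

```python
# Illustrative only: weight parameters in a tiny feed-forward network.
import torch.nn as nn

net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Matrices (dim > 1) are the weights; 1-D tensors are the biases.
n_weights = sum(p.numel() for p in net.parameters() if p.dim() > 1)
print(n_weights)  # 784*128 + 128*10 = 101,632 weight parameters
```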

“When we started this project, we wanted to get a better understanding of which weight parameters in a model are most important for protecting privacy and which weight parameters are most important for performance,” says Kim. “It was fundamental AI research.”

“We found that only a few weight parameters represent a significant privacy vulnerability,” says Fang. “However, we were surprised to learn that the vulnerable weight parameters are also among the most important weight parameters when it comes to performance. This means it is extremely difficult to reduce vulnerability risk without also hurting performance.

“However, we were able to use our new insights to develop a novel approach for improving data privacy by modifying the weight parameters and going through a fine-tuning process to adjust the model.”
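This release does not spell out the authors’ exact procedure, but the general shape of such an approach can be sketched: pick out a small fraction of high-influence weights, shrink or zero them, then fine-tune to recover accuracy. Selecting weights by magnitude, as below, is purely an illustrative stand-in for the paper’s actual criterion for privacy-vulnerable weights:

```python
# Hedged sketch: dampen a small set of "critical" weights, then fine-tune.
# Magnitude-based selection is an assumption for illustration, not the
# authors' criterion for identifying privacy-vulnerable weights.
import torch

def dampen_critical_weights(model, fraction=0.001, scale=0.0):
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() < 2:  # skip biases
                continue
            k = max(1, int(fraction * p.numel()))
            thresh = torch.topk(p.abs().flatten(), k).values.min()
            mask = p.abs() >= thresh
            p[mask] *= scale  # zero (or shrink) the selected weights

# Afterwards, fine-tune the model for a few epochs to restore accuracy
# while aiming to reduce membership leakage.
```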

To test the new approach, the researchers compared their privacy protection technique to four other techniques to see how they performed when defending against two state-of-the-art MIAs.

“We found that our approach achieves a better balance of privacy and performance relative to the previous techniques,” says Kim. “We’re happy to talk with anyone in the field about how to incorporate this approach into their training.”

The paper, “Learnability and Privacy Vulnerability Are Entangled in a Few Critical Weights,” will be presented at the Fourteenth International Conference on Learning Representations (ICLR 2026), being held April 23-27 in Rio de Janeiro, Brazil.