Monday, December 29, 2025

EMPTINESS

Not thinking about anything: Toward a brain signature of mind blanking




Institut du Cerveau (Paris Brain Institute)

Image: Mind blanking. Credit: Ana Yael.





When we are awake, we seem to experience a continuous stream of sensations, reflections, memories, and impressions that make up our mental life. Yet some people report moments when they think about nothing at all. Is that even possible? Or is it an illusion caused by a memory bias?

“Mind blanking is defined as the complete absence of mental content that can be described to others. No mental images, no catchy tune looping in your head, no obsessive thoughts... nothing! This experience is often sought after by practitioners of meditation[1] or mindfulness. But it isn’t confined to them: it seems to be very common after intense, prolonged cognitive effort—such as a university exam—or in cases of sleep deprivation,” explains Esteban Munoz-Musat, neurologist and former doctoral student in the Picnic Lab at Paris Brain Institute.

The definition of mind blanking is still debated within the scientific community. Hence, there is a need to better characterize this phenomenon, which could teach us more about the richness of our subjective experiences.

“Mind blanking also appears in the clinical profile of certain psychiatric conditions, such as generalized anxiety disorder, and it seems more frequent in people with attention-deficit/hyperactivity disorder (ADHD). Studying it closely might help us better understand these conditions,” says the researcher.

A novel description of the neural substrate of mind blanking

To investigate further, Esteban Munoz-Musat, Lionel Naccache, Thomas Andrillon, and their colleagues recruited 62 healthy volunteers. The participants performed cognitive exercises designed to track fluctuations in attention during a long, tedious task. At the same time, their brain activity was recorded using high-density electroencephalography (hdEEG), and their behavior was carefully monitored.

The results show that episodes of mind blanking reported by participants were associated with specific neurophysiological markers and behavioral patterns.

During these moments, connectivity between distant neural networks decreased, and visual information processing was disrupted. In particular, “late” visual processing (250–300 ms after exposure to a stimulus—a time window considered in some models to reflect the conscious stage of visual processing) was largely absent. Participants were also slightly drowsy, slower, and more error-prone.

“These observations suggest that during a mind blanking episode, participants had reduced access to sensory information from their environment,” explains Thomas Andrillon, senior author of the study. “These new data support an emerging idea: being awake does not necessarily mean being conscious of something. Mind blanking corresponds to a genuine interruption in the stream of thoughts.”

A momentary loss of consciousness?

Recent work shows that the fluctuations in consciousness we experience during the day and night are complex and do not coincide with the classic dichotomy between wakefulness and sleep.

For example, some individuals are capable of lucid dreaming—that is, they are aware that they are dreaming—while in REM sleep. Perhaps mind blanking is the opposite experience: a temporary loss of consciousness during wakefulness.

“Mind blanking is likely an extremely common occurrence, during which certain brain regions briefly slip into a sleep-like state. We estimate it accounts for 5 to 20% of waking hours, although there are significant differences between individuals,” notes the researcher.

The study also shows that, at the neurophysiological level, mind blanking is distinct from two other mental states: intense concentration on a task (on-task) and mind wandering, in which attention withdraws from the external environment and turns to thoughts unrelated to the current context.

“Our findings suggest that the structure of conscious experience is more like a mosaic of discrete states than a continuous mental film. A mosaic in which the absence of certain tiles results in brief moments of unconsciousness when the subject is awake,” concludes Lionel Naccache, neurologist and co-lead of the Picnic Lab.

Future research will determine whether mind blanking could be used in the clinical description of certain neurological or psychiatric disorders. Above all, it opens new avenues for understanding consciousness and attention.

 


[1] In certain spiritual traditions, such as Buddhism, this is called a state of cessation—or nirodha in Sanskrit.

 

HKUST researchers unlock why Arctic ice melt paused



Accelerated Arctic collapse forecast post-2040 without emission cuts




Hong Kong University of Science and Technology





A research team led by The Hong Kong University of Science and Technology (HKUST) scholars has discovered a significant slowdown in Arctic sea ice melting since 2012, with the rate of decline dropping from 11.3% per decade to an insignificant downward trend of only 0.4% per decade. This phenomenon is closely related to a shift in the North Atlantic Oscillation (NAO) from its negative phase to its positive phase, which traps cold air within the Arctic region. The positive phase is projected to peak between 2030 and 2040, after which the Arctic could enter a new phase of accelerated ice melt. Without reductions in greenhouse gas emissions, this may trigger severe climate and environmental crises within decades.

The groundbreaking study, titled “Recent slowing of Arctic sea ice melt tied to multidecadal NAO variability”, has been published in Nature Communications. It was led by Prof. SU Hui, Chair Professor in the Department of Civil and Environmental Engineering and Global STEM Professor at HKUST; Prof. ZHAI Chengxing, Associate Professor in the Division of Emerging Interdisciplinary Areas at HKUST; and Dr. WANG Cen, postdoctoral fellow in the Department of Civil and Environmental Engineering.

By analyzing multiple observational Arctic sea ice concentration (SIC) datasets, the research team uncovered striking trends. Since 1970, SIC has decreased sharply, with melt rates accelerating in the 1990s and sea ice reaching a historic low in September 2012. Despite record global temperatures since 2014, the pace of Arctic ice loss slowed dramatically, from 11.3% per decade between 1996 and 2011 to just 0.4% per decade after 2012.

To decode this paradox, the team investigated the connection between internal atmospheric variability and multidecadal variability in Arctic sea ice, revealing a critical link between sea ice and the NAO (the difference in atmospheric pressure between the Azores and Iceland). First author Dr. Wang Cen remarked, “Data show that between 1990 and the early 2010s, the NAO evolved towards its peak negative phase, and summer anomalies of air temperature, water vapor, and downwelling longwave radiation at the surface changed from negative to positive, promoting a rapid decline of Arctic sea ice. However, after 2012, the NAO transitioned to a positive phase, reversing these conditions. This led to an increasing tendency in sea ice extent on interdecadal scales that counteracts the long-term decline caused by persistent global warming.”
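For context, one common station-based definition of the NAO index (a textbook convention, not a formula taken from the paper) is the difference of normalized sea-level pressure anomalies between an Azores station and an Icelandic station:

\mathrm{NAO}(t) = \frac{p_{\mathrm{Azores}}(t) - \overline{p}_{\mathrm{Azores}}}{\sigma_{\mathrm{Azores}}} - \frac{p_{\mathrm{Iceland}}(t) - \overline{p}_{\mathrm{Iceland}}}{\sigma_{\mathrm{Iceland}}}

Here p is sea-level pressure, overbars denote climatological means, and \sigma the corresponding standard deviations; a positive index corresponds to a stronger-than-usual Azores-Iceland pressure gradient.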

Prof. Su Hui, an atmospheric science expert, further elaborated: “The NAO focuses on the North Atlantic region, from North Africa and the Mediterranean to Northern Europe. It is inseparable from the Arctic Oscillation (AO), which governs high-latitude westerly winds around the Arctic. When the NAO is in its positive phase, stronger westerlies over the North Atlantic intensify storm activity. Simultaneously, the AO enters its positive phase, lowering average Arctic pressure, cooling the air, and trapping frigid polar air within the Arctic via powerful westerlies.”

Prof. Zhai Chengxing warned, “Our projections indicate the NAO’s positive phase is likely to extend until roughly 2030–2040, followed by a phase of accelerated Arctic sea ice decline once the NAO passes its peak positive phase. In the absence of greenhouse gas emission reductions, we may encounter a sequence of climate and environmental crises triggered by a sharp reduction in Arctic sea ice within roughly a decade.”

The study also involves postdoctoral fellow Dr. YU Shiwei and PhD students MO Huisi and WANG Yanjia from the Department of Civil and Environmental Engineering, as well as collaborators from the School of Earth and Space Sciences at the University of Science and Technology of China.

AI points toward a new paradigm for fully automated processor chip design


Researchers propose an AI-driven framework that aims to enable fully automated processor chip design, offering a pathway to more efficient and customizable chips.



Science China Press

Image: Potential framework for fully automated processor chip design, including three core components: a) a domain-specific LLM to comprehend the specification and generate a primary design; b) an automated repair mechanism based on functional verification to guarantee the design’s correctness; c) an automated search mechanism based on performance feedback to address the problem of the enormous solution space. Credit: ©Science China Press.



Processor chips are the basic engines of the digital world, powering everything from smartphones and personal computers to cloud servers and Internet of Things (IoT) devices. As demand for computing continues to soar, chip design has become a critical bottleneck: it is slow, expensive, and heavily dependent on scarce human expertise.

In a new Perspective article in National Science Review, researchers from the State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, argue that incremental automation is no longer enough. They call for a fully automated processor chip design paradigm that can take high-level functional requirements and automatically deliver verified, high-performance hardware and software stacks.

Traditional electronic design automation (EDA) tools and recent AI methods have significantly accelerated individual design steps such as logic synthesis, placement, routing and design-space exploration. However, most current AI-driven approaches act as local optimizers inside a conventional flow. They improve efficiency at specific stages but do not fundamentally change how entire chips are conceived and built, and thus cannot keep pace with exploding demand and growing design complexity.

The authors identify three challenges that block the path toward fully automated processor design:

1) Specification comprehension: In real projects, requirements for a processor are usually written in informal, sometimes ambiguous natural language, while existing tools expect precise formal inputs such as C/C++ or hardware description languages (HDLs) like Verilog and VHDL. Bridging this gap still requires substantial manual work by experts.

2) Correctness guarantee: Processor chips must meet extremely strict correctness standards. For example, the functional verification of a modern CPU may target 99.99999999999% correctness or higher. Yet large language models (LLMs), which operate on probabilistic generation, cannot directly satisfy such deterministic guarantees.

3) Enormous solution space: A processor design spans foundational software, logic and circuit design, and physical implementation. Modeling this at the bitstream level leads to an astronomically large design space. For example, a 32-bit CPU possesses a solution space whose size is approximately 10^(10^540).

To overcome these challenges, the Perspective proposes a three-part framework centered on a domain-specialized “Large Processor Chip Model”:

1) Domain-specific LLM for specification comprehension

The first component is a large language model trained specifically on processor design data. Its job is to read informal natural-language specifications, resolve ambiguities, and generate an initial formal design in HDLs or other suitable representations. Because high-quality training data for processor design is scarce, the authors highlight the role of LLM-based data synthesis and cross-verification to automatically build better corpora at scale—an approach already shown effective in recent reasoning-enhanced RTL design work.
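As a rough illustration of what cross-verified data synthesis could look like (a minimal Python sketch, not the authors' pipeline; synthesize_hdl and simulate are hypothetical callables supplied by the caller), one can sample several candidate implementations per specification, simulate them on shared test vectors, and keep only the pairs whose behaviour matches the consensus:

from collections import Counter

def build_training_corpus(seed_specs, test_vectors, synthesize_hdl, simulate, n_samples=4):
    """Collect (spec, HDL) training pairs, keeping only cross-verified candidates.

    synthesize_hdl(spec) and simulate(hdl, test_vectors) are hypothetical callables
    standing in for LLM sampling and HDL simulation; neither is a real API.
    """
    corpus = []
    for spec in seed_specs:
        candidates = [synthesize_hdl(spec) for _ in range(n_samples)]
        traces = [tuple(simulate(hdl, test_vectors)) for hdl in candidates]
        consensus, _ = Counter(traces).most_common(1)[0]      # majority behaviour
        corpus.extend((spec, hdl) for hdl, trace in zip(candidates, traces)
                      if trace == consensus)
    return corpus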

2) Automated repair driven by functional verification

The second component addresses correctness. Instead of trusting a single model output, the framework integrates automatic verification tools to check intermediate designs, and uses their feedback to repair errors. When verification detects a functional bug, the system rolls back to a previously verified version and regenerates the faulty part based on error signals, iterating until the design passes all checks. This idea has already been validated in the fully automated CPU “Enlightenment-1” (QiMeng-CPU-v1), whose logic is represented with a novel graph structure called the Binary Speculation Diagram (BSD). Using Boolean distance as a verification metric and BSD expansion for repair, QiMeng-CPU-v1 reportedly reaches over 99.99999999999% functional accuracy and can successfully boot Linux, providing a concrete proof-of-concept for correctness-aware automation.
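A minimal sketch of this verify-and-repair loop is given below, assuming a modular decomposition of the design; plan_modules, generate_module, verify, and repair_module are hypothetical callables, and this is not the BSD-based algorithm used in QiMeng-CPU-v1:

def build_verified_design(spec, plan_modules, generate_module, verify, repair_module,
                          max_repairs=10):
    """Assemble a design incrementally, keeping only modules that pass verification.

    All four callables are hypothetical placeholders: plan_modules(spec) decomposes
    the specification, generate_module/repair_module stand in for LLM generation,
    and verify(design) returns (passed, error_report).
    """
    verified = []                                   # prefix that has passed all checks
    for module_spec in plan_modules(spec):
        candidate = generate_module(module_spec, context=verified)
        for _ in range(max_repairs):
            passed, report = verify(verified + [candidate])
            if passed:
                break
            # roll back to the verified prefix and regenerate only the faulty module
            candidate = repair_module(module_spec, report, context=verified)
        else:
            raise RuntimeError("module did not pass verification within the repair budget")
        verified.append(candidate)
    return verified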

3) Performance-feedback-driven search in an enormous solution space

The third component tackles performance optimization under a huge design space. The authors suggest organizing candidate designs as a hierarchical search tree. Performance predictions or real measurements are fed back at intermediate nodes, allowing the system to prune poor branches and focus exploration on promising regions. Similar search-with-feedback ideas have already been applied to automated foundational software design. Systems such as QiMeng-TensorOp and QiMeng-Xpiler use Monte Carlo Tree Search (MCTS) guided by real execution time to automatically generate high-performance tensor operators and to transcompile tensor programs across platforms. Extending such performance-aware search frameworks from software into full processor design is expected to dramatically reduce the effective solution space while still discovering highly optimized designs.
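The pruning idea can be illustrated with a deliberately simplified sketch: candidates are expanded level by level in a search tree, intermediate nodes are scored with predicted or measured performance, and only the best-scoring branches survive. This is a plain beam-style search, not the MCTS used by QiMeng-TensorOp and QiMeng-Xpiler; expand and score are hypothetical callables.

def search_design_space(root, expand, score, depth, beam_width=4):
    """Explore a hierarchical design tree, pruning branches with poor performance.

    expand(node) returns refined candidate designs and score(node) returns a
    performance prediction or measurement; both are hypothetical placeholders.
    """
    frontier = [root]
    best = root
    for _ in range(depth):
        children = [child for node in frontier for child in expand(node)]
        if not children:
            break
        children.sort(key=score, reverse=True)       # rank by performance feedback
        frontier = children[:beam_width]             # prune poor branches
        best = max([best] + frontier, key=score)
    return best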

Importantly, the authors emphasize that this fully automated framework is not meant to replace the existing EDA ecosystem. Instead, AI models can orchestrate and call mature tools—such as logic optimizers, floorplanning and placement engines, and formal verification suites—by generating appropriate scripts and constraints. In this way, a “large processor chip model” acts as an intelligent conductor sitting on top of today’s design tools, coordinating them to deliver end-to-end automated solutions.

 

The State Key Laboratory of Processors, housed at the Institute of Computing Technology of the Chinese Academy of Sciences (ICT, CAS), is one of the first national key laboratories formally approved for construction by CAS. The laboratory is chaired by Academician Ninghui Sun, who serves as the Chair of the Academic Committee, and is directed by Prof. Yunji Chen. In recent years, the laboratory has achieved a series of landmark accomplishments. It received the first-ever National Natural Science Award in the field of processor chips, along with six national-level science and technology awards. The laboratory consistently ranks first in China in terms of publications at leading international conferences in processor architecture and chip design. Internationally, it has pioneered several influential research directions, including deep learning processors, which have since become major global research hotspots. The laboratory has also played a pivotal role in fostering China’s domestic processor ecosystem: it has directly or indirectly incubated several leading Chinese processor companies with a combined market valuation of hundreds of billions of RMB, significantly advancing the country’s strategic capabilities in high-performance and AI-oriented chip technologies.

 

Journey to the Center of a Quantized Vortex



CNR-INO
Image: Investigating quantum vortex dynamics (artistic illustration). Credit: Illustration by Giulia Del Pace.





Step inside the strange world of a superfluid, a liquid that can flow endlessly without friction, defying the common-sense rules we experience every day, where water pours, syrup sticks and coffee swirls and slows under the effect of viscosity. In these extraordinary fluids, motion often organizes itself into quantized vortices: tiny, long-lived whirlpools that act as the fundamental building blocks of superfluid flow.

An international study conducted at the European Laboratory for Non-Linear Spectroscopy (LENS), involving researchers from CNR-INO, the Universities of Florence, Bologna, Trieste, Augsburg, and the Warsaw University of Technology, has embarked on this journey by investigating the dynamics of vortices within strongly interacting superfluids, uncovering the fundamental mechanisms that govern their behavior. Using ultracold atomic gases, the scientists open a unique window into this exotic realm, recreating conditions similar to those found in superfluid helium-3, the interiors of neutron stars, and superconductors.

“In a superfluid, vortices are stable objects because decay is suppressed,” says Nicola Grani, PhD in Physics and Astronomy at the University of Florence and first author of the publication. “However, dissipation of the superfluid flow can still arise from internal microscopic forces acting directly on the vortices. These forces stem from the interplay between the superfluid and normal components in finite-temperature superfluids, giving rise to the so-called mutual friction. Vortices therefore play a crucial role in determining the efficiency of current transport, and their dynamics can be used as a sensitive probe of the microscopic mechanisms governing mutual friction.”
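The spiraling dynamics discussed here are often described with the standard phenomenological (Hall-Vinen) mutual-friction relation for the velocity of a vortex line (a textbook form, not an equation taken from this study):

\mathbf{v}_L = \mathbf{v}_s + \alpha\,\hat{\mathbf{s}} \times (\mathbf{v}_n - \mathbf{v}_s) - \alpha'\,\hat{\mathbf{s}} \times \left[ \hat{\mathbf{s}} \times (\mathbf{v}_n - \mathbf{v}_s) \right]

where \mathbf{v}_s and \mathbf{v}_n are the local superfluid and normal-component velocities, \hat{\mathbf{s}} is the unit vector along the vortex line, and \alpha, \alpha' are temperature-dependent mutual-friction coefficients; a nonzero dissipative coefficient \alpha is what turns a closed orbit into a slow spiral.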

The researchers have studied vortices in a strongly interacting fermionic superfluid gas of lithium atoms cooled to just ten billionths of a degree above absolute zero. In conventional systems, such as superfluid liquid helium or solid-state superconductors, experiments typically involve large numbers of interacting vortices arranged in complex and hard-to-control configurations. Ultracold atomic gases, by contrast, provide an exceptionally clean and highly programmable platform, enabling unprecedented control over vortex behavior.

“For this study, we used laser light to precisely excite quantized vortices by moving an optical potential, allowing us to engineer arbitrary vortex configurations,” explains Diego Hernández-Rajkov, a researcher at CNR-INO at LENS. “This level of control allows us to study the dynamics of a single vortex.” Using this approach, the researchers observed the spiraling motion of a vortex orbiting around another vortex pinned at the center of a disk-shaped superfluid. “By analyzing the vortex trajectories, we reconstructed the microscopic processes that regulate vortex motion and directly accessed its internal structure,” adds Diego Hernández-Rajkov. “The analysis revealed that, in the explored regime, vortex dynamics are influenced by quasiparticles trapped within the vortex core, occupying the so-called Caroli-de Gennes-Matricon states, providing the first indirect experimental evidence of their presence in this regime.”

The study, published in Nature Communications, opens new perspectives for understanding vortex dynamics in superfluids and superconductors. According to Giacomo Roati, LENS member, CNR-INO Director of Research, and head of the research group: “Ultracold atomic gases provide a unique platform to explore the exotic behaviors of superfluids. By precisely controlling these gases in the laboratory, we can recreate and study their fundamental properties in ways that are difficult or impossible in other systems. Understanding how these vortices move is essential for controlling energy dissipation and designing highly efficient new quantum devices. Our platform can also be extended to study systems with many interacting vortices, opening the door to controlled investigations of superfluid turbulence, a phenomenon with implications across physics, from fundamental research to advanced technology.”