A new theory of biological computation might explain consciousness
Estonian Research Council
Image: In conventional computing, we can draw a clean line between software and hardware. In brains, there is no such separation: everything influences everything else, from ion channels to electric fields to circuits to whole-brain dynamics. Credit: Borjan Milinkovic
Right now, the debate about consciousness often feels frozen between two entrenched positions. On one side sits computational functionalism, which treats cognition as something you can fully explain in terms of abstract information processing: get the right functional organization (regardless of the material it runs on) and you get consciousness. On the other side is biological naturalism, which insists that consciousness is inseparable from the distinctive properties of living brains and bodies: biology isn’t just a vehicle for cognition, it is part of what cognition is. Each camp captures something important, but the stalemate suggests that something is missing from the picture.
In our new paper, we argue for a third path: biological computationalism. The idea is deliberately provocative but, we think, clarifying. Our core claim is that the traditional computational paradigm is broken, or at least badly mismatched to how real brains operate. For decades, it has been tempting to assume that brains “compute” in roughly the same way conventional computers do: as if cognition were essentially software running atop neural hardware. But brains do not resemble von Neumann machines, and treating them as though they do forces us into awkward metaphors and brittle explanations. If we want a serious theory of how brains compute and what it would take to build minds in other substrates, we need to widen what we mean by “computation” in the first place.
Biological computation, as we describe it, has three defining properties.
First, it is hybrid: it combines discrete events with continuous dynamics. Neurons spike, synapses release neurotransmitters, and networks exhibit event-like transitions, yet all of this is embedded in evolving fields of voltage, chemical gradients, ionic diffusion, and time-varying conductances. The brain is not purely digital, and it is not merely an analog machine either. It is a layered system where continuous processes shape discrete happenings, and discrete happenings reshape continuous landscapes, in a constant feedback loop.
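To make the hybrid picture concrete, here is a minimal sketch (our illustration, not a model from the paper) of a leaky integrate-and-fire neuron: a continuous membrane voltage drifts under its input until a discrete threshold crossing produces a spike and a reset, and that discrete event in turn reshapes the continuous trajectory that follows. All parameter values are arbitrary.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: a toy illustration of how
# continuous dynamics and discrete events interleave.
# Continuous part: the membrane voltage v relaxes toward rest while being
# pushed by the input current.
# Discrete part: when v crosses threshold, a spike is emitted and v is reset,
# changing the continuous trajectory that follows.

dt = 0.1          # ms, integration step
tau = 10.0        # ms, membrane time constant
v_rest = -65.0    # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset = -70.0   # mV, post-spike reset

t = np.arange(0, 200, dt)
i_ext = 20.0 * (t > 50)          # step input current (arbitrary units)
v = np.full_like(t, v_rest)
spike_times = []

for k in range(1, len(t)):
    # continuous dynamics: leaky integration of the input
    dv = (-(v[k - 1] - v_rest) + i_ext[k - 1]) / tau
    v[k] = v[k - 1] + dv * dt
    # discrete event: threshold crossing produces a spike and a reset
    if v[k] >= v_thresh:
        spike_times.append(t[k])
        v[k] = v_reset

print(f"{len(spike_times)} spikes; first at {spike_times[0]:.1f} ms" if spike_times else "no spikes")
```

The point of the toy is only that the spikes and the voltage landscape are two faces of one process, not separate layers.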
Second, it is scale-inseparable. In conventional computing, we can draw a clean line between software and hardware, or between a “functional level” and an “implementation level.” In brains, that separation is not clean at all. There is no tidy boundary where we can say: here is the algorithm, and over there is the physical stuff that happens to realize it. The causal story runs through multiple scales at once, from ion channels to dendrites to circuits to whole-brain dynamics, and the levels do not behave like modular layers in a stack. Changing the “implementation” changes the “computation,” because in biological systems, those are deeply entangled.
Third, biological computation is metabolically grounded. The brain is an energy-limited organ, and its organization reflects that constraint everywhere. Importantly, this is not just an engineering footnote; it shapes what the brain can represent, how it learns, which dynamics are stable, and how information flows are orchestrated. In this view, tight coupling across levels is not accidental complexity. It is an energy optimization strategy: a way to produce robust, adaptive intelligence under severe metabolic limits.
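As a toy illustration of the metabolic point (our sketch, not anything from the paper), consider encoding the same input with and without an explicit energy cost on activity: adding the cost changes which representation the system settles on, recruiting far fewer active units. The dictionary, input, and cost value below are all made up for the example.

```python
import numpy as np

# Toy sketch: an energy budget shapes what gets represented. We encode one
# input twice, once with no cost on activity and once with an L1 "metabolic"
# cost, using plain ISTA (iterative soft thresholding).

rng = np.random.default_rng(2)
features = rng.standard_normal((16, 50))          # 50 candidate features, 16-dim
features /= np.linalg.norm(features, axis=0)      # unit-norm columns
x = features[:, :3] @ np.array([1.0, 0.8, 0.6])   # input built from 3 features

def encode(x, D, energy_cost, steps=500, lr=0.05):
    """Minimize ||x - D a||^2 / 2 + energy_cost * ||a||_1 by ISTA."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)
        a = a - lr * grad
        # soft threshold: the energy cost prunes weak activations
        a = np.sign(a) * np.maximum(np.abs(a) - lr * energy_cost, 0.0)
    return a

cheap = encode(x, features, energy_cost=0.0)
costly = encode(x, features, energy_cost=0.2)
print("active units, no energy cost:  ", int(np.sum(np.abs(cheap) > 1e-3)))
print("active units, with energy cost:", int(np.sum(np.abs(costly) > 1e-3)))
```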
These three properties lead to a conclusion that can feel uncomfortable if we are used to thinking in classical computational terms: computation in the brain is not abstract symbol manipulation. It is not simply a matter of shuffling representations according to formal rules, with the physical medium relegated to “mere implementation.” Instead, in biological computation, the algorithm is the substrate. The physical organization does not just support the computation; it constitutes it. Brains don’t merely run a program. They are a particular kind of physical process that performs computation by unfolding in time.
This also highlights a key limitation in how we often talk about contemporary AI. Current systems, for all their power, largely simulate functions. They approximate mappings from inputs to outputs, often with impressive generalization, but the computation is still fundamentally a digital procedure executed on hardware designed for a very different computational style. Brains, by contrast, instantiate computation in physical time. Continuous fields, ion flows, dendritic integration, local oscillatory coupling, and emergent electromagnetic interactions are not just biological “details” we might safely ignore while extracting an abstract algorithm. In our view, these are the computational primitives of the system. They are the mechanism by which the brain achieves real-time integration, resilience, and adaptive control.
This does not mean we think consciousness is magically exclusive to carbon-based life. We are not making a “biology or nothing” argument. What we are claiming is more specific: if consciousness (or mind-like cognition) depends on this kind of computation, then it may require biological-style computational organization, even if it is implemented in new substrates. In other words, the crucial question is not whether the substrate is literally biological, but whether the system instantiates the right class of hybrid, scale-inseparable, metabolically (or more generally energetically) grounded computation.
That shift changes the target for anyone interested in synthetic minds. If the brain’s computation is inseparable from the way it is physically realized, then scaling digital AI alone may not be sufficient. Not because digital systems can’t become more capable, but because capability is only part of the story. The deeper challenge is that we might be optimizing the wrong thing: improving algorithms while leaving the underlying computational ontology untouched. Biological computationalism suggests that to engineer genuinely mind-like systems, we may need to build new kinds of physical systems: machines whose computing is not layered neatly into software on hardware, but distributed across levels, dynamically coupled, and grounded in the constraints of real-time physics and energy.
So, if we want something like synthetic consciousness, the problem may not be, “What algorithm should we run?” The problem may be, “What kind of physical system must exist for that algorithm to be inseparable from its own dynamics?” What are the necessary features—hybrid event–field interactions, multi-scale coupling without clean interfaces, energetic constraints that shape inference and learning—such that computation is not an abstract description laid on top, but an intrinsic property of the system itself?
That is the shift biological computationalism demands: moving from a search for the right program to a search for the right kind of computing matter.
Journal
Neuroscience & Biobehavioral Reviews
Method of Research
Literature review
Subject of Research
People
Article Title
On biological and artificial consciousness: A case for biological computationalism
To flexibly organize thought, the brain makes use of space
In a new study, MIT researchers tested their theory of Spatial Computing, which holds that the brain recruits and controls ad hoc groups of neurons for cognitive tasks by applying brain waves to patches of the cortex
Picower Institute at MIT
Image: Spatial Computing theory holds that the brain recruits and controls ad hoc groups of neurons for cognitive tasks by applying brain waves to patches of the cortex. Credit: David Orenstein/MIT Picower Institute
Our thoughts are specified by our knowledge and plans, yet our cognition can also be fast and flexible in handling new information. How does the well-controlled and yet highly nimble nature of cognition emerge from the brain’s anatomy of billions of neurons and circuits? A new study by researchers in The Picower Institute for Learning and Memory at MIT provides new evidence from tests in animals that the answer might be a theory called “Spatial Computing.”
First proposed in 2023 by Picower Professor Earl K. Miller and colleagues Mikael Lundqvist and Pawel Herman, Spatial Computing theory explains how neurons in the prefrontal cortex can be organized on the fly into a functional group capable of carrying out the information processing required by a cognitive task. Moreover, it allows for neurons to participate in multiple such groups, as years of experiments have shown that many prefrontal neurons can indeed participate in multiple tasks at once. The basic idea of the theory is that the brain recruits and organizes ad hoc “task forces” of neurons by using “alpha” and “beta” frequency brain waves (about 10-30 Hz) to apply control signals to physical patches of the prefrontal cortex. Rather than having to rewire themselves into new physical circuits every time a new task must be done, the neurons in the patch instead process information by following the patterns of excitation and inhibition imposed by the waves.
Think of the alpha and beta frequency waves as stencils that shape when and where in the prefrontal cortex groups of neurons can take in or express information from the senses, Miller said. In that way, the waves represent the rules of the task and can organize how the neurons electrically “spike” to process the information content needed for the task.
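As a rough sketch of that stencil picture (our illustration, not the authors' model), imagine a grid of cortical patches in which alpha/beta power acts as an inhibitory mask: sensory-driven spiking is expressed only where the mask is weak. The grid size, power pattern, and gating rule below are all hypothetical.

```python
import numpy as np

# Toy sketch of the "stencil" idea: alpha/beta power over a grid of cortical
# patches acts as an inhibitory mask, and sensory-driven spiking is expressed
# only where that power is low.

rng = np.random.default_rng(0)
n_patches = (8, 8)

# Hypothetical control signal: high alpha/beta power over half the grid
# stands in for the current task rule ("suppress these patches").
alpha_beta_power = np.zeros(n_patches)
alpha_beta_power[:, :4] = 1.0

# Hypothetical sensory drive arriving everywhere.
sensory_drive = rng.uniform(0.5, 1.0, size=n_patches)

# Spiking gain falls where wave power is high (the stencil blocks expression).
spike_rate = sensory_drive * (1.0 - alpha_beta_power)

print("mean rate under high power:", spike_rate[:, :4].mean())
print("mean rate under low power: ", spike_rate[:, 4:].mean())
```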
“Cognition is all about large-scale neural self-organization,” said Miller, senior author of the paper in Current Biology and a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Spatial Computing explains how the brain does that.”
Testing five predictions
A theory is just an idea. In the study, lead author Zhen Chen and other current and former members of Miller’s lab put Spatial Computing to the test by examining whether five predictions it makes about neural activity and brain-wave patterns were actually evident in measurements made in the prefrontal cortex of animals as they engaged in two working memory tasks and one categorization task. Across the tasks there were distinct pieces of sensory information to process (e.g. “a blue square appeared on the screen followed by a green triangle”) and rules to follow (e.g. “when new shapes appear on the screen, do they match the shapes I saw before and appear in the same order?”).
The first two predictions were that alpha and beta waves should represent task controls and rules, while the spiking activity of neurons should represent the sensory inputs. When the researchers analyzed the brain-wave and spiking readings gathered by the four electrode arrays implanted in the cortex, they found that both predictions held. Neural spikes, but not the alpha/beta waves, carried sensory information. Both spikes and the alpha/beta waves carried task information, but it was strongest in the waves, and it peaked at the times when rules were needed to carry out the tasks.
Notably, in the categorization task the researchers purposely varied the level of abstraction to make categorization more or less cognitively difficult. The researchers saw that the greater the difficulty, the stronger the alpha/beta wave power was, further showing that it carries task rules.
The next two predictions were that alpha/beta power would be spatially organized, and that when and where it was strong, the sensory information represented by spiking would be suppressed, while where and when it was weak, spiking would increase. These predictions also held true in the data. Under the electrodes, Chen, Miller, and the team could see distinct spatial patterns of higher or lower wave power, and where power was high, the sensory information in spiking was low, and vice versa.
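A generic sketch of this kind of analysis (our illustration, not the study's actual pipeline): estimate 10-30 Hz power from a field potential, bin spikes into short windows, and ask whether spiking is lower when that power is higher. The signals here are synthetic, constructed so the predicted inverse relationship holds.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Generic power-vs-spiking analysis on synthetic data: alpha/beta power is
# estimated from a local field potential and compared with spike counts.

fs = 1000.0                            # Hz, sampling rate
t = np.arange(0, 10, 1 / fs)           # 10 s of data
rng = np.random.default_rng(1)

# Synthetic LFP: a 20 Hz (beta) oscillation whose amplitude waxes and wanes,
# plus noise. In a real analysis this would be a recorded signal.
beta_envelope = 1.0 + np.sin(2 * np.pi * 0.2 * t)
lfp = beta_envelope * np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic spike train whose rate is suppressed when beta power is high.
rate = 20.0 * (2.0 - beta_envelope)            # spikes/s
spikes = rng.random(t.size) < rate / fs        # Bernoulli approximation

# Band-pass 10-30 Hz and take the squared Hilbert envelope as instantaneous power.
b, a = butter(4, [10, 30], btype="bandpass", fs=fs)
power = np.abs(hilbert(filtfilt(b, a, lfp))) ** 2

# Compare in 250 ms windows: high power should go with low spike counts.
win = int(0.25 * fs)
n_win = t.size // win
power_w = power[: n_win * win].reshape(n_win, win).mean(axis=1)
spikes_w = spikes[: n_win * win].reshape(n_win, win).sum(axis=1)
print("power-vs-spiking correlation:", np.corrcoef(power_w, spikes_w)[0, 1])
```

A negative correlation in a sketch like this corresponds to the prediction that spiking is suppressed where and when alpha/beta power is high.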
Finally, the researchers predicted that if Spatial Computing is valid, then trial-by-trial alpha/beta power and timing should correlate with the animals’ performance. Sure enough, there were significant differences in the signals on trials where the animals performed the tasks correctly versus trials where they made mistakes. In particular, the measurements predicted mistakes caused by misapplying task rules rather than by misreading sensory information. For instance, alpha/beta discrepancies pertained to the order in which stimuli appeared (first square, then triangle) rather than to the identity of the individual stimuli (square or triangle).
Compatible with findings in humans
By experimenting with animals, the researchers were able to make direct measurements of individual neural spikes as well as brain waves, but in the paper, they note that other studies in humans report some similar findings. For instance, studies using non-invasive EEG and MEG brain wave readings show that humans use alpha oscillations to inhibit activity in task-irrelevant areas under top-down control and that alpha oscillations appear to govern task-related activity in the prefrontal cortex.
While Miller said he finds the results of the new study, and their intersection with human studies, encouraging, he acknowledges that more evidence is still needed. For instance, his lab has shown that brain waves typically do not oscillate in place like a jump rope but instead travel across areas of the brain. Spatial Computing should account for that, he said.
In addition to Chen and Miller, the paper’s other authors are Scott Brincat, Mikael Lundqvist, Roman Loonis and Melissa Warden.
The Office of Naval Research, The Freedom Together Foundation and The Picower Institute for Learning and Memory funded the study.
Journal
Current Biology
Method of Research
Experimental study
Subject of Research
Animals
Article Title
Oscillatory Control of Cortical Space as a Computational Dimension
Article Publication Date
22-Dec-2025