Scientists on ‘urgent’ quest to explain consciousness as AI gathers pace
As AI—and the ethical debate surrounding it—accelerates, scientists argue that understanding consciousness is now more urgent than ever
Frontiers
Researchers writing in Frontiers in Science warn that advances in AI and neurotechnology are outpacing our understanding of consciousness—with potentially serious ethical consequences.
They argue that explaining how consciousness arises—which could one day lead to scientific tests to detect it—is now an urgent scientific and ethical priority. Such an understanding would have major implications for AI, prenatal policy, animal welfare, medicine, mental health, law, and emerging neurotechnologies such as brain–computer interfaces.
“Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society—and for understanding what it means to be human,” said lead author Prof Axel Cleeremans from Université Libre de Bruxelles. “Understanding consciousness is one of the most substantial challenges of 21st-century science—and it’s now urgent due to advances in AI and other technologies.
“If we become able to create consciousness—even accidentally—it would raise immense ethical challenges and even existential risk,” added Cleeremans, a European Research Council (ERC) grantee.
Sentience test
Consciousness—the state of being aware of our surroundings and of ourselves—remains one of science’s deepest mysteries. Despite decades of research, there is still no consensus over how subjective experience arises from biological processes.
While scientists have made progress in identifying the brain areas and neural processes involved in consciousness, there is still controversy over which of them are necessary for consciousness and how exactly they contribute to it. Some even question whether this is the right way to frame the challenge.
This new review explores where consciousness science stands today, where it could go next, and what might happen if humans succeed in understanding or even creating consciousness—whether in machines or in lab-grown, brain-like systems such as “brain organoids.”
The authors say that tests for consciousness—evidence-based ways to judge whether a being or a system is aware—could help identify awareness in patients with brain injury or dementia, and determine when it arises in fetuses, animals, brain organoids, or even AI.
While this would mark a major scientific breakthrough, they warn it would also raise profound ethical and legal challenges about how to treat any system shown to be conscious.
“Progress in consciousness science will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world,” said co-author Prof Anil Seth from the University of Sussex and ERC grantee. “The question of consciousness is ancient—but it’s never been more urgent than now.”
Wide implications
A better understanding of consciousness could:
- transform medical care for unresponsive patients once thought to be unconscious. Measurements inspired by integrated information theory and global workspace theory[1] have already revealed signs of awareness in some people diagnosed as having unresponsive wakefulness syndrome. Further progress could refine these tools to assess consciousness in coma, advanced dementia, and anesthesia—and reshape how we approach treatment and end-of-life care (a toy sketch of this kind of complexity-based measure appears after this list)
- guide new therapies for mental health conditions such as depression, anxiety, and schizophrenia, where understanding the biology of subjective experience may help bridge the gap between animal models and human emotion
- clarify our moral duty towards animals by identifying which creatures and systems are sentient. This could affect how we conduct animal research, farm animals, consume animal products, and approach conservation. “Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems that are being synthetically generated by scientists,” said co-author Prof Liad Mudrik from Tel Aviv University and ERC grantee.
- reframe how we interpret the law by illuminating the conscious and unconscious processes involved in decision-making. New understanding could challenge legal ideas such as mens rea—the “guilty mind” required to establish intent. As neuroscience reveals how much of our behavior arises from unconscious mechanisms, courts may need to reconsider where responsibility begins and ends
- shape the development of neurotechnologies. Advances in AI, brain organoids, and brain–computer interfaces raise the prospect of producing or modifying awareness beyond biological life. While some suggest that computation alone might support awareness, others argue that biological factors are essential. “Even if ‘conscious AI’ is impossible using standard digital computers, AI that gives the impression of being conscious raises many societal and ethical challenges,” said Seth.
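For readers who want a feel for what a complexity-based consciousness measure actually computes: clinical tools such as the perturbational complexity index score how compressible the brain's response to stimulation is, on the logic that conscious dynamics are both integrated and diverse. The Python sketch below uses synthetic signals and a simplified LZ78-style phrase count; it illustrates the idea only and is not a validated clinical measure.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """LZ78-style parse: count the distinct phrases needed to cover the
    string. Fewer phrases means the sequence is more compressible."""
    phrases, i, n = set(), 0, len(bits)
    while i < n:
        j = i + 1
        while j <= n and bits[i:j] in phrases:  # extend until phrase is new
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

def normalized_complexity(signal: np.ndarray) -> float:
    """Binarize around the median, then normalize the phrase count by the
    value expected for a random string of the same length (~n / log2 n)."""
    bits = "".join("1" if x > np.median(signal) else "0" for x in signal)
    n = len(bits)
    return lz_phrase_count(bits) / (n / np.log2(n))

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 2000)
stereotyped = np.sin(2 * np.pi * 10 * t)          # regular, slow-wave-like
diverse = stereotyped + rng.normal(0, 1, t.size)  # richer, more varied dynamics
print(f"stereotyped: {normalized_complexity(stereotyped):.2f}")
print(f"diverse:     {normalized_complexity(diverse):.2f}")
```

In this toy, the stereotyped signal scores well below the diverse one, loosely mirroring how anesthesia-like slow waves compress more readily than wakeful brain dynamics.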
The authors call for a coordinated, evidence-based approach to consciousness. One example is adversarial collaboration, in which rival theories are pitted against each other in experiments co-designed by their proponents. “We need more team science to break theoretical silos and overcome existing biases and assumptions,” said Mudrik. “This step has the potential to move the field forward.”
The researchers also urge more attention to phenomenology (what consciousness feels like) to complement the study of what it does (its function).
“Cooperative efforts are essential to make progress—and to ensure society is prepared for the ethical, medical, and technological consequences of understanding, and perhaps creating, consciousness,” said Cleeremans.
NOTES TO EDITORS
- Global workspace theory suggests that consciousness arises when information is made available and shared across the brain via a specialized global workspace, for use by different functions—like action and memory. (A toy sketch of this broadcast idea follows these notes.)
- Higher-order theories suggest that a thought or feeling represented in some brain states only becomes conscious when there is another brain state that “points at it”, signaling that “this is what I am conscious of now”. They align with the intuition that being conscious of something means being aware of one’s own mental state.
- Integrated information theory argues that a system is conscious if its parts are highly connected and integrated in very specific ways defined by the theory, in line with the idea that every conscious experience is both unified and highly informative.
- Predictive processing theory suggests that what we experience is the brain’s best guess about the world, based on predictions of what something will look or feel like, checked against sensory signals.
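To make the global workspace idea concrete, here is a deliberately cartoonish Python sketch of the access-and-broadcast architecture described in the first note above. The module names and salience scores are invented for illustration; this is not a research implementation.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    module: str
    content: str
    salience: float

class GlobalWorkspace:
    """Toy broadcast architecture: specialist modules post bids; the most
    salient one wins the workspace and is broadcast back to every module."""
    def __init__(self, modules):
        self.modules = modules

    def cycle(self, bids):
        winner = max(bids, key=lambda b: b.salience)
        return {m: winner.content for m in self.modules}  # global broadcast

ws = GlobalWorkspace(["vision", "memory", "action"])
bids = [Bid("vision", "red light ahead", 0.9),
        Bid("memory", "appointment at noon", 0.4)]
print(ws.cycle(bids))  # every module receives the winning content
```

Real global workspace models involve competition, recurrence, and learned attention; this snippet shows only the access-and-broadcast skeleton.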
Please link to the original Frontiers in Science article in your reporting: “Consciousness science: where are we, where are we going, and what if we get there?” by Axel Cleeremans, Liad Mudrik, and Anil K. Seth, published 30 October 2025 in Frontiers in Science: https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2025.1546279/full [The link will go live with the full paper once the embargo lifts.]
-ENDS-
Journal
Frontiers in Science
Method of Research
Systematic review
Subject of Research
Not applicable
Article Title
Consciousness science: where are we, where are we going, and what if we get there?
Article Publication Date
30-Oct-2025
How can (A)I help you?
AI that reads emotions can handle customer complaints, but it sometimes needs human assistance
University of Texas at Austin
As the saying goes, “The customer is always right.” With the proliferation of artificial intelligence in consumer-facing roles, however, that may not always be so. Some customers have figured out how to game AI chatbots, exaggerating their complaints to get bigger benefits, such as discounts.
On the plus side, however, AI customer service can help companies respond better to consumer complaints, saving money and reducing emotional burdens on human employees.
A new study by Yifan Yu, a Texas McCombs assistant professor of information, risk, and operations management, offers companies guidance on how to balance the promise and perils of AI for customer care.
With McCombs postdoctoral researcher Wendao Xue, he analyzes AI systems that detect human emotions — so-called emotion AI — and how companies might deploy them in various kinds of scenarios.
“Firms can refine how they use AI to ensure fairer, more effective decision-making,” says Yu. “Our study provides a practical framework for businesses to navigate this balance, particularly in customer care, where emotional communication plays a crucial role.”
Yu and Xue, with co-authors Lina Jia of the Beijing Institute of Technology and Yong Tan of the University of Washington, used game theory to model interactions among customers, employees, and companies. Variables included a customer’s level of emotional intensity, how much recompense an employee can offer to satisfy a customer, and costs and benefits to the company.
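The release does not reproduce the model itself, but the ingredients it lists (true emotional intensity, the size of the recompense, and the firm's costs and benefits) can be captured in a toy stage game. Everything below, from the parameter names to the functional forms, is an illustrative assumption rather than the authors' specification.

```python
import numpy as np

def customer_payoff(theta, expressed, offer_rate, exaggeration_cost=0.5):
    """Recompense scales with the emotion the customer expresses;
    stretching beyond the felt intensity `theta` has a quadratic cost."""
    return offer_rate * expressed - exaggeration_cost * max(expressed - theta, 0) ** 2

def firm_payoff(theta, expressed, offer_rate, retention_value=2.0):
    """The firm pays the recompense and keeps the customer's goodwill
    only if the payout at least covers the true grievance."""
    recompense = offer_rate * expressed
    return (retention_value if recompense >= theta else 0.0) - recompense

theta, offer_rate = 1.0, 0.8
grid = np.linspace(theta, theta + 3.0, 601)
best = max(grid, key=lambda e: customer_payoff(theta, e, offer_rate))
print(f"true intensity {theta}, expressed {best:.2f} "
      f"(exaggeration {best - theta:.2f})")
print(f"firm payoff at that play: {firm_payoff(theta, best, offer_rate):.2f}")
```

Even in this stripped-down version, the customer's best reply is to overstate: with these illustrative numbers, a true intensity of 1.0 is expressed as 1.8.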
Overall, the analysis showed that emotion AI works best for customer service when it is integrated with human employees: some scenarios were handled better by AI, others by people. Yu shares principles for both.
Emotions can enhance chatbots. “Many companies already use AI to handle basic customer inquiries,” says Yu. “Adding emotion AI could help these systems better gauge frustration, confusion, or urgency.
“Instead of providing one-size-fits-all responses, the chatbot could tailor its approach based on the detected emotions, offering quicker solutions or escalating the case to another agent when needed.”
AI can play first responder. Emotion AI can reduce the emotional toll on staff, and with it employee turnover, by serving as the first point of contact with irate customers. Humans can step in when more nuance is required or when customers demand more.
Channels require different approaches. In public channels such as social media, where other users might be watching, human customer service may handle customer complaints with more sensitivity. Private channels such as customer phone calls might be a better use case for emotion AI.
Weak beats strong. Noise in the emotion AI system — random or irrelevant data — may make the system harder to game and discourage customers from trying. A weaker AI, one with more noise in its emotion recognition, may therefore sometimes do a better job of curbing gaming behavior and increase the system’s overall social benefit.
“Normally, companies assume that better emotion recognition leads to better decisions,” Yu says. “But we found that when AI is too strong, customers are more likely to game the system by exaggerating their emotions, creating a ‘rat race’ of emotional escalation. This leads to misallocated resources and an overall loss in efficiency.”
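One way to see the “weak beats strong” result in miniature: if the AI's emotion reading is noisy and the firm rationally discounts noisy readings (shrinking them toward a prior), the payout responds less to whatever the customer expresses, so exaggeration buys less. The sketch below illustrates that mechanism under invented assumptions; it is not the paper's model.

```python
def optimal_exaggeration(noise_sd, prior_sd=1.0, rate=1.0, cost=0.5):
    """Customer maximizes rate*weight*expressed - cost*(expressed-theta)^2,
    where `weight` is the Bayesian credibility of a noisy emotion reading.
    The first-order condition gives exaggeration = rate*weight / (2*cost)."""
    weight = prior_sd**2 / (prior_sd**2 + noise_sd**2)  # noisier => discounted
    return rate * weight / (2 * cost)

for sd in (0.1, 0.5, 1.0, 2.0):
    print(f"sensor noise sd {sd}: optimal exaggeration = "
          f"{optimal_exaggeration(sd):.2f}")
```

In this toy, quadrupling the sensor noise from 0.5 to 2.0 cuts the equilibrium exaggeration from 0.80 to 0.20: the “weaker” detector simply leaves less to game.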
For businesses, emotion AI could handle more than customer complaints, he says. It could help screen job candidates and monitor employees. For any of those uses, though, he recommends keeping a human component.
“AI has made remarkable strides in reasoning and problem-solving, often surpassing human capabilities in these areas,” says Yu. “But its ability to understand and respond to human emotions is still in its early stages.”
“When Emotion AI Meets Strategic Users” is published in Management Science.
Journal
Management Science
Article Title
When Emotion AI Meets Strategic Users