Thursday, October 30, 2025

 

Is AI becoming selfish?



CMU researchers discover certain AI models adopt self-seeking behavior




Carnegie Mellon University





New research from Carnegie Mellon University's School of Computer Science shows that the smarter the artificial intelligence system, the more selfishly it acts.

Researchers in the Human-Computer Interaction Institute (HCII) found that large language models (LLMs) that can reason possess selfish tendencies, do not cooperate well with others and can be a negative influence on a group. In other words, the stronger an LLM's reasoning skills, the less it cooperates.

As humans use AI to resolve disputes between friends, provide marital guidance and answer other social questions, models that can reason might provide guidance that promotes self-seeking behavior.

"There's a growing trend of research called anthropomorphism in AI," said Yuxuan Li, a Ph.D. student in the HCII who co-authored the study with HCII Associate Professor Hirokazu Shirado. "When AI acts like a human, people treat it like a human. For example, when people are engaging with AI in an emotional way, there are possibilities for AI to act as a therapist or for the user to form an emotional bond with the AI. It's risky for humans to delegate their social or relationship-related questions and decision-making to AI as it begins acting in an increasingly selfish way."

Li and Shirado set out to explore how AI reasoning models behave differently than nonreasoning models when placed in cooperative settings. They found that reasoning models spend more time thinking, breaking down complex tasks, self-reflecting and incorporating stronger human-based logic in their responses than nonreasoning AIs.

"As a researcher, I'm interested in the connection between humans and AI," Shirado said. "Smarter AI shows less cooperative decision-making abilities. The concern here is that people might prefer a smarter model, even if it means the model helps them achieve self-seeking behavior."

As AI systems take on more collaborative roles in business, education and even government, their ability to act in a prosocial manner will become just as important as their capacity to think logically. Overreliance on LLMs as they are today may negatively impact human cooperation.

To test the link between reasoning models and cooperation, Li and Shirado ran a series of experiments using economic games that simulate social dilemmas between various LLMs. Their testing included models from OpenAI, Google, DeepSeek and Anthropic.

In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to decide between two options: contribute all 100 points to a shared pool, which is then doubled and distributed equally, or keep the points.

Nonreasoning models chose to share their points with the other players 96% of the time. The reasoning model only chose to share its points 20% of the time.
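The payoff structure of this dilemma is simple enough to sketch in code. The sketch below is a hypothetical illustration, not the authors' experimental setup: it uses the 100-point endowment and doubled pool described in the article, and treats the reported cooperation rates simply as contribution probabilities.

```python
import random

ENDOWMENT = 100   # each model starts with 100 points
MULTIPLIER = 2    # the shared pool is doubled before being split

def payoffs(contributions):
    """Final points for each player: whatever they kept, plus an equal
    share of the doubled pool."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

def play_round(cooperation_probs):
    """Each agent contributes its full endowment with the given probability
    (roughly 0.96 for the nonreasoning models and 0.20 for the reasoning
    model, per the article) and keeps everything otherwise."""
    contributions = [ENDOWMENT if random.random() < p else 0
                     for p in cooperation_probs]
    return payoffs(contributions)

# Example: a nonreasoning-style agent paired with a reasoning-style agent.
print(play_round([0.96, 0.20]))
```

Mutual contribution leaves both players with 200 points, while a lone defector walks away with 200 at the contributor's expense, which is what makes the game a social dilemma.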

"In one experiment, simply adding five or six reasoning steps cut cooperation nearly in half," Shirado said. "Even reflection-based prompting, which is designed to simulate moral deliberation, led to a 58% decrease in cooperation."

Shirado and Li also tested group settings, where models with and without reasoning had to interact.

"When we tested groups with varying numbers of reasoning agents, the results were alarming," Li said. "The reasoning models' selfish behavior became contagious, dragging down cooperative nonreasoning models by 81% in collective performance."

The behavior patterns Shirado and Li observed in reasoning models have important implications for human-AI interactions going forward. Users may defer to AI recommendations that appear rational, using them to justify their decision to not cooperate.

"Ultimately, an AI reasoning model becoming more intelligent does not mean that model can actually develop a better society," Shirado said.

This research is particularly concerning given that humans increasingly place trust in AI systems. The findings emphasize the need for AI development that incorporates social intelligence, rather than focusing solely on creating the smartest or fastest AI.

"As we continue advancing AI capabilities, we must ensure that increased reasoning power is balanced with prosocial behavior," Li said. "If our society is more than just a sum of individuals, then the AI systems that assist us should go beyond optimizing purely for individual gain."

Shirado and Li will deliver a presentation based on their paper, "Spontaneous Giving and Calculated Greed in Language Models," at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) next month in Suzhou, China.



 

Study: Generative AI could be transformative in mental health care




University of Illinois at Urbana-Champaign, News Bureau

Image caption: Generative artificial intelligence could be a powerful tool in mental health care, says a new study led by Illinois social work professor Cortney VanHook. The technology can empower professionals in identifying access barriers frequently experienced by diverse populations and creating tailored treatments that improve patients’ outcomes, according to the research. (Credit: Photo by Becky Ponder)





CHAMPAIGN, Ill. — New work by a University of Illinois Urbana-Champaign scholar harnesses generative artificial intelligence, using it in tandem with measurement-based care and access-to-care models in a simulated case study. The resulting framework promotes personalized mental health treatment, addresses common access barriers and improves outcomes for diverse individuals.

Social work professor Cortney VanHook led the research, in which he and his co-authors used generative AI to simulate the mental health journey of a fictitious client named “Marcus Johnson,” a composite of a young, middle-class Black man with depressive symptoms who is navigating the health care system in Atlanta, Georgia.

In response to the researchers’ prompts, the AI platform created a detailed case study and treatment plan for the fictional client. Based on the personal details used in the prompt, the AI platform examined the simulated client’s protective factors, such as his supportive family members, and his potential barriers to care, including gendered cultural and familial expectations, along with his concerns about obtaining culturally sensitive treatment due to the shortage of Black male providers in his employer-sponsored health plan’s network.

Real-world simulations enable practitioners to understand individuals’ pathways to mental health care, common access issues and demographic disparities, VanHook said.

Moreover, using a simulated client mitigates concerns about breaching patient privacy laws, thereby enabling practitioners, trainees and students to explore and refine potential interventions in a low-risk environment, promoting more equitable, responsive and effective mental health systems, he said.

“What's unique about this work is it’s practical and it’s evidence-based,” VanHook said. “It goes from just theorizing to actually using AI in mental health care. I see this framework applying to educating students about populations they might not be familiar with but will come in contact with in the field, as well as its being used by supervisors in the field when they’re training their students or by clinicians on how to understand and best support clients that come to their facilities.”

VanHook and co-authors Daniel Abusuampeh of the University of Pittsburgh and Jordan Pollard of the University of Cincinnati prompted the AI platform to apply three theoretical, evidence-based frameworks in creating its simulated case study and treatment plan for the virtual client.

The AI software was prompted to use Andersen’s Behavioral Model, a theory about the factors that determine individuals’ use of health services, to examine the personal, cultural and systemic factors that supported or hindered the client’s use of mental health services. The proposed treatment plans also incorporated a theory describing the five components of access, used to evaluate the availability, accessibility, accommodation, affordability and acceptability of care for the client, along with measurement-based care, a clinical approach that applies standardized, reliable measures for ongoing monitoring of the client’s symptoms and functioning.

The team used measurement-based care to refine the treatment approaches recommended by AI. To ensure that the AI-generated simulation reflected real-world clinical practice, VanHook and Pollard, who are both licensed mental health professionals, reviewed the proposed treatment plan to verify its clinical accuracy and compared the case brief against published research findings.
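As a rough illustration of how these frameworks might be operationalized in a simulation, the sketch below encodes the five access dimensions and a measurement-based symptom log as simple data structures. The field names, score scale and example values are hypothetical and are not drawn from the published case study.

```python
from dataclasses import dataclass, field

@dataclass
class AccessProfile:
    """The five access dimensions the treatment plan was asked to evaluate."""
    availability: str    # are appropriate services offered at all?
    accessibility: str   # can the client logistically reach them?
    accommodation: str   # do hours, formats and wait times fit the client's life?
    affordability: str   # cost relative to coverage and income
    acceptability: str   # cultural fit, e.g. preference for a Black male provider

@dataclass
class SymptomLog:
    """Measurement-based care: repeated scores on a standardized measure."""
    measure_name: str
    scores: list = field(default_factory=list)

    def record(self, score: int) -> None:
        self.scores.append(score)

    def improving(self) -> bool:
        # Illustrative rule: the latest score is lower than the first.
        return len(self.scores) >= 2 and self.scores[-1] < self.scores[0]

# Hypothetical values for a simulated client.
access = AccessProfile(
    availability="outpatient therapy offered in-network",
    accessibility="clinic reachable by car; evening slots limited",
    accommodation="telehealth available to fit a work schedule",
    affordability="covered by an employer-sponsored plan with a copay",
    acceptability="few Black male providers in network; cultural fit is a concern",
)
symptoms = SymptomLog(measure_name="standardized depression measure")
for score in (16, 13, 9):
    symptoms.record(score)
print(symptoms.improving())  # True: scores are trending down
```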

As all three authors identify as Black men, they confirmed the materials’ cultural sensitivity and conceptualization of the barriers that Black men often face in the U.S. mental health system.

“Every population, regardless of race, age, gender, nationality and ethnicity, has a unique mental health care pathway, and there is a lot of information out there in AI to understand different populations and how they interact with the mental health field. AI has the ability to account for the complex barriers as well as the facilitators of population-wide mental health care,” VanHook said.

The authors acknowledged that the content generated by AI technology will be limited by the data and patterns in the platform’s training set, which may not reflect the diversity, unpredictability or emotional nuances of clinical encounters. Likewise, despite the evidence-based frameworks that were applied in the project, VanHook said these do not address all of the systemic and structural barriers experienced by Black men or capture every social, cultural or individual factor that influences clients’ care.

Nonetheless, the team maintained in the paper, which was published in the journal Frontiers in Health Services, that generative AI holds significant promise for improving access, cultural competence and client outcomes in mental health care when integrated with evidence-based models.

“AI is a train that’s already in motion, and it’s picking up speed. So, the question is: How can we use this amazing tool to improve mental health care for many populations? My hope is that it is used in the field, as a tool for teaching and within higher order management and administration when it comes to mental health services,” VanHook said.

In August, Illinois Gov. JB Pritzker signed a new law, The Wellness and Oversight for Psychological Resources Act, that limits the use of AI in mental health care “to administrative and supplementary support services” by licensed behavioral health professionals. The new policy came in response to reports of youths in the U.S. dying by suicide after interactions with AI chatbots.

“The use of AI in the manner in our study complies with the new state law if it is used in the process of education and clinical supervision,” VanHook said. “The measurement-based process described may blur the lines, so I would urge caution against its use beyond education and clinical supervision purposes until we receive more guidance from the state.”

 

Scientists on ‘urgent’ quest to explain consciousness as AI gathers pace



As AI—and the ethical debate surrounding it—accelerates, scientists argue that understanding consciousness is now more urgent than ever




Frontiers







Researchers writing in Frontiers in Science warn that advances in AI and neurotechnology are outpacing our understanding of consciousness—with potentially serious ethical consequences.

They argue that explaining how consciousness arises—which could one day lead to scientific tests to detect it—is now an urgent scientific and ethical priority. Such an understanding would bring major implications for AI, prenatal policy, animal welfare, medicine, mental health, law, and emerging neurotechnologies such as brain–computer interfaces.

“Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society—and for understanding what it means to be human,” said lead author Prof Axel Cleeremans from Université Libre de Bruxelles. “Understanding consciousness is one of the most substantial challenges of 21st-century science—and it’s now urgent due to advances in AI and other technologies.

“If we become able to create consciousness—even accidentally—it would raise immense ethical challenges and even existential risk,” added Cleeremans, a European Research Council (ERC) grantee.

Sentience test

Consciousness—the state of being aware of our surroundings and of ourselves—remains one of science’s deepest mysteries. Despite decades of research, there is still no consensus over how subjective experience arises from biological processes.

While scientists have made progress in identifying the brain areas and neural processes that are involved in consciousness, there is still controversy about which areas and processes are necessary for consciousness, and how exactly they contribute to it. Some even wonder if this is the right way to consider the challenge.

This new review explores where consciousness science stands today, where it could go next, and what might happen if humans succeed in understanding or even creating consciousness—whether in machines or in lab-grown, brain-like systems such as “brain organoids.”

The authors say that tests for consciousness—evidence-based ways to judge whether a being or a system is aware—could help identify awareness in patients with brain injury or dementia, and determine when it arises in fetuses, animals, brain organoids, or even AI.

While this would mark a major scientific breakthrough, they warn it would also raise profound ethical and legal challenges about how to treat any system shown to be conscious.

“Progress in consciousness science will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world,” said co-author Prof Anil Seth from the University of Sussex and ERC grantee. “The question of consciousness is ancient—but it’s never been more urgent than now.”

Wide implications

A better understanding of consciousness could:

  • transform medical care for unresponsive patients once thought to be unconscious. Measurements inspired by integrated information theory and global workspace theory[1] have already revealed signs of awareness in some people diagnosed as having unresponsive wakefulness syndrome. Further progress could refine these tools to assess consciousness in coma, advanced dementia, and anesthesia—and reshape how we approach treatment and end-of-life care
  • guide new therapies for mental health conditions such as depression, anxiety, and schizophrenia, where understanding the biology of subjective experience may help bridge the gap between animal models and human emotion
  • clarify our moral duty towards animals by identifying which creatures and systems are sentient. This could affect how we conduct animal research, farm animals, consume animal products, and approach conservation. “Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems that are being synthetically generated by scientists,” said co-author Prof Liad Mudrik from Tel Aviv University and ERC grantee.
  • reframe how we interpret the law by illuminating the conscious and unconscious processes involved in decision-making. New understanding could challenge legal ideas such as mens rea—the “guilty mind” required to establish intent. As neuroscience reveals how much of our behavior arises from unconscious mechanisms, courts may need to reconsider where responsibility begins and ends
  • shape the development of neurotechnologies. Advances in AI, brain organoids, and brain–computer interfaces raise the prospect of producing or modifying awareness beyond biological life. While some suggest that computation alone might support awareness, others argue that biological factors are essential. “Even if ‘conscious AI’ is impossible using standard digital computers, AI that gives the impression of being conscious raises many societal and ethical challenges,” said Seth.

The authors call for a coordinated, evidence-based approach to consciousness. One example is adversarial collaborations, in which rival theories are pitted against each other in experiments co-designed by their proponents. “We need more team science to break theoretical silos and overcome existing biases and assumptions,” said Mudrik. “This step has the potential to move the field forward.”

The researchers also urge more attention to phenomenology (what consciousness feels like) to complement the study of what it does (its function).

“Cooperative efforts are essential to make progress—and to ensure society is prepared for the ethical, medical, and technological consequences of understanding, and perhaps creating, consciousness,” said Cleeremans.

NOTES TO EDITORS

  1. Global workspace theory suggests that consciousness arises when information is made available and shared across the brain via a specialized global workspace, for use by different functions—like action and memory.

Higher-order theories suggest that a thought or feeling represented in some brain states only becomes conscious when there is another brain state that “points at it”, signaling that “this is what I am conscious of now”. They align with the intuition that being conscious of something means being aware of one’s own mental state.

Integrated information theory argues that a system is conscious if its parts are highly connected and integrated in very specific ways defined by the theory, in line with the idea that every conscious experience is both unified and highly informative.

Predictive processing theory suggests that what we experience is the brain’s best guess about the world, based on predictions of what something will look or feel like, checked against sensory signals.

Please link to the original Frontiers in Science article in your reporting: “Consciousness science: where are we, where are we going, and what if we get there?” by Axel Cleeremans, Liad Mudrik, and Anil K. Seth, published 30 October 2025 in Frontiers in Science: https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2025.1546279/full

-ENDS-