Teaching teens critical thinking could be key to challenging fake news, AI slop and toxic social media
Social media is where teenagers spend most of their time, either scrolling and sharing, or sometimes falling into the traps of fake news, toxic content and online drama. But what if we could equip our young people to challenge harmful narratives and protect themselves from the darker side of the internet?
In a world where people document their lives online, algorithms dictate what they see, apps mine their personal data and misinformation spreads, teenagers are at the epicentre of this digital storm.
So how can we help them to navigate this complex landscape? Dr Maree Davies, Senior Lecturer at the University of Auckland, believes the answer lies in critical thinking.
In her new book, Teaching Critical Thinking to Teenagers: How Kids Can Be Street Smart about AI, Algorithms, Fake News, and Social Media, she suggests parents and educators can equip teens with the tools to allow them to navigate the digital world safely and responsibly.
Critical thinking involves being able to objectively – without emotion – analyse and assess something, and make a reasoned judgement on its value or purpose. Skills include logical reasoning, evaluating different forms of evidence and unbiased analysis.
Critical thinking skills are challenging for many, but particularly for teenagers, whose prefrontal cortex – the part of the brain responsible for logical processing – is still developing. However, Dr Davies argues it is not only possible to teach young people to begin building and honing these vital skills, but that adolescence is also a crucial time to do so.
As well as being able to spot fake news and conspiracy theories, Dr Davies suggests equipping teens with critical thinking skills can also protect them against the addictive nature of social media and profound online harms such as sextortion, revenge porn, and online bullying.
Why critical thinking matters
Whether it’s TikTok, Instagram, or Snapchat, young people are constantly consuming huge amounts of content tailored to their likes and interests. But what they might not realise is how algorithms shape what they see, often reinforcing biases and pushing them into echo chambers.
Often young people are exposed to this content without a developed understanding of how algorithms work.
“Teenagers today are not just passive consumers of content; they are active participants in a digital ecosystem that can empower or exploit them,” Dr Davies argues. “Critical thinking is the key to breaking free from this cycle.”
Dr Davies argues that critical thinking is more vital than ever, and can help teens make informed decisions.
“Teenagers need to understand that the digital world is not neutral,” she explains. “It’s shaped by societal forces, commercial interests, and algorithms designed to influence their behaviour. By teaching them to think critically, we give them the tools to discern truth from falsehood, resist manipulation, and engage ethically online.”
Teaching critical thinking: the role of parents and educators
Dr Davies says shielding teenagers from the internet is not the solution. Instead, educators and parents must take an active role in preparing teens to navigate the digital world wisely.
“We can’t control the internet, but we can empower teenagers to challenge harmful narratives, engage in respectful dialogue, and become informed citizens,” she states. “By fostering these abilities, we can help teenagers thrive in a world where information is abundant but truth is often elusive.”
Dr Davies advocates taking a hands-on approach to teaching critical thinking.
She recommends parents and guardians speak often to their teens about fake news, and how it is designed to provoke emotional reactions and avoid scrutiny, so it can spread fast. She encourages adults to advise teens to slow down and think before sharing, and demonstrate this behaviour when talking about things seen online.
Additionally, she suggests showing teens how to evaluate sources, seek multiple perspectives and trace information back to its original context – such as checking sources, finding credible academic papers and using trusted news sites. By developing these skills, teens can identify misinformation and resist the urge to share it.
“Critical thinking isn’t just about analysing information, it’s about connecting ideas to personal experiences, respecting diverse perspectives, and remaining open to change,” she explains. “We need to encourage teens to approach the digital world with empathy, resilience, and a willingness to adapt their views based on evidence.
“It’s not about lecturing them, it’s about giving them practical skills they can use every day, in the same way you help your child to learn to read, write or tie their shoelaces.”
Building resilience in the digital age
The psychological toll of the digital age is undeniable. From the addictive nature of social media to the harmful effects of online bullying, teenagers face unique challenges that can impact their mental health and wellbeing.
Dr Davies draws on renowned psychologist Albert Bandura’s theories of self-efficacy and moral disengagement to explain why some individuals behave unethically online and how teens can protect themselves.
Being open and honest with teenagers about online dangers helps build trust and foster open conversations about sensitive issues, she explains, so teens feel comfortable seeking help when navigating challenges like sextortion and online bullying.
“Teaching self-regulation and critical thinking can help teens build resilience against these challenges,” she explains. “It equips them to recognize manipulative tactics, resist harmful behaviours, and maintain their mental health in an increasingly digital world.”
Teens are becoming concerned about their attachment to AI chatbots
Drexel University study of teens’ posts on Reddit reveals addiction-like behavior and struggles to quit.
Drexel University
It’s estimated that more than half of all U.S. teens regularly use companion chatbots powered by large language models and generative artificial intelligence (AI) technology. The programs, such as Character.AI, Replika and Kindroid, are intended to provide companionship, according to the companies that make them. But a recent study from Drexel University suggests that teens are concerned that these attachments are becoming unhealthy and affecting their lives offline.
The study, which will be presented at the Association for Computing Machinery’s conference on Human Factors in Computing Systems in April, looked at a sample of more than 300 Reddit posts from users identifying themselves as 13 to 17 years old who had specifically posted about their dependency on and overreliance on Character.AI. It found that in many cases, teens began using the technology for emotional and psychological support or entertainment, but their use evolved into dependency and even patterns associated with addiction. Some reported that their overuse disrupted sleep, caused academic struggles and strained relationships.
“This study provides one of the first teen-centered accounts of overreliance on AI companions,” said Afsaneh Razi, PhD, an assistant professor in Drexel’s College of Computing & Informatics, whose ETHOS lab, which studies how people’s interactions with computing and AI systems affect their social behavior, wellbeing and safety, led the research. “It highlights how these interactions are affecting the lives of young users and introduces a framework for chatbot design that promotes healthy interactions.”
About a quarter of the posts suggested that the teens were using Character.AI for some form of emotional or psychological support, ranging from coping with distress, loneliness and isolation to seeking advice for mental health struggles. Just over 5% reported using it for brainstorming, creative activities or entertainment.
And while the posts seem to indicate these interactions started as harmless, or even helpful, they evolved into a stronger attachment that became as difficult to break as an addiction, according to the researchers.
“By mapping teens’ experiences to the known components of behavioral addiction, we were able to see clear patterns like conflict, withdrawal and relapse showing up in their posts, which suggests this is more than just frequent or enthusiastic use,” said Matt Namvarpour, a doctoral student in the department of Information Science and the ETHOS lab, who is the first author of the research. “Many teens described starting with something that felt helpful or harmless, but over time it became something they struggled to step away from, even when they wanted to.”
Within the 318 posts they analyzed, researchers found evidence of all six of the components associated with behavioral addiction:
- Conflict — competing desires to continue interacting with the chatbot while feeling bad about excessive use.
- Salience — feeling a deepening emotional attachment to the bots in place of people.
- Withdrawal — feeling sad, anxious or incomplete when not interacting with the bots.
- Tolerance — developing a pattern of escalating use and a need to continue using the bots more to feel satisfied or emotionally grounded.
- Relapse — attempting to stop only to return to using the bot days or weeks later.
- Mood modification — turning to the bots during moments of stress or loneliness to improve their mood or find temporary relief.
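To make the coding scheme concrete, the six components above can be imagined as labels applied to posts. The sketch below is a hypothetical keyword heuristic, not the study's actual qualitative codebook (the researchers coded posts manually); the cue phrases are invented for illustration.

```python
# Hypothetical sketch: tagging self-reported posts with the six
# behavioral-addiction components. Cue phrases are illustrative
# assumptions, NOT the Drexel team's real coding instrument.

COMPONENTS = {
    "conflict": ["feel bad", "guilty", "too much"],
    "salience": ["only friend", "all i think about"],
    "withdrawal": ["anxious without", "empty without"],
    "tolerance": ["more and more", "hours every day"],
    "relapse": ["tried to quit", "came back"],
    "mood_modification": ["cheers me up", "when i'm stressed"],
}

def code_post(text: str) -> list[str]:
    """Return the components whose cue phrases appear in a post."""
    lowered = text.lower()
    return [name for name, cues in COMPONENTS.items()
            if any(cue in lowered for cue in cues)]

post = "I tried to quit but came back a week later and felt guilty."
print(code_post(post))  # ['conflict', 'relapse']
```

In the actual study, human researchers rather than keyword matching assigned these codes, and a single post could exhibit several components at once, as the example shows.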
“What makes this especially tricky is that chatbots are interactive and emotionally responsive, so the experience can feel more like a relationship than a tool,” Namvarpour said. “Because of that, stepping away is not just stopping a habit, it can feel like distancing from something meaningful, which makes overreliance harder to recognize and address.”
While addiction to technology, such as video games, has been studied and identified as a psychological condition, the unique interactivity of AI chatbots makes users particularly susceptible to forming problematic attachments, according to the researchers. And because of this, they suggest that extra care must be taken with their design in order to protect users.
“Personalization, multimodality and memory set AI companions apart from earlier technologies and make overreliance harder to disentangle from authentic-feeling relationships,” the researchers wrote. “This underscores the need for further research on the unique characteristics of these relationships and how challenges specific to companion chatbots should be addressed.”
The team offered a design framework to help address this concern. It focuses on understanding the needs of chatbot users, how and why they may form attachments and how the bots can be trained to curtail them while being respectful and supportive. They also recommend that the programs provide an easy and clean exit for users.
“It’s important for designers to ensure that chatbots are offering guidance that helps users build confidence in their abilities to form relationships offline, as a healthy way of finding emotional support, without using cues that may lead them to anthropomorphize the technology and develop attachments to it,” Razi said. “Our framework also calls on designers to provide a variety of off-ramps for users to easily disengage with the program on their own terms and without a sense of abruptness or finality.”
Including features like usage tracking, emotional check-in prompts and personalized usage limits could also be effective ways to carefully curtail use, the researchers suggested. They also recommended including input from users and mental health professionals in the design process.
“Designers now carry the responsibility to build systems with empathy, nuance and attention to detail to not only protect teens from harm, but also help them cultivate resilience, growth and greater fulfillment in their lives,” they concluded.
To expand on this research, the team pointed to studying larger communities of users from a wider demographic range, potentially through surveys or interviews, as well as users of other chatbots and of platforms other than Reddit.
Method of Research
Data/statistical analysis
Subject of Research
Not applicable
Article Title
Understanding Teen Overreliance on AI Companion Chatbots Through Self-Reported Reddit Narratives
Article Publication Date
13-Apr-2026