Tuesday, January 13, 2026

How AI ‘deepfakes’ became Elon Musk’s latest scandal


By AFP
January 13, 2026


Musk has characterised criticism of X and Grok as an attack on free speech - Copyright AFP/File Lionel BONAVENTURE

Elon Musk’s company xAI has faced global backlash in recent days over sexualised “deepfake” images of women and children created by its Grok chatbot.

Here are the essential facts about the scandal, how governments have responded and the company’s attempts to cool the controversy.

– ‘Put her in a bikini’ –

Grok — Musk’s version of the chatbots also offered by OpenAI and other generative AI companies — has its own account on the X social network allowing users to interact with it.

Until last week, users could tag the bot in posts to request image generation and editing, receiving the image in a reply from Grok.

Many took advantage of the service by sending Grok photos of women or tagging the bot in replies to women’s photo posts.

They would ask it to “put her in a bikini” or “take her clothes off” — receiving photorealistic altered images in response.

Such AI-powered nonconsensual "nudifying" services had previously been confined to niche websites, but Grok was the first to take them mainstream, integrating the feature into a major social network and offering it for free.

Outrage grew as some users were discovered generating sexualised images of minors, including young children.

Still others used the tool to generate bikini images of women killed in the deadly New Year fire at Swiss ski resort Crans-Montana, as well as the woman shot and killed by an immigration officer in Minneapolis.

Last week, an analysis of more than 20,000 Grok-generated images by Paris non-profit AI Forensics found that more than half depicted “individuals in minimal attire” — most of them women, and two percent appearing to be under-18s.

– How have countries reacted? –

Indonesia on Saturday became the first country to block access to Grok entirely, with neighbouring Malaysia following on Sunday.

India said Sunday that X had removed thousands of posts and hundreds of user accounts in response to its complaints.

Speaking on condition of anonymity, a government source told AFP 3,500 posts and 600 accounts had been removed.

Britain’s Ofcom media regulator — which can fine companies up to 10 percent of global revenue — said Monday it was opening a probe into whether X failed to comply with UK law over the sexual images.

“If X cannot control Grok, we will — and we’ll do it fast,” British Prime Minister Keir Starmer told MPs from his Labour Party.

France’s commissioner for children Sarah El Hairy said Tuesday she had referred Grok’s generated images to French prosecutors, the Arcom media regulator and the European Union.

Digital affairs minister Anne Le Henanff had earlier called the restriction of image creation to paying users “insufficient and hypocritical”.

And the European Commission, which acts as the EU’s digital watchdog, has ordered X to retain all internal documents and data related to Grok until the end of 2026 in response to the uproar.

The bloc has already been investigating X over potential breaches of its digital content rules since 2023.

“We will not be outsourcing child protection and consent to Silicon Valley,” Commission chief Ursula von der Leyen said Monday.

“If they don’t act, we will.”

– How did the company respond? –

“We take action against illegal content… including Child Sexual Abuse Material (CSAM) by removing it, permanently suspending accounts, and working with local governments and law enforcement,” X’s safety team posted on January 4.

Musk himself said last week that anyone using Grok to “make illegal content will suffer the same consequences as if they upload illegal content”.

But he made light of the controversy in a separate post, adding laughing emojis as he reshared a post featuring a toaster wrapped in a bikini to his 232 million followers on X.

By January 9, Grok began responding to all requests for image generation or editing by saying the service was restricted to paying subscribers.

Musk has also fired back at politicians demanding action.

Critics of X and Grok "just want to suppress free speech," Musk posted on January 10.

Could Musk’s social media platform go X-tinct in Britain?


Photo: sdx15 / Shutterstock

Whether social media has been a net positive or a net negative for our society is an essay question I am sure academics across sociology and history will remain divided on for years to come.

We are coming to the point of finally recognising that there are serious issues on social media, as Ofcom steps in to investigate Elon Musk's X over non-consensual, sexualised AI-generated images of women (and, in the most despicable and vile cases, children too) created by Grok, the platform's own built-in AI tool. Should Ofcom conclude it is necessary, it has the power to ban access to the platform in the UK.

In the few examples from other countries, the banning of social media platforms has been talked of in our media as a controlling measure by states to restrict their own people from communicating with the world. While the anti-censorship warriors, I suspect, may try to apply this narrative to Keir Starmer and the Labour government, the case of X is very different from any comparison with North Korea.

Ever since Musk took over the platform, X has been a place of huge controversy, and clashes with the UK government have not been uncommon since the election in July 2024. I am sure we all remember the action the government took over comments displayed on Musk's platform inciting violence following the awful Southport attack. We saw riots and division increase across society right from the start of Labour's time in office, with Musk and his platform playing a direct role in encouraging this.

"It's a cesspit" is the expression I heard most often, as people tried to warn me what I was about to unleash on myself, having never really engaged with my X account until taking on my role for LabourList.

It is little wonder then that many Labour MPs have quit X in recent days, with Folkestone and Hythe MP Tony Vaughan among the latest to do so – citing the “toxic environment” and “horrendous content” the platform now hosts.

X has become known as the home of the far right online, and it is little wonder why, when its owner has been filmed throwing out certain salutes like he's at a rally in 1940s Berlin, and has attended, via a giant screen, a march in support of a former EDL leader's views on just how bad Britain is. Musk has personally set out to attack the Prime Minister and Labour on X, all while supporting far-right movements both in the UK and across Europe.

A more cynical reader may assume from this that I would be in favour of an X ban as a method of shutting down the far-right voice online. As much as I think removing a platform so filled with hatred would have its perks, if there is one thing I am confident of, it is that those far-right voices would simply migrate (ironic) to another platform, where they would again find each other and remain within their echo-systems.

But this investigation is not looking into how hate-filled and divisive X is, nor how damaging that hate and division is to society. Nor is it being carried out by a Labour government that has had to deal with so many problems directly linked to the platform in question.

Ofcom is investigating a specific case of illegal content being created and distributed through Musk's platform, and the measures the company took to deal with it once it knew it was occurring. Musk's public response, as you could guess, has been to complain about censorship in the UK. Many also note that, for those who pay for Grok, the image tool remains accessible.

Following the landmark Online Safety Act and the government's published strategy to tackle violence against women and girls, should Ofcom conclude from its investigation that X is not a suitable platform for the UK, then the government should absolutely support that decision.

However, even if this does not happen, Starmer and his government must take this moment to open a conversation with the nation about our relationship with social media as a whole.

Moments like this present an opportunity to engage. The government needs to take it and be clear in its messaging. Do not let those with an anti-Starmer agenda lead the narrative, pretending this is some kind of state intervention by a communist power to restrict your freedoms.

The government must stay strong, keep all its options on the table and ensure that it explains the rationale behind any decision it has to take following Ofcom’s investigation with complete clarity, a consistent narrative and a confident approach.

'It's a scam': French AI envoy on X making Grok chatbot pay-to-perve

Issued on: 11/01/2026





France's AI and Digital Ambassador Clara Chappaz says making public image generation a paid feature of Grok is a "scam", adding to the outcry over how tech mogul Elon Musk has dealt with a torrent of deepfake sexual abuse on his social media platform X.

After Malaysia and Indonesia suspended access to Grok this weekend, French foreign ministry official Chappaz declined to say whether France would follow suit, but indicated that it was working on an international response, and said she hoped the courts would mete out swift justice for what she called a "totally illegal" use of AI.

"Everyone in France who's been a victim of this should definitely take it to the courts," Chappaz said, underlining that in France, generating non-consensual sexual deepfakes is punishable by up to three years' prison and €75,000 in fines.


The Telegraph, a British newspaper, reported this weekend that Australia, the UK and Canada are considering joint action – though Canada’s AI Minister Evan Solomon has since said in a post on X that the country won’t be banning the platform.

Since the start of the year, X users have flooded the platform with sexually abusive, violent and extremist content generated by the artificial intelligence chatbot Grok, with users notably asking it to alter pictures of women in order to strip them of their clothes.

After widespread criticism from governments and civil society, X put the image generation function behind a paywall for those using it within public-facing replies and posts on X. But the tool remains available for free in a private Grok area of the website.

Since this change, researchers have indicated that the number of illicit posts has declined from its peak of tens of thousands a day in the first week of January.

Chappaz called the tweak "completely hypocritical".

"I see it as a scam because what it means is people pay to get access to the functionality, and guess who benefits from it? X, who's getting more income as a result," she said.


From bans to probes: Which countries are taking aim at Elon Musk’s Grok AI chatbot?


Copyright Leon Neal/Pool Photo via AP, File

By Anna Desmarais with AP
Published on 13/01/2026 - EURONEWS

Global crackdown: These nations are restricting or speaking out against Grok over non-consensual sexually explicit deepfakes


Governments worldwide are moving swiftly to rein in Grok, Elon Musk’s artificial intelligence (AI) chatbot, amid growing concerns it generates fake, sexually explicit images.

Last summer, xAI, Grok’s operating company, added an image-generator feature that included a “spicy mode” that could generate adult content.

In recent weeks, Grok has responded to user prompts to “undress” images of women and dress them in bikinis, creating AI-generated deepfakes with no safeguards.

From outright blocks in Southeast Asia to criminal probes and regulatory warnings in Europe and Australia, authorities say existing safeguards are failing.

These are the countries that have banned Grok or issued warnings about it.

Which countries have banned Grok?

Indonesia

Indonesia was the first country to temporarily block Grok to protect women, children, and the broader community from fake pornographic content generated using AI.

"The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s communication and digital affairs minister Meutya Hafid said in a statement on Saturday.

Restrictions on Grok are a “preventative” measure while the authorities assess whether the platform is safe, Indonesia’s National Police said.

Initial findings showed that Grok does not have efficient safeguards to stop the creation and distribution of pornographic content based on real photos of Indonesian residents, according to a statement from Alexander Sabar, Indonesia’s director general of digital space supervision.

Sabar said such practices risk violating privacy and image rights when photos are manipulated or shared without consent, causing psychological, social, and reputational harm.

Malaysia


The Malaysian Communications and Multimedia Commission (MCMC) ordered a temporary ban on Grok on Sunday after what it said was “repeated misuse” of the tool to generate obscene, sexually explicit, and non-consensual manipulated images.

The regulator said it sent two notices this month to X and xAI demanding stronger safeguards. In its replies, X said that Grok relies mostly on users to submit complaints about abusive content.

The MCMC concluded that X “failed to address the inherent risks” in the design and operation of its AI platform, which it said is insufficient under Malaysian law.

“The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.

How have other countries responded?

European Union


The European Commission announced it is looking into cases of sexually suggestive and explicit images of young girls generated by Grok.

“I can confirm from this podium that the Commission is also very seriously looking into this matter,” a Commission spokesperson told journalists in Brussels last week.

Reuters reported that the Commission ordered X to retain all documents relating to Grok until the end of the year so the bloc can evaluate whether it complies with EU rules.

A Commission spokesperson told Reuters that it doesn't mean that a formal investigation has been launched.

Ursula von der Leyen, president of the European Commission, said in an interview with the Italian newspaper Corriere della Sera that she was "outraged that a technology platform allows users to digitally strip women and children online".

Without directly naming X or Grok, von der Leyen said the Commission "will not outsource child protection and consent to Silicon Valley. If they don't act, we will."

United Kingdom


The United Kingdom’s media watchdog launched an investigation into Musk’s social media platform X, whose Grok chatbot is developed by xAI, over the use of Grok to generate sexually explicit and non-consensual images.

Ofcom said in a statement that there had been "deeply concerning reports" of the chatbot being used to create and share undressed images of people, as well as "sexualised images of children".

The media watchdog could also seek a court order to force internet service providers to block access to Grok if X doesn’t comply with Ofcom’s requirements.

X could face a fine of up to 10 percent of its worldwide revenue or £18 million (€20 million), whichever is greater.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” Liz Kendall, the technology secretary, told parliament.

France

The Paris Prosecutor’s Office expanded an investigation into X in early January to include Grok, local media reported.

The initial probe, ongoing since last July, focused on suspected organised interference with X’s computer systems and the illegal extraction of data.

The decision to widen the investigation was made this week after five politicians accused the platform of generating and disseminating fake sexually explicit videos featuring minors, according to French newspaper Le Parisien.

It reported that France’s Regulatory Authority for Audiovisual and Digital Communication (Arcom) is investigating X’s potential breaches of the Digital Services Act, the European Union’s rulebook for digital services.

Italy


On January 8, Italy’s Data Protection Authority (Garante) warned that anyone using Grok or other AI platforms to remove people’s clothing from images, and anyone distributing those images, risks criminal charges.

Using the tools without the permission of the person in the photo could be considered “[a] serious violation of the fundamental rights and freedoms of the individuals involved,” the body said.

The authority also reminded Grok and other AI providers that they must design, develop and release products that comply with privacy regulations.

The Italian regulator said it is working with Ireland’s Data Protection Commission, the lead privacy authority for X, because the company’s headquarters are based in Ireland.

In October, Italy’s authority blocked ClothOff, another AI platform that digitally removes clothing from images. The platform let anyone, including minors, create photos and videos portraying real people in the nude or in sexual positions.

In September, Italy added a new article under its criminal code to punish those disseminating AI deepfakes with up to five years in prison.

Germany

Germany said that it will soon present a “concrete proposal” for a new law against digital violence.

Anna-Lena Beckfeld, a spokesperson for Germany’s justice ministry, said in a press conference in January that the eventual digital violence law will be a way to support victims of this “type of digital violence,” by making it “easier for them to take direct action against violations of their rights online”.

When asked specifically about Grok, Beckfeld said it is “unacceptable that manipulation is being used on a large scale for systemic violations of personal rights”.

"We, therefore, want to see stronger measures taken against this through criminal law,” she added.

In 2025, Germany’s three main parties agreed to reform cybercrime law and close “loopholes” in the criminal code for AI-related crimes, such as AI-generated sexual images, according to a coalition agreement.

Australia

In January, the office of Australia’s eSafety Commissioner said it had received a small but increasing number of reports in the past couple of weeks about Grok’s sexual AI content.

The commissioner’s office said it will use its powers, such as removal notices ordering a social media site to take down problematic content, if any of the content violates the country’s Online Safety Act.

The office has already requested more information from X about the misuse of Grok to generate sexual content, and is evaluating whether the platform complies with Australia's new social media law.

It also issued a reminder that, as of March 9, online services, including AI companies, will have to block children’s access to sexual, violent or otherwise harmful content.


Social media harms teens, watchdog warns, as France weighs ban


By AFP
January 13, 2026


France is considering banning social media for under-15s - Copyright AFP/File Lionel BONAVENTURE


Rébecca FRASQUET

Social media harms the mental health of adolescents, particularly girls, France’s health watchdog said Tuesday as the country debates banning children under 15 from accessing the immensely popular platforms.

The results of an expert scientific review on the subject were announced after Australia last month became the first country to bar under-16s from big platforms including Instagram, TikTok and YouTube, while other nations consider following its lead.

Using social media is not the sole cause of the declining mental health of teenagers, but its negative effects are “numerous” and well documented, the French public health watchdog ANSES wrote in its opinion, the result of five years of work by a committee of experts.

France is currently debating two bills, one backed by President Emmanuel Macron, that would ban social media for under 15s.

The ANSES opinion recommended “acting at the source” to ensure that children can only access social networks “designed and configured to protect their health”.

This means that the platforms would have to change their personalised algorithms, persuasive techniques and default settings, according to the agency.

“This study provides scientific arguments for the debate about social networks in recent years: it is based on 1,000 studies,” the expert panel’s head Olivia Roth-Delgado told a press conference.

Social media can create an “unprecedented echo chamber” that reinforces stereotypes, promotes risky behaviour and enables cyberbullying, the ANSES opinion said.

The content also portrays an unrealistic idea of beauty via digitally altered images, which can lead to low self-esteem in girls and create fertile ground for depression or eating disorders, it added.

Girls — who use social media more than boys — are subjected to more of the “social pressure linked to gender stereotypes,” the opinion said.

This means girls are more affected by the dangers of social media — as are LGBT people and those with pre-existing mental health conditions, it added.

On Monday, tech giant Meta urged Australia to rethink its teen social media ban, while reporting that it has blocked more than 544,000 Instagram, Facebook and Threads accounts under the new law.

Meta said parents and experts were worried about the ban isolating young people from online communities, and driving some to less regulated apps and darker corners of the internet.

Elon Musk’s X, formerly Twitter, is meanwhile facing a global backlash for allowing users to use its AI chatbot Grok to create sexualised pictures of women and children using simple prompts such as “put her in a bikini”.

 

Capital, Labour and Primitive Accumulation
Werner Bonefeld



ABSTRACT
Marx died over his chapter on class in volume III of Capital. The analysis of capitalism is with necessity a class analysis and generations of Marxists have sought to supply the Marxist 'definition' of class. I use the term 'definition' here with critical intent. How might it be possible to define 'class' within a...

 

Scientists demonstrate low-cost, high-quality lenses for super-resolution microscopy


New fabrication approach uses consumer-grade 3D printers to create high-performance lenses, opening the door to advanced, customizable imaging


Optica

Image: a lenslet array. Using an approach that combines 3D printing, silicone molding and a UV-curable clear resin, researchers fabricated an inexpensive lenslet array, which they used for super-resolution imaging in a multifocal structured illumination microscope. The individual lenslet laser beams are distinguishable in the image. Credit: Ralf Bauer, University of Strathclyde




WASHINGTON — Researchers have shown that consumer-grade 3D printers and low-cost materials can be used to produce multi-element optical components that enable super-resolution imaging, with each lens costing less than $1 to produce. The new fabrication approach is poised to broaden access to fully customizable optical parts and could enable completely new types of imaging tools.

“We created optical parts that enable imaging of life’s smallest building blocks at a remarkable level of detail,” said lead author Jay Christopher from the University of Strathclyde in the UK. “This approach opens the possibility for customized imaging systems and unlocks imaging scenarios that are traditionally either impossible or need costly glass manufacturing services.”

In the Optica Publishing Group journal Biomedical Optics Express, the researchers describe their lens design and manufacturing processes, which combine 3D printing, silicone molding and a UV-curable clear resin. They used lenslets fabricated with their technique to create a multifocal structured illumination microscope that imaged microtubules in a cell’s cytoskeleton with a resolution of around 150 nm.

“Our new approach could empower scientists and companies to access tools previously locked behind specialist technology with high costs,” said Christopher. “Using budget-friendly 3D printers and materials, they could manufacture their own components to solve problems they are facing and, in turn, generate unique research and product development solutions.”

Inexpensive lenses for advanced imaging

The research builds upon earlier work in which the investigators showed that consumer-grade 3D printers and materials could be used to create basic lenses comparable in quality to factory-produced optics. These lenses, amongst others, were used to produce a fully 3D printed microscope.

“With consumer-grade 3D printing technologies becoming more sophisticated and precise every year, our ambitions grew from seeing whether 3D printed lenses could be used for biological imaging, in general, to just how far 3D printing lenses could really go within the latest advanced imaging concepts,” said research team lead Ralf Bauer.

For the new work, the researchers wanted to make inexpensive lenses that could be used in a multifocal structured illumination microscope (SIM). This type of microscope uses patterned light at multiple focal points to illuminate a sample, capturing multiple images that are computationally combined to reveal details smaller than the normal diffraction limit.

To create a high-quality lens for microscopy, the researchers needed to figure out a way to reduce the optical scattering they observed when focusing a laser through a 3D-printed lens. This scattering occurs because the lens is printed layer-by-layer using a pixelated screen, which can lead to unwanted diffraction effects in the lens. They developed a molding method to help eliminate this problem.

Making a laser-friendly lens

The new fabrication approach begins with a typical 3D printing process that involves designing the optic in freely available CAD software and then using a 3D printer to fabricate the design. After some simple processing steps, this produces a 3D printed raw optic. To enhance the clarity and transparency of the lens, the researchers attached more of the 3D printing material to each lens surface to smooth out the thin layers produced by 3D printing. This additive approach, which is much quicker than the traditional approach of polishing, created a custom-designed lens with surfaces smooth enough to compete with commercial-grade glass lenses.

For the multifocal structured illumination microscope, they designed and printed a lenslet array, which is a single optic consisting of many small lenses on the same surface. This optical design makes it possible to create many illumination points in the microscope, speeding the ability to capture tiny details in biological samples.

After printing and refining the lenslet array, the researchers made a silicone mold of it, which they then filled with inexpensive UV-curable clear resin. This created an optical part that didn’t suffer from diffraction effects.

The researchers used precision surface measurements to compare their low-cost optics against high-end and budget commercial optics, finding that the 3D-printed lens surfaces matched well with both types of commercial optic surfaces. They then used the 3D printed optical lens array in their lab-prototype multifocal structured illumination microscope, observing super-resolution biological data that was nearly identical in quality to that acquired with commercial glass lens arrays.

Next, the researchers plan to further explore the full design freedom offered by optical 3D printing. For example, the approach could be used to produce multiple focused points in three dimensions, to explore bio-inspired imaging and sensing designs or to combine different materials to make single, affordable components that combine transparent and opaque features for added functionality.

Paper: J. Christopher, L. M. Rooney, C. Butterworth, G. McConnell, R. Bauer, “Low-Cost 3D Printed Optics for Super-Resolution Multifocal Structured Illumination Microscopy,” Biomed. Opt. Express, 17, 769-783 (2025).
DOI: 10.1364/BOE.583760

About Optica Publishing Group

Optica Publishing Group is a division of the society, Optica, Advancing Optics and Photonics Worldwide. It publishes the largest collection of peer-reviewed and most-cited content in optics and photonics, including 19 prestigious journals, the society’s flagship member magazine, and papers and videos from over 1200 conferences. With over 505,000 journal articles, conference papers and videos to search, discover and access, its publications portfolio represents the full range of research in the field from around the globe.

About Biomedical Optics Express

Biomedical Optics Express serves the biomedical optics community with rapid, open-access, peer-reviewed papers related to optics, photonics and imaging in biomedicine. The journal scope encompasses fundamental research, technology development, biomedical studies and clinical applications. It is published monthly by Optica Publishing Group and edited by Christoph Hitzenberger, Medical University of Vienna, Austria. For more information, visit Biomedical Optics Express.



 

Children’s Hospital Colorado research outlines first pediatric classifications for suicide risk in adolescents and kids


Findings will improve earlier detection and interventions for kids




Children's Hospital Colorado




AURORA, Colo. (January 13, 2026) – Today, pediatric experts from Children’s Hospital Colorado (Children’s Colorado) announced published research in the Journal of the American Academy of Child and Adolescent Psychiatry that identifies five classifications of youth who have died by suicide. Using 10 years of national suicide data, Joel Stoddard, MD, MAS, child and adolescent psychiatrist at Children’s Colorado, and his team found that nearly half of youth who died by suicide did not have clinical contact or a known risk of suicide. These findings are critical for recommending new and increased suicide interventions for youth.

“In order to help kids now, we need to dig into the mountain of data available to us to learn about youth who are at risk of dying by suicide,” said Dr. Stoddard, who is also an associate professor of psychiatry with the CU Anschutz School of Medicine. “Not every child who dies by suicide has the same story. This research looks at the whole person and gives primary care providers, caregivers who work with kids and pediatric experts a greater understanding of suicide risks that are specific to youth.”  

Because of the pressing nature of youth suicide, it became imperative for Dr. Stoddard and his team to understand risk factors and contextual characteristics in young people to identify warning signs of suicide and implement new focused prevention efforts. Previous research has focused on analyzing adult populations who have died by suicide. Adult classifications, or profiles based on shared characteristics, improve the identification of suicide risk and prompt tailored intervention strategies for adults based on factors like demographics, psychological conditions, life circumstances, toxicological findings, mental health and substance use comorbidities, physical health issues and alcohol-related crises. These profiles provide insights into the varying pathways that lead adults to suicide, and now experts have a way of understanding those insights for youth.

“Youth face a unique set of pressures and vulnerabilities that are not typically seen in adult populations, or that may manifest differently due to developmental and environmental factors,” said Dr. Stoddard. “This research underscores the importance of early identification because by knowing how others have passed away, we can work to prevent this harm in the future. We not only want to keep kids from dying — we want to help them thrive at home, in school, and with their friends and family. Earlier intervention helps connect kids to treatment more quickly so they can grow into adulthood with the foundations they need.”  

Dr. Stoddard and his team identified at least five subgroups of youth who had died by suicide from national data. These classifications group the behaviors or stories of individuals who have died by suicide to reveal risk factors pediatric experts should look for. For example, the Hidden and Surveillance classes represent almost half of suicide decedents, prompting the recommendation for greater universal screening. The classifications are explained below:

  • Class 1, Crisis: These young people experienced a standalone acute interpersonal or school-related crisis. They did not present with prior suicidal thoughts or behaviors or med-psych challenges. This group of youth are most familiar to hospital emergency providers, as a crisis could be a common first reason for medical admission. An example crisis would be a young person experiencing a first relationship breakup.  

  • Class 2, Disclosing: These young people told someone about their suicidal thoughts. When people express distress, it is important for trusted adults to pay attention to their words and take appropriate next steps. Colorado’s Safe2Tell program is a model to other states, as its reporting system allows anyone to anonymously report concerns regarding their safety or the safety of others with quick intervention follow-up. This classification reveals a need for improved education around resources and interventions available to youth who disclose thoughts of suicide.

  • Class 3, Hidden: These young people had no recorded risk factors and minimal contact with the healthcare system. These youth would often be identified in the healthcare system while presenting for other physical symptoms (such as a broken arm), and their risk of suicide is not obvious. These young people were predominantly male and more likely to use firearms.

  • Class 4, Identified: These young people experience chronic crises and familial challenges and/or were frequent utilizers of the mental health system. Most of these young people were female and died by asphyxia or ingestion.   

  • Class 5, Surveillance: These young people were identified as having died by suicide through the state or local county coroner’s reporting, with no other reportable information shared about their death. When there are systematic gaps in reports to the CDC by the state, a youth may be classified in this way. It is important to note: this class number is much lower in Colorado, as Colorado has been a leader in participating in the National Violent Death Reporting System (NVDRS) for more than 20 years. Colorado’s consistent recordkeeping and leadership in creating partnerships with county coroners has allowed for greater understanding of how and why a person died.

In September, Children’s Colorado announced that The Anschutz Foundation generously committed a challenge donation to the University of Colorado Anschutz, with Children’s Colorado receiving funding directed to supporting the mental health of children across the region. At Children’s Colorado, the investment will first go toward suicide prevention. These classifications assist public health and education systems in refining strategies for earlier upstream detection of suicidality and intervention beyond traditional mental health care.  

Below are Children’s Colorado’s research-driven recommended actions that can be implemented in the community, schools, primary care offices, places of worship, homes and more:  

  • Universal suicide risk screening: Many young people who died by suicide lacked prior contact with the mental health system. It is important for screening to exist in health care and community- and school-based settings.  

  • Safe firearm storage and counseling: Research shows that having firearms in the home increases the risk of unintentional injuries and deaths, as well as suicide and homicide. It is essential to normalize having conversations about gun safety and access to firearms in your home or the homes of friends and family, especially during periods of crisis or higher risk. Children’s Colorado recommends integrating counseling on firearm safety and safe storage of lethal means into routine primary care, given the increased use of firearms in deaths across all five classifications of youth suicide. Both the Anschutz Medical Campus and the Colorado Department of Public Health and Environment offer safe storage guidance.

  • Crisis-oriented outreach: Colorado has lower rates of crisis-classified deaths, likely due to the state’s crisis-oriented outreach interventions. Communities and states can implement interventions like text-based support (e.g. Colorado 988), peer disclosure programs (e.g. Safe2Tell) and rapid crisis counseling (e.g. IMatter or the Second Wind Fund) to urgently assist youth in acute relational or academic crises.  

  • Enhanced surveillance and data quality: From a state policy perspective, it would be advantageous to strengthen death scene investigation protocols and reporting standards to reduce missing data on young people who have died by suicide. Data informs research, which improves the interventions recommended to save kids’ lives.

  • Integrated care for high-risk classifications: It is important to follow up with youth who are already in treatment (those in the Disclosing or Identified classifications) after they graduate from a treatment program, or to recommend integrated care models that address both chronic mental health needs and acute stressors.

“Pediatric suicide rates remain high, and one young person lost is one too many,” said Ron-Li Liaw, MD, Mental Health-in-Chief at Children’s Colorado. “I am proud of Children’s Colorado for leading the way in translating research and data collection into actionable policy and intervention recommendations that should be implemented across the continuum of mental health care in Colorado, primary care, schools and all mountain west communities. With youth suicide reported to be on the decline, we can consider the effectiveness of the interventions already happening in our hospital and across our state. Other states can look to Colorado as a model for recognizing the need for greater investment, time and energy in understanding youth mental health so we can continue improving the prevention of youth suicide.”

To learn more about investing in mental health research and prevention, visit Join the Movement: Kids’ Mental Health Can’t Wait. To learn more about pediatric mental health services available at Children’s Colorado, visit childrenscolorado.org.  

 

ABOUT CHILDREN’S HOSPITAL COLORADO 

Children’s Hospital Colorado is one of the nation’s leading and most expansive nonprofit pediatric healthcare systems with a mission to improve the health of children through patient care, education, research and advocacy. Founded in 1908 and ranked among the best children’s hospitals in the nation as recognized by U.S. News & World Report, Children’s Colorado has established itself as a pioneer in the discovery of innovative and groundbreaking treatments that are shaping the future of pediatric healthcare worldwide. Children’s Colorado offers a full spectrum of family-centered care at its urgent, emergency and specialty care locations throughout Colorado, including an academic medical center on the Anschutz Medical Campus in Aurora, hospitals in Colorado Springs, Highlands Ranch and Broomfield, and outreach clinics across the region. For more information, visit www.childrenscolorado.org or connect with us on Facebook, Instagram and YouTube.