MATT O'BRIEN and HALELUYA HADERO
Updated Wed, December 20, 2023
David Thiel, chief technologist at the Stanford Internet Observatory and author of its report that discovered images of child sexual abuse in the data used to train artificial intelligence image-generators, poses for a photo on Wednesday, Dec. 20, 2023, in Óbidos, Portugal.
(Camilla Mendes dos Santos via AP)
Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.
Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.
Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they've learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.
The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory’s report, LAION told The Associated Press it was temporarily removing its datasets.
LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, said in a statement that it “has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them.”
While the images account for just a fraction of LAION’s index of some 5.8 billion images, the Stanford group says the material is likely influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.
It’s not an easy problem to fix, and traces back to many generative AI projects being “effectively rushed to market” and made widely accessible because the field is so competitive, said Stanford Internet Observatory's chief technologist David Thiel, who authored the report.
“Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention,” Thiel said in an interview.
A prominent LAION user that helped shape the dataset's development is London-based startup Stability AI, maker of the Stable Diffusion text-to-image models. New versions of Stable Diffusion have made it much harder to create harmful content, but an older version introduced last year — which Stability AI says it didn't release — is still baked into other applications and tools and remains “the most popular model for generating explicit imagery,” according to the Stanford report.
“We can’t take that back. That model is in the hands of many people on their local machines,” said Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection, which runs Canada's hotline for reporting online sexual exploitation.
Stability AI on Wednesday said it only hosts filtered versions of Stable Diffusion and that “since taking over the exclusive development of Stable Diffusion, Stability AI has taken proactive steps to mitigate the risk of misuse.”
“Those filters remove unsafe content from reaching the models,” the company said in a prepared statement. “By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content.”
LAION was the brainchild of a German researcher and teacher, Christoph Schuhmann, who told the AP earlier this year that part of the reason to make such a huge visual database publicly accessible was to ensure that the future of AI development isn't controlled by a handful of powerful companies.
“It will be much safer and much more fair if we can democratize it so that the whole research community and the whole general public can benefit from it,” he said.
Much of LAION's data comes from another source, Common Crawl, a repository of data constantly trawled from the open internet, but Common Crawl's executive director, Rich Skrenta, said it was "incumbent on" LAION to scan and filter what it took before making use of it.
LAION said this week it developed “rigorous filters” to detect and remove illegal content before releasing its datasets and is still working to improve those filters. The Stanford report acknowledged LAION's developers made some attempts to filter out “underage” explicit content but might have done a better job had they consulted earlier with child safety experts.
Many text-to-image generators are derived in some way from the LAION database, though it's not always clear which ones. OpenAI, maker of DALL-E and ChatGPT, said it doesn't use LAION and has fine-tuned its models to refuse requests for sexual content involving minors.
Google built its text-to-image Imagen model based on a LAION dataset but decided against making it public in 2022 after an audit of the database “uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.”
Trying to clean up the data retroactively is difficult, so the Stanford Internet Observatory is calling for more drastic measures. One is for anyone who's built training sets off of LAION-5B — named for the more than 5 billion image-text pairs it contains — to “delete them or work with intermediaries to clean the material.” Another is to effectively make an older version of Stable Diffusion disappear from all but the darkest corners of the internet.
“Legitimate platforms can stop offering versions of it for download,” particularly if they are frequently used to generate abusive images and have no safeguards to block them, Thiel said.
As an example, Thiel called out CivitAI, a platform that's favored by people making AI-generated pornography but which he said lacks safety measures to prevent the creation of images of children. The report also calls on AI company Hugging Face, which distributes the training data for models, to implement better methods to report and remove links to abusive material.
Hugging Face said it is regularly working with regulators and child safety groups to identify and remove abusive material. Meanwhile, CivitAI said it has “strict policies” on the generation of images depicting children and has rolled out updates to provide more safeguards. The company also said it is working to ensure its policies are “adapting and growing” as the technology evolves.
The Stanford report also questions whether any photos of children — even the most benign — should be fed into AI systems without their family's consent due to protections in the federal Children’s Online Privacy Protection Act.
Rebecca Portnoff, the director of data science at the anti-child sexual abuse organization Thorn, said her organization has conducted research that shows the prevalence of AI-generated images among abusers is small, but growing consistently.
Developers can mitigate these harms by making sure the datasets they use to develop AI models are clean of abuse materials. Portnoff said there are also opportunities to mitigate harmful uses down the line after models are already in circulation.
Tech companies and child safety groups currently assign videos and images a “hash” — unique digital signatures — to track and take down child abuse materials. According to Portnoff, the same concept can be applied to AI models that are being misused.
“It’s not currently happening,” she said. “But it’s something that in my opinion can and should be done.”
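The sketch below is a rough illustration of the hash-matching concept Portnoff describes, not any organization's actual tooling: an ordinary cryptographic hash from Python's standard library stands in for restricted perceptual-hashing systems such as PhotoDNA, and the blocklist entry and file name are hypothetical placeholders.

```python
# Minimal sketch of hash-based blocklisting, the concept described above.
# hashlib.sha256 stands in for restricted perceptual hashes like PhotoDNA;
# the blocklist contents and file name here are hypothetical examples.
import hashlib
from pathlib import Path

def file_fingerprint(path: Path) -> str:
    """Return a hex digest identifying the file's exact bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A hotline or industry group would supply the list of known-bad hashes.
KNOWN_ABUSE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder value
}

def is_flagged(path: Path) -> bool:
    """True if the file's fingerprint matches a known-bad entry."""
    return file_fingerprint(path) in KNOWN_ABUSE_HASHES

if __name__ == "__main__":
    sample = Path("model.safetensors")  # hypothetical model or image file
    if sample.exists():
        print(f"{sample}: {'flagged' if is_flagged(sample) else 'not in blocklist'}")
```

An exact cryptographic hash only matches byte-identical files, which is one reason industry systems rely on perceptual hashes that survive resizing and re-encoding.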
Thousands of child abuse images found in AI training tool
James Titcomb
Wed, December 20, 2023 at 5:59 AM MST
Thousands of child abuse images have been discovered in a database for artificial intelligence systems, raising fears that AI tools have been “trained” on the illegal images.
Researchers have identified more than 3,000 cases of child sexual abuse material in a vast trove of images compiled to train AI photo generation software.
The free database, known as “LAION-5b”, has been used to develop AI software – including a version of Stable Diffusion, a popular image generator.
Google has used a separate LAION dataset to train an old version of one of its systems that had a restricted release to the public.
The research highlights the danger in developing AI systems using data scraped from swathes of the internet, much of which is not manually checked by companies.
Training AI models on child abuse pictures would make them more capable of creating illegal images.
Child safety experts have sounded the alarm over a tidal wave of AI-generated child abuse images this year, saying they risk being overwhelmed.
The research, led by the Stanford Internet Observatory in California, found 3,215 suspected cases of child abuse images in the LAION-5b dataset.
Hundreds of cases were confirmed by manual reviewers at the Canadian Centre for Child Protection.
LAION, the German non-profit behind the dataset, said it was taking it offline in response to the findings to ensure it was safe.
AI image creation tools, which are capable of turning text instructions into professional or photorealistic images, have exploded in the last year.
The systems are best known for creating viral deepfakes, such as fake images of the Pope wearing a puffer jacket or Donald Trump being arrested.
AI image creation tools were used to create a fake image of Donald Trump being arrested - Eliot Higgins/Twitter
They are developed by being “trained” on millions of existing images and captions, which makes them capable of creating new images.
Concerns about illegal images featuring in datasets have been raised before, but the Stanford research is believed to be the most comprehensive evidence of child abuse material being included in them.
David Thiel, the Stanford Internet Observatory’s chief technologist who led the research, said other image datasets might have had similar issues, although they are closely guarded and difficult to research.
Parts of the LAION-5b dataset were used to train Stable Diffusion 1.5, a system released last year.
Stability AI, the London tech company that now operates Stable Diffusion, said the version in question was developed by a separate organisation, RunwayML.
Stability AI said it had since applied much stricter rules around datasets and blocked users from generating explicit content in subsequent releases.
“Stability AI only hosts versions of Stable Diffusion that include filters on its API. These filters remove unsafe content from reaching the models. By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content.
“This report focuses on the LAION-5b dataset as a whole. Stability AI models were trained on a filtered subset of that dataset. In addition, we subsequently fine-tuned these models to mitigate residual behaviours.
“Stability AI is committed to preventing the misuse of AI. We prohibit the use of our image models and services for unlawful activity, including attempts to edit or create CSAM.”
A spokesman for RunwayML said: “Stable Diffusion 1.5 was released in collaboration with Stability AI and researchers from LMU Munich. This collaboration has been frequently reiterated by Stability themselves, along with numerous media outlets.”
Stable Diffusion is released as free and editable software, meaning that earlier versions of it are still downloaded and shared online.
The Internet Watch Foundation, Britain’s hotline for reporting child abuse material, said separately that it had been working with LAION to remove links to abuse images.
Susie Hargreaves, the IWF’s chief executive, said: “The IWF has engaged with the team behind the LAION dataset with the aim of supporting them in filtering and removing URLs that are known to link to child sexual abuse material.
“The IWF has found a relatively small number of links to illegal content have also found their way into the LAION dataset. Without strong content moderation and filtering there is always a danger that criminal material from the open internet will end up being rolled into these giant datasets.
“We are pleased the LAION team want to be proactive in tackling this issue, and we are looking forward to working with them on a robust solution.”
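As a rough illustration of the URL filtering the IWF describes, the hypothetical sketch below screens a LAION-style metadata file (rows of image URLs and captions) against a hotline-supplied blocklist; the column names, file paths and CSV format are illustrative assumptions, not LAION's actual schema or the IWF's actual process.

```python
# Hypothetical sketch: dropping dataset rows whose image URL appears on a
# hotline-supplied blocklist. Column names and file paths are illustrative only.
import csv

def load_blocklist(path: str) -> set[str]:
    """Read one known-bad URL per line into a set for fast lookups."""
    with open(path, encoding="utf-8") as handle:
        return {line.strip() for line in handle if line.strip()}

def filter_records(records_path: str, blocklist: set[str], out_path: str) -> int:
    """Copy url/caption rows that are not blocklisted; return how many were dropped."""
    dropped = 0
    with open(records_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)  # assumes "url" and "caption" columns
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["url"] in blocklist:
                dropped += 1
                continue
            writer.writerow(row)
    return dropped

if __name__ == "__main__":
    bad_urls = load_blocklist("hotline_blocklist.txt")  # hypothetical input file
    removed = filter_records("laion_subset.csv", bad_urls, "laion_subset_clean.csv")
    print(f"Removed {removed} blocklisted rows")
```

URL blocklists only catch known locations of known material, so in practice they are paired with hash matching of the images themselves, as the other reports gathered here describe.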
LAION is run by a team of volunteer researchers in Germany and designed to provide a free alternative to the vast image libraries built up by private companies such as OpenAI.
Responding to the research, LAION said: “LAION is a non-profit organisation that provides datasets, tools and models for the advancement of machine learning research. We are committed to open public education and the environmentally safe use of resources through the reuse of existing datasets and models.
“LAION datasets (more than 5.85 billion entries) are sourced from the freely available Common Crawl web index and offer only links to content on the public web, with no images. We developed and published our own rigorous filters to detect and remove illegal content from LAION datasets before releasing them.
“We collaborate with universities, researchers and NGOs to improve these filters and are currently working with the Internet Watch Foundation to identify and remove content suspected of violating laws. We invite Stanford researchers to join LAION to improve our datasets and to develop efficient filters for detecting harmful content.
“LAION has a zero tolerance policy for illegal content and in an abundance of caution, we are temporarily taking down the LAION datasets to ensure they are safe before republishing them.”
Google said it used a series of techniques to filter out offensive and illegal material, and that only the first versions of its Imagen system were trained using a LAION dataset.
“We have a long track record of fighting child sexual abuse and exploitation online and our approach to generative AI is no different,” the company said.
“We don’t allow child sexual abuse material (CSAM) to be created or shared on our platforms and we’ve built safeguards into Google’s AI models and products to detect and prevent related results.
“We will continue to act responsibly, working closely with industry experts to ensure we are evolving and strengthening our protections to stay ahead of new abuse trends as they emerge.”
Large AI Dataset Has Over 1,000 Child Abuse Images, Researchers Find
Davey Alba and Rachel Metz
Wed, December 20, 2023
(Bloomberg) -- A massive public dataset used to build popular artificial intelligence image generators contains at least 1,008 instances of child sexual abuse material, a new report from the Stanford Internet Observatory found.
LAION-5B, which contains more than 5 billion images and related captions from the internet, may also include thousands of additional pieces of suspected child sexual abuse material, or CSAM, according to the report. The inclusion of CSAM in the dataset could enable AI products built on this data — including image generation tools like Stable Diffusion — to create new, and potentially realistic, child abuse content, the report warned.
The rise of increasingly powerful AI tools has raised alarms in part because these services are built with troves of online data — including public datasets such as LAION-5B — that can contain copyrighted or harmful content. AI image generators, in particular, rely on datasets that include pairs of images and text descriptions to determine a wide range of concepts and create pictures in response to prompts from users.
In a statement, a spokesperson for LAION, the Germany-based nonprofit behind the dataset, said the group has a “zero tolerance policy” for illegal content and was temporarily removing LAION datasets from the internet “to ensure they are safe before republishing them.” Prior to releasing its datasets, LAION created and published filters for spotting and removing illegal content from them, the spokesperson said.
Christoph Schuhmann, LAION’s founder, previously told Bloomberg News that he was unaware of any child nudity in the dataset, though he acknowledged he did not review the data in great depth. If notified about such content, he said, he would remove links to it immediately.
A spokesperson for Stability AI, the British AI startup that funded and popularized Stable Diffusion, said the company is committed to preventing the misuse of AI and prohibits the use of its image models for unlawful activity, including attempts to edit or create CSAM. “This report focuses on the LAION-5B dataset as a whole,” the spokesperson said in a statement. “Stability AI models were trained on a filtered subset of that dataset. In addition, we fine-tuned these models to mitigate residual behaviors.”
LAION-5B, or subsets of it, have been used to build multiple versions of Stable Diffusion. A more recent version of the software, Stable Diffusion 2.0, was trained on data that substantially filtered out “unsafe” materials in the dataset, making it much more difficult for users to generate explicit images. But Stable Diffusion 1.5 does generate sexually explicit content and is still in use in some corners of the internet. The spokesperson said Stable Diffusion 1.5 was not released by Stability AI, but by Runway, an AI video startup that helped create the original version of Stable Diffusion. Runway said it was released in collaboration with Stability AI.
“We have implemented filters to intercept unsafe prompts or unsafe outputs when users interact with models on our platform,” the Stability AI spokesperson added. “We have also invested in content labeling features to help identify images generated on our platform. These layers of mitigation make it harder for bad actors to misuse AI.”
LAION-5B was released in 2022 and relies on raw HTML code collected by a California nonprofit to locate images around the web and associate them with descriptive text. For months, rumors that the dataset contained illegal images have circulated in discussion forums and on social media.
“As far as we know, this is the first attempt to actually quantify and validate concerns,” David Thiel, chief technologist of the Stanford Internet Observatory, said in an interview with Bloomberg News.
For their report, Stanford Internet Observatory researchers detected the CSAM material by looking for different kinds of hashes, or digital fingerprints, of such images. The researchers then validated them using APIs dedicated to finding and removing known images of child exploitation, as well as by searching for similar images in the dataset.
Much of the suspected CSAM content that the Stanford Internet Observatory found was validated by third parties like the Canadian Centre for Child Protection and through a tool called PhotoDNA, developed by Microsoft Corp., according to the report. Given that the Stanford Internet Observatory researchers could only work with a limited portion of high-risk content, additional abusive content likely exists in the dataset, the report said.
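As a loose, hypothetical sketch of the two-stage workflow the report describes (first narrowing a huge dataset to high-risk candidates, then validating those candidates' fingerprints against hash lists held by child-protection organizations), the example below uses invented field names, an invented threshold and made-up hashes; real validation services such as PhotoDNA are access-restricted and use perceptual rather than plain hashes.

```python
# Loose, hypothetical sketch of the two-stage screening described above:
# (1) narrow a huge dataset to high-risk candidates, (2) validate candidate
# hashes against a list held by a child-protection organization.
# Field names, the threshold, and the hashes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Record:
    url: str
    caption: str
    unsafe_score: float   # e.g. output of a safety classifier, 0.0 to 1.0
    image_hash: str       # fingerprint of the downloaded image

def screen(records: list[Record],
           known_bad_hashes: set[str],
           threshold: float = 0.9) -> list[Record]:
    """Return records that are both high-risk and hash-matched."""
    candidates = [r for r in records if r.unsafe_score >= threshold]
    return [r for r in candidates if r.image_hash in known_bad_hashes]

# Example with made-up data:
records = [
    Record("https://example.org/a.jpg", "a cat on a sofa", 0.02, "hash-aaa"),
    Record("https://example.org/b.jpg", "...", 0.97, "hash-bbb"),
]
matches = screen(records, known_bad_hashes={"hash-bbb"})
print(len(matches), "validated matches")  # -> 1 validated matches
```

In the actual study, candidate selection and validation were routed through vetted organizations and restricted APIs rather than local code like this.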
While the amount of CSAM present in the dataset doesn’t indicate that the illicit material “drastically” influences the images churned out by AI tools, Thiel said it does likely still have an impact. “These models are really good at being able to learn concepts from a small number of images,” he said. “And we know that some of these images are repeated, potentially dozens of times in the dataset.”
Stanford Internet Observatory’s work previously found that generative AI image models can produce CSAM, but that work assumed the AI systems were able to do so by combining two “concepts,” such as children and sexual activity. Thiel said the new research suggests these models might generate such illicit images because of some of the underlying data on which they were built. The report recommends that models based on Stable Diffusion 1.5 “should be deprecated and distribution ceased wherever feasible.”
--With assistance from Marissa Newman and Aggi Cantrill.
©2023 Bloomberg L.P.