How can visual artists protect their work from AI crawlers? It’s complicated
Most artists don’t have access to the tools that would allow them to block AI crawlers. And when they do have access, they often don’t know how to use them.
Image: In this example robots.txt file, Googlebot is allowed to crawl all URLs on the website, ChatGPT-User and GPTBot are disallowed from crawling any URLs, and all other crawlers are disallowed from crawling URLs under the /secret/ directory. (Credit: University of California San Diego)
Visual artists want to protect their work from non-consensual use by generative AI tools such as ChatGPT. But most of them do not have the technical know-how or control over the tools needed to do so.
One of the best ways to protect artists’ creative work is to prevent it from ever being seen by “AI crawlers” – the programs that harvest data on the Internet for training generative models. But most artists don’t have access to the tools that would allow them to take such actions. And when they do have access, they don’t know how to use them.
These are some of the conclusions of a study by a group of researchers at the University of California San Diego and the University of Chicago, which will be presented at the 2025 ACM Internet Measurement Conference in October in Madison, Wis.
“At the core of the conflict in this paper is the notion that content creators now wish to control how their content is used, not simply if it is accessible. While such rights are typically explicit in copyright law, they are not readily expressible, let alone enforceable in today’s Internet. Instead, a series of ad hoc controls have emerged based on repurposing existing web norms and firewall capabilities, none of which match the specificity, usability, or level of enforcement that is, in fact, desired by content creators,” the researchers write.
The research team surveyed over 200 visual artists about the demand for tools to block AI crawlers, as well as the artists’ technical expertise. Researchers also reviewed more than 1,100 professional artist websites to see how much control artists had over AI-blocking tools. Finally, the team evaluated which processes were the most effective at blocking AI crawlers.
Currently, artists can fairly easily use some tools that mask original artworks from AI crawlers by turning the art into something different. The study’s co-authors at the University of Chicago developed one of these tools, known as Glaze.
But ideally, artists would be able to keep AI crawlers from harvesting their data altogether. To do so, visual artists need to defend themselves against three categories of AI crawlers. One type harvests data to train the large language models that power chatbots, another to increase the knowledge of AI-backed assistants, and yet another to support AI-backed search engines.
Artist survey
There has been extensive media coverage of how generative AI has severely disrupted the livelihoods of many artists. As a result, close to 80% of the 203 visual artists the researchers surveyed said they have tried to take proactive steps to keep their artwork from being included in training data for generative AI tools. Two-thirds reported using Glaze. In addition, 60% of artists have cut back on the amount of work they share online, and 51% share only low-resolution images of their work.
Also, 96% of artists said they would like to have access to a tool that can deter AI crawlers from harvesting their data. But more than 60% of them were not familiar with one of the simplest tools that can do this: robots.txt.
Tools for Deterring AI Crawlers
Robots.txt is a simple text file placed in the root directory of a website that spells out which pages crawlers are allowed to access on that website. The text file can also spell out which crawlers are not allowed to have access to the website at all. But the crawlers have no obligation to follow these restrictions.
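As an illustration, a robots.txt file that welcomes Google’s search crawler, bans OpenAI’s crawlers entirely, and keeps all other crawlers out of a /secret/ directory might look like the following sketch (the user-agent tokens are real ones the article names; the /secret/ path is illustrative):

```text
# Allow Google's search crawler to access everything
User-agent: Googlebot
Disallow:

# Block OpenAI's crawlers from the entire site
User-agent: ChatGPT-User
Disallow: /

User-agent: GPTBot
Disallow: /

# All other crawlers: stay out of /secret/
User-agent: *
Disallow: /secret/
```

Note that these directives are purely advisory: a crawler must choose to fetch robots.txt and honor it.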
Researchers surveyed the 100,000 most popular websites on the Internet and found that more than 10% explicitly disallow AI crawlers in their robots.txt files. But some sites, including Vox Media and The Atlantic, removed this prohibition after entering into licensing agreements with AI companies. Indeed, the number of sites allowing AI crawlers is increasing, including popular right-wing misinformation sites; the researchers hypothesize that these sites may be seeking to spread misinformation to LLMs.
One issue for artists is that they often have no access to, or control over, the relevant robots.txt file. In a survey of 1,100 artist websites, the researchers found that more than three quarters are hosted on third-party service platforms, most of which do not allow modifications to robots.txt. Many of the content management systems artists use also give them little to no information about what type of crawling is blocked. Squarespace is the only such company that provides a simple interface for blocking AI tools, but the researchers found that only 17% of artists who use Squarespace enable this option, possibly because many are unaware that it exists.
But do crawlers respect the prohibitions listed in robots.txt, even though they are not mandatory?
The answer is mixed. Crawlers from big corporations generally do respect robots.txt, both in claim and in practice. The only crawler the researchers could clearly determine does not is Bytespider, deployed by TikTok owner ByteDance. In addition, a large number of crawlers claim to respect robots.txt restrictions, but the researchers were unable to verify that this is actually the case.
All in all, “the majority of AI crawlers operated by big companies do respect robots.txt, while the majority of AI assistant crawlers do not,” the researchers write.
More recently, network provider Cloudflare has launched a “block AI bots” feature. At this point, only 5.7% of the sites using Cloudflare have this option enabled. But researchers hope it will become more popular over time.
“While it is an 'encouraging new option', we hope that providers become more transparent with the operation and coverage of their tools (for example by providing the list of AI bots that are blocked),” said Elisa Luo, one of the paper’s authors and a Ph.D. student in Savage’s research group.
Legislative and legal uncertainties
The global landscape around AI crawlers is constantly shifting, shaped by court rulings and a wide range of legislative proposals.
In the United States, AI companies face legal challenges around the extent to which copyright applies to models trained on data scraped from the Internet and what their obligations might be to the creators of this content. In the European Union, the recently passed AI Act requires providers of AI models to get authorization from copyright holders to use their data.
“There is reason to believe that confusion around the availability of legal remedies will only further focus attention on technical access controls,” the researchers write. “To the extent that any U.S. court finds an affirmative ‘fair use’ defense for AI model builders, this weakening of remedies on use will inevitably create an even stronger demand to enforce controls on access.”
The work was partially funded by NSF grant SaTC-2241303 and the Office of Naval Research project #N00014-24-1-2669.
Enze Alex Liu, Elisa Luo, Geoffrey M. Voelker, and Stefan Savage, Department of Computer Science and Engineering at the University of California San Diego
Shawn Shan, Ben Y. Zhao, University of Chicago
Figure: Number of sites that explicitly allow at least one AI crawler in their robots.txt over time, and number of sites that removed restrictions on AI crawlers. The vertical lines indicate public data deals between major publishers (those controlling 40+ domains) and OpenAI.
Table: Summary of AI user agents studied and the companies associated with them. Researchers note whether companies publish the IP addresses they use when crawling with a particular user agent, whether their documentation claims to respect robots.txt, and whether they respect robots.txt in practice.
Credit: University of California San Diego
Method of Research: Survey
Subject of Research: People
Article Title: Somesite I Used To Crawl: Awareness, Agency and Efficacy in Protecting Content Creators From AI Crawlers
Connect and corrupt: C++ coroutines prone to code-reuse attack despite CFI
CISPA Helmholtz Center for Information Security
Image: CFOP: Hijacking C++ coroutines (Credit: CISPA)
A code-reuse attack named Coroutine Frame-Oriented Programming (CFOP) is capable of exploiting C++ coroutines across the three major compilers: Clang/LLVM, GCC and MSVC. CFOP even succeeds in environments protected by Control Flow Integrity (CFI), exposing gaps in 15 such defense schemes. Rather than injecting new code, CFOP chains together existing functions, achieving arbitrary code execution after corrupting coroutine-internal memory structures. The exploitation technique was discovered by researchers at the CISPA Helmholtz Center for Information Security, who are the first to study C++ coroutines from a security perspective. To mitigate CFOP, they propose structural changes to the way the major compilers implement C++ coroutines.
Devising a novel code-reuse attack, the CISPA researchers Marcos Sanchez Bajo and Professor Dr. Christian Rossow have demonstrated that all existing implementations of C++ coroutines can be exploited to bypass state-of-the-art CFI protections on both Linux and Windows. Called Coroutine Frame-Oriented Programming (CFOP), the attack corrupts heap memory, allowing attackers to manipulate data and assume complete control over applications. A relatively recent addition to C++, coroutines are already present in more than 130 unique popular GitHub repositories. “They’re being used to pause and resume functions”, Bajo explains, “which is very useful for asynchronous programming, for example in servers, databases and web browsers.”
Connecting C++ coroutine functions to corrupt heap memory
In more concrete terms, coroutines can, for instance, be used to create generators that produce a sequence of elements. Imagine a Fibonacci series, where each new number is the sum of the two preceding numbers. After yielding each number in the series, the coroutine is paused until it is called to generate the next one. In CFOP, entire C++ coroutines and other existing functions are used to create a code-reuse attack, as Bajo explains: “With code-reuse attacks in general, attackers take snippets of code that belong to the application anyway, so no new code is injected. They then form chains of these code snippets to manipulate the program’s execution flow. But bypassing CFI protections is a little more difficult. Instead of just taking snippets of code and creating chains, you have to take full coroutine functions and connect them in smart ways.” Once the CFI protections are circumvented by hijacking a coroutine function in this manner, any other existing function can be drawn into the code-reuse attack.
CFI schemes fail to protect C++ coroutines
Introduced to protect against code-reuse attacks, CFI schemes ensure that the correct program execution flow is observed. Programming languages, however, evolve dynamically, while CFI schemes only protect the programming paradigms that were present at the time of their creation, as Bajo points out: “The main problem with CFI is that this defense is static in time, meaning that it only covers the possibilities of a programming language as is. If new features are introduced to the programming language later on, CFI does not recognize them and cannot deal with them because it was created based on an older version of the programming language.” In their study, Bajo and Rossow found that only 7 out of the 15 CFI schemes they considered initially were compatible with coroutines. Of these 7, only 2 (IBT and Control Flow Guard) provided partial protection against the exploitation of coroutines, while the remaining 5 provided none. “In the end”, Bajo summarizes, “we were able to bypass all of them. With CFOP, you can still do all the things that were possible previous to CFI.”
Patching CFOP is a structural issue
The fact that C++ coroutines are enjoying increasing popularity exacerbates the potential reach of CFOP. Bajo says: “Coroutines were introduced to C++ in 2020 and, since then, developers have been using them more and more. Unfortunately, we found that coroutines have certain structures in memory that can be targeted by attackers. To the best of our knowledge, this has not yet been exploited in real life.” Essentially, CFOP is possible because the three major compilers implement C++ coroutines in a way that renders them structurally vulnerable. Bajo says: “Mitigating this exploitation technique is not as easy as patching the code – this is a structural issue and you need to rethink how the application works internally.” Bajo and Rossow have developed successful implementation alternatives for C++ coroutines and reported these mitigations to Clang/LLVM, GCC and MSVC in November 2024. The CISPA research on CFOP will be presented at Black Hat USA in Las Vegas on August 7, 2025.
Subject of Research: Not applicable
Article Title: “Await() a Second: Evading Control Flow Integrity by Hijacking C++ Coroutines”