
Facebook’s Secret “Dangerous Organizations and Individuals” List Creates Problems for the Company—and Its Users

BY JILLIAN C. YORK AND DAVID GREENE
EFF
DECEMBER 1, 2021


Along with the trove of "Facebook Papers" recently leaked to press outlets was a document that Facebook has, until now, kept intentionally secret: its list of "Dangerous Organizations and Individuals." The list comprises supposed terrorist groups, hate groups, criminal groups, and individuals associated with each, and is used to filter and remove speech on the platform. We're glad to have transparency into the document now. But as The Intercept recently reported, and as Facebook likely expected, the list raises alarm bells for free speech activists and for people around the world who are put into difficult, if not impossible, positions when it comes to discussing individuals or organizations that may play major roles in their governments, for better or for worse.

While the list included many of the usual suspects, it also contained a number of charities and hospitals, as well as several musical groups, some of which were likely surprised to find themselves lumped together with state-designated terrorist organizations. The leaked document demonstrated the opaque and seemingly arbitrary nature of Facebook’s rulemaking.

Tricky business


Let’s begin with an example: In August, as the Taliban gained control over Afghanistan and declared its intent to re-establish the Islamic Emirate of Afghanistan, the role of the Internet—and centralized social media platforms in particular—became an intense focus of the media. Facebook drew particular scrutiny, both for the safety features it offered to Afghans and for the company’s strong stance toward the Taliban.

The Taliban has long been listed as a terrorist organization by various entities, including the United Nations and the U.S. government. It has additionally been subject, since the 1990s, to draconian sanctions imposed by the UN Security Council, the U.S., and other countries, sanctions designed to effectively prevent any economic or other service-related interactions with the group.

As a result of these strict sanctions, a number of internet companies, including Facebook, had placed restrictions on the Taliban’s use of their platforms even prior to the group’s takeover. But as the Taliban took power, Facebook reportedly put new resources into ensuring that the group couldn’t use its services. By contrast, Twitter continued to allow the group to maintain a presence, although it did later remove the Pashto and Dari accounts of Taliban spokesperson Zabihullah Mujahid, leaving only his English account intact.

The conflicting decisions taken by these and other companies, as well as their often-confused messaging around legal obligations vis-a-vis the Taliban and other extremist groups, are worth picking apart, particularly in light of the growing use of terrorist lists by states as a means of silencing and exclusion. Not one but several groups listed as terrorists by the United States occupy significant roles in their countries’ governments.

As The Lawfare Podcast’s Quinta Jurecic put it: “What do you do when an insurgent group you’ve blocked on your platform is now de facto running a country?”

Legal obligations and privatized provisions

First, it’s important to clarify where companies’ legal obligations lie. There are three potential legal issues that come into play, and they are, unfortunately, often conflated by company spokespeople.

The first is what is commonly referred to as “material support law,” which prohibits U.S. persons and entities from providing material support (that is, financial or in-kind assistance) to groups on the State Department’s list of foreign terrorist organizations (FTO). As we’ve written previously, “as far as is publicly known, the U.S. government has not taken the position that allowing a designated foreign terrorist organization to use a free and freely available online platform is tantamount to ‘providing material support’ for such an organization, as is prohibited under the patchwork of U.S. anti-terrorism laws” and U.S. courts have consistently rejected efforts to impose civil liability on online platforms when terrorist groups use them to communicate. More importantly, the Supreme Court has limited these restrictions to concerted “acts done for the benefit of or at the command of another.”

This is important because, as various documents leaked from inside Facebook have repeatedly revealed, the company appears to use the FTO list as part of the basis for its own policy on what constitutes a “dangerous organization” (though, notably, its list goes far beyond that of the U.S. government). Furthermore, the company has rules that restrict people who are not members of designated groups from praising or speaking positively about those entities in any way, which, in practice, has resulted in the removal of large swathes of expression, including art, counterspeech, and documentation of human rights violations. In other words, the company simply isn’t very good at moderating such a complex topic.

The second legal issue relates to the more complicated matter of sanctions. U.S. sanctions are issued by the Department of the Treasury’s Office of Foreign Assets Control (OFAC) and have for many years had an impact on tech (we’ve written about that previously in the context of country-level sanctions on, for instance, Syria).

Facebook has stated explicitly that it removes groups—and praise of those groups—which are subject to U.S. sanctions, and that it relies on sanctions policy to “proactively take down anything that we can that might be dangerous or is related to the Taliban in general.”

Specifically, the sanctions policy that Facebook relies upon stems from Executive Order 13224, issued by then-President George W. Bush in 2001. The Order has been summarized as follows:

“In general terms, the Order provides a means by which to disrupt the financial support network for terrorists and terrorist organizations by authorizing the U.S. government to designate and block the assets of foreign individuals and entities that commit, or pose a significant risk of committing, acts of terrorism. In addition, because of the pervasiveness and expansiveness of the financial foundations of foreign terrorists, the Order authorizes the U.S. government to block the assets of individuals and entities that provide support, services, or assistance to, or otherwise associate with, terrorists and terrorist organizations designated under the Order, as well as their subsidiaries, front organizations, agents, and associates.”

The Executive Order is linked to a corresponding list of “specially designated” nationals (SDNs)—groups and individuals—who are subject to the sanctions.

But whether this policy applies to social media platforms hosting speech remains an open question about which experts disagree. On the aforementioned Lawfare Podcast, Scott R. Anderson, a senior editor at Lawfare and a fellow at the Brookings Institution, explained that companies face a potential legal risk in providing in-kind support (that is, a platform for their speech) to SDNs. But while hosting actual SDNs may be a risky endeavor, Faiza Patel and Mary Pat Dwyer at the Brennan Center for Justice recently argued that, despite repeated claims by Facebook and Instagram, the platforms are not in fact required to remove praise or positive commentary about groups that are listed as SDNs or FTOs.

The third legal issue concerns civil liability. U.S. courts have also rejected civil claims brought by victims of terrorist acts and their families against social media platforms, where those claims were based on the fact that terrorists or terrorist organizations used the platforms to organize and/or spread their messages. Although strong constitutional arguments exist, these cases are typically decided on statutory grounds. In some cases, the claims are rejected because the social media platforms’ actions were not a direct enough cause of the harm, as required by the Anti-Terrorism Act, the law that creates the civil claims. In other cases, courts have found the claims barred by Section 230, the U.S. intermediary immunity law.

An especially tricky community standard

Facebook’s Dangerous Individuals and Organizations (DIO) community standard has proven to be one of its most problematic. The standard has been at issue in six of the 21 cases the Oversight Board has taken, and the Board has repeatedly criticized its vagueness. Facebook responded by clarifying the meaning of some of the terms, but left some ambiguity and also increased its own unguided discretion in some cases. In one matter, Facebook had removed a post that shared news content from Al Jazeera about a threat of violence from the Izz al-Din al-Qassam Brigades, the military wing of Hamas, because the DIO policy stated that sharing official communications of Facebook-designated dangerous organizations was a form of substantive support; in doing so, Facebook failed to apply its own exception for news reporting and neutral discussions. Facebook reversed the decision only after the Oversight Board selected the case, as it did in two other similar cases. In another case, Facebook apparently misplaced important policy guidance for implementing the DIO policy for three years.

The real-world harms of Facebook’s policy


While Facebook, and indeed many Western counter-terrorism professionals, seem to view hosting the speech of terrorist organizations as the primary harm, there are real and significant harms in enacting sweeping policies that remove such a broad range of expression related to groups that, for better or worse, play a role in governance. The way that Facebook implements its policies, using automation to remove whatever it deems to be terrorist or extremist content with little to no human oversight, has resulted in overly broad takedowns of all sorts of legitimate speech. Despite this, Mark Zuckerberg has repeatedly stated his belief that automation, not nuanced human review, is the way forward.

The combination of ever-increasing automation and Facebook’s vague and opaque rules (none of which cite any legal requirements) makes it impossible for users in affected countries to understand what they can and cannot say.

As such, a Lebanese citizen must carefully avoid coming across as supporting Hezbollah, one of many political parties in their country that have historically engaged in violence against civilians. An Afghan seeking essential services from their government may simply not be able to find them online. And the footage of violence committed by extremist groups diligently recorded by a Syrian citizen journalist may never see the light of day, as it will likely be blocked by an upload filter.

While companies are, as always, well within their rights to create rules that bar groups that they find undesirable—be they U.S.-designated terrorist organizations or domestic white supremacist groups—the lack of transparency behind these rules serves absolutely no one.

We understand that Facebook feels bound by perceived legal obligations. The Department of the Treasury can and should clarify those obligations, just as it did under the Obama administration. But Facebook also has a responsibility to be transparent with its users and let them know, in clear and unambiguous terms, exactly what they can and cannot discuss on its platforms.
