Thursday, May 28, 2020

A Quick Take on Trump's May 26 Executive Order on Platforms and CDA 230

The Stanford Cyber Policy Center’s Platform Regulation Director, Daphne Keller, says this is a copy of the draft, which she has annotated as well:

To aid in this endeavor, here is my color coded and annotated copy of the Executive Order on CDA 230 and platforms. https://t.co/H3zN22X4me https://t.co/1CosSHTpqd
— Daphne Keller




Color coding:
Red: Atmospherics. Politically and philosophically interesting and important in the long term, as I discuss here, but not legally effective as part of an executive order. Reporting on these parts is about on par with reporting on tweets. 
Orange: Legally dubious; would require agencies to disregard judicial interpretations of federal legislation.
Yellow: Reasonable minds can differ.


Late night caveat: I’m posting this around midnight my time. If (1) the order is revised before official publication or (2) I missed something serious, I’ll update this later to reflect that. 

EXECUTIVE ORDER


----------


PREVENTING ONLINE CENSORSHIP


By the authority vested in me as President by the Constitution and the laws of the United States of America, including the Federal Property and Administrative Services Act of 1949, as amended (40 U.S.C. 101 and 121(a)), it is hereby ordered as follows:
Section 1. Policy. 
Free speech is the bedrock of American democracy.  Our Founding Fathers protected this sacred right with the First Amendment to the Constitution, underscoring that the freedom to express and debate ideas is the foundation for all of our rights as a free people.
The emergence and growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology.  Today, many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms.  As a result, these platforms function in many ways as a 21st-century equivalent of the public square.
As President, I have made clear my commitment to free and open debate on the Internet. Such debate is just as important online as it is in our universities, our businesses, our newspapers, and our homes.  It is essential to sustaining our democracy.
In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand-pick the speech that Americans may access and convey online.  This practice is fundamentally un-American and anti-democratic.  When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.    
Online platforms, however, are engaging in selective censorship that is hurting our national discourse.  Tens of thousands of Americans have reported, among other troubling behaviors, online platforms “flagging” content as inappropriate, even though it does not violate any stated terms of service; making unannounced and unexplained changes to policies that have the effect of disfavoring certain viewpoints; and deleting content and entire accounts with no warning, no rationale, and no recourse.
At the same time social media platforms are invoking inconsistent, irrational, and groundless justifications to censor or otherwise punish Americans’ speech here at home, several online platforms are profiting from and promoting the aggression and disinformation spread by foreign governments like China.  Google, for example, created a search engine for the Chinese Communist Party, which blacklisted searches for “human rights,” hid data unfavorable to the Chinese Communist Party, and tracked users determined appropriate for surveillance.  Google has also established research partnerships in China that provide direct benefits to the Chinese military.  For their part, Facebook and Twitter have accepted advertisements paid for by the Chinese Government that spread false information about China’s mass imprisonment of religious minorities.  Twitter has also amplified China’s propaganda abroad, including by allowing Chinese government officials to use its platform to undermine pro-democracy protests in Hong Kong. 
My commitment to free and open debate on the Internet remains as strong as ever.  Therefore, it remains the policy of the United States that lawful content should be free from censorship in our digital marketplace of ideas.  As a Nation, we must foster and protect diverse viewpoints in today’s digital communications environment where all Americans can and should have a voice.  We must seek transparency and accountability from online platforms, and encourage standards and tools to protect and preserve the integrity and openness of American discourse and freedom of expression.
Sec. 2. Protections Against Arbitrary Restrictions.  (a) It is the policy of the United States to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet.  Prominent among those rules is the immunity from liability created by section 230(c) of the Communications Decency Act (section 230).  47 U.S.C. 230.  It is the policy of the United States that the scope of that immunity should be clarified.
Section 230(c) was designed to address court decisions from the early days of the Internet holding that an online platform that engaged in any editing or restriction of content posted by others thereby became itself a “publisher” of the content and could be liable for torts like defamation.  As the title of section 230(c) makes clear, the provision is intended to provide liability “protection” to a provider of an interactive computer service (such as an online platform like Twitter) that engages in “‘Good Samaritan’ blocking” of content when the provider deems the content (in the terms of subsection 230(c)(2)(A)) obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.  Subsection 230(c)(1) broadly states that no provider of an interactive computer service shall be treated as a publisher or speaker of content provided by another person.  But subsection 230(c)(2) qualifies that principle when the provider edits the content provided by others.  Subparagraph (c)(2) specifically addresses protections from “civil liability” and clarifies that a provider is protected from liability when it acts in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”  The provision does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.   When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a “publisher or speaker” under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others.  It is the policy of the United States that all departments and agencies should apply section 230(c) according to the interpretation set out in this section.    
(b) To further advance the policy described in subsection (a) of this section, within 30 days of the date of this order, the Secretary of Commerce (Secretary), through the National Telecommunications and Information Administration (NTIA), shall file a petition for rulemaking with the Federal Communications Commission (FCC) requesting that the FCC expeditiously propose regulations to clarify:
 (i) the conditions under which an action restricting access to or availability of material is not “taken in good faith” within the meaning of subparagraph (c)(2)(A) of section 230, particularly the conditions under which such actions will be considered to be:
(1) deceptive, pretextual, or inconsistent with a provider’s terms of service; or
(2) the result of inadequate notice, the product of unreasoned explanation, or having been undertaken without a meaningful opportunity to be heard; and
(ii) any other proposed regulations that the NTIA concludes may be appropriate to advance the policy described in subsection (a) of this section.
Sec. 3. Prohibition on Spending Federal Taxpayer Dollars on Advertising with Online Platforms That Violate Free Speech Principles.  (a) The head of each executive department and agency (agency) shall review its agency’s Federal spending on advertising and marketing paid to online platforms.  Such review shall include the amount of money spent, the online platforms supported, the viewpoint-based speech restrictions imposed by each online platform, an assessment of whether the online platform is appropriate for such agency’s speech, and the statutory authorities available to restrict advertising dollars to online platforms not appropriate for such agency’s speech. 
(b) Within 30 days of the date of this order, the head of each agency shall report its findings to the Director of the Office of Management and Budget.
Sec. 4. Federal Review of Unfair or Deceptive Practices.  (a)  It is the policy of the United States that large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech.  The Supreme Court has observed that social media sites, as the modern public square, “can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.” Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).  Communication through these channels has become important for meaningful participation in American democracy, including to petition elected leaders.  These sites are providing a public forum to the public for others to engage in free expression and debate.  Cf. PruneYard Shopping Center v. Robins, 447 U.S. 74, 85-89 (1980).
(b)  In May of 2019, the White House Office of Digital Strategy created a Tech Bias Reporting tool to allow Americans to report incidents of online censorship.  In just weeks, the White House received over 16,000 complaints of online platforms censoring or otherwise taking action against users based on their political viewpoints.  The White House Office of Digital Strategy shall reestablish the White House Tech Bias Reporting Tool to collect complaints of online censorship and other potentially unfair or deceptive acts or practices by online platforms and shall submit complaints received to the Department of Justice and the Federal Trade Commission (FTC). 
(c) The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to 15 U.S.C. 45.  Such unfair or deceptive acts or practices shall include practices by entities regulated by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.
(d)  For large internet platforms that are vast arenas for public debate, including the social media platform Twitter, the FTC shall also consider whether complaints allege violations of law that implicate the policies set forth in section 4(a) of this order.  The FTC shall develop a report describing such complaints and make the report publicly available, consistent with applicable law.    
Sec. 5. State Review of Unfair or Deceptive Practices. (a) The Attorney General shall establish a working group regarding the potential enforcement of State statutes that prohibit online platforms from engaging in unfair and deceptive acts and practices.  The working group shall invite State Attorneys General for discussion and consultation, as appropriate and consistent with applicable law.
(b) The White House Office of Digital Strategy shall submit all complaints described in Section 4(b) of this order to the working group, consistent with applicable law. The working group shall also collect publicly available information regarding the following:
(i) monitoring or creating watch-lists of users based on their interactions with content or users (e.g., likes, follows, time spent); and
(ii) monitoring users based on their activity off the platform.
Sec. 6. Definition.  For purposes of this order, the term “online platform” means any website or application that allows users to create and share content or engage in social networking, or any general search engine.
Sec. 7. General Provisions.  (a) Nothing in this order shall be construed to impair or otherwise affect:
(i) the authority granted by law to an executive department or agency, or the head thereof;
(ii) the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals; or
(iii) existing rights or obligations under international agreements.
(b) This order shall be implemented consistent with applicable law and subject to the availability of appropriations.
(c) This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.
FOOTNOTES

1. This is an important issue. But there are First Amendment arguments on all sides. For example, Trump appointee Justice Kavanaugh, as a lower court judge, proclaimed himself solidly on the side of private Internet companies’ First Amendment rights to establish and enforce editorial policy.

2. The truth about disparate impact from platforms’ content takedown operations is hotly debated. People across the political spectrum believe they are victims of biased takedowns. Without better transparency, we can’t know whose anecdotal accounts hold water, or what factual claims should be the basis for policy-making.

3. If this really means that *all* First Amendment-permitted speech must be given equal footing on major platforms, this is a radical proposition. It would change the Internet as we know it, and undermine calls to remove widely reviled material like the Christchurch massacre video.

4. This is a major leap. Section 230 has a “good faith” limitation on (c)(2) immunity (the immunity for “wrongful takedown” claims) but not (c)(1) immunity (the immunity for failing to take down unlawful content). The (c)(2) immunity is relatively inconsequential, since platforms keep winning those cases on other grounds -- it’s not clear they even need 230(c)(2) for this. The (c)(1) immunity, however, is hugely consequential, and makes intermediary business models feasible. Several academics and politicians have advanced the “good faith is a (c)(1) requirement” argument in recent years, but to my knowledge no court has accepted it. As a matter of statutory interpretation, I find it hard to justify.

5. To the extent this is all just about the (c)(2) immunity, it’s not too wild to say these could be aspects of the “good faith” inquiry. Why NTIA and the FCC (rather than courts) should be involved in this question is another matter. And smart communications law experts on Twitter are suggesting the FCC simply lacks authority here. (I suspect they are right, especially Harold Feld, but I’m not expert enough to call this one.)

6. (1) Does this effectively lead only to a report, or to actual changes in the advertising spending of federal agencies? Despite the section title, there is no mandate here, but agencies might change their practices anyway. (2) If the latter, is there a constitutional problem with conditioning federal spending on the ad venue’s editorial policy? There must be precedent on (2).

7. Packingham is about the government restricting access to social media -- which can violate the First Amendment. It is not about private companies restricting such access.

8. PruneYard is not about the First Amendment or federal policy. It is about whether the Supreme Court will prevent claims under the California Constitution.

9. This could be considered posturing, since it only mandates that the FTC “shall consider” such interpretation of the law. I have a lot of questions about it. But I’ve heard honest participants in the debate suggest this might be permissible, so I’ll leave it as yellow for now.

10. Can the executive branch order the FTC to make a report? I assume yes. (Update: one expert told me he thinks the answer is no.) Can it require that the report reflect the statutorily and constitutionally suspect positioning in 4(a)? Maybe, but it is hard to see that as anything but political theater.

11. Convening a working group: presumably within the federal AG’s authority. Doing so to circumvent Congressionally-created limits on federal authority: pretty sketchy.

12. This is a really broad definition. To the extent any of this reaches smaller platforms (your cousin’s knitting blog, a local political organizer’s site, the NYT’s comments section) or infrastructure providers (Amazon Web Services, DNS providers, Cloudflare), there are a lot of big questions to ask, including questions about competition and net neutrality.





