This U.S. Supreme Court Case Could Decide The Future Of The Internet As We Know It
Paul Blumenthal
Sun, February 19, 2023
The Supreme Court will soon hear arguments in a case with major implications for the operation of the internet as we know it. In Gonzalez v. Google, set to be argued Feb. 21, the court will be asked to pass judgment on Section 230 of the Communications Decency Act for the first time.
Since its enactment in 1996, Section 230 has been interpreted by courts to shield online platforms from liability for almost any offense committed by users. This protection for user behavior enabled the growth of the current online ecosystem of search engines, social media sites, blogs, message boards, user-generated encyclopedias and shopping sites. And so it has variously been dubbed the internet’s “Magna Carta,” its “First Amendment” and the “Twenty-Six Words That Created The Internet.”
But Section 230 has also produced negative effects. Online platforms have been used for harassment, death threats, defamation, discrimination, revenge porn, fraudulent product sales, the illegal purchase of weapons and drugs resulting in death, and other illicit behavior. In most cases, platforms have been absolved of any responsibility thanks to Section 230’s protections. Gonzalez v. Google brings the negative effects that come with Section 230 before the court for the first time.
In 2015, Islamic State-linked militants murdered 23-year-old American student Nohemi Gonzalez amid a terrorist attack in Paris that left 129 people dead. Gonzalez’s family sued Google for aiding and abetting terrorism under the Anti-Terrorism Act. The family claimed that YouTube’s algorithmic recommendation engine suggested and promoted videos posted by the Islamic State that recruited followers and encouraged violence. At issue in the case is whether these algorithms are themselves covered by Section 230’s liability protection.
The U.S. Court of Appeals for the 9th Circuit ruled in 2021 that Google’s YouTube recommendation algorithm is protected by Section 230, but the decision featured notable dissenting opinions.
These dissents joined an increasing range of criticism of Section 230 from women’s rights advocates, antitrust reformers and conservatives. With the issue finally before the Supreme Court, these challengers ask the justices to consider whether the internet has changed so much in the nearly 30 years since the law’s enactment that it’s time to reconsider the statute that made it what it is today.
From The Wolf Of Wall Street To Oklahoma City
Section 230 emerged from very specific circumstances that arose during the early public adoption of the internet in the 1990s. It started with two court cases that created perverse incentives that could doom the nascent technology.
In 1991, the online service CompuServe was sued for defamation over user posts made on a message board it hosted. A court in New York found CompuServe not liable as a publisher because it did not edit, moderate or review any content posted on the bulletin boards it operated.
Four years later, Stratton Oakmont, the brokerage firm whose founder Jordan Belfort was immortalized in Martin Scorsese’s movie “The Wolf of Wall Street,” sued the internet service provider Prodigy over alleged defamatory posts made on one of its message boards. Unlike CompuServe, Prodigy engaged in content moderation in order to remove profanity and pornography. A New York state court found Prodigy liable for defamation posted by its users because it proactively removed objectionable content.
These two cases set off alarm bells for the burgeoning online industry. It looked like companies would be punished for good behavior and protected for giving free rein to bad behavior.
The 1990s message board provider CompuServe was involved in one of the early online defamation cases that gave rise to Section 230.
Then-Rep. Chris Cox, a California Republican, thought the same thing when reading about the cases in the paper. And so he teamed up with Rep. Ron Wyden, who is now a Democratic senator from Oregon, to do something about it. The law they wrote became Section 230 of the Communications Decency Act of 1996.
The law, carrying the “Good Samaritan” title, gave providers of interactive computer services liability protection when they acted to remove objectionable content. This meant that online sites would not be penalized in court as publishers of user content, as Prodigy was, for engaging in content moderation. It features two key passages.
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” the first passage states. These are the aforementioned 26 words that “created the internet.”
The other key passage states that online content providers are immune from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” or “any action taken to enable or make available to information content providers or others the technical means to restrict access to” such material. This provides the Good Samaritan protection for content moderation.
The new law was quickly tested in the courts. The first such case emerged in the wake of the Oklahoma City bombing, when an anonymous AOL user posted an offer to sell pro-Timothy McVeigh bumper stickers and attached the name and address of one Ken Zeran. Zeran, who was not behind the bumper stickers, received a torrent of death threats and abuse. He asked AOL to take down the messages, and the company did, but they kept reappearing. Zeran ultimately sued AOL for negligence for failing to keep the messages off the platform.
In 1998, the 4th Circuit U.S. Court of Appeals ruled that Section 230 protected AOL from the negligence claim, and any other such civil liability, whether as a publisher or as a distributor, because the law “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
Ever since the Zeran decision, federal and state courts have applied this near-blanket “federal immunity” to protect digital platforms from liability for user-generated content. This new exemption from secondary liability encouraged the nascent tech industry to move away from content generation and toward the creation of user-generated platforms. The companies that emerged from this change in liability law like Amazon, Google and Facebook are today among the highest-capitalized companies in the world.
Are Recommendations The Same As Publishing?
As the tech industry emerged from its position as an upstart into the home of the most valuable companies in the world, the perverse consequences of Zeran’s broad grant of Section 230 immunity came more and more into focus.
For social media platforms in particular, this meant owners prioritizing the maximization of the time users spent on their sites in order to increase ad revenue. Platforms built recommendation algorithms that prioritized keeping eyeballs engaged. They also built targeted advertising systems and algorithms for one class of user — advertisers — to connect with another.
In the race to monopolize user attention, social media companies built their platforms and the algorithms that unearthed engaging content in order to hook their users. One of the most-read books among Silicon Valley executives as this attention economy emerged was called “Hooked: How to Build Habit-Forming Products.” In this push for money and power, these companies wound up hosting and promoting to their users content from all kinds of sources, including terrorists, racist extremists, misogynists and many others who ultimately became linked to bombings, murders and mass shootings.
Section 230 had long been cited in lower courts as providing liability protection in almost every case of platform-hosted user-generated content. But would it apply when that user-generated content was recommended by an algorithm or other system created by the platform itself? A few key lower court cases preceded Gonzalez v. Google in addressing this question.
In a nearly identical case, Force v. Facebook, U.S. victims of terrorist attacks in Israel and their families sued the social media company alleging it aided and abetted terrorism under the Anti-Terrorism Act by hosting and promoting content from the group Hamas through its recommendation algorithms.
On appeal, the 2nd Circuit U.S. Court of Appeals ruled in favor of Facebook by holding that the use of neutral algorithms to suggest or recommend content was the action of a publisher, which Section 230 protects from liability.
“Arranging and distributing third-party information inherently forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content, whether in interactive internet forums or in more traditional media. That is an essential result of publishing,” the court’s majority opinion stated.
Many online platforms do not use recommendation systems as sophisticated as Facebook’s or YouTube’s, and they too have won protection from liability under Section 230.
In Dyroff v. Ultimate Software Group, the mother of Wesley Greer sued Ultimate Software Group after Greer purchased fentanyl-tainted heroin from a drug dealer through the company’s website The Experience Project. That site allowed users to post questions or comments and then suggested connections that would appeal to them. It also alerted them when other users responded. In Greer’s case, he asked where he could find heroin near him. The site then emailed him when another user responded with an offer to sell him drugs.
The 9th Circuit ultimately ruled that the site’s “recommendation and notification functions … did not materially contribute … to the alleged unlawfulness of the content” and were “neutral,” making them just another function of publishing.
‘Proactively Creating Networks’
While these rulings uphold the long-standing lower court consensus on Section 230, a number of notable dissents have emerged along with them.
In the Force v. Facebook case, 2nd Circuit Chief Judge Robert Katzmann, a Bill Clinton appointee who died in 2021, issued a partial concurrence and dissent arguing that Section 230 should not be read to cover the recommendation algorithms created by social media sites.
“Through its use of friend, group, and event suggestions, Facebook is doing more than just publishing content: it is proactively creating networks of people,” Katzmann wrote. “Its algorithms forge real-world (if digital) connections through friend and group suggestions, and they attempt to create similar connections in the physical world through event suggestions.”
By “proactively creating networks of people” through friend, group and interest suggestions, Facebook, Katzmann argued, produces a “cumulative effect” that is “greater than the sum of each suggestion.” These suggestions have the potential to immerse a user “in an entire universe filled with people, ideas, and events she may never have discovered on her own.”
People gather at a makeshift memorial near the Bataclan concert hall in Paris on Nov. 15, 2015, two days after a series of deadly attacks by Islamic State militants where American citizens including Nohemi Gonzalez were killed. Gonzalez's family is suing Google for aiding ISIS by distributing its videos over YouTube.
“It strains the English language to say that in targeting and recommending these writings to users — and thereby forging connections, developing new social networks — Facebook is acting as ‘the publisher of ... information provided by another information content provider,’” he continued.
Judges on the 9th Circuit echoed Katzmann’s arguments in a concurrence and a partial dissent when they heard Gonzalez.
In joining “the growing chorus of voices calling for a more limited reading of the scope of Section 230 immunity,” Judge Marsha Berzon, a Clinton appointee, wrote in a concurrence that she would find that “the term ‘publisher’ under section 230” does not cover “activities that promote or recommend content or connect content users to each other.”
“In my view, these types of targeted recommendations and affirmative promotion of connections and interactions among otherwise independent users are well outside the scope of traditional publication,” she added.
In a separate concurrence and partial dissent, Judge Ronald Gould, a Clinton appointee, agreed that Section 230 protects Google from liability for posts published by ISIS members on YouTube, but not for any activity that “goes beyond merely publishing the post” like “amplifying” dangerous content such as terrorism recruitment videos.
The plaintiffs in Gonzalez point to these dissents to argue that Section 230 protection should not be extended to cover these recommendation systems. This wouldn’t necessarily lead to Google being found in violation of the Anti-Terrorism Act, but it would enable a court challenge to proceed.
Friend-of-the-court briefs similarly argue that the Supreme Court could limit Section 230 immunity in a variety of ways. The court could find that certain acts of curation and recommendation are not acts of publishing. It could rule that Section 230 protects online companies as “publishers,” but not as “distributors” of third-party content. Or it could require companies to act as good Samaritans, as suggested by the law’s original title, and eliminate harmful conduct or protect users from it on their platforms when they are made aware of it.
At The Supreme Court
Since the Supreme Court has not heard a Section 230 case before, the justices have had little to say about it. The only exception is Justice Clarence Thomas, who in 2020 noted his dissatisfaction with lower courts’ interpretation of Section 230.
Noting that “most of today’s major Internet platforms did not exist” when Section 230 was enacted, Thomas said it “behooves” the court to take up a case challenging the law to judge whether lower courts have extended the protection from liability suits too far.
“Adopting the too-common practice of reading extra immunity into statutes where it does not belong, courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms,” Thomas wrote in dissent from a decision to turn down a case.
No other justice has stated an opinion on Section 230. There is also no way to divine their potential opinions based on which party’s president appointed them or their identification as conservative or liberal. Thomas, a George H.W. Bush appointee, is the most conservative justice on the court, while the three dissenting lower court judges were all liberals appointed by Clinton. Thomas’ call for the court to hear a Section 230 case also came amid rising skepticism toward Section 230 from members of both political parties.
Democrats in Congress introduced legislation to limit Section 230 liability protections for online advertisements and certain health information, and for online platforms that enable discrimination, stalking, harassment, genocide or wrongful death. Meanwhile, Republicans seek to amend Section 230 by having its liability protections kick in only when companies do not censor or otherwise moderate political opinions.
Gonzalez may very well be the beginning of a new legal landscape for the internet. The Supreme Court is currently weighing whether to take up arguments in two cases challenging laws passed by Republicans in Florida and Texas that would ban digital platforms from moderating content based on political ideology. In a hint that they might take the cases, the court asked the Biden administration to submit a brief in the Florida case on Jan. 23.
Changes to the internet’s “Magna Carta,” however well-meaning, may result in unwanted consequences. After a court found that Section 230 provided liability protection to the classified ads website Backpage.com for connecting users with underage sex workers, Congress passed a law denying Section 230 protection to platforms engaged in sex trafficking. This resulted in the shuttering of sites where sex workers consensually offered their services, and in Craigslist removing its entire personal ads section.
Just as Section 230 was enacted in response to the “perverse incentives” created by the Stratton Oakmont decision, and its passage created incentives of its own that protected the internet’s negative externalities, so too could any change dictated by the court.
Google’s $168 Billion in Ad Revenue at Risk in Supreme Court Case
Emily Birnbaum
Fri, February 17, 2023
(Bloomberg) -- The US Supreme Court is poised to hear a case that could spell danger for the internet’s most lucrative business: online advertising.
The case, Gonzalez v. Google, will be argued Tuesday and centers on whether internet companies are liable for the content their algorithms recommend to users. The tech industry says it’s protected by a legal shield contained in communications law known as Section 230.
Much of the discussion surrounding the case has focused on the costs to online companies if the court determines they are legally responsible for the hundreds of millions of comments, videos and other content posted by users every day. However, such a decision could also strike at the heart of the automated advertising upon which Meta Platforms Inc.’s Facebook and Alphabet Inc.’s Google rely for the bulk of their revenue.
In fact, the social media companies view the case as an existential threat.
“This case could adversely impact the entire advertising ecosystem,” said Marc Beckman, Chief Executive Officer of DMA United, an advertising firm that regularly uses Google and Facebook’s tools to serve targeted ads to potential customers around the world.
Google is being sued by the family of Nohemi Gonzalez, a 23-year-old US citizen who was among at least 130 people killed in coordinated attacks by the Islamic State in Paris in November 2015. The family argues that Google’s YouTube should be held responsible for automated recommendations of Islamic State videos.
Websites and ad networks automatically target ads based on information they have collected about users, including their location, browsing history, topics they follow closely and more. The ads are posted to websites by online tools without human intervention.
Google declined to comment about the case. But in its Supreme Court brief, it said it is concerned about the case’s impact on the economy, including advertisers. Meta believes that Section 230 shields the company from liability for all content from third parties, including ads, and the social media giant is worried that the court could weaken those protections, a Meta spokesperson confirmed.
A broad ruling by the Supreme Court could effectively snuff out the business of serving personalized ads on the internet and turn online ad practices back to the early 90s, experts say. It could also force the platforms to litigate a wave of lawsuits over the millions of advertisements they target at users, resulting in exponential legal costs for smaller ad networks and exchanges.
“If we’re not targeting ads, we’re going back to the old ’90s model of ‘see who bites,’” said Jess Miers, legal advocacy counsel with tech-funded group Chamber of Progress. Miers previously worked for Google.
Together, Google and Facebook capture almost 50% of all digital advertising revenues worldwide. The companies, which have been referred to as the “duopoly” of online advertising, collect reams of data about their users in order to serve them relevant ads – a business that mints both companies billions of dollars per year. Globally, Google made $168 billion in ad revenue in 2022 while Meta made $112 billion, according to data analytics company Insider Intelligence. This year, Google’s US revenue alone is projected to reach $73.8 billion, while Meta’s is expected to reach $51 billion. A ruling by the high court would only apply to the US, but it would be technically difficult for the companies to handle advertising differently in their largest market than in other countries around the world.
The companies are already facing legal challenges over the ads they serve, particularly those that relate to sensitive issues like healthcare, politics and employment opportunities. With few exceptions, Facebook and Google win dismissal of most cases that would hold them responsible, thanks to Section 230.
That could change quickly if the Supreme Court decides to narrow Section 230. While the shield protects companies from lawsuits over content generated by ordinary people, Cathy Gellis, a California lawyer who has represented tech companies in online speech cases, said ads could be categorized as “user-generated content” if the Supreme Court’s ruling is wide-ranging.
The digital advertising industry is already coming under fire as governments around the world crack down, arguing that companies collect too much information about people without their consent and violate their privacy. Privacy regulations in countries including the European Union limiting the amount of data companies are allowed to collect on users have already put a huge strain on the digital ads ecosystem, said Beckman.
“We are already, as an agency, implementing new marketing initiatives to not just combat what we think will happen if 230 is limited, but also in the face of these new third party data privacy restrictions,” Beckman said. He said the era of “beautiful” and distinctive advertising may be on its way back as advertisers can no longer rely on the hyper-personalized and cheap ad networks they’ve become accustomed to. While targeted advertising allowed firms to reach their intended audiences with little effort, a pivot away from algorithmic recommendations could require advertisers to work harder to grab attention.
Miers said it’s likely that Google and Facebook will face the brunt of lawsuits if the court weakens Section 230. But smaller ad agencies and ad networks will face “trickle-down” effects.
Online advertising is so key to Meta and Google’s business models, it’s likely they would try to fight it out in court, said Gellis, the California lawyer. They would try to handle the legal costs and see if they could win cases on the merits. “Everybody’s going to try to muddle through as best they can,” Gellis said.
To some critics of the tech companies, a wind-down of targeted advertising on the internet could benefit some of the internet’s most vulnerable users. Children’s advocacy group Common Sense Media and Facebook whistleblower Frances Haugen argued in a Supreme Court brief that Google’s video and ad recommendations can create a “feedback loop” that steers children and teenagers down rabbit holes that can revolve around eating disorders, self-harm and extremism. In their view, Google and Facebook should better control the ads they serve to young audiences.
The case could be a “shock to a lot of businesses,” said Eric Goldman, a law professor at Santa Clara University School of Law.
“So much of advertising is now being delivered in a dynamic way,” Goldman said. “If that dynamic assessment is an algorithmic recommendation that disqualifies the ad network for 230 protections, then the ad industry has to do something different.”