Tuesday, February 21, 2023

How Two US Supreme Court Cases Could Completely Change the Internet

Solcyre Burga
Sun, February 19, 2023 

The YouTube logo displayed on a laptop screen.

The future of the federal law that shields online platforms from liability for content uploaded to their sites is up in the air, as the Supreme Court is set to hear two cases this week that could change the internet.

The first case, Gonzalez v. Google, set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members, making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and the families of other victims are seeking damages under the Anti-Terrorism Act.

Oral arguments in Twitter v. Taamneh, a case that makes similar claims against Google, Twitter, and Facebook and centers on another ISIS terrorist attack, one that killed 39 people in Istanbul, Turkey, will be heard on Wednesday.

The cases will decide whether online platforms can be held liable for targeted advertisements or algorithmically recommended content spread on their platforms.

Tech companies argue that Section 230 protects them from these types of lawsuits because it grants them legal immunity from liability for third-party content posted on their platforms.

Here’s what to know about Section 230.

What is Section 230?


Section 230, enacted in 1996, is part of the Communications Decency Act.


The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.

The law allows tech companies to moderate or remove content they consider egregious. Section 230, however, does not protect sites that violate federal criminal law or intellectual property law. Nor does it protect platforms that create illegal or harmful content themselves.

Because popular sites like Facebook, Twitter and YouTube rely on user-generated content, many people have credited Section 230 with creating the internet we now know and love.

As the scale of online platforms has grown drastically over time, with some 368 million monthly active users on Twitter alone, experts argue that Section 230 protects companies that cannot keep up with the volume of content posted on their platforms from being sued over what users say or do.

What are these cases about?

The Gonzalez family first filed suit in 2016, alleging that because Google, which owns YouTube, matches and suggests content to users based on their viewing history, the platform recommended ISIS content to users and enabled them to find other videos and accounts run by ISIS.

Plaintiffs also argued that Google placed paid advertisements on ISIS videos, meaning it shared ad revenue with the terrorist organization, and that the company did not take sufficient action to keep ISIS off the platform. On that basis, the plaintiffs allege that these tech companies are directly liable for “committing acts of international terrorism” and secondarily liable for “conspiring with, and aiding and abetting, ISIS’s acts of international terrorism.”

A federal district court in California dismissed the complaint, holding that Google could not be held responsible for content produced by ISIS. The U.S. Court of Appeals for the 9th Circuit sided with the district court, but in October the Supreme Court agreed to hear the case.

In an opposition brief filed with the Supreme Court, Google maintained that review of the case was not warranted because websites like YouTube cannot be held liable as the “publisher or speaker” of content users create. It added that Google does not have the capacity to screen “all third-party content for illegal or tortious material” and expressed concern that “the threat of liability could prompt sweeping restrictions on online activity.”

Major tech companies like Twitter and Meta, which have expressed their support for Google in the case, say that algorithmic recommendations allow them to “organize, rank, and display” user content in a way that enhances a user’s experience on the platforms, and they call the ability to do so “indispensable.”

What is the future of Section 230?

If the court decides in Gonzalez’s favor, the ruling will set a precedent for holding tech companies liable for targeted ads or recommendations.

The effects this could have on the internet are not entirely known, though many warn that tech companies would face a host of lawsuits. Corporate giants like Yelp, Reddit, Microsoft, Craigslist, Twitter and Facebook say that searches for jobs and restaurants could be restricted if platforms can be sued over what users post, according to the Associated Press. Review sites could even be held liable for defamation if a particular restaurant received bad ratings.

Even dating sites like Tinder and Match have called Section 230 essential to the user experience on their apps, as they hope to continue providing match recommendations “without having to fear overwhelming litigation,” according to CBS.

How do legislators feel about Section 230?

Conservatives have long criticized Section 230, alleging that it allows social media platforms to censor right-leaning content.

This scrutiny has been directed at platforms like Twitter, which came under fire after it blocked the sharing of a New York Post story about Hunter Biden’s laptop. Twitter executives later called the action a mistake in a House committee hearing, but many conservatives have cited it as evidence of bias. Lawmakers also criticized social platforms’ bans of conspiracy theorist Alex Jones’ Infowars pages in 2018.

Former President Donald Trump called for repealing the law, even prompting the Justice Department to release proposed amendments to Section 230 in 2020.

“I’ll just cut to the chase, Big Tech is out to get conservatives,” said Rep. Jim Jordan in a House Judiciary Committee hearing in July 2020. “That’s not a hunch, that’s not a suspicion, that’s a fact.”

Democrats have similarly argued against Section 230, saying that it prevents platforms from being held liable for hate speech and misinformation spread on their sites.

In July 2021, Senators Amy Klobuchar and Ben Ray Lujan introduced a bill that would remove tech companies’ immunity from lawsuits if their algorithms promoted health misinformation.

The White House later called on Congress to revoke Section 230 during a September “listening session” on tech companies’ accountability. And in January, President Joe Biden published an op-ed in the Wall Street Journal calling for bipartisan legislation to hold tech companies accountable.

“The American tech industry is the most innovative in the world…But like many Americans, I’m concerned about how some in the industry collect, share and exploit our most personal data, deepen extremism and polarization in our country, tilt our economy’s playing field, violate the civil rights of women and minorities, and even put our children at risk,” Biden wrote.

Justices — From Alito to Jackson — Are ‘Confused’ by Internet Legal Shield Case



Emily Birnbaum
Tue, February 21, 2023 

(Bloomberg) -- US Supreme Court justices across the political spectrum — from Samuel Alito to Ketanji Brown Jackson — uttered the words “I’m confused” during the early part of arguments over upending the internet’s foundational law.

The justices are “not the nine greatest experts on the internet,” said Justice Elena Kagan on Tuesday during oral arguments in Gonzalez v. Google, a case that centers on whether YouTube should be held liable for the videos that its algorithms recommend to users.

Kagan’s remarks, which were met with laughter, came as she expressed skepticism about the arguments from Eric Schnapper, the lawyer representing the family of a US citizen killed by Islamic State in a 2015 Paris attack. The family contends that Alphabet Inc.’s Google should be held liable for software algorithms that recommended terrorist videos to YouTube users.

The justices early in the day seemed at times confused and frustrated by Schnapper’s arguments, saying they were unsure why YouTube in this case would not be protected by Section 230 of the Communications Decency Act, the 1996 statute that allows social media platforms to avoid most lawsuits over content posted on their platforms. The arguments offered a very early glimpse into where the justices might land, although the court’s tone can shift over the course of oral arguments.

The Gonzalez family has argued that YouTube should face lawsuits over claims that it aided and abetted terrorism by proactively recommending Islamic State videos.

“I’m afraid I’m completely confused by whatever argument you’re making at the present time,” said Alito.

“I guess I’m thoroughly confused,” Jackson said at another point.

Most of the justices at some point said they did not understand Schnapper’s arguments, contending that the conduct at issue — YouTube recommending videos — is likely protected by Section 230.

Even Justice Clarence Thomas, who has stated previously that he believes it’s time to revisit Section 230, questioned whether it made sense to hold YouTube accountable for its algorithms even though the company applied them “neutrally” to all kinds of videos, ranging from Islamic State videos to videos about cooking.

Schnapper argued that Section 230 does not protect YouTube’s promotion of thumbnails, the clickable preview images for videos that YouTube is recommending.

“Algorithms are endemic to the internet,” said Kagan. “Does your position send us down the road such that 230 really can’t mean anything at all?”

©2023 Bloomberg L.P.

Supreme Court eager to steer clear of sweeping changes to internet in Section 230 dispute

John Fritze, USA TODAY
Tue, February 21, 2023 

WASHINGTON – The Supreme Court seemed hesitant Tuesday to hand down a sweeping ruling that could change the way search engines and other websites recommend content to users, but the justices struggled with how to address whether Big Tech can ever be held liable when those recommendations cause harm.

At issue in the case, Gonzalez v. Google, is a controversial law known as Section 230, which has been widely interpreted as shielding websites from lawsuits for user-generated content. The question for the court is whether recommendations – such as a suggestion for the next video to watch on YouTube – are covered under that law.

The family of a 23-year-old American killed in a 2015 terrorist attack in Paris sued Google, which owns YouTube, for promoting videos dealing with the Islamic State group. But several of the justices from both ends of the ideological spectrum seemed concerned about potentially holding companies liable for all such recommendations.


"These are not, like, the nine greatest experts on the internet," Associate Justice Elena Kagan quipped about her colleagues to the family's attorney during nearly three hours of oral argument, underscoring a broader point that such questions might be better left to Congress. "There's a lot of uncertainty about going the way you would have us go, in part just because of the difficulty of drawing lines in this area."

Even Associate Justice Clarence Thomas, who has been one of the most outspoken critics of how broadly lower courts have interpreted Section 230, seemed concerned about the potential implications of holding Big Tech liable for recommendations. If the algorithms are neutral, he said – if they're not attempting to promote radical videos – then how could they be aiding and abetting terrorism?

"If you're interested in cooking, you don't want thumbnails on light jazz. It's neutral in that sense," Thomas said. "I don't understand how a neutral suggestion about something that you've expressed interest in is aiding and abetting."
 

The Supreme Court on Jan. 10, 2023.

If the case blurs the 6-3 conservative-liberal divide on the Supreme Court, there may be a good reason for it: The underlying issue itself has scrambled traditional partisan politics. Section 230 has drawn intense criticism from former President Donald Trump over accusations that social media companies throttled conservative views. But many Democrats agree, for different reasons, that the nearly 30-year-old law needs an update.

Associate Justice Neil Gorsuch repeatedly zeroed in on one element of the decision in favor of Google by the San Francisco-based U.S. Court of Appeals for the 9th Circuit: the idea that a "neutral" algorithm was likely to fall within the law's liability shield. Gorsuch argued that this test was unworkable and not grounded in the statute's text, and he suggested the case could be sent back to the appeals court for further review.

That's one potential off-ramp the Supreme Court could take to avoid the larger questions about Section 230. Another deals with a related case the court is scheduled to hear Wednesday: whether Big Tech can be held liable under the Anti-Terrorism Act for "aiding and abetting" terrorism by promoting the videos at all, putting aside Section 230.

If the court decides online platforms cannot be held liable under that law, it would potentially put off for another case the questions about whether and when Section 230 applies to recommendations.

A decision in the Google case is expected by June.

Contributing: Jessica Guynn



Google tells US Supreme Court it is not legally responsible for internet posts

James Titcomb
Tue, February 21, 2023 




People wait in line outside the US Supreme Court in Washington, DC on February 21, 2023 as the justices hear arguments in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online.

Google could be forced to take responsibility for videos that YouTube recommends to its users if a landmark legal challenge against the internet giant succeeds in America’s top court.

The company on Tuesday defended itself in a Supreme Court hearing, the culmination of a years-long legal campaign from the family of a victim of the 2015 Paris terrorist attacks who say that YouTube recommended extremist content to users.

Defeat for Google would threaten a legal shield that search engines and social networks have used for 27 years to avoid liability for illegal videos, messages and pictures on their services.

The Section 230 law broadly grants immunity to websites that host and manage content uploaded by users. However, the family of Nohemi Gonzalez, a 23-year-old who was one of 129 people killed in the co-ordinated terrorist attacks in November 2015, argue that this shield is effectively forfeited when sites such as YouTube use algorithms that prioritise some posts and videos over others.

The family argues that Google should be liable for the videos its algorithm recommends to users, and that the site broke anti-terrorism laws by promoting Islamic State recruitment videos.

Since almost all major websites use some form of algorithm to manage the mountains of user-posted material, a ruling against Google could force internet companies to more tightly police their sites for violent or explicit content.

“If they do this indelicately, and say that the moment you promote that content in any way you lose all immunity, then I think this is going to blow up the internet,” said Michael Smith, a professor of information technology at Carnegie Mellon University in Pittsburgh, Pennsylvania.

Google has repeatedly won in lower courts and sought to block the case from going to America’s highest court, but the Supreme Court agreed to hear the case last year.

On Wednesday a similar case is due to be heard against Twitter, brought by the family of Nawras Alassaf, who died in a 2017 terrorist attack in Istanbul.

The longstanding law known as Section 230 has been referred to as the “26 words that created the internet” and has been seen as a potential sticking point in negotiations over a post-Brexit US trade deal, conflicting with UK Government plans for tighter internet regulation.

The Republican-leaning Supreme Court, which last year made a series of controversial rulings on issues such as abortion and gun control, is expected to rule on the cases in the summer. Clarence Thomas, one member of the nine-strong court, has been a vocal critic of how internet companies have used the law to escape liability.


Associate Justice Clarence Thomas joins other members of the Supreme Court as they pose for a new group portrait at the Supreme Court building in Washington, Oct. 7, 2022.
AP Photo/J. Scott Applewhite

A group of 17 Republican congressmen have submitted briefs to the court calling for a stricter interpretation of the law, while tech companies have lined up to defend Google and warn that a decision against the company risks mass censorship of the internet.

Damian Collins, a Conservative MP and former chairman of the Digital, Culture, Media and Sport Committee who has campaigned for stricter internet laws, said: “Whatever happens in America has an impact on the platform policy for companies that affects the whole world.

“If a company is going to be legally responsible, that's hugely significant in terms of recommended content on social media platforms, but potentially has wider implications for the way in which companies develop their products and the care they take to ensure there aren't unintended consequences.”

Mr Collins said efforts by US trade negotiators to impose Section 230-like laws in a trade deal had been “strongly resisted” in the UK.

President Joe Biden has said he wants to see the law reformed to force internet companies to take more responsibility for violent material.

Supreme Court for first time casts doubt on Section 230, the legal shield for Big Tech

David G. Savage
Mon, February 20, 2023

The Supreme Court of the United States in Washington, DC, is seen in October. 
(Kent Nishimura / Los Angeles Times)

Internet giants such as Google, Facebook, YouTube and Twitter owe much of their success to a legal shield erected by Congress in 1996.

Known as Section 230, it has been called the rule that launched Big Tech. Though it drew little attention at the time, the law is now seen as a pillar of the wide-open global internet we know today.

While newspapers and TV stations can be held liable for any false and malicious content they publish or broadcast, internet platforms are treated differently under Section 230.

Congress passed the special free-speech rule to protect the new world of online communication. It said: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Law professor and author Jeff Kosseff called Section 230 “the 26 words that created the internet” because it allowed websites to develop freely as platforms for the words, photos and videos of others.

And it went unchallenged in the Supreme Court — until now.

This week, the justices will hear two cases that may finally pierce that legal shield and dramatically alter the rules of the game for the internet.

And they are expected to consider a third case later this year involving the 1st Amendment rights of internet companies amid state efforts to regulate them.

The case to be heard on Tuesday began with a California family's suit against Google and YouTube for allegedly aiding and abetting an act of international terrorism. Their daughter Nohemi Gonzalez was killed in Paris in November 2015 when Islamic State terrorists fired into a restaurant where the 23-year-old student was dining with two friends. It was part of an ISIS rampage in the city that killed 129 people.

Their lawsuit alleged that Google, which owns YouTube, had "knowingly permitted ISIS to post hundreds of radicalizing videos inciting violence and recruiting potential supporters to join the ISIS forces." Further, they alleged that YouTube "affirmatively recommended ISIS videos to users.”

At issue on Tuesday is only their second claim. Can YouTube be sued over the algorithms it created to direct users to similar content — in this case allegedly directing potential terrorists to other ISIS videos? Or does Section 230 protect it against such claims?

More than four dozen tech firms, internet scholars and free-speech advocates have filed friend-of-the-court briefs arguing that the internet companies should not be held liable for using computer programs that direct users to content they might find interesting.

"Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack," said Washington attorney Lisa S. Blatt, representing Google and YouTube. She warned that opening the door to lawsuits over algorithms "risks upending the modern internet."

A federal judge had dismissed the family's suit based on Section 230, and a divided 9th Circuit Court of Appeals affirmed that decision in 2021.

Until this term, the Supreme Court had refused to hear appeals involving the law. On several occasions, however, Justice Clarence Thomas called for "paring back the sweeping immunity courts have read into Section 230," particularly in cases where websites knew they were posting dangerous lies or criminal schemes.

Some prominent liberals, including Judges Marsha Berzon and Ronald Gould on the 9th Circuit Court, have also called for paring back the scope of Section 230.

They have been joined by advocates — both liberal and conservative — who portray the internet as a cesspool of disinformation and hate speech, a home for stalkers and fraudsters and a contributor to teen suicides and mass shootings. Critics also say social media companies get rich and keep viewers online by amplifying the most extreme claims and the angriest voices.

Google and other tech firms were surprised in October when the high court voted for the first time to hear a direct challenge to Section 230 and decide whether websites such as YouTube can be sued for their use of algorithms and targeted recommendations.

Their alarm grew in December when the Biden administration took the side of the plaintiffs in Gonzalez vs. Google and said YouTube could be sued for algorithms that "recommend" more videos to viewers.

Justice Department attorneys said the 9th Circuit Court made a mistake by throwing out the claim, and they argued for a new understanding of Section 230. They agreed websites are shielded from liability for displaying content provided by others, including ISIS videos, but said they were not shielded for "their own conduct" in recommending further videos for viewing.

"When YouTube presents a user with a video she did not ask to see, it implicitly tells the user that she will be interested in that content based on the video and account information and characteristics," they wrote in their filing.

Many experts in internet law said they were puzzled by the Supreme Court's decision to take up the case and troubled by what it might mean.

"The internet needs curation. We need to be able to find what we're looking for," said Eric Goldman, a law professor at Santa Clara University. If websites cannot sort content based on algorithms, he said, "it would not be a functional internet."

Blatt, Google’s attorney, said, "YouTube does not 'recommend' videos in the sense of endorsing them, any more than Google Search endorses search results. YouTube displays videos that may be most relevant to users."

On Wednesday, the court will hear a related case but one focused only on whether Facebook, Google and Twitter may be sued for allegedly aiding international terrorists.

Congress in 2016 expanded the Antiterrorism Act to authorize lawsuits by victims or their survivors against anyone who "knowingly provided substantial assistance" to a person who committed an act of international terrorism.

The U.S. family of a Jordanian citizen who was killed in an ISIS attack on the Reina nightclub in Istanbul in 2017 sued Facebook, Twitter and YouTube, accusing them of aiding and abetting the murders. They said ISIS openly maintained accounts on all three social media platforms and used them to recruit members.

The 9th Circuit cleared this claim to proceed, but the Justice Department and the social media firms said that was a mistake. They said the suit should be tossed out because the plaintiffs could not show that the internet platforms provided "substantial assistance" to the terrorist who carried out the mass shooting.

It's not entirely clear why the court agreed to hear the second case, Twitter vs. Taamneh, but the justices may have decided they faced two questions: Can a social media site be sued for aiding terrorists? And if so, can it be held liable for directing viewers to ISIS videos?

It's unclear whether the justices will split along the usual ideological lines when it comes to the Section 230 debate, which has liberals and conservatives on both sides.

Still pending before the court may be an even larger question: Can the states regulate the internet and penalize social media companies for what they post or remove from their sites?

That clash began on a sharply partisan note. Republican leaders in Texas and Florida adopted laws two years ago that authorized fines and damage claims against Facebook, Twitter and other large social media sites if they "censor" or discriminate against conservatives. Upon signing the measure, Florida Gov. Ron DeSantis said the law was intended as "protection against the Silicon Valley elites."

Before the laws could take effect, they were challenged on free speech grounds and put on hold based on the 1st Amendment, not Section 230.

The justices are almost certain to grant review of one or both laws because appellate court judges, both appointed by President Trump, were divided on a major constitutional question.

Judge Kevin Newsom of the 11th Circuit Court in Atlanta blocked most of the Florida law from taking effect. The 1st Amendment "constrains government actors and protects private actors," he said. Social media sites are private companies, and "put simply, with minor exceptions, the government can't tell a private person or entity what to say or how to say it."

Shortly afterward, Judge Andrew Oldham of the 5th Circuit Court in New Orleans upheld the Texas law because the state sought to protect the free speech rights of Texans. A former counsel to Texas Gov. Greg Abbott and law clerk to Justice Samuel A. Alito Jr., Oldham said it is a "rather odd inversion of the 1st Amendment" to say the social media platforms have a "right to muzzle speech. ... We reject the idea that corporations have a freewheeling 1st Amendment right to censor what people say."

Last month, the Supreme Court asked the Justice Department to weigh in on the issue, and that will put off the cases until the fall.

If, as expected, the U.S. solicitor general's office submits its view on the issue by June, the justices are likely to schedule one or both cases for a hearing in the fall.

This story originally appeared in the Los Angeles Times.
