The Supreme Court Probably Won’t Break the Internet—At Least for Now

This week, the nine Justices of the Supreme Court heard oral arguments in two cases that could change the Internet. “These are not the nine greatest experts on the Internet,” Justice Elena Kagan joked, on Tuesday, and was met with laughter in the courtroom. Nonetheless, in five hours of arguments over two days, the Justices interrogated the inner workings of online platforms. The first case, Gonzalez v. Google, focusses on the recommendation algorithms that steer users to specific pieces of content. Is suggesting an article, image, or video the same as supporting it? The second, Twitter v. Taamneh, considers whether platforms are responsible for the content that their users share online. If the answer to either question is yes, then many tech companies would need a fundamental overhaul.

In 1996, Congress passed a sprawling telecommunications law, which included a provision, Section 230 of the Communications Decency Act, that had the effect of shielding early Internet-service providers—companies such as CompuServe and Prodigy—from legal liability for users’ actions. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” the section reads, in part. Whereas traditional publishers take legal responsibility for what they publish—they can be sued for libel if they knowingly print falsehoods—Section 230 framed the tech companies differently. It protected them from civil claims like libel lawsuits, while also granting them the freedom to moderate objectionable content. (The section’s full name is “Protection for private blocking and screening of offensive material.”) To this day, platforms, including Facebook, Twitter, YouTube, and TikTok, are not usually held responsible for content that they did not directly create. In theory, Gonzalez v. Google and Twitter v. Taamneh could change that.

If the Supreme Court were to treat digital platforms like publishers, a company like Facebook might have to mount a legal defense for every user’s post that comes into conflict with any law. In advance of this week’s hearings, some observers feared that the conservative-leaning Court might unleash this reality by wholly removing Section 230 protections, as Daphne Keller, a Stanford Law School professor, told The New Yorker, in January. But the tenor of the Court’s questions did not seem to foreshadow drastic legal changes, and instead reflected a broader public reckoning with the digital technologies that increasingly shape our lives. The Court “seemed to be grasping for a more sophisticated vocabulary for how platforms and their recommendation algorithms work,” Tarleton Gillespie, a professor of digital media at Cornell University, and a principal researcher at Microsoft Research, told me.

These two cases attempt to hold platforms accountable in different ways. In November, 2015, an American student named Nohemi Gonzalez was killed in a terrorist attack in Paris, which was claimed by the Islamic State. Gonzalez’s family sued YouTube and its parent company, Google, for recommending ISIS-recruitment videos to users, arguing that, in doing so, the platform aided and abetted terrorism. According to the plaintiffs’ lawyer, Eric Schnapper, Section 230’s protections should not apply to the way that YouTube directs users to specific videos, because recommending a piece of content is tantamount to publishing it. In response, Lisa S. Blatt, a lawyer for Google in Gonzalez v. Google, argued that Section 230 is meant to “shield Web sites for publishing other people’s speech, even if they intentionally publish other people’s harmful speech.”

Today, recommendation algorithms are everywhere. “Every time anybody looks at anything on the Internet, there is an algorithm involved,” Justice Kagan said, on Tuesday. But Section 230 was written at a time when Web sites were relatively new, and algorithm-driven digital platforms like YouTube did not yet exist. “This was a pre-algorithm statute,” Kagan said. “Everybody is trying their best to figure out how this statute applies.” Justice Clarence Thomas portrayed recommendations as neutral: they tend to suggest content that the user is already interested in, whether that’s light jazz, ISIS videos, or rice-pilaf recipes, he told the Court, in one of the hearing’s most surreal moments. “I don’t understand how a neutral suggestion about something you’ve expressed an interest in is aiding and abetting,” Thomas said.
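To make the idea of a “neutral” recommendation concrete, here is a deliberately simplified sketch in Python, not any platform’s actual system, of the kind of logic Thomas was describing: a toy recommender that ranks items purely by how much they overlap with what a user has already watched, indifferent to whether that history is light jazz, rice-pilaf recipes, or something far darker. The item names and tags are invented for illustration.

```python
# Illustrative sketch only: a content-agnostic recommender that scores
# candidate items by their overlap with a user's viewing history.
from collections import Counter

def recommend(history, candidates, top_n=3):
    """Rank candidates by how many of their tags the user has already watched."""
    # Count how often each tag appears in the user's history.
    profile = Counter(tag for item in history for tag in item)
    scored = []
    for title, tags in candidates:
        # The rule is the same no matter what the tags describe.
        score = sum(profile[t] for t in tags)
        scored.append((score, title))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

# Hypothetical example: a jazz listener gets more jazz, because the
# scoring never asks what the content actually is.
history = [{"jazz", "piano"}, {"jazz", "saxophone"}]
candidates = [
    ("Smooth Jazz Mix", {"jazz", "relaxing"}),
    ("Rice Pilaf Recipe", {"cooking", "rice"}),
    ("Piano Improvisation Tutorial", {"piano", "jazz"}),
]
print(recommend(history, candidates))
```

The same scoring rule that surfaces a piano tutorial for a jazz fan would surface extremist clips for someone who had watched them before, which is precisely the quality the plaintiffs argue is not neutral at all.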

Justice Sonia Sotomayor continued along a similar line of reasoning: just because an algorithmic recommendation suggested a piece of content, she said, doesn’t mean that the platform supports the content. (Her comment brought to mind the old Twitter disclaimer that “Retweets do not equal endorsements.”) “How do you get yourself from a neutral algorithm to an aiding and abetting?” Sotomayor asked Schnapper. “There has to be some intent to aid and abet.”

The Twitter v. Taamneh hearing, on Wednesday, addressed another terrorist attack linked to the Islamic State: one in Istanbul, in 2017. The family of one of the victims, a Jordanian man named Nawras Alassaf, sued Twitter, Google, and Facebook under a recently amended provision of the Anti-Terrorism Act, alleging that they both hosted and recommended ISIS content to users, and thus provided assistance to ISIS by helping to inspire the attack. “The assistance doesn’t have to be connected to a specific act,” Schnapper, who again represented the plaintiffs, argued. This time, Justice Thomas pushed back by contemplating a precedent that could be set: “It would seem that every terrorist attack that uses this platform would also mean that Twitter is an aider and abettor in those instances.”

Still, it was difficult to listen to the hearings without getting some sense that big tech companies should be more accountable for their content. During the Gonzalez hearing, Kagan wondered why digital platforms are held to a unique standard. “Every other industry has to internalize the costs of its conduct,” she said. “Why is it that the tech industry gets a pass? A little bit unclear.” And, during the Taamneh hearing, Kagan considered whether Twitter might be similar to a bank that knowingly offers its services to terrorists—which would be unambiguously illegal. If so, she said, then hosting and recommending content might indeed amount to aiding and abetting.

There’s unusual bipartisan support for increased regulation of tech companies right now, but that doesn’t mean that the Court will challenge Section 230 in these cases. “It seems like there is not a great appetite for the Court to use these facts, in either Gonzalez or Taamneh, as the vehicle for making great changes to the core legal structure of Internet platforms,” Nathaniel Persily, a professor at Stanford Law School, told me. Gillespie, of Cornell and Microsoft Research, suggested that recommendations alone might be the wrong target of legal action, because the filtering of content is an inherent part of any platform. “Any design element of an information archive will deliver some content more than others,” he told me.

“Both of these cases are very difficult for the plaintiffs to win,” Persily continued. “The question is: How do they lose?” Assuming that the platforms prevail, Persily said, the Court’s reasoning could still limit their Section 230 protections in smaller ways, or in specific instances.

There are other ways that the U.S. government could constrain online platforms. The Court may soon take up two more cases—NetChoice v. Paxton and NetChoice v. Moody—that could lead to a reinterpretation of Section 230, or could otherwise impact Big Tech, Persily said. In a way, these cases are the opposite of the ones against Google and Twitter. NetChoice is a tech-industry lobbying firm that has challenged recent Texas and Florida state laws prohibiting social networks from “censoring” accounts based on their political viewpoints. (The Florida law also prohibits “shadow-banning,” which is a pejorative term for when algorithmic recommendations de-prioritize certain posts.) In these cases, NetChoice is drawing on another part of Section 230: the right of platforms to block users or screen content. The industry now faces legal pressure from both sides: some want to hold platforms liable for publishing content, even as others want to force them to keep content up. “The theories of liability are in tension with one another,” Persily told me. “You kind of can’t have it both ways.”

The Court could also decide that regulating new technologies is up to lawmakers, whether in state legislatures or in Congress. In 2018, Congress passed FOSTA and SESTA, legislation that amended Section 230 in hopes of preventing sex trafficking. (In response, Craigslist took down its personals section.) Persily’s work informed one recent proposal to regulate social media, the Platform Transparency and Accountability Act, which was introduced in the Senate in 2021. Others include the Social Media NUDGE Act, introduced in 2022, and a 2023 bill that aims, likely unrealistically, to keep children under sixteen off of social media entirely. But, before lawmakers come together on such a proposal, they may need to agree on what matters most to them: maintaining the radically open space of the Internet or forcing tech companies to be much more cautious about the content on their platforms. ♦