Two Supreme Court Cases That Could Break the Internet

In February, the Supreme Court will hear two cases—Twitter v. Taamneh and Gonzalez v. Google—that could alter how the Internet is regulated, with potentially vast consequences. Both cases concern Section 230 of the 1996 Communications Decency Act, which grants legal immunity to Internet platforms for content posted by users. The plaintiffs in each case argue that platforms have violated federal antiterrorism statutes by allowing content to remain online. (There is a carve-out in Section 230 for content that breaks federal law.) Meanwhile, the Justices are deciding whether to hear two more cases—concerning laws in Texas and in Florida—about whether Internet providers can censor political content that they deem offensive or dangerous. The laws emerged from claims that providers were suppressing conservative voices.

To talk about how these cases could change the Internet, I recently spoke by phone with Daphne Keller, who teaches at Stanford Law School and directs the program on platform regulation at Stanford’s Cyber Policy Center. (Until 2015, she worked as an associate general counsel at Google.) During our conversation, which has been edited for length and clarity, we discussed what Section 230 actually does, different approaches the Court may take in interpreting the law, and why every form of regulation by platforms comes with unintended consequences.

How much should people be prepared for the Supreme Court to substantively change the way the Internet functions?

We should be prepared for the Court to change a lot about how the Internet functions, but I think they could go in so many different directions that it’s very hard to predict the nature of the change, or what anybody should do in anticipation of it.

Until now, Internet platforms have been able to let users share speech pretty freely, for better or for worse, and they have had immunity from liability for a lot of things that their users say. This is thanks to the law colloquially known as Section 230, which is probably the most misunderstood, misreported, and hated law on the Internet. It provides immunity from some kinds of claims for platform liability based on user speech.

These two cases, Taamneh and Gonzalez, could both change that immunity in a number of ways. If you just look at Gonzalez, which is the case that’s squarely about Section 230, the plaintiff is asking for the Court to say that there’s no immunity once a platform has made recommendations and done personalized targeting of content. If the Court felt constrained only to answer the question that was asked, we could be looking at a world where suddenly platforms do face liability for everything that’s in a ranked news feed, for example, on Facebook or Twitter, or for everything that’s recommended on YouTube, which is what the Gonzalez case is about.

If they lost the immunity that they have for those features, we would find that the most used parts of Internet platforms, the places where people actually go and see other users’ speech, are suddenly very locked down, or very constrained to only the very safest content. Maybe we would not get things like a MeToo movement. Maybe we would not get police-shooting videos being really visible and spreading like wildfire, because people are sharing them and they’re appearing in ranked news feeds and as recommendations. We could see a very big change in the kinds of online speech that are available on what is basically the front page of the Internet.

The upside is that there is really terrible, awful, dangerous speech at issue in these cases. The cases are about plaintiffs who had family members killed in ISIS attacks. They are seeking to get that kind of content to disappear from these feeds and recommendations. But a whole lot of other content would also disappear in ways that affect speech rights and would have different impacts on marginalized groups.

So, the plaintiffs’ arguments come down to this idea that Internet platforms or social-media companies are not just passively letting people post things. They are packaging them and using algorithms and putting them forward in specific ways. And so they can’t just wash their hands and say they have no responsibility here. Is that accurate?

Yeah, I mean, their argument has changed dramatically even from one brief to the next. It’s a little bit hard to pin it down, but it’s something close to what you just said. Both sets of plaintiffs lost family members in ISIS attacks. Gonzalez went up to the Supreme Court as a question about immunity under Section 230. And the other one, Taamneh, went up to the Supreme Court as a question along the lines of, If there were no immunity, would the platforms be liable under the underlying law, which is the Anti-Terrorism Act?

It sounds like you really have some concerns about these companies being liable for anything posted on their sites.

Absolutely. And also about them having liability for anything that is a ranked and amplified or algorithmically shaped part of the platform, because that’s basically everything.

The consequences seem potentially harmful, but, as a theoretical idea, it doesn’t seem crazy to me that these companies should be responsible for what is on their platforms. Do you feel that way, or do you feel that actually it’s too simplistic to say these companies are responsible?

I think it is reasonable to put legal responsibility on companies if it’s something they can do a good job of responding to. If we think that legal responsibility can cause them to accurately identify illegal content and take it down, that’s the moment when putting that responsibility on them makes sense. And there are some situations under U.S. law where we do put that responsibility on platforms, and I think rightly so. For example, for child-sexual-abuse materials, there’s no immunity under federal law or under Section 230 from federal criminal claims. The idea is that this content is so incredibly harmful that we want to put responsibility on platforms. And it’s extremely identifiable. We’re not worried that they are going to accidentally take down a whole bunch of other important speech. Similarly, we as a country choose to prioritize copyright as a harm that the law responds to, but the law puts a bunch of processes in place to try to keep platforms from just willy-nilly taking down anything that is risky, or where someone makes an accusation.

So, there are situations where we put the liability on platforms, but there’s no good reason to think that they would do a good job of identifying and removing terrorist content in a situation where the immunity just goes away. I think we would have every reason to expect in that situation that a bunch of lawful speech about things like U.S. military intervention in the Middle East, or Syrian immigration policy, would disappear, because platforms would worry that it might create liability. And the speech that disappears would disproportionately come from people who are speaking Arabic or talking about Islam. There’s this very foreseeable set of problems from putting this particular set of legal responsibilities onto platforms, given the capacities that they have right now. Maybe there’s some future world with better technology, or better involvement of courts in deciding what comes down, such that the worry about unintended consequences diminishes, and then we would want to put those obligations on platforms. But we’re not there now.

How has Europe dealt with these issues? It seems like they are putting pressure on tech companies to be transparent.

Europe, until recently, had the legal situation these plaintiffs are asking for. Europe had one big piece of legislation that governed platform liability, enacted in 2000, called the E-Commerce Directive. It had this very blunt idea that if platforms “know” about illegal content, then they have to take it down in order to preserve immunity. And what they discovered, unsurprisingly, is that the law led to a lot of bad-faith accusations by people trying to silence their competitors or people they disagree with online. It also led to platforms being willing to take down way too much stuff to avoid risk and inconvenience. And so European lawmakers overhauled it in a law called the Digital Services Act, to get rid of, or at least try to get rid of, the risks of a system that tells platforms they can make themselves safe by silencing their users.
