SCOTUS Sidesteps Section 230

Will Duffield

Gonzalez v. Google, a much-watched Supreme Court case about whether Section 230 protects algorithmic curation, ended with a whimper on Thursday. In a three-page per curiam opinion, the Court avoided addressing Section 230 at all. Instead, the Court decided Gonzalez via Twitter v. Taamneh, a related case about platforms’ underlying liability for hosting terrorist speech under the Anti-Terrorism Act (ATA).

In a clear, unanimous decision authored by Justice Thomas, the Court held that Twitter did not “aid and abet” ISIS under the ATA by failing to prevent the terror group from using its platform. Because the Gonzalez plaintiffs’ underlying claims were essentially the same as those in Twitter v. Taamneh, it wasn’t necessary for the Court to rule on whether Section 230 protected Google from the plaintiffs’ now-defunct ATA claims. Remanding Gonzalez, the Court writes:

“ … we think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under either our decision in Twitter or the Ninth Circuit’s unchallenged holdings below. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief.”

Most important is the simple fact that the Court was offered an opportunity to reinterpret or remake Section 230 and declined to take it. This is, as I explained after oral argument, the best possible outcome: “The best outcome would be for the court to dismiss Gonzalez as improvidently granted and decide the matter in Twitter v. Taamneh, a related case about the scope of the Anti-Terrorism Act.” By refraining from ruling on Section 230, the Court avoided inadvertently muddying the waters of settled lower-court precedent.

Instead, the Court signaled that even in the face of intractable partisan disagreement, decisions to impose a duty to remove speech rest with legislatures, not the courts. Indeed, in footnote 14, Thomas writes, “Plaintiffs have not presented any case holding such a company liable for merely failing to block such criminals despite knowing that they used the company’s services. Rather, when legislatures have wanted to impose a duty to remove content on these types of entities, they have apparently done so by statute.”

This reasoning cuts against both knowledge standards and exceptions for algorithms. The fact that a platform knew of an unlawful misuse of its services (and, in this case, tried to prevent it, albeit imperfectly) does not make it more liable for that misuse. And, despite several attempts, Congress has refrained from creating a duty to exclude unlawful content from algorithmic curation tools.

Although Twitter v. Taamneh does not affect platforms’ general protection from liability for user speech under Section 230, Thomas hits all the right philosophical notes as he applies the common law to the ATA’s aiding-and-abetting provisions. He draws a clear line between the classic common law standard of aiding and abetting in Halberstam v. Welch, in which a burglar’s live-in partner and accountant was held liable for supporting her partner’s criminal career, and the passive support at issue in Twitter and Gonzalez. Thomas writes, “The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”

Examining the role of algorithms in Twitter, Thomas correctly presents algorithmic curation as integral to websites’ ability to host user speech.

“Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself.”

He continues by recognizing that platforms rarely filter content at upload, relying instead on post-hoc moderation: “There is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms.” Crucially, Thomas treats this model as a legitimate way of offering the opportunity to speak to the largest number of people.

“The mere creation of those platforms, however, is not culpable. To be sure, it might be that bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends. But the same could be said of cell phones, email, or the internet generally.”

In a world where the basic business models and default openness of major social media platforms have come under attack, this recognition is valuable in any context. Paired with the Court’s decision to pass up an opportunity to reinterpret Section 230, it is a victory for free speech worth celebrating.
