A wide range of businesses, internet users, academics and even human rights experts defended Big Tech’s liability shield in a pivotal Supreme Court case about YouTube algorithms, with some arguing that excluding AI-driven recommendation engines from federal legal protections would cause sweeping changes to the open internet. From a report: The diverse group weighing in at the Court ranged from major tech companies such as Meta, Twitter and Microsoft to some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Even Reddit and a collection of volunteer Reddit moderators got involved. In friend-of-the-court filings, the companies, organizations and individuals said the federal law whose scope the Court could potentially narrow in the case — Section 230 of the Communications Decency Act — is vital to the basic function of the web. Section 230 has been used to shield all websites, not just social media platforms, from lawsuits over third-party content.
The question at the heart of the case, Gonzalez v. Google, is whether Google can be sued for recommending pro-ISIS content to users through its YouTube algorithm; the company has argued that Section 230 precludes such litigation. But the plaintiffs in the case, the family members of a person killed in a 2015 ISIS attack in Paris, have argued that Google can be held liable under a US antiterrorism law for YouTube's algorithmic recommendations. In their filing, Reddit and the Reddit moderators argued that a ruling permitting lawsuits over tech platforms' recommendation algorithms could open the door to future suits over even non-algorithmic forms of recommendation, and potentially to targeted lawsuits against individual internet users.