
The Supreme Court declined to address the legal protections that shield tech platforms from liability for their users’ posts, the court said in an unsigned opinion Thursday.
The decision leaves in place the broad liability shield that protects companies like Twitter, Meta’s Facebook and Instagram, as well as Google’s YouTube, from being held responsible for their users’ speech on their platforms.
The court’s rulings in these cases will come as a big sigh of relief for tech platforms for now, but many members of Congress are still itching to reform legal liability protections.
In the case, Gonzalez v. Google, the court said it would “decline to address the application” of Section 230 of the Communications Decency Act, the law that shields platforms from liability for their users’ speech and also allows the services to moderate or remove users’ posts. The court said it made that decision because the complaint “appears to state little, if any, plausible claim for relief.”
The Supreme Court will send the case back to a lower court for reconsideration in light of its decision in a separate but similar case, Twitter v. Taamneh.
In that case, the family of an American victim of a terrorist attack sought to hold Twitter liable under anti-terrorism laws for aiding and abetting the attack by failing to take sufficient action against terrorist content on its platform. In a decision written by Justice Clarence Thomas, the court ruled that such a claim could not be brought under this statute.
“As alleged by plaintiffs, defendants designed virtual platforms and knowingly failed to do ‘enough’ to remove ISIS-affiliated users and ISIS-related content—out of hundreds of millions of users worldwide and an immense ocean of content—from their platforms,” Thomas wrote in the court’s unanimous opinion.
“Yet, plaintiffs have failed to allege that defendants intentionally provided any substantial aid to the Reina attack, or otherwise consciously participated in the Reina attack—much less that defendants so pervasively and systemically assisted ISIS as to render them liable for every ISIS attack,” he added, referring to the Istanbul nightclub where the terrorist attack took place.
Many lawmakers see Section 230 as an unnecessary protection for a massive industry, though its advocates say the law also protects smaller players from costly lawsuits by allowing cases over user speech to be dismissed at an earlier stage. Lawmakers remain divided on what form changes to the law should take, meaning any reform still faces huge hurdles.
“This decision to leave Section 230 untouched is an unequivocal victory for the moderation of online speech and content,” Jess Miers, legal counsel for the Meta and Google-backed Chamber of Progress, said in a statement. “While the Court may once have had an appetite for reinterpreting decades of Internet law, it was clear from oral argument that changing Section 230’s interpretation would create more problems than it would solve. In the end, the Court made the right decision. Section 230 has made possible the Internet as we know it.”
“This is a huge win for free speech on the internet,” Chris Marchese, litigation center director for NetChoice, a group whose members include Google, Meta, Twitter and TikTok, said in a statement. “The court was asked to undermine § 230 – and declined.”
