Section 230 Under Fire: Recent Cases, Legal Workarounds, and Reforms

Introduction

For over two decades, Section 230 of the Communications Decency Act (CDA) has served as a powerful liability shield for online platforms. This provision broadly protects websites from being treated as the publisher of third-party content. In practice, Section 230 has enabled social media networks, forums, and e-commerce sites to host user posts, reviews, and listings without facing publisher liability for anything users say or do.

In recent years, however, this once-impenetrable shield has been tested by lawsuits, strategic pleading by plaintiffs, and mounting political scrutiny. Courts and lawmakers are increasingly probing the limits of Section 230 immunity in cases involving recommendations, product design defects, online marketplace transactions, and even AI-generated content. At the same time, attorneys and regulators are devising creative legal theories – from product liability to civil rights and anti-trafficking laws – to work around Section 230’s protections. This post by Eric Rosen surveys the most significant court rulings, legislative developments, and litigation strategies affecting Section 230 in the past few years, with a focus on how different industries (social media, e-commerce, crypto, and AI) are impacted.

Section 230’s Broad Shield – and Its Cracks

The Basics of Section 230. Section 230(c)(1) states that no provider of an “interactive computer service” shall be treated as the publisher or speaker of information provided by another information content provider. Courts have distilled a three-prong test (from Barnes v. Yahoo!) to determine if a claim is barred: (1) the defendant is an interactive computer service provider, (2) the claim treats the defendant as a publisher or speaker, and (3) the content at issue was provided by another party. When these elements are met, Section 230 can immunize the platform from a wide array of state law claims (defamation, negligence, etc.), as well as many federal claims, for harms caused by user-generated content. This immunity has been described as “formidable” – it has routinely led to early dismissal of lawsuits seeking to hold websites liable for user posts.

Historical breadth vs. emerging limits. Traditionally, courts interpreted Section 230 very broadly. Early precedents like Zeran v. AOL (4th Cir. 1997) set the tone that platforms are not liable for failing to remove or edit third-party content, even after notice of its falsity. Over time, however, cracks have appeared. Courts have recognized a few exceptions and limitations in Section 230’s scope: for example, suits targeting a platform’s own promises or content (rather than user content) may fall outside the shield. And in 2018, Congress itself amended the CDA through FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act), carving out certain sex-trafficking claims from Section 230’s protection. The past few years have seen an acceleration of these challenges. As detailed below, plaintiffs are framing claims to focus on platforms’ conduct (algorithms, product design, warnings, etc.) instead of treating them as mere publishers of user speech. Appellate courts – and even the U.S. Supreme Court – have been grappling with where to draw the line. While Section 230 remains a robust defense in many cases, recent rulings suggest it is not an absolute “get-out-of-jail-free card” in all contexts.

High-Profile Court Rulings Shaping Section 230

The Supreme Court Stays Hands-Off (For Now) – Gonzalez v. Google

One of the most anticipated Section 230 showdowns reached the U.S. Supreme Court in 2023 with Gonzalez v. Google LLC. The case stemmed from claims that YouTube’s algorithms recommended ISIS terrorist videos to users, allegedly aiding and abetting an attack in violation of the Anti-Terrorism Act. The Ninth Circuit had held that Section 230 immunized Google, even for algorithmically recommended content. However, in a much-watched decision, the Supreme Court punted on the Section 230 issue. In May 2023, the Court issued a brief per curiam opinion sidestepping the 230 question and instead concluding that the plaintiffs’ underlying terrorism claims failed on their own merits. Having held in the companion case (Twitter v. Taamneh) that similar allegations did not plausibly state an aiding-and-abetting claim, the Court found it unnecessary to interpret Section 230’s reach in Gonzalez. This narrow outcome left prevailing Section 230 law unchanged for now, disappointing those who hoped the Justices would clarify whether recommendation algorithms fall outside the immunity.

Despite this avoidance, the Supreme Court’s foray into Section 230 signaled growing judicial interest. At oral argument, some Justices expressed uncertainty about the law’s breadth. Justice Thomas, in particular, has repeatedly invited re-examination of Section 230’s “sweeping” interpretation in separate writings, suggesting it has been applied more broadly than its text and original purpose support. Although Gonzalez did not result in a new test, it put Section 230 on the Court’s radar.

Algorithmic Recommendations: Neutral Tools or Platform Conduct?

Old rule – algorithms are covered. Traditionally, courts treated content recommendation and sorting algorithms as extensions of a platform’s role in publishing third-party content. For example, the Second Circuit in Force v. Facebook (2019) held that Facebook’s algorithms organizing and recommending user posts (even extremist content) are “neutral tools” entitled to 230 protections. Similarly, the Ninth Circuit in Dyroff v. Ultimate Software (2019) found that features like algorithms that suggest connections or content are simply tools to facilitate user communications, not content in themselves. Under this view, a platform doesn’t lose immunity just because its software curates or amplifies what users post – these functions were seen as indistinguishable from publishing or distributing information provided by others.

New crack – Third Circuit’s TikTok decision. A recent Third Circuit ruling broke from that pattern, suggesting a platform’s algorithmic recommendations can give rise to liability. In Anderson v. TikTok, Inc. (3d Cir. 2024), the estate of a 10-year-old girl sued TikTok after she died attempting the viral “Blackout Challenge” allegedly promoted to her by TikTok’s “For You” algorithm. The district court had dismissed the case under Section 230, but the Third Circuit reversed in a landmark decision, holding that TikTok could potentially be held liable for its role in actively recommending dangerous content. The court reasoned that TikTok “is not merely hosting third-party content but actively recommending specific content to users,” effectively engaging in its own form of expression or conduct. By curating and targeting the challenge videos to a vulnerable child, TikTok’s algorithm crossed the line from passive intermediary to active participant, in the Third Circuit’s view. While Section 230 protects publishing third-party content, it “may not extend to the algorithms [platforms] use to curate and promote that content.” One concurring judge argued forcefully that platforms should not be immune when their proprietary algorithms push deadly content to kids, criticizing the “causal indifference” and lack of accountability under the old 230 paradigm.

Active promotion vs. neutral tools. The contrast between Force/Dyroff and Anderson highlights an evolving litigation strategy: plaintiffs now argue that certain recommendation or matchmaking systems aren’t “neutral” at all, but rather a form of platform conduct. If a court agrees, Section 230 may not apply. A similar logic appeared in a California state appellate decision involving a YouTube crypto scam. In Wozniak v. YouTube (Cal. Ct. App. 2024), Apple co-founder Steve Wozniak and others sued YouTube over “Bitcoin giveaway” scam videos that used Wozniak’s likeness. A trial court dismissed the claims under 230, but the appellate court revived them in part, finding plaintiffs had plausibly alleged YouTube contributed to the content. Notably, YouTube had provided verification badges to the scam channels, falsely signaling to users that those channels were authentic and trustworthy. The court held that such verification icons constituted information provided by YouTube itself – not by the scammers – potentially making YouTube an “information content provider” of the misleading material. This allowed claims (like fraud or negligent misrepresentation) to proceed on the theory that YouTube’s own affirmative representations (through its design and badges) helped perpetuate the scam. Both Anderson and Wozniak illustrate how framing a claim around a platform’s product features or recommendations – rather than the mere presence of harmful user videos – can erode Section 230 protection.

Product Design Defects and “Duty to Protect” Theories

Another litigation front involves treating social platforms and apps as products with design defects or safety failures, instead of suing them as publishers of user content. Plaintiffs have tested whether Section 230 applies to claims that an app’s design (e.g., features that encourage dangerous behavior or lack safety guards) caused harm.

Snapchat Speed Filter case – a notable exception. In Lemmon v. Snap, Inc. (9th Cir. 2021), parents sued Snap after their teens died in a high-speed car crash; shortly before the crash, the boys had used Snapchat’s “Speed Filter,” a feature that overlaid the user’s real-time speed on their snaps. Plaintiffs argued the feature incentivized reckless driving for social media cred. Snap invoked Section 230, but the Ninth Circuit allowed the case to proceed, finding the claim was about Snap’s own product design, not the publishing of third-party content. The duty not to design a product that actively encourages dangerous behavior is independent of any duty to monitor or remove user content. In other words, even though user-generated photos/videos were involved, the gravamen was that Snap created a defective feature – a claim more akin to a products liability or negligence theory. Lemmon was a breakthrough: it showed a path to defeat 230 by decoupling the harm from user content, focusing instead on how the platform’s own feature functioned.

Dating apps and “failure to protect” claims – mostly barred. By contrast, when plaintiffs have tried to reframe harassment or crimes by users as the platform’s fault, courts often find those claims still boil down to publishing third-party content. A prime example is the litigation against Grindr, a dating app. In Herrick v. Grindr (S.D.N.Y. 2018, aff’d 2d Cir.), a man was relentlessly harassed by strangers due to a fake profile of him on Grindr created by his ex-boyfriend, who sent dozens of would-be suitors to the victim’s home. The victim sued Grindr alleging product liability, design defect (for not having safety features to block spoofed accounts), failure to warn, and other torts. However, the court dismissed the case under Section 230. Even though the plaintiff pleaded product-type claims, the injuries still originated from false content (the impersonating profile) provided by another user. The claims were essentially that Grindr failed to police and remove harmful content – a classic role of a publisher, squarely immunized by 230. The Second Circuit agreed that the “product defect” framing couldn’t escape the fact that the harm was caused by third-party communication.

Similarly, in the newer Doe v. Grindr case (N.D. Cal. 2023, aff’d 9th Cir. 2025), a plaintiff who had been sexually exploited as a minor on the app sued Grindr, alleging negligent design (e.g., Grindr should have better age verification and filters to prevent minors from being contacted by adults) and failure to warn. The Ninth Circuit rejected these claims as inherently treating Grindr as the publisher of user communications. Any duty to “suppress communications” between certain users or to warn of misuse would require monitoring user content and interactions – activity directly tied to the platform’s role in publishing others’ content. The court distinguished this from Lemmon, noting that unlike Snap’s Speed Filter, Grindr’s matching and messaging features were “neutral” tools for communication, not dangerous in themselves. Thus, claims that effectively seek safer moderation or user controls will likely implicate Section 230, whereas claims targeting a stand-alone product feature (unrelated to publishing third-party speech) have a better chance to survive.

Online marketplaces and liability for offline harms. E-commerce platforms have also faced suits alleging their site design facilitated illegal or harmful transactions. A notable case is Daniel v. Armslist – involving a classifieds website for firearms. The plaintiff was the daughter of a woman killed by someone who illegally bought a gun via Armslist (a sale that avoided background checks). She sued Armslist for negligent design of the site (e.g., allowing searches by private sellers, no safeguards to prevent prohibited buyers). A Wisconsin appellate court initially held Section 230 did not apply, drawing a thin distinction that the claims were about website features, not specific user content. However, the Wisconsin Supreme Court reversed in 2019, firmly applying Section 230. The state high court found that the alleged design flaws (search filters, lack of verification) were all related to how Armslist published or structured third-party listings. The duty purportedly breached was essentially “providing an online forum for third-party content and failing to adequately monitor that content” – precisely the type of claim Section 230 prohibits. The takeaway is that courts remain reluctant to allow liability for crimes stemming from online marketplaces unless the platform injected its own content or tools that actively contributed to the illegality. (Notably, in Armslist, no such active role was alleged, whereas in Wozniak’s YouTube case, the platform’s own verification badge was alleged to contribute to the scam.)

In sum, product liability and design-defect theories have emerged as a way to “plead around” Section 230, but success depends on convincing the court that the harm was caused by something other than the platform’s publication of user content. Snap faced liability because the harm was tied to its filter design, not a specific user’s Snap content. Grindr and Armslist, on the other hand, avoided liability because the harms were traceable to interactions and content created by users on their platforms. This line remains fact-sensitive and is actively being litigated in cases involving social media addiction, cyberbullying, and other alleged design defects in platform algorithms (many of which are part of ongoing multi-district litigation against Meta, TikTok, and others).

Liability for Facilitating Illegal Transactions: FOSTA, Sex Trafficking, and Beyond

The FOSTA carve-out. In 2018, amid bipartisan concern over online sex trafficking, Congress passed FOSTA-SESTA, which amended Section 230 to exclude immunity for certain sex trafficking-related claims. Specifically, Section 230(e)(5) now allows victims to bring civil claims under 18 U.S.C. §1595 (the federal trafficking victim statute) where the defendant knowingly participated in the trafficking venture, and it permits state criminal prosecutions for conduct that violates federal anti-trafficking laws. This was a direct response to websites like Backpage.com, which was accused of facilitating prostitution and trafficking ads. Indeed, around the time FOSTA was enacted, Backpage was shut down by the DOJ and its executives faced criminal charges. But what about other companies tangentially connected to user misconduct? Recent cases are testing just how far liability can extend for knowingly assisting illegal online activities.

Section 230 doesn’t protect knowing participation in illegality. A prime example is A.B. v. Salesforce.com (5th Cir. 2024), where trafficking victims sued Salesforce – not for hosting any content, but for selling customer relationship management (CRM) software to Backpage. The victims alleged Salesforce knew Backpage was being used for trafficking and yet provided it with database tools and support that helped optimize Backpage’s business. Salesforce argued it was immunized by 230 as an “interactive computer service” supporting an online publisher. The Fifth Circuit rejected that defense. It held that Salesforce’s role was not that of a publisher or speaker of third-party content at all – Salesforce did not host, edit, or monitor any of Backpage’s user ads. Rather, the claims centered on Salesforce’s own conduct in facilitating a criminal enterprise, which falls outside Section 230’s scope. The court noted that Salesforce wasn’t being sued for reviewing or posting content, but for providing a service (software and support) that enabled Backpage’s trafficking venture despite known illegality. That kind of direct participation in a venture is exactly what FOSTA’s Section 230 exception was designed to reach.

Other courts have similarly held that simply turning a blind eye to misuse isn’t enough to lose 230 immunity – there must be knowing affirmative participation. For instance, victims of child sexual exploitation sued Reddit for hosting users who traded child pornography, arguing Reddit should be liable under the trafficking victim law. A California court (affirmed by the Ninth Circuit) dismissed those claims because the plaintiffs failed to show that Reddit had actual knowledge of, and knowingly assisted, the specific trafficking venture. The courts underscored that “merely turning a blind eye” to user misconduct on a platform does not amount to active participation that strips 230 immunity. In Doe v. Reddit, as in Doe v. Grindr, the allegations fell short of showing the platform did anything more than provide a forum that bad actors misused. By contrast, Salesforce was not merely a forum provider – it was accused of directly enabling a known illicit operation.

State law claims for offline crimes – tricky terrain. Even before FOSTA, state courts looked for ways to hold platforms accountable for crimes like sex trafficking. In 2021, the Texas Supreme Court allowed trafficking survivors to sue Facebook under a state civil statute, reasoning that Section 230 was never intended to shield websites that knowingly profit from such criminal activity. That decision (Doe v. Facebook, Tex. Sup. Ct. 2021) was controversial, arguably conflicting with federal law, but it reflected a judicial willingness at the state level to narrowly interpret 230 when heinous crimes are facilitated online. Now, with FOSTA in place, plaintiffs and state AGs have a clearer path: they focus on evidence that a platform (or its partners) had specific knowledge of trafficking and provided assistance, to fit within the statutory exception.

Beyond trafficking, a related strategy is citing other laws not preempted by 230. Section 230(e) exempts intellectual property claims and federal criminal law, for example. Plaintiffs have tried framing certain online harm cases as IP violations or product liability to avoid 230. While IP claims (like copyright or trademark) are explicitly outside Section 230, courts have generally been wary of creative relabeling. (One recent decision held that the “intellectual property” exception to 230 does not encompass right of publicity claims under state law, treating those as still barred. This means platforms remained immune from a claim that they hosted content violating a person’s likeness rights, since that wasn’t considered “intellectual property” under Section 230’s meaning.) The main successes against 230 remain tied to either federal causes of action carved out by Congress (like trafficking), or clever pleading that the platform effectively co-created the illegal content.

Enforcement of Platforms’ Own Promises: Contract and Estoppel Claims

A more contract-oriented tactic has also gained attention: holding platforms to their terms of service or public promises regarding content moderation. The leading case in this vein was Barnes v. Yahoo! (9th Cir. 2009), where Yahoo allegedly promised a revenge-porn victim that it would remove nude photos an ex posted of her but failed to do so. The plaintiff’s promissory estoppel claim survived Section 230 because Yahoo’s specific promise to her was enforceable – the lawsuit sought to hold Yahoo accountable for breaching a duty it voluntarily undertook, not for its role as publisher per se. Barnes carved out a narrow exception: if a platform makes a concrete promise to a user to remove or address content, and the user relies on that promise, then a claim for breach of that promise is not barred by 230 (even though it relates to content moderation).

In recent years, plaintiffs have tried to extend this principle to platforms’ general terms of service or content policies. For example, families of teens harmed by cyberbullying or harmful content have argued that social media companies violated duties spelled out in their community guidelines (which often pledge to remove harmful posts or ban users who violate rules). Two Ninth Circuit decisions in 2024 – Calise v. Meta and Estate of Carson Bride v. YOLO Technologies – suggested that even broad statements in a platform’s terms or policies might be treated as enforceable promises not covered by Section 230. The concern for platforms is that if any general “we care about safety” statement in a policy could create liability whenever the platform fails to remove some harmful content, it would significantly erode 230 immunity.

Lower courts are split or cautious on this. A recent case in the Northern District of California, Ryan v. Twitter (X) (2024), illustrates the pushback. There, a cryptocurrency/NFT promoter sued X (formerly Twitter) for suspending his accounts, alleging among other things that X breached its own terms of service and committed fraud by not following stated moderation policies. The court dismissed most claims under Section 230, even though the plaintiff argued Twitter’s use of an AI-based moderation system and failure to give an explanation violated its promises. The judge noted that Barnes created only a “limited exception” for specific, personal promises by a platform operator. General content policies or aspirational statements usually don’t amount to enforceable promises – and reading them that way would conflict with many other cases holding such claims preempted. In Ryan, the court ultimately held the plaintiff’s contract-based claims were barred: despite Twitter’s terms, the decision to suspend the account was made in its capacity as a publisher of information, thus falling under 230’s umbrella.

Similarly, in Doe v. Grindr (9th Cir. 2025), the plaintiff tried a “negligent misrepresentation” theory, citing Grindr’s terms of service promise of a “safe environment.” The Ninth Circuit found that Grindr’s vague assurances of safety were “too general to be enforced” – unlike the concrete removal promise in Barnes. Courts remain more inclined to view failure to police content as fundamentally a publishing issue, unless a plaintiff can point to a clear, relied-upon commitment by the company. The bottom line is that plaintiffs will have a tough time using contract or estoppel claims to get around 230 unless they have evidence of a specific promise made directly to them (or a narrow class) that the platform then breached. General statements in user agreements about striving to remove harmful content likely won’t defeat immunity – and expanding Barnes too far has been met with resistance.

A New Frontier: AI-Generated Content and Section 230

With the explosion of generative AI tools that produce content in response to user prompts, a pressing question is whether Section 230 applies when the content wasn’t provided by a human “user,” but by the platform’s own AI. By its text, Section 230 protects an interactive computer service from liability for information “provided by another information content provider.” If the service itself (through algorithms or AI) is wholly or partly creating the content, then the service is acting as an information content provider, not just a neutral intermediary. This suggests that harmful or unlawful outputs of AI might not be shielded by Section 230.

We are already seeing the first test cases. In 2023, a Georgia radio host filed a defamation lawsuit against OpenAI, alleging that ChatGPT falsely stated he was accused of embezzling funds in a legal case. ChatGPT essentially “hallucinated” a fake legal complaint containing damaging lies about the plaintiff. Notably, OpenAI has not tried to invoke Section 230 in response – instead, it’s arguing traditional defamation defenses (such as lack of “actual malice” since the host is a public figure). The likely reason is that OpenAI knows Section 230 would be a weak shield here: the defamatory statement was entirely generated by ChatGPT, not provided by a user or third party. In the eyes of the law, OpenAI is much closer to the publisher or creator of that content. Legal scholars like Prof. Lyrissa Lidsky have pointed out that these AI libel cases will force courts to navigate “between two sets of legal principles” – the product liability frame (because AI is a product/service) and the defamation frame (because it produces speech). Section 230 wasn’t designed with AI in mind, and it may not fit: a platform cannot claim to be just an intermediary when it’s effectively manufacturing the content output via its algorithms.

We can also analogize to older cases involving automated features. Courts have said that if a site “develops” content in part, it loses immunity for that content. For example, the Roommates.com case (9th Cir. 2008) denied 230 protections to a roommate-matching site for the portion of its site where it required users to answer potentially discriminatory questions (like sex and family status) – the site was held to be a co-developer of that information. Likewise, if an AI chatbot generates a defamatory paragraph about someone, the platform has at least co-developed it (if not created it outright). That content is not “information provided by another” – it’s provided by the AI controlled by the defendant. Thus, AI companies and any platform deploying generative AI should not assume Section 230 immunizes them from claims like defamation, privacy violations, or false information harms caused by AI outputs. They may need to rely on other defenses and meticulously warn users (as OpenAI does in its terms) that the AI can produce false information. Still, warning users might not shield them from all liability. Courts will soon have to decide how traditional tort concepts (fault, publication, intent) apply when the “speaker” is an algorithm. The outcomes of cases like the OpenAI defamation suit and others on the horizon will be pivotal in defining the legal exposure of AI-generated content providers.

Regulatory and Legislative Developments

The evolving case law has been matched by growing legislative efforts to reform Section 230. Both Congress and state legislatures have proposed numerous bills – though few have become law – aiming to recalibrate platform immunity in light of current concerns.

Federal reform bills. In the past few years, bipartisan consensus has emerged that Section 230 in its current form may be outdated, but lawmakers disagree on how to fix it. Democrats often focus on holding platforms accountable for misinformation, hate speech, and harms to users, whereas Republicans often seek to prevent perceived censorship of political viewpoints. Some notable proposals include:

  • SAFE TECH Act (Senate) – introduced by Sen. Mark Warner and others, this bill would significantly scale back 230 immunity, including by excluding paid content/ads from protection and ensuring that civil rights and harassment laws are not preempted. It aims to “reaffirm victims’ rights” by making it easier to sue platforms when their services are misused in certain ways. As of 2023-24, the SAFE TECH Act has been reintroduced but not enacted.

  • EARN IT Act – a bill targeting online child sexual abuse material (CSAM). It proposes to condition Section 230 immunity on platforms following best practices to curb CSAM, effectively removing 230 protections if they don’t “earn it” through compliance. Civil liberties groups have opposed this due to concerns it would undermine encryption and privacy. The EARN IT Act has been considered in multiple sessions of Congress but not yet passed into law.

  • Kids Online Safety Act (KOSA) – Among several kids’ online safety proposals, Senators Blumenthal and Blackburn introduced the Kids Online Safety Act, which, while not directly amending Section 230, would impose a duty on platforms to prevent and mitigate harm to minors (such as promoting self-harm or eating disorder content). If enacted, such duties could indirectly create new legal exposure for platforms (and possibly interact with 230 if states or individuals sue for violations). KOSA passed the Senate in 2024 but then stalled in the House (as many bills do).

    Notably, despite much debate, Congress has not passed any major 230 reform since FOSTA in 2018. Multiple attempts have stalled, partly due to First Amendment concerns and lack of consensus. The Supreme Court’s hesitation in Gonzalez also puts the onus on Congress to act if big changes are desired. As it stands, however, the threat of legislation is prompting platforms to take voluntary steps (e.g., more robust content moderation, transparency reports) to preempt heavy-handed regulation.

State laws taking matters into their own hands. Several states have pursued their own laws addressing online content, though these often run into federal preemption or constitutional issues:

  • State age-verification and parental consent laws. States like Utah and Arkansas passed laws in 2023 requiring minors to obtain parental consent to have social media accounts and mandating age verification for users. These laws reflect concern over minors’ exposure to harmful content. While they don’t directly amend or conflict with Section 230 (they regulate user access, not platform liability for content), they indicate a willingness of states to regulate the platform-user relationship for safety. How these laws will be enforced and whether they survive legal challenge (on grounds such as the First Amendment or COPPA preemption) remains to be seen.

  • Anti-censorship laws (Florida and Texas). In a twist, Florida and Texas enacted laws in 2021 prohibiting large social media platforms from “censoring” users based on viewpoint, aiming to protect political speech. These laws (Florida’s S.B. 7072 and Texas’s H.B. 20) were immediately challenged by tech industry groups on First Amendment grounds. The Eleventh Circuit struck down most of Florida’s law, while the Fifth Circuit upheld Texas’s law – a direct circuit split. In these cases, Section 230 played a secondary role: the states argued their laws weren’t creating liability for third-party content but restricting platforms’ own moderation choices, so 230’s preemption of state laws might not even apply. The Supreme Court took up both cases (Moody v. NetChoice and NetChoice v. Paxton) and in July 2024 vacated and remanded them for a more rigorous facial-challenge analysis, while emphasizing that platforms’ curation and moderation of content is protected editorial activity. That reasoning bears indirectly on Section 230: strong First Amendment protection for platforms preserves their moderation discretion, whereas reviving the state laws on remand could force platforms to carry more user content, altering the practical balance that 230 currently underpins.

  • Public nuisance and novel torts. Some government entities have sued social media companies on novel grounds like public nuisance – for example, the Seattle Public School District’s 2023 lawsuit against TikTok, Meta, and others, blaming them for a youth mental health crisis. These claims attempt to analogize social media harms to environmental or public health nuisances. Section 230 is a major hurdle for such suits; the platforms argue that these are essentially claims “arising from publication of third-party content” (e.g., other students’ posts, harmful challenges going viral, etc.), which 230 would bar. Plaintiffs counter that the claim is about platforms’ conduct (algorithm design, targeting minors, etc.). It’s an uphill battle – most expect 230 will foreclose public nuisance claims, but the very attempt illustrates the creative strategies to make platforms answer for widespread societal harms allegedly linked to their products. No final decisions have been made in these novel suits yet, but they are being closely watched.

Regulators’ role. It bears noting that Section 230 does not shield platforms from federal criminal prosecution or civil enforcement by agencies. The DOJ can and has prosecuted websites for crimes (Backpage being the prime example). The FTC can pursue platforms for deception or data breaches, etc. State attorneys general, thanks to FOSTA, can bring actions against sites promoting prostitution or trafficking. Thus, beyond litigation by private plaintiffs, regulators are exerting pressure through investigations, hearings, and the court of public opinion. In 2023, the U.S. Surgeon General even issued an advisory on social media’s risks to youth, adding momentum to calls for stricter oversight. While these actions don’t directly change Section 230, they contribute to a climate in which platforms are more proactive about content policies to avoid being seen as “causally indifferent” to harm.

On the whole, the legislative landscape is in flux, but no sweeping overhaul of Section 230 has happened yet. Lawyers should keep an eye on the reintroduction of bills in the current Congress, and on the fallout from the Supreme Court’s NetChoice decisions, which, while focused on the First Amendment, could influence the future interplay of law and online content.

Conclusion: Evolving Strategies and the Road Ahead

Recent court decisions and legal strategies illustrate that Section 230’s once-absolute protection is being chipped away at the margins. To be sure, the core principle of immunity for third-party content remains intact in many situations – as the Ninth Circuit noted this year, Section 230 continues to provide a “potent shield” for online platforms, even in cases involving serious harms. For everyday defamation or ordinary user misconduct, platforms can still expect robust immunity. However, plaintiffs are no longer painting all claims with the same broad brush. They are parsing platforms’ conduct to find actionable elements that fall outside publishing someone else’s speech: Did the platform design a feature that led to the harm? Did it make a promise or representation it failed to uphold? Did it actively encourage or create the unlawful content? These questions drive the new wave of lawsuits.

Social media companies are feeling the heat as courts scrutinize algorithms and addictive design elements in a way that earlier courts did not. E-commerce platforms must be mindful that facilitating a transaction (especially of dangerous goods) could expose them to liability if courts see the claim as about the transaction rather than the content of a listing. Crypto companies and exchanges, while not typically sued for user content, find themselves part of the conversation when scams spread on mainstream platforms – as seen in the Wozniak case, where a crypto scam on YouTube led a state appellate court to narrow 230’s application. And companies offering AI-generated content are effectively operating outside Section 230’s umbrella – they will need to bolster other defenses, because if their AI defames someone or produces illegal content, they’ll likely be treated as the publisher of that content in the eyes of the law.

For plaintiffs’ lawyers, the blueprint is clear: focus on the platform’s own contributions to the harm. Whether that means leveraging the FOSTA carve-out by showing knowing participation in a venture, framing a claim as a product defect or failure to warn independent of user content, or pointing to a platform’s misrepresentations or endorsements (like verification badges), the aim is to fit within an exception to 230 or convince the court that 230 simply doesn’t speak to the conduct at issue. We have seen courts receptive to some of these arguments, but outcomes vary. It is still risky to assume a clever pleading will overcome 230 – many judges continue to enforce the statute’s broad immunity for anything that fundamentally derives from third-party content.
