The Law and Business of Social Media
December 15, 2016 - User-Generated Content, Section 230 Safe Harbor

The Decline and Fall of Section 230?

2016 has been a tough year for a lot of reasons, most of which are outside the scope of this blog (though if you’d like to hear our thoughts about Bowie, Prince or Leonard Cohen, feel free to drop us a line). But one possible victim of this annus horribilis is well within the ambit of Socially Aware: Section 230 of the Communications Decency Act (CDA).

Often hailed as the law that gave us the modern Internet, CDA Section 230 provides website operators with immunity from liability for certain claims arising from third-party or user-generated content. The Electronic Frontier Foundation has called Section 230 “the most important law protecting Internet speech,” and companies including Google, Yelp and Facebook have benefited from the protections offered by the law, which was enacted 20 years ago.

But it’s not all sunshine and roses for Internet publishers, particularly over the past 18 months. Plaintiffs are constantly probing for chinks in Section 230’s armor, and in an unusually large number of recent cases, courts have held that Section 230 did not apply. These decisions raise the question of whether the historical trend toward broadening the scope of Section 230 immunity may now be reversing. This article provides an overview of recent cases that appear to narrow the statute’s protections.

The “Publisher or Speaker” Requirement

CDA Section 230(c)(1) states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Plaintiffs sometimes argue that Section 230 does not apply because the claims they are asserting do not treat the defendant as a publisher or speaker. This has not always been a successful argument, but it has prevailed in several recent cases.

Doe No. 14 v. Internet Brands involved a website called Model Mayhem, which is designed to match models with prospective gigs. In 2012, a Jane Doe plaintiff sued Internet Brands, the parent company of Model Mayhem, alleging that the site’s operator was negligent in failing to warn users that rapists were using the website to find victims. As a result of that failure, Doe alleged, two assailants were able to use the website to lure her to a fake audition, where they drugged and raped her.

The plaintiff argued that Section 230 did not apply because a “failure to warn” claim did not depend on Model Mayhem being the publisher or speaker of content provided by another person. The Ninth Circuit accepted this argument and reversed the district court’s dismissal of the case, which had been based on Section 230 immunity.

In his opinion, Judge Clifton explained that Jane Doe did not seek to hold Internet Brands liable as a publisher or speaker of content posted by a user on the website, or for its failure to remove content posted on the website. Instead, she sought to hold Internet Brands liable for failing to warn her about information it obtained from an outside source about how third parties targeted victims through the website. This duty to warn would “not require Internet Brands to remove any user content or otherwise affect how it publishes or monitors such content.” Since the claim did not treat Internet Brands as a publisher or speaker, the court ruled that Section 230 did not apply.

Some commentators have criticized this ruling, arguing that imposing an obligation on website operators to warn about potentially harmful users is impractical and contrary to the principles of Section 230 and of many prior cases, and will cause websites to self-censor and over-censor.

A similar argument worked—to an extent—in Darnaa v. Google, a Northern District of California case that involved YouTube’s removal of the plaintiff’s music video based on YouTube’s belief that the plaintiff had artificially inflated view counts. The plaintiff sued for breach of the covenant of good faith and fair dealing, interference with prospective economic advantage and defamation. She sought damages and an injunction to prevent YouTube from removing the video or changing the video’s URL.

The district court held that Section 230(c)(1) preempted the plaintiff’s interference claim, but not her good faith and fair dealing claim. The court explained that the latter claim sought to hold YouTube liable for breach of its good faith contractual obligation to the plaintiff, rather than in its capacity as a publisher; as such, Section 230 did not shield YouTube against this claim.

In a similar vein, a California Court of Appeal refused to apply the Section 230 safe harbor in a case involving Yelp, which we recently wrote about. In Hassell v. Bird, the plaintiff, an attorney, sued a former client for defamation over three negative reviews that the plaintiff claimed the defendant had published on Yelp.com under different usernames. When the defendant failed to appear, the court entered a default judgment granting the plaintiff’s requested damages and injunctive relief, and also ordered Yelp, which was not a party to the lawsuit, to remove the offending posts. Yelp challenged the order on Section 230 grounds, among others, but the court held that Section 230 did not apply. It reasoned that Yelp itself was not being sued for defamation, so it did not face liability as a speaker or publisher of third-party speech.

Likewise, the Northern District of California court in Airbnb v. City and County of San Francisco denied Airbnb’s request for a preliminary injunction barring enforcement of a San Francisco ordinance that makes it a misdemeanor to provide booking services for unregistered rental units. Airbnb argued that such an ordinance would conflict with Section 230, which contains an express preemption clause stating that no liability may be imposed under any state or local law that is inconsistent with Section 230.

The decision turned on whether the ordinance “inherently requires the court to treat [Airbnb] as the ‘publisher or speaker’ of content provided by another.” Airbnb argued that the threat of criminal penalty for providing booking services for unregistered rental units would require that the company actively monitor and police listings by third parties to verify registration, which would be tantamount to “treating it as a publisher” because that would involve traditional publisher functions of reviewing, editing and selecting content to publish.

But the court held that the ordinance did not treat Airbnb as the publisher or speaker of the rental listings because it applies only to providing, and collecting fees for, booking services in connection with unregistered units, and does not regulate what can and cannot be published. The court therefore denied the request for a preliminary injunction.

Section 230’s Application to “Providers and Users”

In tandem with the “publisher or speaker” argument, plaintiffs sometimes argue that Section 230 does not apply because the defendant is not among the “providers and users of an interactive computer service” covered by Section 230(c)(1). This argument worked for the plaintiff in Maxfield v. Maxfield, a Connecticut state court case.

In Maxfield, the plaintiff sued his ex-wife for defamation, claiming that she forwarded screenshots of defamatory tweets about him to his current wife. The ex-wife raised a Section 230 defense, arguing that she had merely forwarded third-party tweets and had not written them herself. The court, however, found that she was not covered by Section 230 immunity because she “merely transmitted” the defamatory messages. The opinion states: “Ms. Maxfield does not operate a website and plainly is not ‘a provider of an interactive computer service.’ While she might, on occasion, be considered a ‘user of an interactive computer service,’ she did not do so in the behavior alleged in the complaint.” Therefore, the court rejected the defendant’s Section 230 defense.

It is worth noting that the Maxfield decision runs contrary to several prior cases in which courts have held that forwarding defamatory emails would, in fact, be covered by Section 230.

Defendants as Content Developers

One of the arguments most frequently deployed by plaintiffs against Section 230 defenses is that the statutory immunity does not apply if the defendant itself developed or contributed to the relevant material; in other words, that the material is not “information provided by another information content provider” and therefore falls outside the scope of Section 230. Courts have historically been fairly strict about applying this exception and have consistently held that editing, selecting and commenting on third-party content does not strip a defendant of Section 230 immunity. Recent cases, however, seem to blur the line between “developing” content and exercising traditional editorial functions.

In Diamond Ranch Academy v. Filer, the plaintiff, who ran a residential youth treatment facility, sued the defendant, who ran a website that contained critical descriptions of the plaintiff’s facility, for defamation. The critical comments were included in a portion of the website that, according to the defendant, contained third-party complaints about the plaintiff. The defendant asserted a Section 230 defense, arguing that she had merely selected and summarized third-party material to make it more digestible for readers.

However, the court did not find this argument persuasive. In its decision, the court pointed out that the defendant’s posts “do not lead a person to believe that she is quoting a third party. Rather, [she] has adopted the statements of others and used them to create her comments on the website.” The court implied that the lack of quotation marks or other signals that the comments were created by third parties supported the inference that the defendant had “adopted” the statements. The court also noted that she had “elicited” the third-party comments through surveys that she had conducted. Since the court treated the defendant as the author of the allegedly defamatory statements, it held that she was not entitled to protection under Section 230 for those statements.

In a more recent ruling, the Seventh Circuit in Huon v. Denton similarly refused to immunize the defendant from liability for an allegedly defamatory comment posted on its website. The case involved a user comment calling the plaintiff a “rapist,” posted in response to a story published on Jezebel, a property owned by Gawker. The plaintiff argued that Section 230 was inapplicable because “Gawker’s comments forum was not a mere passive conduit for disseminating defamatory statements.” Rather, the plaintiff claimed, Gawker itself was an information content provider because it “encouraged and invited” users to defame the plaintiff by “urging the most defamation-prone commenters to post more comments and continue to escalate the dialogue,” editing and shaping the content of the comments, and selecting each comment for publication.

Many prior cases have held that engaging in editorial activities such as these does not turn a website operator into a content developer for purposes of Section 230. But the Huon court sidestepped these arguments, stating that “we need not wade into that debate” because the plaintiff had also alleged that Gawker employees may have anonymously authored comments to increase traffic to the website. Even though, as one commentator noted, there was no allegation that Gawker employees had written the specific allegedly defamatory comment at issue, the court held that these allegations of anonymous authorship were sufficient to survive Gawker’s motion to dismiss.

Avoidance of Section 230(c)(2) on Technicalities

So far, this article has focused primarily on CDA Section 230(c)(1), which tends to see more action in the courts than its counterpart provision, CDA Section 230(c)(2). But there have also been recent cases that narrow the scope of Section 230(c)(2).

Section 230(c)(2) provides that no provider or user of an interactive computer service will be held liable for its filtering decisions. Specifically, Section 230(c)(2)(A) protects website operators from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” Plaintiffs have argued that Section 230(c)(2) therefore does not apply to filtering decisions based on other types of objections.

In Song Fi v. Google Inc., another case that we previously covered involving the removal of a video for allegedly inflated view counts, the plaintiff asserted claims for, among other things, breach of contract and breach of the implied covenant of good faith and fair dealing. The defendant, YouTube, raised a defense under Section 230(c)(2). The court, however, interpreted the provision narrowly, finding that although videos with inflated view counts may be a problem for YouTube, they are not “otherwise objectionable” within the meaning of Section 230(c)(2)(A).

As we wrote previously, the court concluded that, in light of the CDA’s history and purpose, the phrase “otherwise objectionable” relates to “potentially offensive material, not simply any materials undesirable to a content provider or user.” Further, the requirement that the service provider subjectively finds the blocked or screened material objectionable “does not mean anything or everything YouTube finds subjectively objectionable is within the scope of Section 230(c).” The court did not believe that YouTube’s removal of the video was “the kind of self-regulatory editing and screening that Congress intended to immunize in adopting Section 230(c).” Therefore, the court held that YouTube’s removal of videos with inflated view counts fell outside of the protections offered by Section 230(c)(2).

Looking Ahead

So where do these cases leave us? Unfortunately for website operators, and happily for plaintiffs, there seems to be a trend developing toward reining in the historically broad scope of Section 230 immunity. Of course, Section 230 still provides robust protection in many cases, and we have also seen a few recent victories for defendants asserting Section 230 defenses. Whatever happens, we will continue to monitor and provide updates on Section 230 as we enter the new year.

*          *          *

For other Socially Aware blog posts regarding the CDA Section 230 safe harbor, please see the following: In a Rough Year for CDA Section 230, Manchanda v. Google Provides Comfort to Website Operators; Yelp Case Shows CDA §230 Still Has Teeth; Controversial California Court Decision Significantly Narrows a Crucial Liability Safe Harbor for Website Operators; and Google AdWords Decision Highlights Contours of the CDA Section 230 Safe Harbor.