Song Fi Inc. v. Google, Inc.

District Court, N.D. California
108 F.Supp.3d 876 (2015)
Rule of Law:

The immunity provided to interactive computer services under Section 230(c)(2) of the Communications Decency Act for removing "otherwise objectionable" material is limited to content that is of the same kind as the statute's listed examples, such as obscene or harassing material, and does not extend to content removed for technical violations like artificially inflated view counts.


Facts:

  • Song fi, Inc., N.G.B., and the Rasta Rock Opera (Plaintiffs) produced and uploaded a music video titled 'Luv ya Luv ya Luv ya' to YouTube's platform.
  • In the process of uploading, Plaintiffs agreed to YouTube’s Terms of Service.
  • Over a two-month period, the video's view count reached over 23,000.
  • YouTube removed the video from public view, replacing it with a notice stating, '[t]his video has been removed because its content violated YouTube’s Terms of Service.'
  • YouTube later explained its action was based on its determination that the video's view count had been artificially inflated through automated means, a violation of its Terms of Service.
  • Plaintiffs deny any involvement in inflating the view count.
  • Plaintiffs allege that the video's removal and the accompanying notice caused them economic harm, including the cancellation of a performance sponsored by Nike and the suspension of financial support from a principal funder.

Procedural Posture:

  • Plaintiffs filed suit against YouTube in the United States District Court for the District of Columbia.
  • The court granted YouTube's motion to transfer the case to the United States District Court for the Northern District of California, pursuant to a forum selection clause in YouTube's Terms of Service.
  • In the Northern District of California, YouTube filed a motion to dismiss Plaintiffs' complaint.
  • Plaintiffs filed a motion for partial summary judgment, seeking a ruling that YouTube's notice was libel per se.

Issue:

Does Section 230(c)(2) of the Communications Decency Act immunize an interactive computer service from liability for removing content that it considers to have an artificially inflated view count, on the grounds that such content is 'otherwise objectionable'?


Opinions:

Opinion - Judge Samuel Conti

No. Section 230(c)(2) of the Communications Decency Act does not immunize YouTube because an allegedly artificially inflated view count is not 'otherwise objectionable' within the meaning of the statute. The court's reasoning rests on the canon of statutory construction known as ejusdem generis: the phrase 'otherwise objectionable' must be interpreted in light of the preceding list of terms, 'obscene, lewd, lascivious, filthy, excessively violent, harassing.' These terms all describe offensive content, not technical or business-related violations like view count manipulation. The statute's title, 'Protection for ‘Good Samaritan’ blocking and screening of offensive material,' further supports the conclusion that Congress intended to immunize the removal of offensive, not merely undesirable, content. Adopting YouTube's broad, purely subjective interpretation would grant providers an unbounded power to remove content for anticompetitive or malicious reasons, contrary to the statute's purpose. Although YouTube is not immune under the CDA, Plaintiffs' breach of contract claims fail because YouTube's Terms of Service unambiguously grant it the 'sole discretion' to remove content. The libel claim is also dismissed because the notice is not libel per se; its defamatory meaning depends on extrinsic knowledge of the Terms of Service, making it libel per quod, which requires pleading special damages that Plaintiffs failed to allege.



Analysis:

This decision significantly narrows the scope of the 'Good Samaritan' immunity under CDA § 230(c)(2). By applying the ejusdem generis canon, the court distinguishes between removing content that is inherently offensive (like pornography or harassment) and removing content for other violations of a platform's terms of service (like view count manipulation). Although a district court decision is not binding precedent, the ruling signals that online service providers may not be able to rely on § 230(c)(2) immunity for content moderation decisions grounded in technical or business norms. Future cases involving platform liability will likely need to distinguish whether a removal was based on the offensive nature of the content itself or on other, non-immunized grounds for violating a platform's policies.
