Beyond Infringement: The Rise of DMCA Claims in AI Litigation

Much of the conversation surrounding the recent wave of generative AI litigation has focused on copyright infringement and the fair use doctrine.  Less attention has been paid to the growing number of claims asserted under the Digital Millennium Copyright Act (DMCA), which may carry significant implications for AI companies, content platforms, and content creators.

Unlike traditional copyright law, which centers on copying, the DMCA targets the circumvention of technical access controls (§ 1201) and the removal or alteration of copyright management information (§ 1202). It also provides a separate basis for liability, including statutory damages assessed per violation rather than per copyrighted work. Given the automated and large-scale nature of AI training, a successful DMCA claim could materially broaden the exposure faced by generative AI companies.

Recent decisions offer insight into how courts are applying these provisions to AI training practices and where disagreements are beginning to emerge.

Section 1201 – Circumventing Technical Measures Controlling Access

Section 1201 prohibits circumvention of a “technological measure that effectively controls access” to a copyrighted work. Historically, that provision has been applied to acts that bypassed classic digital rights management systems, such as encryption and authentication protocols. [1] Now, however, courts are examining whether the large-scale data scraping and ingestion pipelines used to train AI models constitute “circumvention” of platform-level controls over publicly available content.

In TED Entertainment, Inc. v. NVIDIA Corp., content creators allege that YouTube’s terms of service and related technical measures prevent downloading of publicly viewable videos, and that NVIDIA’s alleged mass scraping and downloading of those videos for training AI violates § 1201. [2] Likewise, in Reddit, Inc. v. SerpApi LLC et al., Reddit alleges that Perplexity AI and related scraping service providers circumvented Reddit’s anti-scraping tools and systems by accessing and using large volumes of publicly available Reddit data to train AI systems. As of the date of this article, both cases remain at the pleading stage. [3]

Although not an AI case, a recent decision in Cordova v. Huneault underscores the tension between widely used platform gating features and the potential for anti-circumvention liability where those features are bypassed.  There, a YouTuber alleged that several content creators used unidentified software and ripping tools to bypass YouTube’s rolling cipher technology to download his videos. On January 23, 2026, the court allowed the § 1201 claim to proceed past a motion to dismiss, holding that the plaintiff was not required to identify the specific tool used to bypass YouTube’s technical measures and that the public availability of YouTube videos was “immaterial.” [4]

As these lawsuits reflect, technical measures that regulate automated access (e.g., rate limits, request validation systems, and bot-detection tools) are standard features of modern content platforms. Whether such mechanisms qualify as “technological measures that effectively control access” under § 1201 is a central question. If courts treat these platform-level controls as qualifying access controls, the inquiry shifts from whether content is publicly viewable to whether the manner of access complied with platform-imposed technical restrictions, which could have significant implications for AI companies that rely on automated data ingestion at scale.

Section 1202 – Removal or Alteration of Copyright Management Information

Section 1202 prohibits the intentional removal or alteration of copyright management information (CMI), which often includes title, authorship, and copyright ownership information. Plaintiffs in recent generative AI litigation have alleged that AI companies violated § 1202 by removing CMI from the plaintiffs’ works, both in processing those works for training and in reproducing those works in user-generated outputs.

Courts have disagreed on whether an allegation that CMI was intentionally removed is, by itself, a concrete injury sufficient to establish standing to sue. In Raw Story Media, Inc. v. OpenAI, Inc., a federal court in the Southern District of New York dismissed the plaintiff’s § 1202 claims for lack of standing, concluding that removal of CMI, absent any allegation of dissemination, was not a concrete and particularized injury. [5] Two of its sister courts disagreed. In The Intercept Media, Inc. v. OpenAI, Inc. and The New York Times Co. v. Microsoft Corp., both courts held that the removal of CMI was an interference with property rights even without publication to a third party. [6]

Courts have also rejected various theories of liability under § 1202. In Andersen v. Stability AI Ltd., the court held that because there were no allegations that the AI outputs were identical to the plaintiffs’ works, there could be no “removal” of CMI from those works under the DMCA. [7] And in Kadrey v. Meta Platforms, the court granted summary judgment on a § 1202 claim after holding that Meta’s use of the plaintiffs’ works for LLM training was fair use. The court reasoned that because § 1202 requires a defendant to know that its conduct is facilitating infringement, the absence of infringement proved dispositive under the applicable facts. [8]

Taken together, these decisions reveal several potential constraints on § 1202 claims in the AI context. LLMs are generally not designed to reproduce training works verbatim. As a result, some plaintiffs may be unable to allege dissemination of their works sufficient to establish standing in certain jurisdictions. It also means that, where plaintiffs do not or cannot allege AI-generated output that is identical to their works, courts may not find the absence of CMI in AI-generated content to be an act of “removal” in violation of § 1202. And finally, while § 1202 does not categorically require proof of infringement, where a court determines that LLM training is fair use, plaintiffs may face difficulty establishing the statute’s knowledge requirement.

What Comes Next?

The recent § 1201 and § 1202 decisions suggest that DMCA claims in AI litigation may turn less on traditional notions of copying and more on technical architecture, platform design, and data ingestion practices. Courts are now being asked to apply a statute enacted in 1998 to a hyper-digital world involving platforms hosting vast quantities of data and automated systems operating at unprecedented scale and pace.  In this setting, fundamental questions remain: what qualifies as a “technological measure,” when does automated ingestion at scale amount to unlawful circumvention, and how should courts reconcile highly technical AI processes with established concepts of property and access rights? How courts resolve these issues may shape not only AI litigation strategy, but also the evolving relationship among content creators, content platforms, generative AI companies, and copyright law itself.

 ***

[1] See, e.g., Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001); MDY Indus., LLC v. Blizzard Ent., Inc., 629 F.3d 928 (9th Cir. 2010), opinion amended and superseded on denial of reh’g, No. 09-15932, 2011 WL 538748 (9th Cir. Feb. 17, 2011).

[2] Ted Entertainment, Inc. v. NVIDIA Corp., No. 5:25-cv-10287-EJD (N.D. Cal.), Dkt. 1.

[3] Reddit, Inc. v. SerpApi LLC, No. 1:25-cv-08736-PAE (S.D.N.Y.), Dkt. 55.

[4] Cordova v. Huneault, No. 25-CV-04685-VKD, 2026 WL 184598 (N.D. Cal. Jan. 23, 2026).

[5] Raw Story Media, Inc. v. OpenAI, Inc., 756 F. Supp. 3d 1 (S.D.N.Y. 2024), reconsideration denied sub nom. In re OpenAI, Inc., Copyright Infringement Litig., No. 24-CV-01514, 2025 WL 1707564 (S.D.N.Y. June 18, 2025).

[6] New York Times Co. v. Microsoft Corp., 777 F. Supp. 3d 283 (S.D.N.Y. 2025); The Intercept Media, Inc. v. OpenAI, Inc., 767 F. Supp. 3d 18 (S.D.N.Y. 2025).

[7] Andersen v. Stability AI Ltd., 744 F. Supp. 3d 956 (N.D. Cal. 2024).

[8] Kadrey v. Meta Platforms, Inc., No. 23-CV-03417-VC, 2025 WL 1786418 (N.D. Cal. June 27, 2025).
