Generative AI and the Limits of the Volitional Conduct Doctrine
I. Introduction
Recent copyright litigation involving generative AI has brought renewed attention to the volitional conduct doctrine, but from competing directions. In Concord Music v. Anthropic, music publishers argue that Anthropic is liable for certain outputs because it selected materials for training and instigated copying through the design and operation of Claude. [1] Anthropic, in turn, argues that its users—not the platform—are the volitional actors, because any accused output is generated solely in response to user prompts. [2] Both positions draw on established principles of the volitional conduct doctrine, yet neither fits comfortably within the existing framework.
The volitional conduct doctrine provides that a defendant must be sufficiently involved in the act of copying to be liable for direct copyright infringement. The doctrine developed in a technological context where copying was discrete, traceable, and tied to one party’s selection of a specific work for copying. A copy shop is not liable for a customer’s photocopying, and a VCR manufacturer is not liable for a user’s recording of a television program, because in each case the user—not the system—controls the copying. But generative AI systems do not operate that way. Unlike copying machines or VCRs, AI platforms do more than execute user directives. Before any prompt is submitted, the platform has already made upstream design choices: selecting and curating training data, designing model architecture, and configuring the constraints that shape what outputs are possible. At the same time, users also play a role in triggering and shaping outputs, particularly where prompts are iterative or designed to elicit specific results. The result is a system in which control is distributed across multiple actors and across time. Applying a binary user-versus-system framework for volitional conduct does not fully account for that reality.
This framing also sits in tension with how user control is understood in the authorship context. Under existing law, AI-generated outputs may not be copyrightable where human prompting is viewed as insufficient to control the expressive elements of the output. Mechanically applying the volitional conduct doctrine to AI could render the same act—submitting a prompt—insufficient to establish authorship, yet sufficient to establish volitional conduct for liability.
As an institution, the law has always required time to adapt to technological change. The pace of development in today’s AI-dependent commercial landscape, however, is unprecedented. Developing a more nuanced understanding of the volitional conduct doctrine now, one that accounts for the distributed nature of control in generative and agentic AI systems, will be critical to ensuring that copyright law keeps pace with the technology it governs.
II. The Volitional Conduct Doctrine
Under the volitional conduct doctrine, a defendant must be sufficiently involved in the act of copying to be held directly liable for copyright infringement. The doctrine emerged in the early 1990s, in response to case law that threatened to impose liability for automatic, system-level copying. [3]
In Religious Technology Center v. Netcom On-Line Communications Services, Inc., the court held that an internet service provider and online bulletin board operator were not directly liable for creating copies of infringing posts made by a third-party subscriber. [4] Although the defendants’ systems created temporary copies of the asserted works, the court held that this copying was not volitional because it occurred as an automatic consequence of subscriber activity. The system operators were therefore likened to owners of a copying machine made available for public use.
The Second Circuit further developed the volitional conduct doctrine in Cartoon Network LP v. CSC Holdings, Inc., which addressed a remote DVR system that allowed users to record television programs. [5] The court held there were “only two instances of volitional conduct”—Cablevision’s creating and maintaining a system that “exists only to produce a copy,” and the customer’s ordering the system to produce a copy of a specific program. The court held that Cablevision was not liable for its customers’ copying because “selling access to a system that automatically produces copies on demand . . . more closely resembles a store proprietor who charges customers to use a photocopier on his premises.” [6]
More recently, the Ninth Circuit held in VHT, Inc. v. Zillow Group, Inc. that Zillow was not directly liable for photos posted by its users on its real estate marketing website. The court held that users—not Zillow—selected and uploaded every photo, and that any control that Zillow had was limited to the “general operation” of its website. [7]
What these cases establish is a relatively simple framework: where a system automatically executes a user’s directive to copy a protected work selected by the user, the system operator does not engage in the volitional conduct necessary for direct liability.
III. The Gaps Between Doctrine and AI Technology
The technologies at the center of the volitional conduct cases share a defining feature: they reproduce content selected by the user at the user’s direction. Generative AI systems differ meaningfully in both structure and function. They do not simply retrieve or reproduce user-selected works; instead, they generate outputs based on statistical relationships learned during training, and produce novel output—literary and audiovisual works, functional code, corporate documents—that may or may not contain content prompted by the user. By the time a user enters a prompt, the system has already been shaped by upstream design choices made months, if not years, earlier that determine what kinds of outputs are possible and how likely they are to occur.
The result is that functions traditionally associated with volitional conduct—control, selection, and instigation—are distributed across different actors at different times and stages of the system. The user triggers the output through prompting, but the platform defines the conditions under which that output can be produced. Neither role maps cleanly onto the traditional concept of a single actor making a discrete decision to copy.
The Second Circuit’s decision in Cartoon Network illustrates the importance of this nuance. There, the court emphasized that liability turns in part on the degree of control over the specific content being selected and copied. It held that in the VOD context, Cablevision had a certain degree of control where it “actively selects and makes available beforehand” certain programs for viewing. In contrast, Cablevision exercised “far less” control in the DVR context because it did not dictate which television channels would be available to the user or when any specific program would air on a given channel. [8]
Generative AI systems blur that line. AI companies actively select and weigh the data they use for training AI models. They also shape and constrain outputs through guardrails and parameters designed into the model for a range of purposes. For example, Anthropic states that it has “implemented and improved guardrails and other techniques” to reduce the likelihood of reproducing copyrighted materials. [9] While these measures are often invoked to rebut secondary liability theories, they also reflect active platform-level involvement in shaping what outputs can and cannot be generated.
IV. The Asymmetry in Control: Authorship v. Liability
Adopting the user-versus-system binary in AI copyright litigation raises a related issue when viewed through the lens of copyright ownership: if AI-generated content cannot be sufficiently controlled by a human to qualify for authorship, how can that level of control be sufficient to establish volitional conduct for liability?
Human authorship is a prerequisite for obtaining copyright protection. Under the current framework, merely prompting an AI model, even hundreds of times, may not satisfy the requirement of human authorship. As the Copyright Office has explained, this is because AI models (or at least the ones at the center of recent disputes) do not treat prompts as direct instructions, and the resulting output is shaped not just by the user’s prompt, but also by the model’s internal processing.
If courts ultimately conclude that prompting does not constitute sufficient control for purposes of authorship, they will need to grapple with whether prompting can nevertheless establish control for purposes of direct copyright liability. These doctrines need not be perfectly symmetrical; authorship and liability serve different legal purposes and are governed by separate bodies of law. But the potential gap between them has real and potentially far-reaching consequences in an increasingly AI-dependent commercial environment. If both positions were adopted, individual users, whose conduct may also be imputed to their employers, would face the worst of both outcomes: unable to claim ownership over content created through AI prompting, yet exposed to direct infringement liability for that same content. That result would shift the focus away from the platforms whose design and training decisions shape the system’s outputs and toward downstream users who have limited visibility into how those outputs are generated.
V. Prompting and Degrees of User Control
The role of user prompting in the volitional conduct analysis also warrants closer attention. In many cases, the outputs at issue are generated in response to prompts submitted during the course of litigation, often designed to test the boundaries of the system. For example, Anthropic has characterized prompts submitted by plaintiffs’ investigators as attempts to “jailbreak” its guardrails. [10] Perplexity has likewise characterized plaintiffs’ queries as “highly atypical, litigation-driven ‘user’ behavior.” [11] The use of targeted prompts for litigation highlights an important point: not all prompts reflect the same degree of user control.
A single, open-ended query differs meaningfully from a sequence of iterative prompts designed to elicit a particular output. The former may reflect only general guidance, leaving substantial aspects of the response to the system’s generation process. But the latter may involve refinement and iterative phrasing that more closely resembles selection of materials and instigation of copying. This distinction matters for the volitional conduct doctrine, which imposes liability on the actor that selected and instigated the copying of a particular work. In the AI context, however, that inquiry cannot be reduced to the mere fact that a prompt was submitted. It must also consider the nature of the prompt itself and the degree to which it directs the system toward a specific result, as opposed to leaving the substance of the output to the system itself.
Ultimately, even highly directed prompting occurs within constraints established by the platform. The system’s training, architecture, and guardrails determine what outputs are possible and how readily they can be produced. Both user prompting and platform design contribute to the ultimate output, reinforcing the need for a more nuanced, context-specific analysis rather than a categorical assignment of liability.
VI. The Future of the Volitional Conduct Doctrine
The positions advanced in current AI copyright litigation reflect the limits of categorically applying existing doctrine to generative systems. Plaintiffs emphasize the role of platform-level decisions (training data selection, model architecture, and output constraints) as evidence that AI companies exercise meaningful control over allegedly infringing outputs. Defendants, by contrast, argue that users alone instigate any copying through their prompts. Each position draws on established principles of copyright law but captures only part of the causal chain that produces AI-generated output.
The volitional conduct doctrine developed in a technological environment where copying was discrete, traceable, and attributable to a single actor. Generative AI operates differently. Control is distributed across training, design, deployment, and prompting, making it difficult to identify a single “volitional actor” without oversimplifying how these systems actually function.
This does not mean the doctrine is obsolete, nor does it require abandoning traditional principles of direct liability. But these gaps illustrate the risk of mechanically applying a binary user-versus-system framework to generative AI systems, or to other use cases that present similar nuance. The question facing courts is not simply who caused the copy, but how responsibility should be allocated when no single participant fully determines the result.
As generative and agentic AI systems become more deeply integrated into the commercial ecosystem, the answers to those questions will shape not only the future of copyright litigation, but also how legal and operational risk is allocated across the businesses that develop, deploy, and rely on these technologies.
Chieh Tung is a litigator representing companies in copyright, trademark, and business disputes. She writes about developments at the intersection of AI and intellectual property.
***
[1] Concord Music Group, Inc. et al. v. Anthropic PBC, No. 5:24-cv-03811-EKL-SVK (N.D. Cal.), Dkt. 594 (Plaintiffs' Notice of Motion and Motion for Partial Summary Judgment) at 18.
[2] Id., Dkt. 693 (Anthropic's Motion for Summary Judgment and Opposition to Plaintiffs' Partial Motion for Summary Judgment) at 24-29.
[3] MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511, 518 (9th Cir. 1993).
[4] Religious Tech. Ctr. v. Netcom On-Line Commc'n Servs., Inc., 907 F. Supp. 1361, 1369 (N.D. Cal. 1995).
[5] Cartoon Network LP, LLLP v. CSC Holdings, Inc., 536 F.3d 121 (2d Cir. 2008).
[6] Id. at 132.
[7] VHT, Inc. v. Zillow Grp., Inc., 918 F.3d 723, 733 (9th Cir. 2019).
[8] Supra note 5, at 132.
[9] Supra note 2, at 31.
[10] Supra note 2, at 26.
[11] The New York Times Company et al. v. Perplexity AI, Inc., No. 1:25-cv-10106-LAP (S.D.N.Y.), Dkt. 58 (Consolidated Memorandum of Law in Support of Defendants' Motion to Dismiss Plaintiffs' First Amended Complaints) at 12.