What the New York Times lawsuit could mean for ChatGPT

February 04, 2024

On December 27, 2023, the New York Times Company filed a lawsuit against OpenAI alleging that the company committed willful copyright infringement through its generative AI tool ChatGPT. The Times claimed both that ChatGPT was unlawfully trained on vast amounts of text from its articles and that ChatGPT’s output contained language directly taken from its articles.

To remedy this, the Times asked for more than just money: It asked a federal court to order the “destruction” of ChatGPT.

If granted, this request would force OpenAI to delete its trained large language models, such as GPT-4, as well as its training data, which would prevent the company from rebuilding its technology.

This prospect is alarming to the 100 million people who use ChatGPT every week. And it raises two questions that interest me as a law professor. First, can a federal court actually order the destruction of ChatGPT? And second, if it can, will it?

Destruction in the court

The answer to the first question is yes. Under copyright law, courts do have the power to issue destruction orders.

To understand why, consider vinyl records. Their resurging popularity has attracted counterfeiters who sell pirated records.

If a record label sues a counterfeiter for copyright infringement and wins, what happens to the counterfeiter’s inventory? What happens to the master and stamper disks used to mass-produce the counterfeits, and the machinery used to create those disks in the first place?

To address these questions, copyright law grants courts the power to destroy infringing goods and the equipment used to create them. From the law’s perspective, there’s no legal use for a pirated vinyl record. There’s also no legitimate reason for a counterfeiter to keep a pirated master disk. Letting them keep these items would only enable more lawbreaking.

So in some cases, destruction is the only logical legal solution. And if a court decides ChatGPT is like an infringing good or pirating equipment, it could order that it be destroyed. In its complaint, the Times offered arguments that ChatGPT fits both analogies.

Video: NBC News reports on The New York Times’ lawsuit (https://www.youtube.com/embed/kUUievwKEaM).

Copyright law has never been used to destroy AI models, but OpenAI shouldn’t take solace in this fact. The law has been increasingly open to the idea of targeting AI.

Consider the Federal Trade Commission’s recent use of algorithmic disgorgement as an example. The FTC has forced companies, such as WeightWatchers, to delete not only unlawfully collected data but also the algorithms and AI models trained on that data.

Why ChatGPT will likely live another day

It seems to be only a matter of time before copyright law is used to order the destruction of AI models and datasets. But I don’t think that’s going to happen in this case. Instead, I see three more likely outcomes.

The first and most straightforward is that the two parties could settle. If they reach a settlement, which seems plausible, the lawsuit would be dismissed and no destruction would be ordered.

The second is that the court might side with OpenAI, agreeing that ChatGPT is protected by the copyright doctrine of “fair use.” If OpenAI can argue that ChatGPT is transformative and that its service does not provide a substitute for the New York Times’ content, it just might win.

The third possibility is that OpenAI loses but the law saves ChatGPT anyway. Courts can order destruction only if two requirements are met: First, destruction must not prevent lawful activities, and second, it must be “the only remedy” that could prevent infringement.

That means OpenAI could save ChatGPT by proving either that ChatGPT has legitimate, noninfringing uses or that destroying it isn’t necessary to prevent further copyright violations.

Both of these showings seem possible, but for the sake of argument, imagine that the first requirement for destruction is met: The court concludes that, because the Times’ articles are part of ChatGPT’s training data, every use of the tool infringes on the Times’ copyrights, an argument put forth in various other lawsuits against generative AI companies.

In this scenario, the court would issue an injunction ordering OpenAI to stop infringing on copyrights. Would OpenAI violate this order? Probably not. A single counterfeiter in a shady warehouse might try to get away with that, but that’s less likely with a $100 billion company.

Instead, it might try to retrain its AI models without using articles from the Times, or it might develop other software guardrails to prevent further problems. With these possibilities in mind, OpenAI would likely succeed on the second requirement, and the court wouldn’t order the destruction of ChatGPT.

Given all of these hurdles, I think it’s extremely unlikely that any court would order OpenAI to destroy ChatGPT and its training data. But developers should know that courts do have the power to destroy unlawful AI, and they seem increasingly willing to use it.


João Marinotti is an associate professor of law at Indiana University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
