
Judge rules Anthropic training on books it purchased was “fair use,” but not for the ones it stole

Anthropic still faces litigation for training its models on millions of pirated texts.

When AI companies like OpenAI, Anthropic, and Meta were racing to build and train new large language models, they scrambled to find enough text to train their systems on. Countless web pages, photos, YouTube videos, Disney movies, Reddit threads, and books were slurped up to feed the models billions and billions of tokens.

Resulting litigation initiated by copyright holders has shown that the legality of the process was on the minds of some AI company employees, like researchers at Meta who raised concerns while training its Llama model, only to be told that the use of LibGen, a corpus of pirated texts, was approved by “MZ.”

But yesterday, a court decided a case partially in favor of AI companies, with far-reaching consequences for every company that has pulled copyrighted material into its models.

A federal judge in the Northern District of California has ruled that Anthropic was not violating the copyright of authors of the books it purchased and scanned for training.

A group of authors filed the suit against Anthropic last August, alleging that Anthropic had acknowledged training its Claude AI model using “The Pile,” a mass of text shared online that contained millions of copyrighted works, including some written by the plaintiffs.

Judge William Alsup determined that the process of buying, scanning, and ingesting the text for use in training the Claude model was "exceedingly transformative and was a fair use under Section 107 of the Copyright Act," a key test of the fair use doctrine in intellectual property law.

But what about the "over seven million copies of books" that Anthropic admitted were pirated and that it did not pay for? The judge ruled that was not fair use, and that it warrants its own trial.

Judge Alsup wrote:

“The downloaded pirated copies used to build a central library were not justified by a fair use. Every factor points against fair use. Anthropic employees said copies of works (pirated ones, too) would be retained ‘forever’ for ‘general purpose’ even after Anthropic determined they would never be used for training LLMs. A separate justification was required for each use. None is even offered here except for Anthropic’s pocketbook and convenience.”

The case is the first of its kind to be decided in the US, and it lays out a path for AI companies to legally train their models on copyrighted works, so long as they purchase them. That said, there are still many other cases pending and many factors at play before the industry has clear rules.

But companies that are caught knowingly using pirated, copyrighted works to train AI models may face new legal exposure.

An Anthropic spokesperson told Sherwood News:

“We are pleased that the Court recognized that using ‘works to train LLMs was transformative — spectacularly so.’ Consistent with copyright’s purpose in enabling creativity and fostering scientific progress, ‘Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.’”

More Power


Anthropic sues the US government

In response to the Pentagon’s unprecedented, punitive determination that Anthropic is a national security supply chain risk, the AI startup has sued the US government.


OpenAI is reportedly working with Pentagon to hash out guardrails amid Anthropic standoff over AI safety

OpenAI CEO Sam Altman said the company is working with the Pentagon to negotiate safety guardrails for AI models used on the battlefield, even as one of its top competitors, Anthropic, is at a standoff with the government.

According to a memo obtained by several media outlets, Altman told staff OpenAI believes “that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Anthropic, the company behind the AI chatbot Claude, was one of several firms that received a $200 million contract from the Department of Defense for “agentic workflows.”

Since then, tensions between Anthropic and the Pentagon have reportedly risen as the startup insists on surveillance restrictions. The government’s attack on Venezuela last month that led to the capture of President Nicolás Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Anthropic has until 5:01 p.m. ET on Friday to reach a deal with the Pentagon, which has threatened consequences against the company if it doesn’t allow the government unrestricted use.

Altman’s comments come as the Financial Times reports that workers at Amazon, Google, and Microsoft are pushing executives to support Anthropic in its dispute with the Pentagon and to adopt guardrails similar to the Claude maker’s in any work they undertake with the US military.


Jon Keegan

Report: Anthropic CEO Amodei meeting with Hegseth at the Pentagon as tensions mount

Anthropic CEO Dario Amodei has been summoned to meet with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, according to a report from Axios. Tensions are running high between the Trump administration and Anthropic, as the startup’s surveillance restrictions on the use of its AI are reportedly causing outrage within the Pentagon.

Last month’s attack on Venezuela that led to the capture of Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Per the report, the Pentagon is considering effectively blacklisting Anthropic’s AI from government work if it doesn’t capitulate to the administration’s terms.

Antagonizing the Trump administration could create regulatory hurdles for Anthropic as it races toward an IPO this year. The company recently appointed former Microsoft CFO Chris Liddell, who served as deputy White House chief of staff in the first Trump administration, to its board.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.