Anthropic reaches $1.5 billion settlement with authors in copyright case

Anthropic has agreed to a $1.5 billion settlement with authors in a landmark copyright case, marking one of the first and largest legal payouts of the AI era.
The AI startup agreed to pay authors about $3,000 per book for roughly 500,000 works, after being accused of downloading millions of pirated texts from shadow libraries to train its large language model, Claude. As part of the agreement, Anthropic will also destroy the data it was accused of acquiring illegally.
The fast-growing AI startup announced earlier this week that it had raised an additional $13 billion in new venture capital funding in a deal valuing the company at $183 billion. It also said it was on track to generate at least $5 billion in revenue over the next 12 months. The settlement amounts to nearly a third of that figure, or more than a tenth of the new funding Anthropic has just received.
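The ratios above follow directly from the reported figures; a quick back-of-the-envelope check (all numbers taken from the article's reporting, none independently verified):

```python
# Settlement arithmetic, using the figures reported in the article.
per_book = 3_000             # reported payment per book, in USD
works = 500_000              # approximate number of works covered
settlement = per_book * works
print(settlement)            # 1,500,000,000 -> the $1.5 billion total

projected_revenue = 5_000_000_000    # at least $5B over the next 12 months
new_funding = 13_000_000_000         # the latest venture round

print(settlement / projected_revenue)  # 0.30 -> "nearly a third"
print(settlement / new_funding)        # ~0.115 -> "more than a tenth"
```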
Although the settlement does not establish a legal precedent, experts said it would likely serve as an anchor figure for how much other large AI companies may have to pay if they hope to settle similar copyright infringement claims. For example, a number of authors are suing Meta for using their books without authorization. As part of that suit, Meta was forced to disclose internal company emails suggesting it knowingly used a library of pirated books called LibGen, one of the same libraries Anthropic used. OpenAI and its partner Microsoft also face a number of copyright infringement cases, including one filed by the Authors Guild.
Aparna Sridhar, deputy general counsel at Anthropic, told Fortune in a statement: “In June, the District Court issued a landmark ruling on AI development and copyright law, finding that Anthropic’s approach to training AI models constitutes fair use. Today’s settlement, if approved, will resolve the plaintiffs’ remaining claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
A lawyer for the authors who sued Anthropic said the settlement would have wide-ranging impacts.
“This landmark settlement far surpasses any other known copyright recovery. It is the first of its kind in the AI era. It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” Justin Nelson, a partner at Susman Godfrey LLP and co-lead plaintiffs’ counsel in Bartz et al. v. Anthropic PBC, said in a press release. “This settlement sends a powerful message to AI companies and to creators that taking copyrighted works from these pirate websites is wrong.”
The case, which had originally been set for trial in December, could have exposed Anthropic to damages of as much as $1 trillion if the court found the company had willfully violated copyright law. Santa Clara University law professor Ed Lee said that if Anthropic lost at trial, it faced “at least the potential for business-ending liability.” Anthropic essentially echoed Lee’s conclusion, writing in a court filing that it felt “excessive pressure” to settle the case given the size of the potential damages.
The danger Anthropic faced stemmed from the means it had used to obtain the copyrighted books, rather than from the fact that it had used books to train AI without the explicit authorization of rights holders. In June, U.S. District Court Judge William Alsup ruled that using copyrighted books to create an AI model constituted “fair use” for which no specific license was required.
But Alsup then focused on the allegation that Anthropic had used digital libraries of pirated books for at least some of the data it fed its AI models, rather than buying copies of the books. In a ruling allowing the case to go to trial, the judge suggested he was inclined to view this as copyright infringement, regardless of what Anthropic had done with the pirated libraries.
By settling the case, Anthropic has sidestepped an existential risk to its business. Still, the settlement is considerably larger than some legal experts had predicted. A motion now seeks preliminary approval of what is claimed to be “the largest publicly reported copyright recovery in history.”
James Grimmelmann, a professor of law at Cornell Law School and Cornell Tech, described it as a “modest settlement.”
“It is not trying to resolve all of the copyright issues around generative AI. Instead, it focuses on the one thing Judge Alsup thought was clearly unjustified that Anthropic did: downloading books from shadow libraries rather than buying copies and scanning them,” he told Fortune.
He said the settlement helps establish that AI companies must acquire their training data legitimately, but it does not answer other copyright questions facing AI companies, such as what they must do to prevent their generative models from producing output that infringes copyright. In several cases still pending against AI companies, including one The New York Times brought against OpenAI and one that the Warner Bros. film studio filed this week against Midjourney, a company whose AI can generate images and videos, copyright holders allege that AI models have produced output identical or substantially similar to copyrighted works.
“The recent Warner Bros. suit against Midjourney, for example, focuses on how Midjourney can be used to produce DC superheroes and other copyrighted characters,” said Grimmelmann.
While legal experts say the amount is manageable for a company of Anthropic’s size, Luke McDonagh, an associate professor of law at LSE, said the case could have a downstream impact on smaller AI companies if it establishes a commercial precedent for similar claims.
“The figure of $1.5 billion, as the total amount of the settlement, indicates the kind of level at which some of the other AI copyright cases could be resolved,” he told Fortune. “This kind of sum ($3,000 per work) is manageable for a company as highly valued as Anthropic and for the other large AI companies. It may be less so for smaller companies.”
A commercial precedent for other AI companies
Cecilia Ziniti, a lawyer and founder of the AI legal startup GC AI, said the settlement was a “Napster to iTunes” moment for AI.
“This settlement marks the beginning of a necessary evolution toward a legitimate licensing system for AI training data,” she said. She added that the settlement could mark the “beginning of a more mature and sustainable ecosystem where creators are paid, much like the way the music industry adapted to digital distribution.”
Ziniti also noted that the size of the settlement could push the rest of the industry to get more serious about licensing copyrighted works.
“The argument that it is too difficult to track and pay for training data is a red herring, because we have enough transactions at this stage to show that it can be done,” she said, pointing to the deals that publications including Axel Springer and Vox have struck with OpenAI. “This settlement will push other AI companies to the negotiating table and accelerate the creation of a real market for data, probably involving API authentication and revenue-sharing models.”