Judge rules Anthropic can legally train AI on copyrighted material

One of the big gray areas in the burgeoning generative AI space is whether training AI models on copyrighted material without the permission of copyright holders violates copyright. This question led a group of authors to sue Anthropic, the company behind the AI chatbot Claude. Now, a US federal judge has ruled that AI training qualifies as “fair use” under US copyright law and is therefore legal, Engadget reports.
Under US law, fair use permits the use of copyrighted material if the result is considered “transformative.” That is, the resulting work must be something new rather than merely derivative of, or a substitute for, the original work. This is one of the first judicial rulings of its kind, and the judgment may serve as precedent for future cases.
However, the judgment also notes that the plaintiff authors can still pursue claims against Anthropic over piracy. According to the ruling, the company illegally downloaded (pirated) over 7 million books without paying for them, and kept them in its internal library even after deciding they wouldn’t be used to train or retrain the AI model going forward.
The judge wrote: “Authors argue Anthropic should have paid for these pirated library copies. This order agrees.”