This article expresses the views of its author(s), separate from those of this publication. Readers are encouraged to comment or submit a Letter to the Editor to share their opinions. To submit a Letter to the Editor, follow the instructions here.
This March, a federal appeals court ruled against a man seeking copyright protection for AI-generated artwork, and court cases like his point to a larger problem for AI companies regarding copyright law.
This particular legal battle joins a litany of similar cases attempting to clarify copyright protections in the face of growing AI usage. Copyrighted material being used as training data and AI output that allegedly borders on copyright infringement create confusion about the legal implications of AI chatbots. Additionally, recent developments, such as ChatGPT’s new image-generation capabilities, make AI’s copyright implications more relevant than ever.
In the appeals court case mentioned above, the U.S. Copyright Office had denied the copyright protection requested for a piece of artwork created entirely by AI. It did so on the grounds that there was no human involvement in the art’s creation; the office has previously granted copyright to other AI-generated works that involved substantial human contribution.
The appeals court backed the U.S. Copyright Office’s decision.
On the other end of the spectrum, the AI startup Anthropic claimed some ground for AI on copyright in an even more recent court decision.
In a case heard in California, Universal Music Group and other record labels sought an injunction to stop Anthropic from using their copyrighted song lyrics to train the AI model Claude.
The record labels pointed out that Claude would generate responses that closely resembled or blatantly copied their copyrighted works. After hearing both sides, the court found that the labels had failed to prove damages from Anthropic’s use of their intellectual property.
Both the man seeking copyright protection for his AI artwork and the record labels plan to appeal.
Fair Use?
When criticized for training their models on copyrighted material, AI companies cry fair use. Or, more recently, they petition the government to let them use materials they do not own on national security grounds, appealing to fears of Chinese tech supremacy.
Some academics and researchers back this assertion of fair use, fearing that copyright restrictions on an AI model’s training data would hamper ongoing research.
Those who believe that AI developers’ use of copyrighted material for training qualifies as fair use rely on previous court decisions to predict that the courts will agree with them now. One such case is Authors Guild, Inc. v. Google, Inc., in which an appeals court held that Google Books’ digitization of copyrighted books was protected by fair use.
However, technology has advanced rapidly since that ruling, and ethically ambiguous actions by tech companies complicate the claim to fair use. For instance, Meta allegedly used pirated books to train its generative AI models.
AI developers’ fair use arguments only hold so much weight. The lawsuits facing companies working on AI technology illustrate the shaky legal ground that these companies navigate when it comes to copyright.
___
For more information or news tips, or if you see an error in this story or have any compliments or concerns, contact editor@unfspinnaker.com.