Experts express concerns over AI image generators potentially violating copyright laws

Experts warn that AI image generators such as Midjourney may violate copyright law by producing images that closely resemble copyrighted material, raising concerns about exploitation and legal liability.

Image credit: IEEE Spectrum

The use of AI image generators has raised concerns about potential copyright infringement, the New York Times reported. Users testing the AI image generator Midjourney found that it produced images nearly identical to frames from the Joker film, indicating the use of intellectual property without proper licenses.

Lawsuits filed by actors and authors have brought the question of copyright violation by AI systems before the courts. AI companies claim ‘fair use’ protection and say they limit memorization, yet experiments continue to show AI systems producing images that resemble copyrighted material despite the safeguards in place. Experts emphasize the complexity of the issue, noting that challenges remain and more robust solutions are needed.

President and chief executive of the US Copyright Alliance, Keith Kupferschmid, told the New York Times that AI companies can violate copyright in two ways: by training on unlicensed copyrighted material or by reproducing copyrighted content in response to user prompts. While AI companies claim to have established safeguards against copyright infringement, these have proved insufficient to effectively protect copyrighted material.

How can we deal with AI risks?
In the fervent discourse on AI governance, there is an outsized focus on risks from future AI compared with more immediate concerns, such as the protection of intellectual property. In this blog post, Jovan Kurbalija explores how we can deal with AI risks.

Why does it matter?

The widespread production of AI-generated content mirroring copyrighted material poses a risk of diminishing the artistic efforts of human creators, affecting their livelihoods and the cultural value of their work. Humans should remain at the core of AI utilisation and progress, and AI should empower, rather than endanger, humanity. One of the ways to build such a future is bottom-up AI, an approach that ensures we can decide when to contribute our AI patterns to wider organisations, from communities to countries and the whole of humanity.