As artificial intelligence (AI) enters the creative media sector – particularly art and design – the definition of intellectual property (IP) is being tested, and it is becoming increasingly difficult to determine, in real time, what constitutes plagiarism.
In the past year, AI-powered art platforms have pushed the boundaries of IP rights by training on extensive data sets, often without the express permission of the artists who created the original works.
For example, platforms like OpenAI’s DALL-E and MidJourney’s service offer subscription models, indirectly monetizing the copyrighted content that makes up their training data sets.
In this regard, an important question has emerged: Do these platforms operate within the parameters established by the "fair use" doctrine, which permits the use of copyrighted works for purposes such as criticism, comment, news reporting, teaching and research?
Recently, Getty Images, a major supplier of stock photos, filed lawsuits against Stability AI in both the United States and the United Kingdom. Getty has accused Stability AI's image-generating program, Stable Diffusion, of violating copyright and trademark laws by using images from its catalog without authorization, particularly those bearing its watermark.
However, the plaintiffs will have to present more extensive evidence to support their claims, which may prove challenging since Stable Diffusion's AI has been trained on a massive cache of more than 12 billion compressed images.
In another related matter, artists Sarah Andersen, Kelly McKernan and Karla Ortiz took legal action in January against Stability AI, Midjourney and the online art community DeviantArt, accusing the organizations of violating the rights of "millions of artists" by training their AI tools on five billion images scraped from the web "without the consent of the original artists."
AI-poisoning software
Responding to complaints from artists whose works were plagiarized by AI, researchers at the University of Chicago recently released a tool called Nightshade, which enables artists to embed imperceptible changes into their artwork.
These modifications, while invisible to the human eye, "poison" AI training data: subtle pixel changes can disrupt the learning process of AI models, leading to mislabeling and misidentification.
Even a handful of these images can corrupt an AI's learning process. For example, a recent experiment showed that a few dozen poisoned images were enough to noticeably degrade Stable Diffusion's output.
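To make the idea of imperceptible perturbation concrete, the sketch below adds bounded random noise to an image's pixels. This is a simplified illustration only, not Nightshade's actual algorithm (which optimizes perturbations against specific model feature extractors); the function name, epsilon budget and sample image are all invented for this example.

```python
import numpy as np

def poison_image(pixels: np.ndarray, epsilon: int = 8, seed: int = 0) -> np.ndarray:
    """Perturb each pixel by at most `epsilon` intensity levels (out of 255).

    A hypothetical stand-in for data-poisoning tools: the change is too
    small for a human to notice, yet it alters the pixel statistics a
    model would train on. Real tools craft targeted, not random, noise.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=pixels.shape)
    # Clip so values stay in the valid 0-255 range for an 8-bit image.
    return np.clip(pixels.astype(int) + noise, 0, 255).astype(np.uint8)

# A flat gray 64x64 RGB "artwork" as a stand-in for a real image.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = poison_image(art)

# The perturbation never exceeds the imperceptibility budget.
max_change = int(np.abs(poisoned.astype(int) - art.astype(int)).max())
```

The key design point is the clipping step: poisoning is only useful if the modified file remains a valid, normal-looking image that artists can publish as usual.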
The University of Chicago team previously developed a tool called Glaze, which aimed to mask an artist's style from AI detection. Their new offering, Nightshade, is designed to integrate with Glaze, further expanding its capabilities.
In a recent interview, Ben Zhao, lead developer of Nightshade, said tools like his will help push companies toward more ethical practices. "I think right now there's very little incentive for companies to change the way they do things – which is to say, 'Everything under the sun is ours, and you can't do anything about it.' I think we're giving them a little more incentive on the moral front, and we'll see if that actually happens," he said.
Despite Nightshade’s ability to safeguard future artwork, Zhao said the platform cannot undo the impact on art already processed by older AI models. Additionally, there are concerns about the potential misuse of the software for malicious purposes such as corrupting digital image generators on a large scale.
However, Zhao is confident that this latter use case would be challenging, as it requires thousands of poisoned samples.
While independent artist Autumn Beverly believes tools like Nightshade and Glaze have empowered her to once again share her work online without fear of misuse, Marian Mazzone, an expert associated with the Art and Artificial Intelligence Laboratory at Rutgers University, believes such tools cannot provide a permanent solution, suggesting that artists should instead pursue legal reforms to address ongoing issues related to AI-generated imagery.
Asif Kamal, CEO of Artfy, a Web3 solution for fine art investing, told Cointelegraph that creators using AI data-poisoning tools are challenging traditional notions of ownership and authorship, prompting a re-evaluation of copyright and creative control:
“The use of data-poisoning tools is raising legal and ethical questions about training AI on publicly available digital artwork. People are debating issues like copyright, fair use, and respecting the rights of original creators. That said, AI companies are now working on various strategies to address the impact of data-poisoning tools like Nightshade and Glaze on their machine-learning models. This includes improving their security, enhancing data validation, and developing more robust algorithms to identify and mitigate pixel-poisoning strategies.”
Yubo Ruan, founder of ParaX, a Web3 platform powered by account abstraction and zero-knowledge virtual machines, told Cointelegraph that as artists continue to adopt AI-poisoning tools, what constitutes digital art, and how its ownership and originality are determined, will need to be reimagined.
“We need to reevaluate today’s intellectual property framework to accommodate the complexities introduced by these technologies. The use of data-poisoning tools is highlighting legal concerns about consent and copyright infringement, as well as ethical issues related to using public artwork without appropriate compensation or recognition for its original owners,” he said.
Stretching IP laws to their limits
Beyond the realm of digital art, the impact of generative AI is also being felt in other domains, including academic and video-based content. In July, comedian Sarah Silverman, along with writers Christopher Golden and Richard Kadrey, took legal action against OpenAI and Meta in a US district court, accusing the tech giants of copyright infringement.
The lawsuits claim that both OpenAI’s ChatGPT and Meta’s Llama were trained on data sets obtained from illegal “shadow library” sites that allegedly contained the plaintiffs’ copyrighted works. They point to specific instances where ChatGPT summarized the plaintiffs’ books without including copyright-management information, citing Silverman’s The Bedwetter, Golden’s Ararat and Kadrey’s Sandman Slim as prime examples.
Separately, the lawsuit against Meta claims that the company’s Llama model was trained using a data set of equally dubious origin, specifically citing EleutherAI’s The Pile, which allegedly contains content sourced from the private tracker Bibliotik.
The authors stress that they never consented to their works being used in this manner and are therefore seeking damages and compensation.
As we move toward a future driven by AI technology, many companies are grappling with the scale of the challenges and opportunities presented by this growing paradigm.
While companies like Adobe have started using a mark to flag AI-generated content, companies like Google and Microsoft have said they are ready to face any legal consequences should customers be sued for copyright infringement while using their generative AI products.