Amid the widespread harvesting of creative work by some artificial intelligence developers, a group of artists, working with academic researchers, is deliberately altering their creations to make them unusable for model training.
Paloma McLain, an American artist based in Houston, discovered that several generative AI programs could produce images in her style, even though she had never consented to that use and earned nothing from it. "I'm not a famous artist, but the idea of my work being used to train AI models bothered me," she said.
To protect her work, McLain turned to Glaze, a program that adds imperceptible changes to the pixels of an image in order to confuse artificial intelligence models trained on it.
Images generated from the treated works come out distorted and unusable.
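For readers curious about the mechanics, the sketch below illustrates the general idea of "cloaking" an image before publishing it. It is not Glaze's algorithm: Glaze optimizes the perturbation adversarially against a style encoder, whereas the random, tightly bounded noise here is only a placeholder for that step, and the file names are hypothetical.

```python
# Minimal sketch of image "cloaking": add a small, visually imperceptible
# perturbation to every pixel before publishing. A real tool like Glaze
# computes this perturbation adversarially against a model's feature
# extractor; the bounded random noise below only stands in for that step.
import numpy as np
from PIL import Image

EPSILON = 4  # maximum per-pixel change (out of 255); small enough to be invisible

def cloak_image(in_path: str, out_path: str) -> None:
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)

    # Placeholder perturbation: bounded random noise. In practice this tensor
    # would be optimized so a model "sees" a different artistic style while
    # the change stays below EPSILON for human viewers.
    delta = np.random.randint(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)

    cloaked = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path)

cloak_image("artwork.png", "artwork_cloaked.png")  # hypothetical file names
```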
Ben Zhao, the University of Chicago researcher who led the development of Glaze, said: "We want to give creators technological tools to protect themselves from the misuse of generative AI models."
Professor Zhao's team built Glaze in just four months, drawing on earlier work aimed at defeating facial recognition technology.
"We worked swiftly because we knew how serious the problem was," Zhao said. "A lot of people were being hurt."
Major generative AI companies have struck deals to license certain content, but most of the data, images, text, and audio used to build these models has been used without explicit consent.
Since its launch, Glaze has been downloaded more than 1.6 million times, according to the researcher, whose team is preparing to release a new program called Nightshade.
Nightshade goes further: it targets the prompts, the plain-language requests users type into generative AI models to obtain new images, and corrupts what the model learns from a protected work, for instance leading it to produce a picture of a cat when a dog is requested.
In a parallel initiative, the company Spawning has developed Kudurru, a program that detects attempts to harvest images in bulk from online platforms.
Artists can then block access to their works or serve an image other than the one requested, effectively "poisoning" the AI model under development and undermining its reliability, according to Jordan Meyer, co-founder of Spawning.
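As an illustration of the "decoy" idea described above, the hypothetical sketch below serves a different image to clients that look like bulk scrapers. It is not Spawning's implementation; the rate threshold, port, and file names are all made up for the example.

```python
# Hypothetical illustration of serving a decoy image to suspected scrapers,
# so that harvested datasets end up polluted. Not Kudurru's actual code.
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

REQUEST_LOG = defaultdict(list)   # client IP -> timestamps of recent requests
SCRAPER_THRESHOLD = 30            # more than 30 requests per minute looks automated

def looks_like_scraper(ip: str) -> bool:
    # Keep only requests from the last minute, then check the rate.
    now = time.time()
    REQUEST_LOG[ip] = [t for t in REQUEST_LOG[ip] if now - t < 60]
    REQUEST_LOG[ip].append(now)
    return len(REQUEST_LOG[ip]) > SCRAPER_THRESHOLD

class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the decoy to suspected scrapers, the real artwork to everyone else.
        path = "decoy.jpg" if looks_like_scraper(self.client_address[0]) else "artwork.jpg"
        with open(path, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), ImageHandler).serve_forever()
```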
Spawning has also launched "Have I Been Trained," a website that lets creators check whether their images have been used to train an AI model and opt them out of any future use.
Beyond visual content, researchers at Washington University in St. Louis have turned to voice data and developed AntiFake.
The program adds inaudible sounds to an audio recording, making it nearly impossible to synthesize the speaker's voice, according to Zhiyuan Yu, the Ph.D. student leading the project.
The program specifically aims to counteract “deepfakes,” increasingly sophisticated digital manipulations that closely resemble reality.
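The sketch below shows the general principle of mixing a quiet perturbation into a recording. It is not AntiFake's method, which optimizes the added signal against voice-synthesis models; the random noise, strength value, and file names here are placeholders.

```python
# Minimal sketch of voice protection by perturbation: mix a low-amplitude
# signal into a recording so it still sounds normal to people but is harder
# for a voice-cloning model to learn from. AntiFake shapes this signal
# adversarially; the random noise below is only a stand-in.
import numpy as np
import soundfile as sf   # assumed third-party dependency for audio I/O

def protect_voice(in_path: str, out_path: str, strength: float = 0.002) -> None:
    audio, sample_rate = sf.read(in_path)   # samples in the range [-1, 1]

    # Placeholder perturbation: quiet random noise scaled to the recording's
    # peak level, so it stays well below the audible threshold.
    noise = np.random.randn(*audio.shape) * strength * np.max(np.abs(audio))

    sf.write(out_path, np.clip(audio + noise, -1.0, 1.0), sample_rate)

protect_voice("recording.wav", "recording_protected.wav")  # hypothetical file names
```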
Finally, according to Zhiyuan Yu, the producers of a successful podcast have already approached the team, led by Professor Ning Zhang, seeking protection from potential misuse of their recordings.