Meta plans to begin labeling AI-generated images across its social networks in the coming months, the American technology company announced on Tuesday, extending a practice it has already adopted for images created with its own AI tools, according to Agence France-Presse (AFP).
Nick Clegg, Meta’s president of global affairs, stated in a post: “In the coming months, we will label images posted by users across Facebook, Instagram, and Threads when we detect signs indicating that they were created using artificial intelligence.”
Meta has already been applying these labels to images generated with its “Meta AI” tool since its launch in December.
Clegg added, “We want to apply this to content generated with tools from other companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.”
He continued, “We are building this capability now, and in the coming months we will begin applying labels in the various languages supported by each application.”
The announcement comes as the spread of generative artificial intelligence raises concerns that people could use it to sow political chaos through media manipulation or disinformation, particularly ahead of major elections scheduled this year in several countries, AFP reported.
Although Clegg acknowledged that labeling “will not completely eliminate” the risk of fabricated images (such as deepfakes), he affirmed that it “limits their spread” within the bounds of what current technology allows.
OpenAI, the creator of ChatGPT, announced in mid-January the launch of tools to combat the spread of misleading information, saying it did not want its technologies, including ChatGPT and the DALL-E 3 image generator, to be used for political purposes.