The World Economic Forum has identified misinformation and disinformation as the most significant global risk over the next two years. AI-driven disinformation, particularly from groups affiliated with Russia, China, and Iran, is seen as an attempt to shape and disrupt elections in adversarial countries. The "Doppelganger" operation, launched in 2022, cloned the websites and accounts of well-known media outlets and public figures to spread pro-Russian narratives, particularly concerning Ukraine. French authorities and Meta, the owner of Facebook, WhatsApp, and Instagram, have attributed the operation to the Kremlin.
However, the World Economic Forum also warns that authoritarian regimes can invoke the threat of misinformation to justify increased censorship and other human rights violations. Countries are seeking to respond with legislation regulating these operations, but lawmaking is slow compared to the rapid development of AI. India's proposed Digital India Act and the European Union's Digital Services Act would require platforms to counter misleading information and remove illegal content, though experts doubt how enforceable these obligations will be.
China and the European Union are working on comprehensive AI laws, but these are unlikely to take full effect before 2026. In October 2023, President Joe Biden issued an executive order establishing safety standards for artificial intelligence, though some question whether those standards can be enforced. There are also concerns that overregulation could harm the sector and favor competing interests. Under this pressure, technology companies are introducing their own initiatives: Meta requires advertisers to disclose when content is created with generative AI, and Microsoft has launched a tool that lets political candidates authenticate their content with digital watermarks. Platforms themselves increasingly rely on AI in their verification processes.