In December last year, after difficult negotiations, the European Union approved the world's first comprehensive legislation regulating artificial intelligence (AI), aiming to foster innovation while curbing potential abuses of these technologies.
Faced with the rapid evolution of generative AI tools from U.S. companies (such as OpenAI's ChatGPT and Google's Bard), EU member states worried that excessive regulation could stifle still-nascent European ventures, including Aleph Alpha in Germany and Mistral AI in France, by making the development of these technologies prohibitively costly.
The final text establishes binding rules for all providers: ensuring the quality of the data used to develop algorithms, verifying that it does not infringe copyright, and tightening restrictions on the most powerful systems deployed in sensitive areas.
For European Commission President Ursula von der Leyen, the legislation "establishes a climate of trust" by treating "high-risk" uses, such as real-time biometric identification, separately, while allowing innovation in all other fields.
She stated that the EU has “200,000 experienced engineers in the field of artificial intelligence,” a number “larger than that of the United States or China.”
Von der Leyen added that the 27 EU member states have a “huge competitive advantage in industrial data,” providing the possibility of “training systems using unparalleled quality data.”
She continued, "We want to invest in this area," pledging to improve access for European startups and small and medium-sized enterprises to the continent's supercomputers, along with "shared data spaces in all EU languages," so that artificial intelligence works "for non-English speakers" as well.