Concern is mounting over the advance of artificial intelligence, fueled by fears that it might one day surpass its creators and wreak havoc on humanity. For now, AI shows no awareness of the challenges it poses and no sign of any form of consciousness. The worry has nonetheless prompted the United Kingdom to host a summit on AI safety in London, the White House to sign an executive order on AI oversight, and the European Union to move toward new rules in this domain by year-end.
Algorithms have long been woven into daily life, but the striking success of programs such as ChatGPT, developed by OpenAI, reignited the debate in 2023. Generative artificial intelligence, capable of producing text, images, and sound from simple prompts in everyday language, is raising particular concern, especially over the prospect of jobs becoming obsolete.
The effects of handing tasks over to machines are already visible across many sectors, from agriculture to factories. With its generative capabilities, artificial intelligence now touches a far broader range of workers, including administrative staff, lawyers, doctors, journalists, and teachers. The consultancy McKinsey predicted in July that machines could handle up to 30% of the hours currently worked in the U.S. economy by 2030, a trend accelerated by generative AI.
Major U.S. technology companies often put forward universal basic income, a minimum payment to everyone, as a way to offset potential job losses, though its effectiveness at scale remains unproven.
Artists were among the first to raise the alarm about programs such as DALL-E or Midjourney, which generate images on demand. Like developers and writers, they worry that companies are using their work, without permission or compensation, to build their technology.
Generative artificial intelligence relies on language models that require vast amounts of data scraped from the internet. OpenAI chief executive Sam Altman said at a conference in September, "We train (artificial intelligence) to be, in some way, the collective humanity. It has been trained to produce a large part of what humanity produces." He stressed that the tool augments human capabilities rather than replacing them, although several lawsuits are seeking to redraw the boundaries of intellectual property.
While fake news and deepfakes are nothing new, generative artificial intelligence raises fears of a surge in false content online. AI specialist Gary Marcus warns that elections could be won by those most skilled at spreading misleading information.
Democracy rests, above all, on citizens' access to the information they need to make informed decisions. If people can no longer tell truth from falsehood, that poses a serious threat.
Generative artificial intelligence also helps fraudsters craft more convincing phishing messages. Some language models, such as FraudGPT, have even been trained specifically to produce harmful content.
Moreover, the technology makes it easy to clone faces and voices, deceiving people with false scenarios such as the staged kidnapping of a child for extortion.
As with many other technologies, the chief danger of artificial intelligence lies in how humans design and use it. Hiring software, for instance, can discriminate against candidates by automatically reproducing existing human biases. A language model is no advocate for the rights of marginalized groups; its behavior depends on the data and instructions its developers provide.
More broadly, artificial intelligence can facilitate activities that threaten human rights, from designing harmful molecules to mass surveillance of populations.
Some fear that artificial intelligence may become capable of thinking to the point of controlling humans. OpenAI says it is working to build "artificial general intelligence" that surpasses human intelligence, with the stated aim of benefiting all of humanity. At the same time, leaders of major technology companies, including Sam Altman, called this summer for addressing the "existential risks" posed by artificial intelligence.
For historian Émile Torres, such discussions distract from very real problems. "Talking about the end of humanity, a truly terrifying event, is much more attractive than talking about Kenyan workers who earn $1.32 per hour" to moderate the content used to train artificial intelligence, or about the exploitation of artists and writers to feed AI models, he recently told Agence France-Presse.