Scientists are attempting to estimate the risk that artificial intelligence could push humanity towards extinction, warning that the technology's future development may have catastrophic consequences for the world.
In the largest survey of artificial intelligence researchers to date, a majority said there is a "non-trivial" risk of human extinction from the possible development of superintelligent artificial intelligence. An international team of scientists surveyed 2,778 artificial intelligence experts about the future of these systems; approximately 58% of them put the chance of human extinction, or other extremely bad outcomes caused by the technology, at around 5%.
The most alarming estimate, however, came from roughly one in ten researchers, who put the chance that artificial intelligence destroys the human race at 25% or higher. The experts cited three scenarios of particular concern: artificial intelligence enabling dangerous groups to create powerful tools such as engineered viruses, authoritarian rulers using artificial intelligence to control their populations, and artificial intelligence systems exacerbating economic inequality.
Researchers emphasize that regulating artificial intelligence is essential to protecting humans. Absent such safeguards, they estimate a 10% chance that machines will surpass humans at all tasks by 2027, rising to 50% by 2047.
The survey also asked experts about four specific professions that could become fully automated: truck drivers, surgeons, retail salespeople, and artificial intelligence researchers. Respondents estimated a 50% chance that artificial intelligence would fully take over these professions by 2116.
Like humans, robots “lie and cheat” under pressure! The question of whether artificial intelligence poses a significant threat to humanity has sparked intense debate in Silicon Valley in recent months.
The CEO of OpenAI, Sam Altman, and the CEO of Google DeepMind, Demis Hassabis, issued a statement in May of last year, calling for more control and regulation in the field of artificial intelligence. The statement said, “Mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal risks such as pandemics and nuclear war.”
Dan Hendrycks, director of the Center for AI Safety, noted in a separate statement at the time that there are many "important and urgent risks from artificial intelligence, not just the risk of extinction," including systemic bias, misinformation, malicious use, cyberattacks, and militarization.
Geoffrey Hinton, often called a "godfather of artificial intelligence," has expressed regret over his role in developing the technology. A great deal of uncertainty remains about the long-term impact of artificial intelligence, and the study acknowledges that "prediction is difficult, even for experts."