Stanford study: 36% of researchers fear nuclear-level AI catastrophe

The poll offers one of the most complete perspectives on how AI experts feel about AI development.

Data presented by Atlas VPN shows that more than a third of surveyed AI experts believe AI could cause a nuclear-level catastrophe within this century.
 
These findings are part of Stanford’s 2023 Artificial Intelligence Index Report, released in April 2023.
 
In May and June 2022, a team of American researchers polled the natural language processing (NLP) community on a range of topics, including the state of the artificial general intelligence (AGI), NLP, and ethics fields.
 
NLP is a branch of artificial intelligence concerned with giving computers the capacity to comprehend written and spoken language in a manner similar to humans.
 
The poll was completed by 480 people, 68% of whom had written at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022.
 
 
More than a third (36%) of respondents agreed or weakly agreed with the statement: “It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”
 
Despite these concerns, only 41% of NLP researchers thought AI should be regulated.
 
One significant area of agreement among those surveyed was that “AI could soon lead to revolutionary societal change”: 73% of respondents agreed with the statement.
 
One month ago, Geoffrey Hinton, considered the “godfather of artificial intelligence,” told CBS News’ Brook Silva-Braga that the rapidly advancing technology’s potential impacts are comparable to “the Industrial Revolution, or electricity, or maybe the wheel.”
 
Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”
 
Moratorium on advanced AI systems
 
In February, OpenAI CEO Sam Altman wrote in a company blog post: “The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”
 
Elon Musk, the CEO of Tesla and Twitter, who also signed the open letter calling for a pause on advanced AI development, was said to be “developing plans to launch a new artificial intelligence start-up to compete with OpenAI,” according to a recent article in the Financial Times.
 
The same Stanford research also found that 77% of AI experts either agreed or weakly agreed that private AI firms have too much influence.
 