AI Experts, Including OpenAI CEO, Warn of 'Human Extinction' in 22-Word Statement

Cover Image Source: Getty Images | (L) Photo by Win McNamee; (R) Kim Hee-Chul

In a joint statement released on Tuesday, a group of prominent artificial intelligence experts and executives warned that the technology carries a "risk of extinction" for humanity. According to The Verge, Sam Altman, CEO of OpenAI, the company behind ChatGPT, and Geoffrey Hinton, widely regarded as the "Godfather of AI," were among more than 350 notable figures who signed the brief open letter, coordinated by the nonprofit Center for AI Safety, framing AI as an existential peril.

Image Source: Getty Images | Photo by Win McNamee (Samuel Altman, CEO of OpenAI)

The succinct statement is the latest in a series of warnings from prominent experts about AI's potential to cause societal upheaval. The risks they outline range from the spread of misinformation and major economic disruption from job displacement to direct threats to humanity itself. The 22-word statement, kept deliberately brief so the broadest possible group could endorse it, reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

According to The Verge, earlier this year some of the same people who backed the 22-word warning signed an open letter calling for a six-month "pause" in AI development. That letter drew criticism on several fronts: some experts felt it exaggerated the risks posed by AI, while others agreed about the risks but rejected the remedy it proposed.

Image Source: Getty Images | Photo by Kim Hee-Chul (Demis Hassabis, CEO of Google DeepMind)

The Center for AI Safety said the concise statement was meant to start a "discussion" on the topic, given the "wide range of significant and pressing risks associated with AI." In addition to Altman and Hinton, notable signatories included Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic, both influential figures in AI research.

Altman, Hassabis, and Amodei were among a small group of experts who recently met with President Joe Biden to discuss AI risks and regulation, per The New York Post. Despite leading OpenAI, Altman has voiced concern about the unchecked advancement of powerful AI systems.

Image Source: Getty Images | Photo by Leon Neal

“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence. Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems,” Dan Hendrycks, director of the Center for AI Safety, told The New York Times.

In the meantime, both those who emphasize AI's risks and those skeptical of them agree that today's AI systems already pose a variety of threats, even without further advances in their capabilities. These range from enabling mass surveillance and powering flawed "predictive policing" algorithms to amplifying misinformation and disinformation.
