Runaway AI Is an Extinction Risk, Experts Warn

Leading figures in artificial intelligence development, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building could one day pose an existential threat to humanity comparable to that of nuclear war and pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement released today by the Center for AI Safety, a non-profit organization.

The idea that AI could be difficult to control and could destroy humanity either accidentally or intentionally has long been debated by philosophers. But in the last six months, following some surprising and disturbing leaps in the performance of AI algorithms, the topic has become much more widely and seriously discussed.

In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup focused on developing AI safely. Other signatories include Geoffrey Hinton and Yoshua Bengio – two of the three academics awarded the Turing Award for their work on deep learning, the technology underlying modern advances in machine learning and AI – and dozens of entrepreneurs and researchers working on cutting-edge AI problems.

“The declaration is a great initiative,” says Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and director of the Future of Life Institute, a non-profit organization focused on the long-term risks of AI. In March, Tegmark’s institute published a letter calling for a six-month pause in the development of cutting-edge AI algorithms so the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.

Tegmark hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is for the threat of extinction from AI to become mainstream, so that anyone can discuss it without fear of ridicule,” he adds.

Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the development of nuclear weapons. “We need to have the conversations that nuclear scientists had before the atomic bomb was developed,” Hendrycks said in a quote released along with his organization’s statement.

The current alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a particular type of artificial neural network that is trained on huge amounts of human-written text to predict the words most likely to follow a given string of text. When these language models are fed enough data and given additional training in the form of human feedback on good and bad answers, they can generate text and answer questions with remarkable eloquence and apparent knowledge – even if their answers often contain errors.
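To make the word-prediction idea concrete, here is a minimal toy sketch in Python. It simply counts which word follows which in a tiny made-up corpus and predicts the most frequent successor; the corpus and the predict_next function are illustrative inventions, and real large language models rely on deep neural networks trained on vastly more text rather than simple word counts.

from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows each
# word in a small corpus, then predict the most frequent successor.
# (Illustrative only; actual large language models learn these patterns
# with neural networks trained on enormous amounts of text.)

corpus = (
    "ai systems learn to predict the next word "
    "ai systems learn from huge amounts of text "
    "the next word is predicted from the words before it"
).split()

# Build a table: word -> counts of the words that follow it.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # prints "next"
print(predict_next("ai"))    # prints "systems"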

These language models have proven increasingly coherent and capable as more data and computing power have been fed into them. OpenAI’s most powerful model to date, GPT-4, can solve complex problems, including some that seem to require forms of abstraction and common sense.

Zack Zwiezen

