Monday, April 10, 2023

A Chernobyl for AI May Be Imminent, Scientist Says

Tim Newcomb
Sat, April 8, 2023 
[Photo: Marina Demkina / Getty Images]

AI expert Stuart Russell reiterates the need for a pause in AI development before humanity loses control.

In an interview with Business Today, Russell likens the threat of unregulated AI to a potential Chernobyl event.

Leaders are calling on AI creators to ensure the safety of AI systems before releasing them to the public.


Stuart Russell knows AI. And he's concerned about its unchecked growth. In fact, he's so concerned that, in an interview with Business Today, he warns that unbridled artificial intelligence could lead to "a Chernobyl for AI."

Such an event could alter life in ways we don't yet understand.

Russell, a computer science professor at the University of California, Berkeley, has spent decades as a leader in the AI field. He's also joined other prominent figures, like Elon Musk and Steve Wozniak, in signing an open letter calling for a pause on development of powerful AI systems—defined as anything more potent than OpenAI's GPT-4.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here.”

The letter’s backers say there isn’t a level of planning and management happening in the AI field that matches the tech's potential to represent a “profound change in the history of life on Earth.” The signers say this is especially true as AI labs continue an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

Left unhindered, that kind of development could lead to a "Chernobyl for AI," Russell tells Business Today, referring to the 1986 nuclear catastrophe in Ukraine whose ill effects on life persist more than 35 years later.

"What we're asking for is, to develop reasonable guidelines [sic]," he says. "You have to be able to demonstrate convincingly for the system to be safely released, and then show that your system meets those guidelines. If I wanted to build a nuclear power plant, and the government says, well, you need to show that it's safe, that it can survive an earthquake, that it's not going to explode like Chernobyl did."

Creating new AI systems isn't all that different, Russell says, from building an airplane expected to safely fly hundreds of passengers, or a nuclear power plant with the potential to disastrously impact the world around it if something goes even slightly wrong.

AI has that same profound power, so much so that leaders aren't even sure what a cataclysmic AI tragedy would look like. But they want to make sure we never find out.
