'50-50 chance' that AI outsmarts humanity, Geoffrey Hinton says
BNN Bloomberg
Geoffrey Hinton remembers the moment he became the "godfather" of artificial intelligence.
It was more than a decade ago, at a meeting with other researchers, including fellow AI guru Andrew Ng, who gave him the title.
"It wasn’t intended as complimentary, I don’t think," Hinton recalled in our exclusive television interview. “We just had a session where I was interrupting everybody and I was the senior guy there who had organized the meeting, so he started referring to me as the godfather."
Today, Hinton is using his influence to interrupt another conversation.
At a time when the AI boom is pushing tech company valuations into the trillions, Hinton is urging the industry to set aside vast sums of money to address an existential threat: what happens if AI becomes smarter than humans?
"I think there’s a 50-50 chance it will get more intelligent than us in the next 20 years," he said.
"We’ve never had to deal with things more intelligent than us. And so people should be very uncertain about what it will look like."
Given the unknowns, Hinton believes companies should be spending 20 to 30 per cent of their computing resources on examining how this intelligence might eventually evade human control.
Currently, he doesn’t see firms spending anywhere close to that amount on safety.
"It would seem very wise to do lots of empirical experiments when it’s slightly less smart than us so we still have a chance at staying in control.”
Given the realities of capitalism, Hinton has little faith that companies individually will pick safety over profits.
So he wants political leaders to step in urgently.
"I think governments are the only thing powerful enough to slow that down."
AI getting smarter
In making his case, Hinton points to the rapid rise of OpenAI and its ChatGPT technology.
"If you take something like GPT-4, which is bigger than GPT-3, it is quite a lot smarter. It answers a whole bunch of questions correctly that GPT-3 would get wrong."
"So we know these things will get more intelligent just by making them bigger. But in addition to them getting more intelligent by being made bigger, we’ll have scientific breakthroughs."
Meanwhile, Hinton points out that self-preservation is already being built into the industry’s technology so that systems like chatbots can keep functioning when they encounter problems such as data center disruptions.
"As soon as they’ve got self-interest, you’ll get evolution kicking in," he said.
"Suppose there’s two chatbots and one’s a bit more self-interested than the other…the slightly more self-interested one will grab more data centers because it knows it can get smarter if it gets more data centers to look at data with. So now you’ve got competition between chatbots. And as soon as evolution kicks in, we know what happens: the most competitive one wins, and we will be left in the dust if that happens."
Hinton pointed to the tension between that risk and the money to be made from the technology as one of the key factors behind OpenAI co-founder Ilya Sutskever’s recent departure from the company.
"The people interested in safety, like Ilya Sutskever, wanted significant resources to be spent on safety. People interested in profits, like Sam Altman, didn’t want to spend too many resources on that.
“I think (Altman) would like big profits,” he added.
Life after Google
Having spent more than 50 years leading research in this area, Hinton long ago became a hot commodity in Silicon Valley, which ultimately landed him at Google.
But last year, Hinton resigned his role at the company so he could speak more freely about the existential risks surrounding AI.
"In the spring of 2023, I began to realize that these digital intelligences we’re building might just be a lot better form of intelligence than us and we had to take seriously the idea that they were going to get smarter than us."
He says before Wall Street caught on to the AI opportunity, profits weren’t as much of a priority.
"When I was at Google, they had a big lead in all of this stuff. And they were actually very responsible. They weren’t doing that much work on safety, but they didn’t release this stuff because they had a very good reputation and they didn’t want to besmirch their reputation [with] chatbots saying prejudicial things … so they were very responsible with it. They used it internally, but didn’t release chatbots to the public even though they had them. But as soon as OpenAI used some of the Google research on transformers to make things as good as Google had … and actually tune them up slightly better and give them to Microsoft, then Google couldn’t help getting involved in an arms race."
Hinton added that skyrocketing stock market values — and competitive realities for these companies — are quickly lessening the industry’s focus on safety issues.
"It makes it clear to the big companies that they want to go full speed ahead and there’s a big race on between Microsoft, Google and possibly Amazon, Nvidia and Meta. If any one of those (companies) pulled out, the others would keep going."
That’s not to say he believes tech leaders are unconcerned about the risks.
"Some people in industry — particularly Elon Musk — have said this is a real threat. I don’t agree with much of what he says but that aspect I do agree with," Hinton said.
As for companies voluntarily setting aside the 20 to 30 per cent of their computing resources he recommends for safety testing, Hinton is skeptical. But if they were collectively required to do so, he could see a path towards that happening.
“I’m not sure they would object if all of them had to do it. It would be equally difficult for all of them. And I think that might be feasible."
And in the end, that is what leads Hinton to believe government intervention is the only solution, even as many countries race one another for an AI lead.
"None of these countries want super intelligence to take over. And that will force them to coordinate."