The meeting will include briefings by international AI experts and UN Secretary-General Antonio Guterres, who last month said the alarm bells over the most advanced form of AI are "deafening," and loudest from its developers.
Guterres has announced plans to appoint an advisory board on artificial intelligence in September to prepare initiatives that the UN can take. / Photo: Reuters Archive
The UN Security Council will hold a first-ever meeting on the potential threats of artificial intelligence to international peace and security, organised by the United Kingdom, which sees tremendous potential in AI but also major risks, for example in its possible use in autonomous weapons or in the control of nuclear weapons.
UK Ambassador Barbara Woodward on Monday announced the July 18 meeting as the centrepiece of Britain's presidency of the council this month.
It will include briefings by international AI experts and Secretary-General Antonio Guterres, who last month said the alarm bells over the most advanced form of AI are "deafening," and loudest from its developers.
"These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war," the UN chief said.
Guterres announced plans to appoint an advisory board on artificial intelligence in September to prepare initiatives that the UN can take.
He also said he would respond favourably to the creation of a new UN agency on AI, suggesting as a model the International Atomic Energy Agency, which is knowledge-based and has some regulatory powers.
'Multilateral approach'
Woodward said the UK wants to encourage "a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us," stressing that “this is going to take a global effort.”
She stressed that the benefits side is huge, citing AI's potential to help UN development programmes, improve humanitarian aid operations, assist peacekeeping operations and support conflict prevention, including by collecting and analysing data.
"It could potentially help us close the gap between developing countries and developed countries," she added.
But the risk side raises serious security questions that must also be addressed, Woodward said.
On June 14, EU lawmakers signed off on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.
In May, the head of the artificial intelligence company that makes ChatGPT told a US Senate hearing that government intervention will be critical to mitigating the risks of increasingly powerful AI systems, saying that as the technology advances, people are concerned about how it could change their lives, and "we are too."
OpenAI CEO Sam Altman proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to "take that license away and ensure compliance with safety standards."
Woodward said the Security Council meeting, to be chaired by UK Foreign Secretary James Cleverly, will provide an opportunity to hear expert views on AI, a new technology that is developing rapidly, and to start a discussion among the 15 council members on its implications.
Britain's Prime Minister Rishi Sunak has announced that the UK will host a summit on AI later this year, "where we'll be able to have a truly global multilateral discussion," Woodward said.
Scientists warn of AI dangers but disagree on solutions
Computer scientists, including Geoffrey Hinton, who is often dubbed "the godfather of artificial intelligence", speak out about the dangers of AI, such as job market destabilisation, automated weaponry and biased data sets.
AP
Some experts are worried that hype around superhuman machines — which don't exist — is distracting from attempts to set practical safeguards on current AI products. / Photo: AP
Computer scientists who helped build the foundations of today's artificial intelligence (AI) technology have warned of its dangers, but disagree on what those dangers are or how to prevent them.
Humanity's survival is threatened when "smart things can outsmart us," the so-called "Godfather of AI" Geoffrey Hinton said at a conference on Wednesday at the Massachusetts Institute of Technology.
"It may keep us around for a while to keep the power stations running," Hinton said. "But after that, maybe not."
After retiring from Google so he could speak more freely, the 75-year-old Hinton said he's recently changed his views about the reasoning capabilities of the computer systems he's spent a lifetime researching.
"These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people," Hinton said, addressing the crowd attending MIT Technology Review's EmTech Digital conference from his home via video. "Even if they can't directly pull levers, they can certainly get us to pull levers."
"I wish I had a nice simple solution I could push, but I don’t," he added. "I'm not sure there is a solution."
Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he's "pretty much aligned" with Hinton's concerns brought on by chatbots such as ChatGPT and related technology, but worries that to simply say "We're doomed" is not going to help.
"The main difference, I would say, is he's kind of a pessimistic person, and I'm more on the optimistic side," said Bengio, a professor at the University of Montreal. "I do think that the dangers — the short-term ones, the long-term ones — are very serious and need to be taken seriously by not just a few researchers but governments and the population."
Governments discussing AI risks
There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet on Thursday with Vice President Kamala Harris in what's being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines — which don't exist — is distracting from attempts to set practical safeguards on current AI products that are largely unregulated and have been shown to cause real-world harms.
Margaret Mitchell, a former leader on Google's AI ethics team, said she's upset that Hinton didn't speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialised into products such as ChatGPT and Google's Bard.
"It's a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalised in tech," said Mitchell, who was also forced out of Google in the aftermath of Gebru's departure. "He's skipping over all of those things to worry about something farther off."
Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, all received the Turing Award in 2019 for their breakthroughs in the field of artificial neural networks, instrumental to the development of today's AI applications such as ChatGPT.
Bengio, the only one of the three who didn't take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilisation, automated weaponry and the dangers of biased data sets.
But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI's latest model, GPT-4.
Bengio said on Wednesday he believes the latest AI language models already pass the "Turing test," named after the method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human, at least on the surface.
"That's a milestone that can have drastic consequences if we're not careful," Bengio said. "My main concern is how they can be exploited for nefarious purposes to destabilise democracies, for cyberattacks, disinformation. You can have a conversation with these systems and think that you’re interacting with a human. They’re difficult to spot."
Fearmongering?
Where researchers are less likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — might actually get smarter than humans not just in memorising huge troves of information, but in showing critical reasoning and other human skills.
Aidan Gomez was one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique — the "T" at the end of ChatGPT — for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company's California headquarters when his team sent out the paper around 3 am on the day it was due.
"Aidan, this is going to be so huge," he remembers a colleague telling him, of the work that's since helped lead to new systems that can generate humanlike prose and imagery.
Six years later and now CEO of his own AI company called Cohere, which Hinton has invested in, Gomez is enthused about the potential applications of these systems but bothered by fearmongering he says is "detached from the reality" of their true capabilities and "relies on extraordinary leaps of imagination and reasoning."
"The notion that these models are somehow gonna get access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse to have," Gomez said. "It’s harmful to those real pragmatic policy efforts that are trying to do something good."
Asked on Wednesday about his investments in Cohere in light of his broader concerns about AI, Hinton said he had no plans to pull them because there are still many helpful applications of language models in medicine and elsewhere. He also said he hadn't made any bad decisions in pursuing the research he started in the 1970s.
"Until very recently, I thought this existential crisis was a long way off," Hinton said. "So I don't really have any regrets about what I did."