Story by Martyn Landi and Katherine Fidler • METRO UK
Rishi Sunak has said the threat of AI should be a global priority (Picture: Peter Nicholls/Getty)
Rishi Sunak has said mitigating the risk of extinction because of artificial intelligence (AI) should be a global priority alongside pandemics and nuclear war.
The prime minister said he wanted to be ‘honest’ with the public about the risks of AI, as he made a speech on the emerging technology.
As the government published new assessments on AI, Mr Sunak said they offered a ‘stark warning’.
‘Get this wrong and it could make it easier to build chemical or biological weapons,’ he said.
‘Terrorist groups could use AI to spread fear and disruption on an even greater scale.
‘Criminals could exploit AI for cyber attacks, disinformation, fraud or even child sexual abuse.
‘And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as “super intelligence”.’
Criminals could use AI for more sophisticated cyber attacks, the prime minister warned (Picture: Getty)
Mr Sunak is not the first world leader to warn of the threats posed by AI, but he went a step further by likening the threat to that posed by other ‘extinction-level events’.
He said: ‘Indeed, to quote the statement made earlier this year by hundreds of the world’s leading AI experts, mitigating the risk of extinction from AI should be a global priority, alongside other societal scale risks such as pandemics and nuclear war.’
One of the signatories, the ‘godfather of AI’ Geoffrey Hinton, quit his job at Google earlier this year while warning of the dangers posed by the technology, adding that part of him now regretted his life’s work.
Dr Geoffrey Hinton has warned repeatedly of the dangers of AI (Picture: Getty)
However, Mr Sunak added that it was ‘not a risk that people need to be losing sleep over right now’ and he did not want to be ‘alarmist’.
The issue of AI and its potential capabilities was thrown into sharp focus last November following the public release of ChatGPT, a large language model (LLM) with outstanding capabilities, including writing content indistinguishable from human work and creating computer code within seconds.
Since its release others have followed, while generative AI image creators have also proliferated, allowing users to create pictures from a simple text prompt.
A report yesterday from the Internet Watch Foundation warned that users on the dark web were using these tools to create child sexual abuse images.
Next week the government will host an AI Safety Summit at Bletchley Park, bringing together world leaders, tech firms and civil society to discuss the emerging technology.
Ahead of the summit, Mr Sunak announced the government would establish the ‘world’s first’ AI safety institute, which the prime minister said would ‘carefully examine, evaluate and test new types of AI to understand what each new model is capable of’ and ‘explore all the risks’.
He said tech firms had already trusted the UK with privileged access to their models, making Britain ‘well placed’ to create the world’s first AI safety institute.
The prime minister said the government would use next week’s summit to push for a first international statement about the nature of AI risks, and said leaders should follow the example of global collaboration around climate change and establish a global expert panel on the issue.
But Mr Sunak said the government would not ‘rush to regulate’ AI, although he added that countries should not rely on private firms ‘marking their own homework’.
Bletchley Park, home of the codebreakers who cracked the Enigma code during the Second World War (Picture: Getty)
‘Only governments can properly assess the risks to national security,’ he said.
He also defended the decision to invite China to the AI Safety Summit, arguing there can be ‘no serious strategy for AI without at least trying to engage all of the world’s leading AI powers’.
‘That might not have been the easy thing to do but it was the right thing to do,’ he said.
Ahead of the prime minister’s speech, the government published several discussion papers setting out its assessment of the risks posed by AI, which suggested the technology offers new opportunities for growth and advances but also brings a range of ‘new dangers’.
The papers said there is insufficient evidence to rule out a threat to humanity from AI and that it is hard to predict many of the risks because of the broad range of potential uses in the future.
They add that the current lack of safety standards is a key issue, and warn that AI could be used to carry out more advanced cyber attacks and to develop bioweapons.
They also warn that human workers could be displaced by AI and that both misinformation and disinformation could be spread more easily, potentially influencing future elections.