Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war
Geneva Abdul
THE GUARDIAN
Tue 30 May 2023
A group of leading technology experts from across the globe have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars.
The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology poses to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said. Signatories included the chief executives from Google’s DeepMind, the ChatGPT developer OpenAI and AI startup Anthropic.
The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation of the technology amid existential fears that it could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.
Earlier this month the man often touted as the godfather of AI – Geoffrey Hinton, also a signatory – quit Google, citing the technology’s “existential risk”. The risk was echoed and acknowledged by No 10 last week for the first time – a swift change of tack within government that came two months after it published an AI white paper that industry figures have warned is already out of date.
While the letter published on Tuesday is not the first, it’s potentially the most impactful given its wider range of signatories and its core existential concern, according to Michael Osborne, a professor in machine learning at the University of Oxford and co-founder of Mind Foundry.
“It really is remarkable that so many people signed up to this letter,” he said. “That does show that there is a growing realisation among those of us working in AI that existential risks are a real concern.”
AI’s potential to exacerbate existing existential risks such as engineered pandemics and military arms races, along with the novel existential threats it poses, is what led Osborne to sign the public letter.
Calls to curb such threats have grown since the success of ChatGPT following its launch in November last year. The language model has already been adopted by millions of people and has advanced more rapidly than those best informed in the industry had predicted, said Osborne.
“Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed that might play some devastating role in our survival as a species,” he said.
Yes, you should be worried about AI – but Matrix analogies hide a more insidious threat
We need not speculate on ways AI can cause harm; we already have a mountain of evidence from the past decade
Samantha Floreani
Tue 30 May 2023
As the resident tech politics nerd among my friends, I spend a lot of time fielding questions. Help! I’ve been part of a data breach, what do I do? What on earth is crypto and should I care? And lately: should I be worried that AI is going to take over and kill us all?
There is so much hype around artificial intelligence that the concern is understandable but it’s important that we hang on to our critical faculties. The current AI frenzy ultimately serves those who stand to benefit from implementing these products the most but we don’t have to let them dictate the terms of the conversation.
If there is one thing that I try to impart to friends – and now you – it’s this: Yes, you should be concerned about AI. But let’s be clear about which boogeyman is actually lurking under the bed. It’s hard to fight a monster if you don’t know what it is. No one wants to be the fool using a wooden stake on a zombie to no avail.
Rather than fretting over some far-flung fear of an “existential threat” to humanity, we should be concerned about the material consequences of far less sophisticated AI technologies that are affecting people’s lives right now. And what’s more, we should be deeply troubled by the way AI is being leveraged to further concentrate power in a handful of companies.
So let’s sort the speculative fiction from reality.
Every other day a high-profile figure peddles a doomsday prediction about AI development left unchecked. Will it lead to a Ministry of Truth à la George Orwell’s 1984? Or perhaps hostile killing machines fresh out of Terminator? Or perhaps it’ll be more like The Matrix.
This all acts as both a marketing exercise for and a diversion from the more pressing harms caused by AI.
First, it’s important to remember that large language models like GPT-4 are neither sentient nor intelligent, no matter how proficient they may be at mimicking human speech. But the human tendency toward anthropomorphism is strong, and it’s made worse by clumsy metaphors, such as saying the machine is ‘hallucinating’ when it generates incorrect outputs. In any case, we are nowhere near the kind of Artificial General Intelligence (AGI) or ‘superintelligence’ that a handful of loud voices are sounding the alarm about.
The problem with pushing people to be afraid of AGI while calling for intervention is that it enables firms like OpenAI to position themselves as the responsible tech shepherds – the benevolent experts here to save us from hypothetical harms, as long as they retain the power, money and market dominance to do so. Notably, OpenAI’s position on AI governance focuses not on current AI but on some arbitrary point in the future. They welcome regulation, as long as it doesn’t get in the way of anything they’re currently doing.
We need not wait for some hypothetical tech-bro delusion to consider – and fight – the harms of AI. The kinds of technologies and computational techniques that sit under the umbrella marketing term of AI are much broader than the current fixation on large language models or image generation tools. The term covers less show-stopping systems that we use – or that are used on us – every day, such as recommendation engines that curate our online experiences, surveillance technologies like facial recognition, and some automated decision-making systems, which determine, for example, people’s interactions with finance, housing, welfare, education, and insurance.
The use of these technologies can and does lead to negative consequences. Bias and discrimination are rife in automated decision-making systems, leading to adverse impacts on people’s access to services, housing, and justice. Facial recognition supercharges surveillance and policing, compounding the effect of state-sanctioned violence against many marginalised groups. Recommender systems often send people down algorithmic rabbit holes toward increasingly extreme online content. We need not speculate on the ways this tech can cause harm; we already have a mountain of evidence from the past decade.
As for generative AI, we are already seeing the kinds of harms that can arise, in far more prosaic ways than it becoming sentient and deciding to end humanity. Like how quickly GPT-4 was spruiked as a way to automate harassment and intimidation by debt-collectors. Or how it can turbocharge information manipulation, enabling impersonation and extortion of people, using new tech for old tricks to scam people; or add a hi-tech flavour to misogyny through deepfake porn. Or how it entrenches and seeks to make additional profit from surveillance capitalism business models that prioritise data generation, accumulation and commodification.
The through-line here is that we’re not talking about the danger of some far-off sci-fi future, we’re talking about the amplification of systems and social problems that already exist. Sarah Myers West of AI Now said that the focus on future harms has become a rhetorical sleight of hand, used by AI industry figures to ‘position accountability right out into the future.’ It’s easy to pay attention to the fantastical imaginary of AI but it is in the more mundane uses where the real, material consequences are happening.
When interviewed about his warnings on the dangers of AI, the so-called ‘Godfather of AI’ Geoffrey Hinton dismissed the concerns of longstanding whistleblowers such as Timnit Gebru and Meredith Whittaker, claiming their concerns were not as ‘existential’ as his. To suggest that rampant bias and discrimination, pervasive information manipulation, or the entrenchment of surveillance is not as serious as the chimera of AGI is disturbing. What such people fail to realise is that AI does pose an existential threat to many, just not people they care about.
Too often, AI is presented as a risk-benefit tradeoff in which the historical evidence and present risks are dismissed as the cost of an overblown hypothetical future. We are told that there is so much potential for good, and that to slow ‘progress’ or ‘innovation’ would prevent us from realising it. But overlooking the material impacts of past and present AI in favour of an imaginary future will not lead us to socially progressive technology. And that’s way more worrying than speculative AI overlords.
Samantha Floreani is a digital rights activist and writer based in Naarm