World leaders still aren’t taking the ‘extreme risks’ of AI seriously

22 May 2024
Text Thom Waite
DAZED
Joaquin Phoenix in Her (2013). Courtesy of Warner Bros. Pictures

As trust in tech leaders like Sam Altman wanes, governments have agreed to establish a global network of AI safety institutes, but experts aren’t convinced that it’s enough to stop the tech from going rogue

In November last year, world leaders, prominent businesspeople, and (for some reason) King Charles gathered at the UK’s Bletchley Park for the world’s first AI safety summit, aiming to highlight the enormous risks of the most advanced AI models, offering a counterpoint to their huge projected benefits. The result was an unusual display of international unity, with the EU, China, and the US joining forces to sign the “world’s first” Bletchley Declaration. Six months later, however, things aren’t looking much better, according to the experts.

On Monday (May 20), 25 academics and experts in the field – including the likes of Geoffrey Hinton and Yoshua Bengio, two “godfathers of AI”, and Yuval Noah Harari – warned that governments have failed to adequately face up to the risks of powerful AI so far. “Although researchers have warned of extreme risks from AI, there is a lack of consensus about how to manage them,” they say. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems.”

These doom-laden claims come in a paper titled Managing extreme AI risks amid rapid progress, published the day before a two-day summit in Seoul, which began May 21. In the South Korean capital, world leaders have been tasked with following up on last year’s safety commitments, and building upon these pretty vague foundations to ensure that the technology doesn’t prove too disastrous for humankind via the disruption of economies, elections, relationships and – on the more fantastical end of the scale – the very continuation of the human race.

The most concrete result of the Seoul summit is the joint signing of the Seoul Declaration by the EU and 10 other countries, including the UK, US, Australia, Canada, France, Germany, Italy, Japan, the Republic of Korea and the Republic of Singapore. This declaration commits to developing “human-centric, trustworthy and responsible” AI via a network of global AI safety institutes and research programmes. The UK, which founded the first of these institutes last year, simultaneously pledged £8.5 million in grants for new AI safety research. Just for context: ChatGPT developer OpenAI has been valued at $80 billion or more. Lol!

The renewed focus on AI’s risks comes amid rising scepticism toward leaders in the industry, most notably OpenAI’s CEO, Sam Altman, who has long positioned himself as a prominent figure in the safety conversation. Last week, Ilya Sutskever and Jan Leike, the co-leads of the company’s “Superalignment” team – the bit responsible for reining in the tech’s more existential threats – both resigned from their posts, with subsequent reports saying that they never received the resources they needed for their work. On top of that, Altman has been “embarrassed” by the revelation of manipulative exit agreements that encouraged outgoing employees to sign NDAs (just the latest cause for concern when it comes to OpenAI’s lack of transparency).

In even more public news, Scarlett Johansson recently accused OpenAI of stealing her voice for its latest chatbot iteration. Having declined to voice the chatbot when she was approached by OpenAI last year, the actor was apparently “shocked, angered and in disbelief” when the company unveiled GPT-4o with a voice that sounded “eerily similar” to hers. The company has since agreed to pull the voice, which it claims was never supposed to be an imitation of Johansson’s – even though Altman tweeted “her” straight after the launch, in an obvious reference to the 2013 film where Johansson voices an AI-powered virtual assistant.

OK, so a dispute over a multimillionaire actor’s voice might not inspire too much sympathy. Many have pointed out, however, that the controversy raises broader suspicions about Altman’s – and other AI leaders’ – tendency to bypass rules and best practices, and reignites concerns about the manipulative personalities at the top of the game. Are these the people we want designing our potential successors? Maybe not!

All of that said, OpenAI has joined other tech giants including Google, Amazon, Meta, and Elon Musk’s xAI in signing a new round of voluntary commitments about AI safety to coincide with this week’s Seoul summit. These commitments include the publication of frameworks to measure the risks of their frontier AI models, and a promise “not to develop or deploy a model at all” if it poses severe risks that can’t be mitigated.

The question is: will the actions of the countries and companies involved in the summit actually reflect their words, or will they continue to seek a competitive advantage via loopholes and a lack of transparency? When the next AI safety summit is held in France, six months down the line, will the experts be able to celebrate any real advancements, or will we still be speeding “recklessly” toward a world where AI spirals out of control, beyond the limits of human intervention? Based on the last six months, things aren’t looking too promising.
