Saturday, September 23, 2023

EU considering whether to attend Britain's AI summit, spokesperson says


By Martin Coulter

Fri, September 22, 2023 

LONDON (Reuters) - The European Union is considering whether to send officials to Britain's upcoming artificial intelligence safety summit, a spokesperson told Reuters, as the bloc nears completion of wide-ranging AI legislation that is the first of its kind globally.

British Prime Minister Rishi Sunak is set to host the summit in November, bringing together governments, tech companies and academics to discuss the risks posed by the technology.

But the invitee list has been kept under wraps, with some companies declining to say whether they have been invited.

European Commission Vice President Vera Jourova has received a formal invitation to the summit, the spokesperson said, adding: "We are now reflecting on potential EU participation."

AI has seen rapid growth in investment and consumer popularity since the release of OpenAI's ChatGPT chatbot.

While Sunak hopes to position Britain as the global leader in regulating the rapidly developing technology, the EU is close to rolling out its own AI Act, the first such legislation in the world.

Under the bloc's incoming rules, it is expected that organisations using AI systems the bloc deems high risk will have to log their activities, complete rigorous risk assessments and make some internal data available to authorities.

However, the Financial Times reported that British government officials favour a less "draconian" approach to AI regulation than the EU.

Tech expert Matt Clifford and former senior diplomat Jonathan Black have been appointed to lead preparations for the summit. Last month, Clifford told Reuters he hoped the summit would set the tone for future international debates on AI regulation.

While a number of world leaders, including U.S. Vice President Kamala Harris, are expected to attend the summit, it largely remains unknown who else has been invited -- or who has accepted an invitation.

The British government was recently forced to defend its decision to invite China to the summit.

The country's finance minister Jeremy Hunt told Politico: "If you're trying to create structures that make AI something that overall is a net benefit to humanity, then you can’t just ignore the second-biggest economy in the world."

‘This is his climate change’: The experts helping Rishi Sunak seal his legacy

James Titcomb
Sat, September 23, 2023

Rishi Sunak wants Britain to lead on AI safety - IAN VOGLER/POOL/AFP via Getty Images

It took just 23 words for the world to sit up and pay attention. In May, the Center for AI Safety, a US non-profit, published a one-sentence statement warning that artificial intelligence should be considered an extinction risk alongside pandemics.

Those who endorsed the statement included: Geoffrey Hinton, known as the Godfather of AI; Yoshua Bengio, whose work with Hinton won the coveted Turing Award in computer science; and Demis Hassabis, the head of the Google-owned British AI lab DeepMind.

The statement helped to transform public opinion on AI from seeing it as a handy office aide to a potential threat of the kind usually only seen in dystopian science fiction.


The Center itself describes its mission as reducing the “societal-scale risks from AI”. It is now one of a handful of California-based organisations advising Rishi Sunak’s government on how to handle the rise of the technology.

In recent months, observers have detected an increasingly apocalyptic tone in Westminster. In March, the Government unveiled a white paper promising not to “stifle innovation” in the field. Yet just two months later, Sunak was talking about “putting guardrails in place” and pressing Joe Biden to embrace his plans for global AI rules.
Sunak’s legacy moment

An AI safety summit at Bletchley Park in November is expected to focus almost entirely on existential risks and how to mitigate them.

Despite myriad political challenges, Sunak is understood to be deeply involved in the AI debate. “He’s zeroed in on it as his legacy moment. This is his climate change,” says one former government adviser.


In November, Bletchley Park will host Prime Minister Rishi Sunak's AI Safety Summit - Simon Walker / No 10 Downing Street

In the last year, Downing Street has assembled a tight-knit team of researchers to work on AI risk. Ian Hogarth, a tech investor and the founder of the concert-finding app Songkick, was enlisted as the head of a Foundation Model taskforce after penning a viral Financial Times article warning of the “race to God-like AI”.

This month, the body was renamed the “Frontier AI taskforce” – a reference to the bleeding edge of the technology where experts see the most risk. Possible applications could include creating bioweapons, for example, or orchestrating mass disinformation campaigns.

Human-level AI systems ‘just a few years away’

Hogarth has assembled a heavyweight advisory board including Bengio, who has warned that human-level AI systems are just a few years away and pose catastrophic risks, and Anne Keast-Butler, the director of GCHQ. A small team is currently testing the most prominent AI systems such as ChatGPT, probing for weaknesses.

Hogarth recently told a House of Lords committee that the taskforce is dealing with “fundamentally matters of national security”.

“An AI that is very capable of writing software… can also be used to conduct cybercrime or cyberattacks. An AI that is very capable of manipulating biology can be used to lower the barriers to entry to perpetrating some sort of biological attack,” he said.

Leading preparations for the AI summit are Matt Clifford, an entrepreneur who chairs the Government’s blue-sky research lab Aria, and Jonathan Black, a senior diplomat. The pair, who have been dubbed Number 10’s AI “sherpas”, were in Beijing last week in order to drum up support for the summit.

Meanwhile, the research organisations now working with the taskforce have raised eyebrows for their links to the effective altruism (EA) movement, a philosophy centred around maximising resources for the best possible good.

The movement has become controversial for concentrating on long-term but unclear risks such as AI – judging that the lives of people in the future are as valuable as those in the present – and for its close association with FTX, the bankrupt cryptocurrency exchange founded by the alleged fraudster Sam Bankman-Fried.

Of the six research organisations working with the UK taskforce, three – The Collective Intelligence Project, the Alignment Research Center, and Redwood Research – were awarded grants by FTX, which dished out millions to non-profits before going bust. (The Collective Intelligence Project has said it is unsure whether it can spend the money; the Alignment Research Center returned it; Redwood never received it.)

One AI researcher defends the associations, saying that until this year effective altruists were the only ones thinking about the subject. “Now people are realising it’s an actual risk but you’ve got these guys in EA who were thinking about it for the last 10 years.”
No guarantee tighter regulation will yield results

Those close to the taskforce are said to have brushed off a recent piece in Politico, the Westminster-focused political website, that laid out the strong ties to EA. It focused on the controversial aspects of the movement but, as a source close to the process says: “The inside joke is that they’re not effective or altruists.”

Still, start-ups have raised concerns that the focus on existential risk could stifle innovation and hand control of AI to Big Tech. One lobbyist says that, counterintuitively, this obsession with risk could concentrate power in the hands of major AI labs such as OpenAI (the company behind ChatGPT), DeepMind and Anthropic (the bosses of the three labs held a closed-door meeting with Sunak in May).

Rishi Sunak meeting with Demis Hassabis, chief executive of DeepMind, Dario Amodei, chief executive of Anthropic, and Sam Altman, chief executive of OpenAI, in 10 Downing Street in May - Simon Walker / No 10 Downing Street

Hogarth has insisted these companies cannot be left to “mark their own homework”, but if government safety work ends up with something like a licensing regime for AI models, they are the most likely to benefit. “What we are witnessing is regulatory capture happening in real time,” the lobbyist says.

Baroness Stowell, the chair of the Lords communications and digital committee, has written to the Government demanding details on how Hogarth is managing potential conflicts of interest around his more than 50 AI investments, which include Anthropic and the defence company Helsing.

There is no guarantee that the current push for tighter regulation will yield results. Other past efforts have fallen by the wayside. Last week it emerged that the Government had disbanded the Centre for Data Ethics and Innovation Advisory Board, created five years ago to address areas such as AI bias.

However, those close to the current process believe the focus in Downing Street is now sharper. And to the clutch of researchers working on preventing the apocalypse, the existential risks are more important than other considerations.

“It’s a big opportunity for global Britain, a thing that the UK can actually lead on,” says Shabbir Merali, who developed AI strategy at the Foreign Office and now works at the think tank Onward. “It would be strange not to focus on existential risk - that’s where you want nation state capability to be.”

Green U-turns may sink Sunak bid for AI legacy

Simon Hunt
Fri, September 22, 2023 

The Prime Minister announced a series of delays to climate policy that will make it harder to hit legal targets (Justin Tallis/PA) (PA Wire)

Rishi Sunak this week delivered a hasty press conference at Downing Street where he unveiled a bonfire of green regulation.

The move was roundly condemned by big business leaders, who complained that they had already poured oodles of cash into their net-zero plans, and desperately needed certainty from government. Those who didn’t comment may be quietly breathing a sigh of relief that they have more time to go green.

But whether Sunak’s looser approach to environmental goals proves a vote winner or not, it is diminishing his status on the world stage. Former US vice president Al Gore led international criticism of the new approach, blasting it as “shocking and really disappointing”. “This is not what the world needs from the United Kingdom,” he said.

The Prime Minister is just weeks away from hosting an AI safety summit with world leaders and bosses of big tech. In UK tech circles, there are hopes that the Government can achieve, if not a formal treaty, at least some kind of memorandum of understanding on AI regulation that will establish this country as a global leader.

But those hopes are fading following his press conference. As one summit attendee put it to me, why would countries trust anything the UK says on its AI commitments, if less than two years on from hosting COP, the Government is already rowing back on its green pledges?

Sunak’s chances of winning the election look slim. His hopes of leaving a legacy should be pinned on a breakthrough agreement at his AI safety summit. Such a deal would make the UK the world’s number one AI destination, and might even avert the catastrophe that experts warn could ensue if artificial general intelligence, the most powerful form of AI, is left to develop unchecked. But he may have now scuppered it all with his latest U-turn.
