Reuters
Wed, November 1, 2023
AI Safety Summit in Bletchley
LONDON (Reuters) - Britain on Wednesday published a "Bletchley Declaration", agreed with countries including the United States and China, aimed at boosting global efforts to cooperate on artificial intelligence (AI) safety.
The declaration, by 28 countries and the European Union, was published on the opening day of the AI Safety Summit hosted at Bletchley Park, central England.
"The Declaration fulfils key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration," Britain said in a separate statement accompanying the declaration.
The declaration encouraged transparency and accountability from actors developing frontier AI technology on their plans to measure, monitor and mitigate potentially harmful capabilities.
"This is a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren," British Prime Minister Rishi Sunak said.
It set out a two-pronged agenda: identifying risks of shared concern and building the scientific understanding of them, and developing cross-country policies to mitigate them.
"This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research," the declaration said.
(Reporting by William James, writing by Farouq Suleiman, Editing by Sachin Ravikumar)
Meta exec and former U.K. Deputy Prime Minister compares AI fears to past ‘moral panic’ over video games—and bicycles
Ryan Hogg
Wed, November 1, 2023
Meta’s president of global affairs Nick Clegg
A Meta exec has moved to quell public fears about the capabilities of AI, calling it a “moral panic” akin to past fears about everything from video games to the bicycle.
Meta’s president of global affairs Nick Clegg warned against premature calls for regulation of the technology, the Times of London and the Guardian reported, speaking ahead of the landmark AI Safety Summit being hosted at Bletchley Park in the U.K. The summit is expected to focus on mitigating the potential harms of AI.
Elon Musk will speak with U.K. Prime Minister Rishi Sunak on Musk’s X platform about regulating AI. Major world leaders, including European Commission President Ursula von der Leyen, will be in attendance.
The summit follows an executive order signed Monday by U.S. President Joe Biden, which requires tech companies to develop strong safety standards for AI. Biden will not be attending the summit.
However, former U.K. Deputy Prime Minister Clegg will be one voice at Bletchley Park seeking to downplay growing concerns about AI, from the technology’s potential to steal jobs to its ability to manipulate humans.
Clegg said there was a “Dutch auction” around the risks, with detractors trying to outdo each other with the most outlandish theories of AI going wrong.
“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet,” Clegg said at an event in London, the Times reported.
“Ten years ago we were being told that by now there would be no truck drivers left because all cars will be entirely automated. To my knowledge in the U.S., there’s now a shortage of truck drivers.”
AI’s risks
Clegg, who joined Meta following a nearly two-decade political career in the U.K., has been on a charm offensive supporting the development of AI. This approach has largely involved downplaying the technology’s risks, as well as its capabilities.
In July, Clegg told BBC’s Today radio program that large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard were currently “quite stupid,” and fell “far short” of the level where they could develop autonomy.
In his role at Meta, Clegg is among a minority of tech execs unreservedly backing the potential for AI, pouring cold water on panic about the tech’s threats.
Meta made its own LLM, Llama 2, open source when it released it in July. The move was seen by proponents, including Meta, as one that would boost transparency and democratize information, preventing the tech from being gatekept by a few powerful companies.
However, detractors of the move worry that the information might be used by bad actors to proliferate AI’s harms. OpenAI open-sourced its code when it first launched ChatGPT but soon backpedaled. The company’s co-founder Ilya Sutskever told The Verge in an interview that open-sourcing AI was “just not wise.” Open-sourcing may be a key discussion point at this week’s U.K. AI Safety Summit.
Danger warnings
Other tech execs have been much more vocal about the wider risks of AI. In May, OpenAI co-founder Sam Altman penned a short letter alongside hundreds of other experts warning of the dangers of AI.
Musk and Apple co-founder Steve Wozniak were among 1,100 people who in March signed an open letter calling for a moratorium on the development of advanced AI systems.
However, Andrew Ng, one of the founding fathers of AI and co-founder of Google Brain, hinted that there might be ulterior motives behind tech companies’ warnings.
Ng taught Altman at Stanford and hinted that his former student may be trying to consolidate an oligopoly of powerful tech companies controlling AI.
“Sam was one of my students at Stanford. He interned with me. I don’t want to talk about him specifically because I can’t read his mind, but… I feel like there are many large companies that would find it convenient to not have to compete with open-sourced large language models,” Ng said in an interview with the Australian Financial Review.
Ng warned that proposed AI regulation was likely to stifle innovation, arguing that no regulation at all would be better than what was currently being proposed.
Geoffrey Hinton, a former Google engineer who quit to warn about AI’s dangers, questioned Ng’s comments about an apparent conspiracy among tech companies to stifle competition.
“Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat,” the so-called “Godfather of AI” posted on X, formerly Twitter.
This story was originally featured on Fortune.com
AI Doomers Take Center Stage at the UK’s AI Summit
Thomas Seal
Wed, November 1, 2023
(Bloomberg) -- A fierce debate over how much to focus on the supposed existential risks of artificial intelligence defined the kickoff of the UK’s AI Safety Summit on Wednesday, highlighting broader tensions in the tech community as lawmakers propose regulations and safeguards.
Tech leaders and academics attending the summit at Bletchley Park, the former home of secret World War II code-breakers, disagreed over whether to prioritize immediate risks from AI, such as fueling discrimination and misinformation, or concerns that it could lead to the end of human civilization.
Some attendees openly worried so-called AI doomers would dominate the proceedings — a fear compounded by news that Elon Musk would appear alongside British Prime Minister Rishi Sunak shortly after the billionaire raised the specter of AI leading to “the extinction of humanity” on a podcast. On Wednesday, the UK government also unveiled the Bletchley Declaration, a communique signed by 28 countries warning of the potential for AI to cause “catastrophic harm.”
“I hope that it doesn’t get dominated by the doomer, X-risk, ‘Terminator’-scenario discourse, and I’ll certainly push the conversation towards practical, near-term harms,” said Aidan Gomez, co-founder and chief executive officer of AI company Cohere Inc., ahead of the summit.
Top tech executives spent the week trading rhetorical blows over the subject. Meta Platforms Inc.’s chief AI scientist Yann LeCun accused rivals, including DeepMind co-founder Demis Hassabis, of playing up existential risks of the technology in an attempt “to perform a regulatory capture” of the industry. Hassabis then hit back in an interview with Bloomberg on Wednesday, calling the criticisms preposterous.
On the summit’s fringes, Ciaran Martin, the former head of the UK’s National Cyber Security Centre, said there’s “genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it’s a series of individual, sometimes-serious problems, that need to be managed.”
“While the undertones of that debate are running through all of the discussions,” Martin said, “I think there’s an acceptance from virtually everybody that the international, public and private communities need to do both. It’s a question of degree.”
In closed-door sessions at the summit, there were discussions about whether to pause the development of next-generation “frontier” AI models and the “existential threat” this technology may pose “to democracy, human rights, civil rights, fairness, and equality,” according to summaries published by the British government late Wednesday.
Between seminars, Musk was “mobbed” and “held court” with delegates from tech companies and civil society, according to a diplomat. But during a session about the risks of losing control of AI, he quietly listened, according to another attendee, who said the seminar was nicknamed the “Group of Death.”
Matt Clifford, a representative of the UK Prime Minister who helped organize the summit, tried to square the circle, suggesting the disagreement over AI risks wasn’t such a dichotomy.
“This summit’s not focused on long-term risk; this summit’s focused on next year’s models,” he told reporters on Wednesday. “How do we address potentially catastrophic risks — as it says in the Bletchley Declaration — from those models?” he said. “The ‘short term, long term’ distinction is very often overblown.”
By the end of the summit’s first day, there were some signs of a rapprochement between the two camps. Max Tegmark, a professor at the Massachusetts Institute of Technology who had previously called for a pause in the development of powerful AI systems, said “this debate is starting to melt away.”
“Those who are concerned about existential risks, loss of control, things like that, realize that to do something about it, they have to support those who are warning about immediate harms,” he said, “to get them as allies to start putting safety standards in place.”
Bloomberg Businessweek
SEE
Countries at UK summit pledge to tackle AI's potentially 'catastrophic' risks
Tom Cruise’s ‘Mission: Impossible — Dead Reckoning’ Inspired President Biden to Bolster Security Against AI Threats
Bletchley Declaration: Nations sign agreement for safe and responsible AI advancement
'AI' named Collins Word of the Year
Rishi Sunak first world leader to say AI poses threat to humanity
Historic UK codebreaking base to host ‘world first’ AI safety summit
The case for taking AI seriously as a threat to humanity: why some people fear AI, explained