Thursday, November 02, 2023

Countries at UK summit pledge to tackle AI's potentially 'catastrophic' risks


Britain's Prime Minister Rishi Sunak welcomes US Vice President Kamala Harris to 10 Downing Street in London, Wednesday, Nov. 1, 2023. Harris is on a two-day visit to England to attend the AI Summit at Bletchley Park. (AP Photo/Kirsty Wigglesworth)


Wu Zhaohui, third left, Chinese Vice Minister of Science and Technology, looks on as other delegates take their places for the family photo during the AI Safety Summit in Bletchley Park, Milton Keynes, England, Wednesday, Nov. 1, 2023. (AP Photo/Alastair Grant)


US Vice President Kamala Harris waves before delivering a policy speech on the Biden-Harris Administration's vision for the future of Artificial Intelligence (AI), at the US Embassy in London, Wednesday, Nov. 1, 2023. Harris is on a two-day visit to England to attend the AI Summit at Bletchley Park. (AP Photo/Kin Cheung)


Tesla and SpaceX's CEO Elon Musk attends the first plenary session of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence. (Leon Neal/Pool Photo via AP)

A delegate takes a selfie with Tesla and SpaceX's CEO Elon Musk during the first plenary session of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. (Toby Melville/Pool Photo via AP)



Britain's Michelle Donelan, Secretary of State for Science, Innovation and Technology, left, listens to China's Vice Minister of Science and Technology Wu Zhaohui speak during the first plenary session of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. (Leon Neal/Pool Photo via AP)



Mustafa Suleyman, co-founder and CEO of Inflection AI, speaks to journalists during the AI Safety Summit in Bletchley Park, Milton Keynes, England, Wednesday, Nov. 1, 2023. (AP Photo/Alastair Grant)


Britain's Michelle Donelan, Secretary of State for Science, Innovation and Technology, right, and Wu Zhaohui, Chinese Vice Minister of Science and Technology, shake hands prior to the AI Safety Summit in Bletchley Park, Milton Keynes, England, Wednesday, Nov. 1, 2023. (AP Photo/Alastair Grant)



Britain's Michelle Donelan, Secretary of State for Science, Innovation and Technology, sixth right in the front row, with digital ministers attending the AI Safety Summit in Bletchley Park, Milton Keynes, England, Wednesday, Nov. 1, 2023. (AP Photo/Alastair Grant)


Yoshua Bengio, Scientific Director of the Mila Quebec AI Institute, speaks to The Associated Press during the AI Safety Summit in Bletchley Park, Milton Keynes, England, Wednesday, Nov. 1, 2023. (AP Photo/Alastair Grant)




KELVIN CHAN and JILL LAWLESS
Wed, November 1, 2023 

BLETCHLEY PARK, England (AP) — Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.

The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity's very existence.

British Prime Minister Rishi Sunak said the declaration was “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren.”

But U.S. Vice President Kamala Harris urged Britain and other countries to go further and faster, stressing the transformations AI is already bringing and the need to hold tech companies accountable — including through legislation.

In a speech at the U.S. Embassy, Harris said the world needs to start acting now to address “the full spectrum” of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

“There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential,” she said, citing examples such as a senior citizen kicked off his health care plan because of a faulty AI algorithm and a woman threatened by an abusive partner with deepfake photos.

The AI Safety Summit is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

Harris is due to attend the summit on Thursday, joining government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party.

Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work toward “shared agreement and responsibility” about AI risks, and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.

China's Vice Minister of Science and Technology, Wu Zhaohui, said AI technology is “uncertain, unexplainable and lacks transparency.”

“It brings risks and challenges in ethics, safety, privacy and fairness. Its complexity is emerging," he said, noting that Chinese President Xi Jinping last month launched the country's Global Initiative for AI Governance.

“We call for global collaboration to share knowledge and make AI technologies available to the public under open source terms,” he said.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a conversation to be streamed on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also attending the meeting at Bletchley Park, a former top secret base for World War II codebreakers that’s seen as a birthplace of modern computing.

Attendees said the closed-door meeting's format has been fostering healthy debate. Informal networking sessions are helping to build trust, said Mustafa Suleyman, CEO of Inflection AI.

Meanwhile, at formal discussions “people have been able to make very clear statements, and that’s where you see significant disagreements, both between countries of the north and south (and) countries that are more in favor of open source and less in favor of open source," Suleyman told reporters.

Open source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open source system has been released, “anybody can use it and tune it for malicious purposes,” Bengio said on the sidelines of the meeting.

“There's this incompatibility between open source and security. So how do we deal with that?"

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.

In contrast, Harris stressed the need to address the here and now, including “societal harms that are already happening such as bias, discrimination and the proliferation of misinformation.”

She pointed to President Joe Biden’s executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest.

Harris also encouraged other countries to sign up to a U.S.-backed pledge to stick to “responsible and ethical” use of AI for military aims.

“President Biden and I believe that all leaders … have a moral, ethical and social duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” she said.

___

Lawless reported from London.


Watch as world leaders gather for second day of AI summit at Bletchley Park

Oliver Browning
Thu, November 2, 2023 

Watch as Britain hosts a global summit on artificial intelligence at Bletchley Park, inviting political leaders and tech bosses to try to agree an approach to the fast-developing technology.

Losing control of AI is the biggest concern surrounding the technology, the technology secretary said on Thursday 2 November, the second day of the summit.

Michelle Donelan said a Terminator-style scenario was a “potential area” where AI development could lead but “there are several stages before that”.

She was speaking to Times Radio from Bletchley Park, where the government has convened delegates from around the world alongside tech firms and civil society to discuss the risks of the advancing technology.

Ms Donelan said the government has a responsibility to manage the potential risks, but also said AI offered “humongous benefits”.

“We have convened countries across the globe, companies that are working in this space producing that cutting-edge AI and also academics, scientists, experts from all over the world to have a conversation and work out, ‘OK, what are the risks?’” she said.

“How can we work together in a long-term process so that we can really tackle this and get the benefits for humanity, not just here in the UK, but across the globe?”


Meta's Yann LeCun joins 70 others in calling for more openness in AI development

Paul Sawers
Updated Wed, November 1, 2023


On the same day the U.K. gathered some of the world's corporate and political leaders into the same room at Bletchley Park for the AI Safety Summit, more than 70 signatories put their name to a letter calling for a more open approach to AI development.

"We are at a critical juncture in AI governance," the letter, published by Mozilla, notes. "To mitigate current and future harms from AI systems, we need to embrace openness, transparency and broad access. This needs to be a global priority."

Much like what has gone on in the broader software sphere for the past few decades, a major backdrop to the burgeoning AI revolution has been open versus proprietary -- and the pros and cons of each. Over the weekend, Facebook parent Meta's chief AI scientist Yann LeCun took to X to decry efforts from some companies, including OpenAI and Google's DeepMind, to secure "regulatory capture of the AI industry" by lobbying against open AI R&D.

"If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote.

And this is a theme that continues to permeate the growing governance efforts emerging from the likes of President Biden's executive order and the AI Safety Summit hosted by the U.K. this week. On one hand, heads of large AI companies warn about the existential threats AI poses, arguing for example that open source AI can be manipulated by bad actors to more easily create chemical weapons; on the other hand, counterarguments posit that such scaremongering merely helps concentrate control in the hands of a few protectionist companies.
Proprietary control

The truth is probably somewhat more nuanced than that, but it's against that backdrop that dozens of people put their name to an open letter today, calling for more openness.

"Yes, openly available models come with risks and vulnerabilities -- AI models can be abused by malicious actors or deployed by ill-equipped developers," the letter says. "However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.

Esteemed AI researcher LeCun -- who joined Meta 10 years ago -- attached his name to the letter, alongside numerous other notable names including Google Brain and Coursera co-founder Andrew Ng, Hugging Face co-founder and CTO Julien Chaumond and renowned technologist Brian Behlendorf from the Linux Foundation.

Specifically, the letter identifies three main areas where openness can help safe AI development, including through enabling greater independent research and collaboration, increasing public scrutiny and accountability, and lowering the barriers to entry for new entrants to the AI space.

"History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation," the letter notes. "Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there."

A ‘world-first’ AI agreement, Elon Musk and backlash from tech community: The UK's AI summit
Pascale Davies
Wed, November 1, 2023


International governments signed a “world-first” agreement on artificial intelligence (AI) at a global summit in the United Kingdom to combat the "catastrophic" risks the technology could present.

Tech experts, global leaders and representatives from across 27 countries and the European Union are attending the UK’s AI Safety Summit, which runs from Wednesday until Thursday at Bletchley Park, once home to Second World War codebreakers.

The UK announced it would invest in an AI supercomputer, while the Tesla and X boss Elon Musk said on the sidelines of the event that AI is "one of the biggest threats to humanity".

However, many in the tech community signed an open letter calling for a spectrum of approaches — from open source to open science — and for scientists, tech leaders and governments to work together.

Here are the key takeaways from the event.
The AI agreement

The Bletchley Declaration on AI safety is a statement signed by representatives and companies of 28 countries, including the US, China, and the EU. It aims to tackle the risks of so-called frontier AI models - the large language models developed by companies such as OpenAI.

The UK government called it a “world-first” agreement between the signatories, which aims to identify the “AI safety risks of shared concern” and build “respective risk-based policies across countries”.

It warns frontier AI, which is the most sophisticated form of the technology that is being used in generative models such as ChatGPT, has the "potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models".

An exterior view shows the mansion house at Bletchley Park museum in the town of Bletchley in Buckinghamshire, England, Jan. 15, 2015. - Matt Dunham/Copyright 2023 The AP.

The UK’s Secretary of State for Science, Innovation and Technology Michelle Donelan said the agreement was a “landmark achievement” and that it “lays the foundations for today’s discussions”.

However, experts argue the agreement does not go far enough.

"Bringing major powers together to endorse ethical principles can be viewed as a success, but the undertaking of producing concrete policies and accountability mechanisms must follow swiftly," Paul Teather, CEO of AI-enabled research firm AMPLYFI, told Euronews Next.

"Vague terminology leaves room for misinterpretation while relying solely on voluntary cooperation is insufficient toward sparking globally recognised best practices around AI".

More AI summits

The UK government also announced that there would be future AI safety summits.

South Korea will host another “mini virtual” summit on AI in the next six months and France will host the next in-person AI summit next year.
Who said what?

Billionaire tech entrepreneur Elon Musk arrived at the summit and kept quiet during the talks but warned about the risks of AI.

"We’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us.”


Musk, who co-founded the ChatGPT developer OpenAI and has launched a new venture called xAI, said there should be a “referee” for tech companies but that regulation should be implemented with caution.

“I think what we’re aiming for here is... first, to establish that there should be a referee function, I think there should.

"And then, you know, be cautious in how regulations are applied, so you don’t go charging in with regulations that inhibit the positive side of AI."

Musk will speak with British Prime Minister Rishi Sunak later on Thursday on his platform X, formerly Twitter.
Ursula von der Leyen

European Commission chief Ursula von der Leyen said AI came with both risks and opportunities, noting how quantum physics led to nuclear energy but also to societal risks such as the atomic bomb.

European Commission President Ursula von der Leyen arrives for a plenary session at the AI Safety Summit at Bletchley Park in Milton Keynes, England, Thursday, Nov. 2, 2023. - Alastair Grant/Copyright 2023 The AP. All rights reserved

"We are entering a completely different era. We are now at the dawn of an era where machines can act intelligently. My wish for the next five years is that we learn from the past, and act fast!" she said.

Von der Leyen called for a system of objective scientific checks and balances, with an independent scientific community, and for AI safety standards that are accepted worldwide.

She said the EU's AI Act is in the final stages of the legislative process. She also said the potential of a European AI Office is being discussed which could "deal with the most advanced AI models, with responsibility for oversight" and would cooperate with similar entities around the world.
Kamala Harris

US Vice President Kamala Harris said that action was needed now to address “the full spectrum” of AI risks and not just “existential” fears about threats of cyber attacks or the development of bioweapons.

US Vice President Kamala Harris, with husband Second Gentleman Douglas Emhoff, arrives at Stansted Airport for her visit to the UK to attend the AI safety summit. - Joe Giddens/AP

“There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential,” she said at the US embassy in London.
King Charles III

Britain’s King Charles III sent a video message in which he compared the development of AI to the significance of splitting the atom and harnessing fire.

He said AI was “one of the greatest technological leaps in the history of human endeavour” and said it could help “hasten our journey towards net zero and realise a new era of potentially limitless clean green energy”.

But he warned: “We must work together on combatting its significant risks too”.
Backlash from the tech community

Meta's president of global affairs Nick Clegg said there was "moral panic" over new technologies, indicating government regulations could face backlash from tech companies.

“New technologies always lead to hype,” Clegg said. “They often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics.

“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet.”

Mark Surman, president and executive director of the Mozilla Foundation, the organisation behind the Firefox browser, also raised concerns that the summit was a world-stage platform for private companies to push their interests.

Mozilla published an open letter on Thursday, signed by academics, politicians and employees of private companies, in particular Meta, as well as Nobel Peace Prize laureate Maria Ressa.

"We have seen time and again that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst," Surman said in comments to Euronews Next.

"We’re asking policymakers to invest in a range of approaches - from open source to open science - in the race to AI safety. Open, responsible and transparent approaches are critical to keep us safe and secure in the AI era," he added.
A new AI supercomputer

The United Kingdom announced it will invest £225 million (€257 million) in a new AI supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Kingdom Brunel.

It will be built at the University of Bristol, in south-west England, and the UK government said it would be 10 times faster than the UK’s current quickest machine.

The government hopes that, alongside another recently announced UK supercomputer called Dawn, it will help achieve breakthroughs in fusion energy, health care and climate modelling.

Both computers are expected to be up and running by next summer.
The UK’s ambitions

It is no secret that Sunak wants the UK to be a leader in AI, but it is unclear how the technology will be regulated, and other countries are already setting their own AI rules. There is stiff competition from the US, China and the EU.

President Joe Biden said “America will lead the way during this period of technological change” after signing an AI executive order on October 30. Meanwhile, the EU is also trying to set its own set of AI guidelines.

Britain's Prime Minister Rishi Sunak speaks to journalists upon his arrival for the second day of the UK Artificial Intelligence (AI) Safety Summit, at Bletchley Park. - Justin Tallis/Pool Photo via AP

However, unlike the EU, the UK has said it does not plan to adopt new legislation to regulate AI but would instead require the existing regulators in the UK to be responsible for AI in their sectors.

China too has been pushing through its own rules governing generative AI.

The country’s vice minister of technology Wu Zhaohui said at the summit China would contribute to an “international mechanism [on AI], broadening participation, and a governance framework based on wide consensus delivering benefits to the people, and building a community with a shared future for mankind."


China, US, UK unite behind AI safety at summit

Reuters Videos
Wed, November 1, 2023 

STORY: "And here we are for the first time really in human history with something that's going to be far more intelligent than us.”

Elon Musk expressed grave concern about the rapid development of artificial intelligence on Wednesday at the world's first major summit on AI safety.

“I do think it's one of the existential risks that we're facing, potentially the most pressing one."

Musk said the aim of the inaugural two-day summit was to establish what he called a "third-party referee" to observe AI development and to sound the alarm if needed.

Fears about the impact AI could have on economies and society took off last year when Microsoft-backed OpenAI made ChatGPT available to the public.

Some worry that, in time, machines could achieve greater intelligence than humans, resulting in unintended consequences.

In a first for Western efforts to manage the dangers, China's vice minister of science and technology joined U.S. and EU leaders, as well as tech bosses at England’s Bletchley Park, home of Britain's World War Two code-breakers.

It was here the countries signed the ‘Bletchley Declaration’ – an agenda focused on identifying issues with AI and developing policies to mitigate them.

“I firmly believe that we must be guided by a common set of understandings among nations.”

In a speech at the U.S. Embassy in London, U.S. Vice President Kamala Harris called for urgent global action to address potential threats posed by AI:

“From AI-enabled cyber attacks at a scale beyond anything we've seen before to AI-formulated bioweapons that could endanger the lives of millions of people, these threats are often referred to as the existential threats of AI because, of course, they could endanger the very existence of humanity.”

The United States used the British summit to announce it would establish a new AI Safety Institute, which will assess potential risks.

Harris's decision to give her speech and hold some meetings with attendees away from the summit raised some eyebrows, with some executives and lawmakers suggesting Washington was trying to overshadow Prime Minister Rishi Sunak's summit.

British officials denied that, saying they wanted as many voices as possible.

And, later in the day, Sunak welcomed Harris to 10 Downing Street for dinner. She plans to attend the British summit on Thursday.

AI's most famous leaders are in a huge fight after one said Big Tech is cynically exaggerating the risk of AI wiping out humanity

Hasan Chowdhury
Wed, November 1, 2023 

Meta's Chief AI Scientist Yann LeCun has lashed out at those pushing claims that AI is an extinction threat.
Kevin Dietsch/Getty Images

Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans.


His comments appear to be aimed at AI leaders such as DeepMind's Demis Hassabis and OpenAI's Sam Altman.


AI's biggest names are all piling in.


Some of the biggest figures in artificial intelligence are publicly arguing whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech.

Andrew Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit.

"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," Ng said. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

Ng didn't name names, but figures who have pushed this line include Elon Musk; Ng's one-time student and OpenAI cofounder Sam Altman; DeepMind cofounder Demis Hassabis; AI pioneer and fellow ex-Googler Geoffrey Hinton; and computer scientist Yoshua Bengio.

These discussions around AI's impact on society have come to the fore after the arrival of scary-smart generative AI tools such as ChatGPT.

Hinton, a British-Canadian computer scientist considered one of AI's godfathers, shot back at Ng and doubled down.

"Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy," he wrote in a post on X. "A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat."

Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.

"You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models," he wrote on X to Hinton.

LeCun has become increasingly concerned that regulation designed to quell the so-called extinction risks of AI might kill off the field's burgeoning open-source community. He warned over the weekend that "a small number of companies will control AI" if their attempt at regulatory capture succeeds.

Meredith Whittaker, president of messaging app Signal and chief advisor to the AI Now Institute, said those claiming AI was an existential risk were pushing a "quasi-religious ideology" that is "unmoored from scientific evidence."

"This ideology is being leveraged by Big Tech to promote their products/shore up their position," Whittaker wrote on X. Whittaker and others argue that Big Tech benefits from scaremongering about hypothetical risks as a distraction from more immediate real
world issues, such as copyright theft and putting workers out of jobs.


Politicians commit to collaborate to tackle AI safety, US launches safety institute

Ingrid Lunden
Updated Wed, November 1, 2023 


The world's governments are locked in a race, and competition, over dominance in AI, but today a few of them appeared to come together to say that they would prefer to collaborate when it comes to mitigating risk.

Speaking at the AI Safety Summit in Bletchley Park in England, the U.K. minister of technology, Michelle Donelan, announced a new policy paper, called the Bletchley Declaration, which aims to reach global consensus on how to tackle the risks that AI poses now and in the future as it develops. She also said that the summit is going to become a regular, recurring event: another gathering is scheduled to be held in Korea in six months, she said, and one more in France six months after that.

As with the tone of the conference itself, the document published today is relatively high level.

"To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible," the paper notes. It also calls attention specifically to the kind of large language models being developed by companies like OpenAI, Meta and Google and the specific threats they might pose for misuse.

"Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models," it noted.

Alongside this, there were some concrete developments.

Gina Raimondo, the U.S. secretary of commerce, announced a new AI safety institute that would be housed within the Department of Commerce and specifically underneath the department's National Institute of Standards and Technology (NIST).

The aim, she said, would be for this organization to work closely with the AI safety groups set up by other governments, including the safety institute that the U.K. also plans to establish.

"We have to get to work and between our institutes we have to get to work to [achieve] policy alignment across the globe," Raimondo said.

Political leaders in the opening plenary today spanned not just representatives from the biggest economies in the world, but also a number speaking for developing countries, collectively the Global South.

The lineup included Wu Zhaohui, China's Vice Minister of Science and Technology; Vera Jourova, the European Commission Vice President for Values and Transparency; Rajeev Chandrasekhar, India's minister of state for Electronics and Information Technology; Omar Sultan al Olama, UAE Minister of State for Artificial Intelligence; and Bosun Tijani, technology minister in Nigeria. Collectively, they spoke of inclusivity and responsibility, but with so many question marks hanging over how that gets implemented, whether their dedication translates into action remains to be seen.

"I worry that a race to create powerful machines will outpace our ability to safeguard society," said Ian Hogarth, a founder, investor and engineer, who is currently the chair of the U.K. government's task force on foundational AI models, who has had a big hand to play in putting together this conference. "No one in this room knows for sure how or if these next jumps in compute power will translate into benefits or harms. We’ve been trying to ground [concerns of risks] in empiricism and rigour [but] our current lack of understanding… is quite striking.

"History will judge our ability to stand up to this challenge. It will judge us over what we do and say over the next two days to come."

AI summit brings Elon Musk and world leaders to Bletchley Park

Danny Fullbrook - BBC News
Wed, November 1, 2023 

The two-day summit will be held at Bletchley Park, near Milton Keynes, where codebreakers hastened the end of the Second World War

This week political leaders, tech industry figures and academics will meet at Bletchley Park for a two-day summit on artificial intelligence (AI). The location is significant as it was here that top British codebreakers cracked the "Enigma Code", hastening the end of World War Two. So what can we expect from this global event?

Who is attending the AI summit at Bletchley Park?


Elon Musk and Rishi Sunak will take part in an interview together on Thursday

There is no public attendee list, but some well-known names have indicated they will appear.

About 100 world leaders, leading AI experts and tech industry bosses will attend the two-day summit at the stately home on the edge of Milton Keynes.

The US Vice President, Kamala Harris, and European Commission (EC) President Ursula von der Leyen are expected to attend.

Deputy Prime Minister Oliver Dowden told BBC Radio 4 that China accepted an invite, but added: "you wait and see who actually turns up".

Tech billionaire Elon Musk will attend ahead of a live interview with UK Prime Minister Rishi Sunak on Thursday evening.

The BBC also understands OpenAI's Sam Altman and Meta's Nick Clegg will join the gathering - as well as a host of other tech leaders.

Experts such as Prof Yann LeCun, Meta's chief AI scientist, are also understood to be there.

The government said getting these people in the same room at the same time to talk at all is a success in itself - especially if China does show up.

What will be discussed and why does it matter?


Earlier this week Prime Minister Rishi Sunak warned AI could help make it easier to build chemical and biological weapons

The government has said the purpose of the event is to consider the risks of AI and discuss how they could be mitigated.

These global talks aim to build an international consensus on the future of AI.

There is concern that frontier AI models pose safety risks if not developed responsibly, despite their potential to drive economic growth, scientific progress and other public benefits.

Some argue the summit has got its priorities wrong.

Instead of doomsday scenarios, which they believe pose a comparatively small risk, they want a focus on more immediate threats from AI.

Prof Gina Neff, who runs an AI centre at the University of Cambridge, said: "We're concerned about what's going to happen to our jobs, what's going to happen to our news, what's going to happen to our ability to communicate with one another".

Professor Yoshua Bengio, who is considered one of the "Godfathers" of AI, suggested a registration and licensing regime for frontier AI models - but acknowledged that the two-day event may need to focus on "small steps that can be implemented quickly."

What are the police doing?


Police have increased their presence in the run up to the world event

Thames Valley Police has dedicated significant resources to the event, providing security for both attendees and the wider community.

Those resources include the police's mounted section, drone units, automatic number plate recognition officers and tactical cycle units.

The teams will assist the increased police presence on the ground ahead of the AI Summit.

People have been encouraged to ask officers any questions or raise any concerns when they see them.

Local policing area commander for Milton Keynes, Supt Emma Baillie, said she expected disruption to day-to-day life in Bletchley but hoped it would be kept to a minimum.

"As is natural, we rely on our community to help us," she said.

"Bletchley has a strong community, and I would ask anybody who sees anything suspicious or out of the ordinary, to please report this to us."


Security around the global event will be paramount


What is Bletchley Park famous for?


Alan Turing played a key role as part of the codebreaking team at Bletchley Park

The Victorian mansion at Bletchley Park served as the secret headquarters of Britain's codebreakers during World War Two.

Coded messages sent by the Nazis, including orders by Adolf Hitler, were intercepted and then translated by the agents.

Mathematician Alan Turing developed a machine, the bombe, which could decipher messages sent by the Nazi Enigma device.

By 1943, Turing's machines were cracking 84,000 messages each month - equivalent to two every minute.
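As a rough check on that figure, assuming a 30-day month:

84,000 messages ÷ (30 days × 24 hours × 60 minutes) ≈ 1.9 messages per minute, or roughly two every minute.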

The work of the codebreakers helped give the Allied forces the upper hand and their achievements have been credited with shortening the war by several years.

How will it affect Bletchley Park itself?


Blocks A and B in Bletchley Park near Milton Keynes, where Britain's finest minds worked during World War Two

Ian Standon, chief executive of Bletchley Park, said it was a "huge privilege and honour to be selected as the location for this very important summit."

The museum has had to close for a week until Sunday while the event takes place.

Temporary structures have appeared over recent weeks to host the many visitors for the summit.

Mr Standon praised his team for their hard work in preparing for the event, especially when dealing with added security over the next couple of days.

"We're in sort of security lockdown but that's a very small price to pay for the huge amount of publicity we're going to get out of this particular project," he said.

"For us at Bletchley Park this is an opportunity to put the place and its story on the world stage and hopefully people around the world will now understand and recognise what Bletchley Park is all about."


Everything you're hearing right now about AI wiping out humans is a big con

Beatrice Nolan
Wed, November 1, 2023 


Doomsayers want us all to believe an AI coup could happen, but industry pioneers are pushing back.

Many are shrugging off the supposed existential risks of AI, labeling them a distraction.

They argue big tech companies are using the fears to protect their own interests.


You've heard a lot about AI wiping out humanity. From AI godfathers to leading CEOs, there's been a seemingly never-ending flood of warnings about how AI will be our enemy, not our friend.

Here's the thing: not only is an AI coup unlikely, but the idea of one is conveniently being used to distract you from more pressing issues, according to numerous AI pioneers who have recently spoken out.

Two experts, including Meta's chief AI scientist, have dismissed the concerns as distractions, pointing the finger at tech companies attempting to protect their own interests.

AI godfather Yann LeCun, Meta's chief AI scientist, accused some of the most prominent founders in AI of "fear-mongering" and "massive corporate lobbying" to serve their own interests. He said much of the doomsday rhetoric was about keeping control of AI in the hands of a few.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote. "They are the ones who are attempting to perform a regulatory capture of the AI industry."

Google DeepMind's Demis Hassabis told CNBC he disagreed with many of LeCun's remarks, adding it was important to start the conversation about regulating superintelligence early.

Representatives for OpenAI's Sam Altman and Anthropic's Dario Amodei did not immediately respond to Insider's request for comment.

Andrew Ng, an adjunct professor at Stanford University and cofounder of Google Brain, took a similar view over the weekend.

He told the Australian Financial Review that some companies were using the fears around AI to assert their own market dominance.

The outlet reported that he said some large tech companies didn't want to compete with open-source alternatives and were hoping to squash competition with strict regulation triggered by AI extinction fears.

Several AI experts have long disputed some of the more far-fetched warnings.

It hasn't helped that the statements issued by various centers — and backed by prominent AI leaders — have been notably vague, leaving many struggling to make sense of the dramatic claims.

One 23-word statement backed by the CEOs of OpenAI, DeepMind, and Anthropic drew a largely unexplained link between the rise of advanced AI and threats to human existence like nuclear war and pandemics.

The timing of the pushback, ahead of the UK's AI safety summit and following Biden's recent executive order on AI, is also significant.

More experts are warning that governments' preoccupation with the existential risks of AI is taking priority over the more immediate threats.

Aidan Gomez, an author of a research paper that helped create the technology behind chatbots, told The Guardian that while the more existential threats posed by AI should be "studied and pursued," they posed a "real threat to the public conversation."

"I think in terms of existential risk and public policy, it isn't a productive conversation to be had," he said. "As far as public policy and where we should have the public-sector focus — or trying to mitigate the risk to the civilian population — I think it forms a distraction, away from risks that are much more tangible and immediate."

Merve Hickok, the president of the Center for AI and Digital Policy, raised similar concerns about the UK AI safety summit's emphasis on existential risk.

Hickok told Insider that while the event "was initially born out of a commitment to promote democratic values," it now has a "narrow focus on safety and existential risk," which risks sidelining other pressing concerns to civil society.

In a letter addressed to UK Prime Minister Rishi Sunak, the center encouraged the UK government to include more pressing topics "such as bias, equity, fairness, and accountability" in the meeting agenda.

The UK government said the event, which will be opened by technology secretary Michelle Donelan, would set out its "vision for safety and security to be at the heart of advances in AI, in order to enable the enormous opportunities it will bring."


The UK AI summit's narrow focus on safety and existential risk means the real issues are being ignored, an AI ethicist says

Beatrice Nolan
Business Insider
Thu, November 2, 2023 

The UK's AI safety summit kicks off on Wednesday.


Some have already slammed the event for ignoring several key issues.


Several groups have criticized the UK government's focus on the existential risks of AI.


The UK's AI safety summit kicked off on Wednesday, but the event is already surrounded by criticism.

Several groups have criticized the UK government's emphasis on some of the more existential risks of AI, which they say sidelines other, perhaps more pressing, concerns.

Merve Hickok, the president of the Center for AI and Digital Policy, told Insider the summit began with a shared commitment with the US to work together for democratic AI values.

"Then somehow, after the Prime Minister's meetings with tech companies, it started focusing narrowly only on the existential crisis as defined by AGI taking over," she said

Not only has the focus been narrowed, she added, but the people at the table are mostly major tech companies.

"Civil society and other voices and communities are sidelined," she said. "Also, all the concerns about existing AI systems which are impacting our fundamental rights are sidelined as well."

Hickok is not the only one to raise concerns about the current rhetoric around AI safety.

Two leading experts have dismissed existential threats and some of the doomsday scenarios. Both have suggested big tech companies might have something to gain from inflating the more dramatic fears around AI.

Hickok's Center for AI and Digital Policy wrote to the UK prime minister, Rishi Sunak, earlier in the month to urge him to include pressing issues of "bias, equity, fairness, and accountability" in the meeting agenda.

Hickok added that the event's narrow focus risked sidelining these other threats to civil society.

"The UK should not let the AI safety agenda displace the AI fairness agenda," she said.


The 'stakes are too high' to ignore extinction risks of AI, AI godfather warns

Beatrice Nolan
Updated Thu, November 2, 2023 


AI godfather Yoshua Bengio says the risks of AI should not be underplayed.


In an interview with Insider, Bengio criticized peers dismissing AI's threat to humanity.


His remarks come after Meta's Yann LeCun accused Bengio and AI founders of "fear-mongering."

Claims by Meta's chief AI scientist, Yann LeCun, that AI won't wipe out humanity are dangerous and wrong, according to one of his fellow AI godfathers.

"Yann and others are making claims that this is not a problem, but he doesn't know — nobody knows," Yoshua Bengio, the Canadian computer scientist and deep learning pioneer, told Insider Thursday. "I think it's dangerous to make these claims without any strong evidence that it can't happen."

LeCun, the storied French computer scientist who now leads Meta's AI lab, sparked a furious debate on X last weekend after accusing some of the most prominent founders in AI of "fear-mongering" with the ultimate goal of controlling the development of artificial intelligence.

LeCun argued that by overstating the apparently far-fetched idea that AI will wipe out humans, these CEOs could influence governments to bring in punitive regulation that would hurt their competition.

"If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote.

Bengio, who once worked with LeCun at Bell Labs and was co-awarded the Turing Award with him in 2018 for their work in deep learning, told Insider that LeCun was too dismissive of the risks.

"Yann himself agreed that it was plausible we would reach human-level capabilities in the future, which could take a few years to a few decades and I completely agree with the timeline," he said. "I think there's too much uncertainty, and the stakes are too high."

Bengio has said in the past that current AI systems are not anywhere close to posing an existential risk but warned things could get "catastrophic" in the future.

AI's leading lights are unlikely to come to a consensus any time soon.

Andrew Ng, cofounder of Google Brain, said this week that big tech was over-inflating the existential risks of AI to squash competition from the open-source community.

As AI's biggest names subsequently began piling in, LeCun went on to call out his fellow Turing Award winners Geoffrey Hinton and Bengio in another post.

In a response to Hinton, who has claimed AI poses an extinction risk, LeCun wrote on X: "You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models."

Bengio did warn that governments need to ensure they aren't only listening to tech companies when formulating regulation.

"In terms of regulation, they should listen to independent voices and make sure that the safety of the public and the ethical considerations are at center stage," he said.

"Existential risk is one problem but the concentration of power, in my opinion, is the number two problem," he said.

Elon Musk is coming to the UK's big AI safety party. Some people actually building AI say they got frozen out.

Tom Carter
Wed, November 1, 2023

The UK's AI summit is underway. Both Elon Musk and OpenAI's Sam Altman are attending.

Some AI experts and startups say they've been frozen out in favor of bigger tech companies.

They warn that the "closed door" event risks ensuring that AI is dominated by select companies.


A group of AI startups and industry experts are warning that their exclusion from a major AI summit risks ensuring that a handful of tech companies have future dominance over the new technology.

The UK's AI safety summit, which begins Wednesday at the WWII code-breaking facility Bletchley Park, has attracted a glitzy mix of tech execs and political leaders, from OpenAI's Sam Altman and Microsoft President Brad Smith to US Vice President Kamala Harris. Tesla CEO and Twitter owner Elon Musk is also attending.

The exclusive guest list has raised eyebrows, with some AI industry experts and labor groups warning that the event risks pandering to a group of big tech companies and ignoring others who are at the center of the AI boom.

Iris.ai founder Victor Botev, whose company has been building AI products since 2015, told Insider that startups had been frozen out of the summit in favor of bigger tech companies.

"Smaller AI firms and open-source developers often pioneer new innovations, yet their voices on regulation go unheard," he said.

"It is vital for any consultation on AI regulation to include perspectives beyond just the tech giants. The summit missed a great opportunity by only including 100 guests, who are primarily made up of world leaders and big tech companies," he added.

It comes after Yann LeCun, Meta's chief AI scientist, who is also expected to attend the event, accused AI companies like OpenAI, Anthropic, and DeepMind of "fear-mongering" and "massive corporate lobbying" to ensure that AI remains in the hands of a small collection of companies.

The UK's AI summit aims to bring together AI experts, tech bosses, and world leaders to discuss the risks of AI and find ways to regulate the new technology.

It has faced criticism for focusing too much on the existential threats that could be posed by hypothetical superintelligent AI, with UK Prime Minister Rishi Sunak warning that humanity could "lose control" of the technology.

"It is far from certain whether the AI summit will have any lasting impact," Ekaterina Almasque, a general partner at European venture capital firm OpenOcean, which invests in AI, told Insider.

"It looks likely to focus mostly on bigger, long-term risks from AI, and far less on what needs to be done today to build a thriving AI ecosystem," she added.

Almasque said that much of the AI start-up community, which will bear the brunt of any regulation proposed at the summit, had been "shut out" of the event, and warned that this had to change in the future if AI regulation was to succeed.

"Going forward, we must have more voices for startups themselves. The AI Safety Summit's focus on Big Tech, and the shutting out of many in the AI start-up community, is disappointing.

"It is vital that industry voices are included when shaping regulations that will directly impact technological development," she added.

A spokesperson for the UK government's Department for Science, Innovation, and Technology – organizing the summit – told Insider that there will be a range of attendees from "international governments, academia, industry, and civil society."

"These attendees are the right mix of expertise and willingness to be part of the discussions," they said.

Workers' groups such as the UK's Trades Union Congress and the American Federation of Labor and Congress of Industrial Organizations, which represents 12.5 million US workers, have also criticized the summit. AI is expected to have a huge impact on many white-collar jobs, with Goldman Sachs warning earlier this year that over 300 million jobs could be affected by the new technology.

An open letter signed by over 100 individuals and labor groups said that the AI summit was a "closed door" event that was prioritizing big tech companies over groups feeling the impact of generative AI now, like small businesses and artists.

"The communities and workers most affected by AI have been marginalized by the Summit," they said.

The signatories also described the limited guest list as a "missed opportunity," and warned that the conference's focus on AI's hypothetical existential threats risked missing the point.

"As it stands, it is a closed-door event, overly focused on speculation about the remote 'existential risks' of 'frontier' AI systems; systems built by the very same corporations who now seek to shape the rules," they said.

Elon Musk says AI means eventually no one will need to work

Lakshmi Varanasi
Thu, November 2, 2023 


UK Prime Minister Rishi Sunak and Elon Musk chatted about AI at the close of the UK's AI Safety Summit.

Musk said advances in AI will lead to a world where "no job is needed."

Musk also suggested we'll have "universal high income" instead of just universal basic income.

People may be fretting about how the coming AI job apocalypse will impact them, but Elon Musk has a pretty utopian view of how AI will reshape the labor market.

Musk said that advances in AI will simply lead to a world "where no job is needed," in a conversation with UK Prime Minister Rishi Sunak at the close of the UK's inaugural AI Safety Summit on Thursday. Of course, people can still hold a job "for personal satisfaction," but one day, AI "will be able to do everything," Musk said.

And how exactly will people support themselves in this new, AI-powered world?

"You'll have universal high income," Musk told Sunak, presenting it as a superior alternative to universal basic income — one of Silicon Valley's dream solutions to income inequality — without specifying exactly how the two concepts differed.

"You'll rarely ask for anything," he said, outlining a "future of abundance" where there would be no scarcity of goods and services. As a result, AI will function as somewhat of an economic equalizer, he said, especially because it'll be accessible to everyone.

At the same time, he suggested that there might be "somewhat of the magic genie problem," so people will need to be careful about exactly what they "wish for," he said. Musk has been outspoken about the need to regulate AI and was among the list of tech execs and AI researchers who signed an open letter calling for a pause on AI development. During his discussion with Sunak, he offered solutions ranging from an "off switch" to a keyword for putting humanoid robots into a safe state.

Still, his verdict — at least at the end of Thursday's conversation — was that AI is likely to be 80% good and 20% bad.
