Thursday, November 02, 2023

Danish energy company Orsted cancels New Jersey wind projects



Nov. 1 (UPI) -- Danish energy company Orsted announced Tuesday that it has canceled two wind energy projects that were planned for New Jersey.

The projects, Ocean Wind 1 and 2, were intended to be built about 15 miles off the coast of southern New Jersey as part of the Biden administration's efforts to promote clean energy.

The company cited supply-chain problems as the main reason for canceling the projects.

"This is a consequence of additional supplier delays further impacting the project schedule and leading to an additional significant project delay. In addition, Orsted has updated its view on certain assumptions, including tax credit monetization and the timing and likelihood of final construction permit," Orsted said in a statement Tuesday.

Orsted leadership said the company remains committed to the goal of creating renewable energy for U.S. markets.

"We are extremely disappointed to announce that we are ceasing the development of Ocean Wind 1 and 2. We firmly believe the U.S. needs offshore wind to achieve its carbon emissions reduction ambition, and we remain committed to the US renewables market and truly value the efforts by the US government to support the build-up of the US offshore wind industry," said Orsted CEO Mads Nipper.

New Jersey Gov. Phil Murphy expressed disappointment in Orsted's decision, saying it "calls into question the company's credibility and competence."

Murphy said the company recently had indicated it was prepared to move forward with the project.

"As recently as several weeks ago, the company made public statements regarding the viability and progress of the Ocean Wind 1 project," Murphy said.

Murphy's administration had tried to create a financial incentive for the Ocean Wind projects by passing a law that allowed Orsted to keep tax incentives initially intended to offset consumer costs.

Murphy said he has directed his administration "to review all legal rights and remedies and to take all necessary steps to ensure that Orsted fully and immediately honors its obligations."

The decision to scrap the projects comes as the Biden administration this week announced the approval of the largest offshore wind project to date.

"The Biden-Harris administration today announced its approval of the Coastal Virginia Offshore Wind (CVOW) commercial project -- the fifth approval of a commercial-scale offshore wind energy project under President Biden's leadership," the Bureau of Ocean Energy Management said in a press release Tuesday.


Wind industry deals with blowback from Orsted scrapping 2 wind power projects in New Jersey

WAYNE PARRY
Updated Wed, November 1, 2023

A worker opens a section of street in Ocean City, N.J., on Sept. 12, 2023, at the start of land-based probing along the right-of-way where a power cable for New Jersey's first offshore wind farm was proposed to run. Orsted scrapped the project on Oct. 31, 2023, citing supply chain problems and high interest rates. (AP Photo/Wayne Parry)

ATLANTIC CITY, N.J. (AP) — Wind energy developer Orsted is writing off $4 billion, due largely to the cancellation of two large offshore wind projects in New Jersey whose financial challenges mirror those facing the nascent industry.

It added fresh uncertainty to an industry seen by supporters as a way to help end the burning of planet-warming fossil fuels, but derided by opponents as inherently unworkable without massive financial subsidies.

The Danish company said Tuesday night it is scrapping its Ocean Wind I and II projects off the coast of southern New Jersey due to problems with supply chains, higher interest rates, and a failure to obtain the amount of tax credits the company wanted.

“These are obviously some very tough decisions,” Mads Nipper, Orsted's CEO, said on an earnings conference call Wednesday.

He said the company, the world's largest offshore wind developer, decided “to de-risk the most painful part of our portfolio, and that is the U.S.”

That statement went straight to the heart of concerns about the financial viability of the offshore wind industry in the northeastern U.S., which is in its infancy but has extensive plans from New England to the Carolinas.

Some projects already have been canceled, and many offshore wind developers are seeking better terms from governments with whom they have already contracted. New York rejected such a request two weeks ago.

New Jersey approved a tax break for Orsted in July, letting it keep federal tax credits that otherwise would have gone to ratepayers.

“While periodic local opposition in the U.S. made some headlines, these projects ultimately come down to economics, so higher costs and lower power prices are working against offshore wind,” said Louis Knight, an analyst at Third Bridge, a research firm advising private equity and other businesses. “Higher interest rates are adding to financing costs for these projects. There are other, cheaper ways to develop power in the U.S., most notably with solar and natural gas.”

But the main appeal of offshore wind for supporters, including environmentalists, many state governments and the Biden administration, is precisely that it is not a fossil fuel business. The Northern Hemisphere experienced its hottest summer ever measured this year, according to the World Meteorological Organization and the European climate service Copernicus.

“The urgency to transition to clean, renewable energy is an irreversible reality,” read a statement signed Wednesday by nearly 40 environmental, labor and community groups from New Jersey who support offshore wind, including the state's chapter of the Sierra Club. “In a world of warming temperatures and extreme weather in likely the hottest year on record, maintaining the status quo of fossil fuel generation is not an option as the cost of climate inaction is undeniably high.”

Orsted's stock price was down over 26% at midday Wednesday. The company said it hopes to re-use some supplies it has already purchased, such as cable and steel, on other projects.

Power generated from the Orsted projects was intended to come ashore and connect with the electrical grid at the site of a former coal-fired power plant that was blown up last week.

The industry also faces stiff political headwinds, in New Jersey and nationally, most of it from Republicans, who have convinced the U.S. Government Accountability Office to look into the industry.

Rep. Jeff Van Drew, a Republican who represents the area in southern New Jersey where Orsted's wind farms would have been built, exulted in the decision to scrap the projects.

“David defeated Goliath!” he said in a statement late Tuesday night, calling wind farms bad for the economy, the environment and electric customers.

Numerous resident groups also opposed the projects, citing similar concerns, and said they do not want to see the ocean horizon dotted with wind turbines.

“Without billions of dollars in tax breaks and subsidies, these projects never made sense and could not stand on their own,” said Robin Shaffer, a spokesman for Protect Our Coast NJ, one of the most vocal opposition groups.

Despite the challenges, some wind projects are moving forward. Orsted said it is proceeding with its Revolution Wind project in Connecticut and Rhode Island.

In Virginia, a utility’s plans for an enormous wind farm off that state’s coast gained key federal approval Tuesday. Dominion Energy received a favorable “record of decision” from federal regulators who reviewed the potential environmental impact of its plan to build 176 turbines in the Atlantic, more than 20 miles (32 kilometers) off Virginia Beach.

Pro-wind groups including the American Clean Power Association and the Oceantic Network acknowledged the setback posed by Orsted's cancellations. But both were heartened by progress on the Virginia project and Orsted's decision to continue with Revolution Wind, and both said the future of the industry is promising.

And New Jersey still has several other offshore wind projects in various stages of development, with four new proposals submitted in August alone. They join Atlantic Shores, the one remaining project of the three originally approved by the state, a joint venture of Shell New Energies US and EDF Renewables North America.

Atlantic Shores said Wednesday it remains committed to its project, though it hinted in a statement that it, too, is seeking additional help.

“We are actively engaging in conversations with the administration, regulators, and elected leaders across New Jersey to identify viable solutions that will not only preserve the progress made thus far, but also facilitate the successful execution of Atlantic Shores Project 1,” the company said.

___

Follow Wayne Parry on X, formerly Twitter, at www.twitter.com/WayneParryAC


Energy company pulls the plug on two major offshore wind projects on East Coast

Ella Nilsen, CNN
Wed, November 1, 2023 


Danish wind developer Orsted is halting the development of two massive New Jersey offshore wind projects due to cascading economic pressures, including skyrocketing interest rates and a supply chain crunch – two factors that have dogged wind energy projects up and down the East Coast.

The decision is ominous news for a nascent sector that could play a key role in solving the climate crisis, and one that is still trying to find its wings in the US, even as other major economies steam forward. It also deals a blow to President Joe Biden’s clean energy goals, which hinge in part on the massive potential for electricity generated from offshore wind.

Orsted Americas CEO David Hardy said the company was “extremely disappointed” to pull the plug on its development.

“Macroeconomic factors have changed dramatically over a short period of time, with high inflation, rising interest rates, and supply chain bottlenecks impacting our long-term capital investments,” Hardy said in a statement.

“As a result, we have no choice but to cease development of Ocean Wind 1 and Ocean Wind 2.”


A company statement blamed long wait times for supplies needed to build the project and rising interest rates in the US. In addition to a strain on supplies like monopiles and other components, there are long wait times for the ships needed to construct the towering wind turbines in the ocean.

Mads Nipper, Orsted’s CEO, said in a statement the company will “now assess the best way to preserve value while we cease development of the projects.”

New Jersey Gov. Phil Murphy, a major proponent of the projects, blasted the company’s decision.

“Today’s decision by Orsted to abandon its commitments to New Jersey is outrageous and calls into question the company’s credibility and competence,” Murphy said in a statement, pointing to the company’s recent public statements about the project’s viability.

“I have directed my Administration to review all legal rights and remedies and to take all necessary steps to ensure that Orsted fully and immediately honors its obligations.”

Part of the reason the American industry has been slow to get off the ground – particularly compared to offshore-wind juggernauts like Europe and China – is that US developers are essentially building it from scratch, industry experts have told CNN. And a combination of incredibly tight supply chains, a lack of vessel availability and rising interest rates have made the first major US offshore wind projects very difficult to build.

Despite the Biden administration’s friendly posture toward offshore wind, the number of active turbines in US waters is still in the single digits, and the energy output lags significantly behind solar and onshore wind. There will be about 140 gigawatts of solar (including both utility scale and rooftop) installed in the US by the end of this year, Sam Huntington, director of S&P Global Commodity Insights’ North American power team, told CNN earlier this year, while offshore wind comprises a tiny 42 megawatts.

Two commercial-scale offshore wind projects – Vineyard Wind off the coast of Massachusetts and South Fork Wind off the coast of New York – are under active construction. The Biden administration announced earlier this week it approved plans for Dominion Energy to build what would be the largest offshore wind farm to date in the US off the coast of Virginia. The Dominion project, known as Coastal Virginia Offshore Wind, is planned to be a 2.6-gigawatt wind farm that could eventually generate enough electricity to power over 900,000 homes.

Before the Dominion announcement, Ocean Wind 1 was the largest project the administration had approved – expected to generate 1.1 gigawatts, enough to power over 380,000 homes. At the time, administration officials including Interior Secretary Deb Haaland had praised the project’s federal approval as a “milestone.”

White House spokesperson Michael Kikukawa reiterated there is still “momentum” for the US offshore wind industry.

“While macroeconomic headwinds are creating challenges for some projects, momentum remains on the side of an expanding U.S. offshore wind industry,” Kikukawa said in a statement, “creating good-paying union jobs in manufacturing, shipbuilding, and construction; strengthening the power grid; and providing new clean energy resources for American families and businesses.”

Biden's offshore wind agenda relied on plans that are being canceled

Timothy Puko, (c) 2023, The Washington Post
Wed, November 1, 2023
 


A slew of canceled offshore wind projects and contracts have jeopardized the Biden administration's push to expand the new industry and its promise of clean power for coastal states.

The latest setback came Tuesday night when the Danish developer Orsted said it is scrapping two large projects off the southern coast of New Jersey. Ocean Wind 1 and 2 became too expensive because of rising interest rates and competition for limited supplies and equipment, the company said. The projects had also become a hot-button political issue, with local grass-roots opposition and campaigns tied to fossil-fuel interests.

The Biden administration had planned an expansion to generate 30 gigawatts of offshore wind power by the end of the decade - projects that, once built, could not be undone by future administrations - with enough energy to power more than 10 million American homes and cut 78 million metric tons of carbon dioxide emissions. East Coast states saw their construction as a way to spur new, union-friendly manufacturing jobs, adding more appeal for the administration's policymakers.

But the nascent industry has encountered numerous obstacles this year, with companies in recent weeks also moving to end contracts to sell power to utilities in Massachusetts and Connecticut. Those cancellations have helped put into limbo more than half of the offshore power under development nationwide - 18 gigawatts, largely along the Atlantic coast, according to a tally from Timothy Fox, vice president of research at ClearView Energy Partners. The rising costs and unprofitability of these developments - and tepid interest in more of them - could dash President Biden's hopes for offshore wind power, Fox and several industry lobbyists said.

"That tipping point is already past," Fox said. "With the cancellation of three different projects that are very huge, it seems unlikely."

Fox added the White House goal deserves credit for helping spark the industry's development to date. A White House spokesman, Michael Kikukawa, said in an email that the administration's strategy is producing results, including $7.7 billion in investment from the offshore wind industry since last summer, when the president signed the Inflation Reduction Act, which boosted climate spending.

"While macroeconomic headwinds are creating challenges for some projects, momentum remains on the side of an expanding U.S. offshore wind industry," Kikukawa said in a statement.

Kikukawa noted that last week New York announced what it called the state's largest-ever investment in offshore wind. That announcement - which came after the state had declined to renegotiate existing offshore wind power contracts - commits hundreds of millions of dollars in new spending to several projects. And the Interior Department on Tuesday granted environmental approvals to the largest project in the country, a 2.6-gigawatt development from Dominion Energy, more than 20 miles off the coast of Virginia Beach.

In its Tuesday announcement, Orsted said it will still move forward with its Revolution Wind project, a smaller joint venture to send power to Rhode Island and Connecticut. The onshore part of its construction was already underway, and, unlike with other projects, company executives say they can ensure timely access to the giant vessels needed to build these sites in the ocean, leaving them to believe Revolution Wind can still eke out profits.

Other Orsted projects in New York and Maryland are still in limbo, with talks with officials in those states ongoing, the company said in its statement. Orsted said that it is still committed to the industry but that its portfolio is also still under review, with more decisions due in the coming months.

"Macroeconomic factors have changed dramatically over a short period of time, with high inflation, rising interest rates, and supply chain bottlenecks impacting our long-term capital investments," David Hardy, chief executive of Orsted's Americas division, said in a statement. "We are extremely disappointed to have to take this decision, particularly because New Jersey is poised to be a U.S. and global hub for offshore wind energy."

Ocean Wind had been divided into two developments, the first of which alone included up to 98 wind turbines the size of skyscrapers about 15 miles offshore and enough new power for a half-million homes. Ocean Wind 2 would have doubled that capacity, going up to more than 2.2 gigawatts combined.

New Jersey Democrats had called Ocean Wind vital for meeting a state goal of reducing overall greenhouse gas emissions by 80 percent by 2050. State leaders have also attempted to make New Jersey into an industry hub, and Gov. Phil Murphy (D) on Tuesday slammed Orsted, pledging to ensure the company pays $300 million he said it promised to pay to support the offshore wind sector if it backed out of Ocean Wind.

"Today's decision by Orsted to abandon its commitments to New Jersey is outrageous and calls into question the company's credibility and competence," Murphy said in a statement. "I remain committed to ensuring that New Jersey becomes a global leader in offshore wind - which is critical to our economic, environmental, and clean energy future."

Ocean Wind had faced stiff opposition, especially from Republicans in coastal Cape May County. That included a lawsuit Protect Our Coast NJ filed against Orsted and the state in late July to block a tax break for the wind farm.

Leaders with the group said Wednesday that there is excitement and relief after Orsted's decision but also concern that the project will get revived. They asserted that offshore wind development could threaten fisheries and marine mammals, claims that contradict the conclusions of leading scientists.

"I'm optimistic, but we're just going to have a measured response to it," said Barbara McCall, a Protect Our Coast NJ board member.

Orsted also faced soaring costs. Offshore wind projects are multibillion-dollar investments, with full payoffs decades away. So interest rate hikes that governments have used to fight inflation have hit capital-intensive businesses, such as offshore wind, especially hard. That led Orsted to reduce the estimated value of its assets by $900 million in the first nine months of 2023, company executives said Wednesday.

And with countries on both sides of the Atlantic Ocean trying to boost offshore wind, developers have pursued too many projects for the available supplies and equipment to build them. In Orsted's earnings call Wednesday, chief executive Mads Nipper said the giant vessels that build these wind farms were the central problem for Ocean Wind 1. Finding available vessels would have meant several years of delays and would have forced the company to redo many other contracts, probably increasing costs almost across the board, he said.

"It meant massive impact because of the uncertainty," Nipper said.

Company executives said Wednesday that Orsted would take a total write-down equaling $4 billion for the year through September, the vast majority of it tied to Ocean Wind 1 and 2. Company shares plummeted by 25 percent in trading Wednesday after that announcement.

In an interview with The Washington Post published Tuesday, the recently departed No. 2 at the Interior Department, Tommy Beaudreau, said the offshore wind industry has a long history of gradual growth despite major challenges. He also noted several other industries facing similar turmoil tied to the economy's rebound from the pandemic.

"It has overcome challenges before, and I truly believe it will overcome some of the economic challenges we're seeing today," Beaudreau told The Climate 202.

Lobbyists and ClearView's Fox said the Treasury Department is one avenue the Biden administration could use to improve the industry's fortunes. The department is still setting rules for tax breaks created by Congress in last year's climate-spending package, and one of its biggest remaining decisions applies to offshore wind.

The rules will affect how wind developers qualify for credits designed to boost the amount of steel and other materials used from U.S. manufacturers, and how much the wind-power supply chains will spur job growth. Developers and their advocates in Washington say the proposed rules are too restrictive. Looser rules could provide hundreds of millions of dollars more for each project, making many more of them viable, they said.

There are "real challenges with getting a new industry off the ground," the American Clean Power Association, a renewable-energy trade group, said in a statement Wednesday. "The news is also a reminder that we need strong partnerships with government and stakeholders, now more than ever."


Republicans cheer cancellation of New Jersey offshore wind projects

Rachel Frazin
Wed, November 1, 2023



Republicans are cheering the cancellation of two offshore wind farms that would have been built off the coast of New Jersey.

The company Orsted announced Tuesday it was canceling its planned Ocean Wind 1 and 2 projects, which would have generated electricity via wind in the Atlantic Ocean.

The move was met with cheers from the GOP, including former President Trump, who has been particularly critical of wind energy over the years.

In a post on Truth Social, Trump criticized the projects as “horrendous” and congratulated GOP Rep. Jeff Van Drew (N.J.) on their defeat.

“Congratulations to a truly great Congressman, Jeff Van Drew, for his perseverance and success in defeating the horrendous Orsted Ocean Wind One & Two projects, which were to be built off the coast of South Jersey. This monstrosity required massive government subsidies, and ultimately, just didn’t work,” Trump wrote.

The former president has often railed against wind energy, including by spouting unfounded claims it causes cancer.

Van Drew, a former Democrat, also was among those who celebrated the cancellation. He wrote in a post on X, formerly known as Twitter, he was glad Orsted “has decided to pack up its offshore wind scam and leave South Jersey’s beautiful coasts alone.”

He called the move a “tremendous win for South Jersey residents, our fisherman, and the historic coastline of the Jersey shore.”

Rep. Chris Smith (R-N.J.) was also among those cheering the cancellation — saying he hoped other projects would similarly falter.

“Orsted’s decision was a first step in exposing the economic unsustainability and environmental dangerousness of ocean wind turbines … and Orsted’s pulling out of the deal may help slow and eventually halt similar projects off New Jersey’s coast,” Smith said in a statement.

The company cited supply chain challenges that led to delays in construction and rising interest rates as its reasons for canceling the projects.

Orsted CEO Mads Nipper said in a statement the company still “firmly believe[s] the US needs offshore wind to achieve its carbon emissions reduction ambition.”

The projects had been supported by New Jersey Governor Phil Murphy (D), who authorized a tax break for them earlier this year.

In a written statement, Murphy blasted the decision to walk away from the projects as “outrageous” and said it “calls into question the company’s credibility and competence.”

In New Jersey, the popularity of offshore wind has fallen in recent years, according to Monmouth University polling, which found that just over half of New Jerseyans support it now, compared to more than three-quarters in 2019.

The Biden administration has embraced offshore wind as a method for combating climate change, saying it hopes that by 2030, the U.S. will generate enough offshore wind energy to power 10 million homes.

Researchers hope tracking senior Myanmar army officers can ascertain blame for human rights abuses

GRANT PECK
Wed, November 1, 2023 




FILE - An alphabet book and a notebook lie on top of an elevated wooden floorboard of a middle school in Let Yet Kone village in Tabayin township in the Sagaing region of Myanmar on Sept. 17, 2022, the day after an air strike hit the school. A group of human rights researchers officially launched a website Wednesday, Nov. 1, 2023 that they hope will help get justice for victims of state violence in Myanmar, where one of the world’s less-noticed but still brutal armed struggles is taking place. 


BANGKOK (AP) — A group of human rights researchers officially launched a website Wednesday that they hope will help get justice for victims of state violence in Myanmar, where one of the world’s less-noticed but still brutal armed struggles is taking place.

Since the army seized power in February 2021 from the elected government of Aung San Suu Kyi, thousands of people have been killed by the security forces seeking to quash pro-democracy resistance. According to the United Nations, more than 1.8 million have been displaced by military offensives, which critics charge have involved gross violations of human rights.

War crimes have become easier to document in recent years thanks largely to the ubiquity of cellphone cameras and the near-universal access to social media, where photo and video evidence can easily be posted and viewed.

But it's harder to establish who is responsible for such crimes, especially generals and other high-ranking officers behind the scenes who make the plans and give the orders.

“Generals and lower ranking officers should fear being dragged before a court of law and imprisoned for crimes they ordered or authorized,” Tom Andrews, the United Nations special rapporteur on the situation of human rights in Myanmar, told the AP by email. “Research must move beyond merely stating the obvious — that crimes are occurring — and connect those who are responsible to specific atrocities. The victims of these crimes deserve justice and that will require the research necessary to hold those responsible fully accountable.”

The new website, myanmar.securityforcemonitor.org, is an interactive online version of a report, “Under Whose Command? — Human rights abuses under Myanmar’s military rule,” compiled by Security Force Monitor, a project of the Columbia Law School Human Rights Institute, to connect alleged crimes with their perpetrators.

The project’s team constructed a timeline of senior commanders and their postings, which can be correlated with documented instances of alleged atrocities that occurred under their commands. It exposes the army’s chain of command, identifying senior army commanders and showing the connections from alleged rights violations to these commanders, study director Tony Wilson told The Associated Press in an email interview.

“This is one of the pieces of the jigsaw that has up until now been missing in terms of accountability — demonstrating how the system works and that these abuses are not just the result of rogue units or individual soldiers,” he said.

Wilson said the Myanmar data show that 51 of the 79 senior army commanders serving between the end of March 2011 and the end of March this year - 65% - “had alleged disappearances, killings, rape or instances of torture committed by units under their command.”

He said the study also shows the officer with the most links to serious human rights violations is Gen. Mya Htun Oo, who became defense minister and a member of the ruling military council when the army seized power in 2021. He also became deputy prime minister in 2023.

The legal significance rests on the established doctrine of “command responsibility,” which allows military commanders to be prosecuted under international law for war crimes perpetrated by their subordinates.

“Establishing the command structures of militaries and other groups involved in atrocities is the lifeblood of properly conducting investigations into international crimes,” Mark Kersten, assistant professor of criminology and criminal justice at Canada’s University of the Fraser Valley, said in an email to The Associated Press. “When the Nazis were prosecuted at Nuremberg, the lead counsel for the Allies famously exclaimed that it was individuals who would be prosecuted for atrocities, and not abstract entities, namely states.”

Collecting evidence of human rights violations in Syria’s civil war has served as a guide to utilizing online information and technical advances to gather and organize evidence of war crimes. Similar projects have also been launched in other areas of conflict including Sudan, Yemen and Ukraine.

Since 2021, evidence-gathering groups have formed in Myanmar, including Myanmar Witness, an NGO that seeks to “collect, analyse, verify and store evidence related to human rights incidents … in a way that is compatible with future human rights prosecutions.”

Similar work for Myanmar has already been done by groups documenting the security forces’ brutal 2017 counterinsurgency campaign in the western part of the country that drove an estimated 740,000 Rohingya to seek safety across the border in Bangladesh. Several international tribunals have been considering charges of genocide and other crimes brought against the army for their activities.

“Our work can complement and feed into the work of documenting abuses,” said Wilson. “Because we always aim to map the entire police or military, our research can help make connections between what human rights groups have documented and the wider chain of command.”

He said the project relied on open-source information drawn from the work of national and international human rights organizations and local activists, as well as books, independent newspapers and the military’s own media outlets.

Its previous work includes research on the Mexican Army’s chain of command for a complaint to the International Criminal Court alleging crimes against humanity. Its methodology has been used to support the Syrian Archive’s submission of evidence to investigative and prosecutorial authorities in Germany, France and Sweden about the 2013 sarin gas attack on Khan Shaykhun.

“We’ve applied lessons learned from researching militaries around the world to our work in Myanmar, and we would not have been able to map the Myanmar Army without those lessons,” said Wilson.

Instagram working to let people make AI ‘friends’ to talk to


Andrew Griffin
Thu, November 2, 2023 


Instagram is working to let people create AI “friends” that they can talk to, code within the app appears to suggest.

It is just the latest hint that Meta is working on a variety of artificial intelligence tools to allow people to have conversations. Future uses might include letting people talk to AI versions of celebrities or with businesses, for instance.

But the new tool seems specifically aimed at allowing people to overcome boredom or loneliness by letting people choose their AI friend’s personality and then have conversations with them.

That’s according to app researcher Alessandro Paluzzi who found references to the new feature within the app. Instagram has declined to comment – and the company works on a range of features that never actually arrive within the public version of the app.

The code appears to suggest that people would choose their AI’s gender, age and ethnicity. They would then pick a personality from a range of characteristics, such as enthusiastic or witty.

The AI friend would then get a name and a face and users could chat with them through Instagram’s direct messages feature.

Similar tools are under way for other Meta platforms, such as WhatsApp and Messenger. And other apps such as Snapchat have worked on their own AI chatbots that can be messaged in the same way as normal friends.

Last month, Meta launched chatbots with a host of personalities. Celebrities including Snoop Dogg and Kendall Jenner lent their likenesses to chatbots that can be conversed with over different Meta messaging platforms.

Mark Zuckerberg has publicly said that Meta is working on a host of different AIs, pointed at different purposes, that will be available inside chat conversations. He said recently that the company doesn’t “think we necessarily want there to be one big super intelligence” and instead would build different artificial intelligence systems that can be spoken to in different contexts.

Some of those might be built for businesses, for instance, so that an AI can chat with people about issues such as refunds, or that creators could build artificially intelligent systems that mimic their personality so that fans can chat to them. But others might just be for speaking to as friends, he said.

“There are going to be a bunch of use cases that I think are just fun,” he said during a podcast with Lex Fridman. “I think there will be AIs that can tell jokes, so you can put them into a chat thread with friends. I think a lot of this, because we’re like a social company.

“I mean we’re fundamentally around helping people connect in different ways. Part of what I’m excited about is how do you enable these kind of AIs to facilitate connection between two people or more, put them in a group chat, make the group chat more interesting around whatever your interests are, sports, fashion, trivia.”

Countries at UK summit pledge to tackle AI's potentially 'catastrophic' risks


Photos from the AI Safety Summit at Bletchley Park, England, Wednesday, Nov. 1, 2023: Prime Minister Rishi Sunak welcoming U.S. Vice President Kamala Harris to 10 Downing Street; Harris delivering a policy speech on AI at the U.S. Embassy in London; China's Vice Minister of Science and Technology Wu Zhaohui with U.K. Science Secretary Michelle Donelan; Tesla and SpaceX CEO Elon Musk at the first plenary session; Inflection AI co-founder and CEO Mustafa Suleyman; and Yoshua Bengio, scientific director of the Mila Quebec AI Institute. (AP and pool photos)

KELVIN CHAN and JILL LAWLESS
Wed, November 1, 2023 

BLETCHLEY PARK, England (AP) — Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.

The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity's very existence.

British Prime Minister Rishi Sunak said the declaration was “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren.”

But U.S. Vice President Kamala Harris urged Britain and other countries to go further and faster, stressing the transformations AI is already bringing and the need to hold tech companies accountable — including through legislation.

In a speech at the U.S. Embassy, Harris said the world needs to start acting now to address “the full spectrum” of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

“There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential," she said, citing a senior citizen kicked off his health care plan because of a faulty AI algorithm or a woman threatened by an abusive partner with deep fake photos.

The AI Safety Summit is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

Harris is due to attend the summit on Thursday, joining government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party.

Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work toward “shared agreement and responsibility” about AI risks and to hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.

China's Vice Minister of Science and Technology, Wu Zhaohui, said AI technology is “uncertain, unexplainable and lacks transparency.”

“It brings risks and challenges in ethics, safety, privacy and fairness. Its complexity is emerging," he said, noting that Chinese President Xi Jinping last month launched the country's Global Initiative for AI Governance.

“We call for global collaboration to share knowledge and make AI technologies available to the public under open source terms,” he said.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a conversation to be streamed on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also attending the meeting at Bletchley Park, a former top secret base for World War II codebreakers that’s seen as a birthplace of modern computing.

Attendees said the closed-door meeting's format has been fostering healthy debate. Informal networking sessions are helping to build trust, said Mustafa Suleyman, CEO of Inflection AI.

Meanwhile, at formal discussions “people have been able to make very clear statements, and that’s where you see significant disagreements, both between countries of the north and south (and) countries that are more in favor of open source and less in favor of open source," Suleyman told reporters.

Open source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open source system has been released, “anybody can use it and tune it for malicious purposes,” Bengio said on the sidelines of the meeting.

“There's this incompatibility between open source and security. So how do we deal with that?"

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.

In contrast, Harris stressed the need to address the here and now, including “societal harms that are already happening such as bias, discrimination and the proliferation of misinformation.”

She pointed to President Joe Biden’s executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest.

Harris also encouraged other countries to sign up to a U.S.-backed pledge to stick to “responsible and ethical” use of AI for military aims.

“President Biden and I believe that all leaders … have a moral, ethical and social duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” she said.

___

Lawless reported from London.


Watch as world leaders gather for second day of AI summit at Bletchley Park

Oliver Browning
Thu, November 2, 2023 

Watch as Britain hosts a global summit on artificial intelligence at Bletchley Park, inviting political leaders and tech bosses to try to agree an approach to the fast-developing technology.

Losing control of AI is the biggest concern around the technology, the technology secretary said on Thursday 2 November, the second day of the summit.

Michelle Donelan said a Terminator-style scenario was a “potential area” where AI development could lead but “there are several stages before that”.

She was speaking to Times Radio from Bletchley Park, where the government has convened delegates from around the world alongside tech firms and civil society to discuss the risks of the advancing technology.

Ms Donelan said the government has a responsibility to manage the potential risks, but also said AI offered “humongous benefits”.

“We have convened countries across the globe, companies that are working in this space producing that cutting-edge AI and also academics, scientists, experts from all over the world to have a conversation and work out, ‘OK, what are the risks?’” she said.

“How can we work together in a long-term process so that we can really tackle this and get the benefits for humanity, not just here in the UK, but across the globe?”


Meta's Yann LeCun joins 70 others in calling for more openness in AI development

Paul Sawers
Updated Wed, November 1, 2023


On the same day the U.K. gathered some of the world's corporate and political leaders into the same room at Bletchley Park for the AI Safety Summit, more than 70 signatories put their name to a letter calling for a more open approach to AI development.

"We are at a critical juncture in AI governance," the letter, published by Mozilla, notes. "To mitigate current and future harms from AI systems, we need to embrace openness, transparency and broad access. This needs to be a global priority."

Much like what has gone on in the broader software sphere for the past few decades, a major backdrop to the burgeoning AI revolution has been open versus proprietary -- and the pros and cons of each. Over the weekend, Facebook parent Meta's chief AI scientist Yann LeCun took to X to decry efforts from some companies, including OpenAI and Google's DeepMind, to secure "regulatory capture of the AI industry" by lobbying against open AI R&D.

"If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote.

And this is a theme that continues to permeate the growing governance efforts emerging from the likes of President Biden's executive order and the AI Safety Summit hosted by the U.K. this week. On the one hand, heads of large AI companies are warning about the existential threats that AI poses, arguing that open source AI can be manipulated by bad actors to more easily create chemical weapons (for example), while on the other hand counterarguments posit that such scaremongering merely serves to concentrate control in the hands of a few protectionist companies.

Proprietary control

The truth is probably somewhat more nuanced than that, but it's against that backdrop that dozens of people put their name to an open letter today, calling for more openness.

"Yes, openly available models come with risks and vulnerabilities -- AI models can be abused by malicious actors or deployed by ill-equipped developers," the letter says. "However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.

Esteemed AI researcher LeCun -- who joined Meta 10 years ago -- attached his name to the letter, alongside numerous other notable names including Google Brain and Coursera co-founder Andrew Ng, Hugging Face co-founder and CTO Julien Chaumond and renowned technologist Brian Behlendorf from the Linux Foundation.

Specifically, the letter identifies three main areas where openness can support safe AI development: enabling greater independent research and collaboration, increasing public scrutiny and accountability, and lowering the barriers to entry for new entrants to the AI space.

"History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation," the letter notes. "Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there."

A ‘world-first’ AI agreement, Elon Musk and backlash from tech community: The UK's AI summit
Pascale Davies
Wed, November 1, 2023


International governments signed a “world-first” agreement on artificial intelligence (AI) at a global summit in the United Kingdom to combat the "catastrophic" risks the technology could present.

Tech experts, global leaders and representatives from across 27 countries and the European Union are attending the UK’s AI Safety Summit, which runs from Wednesday until Thursday at Bletchley Park, once home to Second World War codebreakers.

The UK announced it would invest in an AI supercomputer, while the Tesla and X boss Elon Musk said on the sidelines of the event that AI is "one of the biggest threats to humanity".

However, many in the tech community signed an open letter calling for a spectrum of approaches — from open source to open science and for scientists, tech leaders and governments to work together.

Here are the key takeaways from the event.

The AI agreement

The Bletchley Declaration on AI safety is a statement signed by representatives and companies of 28 countries, including the US, China, and the EU. It aims to tackle the risks of so-called frontier AI models - the large language models developed by companies such as OpenAI.

The UK government called it a “world-first” agreement between the signatories, which aims to identify the “AI safety risks of shared concern” and build “respective risk-based policies across countries”.

It warns that frontier AI - the most sophisticated form of the technology, used in generative models such as ChatGPT - has the "potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models".

An exterior view shows the mansion house at Bletchley Park museum in the town of Bletchley in Buckinghamshire, England, Jan. 15, 2015. (AP Photo/Matt Dunham)

The UK’s Secretary of State for Science, Innovation and Technology Michelle Donelan said the agreement was a “landmark achievement” and that it “lays the foundations for today’s discussions”.

However, experts argue the agreement does not go far enough.

"Bringing major powers together to endorse ethical principles can be viewed as a success, but the undertaking of producing concrete policies and accountability mechanisms must follow swiftly," Paul Teather, CEO of AI-enabled research firm AMPLYFI, told Euronews Next.

"Vague terminology leaves room for misinterpretation while relying solely on voluntary cooperation is insufficient toward sparking globally recognised best practices around AI".

More AI summits

The UK government also announced that there would be future AI safety summits.

South Korea will host another “mini virtual” summit on AI in the next six months and France will host the next in-person AI summit next year.

Who said what?

Billionaire tech entrepreneur Elon Musk arrived at the summit and kept quiet during the talks but warned about the risks of AI.

"We’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us.”


Musk, who co-founded the ChatGPT developer OpenAI and has launched a new venture called xAI, said there should be a “referee” for tech companies but that regulation should be implemented with caution.

“I think what we’re aiming for here is... first, to establish that there should be a referee function, I think there should.

"And then, you know, be cautious in how regulations are applied, so you don’t go charging in with regulations that inhibit the positive side of AI."

Musk will speak with British Prime Minister Rishi Sunak later on Thursday on his platform X, formerly Twitter.

Ursula von der Leyen

European Commission chief Ursula von der Leyen said AI brought both risks and opportunities, noting how quantum physics gave rise to nuclear energy but also to societal risks such as the atomic bomb.

European Commission President Ursula von der Leyen arrives for a plenary session at the AI Safety Summit at Bletchley Park in Milton Keynes, England, Thursday, Nov. 2, 2023. (AP Photo/Alastair Grant)

"We are entering a completely different era. We are now at the dawn of an era where machines can act intelligently. My wish for the next five years is that we learn from the past, and act fast!" she said.

Von der Leyen called for a system of objective scientific checks and balances, with an independent scientific community, and for AI safety standards that are accepted worldwide.

She said the EU's AI Act is in the final stages of the legislative process. She also said a potential European AI Office is under discussion, which could "deal with the most advanced AI models, with responsibility for oversight" and would cooperate with similar entities around the world.

Kamala Harris

US Vice President Kamala Harris said that action was needed now to address “the full spectrum” of AI risks and not just “existential” fears about threats of cyber attacks or the development of bioweapons.

US Vice President Kamala Harris, with husband Second Gentleman Douglas Emhoff, arrives at Stansted Airport for her visit to the UK to attend the AI safety summit. (Joe Giddens/AP)

“There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential,” she said at the US embassy in London.

King Charles III

Britain’s King Charles III sent a video message in which he compared the development of AI to the significance of splitting the atom and harnessing fire.

He said AI was “one of the greatest technological leaps in the history of human endeavour” and said it could help “hasten our journey towards net zero and realise a new era of potentially limitless clean green energy”.

But he warned: “We must work together on combatting its significant risks too”.

Backlash from the tech community

Meta's president of global affairs Nick Clegg said there was "moral panic" over new technologies, indicating government regulations could face backlash from tech companies.

“New technologies always lead to hype,” Clegg said. “They often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics.

“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet.”

Mark Surman, president and executive director of the Mozilla Foundation, the organisation behind the Firefox browser, also raised concerns that the summit was a world-stage platform for private companies to push their interests.

Mozilla published an open letter on Thursday, signed by academics, politicians and employees from private companies, in particular Meta, as well as Nobel Peace Prize laureate Maria Ressa.

"We have seen time and again that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst," Surman said in comments to Euronews Next.

"We’re asking policymakers to invest in a range of approaches - from open source to open science - in the race to AI safety. Open, responsible and transparent approaches are critical to keep us safe and secure in the AI era," he added.

A new AI supercomputer

The United Kingdom announced it will invest £225 million (€257 million) in a new AI supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Kingdom Brunel.

It will be built at the University of Bristol, in southern England, and the UK government said it would be 10 times faster than the UK’s current quickest machine.

The government hopes that, alongside another recently announced UK supercomputer called Dawn, the machine will deliver breakthroughs in fusion energy, health care and climate modelling.

Both computers are expected to be up and running next summer.
The UK’s ambitions

It is no secret that Sunak wants the UK to be a leader in AI, but it is unclear how the technology will be regulated there, and other countries are already setting their own AI rules. There is stiff competition from the US, China and the EU.

President Joe Biden said “America will lead the way during this period of technological change” after signing an AI executive order on October 30. Meanwhile, the EU is also drawing up its own AI guidelines.

Britain's Prime Minister Rishi Sunak speaks to journalists upon his arrival for the second day of the UK Artificial Intelligence (AI) Safety Summit, at Bletchley Park. - Justin Tallis/Pool Photo via AP

However, unlike the EU, the UK has said it does not plan to adopt new legislation to regulate AI but would instead require the existing regulators in the UK to be responsible for AI in their sectors.

China too has been pushing through its own rules governing generative AI.

The country’s vice minister of technology Wu Zhaohui said at the summit China would contribute to an “international mechanism [on AI], broadening participation, and a governance framework based on wide consensus delivering benefits to the people, and building a community with a shared future for mankind."


China, US, UK unite behind AI safety at summit

Reuters Videos
Wed, November 1, 2023 

STORY: "And here we are for the first time really in human history with something that's going to be far more intelligent than us.”

Elon Musk expressed grave concern about the rapid development of artificial intelligence on Wednesday at the world's first major summit on AI safety.

“I do think it's one of the existential risks that we're facing, potentially the most pressing one."

Musk said the aim of the inaugural two-day summit was to establish what he called a "third-party referee" to observe AI development and to sound the alarm if needed.

Fears about the impact AI could have on economies and society took off last year when Microsoft-backed OpenAI made ChatGPT available to the public.

Some worry that, in time, machines could achieve greater intelligence than humans, resulting in unintended consequences.

In a first for Western efforts to manage the dangers, China's vice minister of science and technology joined U.S. and EU leaders, as well as tech bosses at England’s Bletchley Park, home of Britain's World War Two code-breakers.

It was here the countries signed the ‘Bletchley Declaration’ – an agenda focused on identifying issues with AI and developing policies to mitigate them.

"I firmly believe that we must be guided by a common set of understandings among nations."

In a speech at the U.S. Embassy in London, U.S. Vice President Kamala Harris called for urgent global action to address potential threats posed by AI:

“From AI enabled cyber attacks at a scale beyond anything we've seen before to AI formulated bioweapons that could endanger the lives of millions of people, these threats are often referred to as the existential threats of AI because, of course, they could endanger the very existence of humanity.”

The United States used the British summit to announce it would establish a new AI Safety Institute, which will assess potential risks.

Harris's decision to give her speech and hold some meetings with attendees away from the summit raised some eyebrows, with some executives and lawmakers suggesting Washington was trying to overshadow Prime Minister Rishi Sunak's summit.

British officials denied that, saying they wanted as many voices as possible.

And, later in the day, Sunak welcomed Harris to 10 Downing Street for dinner. She plans to attend the British summit on Thursday.

AI's most famous leaders are in a huge fight after one said Big Tech is cynically exaggerating the risk of AI wiping out humanity

Hasan Chowdhury
Wed, November 1, 2023 

Meta's Chief AI Scientist Yann LeCun has lashed out at those pushing claims that AI is an extinction threat.
Kevin Dietsch/Getty Images

Andrew Ng, formerly of Google Brain, said Big Tech is exaggerating the risk of AI wiping out humans.


His comments appear aimed at AI leaders such as DeepMind's Demis Hassabis and OpenAI's Sam Altman.


AI's biggest names are all piling in.


Some of the biggest figures in artificial intelligence are publicly arguing over whether AI is really an extinction risk, after AI scientist Andrew Ng said such claims were a cynical play by Big Tech.

Andrew Ng, a cofounder of Google Brain, suggested to The Australian Financial Review that Big Tech was seeking to inflate fears around AI for its own benefit.

"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," Ng said. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

Ng didn't name names, but figures who have pushed this line include Elon Musk; Ng's one-time student and OpenAI cofounder Sam Altman; DeepMind cofounder Demis Hassabis; AI pioneer and fellow ex-Googler Geoffrey Hinton; and computer scientist Yoshua Bengio.

These discussions around AI's impact on society have come to the fore after the arrival of scary-smart generative AI tools such as ChatGPT.

Hinton, a British-Canadian computer scientist considered one of AI's godfathers, shot back at Ng and doubled down.

"Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy," he wrote in a post on X. "A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat."

Meta's chief AI scientist Yann LeCun, also known as an AI godfather for his work with Hinton, sided with Ng.

"You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models," he wrote on X to Hinton.

LeCun has become increasingly concerned that regulation designed to quell the so-called extinction risks of AI might kill off the field's burgeoning open-source community. He warned over the weekend that "a small number of companies will control AI" if their attempt at regulatory capture succeeds.

Meredith Whittaker, president of messaging app Signal and chief advisor to the AI Now Institute, said those claiming AI was an existential risk were pushing a "quasi-religious ideology" that is "unmoored from scientific evidence."

"This ideology is being leveraged by Big Tech to promote their products/shore up their position," Whittaker wrote on X. Whittaker and others argue that Big Tech benefits from scaremongering about hypothetical risks as a distraction from more immediate real
world issues, such as copyright theft and putting workers out of jobs.


Politicians commit to collaborate to tackle AI safety, US launches safety institute

Ingrid Lunden
Updated Wed, November 1, 2023 


The world's major powers are locked in a race for dominance in AI, but today a few of them appeared to come together to say that they would prefer to collaborate when it comes to mitigating risk.

Speaking at the AI Safety Summit in Bletchley Park in England, the U.K. minister of technology, Michelle Donelan, announced a new policy paper, called the Bletchley Declaration, which aims to reach global consensus on how to tackle the risks that AI poses now and in the future as it develops. She also said that the summit is going to become a regular, recurring event: another gathering is scheduled to be held in Korea in six months, she said, and one more in France six months after that.

As with the tone of the conference itself, the document published today is relatively high level.

"To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible," the paper notes. It also calls attention specifically to the kind of large language models being developed by companies like OpenAI, Meta and Google and the specific threats they might pose for misuse.

"Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models," it noted.

Alongside this, there were some concrete developments.

Gina Raimondo, the U.S. secretary of commerce, announced a new AI safety institute that would be housed within the Department of Commerce and specifically underneath the department's National Institute of Standards and Technology (NIST).

The aim, she said, would be for this organization to work closely with other AI safety groups set up by other governments, pointing to the Safety Institute that the U.K. itself plans to establish.

"We have to get to work and between our institutes we have to get to work to [achieve] policy alignment across the globe," Raimondo said.

Political leaders in the opening plenary today spanned not just representatives from the biggest economies in the world, but also a number speaking for developing countries, collectively the Global South.

The lineup included Wu Zhaohui, China's Vice Minister of Science and Technology; Vera Jourova, the European Commission Vice President for Values and Transparency; Rajeev Chandrasekhar, India's minister of state for Electronics and Information Technology; Omar Sultan al Olama, UAE Minister of State for Artificial Intelligence; and Bosun Tijani, technology minister in Nigeria. Collectively, they spoke of inclusivity and responsibility, but with so many question marks hanging over how that gets implemented, their dedication remains to be proven.

"I worry that a race to create powerful machines will outpace our ability to safeguard society," said Ian Hogarth, a founder, investor and engineer, who is currently the chair of the U.K. government's task force on foundational AI models, who has had a big hand to play in putting together this conference. "No one in this room knows for sure how or if these next jumps in compute power will translate into benefits or harms. We’ve been trying to ground [concerns of risks] in empiricism and rigour [but] our current lack of understanding… is quite striking.

"History will judge our ability to stand up to this challenge. It will judge us over what we do and say over the next two days to come."

AI summit brings Elon Musk and world leaders to Bletchley Park

Danny Fullbrook - BBC News
Wed, November 1, 2023 

The two-day summit will be held at Bletchley Park, near Milton Keynes, where codebreakers hastened the end of the Second World War

This week political leaders, tech industry figures and academics will meet at Bletchley Park for a two-day summit on artificial intelligence (AI). The location is significant as it was here that top British codebreakers cracked the "Enigma Code", hastening the end of World War Two. So what can we expect from this global event?

Who is attending the AI summit at Bletchley Park?


Elon Musk and Rishi Sunak will take part in an interview together on Thursday

There is no public attendee list, but some well-known names have indicated they will appear.

About 100 world leaders, leading AI experts and tech industry bosses will attend the two-day summit at the stately home on the edge of Milton Keynes.

The US Vice President, Kamala Harris, and European Commission (EC) President Ursula von der Leyen are expected to attend.

Deputy Prime Minister Oliver Dowden told BBC Radio 4 that China had accepted an invite, but added: "You wait and see who actually turns up".

Tech billionaire Elon Musk will attend ahead of a live interview with UK Prime Minister Rishi Sunak on Thursday evening.

The BBC also understands OpenAI's Sam Altman and Meta's Nick Clegg will join the gathering - as well as a host of other tech leaders.

Experts such as Prof Yann LeCun, Meta's chief AI scientist, are also understood to be there.

The government said getting these people in the same room at the same time to talk at all is a success in itself - especially if China does show up.

What will be discussed and why does it matter?


Earlier this week Prime Minister Rishi Sunak warned AI could help make it easier to build chemical and biological weapons

The government has said the purpose of the event is to consider the risks of AI and discuss how they could be mitigated.

These global talks aim to build an international consensus on the future of AI.

There is concern that frontier AI models pose safety risks if not developed responsibly, despite their potential to drive economic growth, scientific progress and other public benefits.

Some argue the summit has got its priorities wrong.

Instead of doomsday scenarios, which they believe are a comparatively small risk, they want a focus on more immediate threats from AI.

Prof Gina Neff, who runs an AI centre at the University of Cambridge, said: "We're concerned about what's going to happen to our jobs, what's going to happen to our news, what's going to happen to our ability to communicate with one another".

Professor Yoshua Bengio, who is considered one of the "Godfathers" of AI, suggested a registration and licensing regime for frontier AI models - but acknowledged that the two-day event may need to focus on "small steps that can be implemented quickly."

What are the police doing?


Police have increased their presence in the run up to the world event

Thames Valley Police has dedicated a range of resources to the event, providing security for both attendees and the wider community.

Those resources include the police's mounted section, drone units, automatic number plate recognition officers and tactical cycle units.

The teams will assist the increased police presence on the ground ahead of the AI Summit.

People have been encouraged to ask officers any questions or raise any concerns when they see them.

Local policing area commander for Milton Keynes, Supt Emma Baillie, said she expected disruption to day-to-day life in Bletchley but hoped it would be kept to a minimum.

"As is natural, we rely on our community to help us," she said.

"Bletchley has a strong community, and I would ask anybody who sees anything suspicious or out of the ordinary, to please report this to us."


Security around the global event will be paramount


What is Bletchley Park famous for?


Alan Turing played a key role as part of the codebreaking team at Bletchley Park

The Victorian mansion at Bletchley Park served as the secret headquarters of Britain's codebreakers during World War Two.

Coded messages sent by the Nazis, including orders by Adolf Hitler, were intercepted and then translated by the agents.

Mathematician Alan Turing developed a machine, the bombe, that could decipher messages sent by the Nazi Enigma device.

By 1943, Turing's machines were cracking 84,000 messages each month - equivalent to two every minute.

The work of the codebreakers helped give the Allied forces the upper hand and their achievements have been credited with shortening the war by several years.

How will it affect Bletchley Park itself?


Blocks A and B in Bletchley Park near Milton Keynes, where Britain's finest minds worked during World War Two

Ian Standon, chief executive of Bletchley Park, said it was a "huge privilege and honour to be selected as the location for this very important summit."

The museum has had to close for a week until Sunday while the event takes place.

Temporary structures have appeared over recent weeks to host the many visitors for the summit.

Mr Standon praised his team for their hard work in preparing for the event, especially when dealing with added security over the next couple of days.

"We're in sort of security lockdown but that's a very small price to pay for the huge amount of publicity we're going to get out of this particular project," he said.

"For us at Bletchley Park this is an opportunity to put the place and its story on the world stage and hopefully people around the world will now understand and recognise what Bletchley Park is all about."


Everything you're hearing right now about AI wiping out humans is a big con

Beatrice Nolan
Wed, November 1, 2023 


Doomsayers want us all to believe an AI coup could happen, but industry pioneers are pushing back.

Many are shrugging off the supposed existential risks of AI, labeling them a distraction.

They argue big tech companies are using the fears to protect their own interests.


You've heard a lot about AI wiping out humanity. From AI godfathers to leading CEOs, there's been a seemingly neverending flood of warnings about how AI will be our enemy, not our friend.

Here's the thing: not only is an AI coup unlikely, but the idea of one is conveniently being used to distract you from more pressing issues, according to numerous AI pioneers who have recently spoken out.

Two experts, including Meta's chief AI scientist, have dismissed the concerns as distractions, pointing the finger at tech companies attempting to protect their own interests.

AI godfather Yann LeCun, Meta's chief AI scientist, accused some of the most prominent founders in AI of "fear-mongering" and "massive corporate lobbying" to serve their own interests. He said much of the doomsday rhetoric was about keeping control of AI in the hands of a few.

"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun wrote. "They are the ones who are attempting to perform a regulatory capture of the AI industry."

Google DeepMind's Demis Hassabis told CNBC he disagreed with many of LeCun's remarks, adding it was important to start the conversation about regulating superintelligence early.

Representatives for OpenAI's Sam Altman and Anthropic's Dario Amodei did not immediately respond to Insider's request for comment.

Andrew Ng, an adjunct professor at Stanford University and cofounder of Google Brain, took a similar view over the weekend.

He told the Australian Financial Review that some companies were using the fears around AI to assert their own market dominance.

The outlet reported that he said some large tech companies didn't want to compete with open-source alternatives and were hoping to squash competition with strict regulation triggered by AI extinction fears.

Several AI experts have long disputed some of the more far-fetched warnings.

It hasn't helped that the statements issued by various centers — and backed by prominent AI leaders — have been notably vague, leaving many struggling to make sense of the dramatic claims.

One 23-word statement backed by the CEOs of OpenAI, DeepMind, and Anthropic drew a largely unexplained link between the rise of advanced AI and threats to human existence like nuclear war and pandemics.

The timing of the pushback, ahead of the UK's AI safety summit and following Biden's recent executive order on AI, is also significant.

More experts are warning that governments' preoccupation with the existential risks of AI is taking priority over the more immediate threats.

Aidan Gomez, an author of a research paper that helped create the technology behind chatbots, told The Guardian that while the more existential threats posed by AI should be "studied and pursued," they posed a "real threat to the public conversation."

"I think in terms of existential risk and public policy, it isn't a productive conversation to be had," he said. "As far as public policy and where we should have the public-sector focus — or trying to mitigate the risk to the civilian population — I think it forms a distraction, away from risks that are much more tangible and immediate."

Merve Hickok, the president of the Center for AI and Digital Policy, raised similar concerns about the UK AI safety summit's emphasis on existential risk.

Hickok told Insider that while the event "was initially born out of a commitment to promote democratic values," it now has a "narrow focus on safety and existential risk," which risks sidelining other pressing concerns to civil society.

In a letter addressed to UK Prime Minister Rishi Sunak, the center encouraged the UK government to include more pressing topics "such as bias, equity, fairness, and accountability" in the meeting agenda.

The UK government said the event, which will be opened by technology secretary Michelle Donelan, would set out its "vision for safety and security to be at the heart of advances in AI, in order to enable the enormous opportunities it will bring."


The UK AI summit's narrow focus on safety and existential risk means the real issues are being ignored, an AI ethicist says

Beatrice Nolan
Business Insider
Thu, November 2, 2023 

The UK's AI safety summit kicks off on Wednesday.


Some have already slammed the event for ignoring several key issues.


Several groups have criticized the UK government's focus on the existential risks of AI.


The UK's AI safety summit kicked off on Wednesday, but the event is already surrounded by criticism.

Several groups have criticized the UK government's emphasis on some of the more existential risks of AI, saying it sidelines other, perhaps more pressing, concerns.

Merve Hickok, the president of the Center for AI and Digital Policy, told Insider the summit began with a shared commitment with the US to work together for democratic AI values.

"Then somehow, after the Prime Minister's meetings with tech companies, it started focusing narrowly only on the existential crisis as defined by AGI taking over," she said

Not only has the focus been narrowed, she added, but the people at the table are mostly major tech companies.

"Civil society and other voices and communities are sidelined," she said. "Also, all the concerns about existing AI systems which are impacting our fundamental rights are sidelined as well."

Hickok is not the only one to raise concerns about the current rhetoric around AI safety.

Two leading experts have dismissed existential threats and some of the doomsday scenarios. Both have suggested big tech companies might have something to gain from inflating the more dramatic fears around AI.

Hickok's Center for AI and Digital Policy wrote to the UK prime minister, Rishi Sunak, earlier in the month to urge him to include pressing issues of "bias, equity, fairness, and accountability" in the meeting agenda.

Hickok added that the event's narrow focus risked sidelining these other threats to civil society.

"The UK should not let the AI safety agenda displace the AI fairness agenda," she said.


The 'stakes are too high' to ignore extinction risks of AI, AI godfather warns

Beatrice Nolan
Updated Thu, November 2, 2023 


AI godfather Yoshua Bengio says the risks of AI should not be underplayed.


In an interview with Insider, Bengio criticized peers dismissing AI's threat to humanity.


His remarks come after Meta's Yann LeCun accused Bengio and AI founders of "fear-mongering."

Claims by Meta's chief AI scientist, Yann LeCun, that AI won't wipe out humanity are dangerous and wrong, according to one of his fellow AI godfathers.

"Yann and others are making claims that this is not a problem, but he doesn't know — nobody knows," Yoshua Bengio, the Canadian computer scientist and deep learning pioneer, told Insider Thursday. "I think it's dangerous to make these claims without any strong evidence that it can't happen."

LeCun, the storied French computer scientist who now leads Meta's AI lab, sparked a furious debate on X last weekend after accusing some of the most prominent founders in AI of "fear-mongering" with the ultimate goal of controlling the development of artificial intelligence.

LeCun argued that by overstating the apparently farfetched idea that AI will wipe out humans, these CEOs could influence governments to bring in punitive regulation that would hurt their competition.

"If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI," LeCun wrote.

Bengio, who once worked with LeCun at Bell Labs and was co-awarded the Turing Award with him in 2018 for their work in deep learning, told Insider that LeCun was too dismissive of the risks.

"Yann himself agreed that it was plausible we would reach human-level capabilities in the future, which could take a few years to a few decades and I completely agree with the timeline," he said. "I think there's too much uncertainty, and the stakes are too high."

Bengio has said in the past that current AI systems are not anywhere close to posing an existential risk but warned things could get "catastrophic" in the future.

AI's leading lights are unlikely to come to a consensus any time soon.

Andrew Ng, cofounder of Google Brain, said this week that big tech was over-inflating the existential risks of AI to squash competition from the open-source community.

As AI's biggest names subsequently began piling in, LeCun went on to call out his fellow Turing Award winners Geoffrey Hinton and Bengio in another post.

In a response to Hinton, who has claimed AI poses an extinction risk, LeCun wrote on X: "You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models."

Bengio did warn that governments need to ensure they aren't listening only to tech companies when formulating regulation.

"In terms of regulation, they should listen to independent voices and make sure that the safety of the public and the ethical considerations are at center stage," he said.

"Existential risk is one problem but the concentration of power, in my opinion, is the number two problem," he said.

Elon Musk is coming to the UK's big AI safety party. Some people actually building AI say they got frozen out.

Tom Carter
Wed, November 1, 2023

The UK's AI summit is underway. Both Elon Musk and OpenAI's Sam Altman are attending.

Some AI experts and startups say they've been frozen out in favor of bigger tech companies.

They warn that the "closed door" event risks ensuring that AI is dominated by select companies.


A group of AI startups and industry experts are warning that their exclusion from a major AI summit risks ensuring that a handful of tech companies have future dominance over the new technology.

The UK's AI safety summit, which begins Wednesday at WWII code-breaking facility Bletchley Park, has attracted a glitzy mix of tech execs and political leaders from OpenAI's Sam Altman and Microsoft President Brad Smith to US Vice President Kamala Harris. Tesla CEO and Twitter owner, Elon Musk, is also attending.

The exclusive guest list has raised eyebrows, with some AI industry experts and labor groups warning that the event risks pandering to a group of big tech companies and ignoring others who are at the center of the AI boom.

Iris Ai founder Victor Botev, whose company has been building AI products since 2015, told Insider that startups had been frozen out of the summit in favor of bigger tech companies.

"Smaller AI firms and open-source developers often pioneer new innovations, yet their voices on regulation go unheard," he said.

"It is vital for any consultation on AI regulation to include perspectives beyond just the tech giants. The summit missed a great opportunity by only including 100 guests, who are primarily made up of world leaders and big tech companies," he added.

It comes after Yann LeCun, Meta's chief AI scientist, who is also expected to attend the event, accused AI companies like OpenAI, Anthropic, and DeepMind of "fear-mongering" and "massive corporate lobbying" to ensure that AI remains in the hands of a small collection of companies.

The UK's AI summit aims to bring together AI experts, tech bosses, and world leaders to discuss the risks of AI and find ways to regulate the new technology.

It has faced criticism for focusing too much on the existential threats that could be posed by hypothetical superintelligent AI, with UK Prime Minister Rishi Sunak warning that humanity could "lose control" of the technology.

"It is far from certain whether the AI summit will have any lasting impact," Ekaterina Almasque, a general partner at European venture capital firm OpenOcean, which invests in AI, told Insider.

"It looks likely to focus mostly on bigger, long-term risks from AI, and far less on what needs to be done today to build a thriving AI ecosystem," she added.

Almasque said that much of the AI start-up community, which will bear the brunt of any regulation proposed at the summit, had been "shut out" of the event, and warned that this had to change in the future if AI regulation was to succeed.

"Going forward, we must have more voices for startups themselves. The AI Safety Summit's focus on Big Tech, and the shutting out of many in the AI start-up community, is disappointing.

It is vital that industry voices are included when shaping regulations that will directly impact technological development," she added.

A spokesperson for the UK government's Department for Science, Innovation, and Technology – organizing the summit – told Insider that there will be a range of attendees from "international governments, academia, industry, and civil society."

"These attendees are the right mix of expertise and willingness to be part of the discussions," they said.

Workers groups such as the UK's Trades Union Congress and the American Federation of Labor and Congress of Industrial Organizations, which represents 12.5 million US workers, have also criticized the summit. AI is expected to have a huge impact on many white-collar jobs, with Goldman Sachs warning earlier this year that over 300 million jobs could be affected by new technology.

An open letter signed by over 100 individuals and labor groups said that the AI summit was a "closed door" event that was prioritizing big tech companies over groups feeling the impact of generative AI now, like small businesses and artists.

"The communities and workers most affected by AI have been marginalized by the Summit," they said.

The signatories also described the limited guest list as a "missed opportunity," and warned that the conference's focus on AI's hypothetical existential threats risked missing the point.

"As it stands, it is a closed-door event, overly focused on speculation about the remote 'existential risks' of 'frontier' AI systems; systems built by the very same corporations who now seek to shape the rules," they said.

Elon Musk says AI means eventually no one will need to work

Lakshmi Varanasi
Thu, November 2, 2023 


UK Prime Minister Rishi Sunak and Elon Musk chatted about AI at the close of the UK's AI Safety Summit.

Musk said advances in AI will lead to a world where "no job is needed."

Musk also suggested we'll have "universal high income" instead of just universal basic income.

People may be fretting about how the coming AI job apocalypse will impact them, but Elon Musk has a pretty utopian view of how AI will reshape the labor market.

Musk said that advances in AI will simply lead to a world "where no job is needed," in a conversation with UK Prime Minister Rishi Sunak at the close of the UK's inaugural AI Safety Summit on Thursday. Of course, people can still hold a job "for personal satisfaction," but one day, AI "will be able to do everything," Musk said.

And how exactly will people support themselves in this new, AI-powered world?

"You'll have universal high income," Musk told Sunak, presenting it as a superior alternative to universal basic income — one of Silicon Valley's dream solutions to income inequality — without specifying exactly how the two concepts differed.

"You'll rarely ask for anything," he said, outlining a "future of abundance" where there would be no scarcity of goods and services. As a result, AI will function as somewhat of an economic equalizer, he said, especially because it'll be accessible to everyone.

At the same time, he suggested that there might be "somewhat of the magic genie problem," so people will need to be careful about exactly what they "wish for," he said. Musk has been outspoken about the need to regulate AI and was among the tech execs and AI researchers who signed an open letter calling for a pause on AI development. During his discussion with Sunak, he offered solutions ranging from an "off switch" to a keyword for putting humanoid robots into a safe state.

Still, his verdict, at least at the end of Thursday's conversation, was that AI is likely to be 80% good and 20% bad.