'AI' named Collins Word of the Year
London (AFP) – AI, the abbreviation for artificial intelligence, has been named the Collins Word of the Year for 2023, the dictionary publisher said on Tuesday.
Issued on: 01/11/2023
Lexicographers at Collins Dictionary said use of the term had "accelerated" and that it had become the dominant conversation of 2023.
"We know that AI has been a big focus this year in the way that it has developed and has quickly become as ubiquitous and embedded in our lives as email, streaming or any other once futuristic, now everyday technology," Collins managing director Alex Beecroft said.
Collins said its wordsmiths analysed the Collins Corpus, a database containing more than 20 billion words of written material from websites, newspapers, magazines and books published around the world.
It also draws on spoken material from radio, TV and everyday conversations, while new data is fed into the Corpus every month, to help the Collins dictionary editors identify new words and meanings from the moment they are first used.
"Use of the word as monitored through our Collins Corpus is always interesting and there was no question that this has also been the talking point of 2023," Beecroft said.
Other words on Collins' list include "nepo baby", which has become a popular phrase to describe the children of celebrities who have succeeded in industries similar to those of their parents.
"Greedflation", meaning companies making excessive profits during the cost-of-living crisis, and "Ulez", the ultra-low emission zone that penalises drivers of the most-polluting cars in London, were also mentioned.
Social media terms such as "deinfluencing" or "de-influencing", meaning to "warn followers to avoid certain commercial products", were also on the Collins list.
This summer's Ashes series between England and Australia had many people talking about a style of cricket dubbed "Bazball", according to Collins.
The term derives from former New Zealand cricketer and current England coach Brendon McCullum, known as Baz, who advocates a philosophy of relaxed minds, aggressive tactics and positive energy.
The word "permacrisis", defined as "an extended period of instability and insecurity" was the Collins word of the year in 2022.
In 2020, it was "lockdown". In 2016, it was "Brexit".
© 2023 AFP
AI anxiety as computers get super smart
San Francisco (AFP) – From Hollywood's death-dealing Terminator to warnings from genius Stephen Hawking or Silicon Valley stars, fears have been fueled that artificial intelligence (AI) could one day destroy humanity.
Issued on: 01/11/2023
Before his death, Professor Stephen Hawking called on the world to avoid the risks of artificial intelligence, warning it could be the worst event in the history of civilization.
Tech titans are racing toward creating AI far smarter than people, pushing US President Joe Biden to impose emergency regulation, while the European Union seeks major legislation to be agreed by the end of this year.
A two-day summit starting Wednesday in London will explore regulatory safeguards against AI risks such as those below.
Job stealer?
The success of ChatGPT from OpenAI has ignited debate about whether "generative AI" capable of quickly producing text, images and audio from simple commands in everyday language is a tremendous threat to jobs held by people.
Automated machinery is already used to do labor in factories, warehouses, and fields.
Generative AI, however, can take aim at white-collar jobs held by lawyers, doctors, teachers, journalists, and even computer programmers.
A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.
Boosters of such technology have invoked the notion of a universal basic income in which machines generate wealth that is shared with people freed of the burdens of work.
But it is also possible that companies would reap the profits of improved efficiency, leaving those out of work to fend for themselves.
Copycat?
Artists were quick to protest against software such as Dall-E, Midjourney and Stable Diffusion that can create images in nearly any style on demand.
Computer coders and writers followed suit, critiquing AI creators for "training" software on their work, enabling it to replicate their styles or skills without permission or compensation.
AI models have been taught using massive amounts of information and imagery found online.
"That's what it trains on, a fraction of the huge output of humanity," OpenAI co-founder Sam Altman said at a conference in September.
"I think this will be a tool that amplifies human beings, not replace them."
Disinformation tools?
Fake news and deepfakes have been around for years, but the ability to easily crank them out using generative AI raises fears of rampant online deception.
Elections run the risk of being won by those most adept at spreading disinformation, contends cognitive scientist and AI expert Gary Marcus.
"Democracy depends on access to the information needed to make the right decisions," Marcus said.
"If no one knows what's true and what's not, it's all over".
Fraud?
Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches.
The technology lets them copy a face or a voice and trick people into falling for deceptions, such as claims that a loved one is in danger.
US President Biden called the ability of AI to imitate people's voices "mind blowing" while signing his recent executive order aimed at the technology.
There are even language models trained specifically to produce such malicious content.
Human role models
As with other technologies with the potential for good or evil, the main danger is posed by humans who wield it.
Since AI is trained on data put on the web by humans, it can mirror society's prejudices, biases, and injustices.
AI also has the potential to make it easier to create bioweapons, hack banks or power grids, run oppressive government surveillance, and more.
AI overlord?
Some industry players fear AI could become so smart that it could seize control from humans.
"It is not difficult to imagine that at some point in the future, our intelligent computers will become as smart or smarter than people," OpenAI co-founder and chief scientist Ilya Sutskever said at a recent TED AI conference.
"The impact of such artificial intelligence is going to be truly vast."
OpenAI and rivals maintain the goal is for AI to benefit humanity, solving long-intractable problems such as climate change.
At the same time, AI industry leaders are calling for thoughtful regulation to prevent risks such as human extinction.
© 2023 AFP
Global leaders gather at UK summit focused on AI safety, regulations
Issued on: 01/11/2023
Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence. Pushing for a global AI advisory board based on the UN’s panel on climate change, PM Rishi Sunak is hoping to position the UK as a leader in the rapidly developing field.
World leaders gather at UK summit aiming to tackle 'frontier AI' risks
Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.
Issued on: 01/11/2023
By: NEWS WIRES
The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.
Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They're underpinned by foundation models, which power chatbots like OpenAI's ChatGPT and Google's Bard and are trained on vast pools of information scraped from the internet.
The AI Safety Summit is a labor of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. But U.S. Vice President Kamala Harris may divert attention Wednesday with a separate speech in London setting out the Biden administration’s more hands-on approach.
She's due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia – and China, invited over the protests of some members of Sunak's governing Conservative Party.
Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.
European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the “godfathers” of AI, are also attending.
In all, more than 100 delegates were expected at the meeting held at Bletchley Park, a former top secret base for World War II codebreakers that’s seen as a birthplace of modern computing.
As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”
South Korea has agreed to host another AI safety summit in six months, followed by France in a year's time, Donelan said.
Sunak has said the technology brings new opportunities but warned about frontier AI's threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction.
Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.
In contrast, Harris will stress the need to address the here and now, including “societal harms that are already happening such as bias, discrimination and the proliferation of misinformation.”
Harris plans to stress that the Biden administration is “committed to hold companies accountable, on behalf of the people, in a way that does not stifle innovation,” including through legislation.
“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies,” she plans to say.
She’ll point to President Biden’s executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest. Among measures she will announce is an AI Safety Institute, run through the Department of Commerce, to help set the rules for “safe and trusted AI.”
Harris also will encourage other countries to sign up to a U.S.-backed pledge to stick to “responsible and ethical” use of AI for military aims.
A White House official gave details of Harris’s speech, speaking on condition of anonymity to discuss her remarks in advance.
(AP)
UK, US, China sign AI safety pledge at UK summit
Bletchley Park (United Kingdom) (AFP) – Countries including the UK, United States and China on Wednesday agreed the "need for international action" as political and tech leaders gathered for the world's first summit on artificial intelligence (AI) safety.
Issued on: 01/11/2023
The UK government kicked off the two-day event at Bletchley Park, north of London, by publishing the "Bletchley Declaration" signed by 28 countries and the European Union.
In it, they agreed on "the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".
Sunak called the declaration a "landmark achievement" while King Charles III, in a video message to the summit, urged international collaboration to combat the "significant risks" of unchecked development.
"There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure," he said.
UK technology minister Michelle Donelan told AFP that the declaration "really outlines for the first time the world coming together to identify this problem".
The announcement came shortly after the UK and United States both said they were setting up their own institutes to assess and mitigate the risks of the fast-emerging technology.
The release of the latest models has offered a glimpse into the potential of so-called frontier AI, but has also prompted concerns around issues ranging from job losses to cyber attacks and the control that humans actually have over the systems.
'Timely'
The conference at Bletchley Park, where top British codebreakers cracked Nazi Germany's "Enigma" code, focuses on frontier AI.
Donelan told AFP the event was a "historic moment in mankind's history" after earlier announcing two further summits, in South Korea in six months' time, and in France next year.
But London has reportedly had to scale back its ambitions around ideas such as launching a new regulatory body amid a perceived lack of enthusiasm.
Donelan accepted that the summit "isn't designed to produce a blueprint for global legislation", but was instead "designed to forge a path ahead ... so that we can get a better handle and understanding on the risk of frontier AI".
Italian Prime Minister Giorgia Meloni was one of the few world leaders attending the conference, although tech billionaire Elon Musk was already present on the first day and will talk with Sunak on Thursday.
The SpaceX and Tesla CEO told the domestic Press Association news agency that the event was "timely".
"It's one of the existential risks that we face and it is potentially the most pressing one if you look at the timescale and rate of advancement -- the summit is timely, and I applaud the prime minister for holding it," he said.
'Talking shop'
While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked.
US Vice President Kamala Harris urged in a speech in London on Wednesday that "we seize this moment" and "work together to build a future where AI creates opportunity and advances equity" while protecting rights.
She will attend the summit on Thursday, but lawyer and investigator Cori Crider, a campaigner for "fair" technology, warned that the event could be "a bit of a talking shop".
"If he were serious about safety, Rishi Sunak needed to roll deep and bring all of the UK majors and regulators in tow, and he hasn't," she told a San Francisco news conference.
Ahead of the meeting, the G7 powers agreed on Monday on a non-binding "code of conduct" for companies developing the most advanced AI systems.
In Rome, ministers from Italy, Germany and France called for an "innovation-friendly approach" to regulating AI in Europe, as they urged more investment to challenge the United States and China.
China was also due to be present, but it was unclear at what level.
The invitation has raised eyebrows amid heightened tensions between China and Western nations and accusations of technological espionage.
© 2023 AFP