By AFP
October 8, 2024
Nobel physics laureate Geoffrey Hinton, 76, worries that AI could lead to 'systems more intelligent than us that eventually take control' - Copyright AFP WALID BERRAZEG
Joseph Boyle
For a brief moment in spring last year, the bird-like features of bespectacled British-born researcher Geoffrey Hinton were poking out from TV screens across the world.
Hinton, a big name in the world of artificial intelligence but largely unknown outside it, was warning that the technology he had helped to create — for which he was awarded the 2024 Nobel Prize — could pose an existential threat to humanity.
“What do you think the chances are of AI wiping out humanity?” a reporter from the US network CBS News asked in March last year.
“It’s not inconceivable,” replied Hinton, making a very British understatement.
A few weeks later, he had walked away from his job at Google and was giving interviews to media across the world, quickly becoming the poster-child for AI doomsayers.
– Difficult family life –
Hinton, a 76-year-old soft-spoken career academic, was born in London, raised in Bristol and went to the universities of Cambridge and Edinburgh.
He has described his early life as a high-pressure existence, trying to live up to the expectations of a family with an illustrious history, littered with storied scientists.
Even his father was a member of the Royal Society.
He told Toronto Life magazine he had struggled with depression his whole life and work was a way of releasing the pressure.
But Hinton has rarely been able to fully escape into his work.
His first wife died from cancer shortly after the couple had adopted their two children in the early 1990s, thrusting him into the role of single parent.
“I cannot imagine how a woman with children can have an academic career,” he told Toronto Life.
“I’m used to being able to spend my time just thinking about ideas… But with small kids, it’s just not on.”
– ‘Utterly correct’ –
After spending time in universities in the United States in the late 1970s and 1980s, Hinton relocated to Toronto in 1987, his base ever since.
Hinton, a self-professed socialist who recalls his family stuffing envelopes for the British Labour Party, had been unwilling to accept funding from the US military, which was the biggest funder for his kind of research.
The Canadian government agreed to back his research, which attempted to replicate the functioning of the human brain by engineering artificial “neural networks”.
Although he spent years on the academic fringes, a research community grew up around him in the Canadian city, and eventually his vision came to dominate the field.
And then Google came knocking.
He took a job with the Silicon Valley juggernaut in 2013 and suddenly became one of the central figures in the emerging industry.
As competition ramped up, many of his students took posts in companies including Meta, Apple and Uber.
Ilya Sutskever, who co-founded OpenAI, worked in Hinton’s team for years and has described the time as “critical” for his career.
He told the University of Toronto’s website in 2017 that they pursued “ideas that were both highly unappreciated by most scientists, yet turned out to be utterly correct”.
But Sutskever and Hinton have emerged as prominent worriers about the technology: Sutskever left OpenAI after raising concerns about its products, a year after Hinton exited Google.
And true to form, even during his acceptance speech for the Nobel Prize — he received the news in a “cheap hotel in California” — Hinton was still talking of regret rather than success.
“In the same circumstances, I would do the same again,” he said.
“But I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control.”
Neural networks, machine learning? Nobel-winning AI science explained
By AFP
October 8, 2024
British-Canadian Geoffrey Hinton, known as a 'godfather of AI', and American John Hopfield were given 2024's Nobel Prize for Physics - Copyright AFP Jonathan NACKSTRAND
Daniel Lawler and Pierre Celerier
The Nobel Prize in Physics was awarded to two scientists on Tuesday for discoveries that laid the groundwork for the artificial intelligence used by hugely popular tools such as ChatGPT.
British-Canadian Geoffrey Hinton, known as a “godfather of AI,” and US physicist John Hopfield were given the prize for “discoveries and inventions that enable machine learning with artificial neural networks,” the Nobel jury said.
But what are those, and what does this all mean? Here are some answers.
– What are neural networks and machine learning? –
Mark van der Wilk, an expert in machine learning at the University of Oxford, told AFP that an artificial neural network is a mathematical construct “loosely inspired” by the human brain.
Our brains have a network of cells called neurons, which respond to outside stimuli — such as things our eyes have seen or ears have heard — by sending signals to each other.
When we learn things, some connections between neurons get stronger, while others get weaker.
Unlike traditional computing, which works more like reading a recipe, artificial neural networks roughly mimic this process.
The biological neurons are replaced with simple calculations sometimes called “nodes” — and the incoming stimuli they learn from is replaced by training data.
The idea is that this could allow the network to learn over time — hence the term machine learning.
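The node-and-weights idea above can be sketched in a few lines of code. This is a minimal illustration, not any real library's API: the inputs, weights, and the simple threshold rule are all made up for the example, and the "learning" shown is a basic perceptron-style weight nudge.

```python
# One artificial "node": multiply each input by a connection weight,
# add the results up, and fire only if the total crosses a threshold.
def node(inputs, weights, bias):
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > 0 else 0

# "Learning" means nudging the weights using training data:
# strengthen or weaken connections based on the error made.
def learn_step(inputs, weights, bias, target, lr=0.1):
    error = target - node(inputs, weights, bias)
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights, bias + lr * error

weights, bias = [0.0, 0.0], 0.0
for _ in range(20):  # repeat one training example until the node gets it right
    weights, bias = learn_step([1.0, 0.5], weights, bias, target=1)
print(node([1.0, 0.5], weights, bias))  # now fires: 1
```

Stacking many such nodes into layers, and updating many weights at once, is what turns this toy into the networks the laureates' work made trainable.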
– What did Hopfield discover? –
But before machines could learn, they needed another human trait: memory.
Ever struggle to remember a word? Consider the goose. You might cycle through similar words — goon, good, ghoul — before striking upon goose.
“If you are given a pattern that’s not exactly the thing that you need to remember, you need to fill in the blanks,” van der Wilk said.
“That’s how you remember a particular memory.”
This was the idea behind the “Hopfield network” — also called “associative memory” — which the physicist developed back in the early 1980s.
Hopfield’s contribution meant that when an artificial neural network is given something that is slightly wrong, it can cycle through previously stored patterns to find the closest match.
This proved a major step forward for AI.
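The "fill in the blanks" recall described above can be sketched directly. This is a toy Hopfield-style associative memory, with made-up six-unit patterns chosen only for illustration: connections between units that are active together get stronger (a Hebbian rule), and a corrupted cue then slides toward the closest stored pattern.

```python
# Toy Hopfield-style "associative memory" in plain Python.
# Units take values +1/-1; patterns here are invented for the example.

def train(patterns):
    n = len(patterns[0])
    # Hebbian rule: connections between co-active units get stronger.
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(w, state, steps=10):
    # Each unit repeatedly moves toward the sign of its weighted input,
    # so a slightly-wrong cue settles on the nearest stored pattern.
    for _ in range(steps):
        state = [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
                 for row in w]
    return state

stored = [[1, 1, 1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1]]
w = train(stored)
cue = [1, 1, -1, -1, -1, -1]   # first pattern with one unit flipped
print(recall(w, cue))          # recovers [1, 1, 1, -1, -1, -1]
```

The corrupted cue is the half-remembered word; the stored pattern it settles on is the "goose" the memory was groping for.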
– What about Hinton? –
In 1985, Hinton revealed his own contribution to the field — or at least one of them — called the Boltzmann machine.
Named after 19th century physicist Ludwig Boltzmann, the concept introduced an element of randomness.
This randomness is ultimately why today’s AI-powered image generators can produce endless variations from the same prompt.
Hinton also showed that the more layers a network has, “the more complex its behaviour can be”.
This in turn made it easier to “efficiently learn a desired behaviour,” French machine learning researcher Francis Bach told AFP.
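The "element of randomness" can be sketched with a single Boltzmann-machine-style stochastic unit. The numbers here are illustrative: rather than switching on deterministically, the unit turns on with a probability set by its weighted input (and a "temperature" parameter), which is the physics-flavoured ingredient Boltzmann's name attaches to.

```python
import math
import random

# A stochastic unit: fires with probability sigmoid(input / T)
# instead of applying a hard threshold. T is a "temperature":
# higher T means more random behaviour.
def stochastic_unit(weighted_input, T=1.0):
    p_on = 1.0 / (1.0 + math.exp(-weighted_input / T))
    return 1 if random.random() < p_on else 0

random.seed(0)  # fixed seed so the sketch is repeatable
on_rate = sum(stochastic_unit(0.5) for _ in range(10_000)) / 10_000
print(on_rate)  # close to sigmoid(0.5), about 0.62
```

Run twice without the seed, the unit gives different answers to the same input, which is loosely why today's generators can produce many variations from one prompt.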
– What is it used for? –
Despite these ideas being in place, many scientists lost interest in the field in the 1990s.
Machine learning required enormously powerful computers capable of handling vast amounts of information. It takes millions of images of dogs for these algorithms to be able to tell a dog from a cat.
So it was not until the 2010s that a wave of breakthroughs “revolutionised everything related to image processing and natural language processing,” Bach said.
From reading medical scans to directing self-driving cars, forecasting the weather to creating deepfakes, the uses of AI are now too numerous to count.
– But is it really physics? –
Hinton had already won the Turing Award, which is considered the Nobel of computer science.
But several experts said his was a well-deserved Nobel win in the field of physics, which started science down the road that would lead to AI.
French researcher Damien Querlioz pointed out that these algorithms were originally “inspired by physics, by transposing the concept of energy onto the field of computing”.
Van der Wilk said the first Nobel “for the methodological development of AI” acknowledged the contribution of the physics community, as well as the winners.
And while ChatGPT can sometimes make AI seem genuinely creative, it is important to remember the “machine” part of machine learning.
“There is no magic happening here,” van der Wilk emphasised.
“Ultimately, everything in AI is multiplications and additions.”