Fred Schwaller, Deutsche Welle
Computers are solving problems no human could ever decode -- and in ways that feel distinctly nonhuman to us. Should we embrace or rethink the strange intelligence of machines?

In 2019, five of the top poker players in the world sat down in a casino to play poker against a computer. Over the course of the game they lost big -- some $1.7 million (€1.77 million) -- to a poker bot called Pluribus.
Human intelligence is very different from artificial intelligence (Image: © Roman Budnikov/Zoonar/picture alliance)
It was the first time an artificial-intelligence (AI) program had beaten elite human players at a game with more than two players.
In a post-game interview, the players were asked how they felt about losing to a computer. Pluribus, they said, "bluffed really well. No human would ever bet like that."
One player said the bot played like 'an alien', betting hundreds of times more than human players did, even when it was bluffing.
How a bot learned to play poker
"Why was it so alien? It's because Pluribus learned how to play poker completely differently from how humans do," Eng Lim Goh, Chief Technology Officer at Hewlett Packard Enterprise, told DW.
When a human learns to play poker, Goh explained, they learn two main skills: how to make superior mathematical decisions and how to read their opponents.
But Pluribus didn't learn this way. Instead, it got incredibly good at one aspect of poker -- bluffing -- through trillions of games of trial and error.
"They trained the machines how to bluff by pitting two machines against each other over trillions of games," said Goh. "At the end of the training, a bot emerged that was an expert at bluffing."
This method of learning is called reinforcement learning: the system repeats a single task over and over again, keeping the strategies that pay off until it converges on the best ones.
"It explored more spaces of probability than humans ever have since the game was invented. It found different ways of playing," said Goh.
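Pluribus itself was trained with a form of counterfactual regret minimization, and its one-shot building block, regret matching, captures the trial-and-error idea Goh describes. Below is a minimal, hypothetical sketch -- not Pluribus's actual code -- in which two copies of the same learner play rock-paper-scissors against each other and, simply by tallying how much they regret each action they didn't take, converge on an unexploitable strategy.

```python
import random

# Toy self-play via regret matching. Rock-paper-scissors stands in for poker:
# the same mechanism, scaled up enormously, underlies poker bots like Pluribus.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[my_action][opp_action], from my point of view
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to how much we regret not having played it."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret yet: play uniformly

def train(iterations=200_000):
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        actions = [random.choices(range(ACTIONS), weights=s)[0] for s in strategies]
        for p in range(2):
            me, opp = actions[p], actions[1 - p]
            for a in range(ACTIONS):
                # regret = what playing `a` would have earned, minus what we got
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
                strategy_sums[p][a] += strategies[p][a]
    # The *average* strategy over training is what converges to equilibrium
    return [[s / sum(sums) for s in sums] for sums in strategy_sums]

if __name__ == "__main__":
    avg = train()
    print("average strategy:", [round(p, 3) for p in avg[0]])
    # prints roughly [0.333, 0.333, 0.333] -- the unexploitable mix
```

No human discovered this strategy for the bot; it emerged, as Goh says, from the machine exploring the space of probabilities on its own.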
AI doesn't 'think' like we do
Pluribus' reinforcement learning requires ways of storing information that humans simply don't have the capacity for.
"We grow AI systems," explained Goh. "A machine accumulates and accumulates information over time."
But the brain doesn't do this. We don't have the time on Earth to train ourselves on trillions of poker games, so instead we make predictions based on past, similar experiences and pick the best solution accordingly.
In the brain, this learning requires the construction of new neural connections. But we also subtract information, pruning the connections we don't use.
"We both accumulate new memories and forget things all the time. Somehow, we make assessments of what is worth storing and forgetting," said Upinder Bhalla, professor of computational neuroscience at the National Centre for Biological Sciences, India.
Bhalla explained that the efficiency of human computation comes from this subtractive nature of the brain, which allows us to focus on the meaningful information at hand and attend to the things most important for our survival.
But an AI system can only accumulate data, mindlessly exploring every possibility within a defined set until it finds the best answer.
For Bhalla, AI systems sit far out on their own evolutionary tree, separate from anything remotely lifelike.
"Artificial intelligence is vastly different to human intelligence because a computer's architecture is completely different from the brain," he said.
AI is not capable of emotions
One of the reasons humans find it hard to bluff is a phenomenon called loss aversion. Essentially, a potential loss is perceived as emotionally more severe than an equivalent gain. Betting money on a bad hand requires you to overcome this fear.
Pluribus doesn't experience loss aversion. It bluffed like no human would, betting hundreds of times the value of the pot because it had no fear. Playing poker against an emotionless AI meant all the rules of human emotion went out the window.
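A back-of-the-envelope calculation makes the asymmetry concrete. The pot size, fold probability, and the loss-aversion weight of roughly 2.25 below are illustrative assumptions (the weight is a commonly cited estimate from Tversky and Kahneman's prospect theory): the very same bluff that is profitable in raw expected value can feel like a losing bet to a loss-averse human.

```python
# Illustrative numbers only: why a profitable bluff can still feel bad to a human.
POT = 100          # chips already in the pot
BLUFF_BET = 100    # what we risk by bluffing with a hand that loses if called
FOLD_PROB = 0.60   # assumed chance the opponent folds to our bluff

# Raw expected value -- the bot's view of the bluff:
ev = FOLD_PROB * POT + (1 - FOLD_PROB) * (-BLUFF_BET)
print(f"expected value of the bluff: {ev:+.0f} chips")   # +20: worth taking

# A loss-averse human weights the possible loss ~2.25x more heavily:
LAMBDA = 2.25  # commonly cited loss-aversion coefficient (prospect theory)
perceived = FOLD_PROB * POT + (1 - FOLD_PROB) * (LAMBDA * -BLUFF_BET)
print(f"loss-averse perceived value: {perceived:+.0f} chips")  # -30: feels like a loser
```

The bot simply maximizes the first number; the human flinches at the second.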
In fact, no AI systems with feelings currently exist. Even specially trained AI systems perform poorly at recognizing human emotions, let alone mimicking them. It will be a long time until computers can truly bluff in the emotional ways humans do.
"It's hardly surprising that AIs seem alien to us," Bhalla told DW. "The way it achieves its amazing capabilities is very unhuman, very unbiological."
AI 'alienness' could help science
Irina Higgins, a research scientist at Google subsidiary DeepMind, warns that developing AIs that become too alien might be problematic.
"As we develop AI systems, the danger is we invent a brain v2.0 which we also will not fully understand. We need to align AI with humans, making it less alien so we can understand and control it," Higgins told DW.
But AI being alien might also be a good thing, she said.
"In science, AI systems that have 'alien' ways of analysing data can help us discover things humans haven't thought of," she said.
Higgins pointed to AlphaFold, the AI system from DeepMind that has predicted the 3D structures of over 100,000 proteins from their amino acid sequences.
"Resolving protein structures is one of the most fundamental questions in biology. AlphaFold helps us know how proteins and drugs interact, advancing biological research," said Higgins.
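AlphaFold's predictions are published in the open AlphaFold Protein Structure Database (alphafold.ebi.ac.uk). As a rough sketch, assuming the database's file-naming scheme -- the accession P69905 (human hemoglobin subunit alpha) and the "_v4" model suffix here are examples to verify against the live site -- a predicted structure can be fetched like this:

```python
import urllib.request

# Hypothetical example: fetch one predicted structure from the AlphaFold database.
# Accession and "_v4" file suffix are assumptions -- check alphafold.ebi.ac.uk.
uniprot_id = "P69905"  # human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Each ATOM record in the PDB file carries predicted 3D coordinates for one atom
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(f"downloaded {len(atom_lines)} predicted atom positions for {uniprot_id}")
```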
For Higgins, this is the best example of how the alienness of AI can be used to help humans answer big questions, rather than replace human intelligence entirely.
AI development is still in its infancy. Where we go from here, explained Bhalla, depends on how we approach the development of AI.
"AI and computers will be different, but not monstrous. It's incumbent on us to make sure we train AI systems to become decent, much like we do with our children," said Bhalla.
Stay tuned for the next article about AI and the brain...
Edited by Clare Roth and Zulfikar Abbany
Copyright 2022 DW.COM, Deutsche Welle. Distributed by Tribune Content Agency, LLC.