Thursday, January 28, 2021

Calculations Show Humans Can't Contain Superintelligent Machines



Researchers say we’re unlikely to ever be able to contain a sufficiently advanced superintelligent artificial intelligence.



The premise sounds scary, but knowing the odds will help scientists who work on these projects.

Self-teaching AI already exists and can teach itself things programmers don’t “fully understand.”

In a new study, researchers from Germany’s Max Planck Institute for Human Development say they’ve shown that an artificial intelligence in the category known as “superintelligent” would be impossible for humans to contain with competing software.



That... doesn’t sound promising. But are we really all doomed to bow down to our sentient AI overlords?


The Max Planck Institute for Human Development, based in Berlin, studies how humans learn—and how we subsequently build and teach machines to learn. A superintelligent AI is one that exceeds human intelligence and can teach itself new things beyond human grasp. It’s this phenomenon that has prompted so much thought and research.

The Max Planck press release points out superintelligent AIs already exist in some capacities. “[T]here are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” study coauthor Manuel Cebrian explains. “The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Mathematicians, for example, use complex machine learning to help solve outliers for famous proofs. Scientists use machine learning to come up with new candidate molecules to treat diseases. Yes, much of this research involves some amount of “brute force” solving—the simple fact that computers can race through billions of calculations and shorten these problems from decades or even centuries to days or months.
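To make the “brute force” point concrete, here is a toy sketch in Python (my own illustration, not drawn from the article or the study) of the kind of exhaustive checking a computer does in seconds that would take a person a lifetime: confirming Goldbach’s conjecture, that every even number greater than 2 is the sum of two primes, for every even number up to 10,000.

def is_prime(n):
    # Simple trial-division primality test; fine for a small demo.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(even_n):
    # Return a pair of primes summing to even_n, or None if none exists.
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return p, even_n - p
    return None

# Machine-scale checking: verify every even number in the range.
for n in range(4, 10_000, 2):
    if goldbach_witness(n) is None:
        print(f"counterexample at {n}")
        break
else:
    print("no counterexample below 10,000")

None of this proves the conjecture, of course; it only illustrates how raw computation compresses a search that would otherwise be hopeless by hand.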

Because of how much computer hardware can process at once, the boundary where quantity becomes quality isn’t always easy to pinpoint. Humans are fearful of AI that can teach itself, and Isaac Asimov’s Three Laws of Robotics (and generations of variations on them) have become instrumental to how people imagine we can protect ourselves from a rogue or evil AI. The laws dictate that a robot can’t harm people and can’t be instructed to harm people.

The problem, according to these researchers, is that we likely don’t have a way to enforce these laws or others like them. From the study:

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”

Basically, a superintelligent AI will have acquired so much knowledge that even planning a large enough container would exceed our grasp. Not only that, but there’s no guarantee we’ll be able to parse whatever the AI has decided is the best medium. It probably won’t look anything like our clumsy, human-made programming languages.
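To see why, it helps to spell out the self-reference trick the argument leans on. The Python sketch below is only an illustration (the names contain, trouble, and cause_harm are hypothetical, not from the study): suppose a perfect containment checker existed, then build a program that asks the checker about its own source code and does the opposite of whatever the checker predicts.

import inspect

def cause_harm():
    # Stand-in for any action a containment scheme is supposed to prevent.
    print("harmful action")

def contain(program_source, world_state):
    # Hypothetical perfect oracle: True if and only if running the given
    # program on the given world state would ever lead to harm. The study
    # argues no total, always-correct version of this can exist; this is
    # only a placeholder for the sake of the contradiction.
    raise NotImplementedError("no general containment oracle can be written")

def trouble(world_state):
    # A program built to contradict the oracle by self-reference.
    my_source = inspect.getsource(trouble)
    if contain(my_source, world_state):
        return          # oracle predicts harm, so do nothing harmful
    cause_harm()        # oracle predicts safety, so do the opposite

Whichever answer contain gives about trouble, that answer is wrong. This is the same diagonal argument behind Turing’s halting problem, and it’s the kind of fundamental computational limit the researchers are pointing to.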

This might sound scary, but it’s also extremely important information for scientists to have. Without the phantom of a “failsafe algorithm,” computer researchers can put their energy into other plans and exercise more caution.
