Friday, June 05, 2020

FRANKEN SCIENCE

Genius Weapons: Artificial Intelligence, Autonomous Weaponry, and the Future of Warfare
by Louis A. Del Monte. Kindle edition: https://tinyurl.com/y9hbar9d

A technology expert describes the ever-increasing role of artificial intelligence in weapons development, the ethical dilemmas these weapons pose, and the potential threat to humanity.

Artificial intelligence is playing an ever-increasing role in military weapon systems. Going beyond the bomb-carrying drones used in the Afghan war, the Pentagon is now in a race with China and Russia to develop "lethal autonomous weapon systems" (LAWS). In this eye-opening overview, a physicist, technology expert, and former Honeywell executive examines both the advantages of deploying completely autonomous weapon systems and the potential threats they pose to humanity. Stressing the likelihood that these weapons will be available in the coming decades, the author raises key questions about how the world will be affected:

• Though robotic systems might lessen military casualties in a conflict, should we allow machines to make life-and-death decisions in battle?

• Who would be accountable for the actions of completely autonomous weapons: the programmer, the machine itself, or the country that deploys LAWS?

• When warfare becomes just a matter of technology, will war become more probable, edging humanity closer to annihilation?

• What if AI technology reaches a "singularity level," so that our weapons are controlled by an intelligence exceeding human intelligence?

Using vivid scenarios that immerse the reader in the ethical dilemmas and existential threats posed by lethal autonomous weapon systems, the book reveals that the dystopian visions of such movies as The Terminator and I, Robot may become a frightening reality in the near future. The author concludes with concrete recommendations, founded in historical precedent, for controlling this new arms race.

Review

""A highly readable and deeply researched exploration of one of the most chilling aspects of the development of artificial intelligence: the creation of intelligent, autonomous killing machines. In Louis A. Del Monte’s view, the multibillion dollar arms industry and longstanding rivalries among nations make the creation of autonomous weapons extremely likely. We must resist the allure of genius weapons, Del Monte argues, because they will almost inevitably lead to our extinction.”
―James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era

“For the second time in history, humanity is on the verge of creating weapons that might wipe us out entirely. Will we have the wisdom not to use them? No one can say, but this book will give you the facts you need to think about the issue intelligently.”
―J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine

“This thought-provoking read provides insight into future world weapons scenarios we may face as technology rapidly advances. A call to arms for humanity, this book details the risks if we do not safeguard technology and weapon development.”
―Carl Hedberg, semiconductor sales and manufacturing management for thirty-seven years at Honeywell, Inc.

“In Genius Weapons, Del Monte provides a thorough and well-researched review of the history and development of ‘smart weapons’ and artificial intelligence. Then, using his background and imagination, he paints a very frightening picture of our possible future as these technologies converge. He challenges our understanding of warfare and outlines the surprising threat to mankind that may transpire in the not-too-distant future.”
―Anthony Hickl, PhD in materials science and former director of Project and Portfolio Management for New Products at Cargill

"We are already living the next war, which is increasingly being fought with the developing weapons that Del Monte writes about so engagingly.”
―Istvan Hargittai, Budapest University of Technology and Economics, author of Judging Edward Teller

“Del Monte explores a fascinating topic, which is of great importance to the world as the implementation of AI continues to grow. He points out the many applications of AI that can help humanity improve nearly every aspect of our lives―in medicine, business, finance, marketing, manufacturing, education, etc. He also raises important ethical concerns that arise with the use of AI in weapons systems and warfare. The book examines the difficulty of controlling these systems as they become more and more intelligent, someday becoming smarter than humans.”
―Edward Albers, retired semiconductor executive at Honeywell Analytics

About the Author

Louis A. Del Monte is an award-winning physicist, inventor, futurist, featured speaker, CEO of Del Monte and Associates, Inc., and high-profile media personality. For over thirty years, he was a leader in the development of microelectronics and microelectromechanical systems (MEMS) for IBM and Honeywell. His patents and technology developments, currently used by Honeywell, IBM, and Samsung, are fundamental to the fabrication of integrated circuits and sensors. As a Honeywell Executive Director from 1982 to 2001, he led hundreds of physicists, engineers, and technology professionals engaged in integrated circuit and sensor technology development for both Department of Defense (DOD) and commercial applications. His career has changed the way we work, play, and make war. Del Monte is the recipient of the H.W. Sweatt Award for scientific engineering achievement and the Lund Award for management excellence. He is the author of Nanoweapons, The Artificial Intelligence Revolution, How to Time Travel, and Unraveling the Universe's Mysteries. He has been quoted in or has published articles in the Huffington Post, the Atlantic, Business Insider, American Security Today, and Inc., and has appeared on CNBC.

Excerpt. © Reprinted by permission. All rights reserved.


Introduction

This book describes the ever-increasing role of artificial intelligence (AI) in warfare. Specifically, we will examine autonomous weapons, which will dominate the battlefield during the first half of the twenty-first century. Next, we will examine genius weapons, which will dominate the latter portion of the century. In both cases, we will discuss the ethical dilemmas these weapons pose and their potential threat to humanity.

Mention autonomous weapons and many will conjure images of Terminator robots and US Air Force drones. Although Terminator robots are still a fantasy, drones with autopilot capabilities are a reality. For the present at least, however, a human still decides when a drone makes a kill. In other words, the drone is not autonomous. To be perfectly clear, the US Department of Defense defines an autonomous weapon system as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.” In military jargon, these weapons are often termed “fire and forget.”

In addition to the United States, nations such as China and Russia are investing heavily in autonomous weapons. Russia, for example, is fielding autonomous weapons to guard its ICBM bases. In 2014, Deputy Prime Minister Dmitry Rogozin said that Russia intends to field “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike.”

In 2015, Deputy Secretary of Defense Robert Work reported this grim reality during a national defense forum hosted by the Center for a New American Security. According to Work, “We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield.” In fact, Work quoted Gerasimov as saying, “In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.”

You may ask: What is driving the development of autonomous weapons? Two forces are at work:

1. Technology: AI technology, which provides the intelligence of autonomous weapon systems (AWS), is advancing exponentially. Experts in AI predict autonomous weapons, which would select and engage targets without human intervention, will debut within years, not decades. Indeed, a limited number of autonomous weapons already exist. For now, they are the exception. In the future, they will dominate conflict.

2. Humanity: In 2016, attendees at the World Economic Forum (WEF) were asked, "If your country was suddenly at war, would you rather be defended by the sons and daughters of your community, or an autonomous AI weapons system?" A majority, 55 percent, responded that they would prefer artificially intelligent soldiers. This result suggests a worldwide desire to have robots, sometimes referred to as "killer robots," fight wars rather than risk human lives.

The use of AI technology in warfare is not new. The first large-scale use of “smart bombs” by the United States during Operation Desert Storm in 1991 made it apparent that AI had the potential to change the nature of war. The word “smart” in this context means “artificially intelligent.” The world watched in awe as the United States demonstrated the surgical precision of smart bombs, which neutralized military targets and minimized collateral damage. In general, using autonomous weapon systems in conflict offers highly attractive advantages:

• Economic: Reducing costs and personnel.

• Operational: Increasing the speed of decision-making, reducing dependence on communications, reducing human errors.

• Security: Replacing or assisting humans in harm’s way.

• Humanitarian: Programming killer robots to respect the international humanitarian laws of war better than humans.

Even with these advantages, there are significant downsides. For example, when warfare becomes just a matter of technology, will engaging in war become more attractive? No commanding officer has to write a letter to the mothers and fathers, wives and husbands, of a drone lost in battle. Politically, it is more palatable to report equipment losses than human casualties. In addition, a country with superior killer robots has both a military advantage and a psychological advantage. To understand this, consider the second question posed to attendees of the 2016 World Economic Forum: "If your country was suddenly at war, would you rather be invaded by the sons and daughters of your enemy, or an autonomous AI weapon system?" A significant majority, 66 percent, responded with a preference for human soldiers.

In May 2014, a Meeting of Experts on Lethal Autonomous Weapons Systems was held at the United Nations in Geneva to discuss the ethical dilemmas such weapon systems pose, such as:

• Can sophisticated computers replicate the human intuitive moral decision-making capacity?

• Is human intuitive moral perceptiveness ethically desirable? If the answer is yes, then the legitimate exercise of deadly force should always require human control.

• Who is responsible for the actions of a lethal autonomous weapon system? If the machine is following a programmed algorithm, is the programmer responsible? If the machine is able to learn and adapt, is the machine responsible? Is the operator or country that deploys LAWS (i.e., lethal autonomous weapon systems) responsible?

In general, there is growing worldwide concern about taking humans “out of the loop” in the use of legitimate lethal force.

Concurrently, though, AI technology continues its relentless exponential advancement. AI researchers predict there is a 50 percent probability that AI will equal human intelligence in the 2040 to 2050 timeframe. Those same experts predict that AI will greatly exceed the cognitive performance of humans in virtually all domains of interest as early as 2070, which is termed the “singularity.” Here are three important terms we will use in this book:

1. We will term a computer at or beyond the singularity a “superintelligence,” as is common in the field of AI.

2. When referring to the class of computers with this level of AI, we will use the term “superintelligences.”

3. In addition, we can term weapons controlled by superintelligence as “genius weapons.”

Following the singularity, humanity will face superintelligences: computers that greatly exceed the cognitive performance of humans in virtually all domains of interest. This raises a question: How will superintelligences view humanity? Our history shows that we engage in devastating wars and release malicious computer viruses, both of which could adversely affect these machines. Will superintelligences view humanity as a threat to their existence? If the answer is yes, this raises another question: Should we give such machines military capabilities (i.e., create genius weapons) that they could potentially use against us?

A cursory view of AI suggests it is yielding numerous benefits. In fact, most of humanity perceives only the positive aspects of AI technology, such as automotive navigation systems, Xbox games, and heart pacemakers. Mesmerized by the technology, most people fail to see its dark side. Nonetheless, there is a dark side. For example, the US military is deploying AI into almost every aspect of warfare, from Air Force drones to Navy torpedoes.

Humanity acquired the ability to destroy itself with the invention of the atom bomb. During the Cold War, the world lived in perpetual fear that the United States and the Union of Soviet Socialist Republics would engulf the world in a nuclear conflict. Although we came dangerously close to both intentional and unintentional nuclear holocaust on numerous occasions, the doctrine of “mutually assured destruction” (MAD) and human judgment kept the nuclear genie in the bottle. If we arm superintelligences with genius weapons, will they be able to replicate human judgment?

In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19 percent chance of human extinction by the end of this century, citing the top four most probable causes:

1. Molecular nanotechnology weapons: 5 percent probability

2. Superintelligent AI: 5 percent probability

3. Wars: 4 percent probability

4. Engineered pandemic: 2 percent probability

Currently, the United States, Russia, and China are relentlessly developing and deploying AI in lethal weapon systems. If we consider the Oxford assessment, this suggests that humanity is combining three of the four elements necessary to edge us closer to extinction.
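As a quick arithmetic aside, note that the four causes listed above account for only 16 of the 19 percentage points in the overall estimate, leaving roughly 3 points for every other cause combined. The short Python sketch below simply tallies the quoted figures to make that gap explicit; it is purely illustrative, and it assumes the survey's per-cause estimates can be read additively, which the survey's methodology does not strictly guarantee.

# Illustrative tally of the 2008 Oxford survey figures quoted above.
# Assumption: the per-cause estimates can be summed, which the survey's
# methodology does not strictly guarantee; this is a sketch, not analysis.
top_causes = {
    "molecular nanotechnology weapons": 5.0,  # percent
    "superintelligent AI": 5.0,
    "wars": 4.0,
    "engineered pandemic": 2.0,
}

overall_estimate = 19.0                      # percent chance of extinction by 2100
listed_total = sum(top_causes.values())      # 16.0 percent
remainder = overall_estimate - listed_total  # roughly 3 percent, all other causes

print(f"Top four causes: {listed_total:.0f}% of the {overall_estimate:.0f}% total")
print(f"All other causes combined: {remainder:.0f}%")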

This book will explore the science of AI, its applications in warfare, and the ethical dilemmas those applications pose. In addition, it will address the most important question facing humanity: Will it be possible to continually increase the AI capabilities of weapons without risking human extinction, especially as we move from smart weapons to genius weapons?
