BEWARE THE KILLER ROBOTS
(P3) Artificial intelligence and robotics EXPERTS must decide whether to support or oppose the development of LETHAL AUTONOMOUS weapons systems, or LAWS, URGED Stuart Russell, a professor of computer science at the University of California at Berkeley, in a paper published last month in Nature.
(P5) For example, the technology used in self-driving cars could be combined with the TACTICAL control of Google’s DeepMind DQN algorithm — which earlier this year DEMONSTRATED its PROWESS in video game play — to support SEARCH-AND-DESTROY missions in urban areas.
(P6) International HUMANITARIAN laws such as the 1949 Geneva Conventions don’t have specific PROVISIONS for the use of LAWS, but they may still apply, Russell THEORIZED, because they require SUBJECTIVE judgments about things like military necessity, who is and isn’t a COMBATANT, and PROPORTIONALITY, which current AI systems can’t TACKLE.
(P7) The U.S., the U.K., and Israel, which lead the development of LAWS technology, have taken the position that a TREATY on the use of LAWS is not necessary, Russell added, because they already have internal weapons review processes that ensure COMPLIANCE with international law.
(P8) However, existing legal RESTRICTIONS “are INADEQUATE to really deal with this FUNDAMENTAL change in technology and warfare,” said Peter Asaro, an assistant professor at The New School, who co-FOUNDED the International Committee for Robot Arms Control and is SPOKESPERSON for the Campaign to Stop Killer Robots.
(P9) Letting machines choose whom to kill would VIOLATE fundamental principles of human dignity, Russell warned. Killer robots could be DEPLOYED in the millions, and their AGILITY and lethality would leave humans defenseless.
(P10) Next-generation armed drones, including the X-47B and the Taranis, are being developed for full flight autonomy and “might be designed to select and ENGAGE targets without SUPERVISION,” the Campaign to Stop Killer Robots’ Asaro told TechNewsWorld.
(P11) PROPONENTS of the use of LAWS, including the U.S. Department of Defense, contend that killer robots will MINIMIZE CIVILIAN CASUALTIES, but opponents argue that LAWS will make it easier to start a war, because an attack could be launched without immediate risk to the attacker.
(P12) Whether people accept the arguments in favor of LAWS “will depend on how much faith you have in modern AI and its ability to distinguish people, places, and things on a noisy battlefield with enemies intentionally trying to confuse it,” remarked John Sullins, chair of philosophy at Sonoma State University and author of an ETHICAL analysis of the case for controlling LAWS.
(P13) The technology will make it “more difficult to hold people ACCOUNTABLE for the mistakes and ATROCITIES caused by machines, because they will be unintentional in the legal sense,” Asaro pointed out. Further, it will DEVALUE human life.
(P14) “I have given talks to military and policy professionals and … a number of them … are deeply concerned with [the use of LAWS],” Sullins told TechNewsWorld. “However, the main focus is on law and treaty compliance.”
(P15) Not enough attention is given to the fact that a country might comply completely with international law but still carry out unethical actions, he pointed out.
(P16) For example, a military power could use LAWS in a legal way, but it “could still be unethical if those weapons let it engage in decades-long LOW-LEVEL CONFLICTS,” said Sullins, “since the ethical choice is to seek peaceful relations — not settle for the more PRAGMATIC solution of less lethal wars.”
(P17) Situations like that could ARISE because “with any new weapons system, the focus initially is on the offering’s primary goal,” noted Robert Enderle, principal at the Enderle Group. “Safety and security come later.”
(P18) However, it’s critical to design safety and security into autonomous weapons from the start, he told TechNewsWorld, “or the probability of these weapons turning on their masters becomes unacceptably high.”
(P19) LAWS “should be OUTLAWED” because of the threat they would POSE if a terrorist were to get hold of them, or if they were hacked or went out of control for any reason, Enderle CAUTIONED. In such cases, “stopping them would be very difficult, and the DEATH TOLL would be EXCEEDINGLY high.”
If you found the passage difficult to read or had problems understanding specific words or idiomatic expressions, please discuss them with your tutor.
DISCUSSION QUESTIONS:
The following discussion questions should be answered in your own words and with your own arguments.
- Briefly summarize the content of the article in your own words.
- “Killer robots” are only one type of robot. Do you think that there will be many more robots in our lives ten years from now?
- A lot of new technology comes from the military. Why is this true?
- Do you look forward to the future, or are you afraid of the future?
- If we create machines with “artificial intelligence,” will we be able to control them?
EXPRESSIONS TO PRACTICE:
What do the following expressions mean? Practice using each expression in a sentence; extra points if you can use it in conversation.
- Obstacle course
- Artificial intelligence
- Search and destroy
- Point out
- Low-level conflict
- Death toll