As the eastern sky began to lighten, we ducked behind a jutting rock, the best cover we could find. "This is Mike-One-Two-Romeo," I announced to our rapid reaction force. "The village is converging on our position."
THE DECISION
Killing a civilian who had stumbled upon our position would have been a war crime, but because the girl was scouting for the enemy, killing her would have been legal. A robot programmed to kill lawful enemy combatants would have attacked the little girl.
BETTER THAN HUMAN?
THE DEBATE
New AI methods like deep learning are powerful, but they often result in systems that are effectively a "black box," even to their designers. Science and technology luminaries such as Stephen Hawking, Elon Musk, and Apple co-founder Steve Wozniak have spoken out against autonomous weapons, warning that they could trigger a "global AI arms race."
STUMBLING TOWARD THE ROBOPOCALYPSE
THE COMING SWARM
Without any human direction, they climb to their assigned altitudes and form two teams, reporting back when they are "swarm ready." The Red and Blue swarms wait in their respective corners of the aerial battle arena, circling like flocks of hungry buzzards. The operator simply chooses a swarm behavior (wait, follow, attack, or land) and tells the swarm to start.
THE ACCIDENTAL REVOLUTION
Just as robots are changing industries, from self-driving cars to robotic vacuum cleaners and caregivers for the elderly, they are also changing warfare. Drones were not new (they had been used on a limited scale in Vietnam), but the demand for them was huge.
THE MARCH TOWARD EVER-GREATER AUTONOMY
For example, it would be of no use if all the drones in a swarm went after the same enemy aircraft. In an auction-based approach to dividing up tasks, each robot bids on a task; the robot that bids the highest "wins" the auction and catches the ball, while the others move out of the way.
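Here is a minimal sketch of how such an auction can work, using the ball-catching example. The distance-based bid and all the names below are illustrative assumptions, not any particular swarm's protocol:

```python
# Auction-based task allocation, sketched for the ball-catching example.
# Assumption: each robot's bid is simply its proximity to the ball; real
# systems might also bid on fuel, sensor angle, or other costs.
import math

def bid(robot_pos, ball_pos):
    """A robot's bid: higher when it is closer to the ball."""
    return 1.0 / (1.0 + math.dist(robot_pos, ball_pos))

def run_auction(robots, ball_pos):
    """Highest bidder catches the ball; everyone else yields."""
    bids = {rid: bid(pos, ball_pos) for rid, pos in robots.items()}
    winner = max(bids, key=bids.get)
    return {rid: ("catch" if rid == winner else "yield") for rid in robots}

robots = {"R1": (0.0, 5.0), "R2": (2.0, 1.0), "R3": (9.0, 9.0)}
print(run_auction(robots, ball_pos=(3.0, 0.0)))
# {'R1': 'yield', 'R2': 'catch', 'R3': 'yield'}  (R2 is closest)
```

Because every robot computes the same bids, no central controller is needed; each robot can run the auction locally and reach the same answer.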
REACHING THE LIMIT
The Air Force Flight Plan acknowledged the seriousness of what it suggested might be possible. When the Flight Plan was released in 2009, I was working in the Office of the Secretary of Defense as a civilian policy analyst focusing on drone policy.
THE TERMINATOR AND THE ROOMBA
The symbolism in the film is unmistakable: unlike the other robots, which are slaves to logic, Sonny has a "heart." In the Terminator movies, when the military AI Skynet becomes self-aware, it makes a different choice.
THE THREE DIMENSIONS OF AUTONOMY
Nevertheless, the machine's internal cognitive processes are, at least in principle, generally traceable by the human user. But as the complexity of the system increases, so does the difficulty of predicting how the machine will act.
HOW MUCH SHOULD WE TRUST AUTONOMOUS SYSTEMS?
MACHINES THAT KILL
The Gatling gun was not an autonomous weapon, but it began the long evolution of weapon automation. In the Gatling gun, the process of loading, firing, and ejecting cartridges was automated, so long as a person kept turning the crank.
AUTOMATIC WEAPONS: MACHINE GUNS
Under sustained fire, the barrel of the SAW will glow red hot; it may need to be removed and replaced with a spare before it begins to melt. In the twentieth century, weapon designers would take the next step by adding rudimentary sensing technologies to weapons: the early stages of intelligence.
THE FIRST “SMART” WEAPONS
For a while, at least for the British, machine guns might have been seen as a weapon that reduced the cost of war. Machine guns sped up the killing process by harnessing the efficiency of the industrial age in the service of war.
PRECISION-GUIDED MUNITIONS
Some guided munitions have very little autonomy at all, with a human steering the weapon to its target throughout its flight. In other weapons, a human operator "paints" a target with a laser or radar, and the missile or bomb homes in on the laser or radar reflection.
HOMING MUNITIONS HAVE LIMITED AUTONOMY
Because a homing munition's autonomy is strictly limited, the human operator must know of a specific target in advance. Homing munitions have a very limited ability to search for targets in time and space, and launching one without knowledge of a specific target would be a waste.
THE WEAPON SYSTEM
This means that homing munitions must function as part of a larger weapon system to be useful. If there is a human in the loop deciding which target(s) to attack, it is a semi-autonomous weapon system.
SUPERVISED AUTONOMOUS WEAPON SYSTEMS
With autonomous weapon systems, the whole engagement loop (searching, detecting, deciding to engage, and engaging) is automated. Examples include ship-based defenses such as the Aegis Combat System and the Phalanx Close-In Weapon System (CIWS), and land-based air and missile defense systems such as the US Patriot.
FULLY AUTONOMOUS WEAPON SYSTEMS
With semi-autonomous weapons, the human operator launches the weapon at a designated target or set of known targets. With a fully autonomous weapon, the human decides to launch, but the weapon itself chooses the specific target to attack.
UNUSUAL CASES—MINES, ENCAPSULATED TORPEDO MINES, AND SENSOR FUZED WEAPON
Encapsulated torpedo mines are a special type of naval mine that functions more like an autonomous weapon. Rather than simply exploding once activated, an encapsulated torpedo mine releases a torpedo that homes in on the target.
PUSHING “START”
The human launching the SFW needs to know that there is a group of tanks at a particular point in time and space. The SFW differs from a traditional homing munition in that it can hit multiple targets.
WHY AREN’T THERE MORE AUTONOMOUS WEAPONS?
Another factor was that if a TASM was fired and there was no valid target within the weapon's search range, the weapon would be wasted. This made McGrath reluctant to fire the weapon when evidence of a valid target in the search area was scant.
FUTURE WEAPONS
The Tomahawk Land Attack Missile (TLAM-E, or Tactical Tomahawk) includes a two-way satellite communications link that allows the weapon to be retargeted in flight. Because the commander maintains control of the weapon in flight, a mistaken strike is less likely.
THE FUTURE BEING BUILT TODAY
SALTY DOGS: THE X-47B DRONE
Technological development has focused on automating the physical movement of the aircraft: take-off, landing, flight, and mid-air refuelling. The Navy does not appear to have been immune to the same cultural resistance to drone combat found in the Air Force.
THE LONG-RANGE ANTI-SHIP MISSILE
The missile then moves to the next targeting phase: "target classification." The missile scans each object and finally settles on ID:24. When the LRASMs are fired, the video states that they are fired at a "SAG cruiser" and a "SAG destroyer." Humans launch the missiles at specific ships that have been tracked and identified via satellites.
BREAKING THE SPEED LIMIT: FAST LIGHTWEIGHT AUTONOMY
The critical point in the video isn't at the end of the missile's flight as it approaches the ship; it's at the beginning. As DARPA program manager Mark Micire explained in a press release, "The challenge for teams now is to improve the algorithms and computational efficiency on board to expand UAVs..."
WEAPONS THAT HUNT IN PACKS: COLLABORATIVE OPERATIONS IN DENIED ENVIRONMENTS
As air vehicles in the Disco Group find suspected enemy targets, they submit their recommended classification to the human for confirmation. Here, instead of the human being in the loop, the human is on the loop, at least for emerging threats.
THE DEPARTMENT OF MAD SCIENTISTS
DARPA only invests in projects that are "DARPA hard," challenging technology problems that others may consider impossible. Over the past five decades, DARPA has repeatedly sown disruptive technologies that have brought decisive advantages to the United States.
INSIDE THE PUZZLE PALACE
You can imagine anti-submarine warfare pods, you can imagine anti-submarine warfare wolf packs, you can imagine mine warfare flotillas, you can imagine anti-submarine warfare surface action groups. Like many other robotic systems, Sea Hunter can navigate autonomously and may one day be armed.
BEHIND THE CURTAIN: INSIDE DARPA’S TACTICAL TECHNOLOGY OFFICE
"Until the machine processors equal or surpass humans at making abstract decisions, there's always going to be mission command. So that system-of-systems architecture is going to be needed to knit it all together."
TARGET RECOGNITION AND ADAPTION IN CONTESTED ENVIRONMENTS (TRACE)
In this hide-and-seek contest, finding cooperative targets, ones that actively give themselves away, is like finding a person waving a flashlight in the dark. Stationary targets in cluttered environments, however, can be as difficult to see as a deer hiding in the woods.
CROSSING THE THRESHOLD
The policy describes who is involved in the review process—senior defense civilian policy and acquisition officials and the chairman of the Joint Chiefs of Staff—and the criteria for the review. Instead, the policy provides a process by which appropriate officials can review new uses of autonomy before implementation.
GIVING THE GREEN LIGHT TO AUTONOMOUS WEAPONS
"Do you want it to be an acceptance of the rules you set out to identify something as hostile?" Kendall had no answers: "I think at this point we're going to have to make a tough decision about how we're going to do that." He saw value in having an informed human as a backup, but asked, "What if it's a situation where there's no time?"
THE REVOLUTIONARY
Technologies such as the internal combustion engine, which powered civilian cars and airplanes in the Industrial Revolution, also led to tanks and military aircraft. A few weeks later, in another interview, Work stated his belief that "within the next decade or decade and a half, it will become clear when and where we delegate authority to machines." A main concern for him was that while in the US we debate the "moral, political, legal, ethical" dimensions of autonomous weapons, potential adversaries may not.
THE FUTURE OF LETHAL AUTONOMY
"If you have low collateral damage [requirements]," he said, "you're not going to fire a weapon into an area where the target location [uncertainty] is so high that the chances of collateral damage increase." "Humans will use AI and autonomy in ways that surprise us," he said.
THE PAST AS A GUIDE TO THE FUTURE
"I'm always looking for: what's the easiest thing with the highest return on investment that we can actually go do, where people will thank us for doing it." The lesson is that "the threat gets a voice." Referring to Japanese innovations in long-range torpedoes, Schuette said, "We did not plan to fight a torpedo war."
WORLD WAR R
As of June 2017, sixteen countries had armed drones: China, Egypt, Iran, Iraq, Israel, Jordan, Kazakhstan, Myanmar, Nigeria, Pakistan, Saudi Arabia, Turkey, Turkmenistan, the United Arab Emirates, the United Kingdom, and the United States. As in the United States, the key question will be whether these nations plan to cross the line to full autonomy.
THE CURIOUS CASE OF THE AUTONOMOUS SENTRY BOT
Ho Yoo said, "The final decision about shooting should be made by a human, not the robot." But the article made clear that Yoo's "should" was not a requirement, and that the robot did have a fully automatic option. And in the same interview in which Samsung's spokesperson claimed that a human would always be in the loop, the spokesperson also asserted, "The SGR-1 can and will prevent wars."
THE BRIMSTONE MISSILE
In Single Mode, a human "paints" the target with a laser and the missile homes in on the laser's reflection. The missile will go wherever the human points the laser, allowing the human to provide "guidance to the target." Dual Mode combines laser guidance with a millimeter-wave (MMW) radar seeker for "rapidly moving and maneuvering targets and under tight rules of engagement." The human designates the target with a laser; then there is a "handoff" from the laser to the MMW seeker in the final stage so that the weapon can engage fast-moving targets.
THE TARANIS DRONE
The UK does not have fully autonomous weapon systems and has no plans to develop or acquire them: "The UK believes that [lethal autonomous weapon systems] do not exist and may never exist." This means that Britain could develop weapons that would seek, select, and attack targets on their own; it would simply call them "automated weapons," not "autonomous weapons." In fact, the UK's doctrine note refers to systems such as the Phalanx gun (a supervised autonomous weapon) as "fully automated weapon systems." The doctrine note leaves open the possibility of their development, provided they undergo a legal weapons review to demonstrate that they can be used in a manner consistent with the laws of war.
RUSSIA’S WAR BOTS
Wolf-2 can act on its own to some degree (its creators are vague about that degree), but the decision to use lethal force is ultimately under human control. The Wolf-2 sits among a family of similarly sized robot vehicles. The Uran-9 is completely unmanned and is controlled remotely by soldiers from a nearby command vehicle.
AN ARMS RACE IN AUTONOMOUS WEAPONS?
In 2016, the UK-based NGO Article 36, which has been a leading voice in shaping the international debate on autonomous weapons, wrote a policy brief criticizing the UK government's stance on autonomous weapons. In the absence of an autonomous smoking gun, it seems unnecessarily alarmist to declare that the autonomous arms race is already underway, but we could be at the very beginning.
GARAGE BOTS
HUNTING TARGETS
The ability to identify a target is a key missing link in the development of a do-it-yourself autonomous weapon. This requires three abilities: the ability to intelligently maneuver through the search environment; the ability to distinguish between potential targets to identify the right ones; and the ability to engage those targets.
DEEP LEARNING
In one of the most powerful examples of how neural networks can be used to solve difficult problems, the Alphabet (formerly Google) AI company DeepMind trained a neural network to play Go, a Chinese strategy game similar to chess, better than any human player. Earlier, given only the pixels on the screen and the game score as input and told to maximize the score, a DeepMind neural network learned to play Atari games at the level of a professional human video game tester.
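A minimal sketch of that pixels-to-score setup, known as deep Q-learning, is below. The network shape, hyperparameters, and frame sizes are illustrative assumptions, not DeepMind's actual configuration:

```python
# Deep Q-learning sketch: a network maps raw screen pixels to one
# "Q-value" per action, and is trained so that picking the highest-value
# action maximizes the game score over time.
import torch
import torch.nn as nn

n_actions = 4  # illustrative: e.g., four joystick directions

# Input: a stack of four grayscale screen frames (pixels only).
q_net = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
    nn.Linear(256, n_actions),  # one Q-value per action
)
optimizer = torch.optim.RMSprop(q_net.parameters(), lr=2.5e-4)

def td_update(state, action, reward, next_state, gamma=0.99):
    """One temporal-difference step: nudge Q(s, a) toward
    reward + gamma * max_a' Q(s', a')."""
    q_pred = q_net(state)[0, action]
    with torch.no_grad():
        q_target = reward + gamma * q_net(next_state).max()
    loss = (q_pred - q_target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Illustrative call with 84x84 four-frame stacks of screen pixels.
s = torch.rand(1, 4, 84, 84)
td_update(s, action=2, reward=1.0, next_state=torch.rand(1, 4, 84, 84))
```

The point of the design is that nothing game-specific is hand-coded: the same network, fed nothing but pixels and score, can learn many different games.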
NEURAL NETS FOR EVERYONE
All the tools needed to build an autonomous weapon that could target people on its own were available online. Much of the technology behind AI was software, which meant it could be copied virtually for free.
ROBOTS EVERYWHERE
Dela Cuesta explained that all students at TJ must complete a robotics project in their freshman year as part of their required coursework. Still, Dela Cuesta pushes students to build things themselves instead of using existing components.
THE EVERYONE REVOLUTION
We can't know with any certainty what a future of autonomous weapons would look like, but we do have better tools than science fiction for guessing the promises and dangers they hold. Humanity's past and current experiences with autonomy in the military and other environments point to the potential benefits and dangers of autonomous weapons.
ROBOTS RUN AMOK
Shooting them down was not the primary duty of the Patriot operators, but they were authorized to engage if a missile appeared on their radar. As it turns out, even if the IFF had worked, the Patriot would not have been able to see the signal: the IFF codes had not been loaded into the Patriot's computers.
ASSESSING THE PATRIOT’S PERFORMANCE
White's F-18 was squawking IFF, and it appeared on the Patriot's radar as an aircraft. The causes of the Patriot fratricides were a complex mix of human error, improper testing, poor training, and unforeseen interactions on the battlefield.
ROBUTOPIA VS. ROBOPOCALYPSE
There may have been a human "in the loop," but the human operators didn't question the machine when they should have. They did not exercise the kind of judgment that Stanislav Petrov did when he questioned the signals his system was giving him about a supposed US launch. The Patriot operators trusted the machine, and it was wrong. Autonomous systems will do exactly what they are programmed to do, and it is this quality that makes them both reliable and maddening, depending on whether what they were programmed to do was the right thing at the time.
AUTONOMY AND RISK
Making the broom completely autonomous with no human involved wasn't the cause of the failure, but it did dramatically amplify the consequences if something went wrong. Fully autonomous systems are not necessarily more likely to fail than semi-autonomous or supervised autonomous systems, but if they do, the consequences (the potential damage caused by the system) can be serious.
TRUST, BUT VERIFY
Having a human "in the loop" would have reduced the danger from the faulty software design. The game of Go has more possible positions than atoms in the universe, and the real world is far more complex than Go.
WHEN ACCIDENTS ARE NORMAL
Tight coupling occurs when an interaction in one component of the system directly and immediately affects components elsewhere. The human operators, however, faced another problem, one made more difficult, not easier, by more advanced automation: the incomprehensibility of the system.
THE INEVITABILITY OF ACCIDENTS
Failures are inevitable in complex, tightly coupled systems, and the sheer complexity of the system prevents predicting when and how failures are likely to occur. Even in fields where safety is a central concern, such as space travel and nuclear power, predicting all possible interactions of a system and its environment is effectively impossible.
“BOTH SIDES HAVE STRENGTHS AND WEAKNESSES”
"A significant message to the Nuclear Regulatory Commission from Three Mile Island was that people are not omnipotent," Kennedy said. And Three Mile Island said alarms almost never happen individually. This was an unmanageable level of complexity for any human operator to absorb, Kennedy explained.
AUTOMATION AND COMPLEXITY—A DOUBLE-EDGED SWORD
Studies have pegged the average software industry error rate at fifteen to fifty errors per 1,000 lines of code. I found this to be a challenge even with the Nest thermostat, which doesn't have millions of lines of code.
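To make the scale of the problem concrete, here is the arithmetic for a hypothetical ten-million-line codebase; the codebase size is an assumption chosen purely for illustration, while the error rates are the ones cited above:

```python
# Expected latent errors at the industry rates quoted in the text
# (15-50 errors per 1,000 lines), applied to a hypothetical codebase.
lines_of_code = 10_000_000  # illustrative size, not any real system
for errors_per_kloc in (15, 50):
    total = lines_of_code // 1_000 * errors_per_kloc
    print(f"{errors_per_kloc} errors/KLOC -> {total:,} expected errors")
# 15 errors/KLOC -> 150,000 expected errors
# 50 errors/KLOC -> 500,000 expected errors
```

Even at the optimistic end of the range, a large system would ship with many thousands of latent defects, any one of which might surface only under rare battlefield conditions.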
“WE DON’T UNDERSTAND ANYTHING!”
Throughout the incident, the pilots repeatedly misinterpreted data from the plane and misunderstood the aircraft's behavior. The pilots had pulled back too far on the stick, causing the aircraft to stall and lose altitude.
THE PATRIOT FRATRICIDES AS NORMAL ACCIDENTS
As the Defense Science Board task force on the Patriot pointed out, given the sheer number of interactions, "even very low-probability errors could lead to regrettable incidents of fratricide." The fact that the F-18 and Tornado incidents had different causes lends further credence to the notion that normal accidents lurk beneath the surface of complex systems, waiting to happen. The theory of normal accidents says "no": the likelihood of accidents can be reduced, but never eliminated.
COMMAND AND DECISION
In fact, lessons from SUBSAFE and aircraft carrier operations have already informed how the Navy operates the Aegis combat system. The Navy describes the Aegis as "a centralized, automated, command-and-control (C2) and weapons control system designed as a total weapon system, from detection to kill." It is the electronic brain of a ship's weapons.
THE AEGIS COMBAT SYSTEM
The FIS contains a key that must be inserted before any of the ship's weapons can fire. When the FIS is red (or completely removed), the ship's weapons are disabled at the hardware level.
“ROLL GREEN”
And as soon as the missile was gone, I saw the tactical action officer roll FIS red again. The automation was powerful and they respected it; they even recognized that there was a place for it. But that didn't mean they were handing their human decision-making over to the machine.
THE USS VINCENNES INCIDENT
Believing that the Iranians had chosen to escalate the engagement by sending a fighter jet, and that his ship was threatened, the Vincennes' captain gave the order to fire. The USS Vincennes incident and the Patriot fratricides are two opposite cases on the spectrum of automation versus human control.
ACHIEVING HIGH RELIABILITY
The Aegis culture is 180 degrees away from the "unwarranted and uncritical reliance on automation" that military researchers found in the Patriot community in 2003. As a result, he said, "the military is deluded about how good its people really are."
NUCLEAR WEAPONS SAFETY AND NEAR-MISS ACCIDENTS
Sagan concluded, "the historical evidence provides much stronger support for the ideas developed by Charles Perrow in normal accidents" than for high-reliability theory. Rather, the history of nuclear near-misses simply reflects "the inherent limits of organizational safety," he said.
BLACK BOX
Due to the subtle manipulation of the images, the neural network identified all the objects in the right column as "ostrich." This is because, at the micro level, the network's internal representation of the data is surprisingly simple and linear.
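A minimal sketch of one standard way such adversarial images are produced, the fast gradient sign method, appears below. It exploits exactly that local linearity: every pixel is nudged a tiny step in whichever direction most increases the classifier's error. The stand-in classifier, image size, and step size are illustrative assumptions, not the exact setup behind the ostrich images:

```python
# Fast gradient sign method (FGSM) sketch: perturb each pixel by at most
# epsilon in the direction that raises the classifier's loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image`, perturbed by at most epsilon."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    # Step each pixel +/- epsilon, in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in the valid range

x = torch.rand(1, 1, 28, 28)   # a stand-in "image"
y = torch.tensor([3])          # its correct class
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())  # the change per pixel is imperceptibly small
```

The unsettling part is how cheap this is: one gradient computation, a perturbation too small for a human to notice, and the network's answer can flip entirely.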
FAILING DEADLY
Taking the human out of the loop reduces slack and increases the coupling of the system. In fully autonomous weapons, there is no human to intervene and stop the system's operation.
THE RUNAWAY GUN
A human can step outside the rigid rules of the system and exercise judgment. Contrary to breathless reports of a "robot cannon rampage," the remote weapon was not an autonomous weapon, and it likely malfunctioned due to a mechanical problem, not a software bug.
THE DANGER OF AUTONOMOUS WEAPONS
Deploying a fully autonomous weapon would be a big risk, but one the military might decide is worth taking. Experience with supervised autonomous weapons like the Aegis would be helpful, but only to an extent.
BOT VS. BOT
RISE OF THE MACHINES