But would it be authorized? DARPA programs are intended to explore the art of the possible, but that doesn’t mean that DoD would necessarily turn those experimental projects into operational weapon systems. To better understand whether the Pentagon might actually approve autonomous weapons, I sat down with then-Pentagon acquisition chief, Under Secretary of Defense Frank Kendall. As the under secretary of defense for acquisition, technology and logistics, Kendall was the Pentagon’s chief technologist and weapons buyer under the Obama Administration. When it came to major weapons systems like the X-47B or LRASM, the decision whether or not to move forward was in Kendall’s hands. In the process laid out under the DoD Directive, Kendall was one of three senior officials, along with the under secretary for policy and the chairman of the Joint Chiefs, who all had to agree in order to authorize developing an autonomous weapon.
Kendall has a unique background among defense technologists. In addition to a distinguished career across the defense technology enterprise, serving in a variety of roles from vice president of a major defense firm to several mid-level bureaucratic jobs within DoD, Kendall has worked pro bono as a human rights lawyer. He has worked with Amnesty International, Human Rights First, and other human rights groups, including as an observer at the U.S. prison at Guantánamo Bay. Given his background, I was hopeful that Kendall might be able to bridge the gap between technology and policy.
Kendall made clear, for starters, that there had never been a weapon autonomous enough even to trigger the policy review. “We haven’t had anything that was even remotely close to autonomously lethal.” If he were put in that position, Kendall said his chief concerns would be ensuring that it complied with the laws of war and that the weapon allowed for “appropriate human judgment,” a phrase that appears in the policy directive. Kendall admitted those terms weren’t defined, but our conversation began to elucidate his thinking.
Kendall started his career as an Army air defender during the Cold War, where he learned the value of automation firsthand. “We had an automatic mode for the Hawk system that we never used, but I could see in an extreme situation where you’d turn it on, because you just couldn’t do things fast enough otherwise,” he said. When you have “fractions of a second” to decide—that’s a role for machines.
Kendall said that automatic target recognition and machine learning were improving rapidly. As they improve, it should become possible for the machine to select its own targets for engagement. In some settings, such as taking out an enemy radar, he thought it could be done “relatively soon.”
This raises tricky questions. “Where do you want the human intervention to be?” he asked. “Do you want it to be the actual act of employing the lethality? Do you want it to be the acceptance of the rules that you set for identifying something as hostile?” Kendall didn’t have the answers. “I think we’re going to have to sort through all that.”
One important factor was the context. “Are you just driving down the street or are you actually in a war, or you’re in an insurgency? The context matters.” In some settings, using autonomy to select and engage targets might be appropriate. In others, it might not.
Kendall saw using an autonomous weapon to target enemy radars as fairly straightforward and something he didn’t see many people objecting to. There were other examples that pushed the boundaries. Kendall said that on a trip to Israel, his hosts from the Israel Defense Forces had him sit in a Merkava tank that was outfitted with the Trophy active protection system.
The Israelis fired a rocket-propelled grenade near the tank (“offset a few meters,” he said) and the Trophy system intercepted it automatically. “But suppose I also wanted to shoot back at . . . wherever the bullet had come from?” he asked. “You can automate that, right? That’s protecting me, but it’s the use of that weapon in a way which could be lethal to whoever, you know, was in the line of fire when I fire.” He pointed out that automating a return-fire response might prevent a second shot, saving lives. Kendall acknowledged that this had risks, but there were risks in not doing it as well.
“How much do we want to put our own people at risk by not allowing them to use this technology? That’s the other side of the equation.”
Things become especially difficult if the machine is better than the person, which, at some point, will happen. “I think at that point, we’ll have a tough decision to make as to how we want to go with that.” Kendall saw value in keeping a human in the loop as a backup, but, “What if it’s a situation where there isn’t that time? Then aren’t you better off to let the machine do it? You know, I think that’s a reasonable question to ask.”
I asked him for his answer to the question—after all, he was the person who would decide in DoD. But he didn’t know.
“I don’t think we’ve decided that yet,” he said. “I think that’s a question we’ll have to confront when we get to where technology supports it.”
Kendall wasn’t worried, though. “I think we’re a long way away from the Terminator idea, the killer robots let loose on the battlefield idea. I don’t think we’re anywhere near that and I don’t worry too much about that.”
Kendall expressed confidence in how the United States would address this technology. “I’m in my job because I find my job compatible with being a human rights lawyer. I think the United States is a country which has high values and it operates consistent with those values. . . . I’m confident that whatever we do, we’re going to start from the premise that we’re going to follow the laws of war and obey them and we’re going to follow humanitarian principles and obey them.”
Kendall was worried about other countries, but he was most concerned about what terrorists might do with commercially available technology.
“Automation and artificial intelligence are one of the areas where the commercial developments I think dwarf the military investments in R&D. They’re creating capabilities that can easily be picked up and applied for military purposes.” As one example, he asked, “When [ISIS] doesn’t have to put a person in that car and can just send it out on its own, that’s a problem for us, right?”