
THE FUTURE OF LETHAL AUTONOMY

off-board sensors say a Russian battalion tactical group is operating in this area. We don’t know exactly what [part] of the battalion tactical group this weapon will kill, but we know that we’re engaging an area where there are hostiles.” Work explained that the missile itself, following its programming logic, might prioritize which targets to strike: tanks, artillery, or infantry fighting vehicles. “We’re going to get to that level. And I see no problem in that,” he said. “There’s a whole variety of autonomous weapons that do end-game engagement decisions after they have been targeted and launched at a specific target or target area.” (Here Work is using “autonomous weapon” to refer to fire-and-forget homing munitions.)

Loitering weapons, Work acknowledged, were qualitatively different. “The thing that people worry about is a weapon we fire at range and it loiters in the area and it decides when, where, how, and what to kill without anything other than the human launching it in the general direction.” Regardless of the label used, these loitering munitions differed from homing munitions that had to be launched at a specific target. But Work didn’t see any problem with loitering munitions either. “People start to get nervous about that, but again, I don’t worry about that at all.” He said he didn’t believe the United States would ever fire such a weapon into an area unless it had done the appropriate estimates for potential collateral damage. If, on the other hand, “we are relatively certain that there are no friendlies in the area: weapons free. Let the weapon decide.”

These search-and-destroy weapons didn’t bother Work, even if they were choosing their own targets, because they were still “narrow AI systems.” These weapons would be “programmed for a certain effect against a certain type of target. We can tell them the priorities. We can even delegate authority to the weapon to determine how it executes end game attack.” With these weapons, there may be “a lot of prescribed decision trees, but the human is always firing it into a general area and we will do [collateral damage estimation] and we will say, ‘Can we accept the risk that in this general area the weapon might go after a friendly?’ And we will do the exact same determination that we have right now.”
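The “prescribed decision trees” Work describes amount to a fixed, human-set priority ordering applied only to pre-approved target types inside a human-designated area. The sketch below is purely illustrative of that idea; the classes, priorities, and engagement box are hypothetical and do not reflect any actual system.

```python
# Hypothetical illustration only: a human-prescribed priority list applied to
# simulated detections inside a human-designated engagement area.
from dataclasses import dataclass
from typing import Optional

# Set by the human operator before launch; lower rank = higher priority.
PRIORITY = {"tank": 0, "artillery": 1, "infantry_fighting_vehicle": 2}

# Human-designated engagement box, in km: ((x_min, x_max), (y_min, y_max)).
ENGAGEMENT_BOX = ((0.0, 5.0), (0.0, 5.0))

@dataclass
class Detection:
    kind: str   # classifier label for the detected object
    x: float    # position within the area (km)
    y: float

def in_engagement_area(d: Detection) -> bool:
    """Only objects inside the human-specified box are ever considered."""
    (x0, x1), (y0, y1) = ENGAGEMENT_BOX
    return x0 <= d.x <= x1 and y0 <= d.y <= y1

def select_target(detections: list) -> Optional[Detection]:
    """Pick the highest-priority detection of a pre-approved type, or nothing."""
    candidates = [d for d in detections
                  if d.kind in PRIORITY and in_engagement_area(d)]
    return min(candidates, key=lambda d: PRIORITY[d.kind], default=None)

if __name__ == "__main__":
    scene = [
        Detection("truck", 1.0, 1.0),                  # type not pre-approved: ignored
        Detection("artillery", 2.0, 3.0),
        Detection("tank", 6.0, 1.0),                   # outside the box: ignored
        Detection("infantry_fighting_vehicle", 4.0, 4.0),
    ]
    print(select_target(scene))   # prints the artillery detection
```

The priorities and the box are fixed by a person before launch; the code only chooses among what that person has already authorized, which is the “narrow AI” point Work is making.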

Work said the key question is, “What is your comfort level on target location error?” He explained, “If you are comfortable firing a weapon into an area in which the target location error is pretty big, you are starting to take more risks that it might go against an asset that might be a friendly asset or an allied asset or something like that. . . . So, really what’s happening is because you can put so much more processing power onto the weapon itself, the [acceptable degree of] target location error is growing. And we will allow the weapon to search that area and figure out the endgame.” An important factor is what else is in the environment and the acceptable level of collateral damage. “If you have real low collateral damage [requirements],” he said, “you’re not going to fire a weapon into an area where the target location [error] is so large that the chances of collateral damage go up.”
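Work’s target-location-error point reduces to a simple area argument: the larger the box the weapon is allowed to search, the more likely it is that something non-hostile sits inside it. The back-of-the-envelope sketch below illustrates only that relationship, using a Poisson assumption and made-up densities; it is not any real collateral damage estimation methodology.

```python
# Toy estimate of why a larger search area raises collateral damage risk:
# P(at least one non-hostile object in the box) under a Poisson assumption,
# with a made-up density of non-hostile objects per square kilometer.
import math

def p_nonhostile_present(area_km2: float, nonhostile_per_km2: float) -> float:
    """Probability that the box contains at least one non-hostile object."""
    expected_count = area_km2 * nonhostile_per_km2
    return 1.0 - math.exp(-expected_count)

# A bigger target location error forces a bigger search box.
for side_km in (1, 5, 10, 20):
    area = side_km ** 2
    p = p_nonhostile_present(area, nonhostile_per_km2=0.02)
    print(f"{side_km:>2} km box: P(non-hostile present) = {p:.2f}")
```

With these made-up numbers, a 1 km box carries roughly a 2 percent chance of containing a non-hostile object, while a 20 km box is almost certain to contain one, which is why Work ties acceptable target location error to the acceptable level of collateral damage.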

In situations where that risk was acceptable, Work saw no problems with such weapons. “I hear people say, ‘This is some terrible thing. We’ve got killer robots.’ No we don’t. Robots . . . will only hit the targets that you program in. . . . The human is still launching the weapon and specifying the type of targets to be engaged, even if the weapon is choosing the specific targets to attack within that wide area. There’s always going to be a man or woman in the loop who’s going to make the targeting decision,” he said, even if that targeting decision was now at a higher level.

Work contrasted these narrow AI systems with artificial general intelligence (AGI), “where the AI is actually making these decisions on its own.” This is where Work would draw the line. “The danger is if you get a general AI system and it can rewrite its own code. That’s the danger. We don’t see ever putting that much AI power into any given weapon. But that would be the danger I think that people are worried about. What happens if Skynet rewrites its own code and says, ‘humans are the enemy now’? But that I think is very, very, very far in the future because general AI hasn’t advanced to that.” Even if technology did get there, Work was not so keen on using it. “We will be extremely careful in trying to put general AI into an autonomous weapon,” he said. “As of this point I can’t get to a place where we would ever launch a general AI weapon . . . [that] makes all the decisions on its own. That’s just not the way that I would ever foresee the United States pursuing this technology. [Our approach] is all about empowering the human and making sure that the humans inside the battle network have tactical and operational overmatch against their enemies.”

Work recognized that other countries may use AI technology differently. “People are going to use AI and autonomy in ways that surprise us,” he said. Other countries might deploy weapons that “decide who to attack, when to attack, how to attack” all on their own. If they did, then that could change the U.S. calculus. “The only way that we would go down that path, I think, is if it turns out our adversaries do and it turns out that we are at an operational disadvantage because they’re operating at machine speed and we’re operating at human speeds. And then we might have to rethink our theory of the case.” Work said that challenge is something he worries about: “The nature of the competition about how people use AI and autonomy is really going to be something that we cannot control and we cannot totally foresee at this point.”