AI: To Kill or Not to Kill?

Anduril is a company focused on building technologies for the future of defense — and war. Tech like covert drones that could bring medical supplies to a battlefield, or a virtual wall that could be used to monitor the US’s southern border.

But adding AI to the equation creates increasingly complicated ethical dilemmas. You can start to imagine scenarios where computers are making decisions more quickly than humans ever could.

Laurie Segall asked Anduril co-founder Trae Stephens about the future of defense tech…

Read an edited transcript below, or listen to the full interview on the First Contact podcast.

If you have hypersonic missiles flying at you, you have like a split second to make a decision…

Laurie Segall: One of the most interesting questions when it comes to the future of tech and war, maybe it’s a little dramatic, but it’s — AI: To kill or not to kill, right? This idea that you guys could be building out autonomous systems that can make the decision to kill… will you do that?

Trae Stephens: I mean, our policy internally has always been that in almost every case a human in the loop makes a ton of sense. There are certainly cases, ones that might not even involve human casualties, where you really can’t have a human in the loop. For example, if you have hypersonic missiles flying at you, you have like a split second to make a decision about whether or not you’re going to shoot it down. And these are the types of things, again, like Iron Dome, that have been driven by computers. And so, there’s this constant conversation that seems to be happening about, “well, in what future world will we have to make these decisions?” Actually, we’ve been doing this for over a decade. There are computers that are making kinetic decisions on a regular basis today. When it deals with human life, I think it raises the stakes quite a bit… which does feel really important.

Laurie Segall: I remember the last time I interviewed Palmer, asking him that same question about like, “Will you deploy technology that can make the decision to kill?” And I think I remember him saying, “Right now, no, but that doesn’t mean in the future we won’t.” And I thought that was really interesting… So have those conversations moved forward with you guys? I mean, when’s the last time you spoke about it, or what was the nature of it?

Trae Stephens: I can’t think of any specific examples of tech that we’re building right now where that has been an issue. But I think Palmer’s answer is correct. I mean, there are a lot of applications of Just War theory that do involve lethality. It’s very, very hard to predict the future, to say what the conflicts of tomorrow will be, and, you know, the types of decisions technologists will have to make in order to sustain an advantage in those conflicts. But to the extent that the tech is deployed as a last resort, to the extent that it is more discriminant, to the extent that it is more proportional, to the extent that it ensures right intent and just authority, and it ensures that human flourishing can continue in a more abundant way, then absolutely. I’m sure there are applications of technology that will have lethal intent and still check all of those boxes. That said, I’m sure there are also technologies that will not, and those are the technologies that I not only would not build, but also would not invest in.

…I think one of the conversations that doesn’t seem to get enough air time is the idea that you can’t just wait for all of the theory around the ethics to be worked out before you build something, because our adversaries will build it. And if we look back at history, you can see that the wielder of the technology, the person who builds and owns it, is really in control of the standards and norms that are used to deploy it.

…Let’s imagine that North Korea either has like humans, or robots, or humans in robots, MechWarrior style, and they just flood into the demilitarized zone, just thousands and thousands of objects pushing forward. You have the option of taking a serious kinetic, one-to-many action, you know, firing very large bombs, missiles, nukes, whatever, to eliminate them. Not knowing what’s good, what’s bad, or otherwise. Not knowing if there’s like a zombie plague that’s forcing everyone to flee the country. Or you can do some sort of AI-assisted auto-turret. So there are guns on a swivel that you can kind of control, and they automatically lock on target. Or if there was an AI that said, “Differentiate between robots, people, and people that have weapons, and only shoot people with weapons and robots; don’t shoot any people that are running towards the border without weapons.” That’s an AI-driven technology, and there is a lethal kill decision involved. But you could save thousands and thousands and thousands of lives by executing that strategy instead. A human could never make decisions that rapidly, with that much data flooding into the system, while de-conflicting across all of those targets at the same time. There’s just no way they could do that.

Laurie Segall: The idea is that even when humans do make these decisions, oftentimes they’re tired, fatigued, stressed, in these different situations where they have to decide whether to kill or not to kill.

Trae Stephens: Yeah. I think if you go and you talk to the soldiers that served in the last few international conflicts, the decisions that torment them, that keep them awake at night, are decisions that they had to make in the blink of an eye. You know, a vehicle driving at high speed towards one of their bases. You don’t know if that’s a sick child and the father is just trying to get them to medical care as quickly as possible, or a car full of explosive material that’s gonna run into the base and kill service members. And they have to make these split-second decisions about what to do. They want more information. They want to be able to make better triage decisions. And by withholding that technology from them, we’re putting people’s lives at risk, both service members and civilians.

Laurie Segall: I only play the devil’s advocate on AI because then I think about how flawed AI can be, how biased it can be, and how sometimes the algorithm makes these decisions and someone’s on the other side of it and you’re like, “Wait a second, how did that happen?”…

Trae Stephens: Yeah, there’s bias in both directions for sure. Human beings have biases that they don’t even realize they have, that cause them to make decisions. Computers have different sets of biases, but to the extent that we can understand the way these models are working, we can correct a lot of those over time. I don’t think there has ever, in the history of technology, been something that was perfect at the outset. There’s always room to improve. There are things that we can do to make the models more accurate and reduce the bias that’s implicit in them. And I think that is important work.