Episode 14: AI: To Kill or Not to Kill?

First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.

Laurie Segall: I guess, I come back to this line, maybe it’s a little dramatic, but it’s like AI to kill or not to kill, right? Like this idea that you guys could be building out autonomous systems that can make the decision to, to kill. Um, will you do that?

Trae Stephens: It’s very, very hard to predict the future, but to the extent that the tech is deployed as the last resort, and it ensures that human flourishing can continue in a more abundant way. Absolutely.

To kill or not to kill? Should Artificial Intelligence have the power to make that decision?

And if it does, who will be liable if – or when – it all goes wrong? 

Tech is creating a new arms race. Will the US be able to keep up with the likes of Russia and China?

And what ethical lines will we draw, or cross, to maintain our national defenses?

When it comes to our future, the ethics of war and technology are murky.

Now with that in mind, I want to take you to Orange County. Picture a handful of entrepreneurs – some of them controversial – sitting around a table. They’ve created a PowerPoint outlining some radical ideas for the future of defense technology.

They’re eating Chick-fil-A and Taco Bell, and exploring the idea that what the United States really needs is a real-life version of Stark Industries from Iron Man to build defense technology that would protect the United States from future threats.

Fast forward – That brainstorming session led to Anduril, a defense technology company that launched in 2017. Now it’s a billion dollar company and at the center of the debate when it comes to the future of war. 

One of its founders, Trae Stephens, spends a lot of time thinking about the philosophy of war. How technology is transforming it. And how we’re gonna protect ourselves as a nation. 

Expect rigorous debate. Unpopular conversations. Uncomfortable scenarios. Some talk of superheroes and science fiction. And a framework to talk about war – where the front lines of futuristic battlefields are blurred and technology is leading the charge.

I’m Laurie Segall and this is First Contact.

Laurie Segall: Well. Generally I talk about my first contact with folks and I have no first contact with you. This is the first time Trae, that I’ve ever met you. Uh, so welcome to First Contact. 

Trae Stephens: Thank you. Thank you for having me.

Laurie Segall: But this isn’t my first contact with this idea of like war and ethics and defense technology because I know, um, your co founder Palmer Luckey. I’ve interviewed him many times.

Trae Stephens: Mm-hmm.

Laurie Segall: He’s an interesting dude.

Trae Stephens: He is, yeah.

Laurie Segall: Yeah.

Trae Stephens: Super, super interesting. Super unique guy.

Laurie Segall: How would you describe him?

Trae Stephens: I think he’s one of these people that’s thought deeply about most things.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, rather than having, you know, high conviction but shallow knowledge. He has high conviction and lots of knowledge.

Laurie Segall: Mm-hmm.

Trae Stephens: There seems to be this high correlation between people in the tech community that are kind of like, obsessively passionate about topics and slightly eccentric in personality that makes them a great fit for entrepreneurial endeavors. And Palmer is totally one, one of those people. 

Laurie Segall: He’s the founder of Oculus, we should mention for our listeners, which is a virtual reality company. Oculus sold to Facebook for $3 billion. And I remember, my first time meeting him was at Web Summit.

Trae Stephens: Mm-hmm.

Laurie Segall: It was years ago in Ireland, and he always wears Hawaiian shirts and he always says things that definitely get him into trouble. And he ended up leaving Facebook because of some controversy and starting this company that you started with him, that we’re gonna get into, all about defense technology, which is also interesting and controversial in its own right. And it’s called Anduril. And I remember the last time I was at y’all’s headquarters, it was probably a year and a half ago? He was like walking around barefoot. I just wanna like paint the picture, because it is quite fascinating, of all sorts of like really fascinating technology that’s like the future of warfare. And it’s in LA, it’s not in Silicon Valley. And you have a founder with a Hawaiian shirt who’s like, kind of like a billionaire. I think he’s a billionaire. Right?

Trae Stephens: It depends on who you ask, I guess.

Laurie Segall: Okay, unconfirmed, but multi-multimillionaire. Um, and you guys are just dealing with these fascinating, fascinating issues that a lot of people in Silicon Valley shy away from. So that’s how I’ll kind of set it all up. Is that a fair way to set it up?

Trae Stephens: Yeah, I think the, the only thing that Palmer, would be disappointed if I didn’t point out is that Palmer, did not leave Facebook. Palmer was fired from Facebook.

Laurie Segall: Mm-hmm. Okay.

Trae Stephens: That is an important distinction.

Laurie Segall: Mm-hmm .

Trae Stephens: But yeah, I mean, y- you know, he is, as I said before, eccentric. He very rarely wears close-toed shoes. I’ve seen him get in trouble with this before. Uh, we were on a, uh, an offsite together and he was playing laser tag on a mountain bike, which is something that I guess you do, and going downhill very rapidly, and went over the handlebars with his Hawaiian shirt open and barefoot of course, and ended up like pretty, pretty badly scraping up his chest and cutting his, his toe on, I’m, I’m assuming, the pedal. But the kind of cool thing about him is that he has that playful, eccentric side, but he’s also incredibly serious. And so there are very clear reasons why he was on the mountain bike, why he was charging downhill, what his strategy was for doing that. So, you know, this is like the kind of the combination that I think makes him so unique: it’s deep seriousness, deep strategy, but also eccentric and playful.

Laurie Segall: When we talk about Palmer’s background, your background is like completely different in a fascinating way. And so, as co-founders, it’s, it’s super interesting because your background is in intelligence. Right?

Trae Stephens: That’s right.

Laurie Segall: And I saw that you were a student during 9/11.

Trae Stephens: Mm-hmm.

Laurie Segall: And I read that that deeply impacted you and, and you ended up working for government. So you didn’t quite have the Silicon Valley background, that a lot of the people you’re working with had, right?

Trae Stephens: Mm-hmm. Yeah. I was a senior in high school when 9/11 happened. Prior to that day, I thought I was going to go into journalism actually.

Laurie Segall: Mm-hmm.

Trae Stephens: And I remember sitting in my principal’s office in, in high school, watching the TV and talking to him about what it meant for the world and really decided on that day that I wanted to go into a career in service to the country. Had kind of toyed with whether or not that meant going to the Air Force Academy and trying to be a fighter pilot or going into, the intelligence route probably via Georgetown, which is where I ended up going. Ended up doing the intelligence thing mostly from a probability perspective, just realized that I might not end up getting that, uh, fighter pilot billing and could have ended up doing something far less interesting in the Air Force. But yeah, it, it all kind of goes back to that day.

Laurie Segall: And so talk to me about the, the government days. What exactly were you doing?

Trae Stephens: Well, obviously I can’t say exactly what I was doing. Nice try.

Laurie Segall: But if we were gonna, if we were going to talk around it in code-

Trae Stephens: Yes.

Laurie Segall: … could you, um.

Trae Stephens: Yeah, if we were gonna talk around it, I was working in the counter terrorism mission.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, specifically focused on computational linguistics for Arabic script based languages.

Laurie Segall: Okay.

Trae Stephens: So, I’ll explain all of that.

Laurie Segall: Okay.

Trae Stephens: Uh, so basically, I studied Arabic in college, and one of the things that you find very quickly is that the language works very differently from the Romance languages. And the naming conventions are way harder to figure out, and they’re less unique than English names. So the most common name in the English language, I think, is John; it’s like 3% of all English-speaking men have the name John, either first, middle, or last. In the Arabic-speaking world, something like over 18% of Arabic-speaking men have the name Muhammad. So that makes name matching incredibly challenging.

Laurie Segall: Mm-hmm.

Trae Stephens: So, if you have a name like Mahmoud Abbas, there could be literally like hundreds of thousands or millions of people with that exact name, Mahmoud Abbas. And so the job for an intelligence officer is significantly more challenging when dealing with these foreign languages, because you have to figure out, “Is this person actually the person that I’m trying to get reporting on, or is it just another person that has the same name?” And so I was working on trying to kind of resolve those entities, um-

Laurie Segall: Mm-hmm.

Trae Stephens: … with one another across a vast amount of databases, which is a pretty tough problem.

Laurie Segall: So, and was there anything that was like, “Okay, I’m leaving the government, I’m going to Silicon Valley and working at Palantir.”

Trae Stephens: Yeah.

Laurie Segall: Was there any, anything that happened that was kind of, the, the catalyst?

Trae Stephens: I think, it’s just like the grinding down of bureaucracy. I certainly never thought that I was going to work for a Silicon Valley venture backed tech company.

Laurie Segall: Right.

Trae Stephens: Um, when I graduated from college, that was never part of the plan. In fact, I remember joining Palantir and asking really crazy questions like, “What is a stock option? What is equity? Why does that matter to my comp? What is a vesting schedule?” I, I knew nothing about how these things worked. And I, I think I basically came to the conclusion that I wanted to be at a place where I was surrounded by a bunch of people who were way smarter than I was, who were much better than me at the things that I was bad at, where I could add unique value, and that moved significantly faster than the bureaucracy of the government. All the while still being able to work on the mission that was really important to me.

Laurie Segall: Palantir for folks who don’t know, associated with Peter Thiel, like did you meet Peter or was it that, you got recruited? How did that work?

Trae Stephens: I did not meet Peter. I actually got into Palantir from, the guy that gave the demo, uh, to me when I was working in the intelligence community.

Laurie Segall: Oh, interesting.

Trae Stephens: It was a really small team at the time. I think there were probably like less than 20 people at the company. And I saw a demo and got really excited about it, kind of pushed internally to be able to use it, was told no and then ended up, you know, jumping ship and joining the company really early on.

Laurie Segall: And how did you end up meeting Palmer? 

Trae Stephens: Yeah, so, uh, I got to know Peter really well over my time at Palantir.

Laurie Segall: Mm-hmm.

Trae Stephens: And in 2013, he asked if I would be interested in coming to join Founders Fund, which is a Silicon Valley venture capital firm, that Peter and some of the other PayPal, founders started together.

Laurie Segall: We should also say like Founders Fund for folks who don’t know is, is an interesting venture capital fund in Silicon Valley because they say unpopular things sometimes or invest in things that are not just standard. Right? Is that a fair thing to say?

Trae Stephens: Yeah, that’s totally fair. We have a manifesto on our website that I think, when it first came out, was pretty unpopular. Um, people thought we were a little crazy. The cool thing about it has been, over the last really 14, 15 years that the fund has been around, these ideas have become actually kind of weirdly popularized. Like-

Laurie Segall: What was the manifesto?

Trae Stephens: It’s a kind of long form essay on what-

Laurie Segall: Mm-hmm.

Trae Stephens: … how we’re thinking about the future and, uh, how tech has stagnated and how venture capital as an industry hasn’t served entrepreneurs well and kind of diving into all of those reasons. But you see some of these things cropping out of it, like the founder-friendly ethos of a venture fund. This idea that actually the founders are the most important single unit, elemental unit of a successful venture. And you should as a fund do the things that are optimizing for helping those people, that has become super common. Like, everyone talks about being founder-friendly as a VC. But you still see really great funds, that are firing CEOs, that are taking aggressive governance stances against them. And this is something that we foundationally will not do. We will not vote against founders. We will not fire founders. We believe that we should be optimizing for upside and not trying to mitigate mediocre outcomes. You know, there’s a bunch of other like more specific, ideological things-

Laurie Segall: Hmm.

Trae Stephens: … that you’ve probably seen on Twitter or, heard in the media about the positions that we take. But yeah, we are, we’re pretty different I would say.

Laurie Segall: Right. And, and so you, um, you met Palmer, and you guys started talking about war and defense?

Trae Stephens: Not, not initially. Uh, so-

Laurie Segall: What were those initial conversations like?

Trae Stephens: Yeah, Founders Fund was the first institutional investor in Oculus.

Laurie Segall: Uh-huh.

Trae Stephens: So, we’ve known Palmer since I think, you know, he was like a teenager at that time.

Laurie Segall: I mean, and by the way, let’s be honest, he’s not that much older now. He’s like 26 now, so, or 27 now?

Trae Stephens: 27.

Laurie Segall: Okay.

Trae Stephens: Yeah, he’s 27. But yeah, he’s not that much older now. That’s true. Yeah, I met him, was totally blown away by Oculus, bought one of the original dev kits, have kind of enjoyed interacting with him, just because of his passion and creativity. And had a couple of conversations with him about this search that I was doing at Founders Fund. So when I first joined the fund, not knowing anything about venture capital, not knowing anything about how the tech community works, aside from my one experience at Palantir, I decided that I wanted to go and find the next Palantir or SpaceX. Founders Fund is a large investor in both of those companies, and the theory was, those success stories should breed other success stories. And so I went out and looked at every company I could find that was doing business or interested in doing business with the government, and ended up not really finding anything. And I was talking to Palmer about this and saying like, “You know, it’s pretty crazy, given that our adversaries, Russia, China, and to a lesser extent but still present, Iran and North Korea, were doing things that were really challenging the strategic positioning of the United States and our allies globally.” And Palmer agreed, and had a ton of thoughts, as Palmer, uh, does about many things. And I basically had this conversation with him where I said, “You know, what the United States really needs is Stark Industries from the Iron Man movies. We need a company that is going to be well-capitalized, that will build products for the defense community, not requirements programs. We’re not doing a services contract on a cost-plus basis, but building products vertically, and selling them directly to the defense community, to advance our ability to defend and deter.” He got really excited about that idea, and that was kind of the early phase of him departing, as we said, from Facebook-

Laurie Segall: Hmm.

Trae Stephens: … and starting this company in 2017.

Laurie Segall: And I, I read that there was like, uh, a brainstorming session that happened. I think there was Chick-fil-A involved, some crazy PowerPoint where you guys are putting the future of warfare on there, and there’s all sorts of different future-of-war, like, slides. Can you just like take us to that, what that looked like? Take us to that brainstorm.

Trae Stephens: Yeah.

Laurie Segall: I mean, because you’re speaking in like very above the whatever, like Silicon Valley terms. Like just take us to what that brainstorming session looked like. We’re in Orange County because that’s where Palmer lives. 

Trae Stephens: Yup.

Laurie Segall: You guys are all sitting around eating Chick-fil-A. I’m from the South, so I can get on board with that. You know, explain what that jam session looked like, what this PowerPoint of the future, which could be borderline dystopian if we’re not careful (we’ll get into that), looks like, and how this whole thing happened. Just, just go there.

Trae Stephens: Sure.

Laurie Segall: And don’t speak in Silicon Valley terms, tell us what it actually looked like.

Trae Stephens: Yeah. So, it was at Palmer’s house. This would have been, I think, in April or May of 2017. We had invited a bunch of our smartest friends that, that we could find, that could be potentially interested in leaving their current jobs to come do this with us. And-

Laurie Segall: Anyone interesting that we’d know?

Trae Stephens: … uh, a lot of the early employees of Anduril.

Laurie Segall: Uh-huh.

Trae Stephens: So, we had Matt Grimm, who is the co-founder and COO, Joe Chen, who is, I think, one of the first employees of Oculus with Palmer. And Palmer and I and Matt Grimm had put together this deck that you’re referencing, that was basically like, what would a stable world in the future look like if we were actually able to deter violence? Like, how do we stop bad things from happening, and do that in a way that preserves our leadership over time? There was Chick-fil-A. I will say that I’m allergic to chicken. Palmer knew that I was allergic to chicken, and uh, Nicole, Palmer’s now-wife, then girlfriend, went to Taco Bell and bought an entire bag full of bean burritos. And so everyone’s eating chicken and I’m sitting there just like crushing-

Laurie Segall: Good to know you guys kept it classy.

Trae Stephens: … just crushing bean burritos.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, yeah. Palmer is, is a, an avid fast food aficionado. And so he knows all the, all the places to go and the things to do, but that, that was our dining of choice that day. And we kind of just went through this whole plan, like what would Stark Industries look like? 

Laurie Segall: I love that you keep going back to like Iron Man. I mean it’s like when we’re thinking about the future of defense technology, how we’re gonna protect ourselves from like Russia and China. Like it goes down to like a Chick-fil-A, bean burritos and Iron Man and like a guy who likes to be barefoot. I, and I don’t mean this in like a, uh, a snarky way.

Trae Stephens: Yeah.

Laurie Segall: Like this is like, this is like what we’re painting the picture of. It’s kind of interesting.

Trae Stephens: You know I thin- I think this is one of the things that most people don’t realize about the defense community over really like the last hundred years, is that these moments, these kind of crucial moments in our history, have been defined by like founders, by entrepreneurs. Maybe they worked for the government, maybe they didn’t, but you had Bennie Schriever with ICBMs, you had Kelly Johnson with the U-2, the SR-71, the F-104. You had Admiral Rickover with the nuclear navy. Um, you had Howard Hughes and all the things that he did, um, in the aviation space. And we’ve kind of gotten into this weird, like, quasi-communist state where like, no one’s actually responsible for anything. Like, there are no founders. It’s just bureaucracy that’s running all of these multi-hundred-billion-dollar defense programs. And so I think the usage of Stark Industries as the analogy is actually really powerful, because it says Tony Stark, the person, is actually the company. And you see this tension that happens over the course of all of the, the comic books and the movies, where he keeps trying to turn over responsibility for certain things, but actually it’s him that makes this stuff work. And I think Palmer has this powerful personality where he’s able to pull together really incredible people, really incredible engineers, work through a vision, and actually deliver the products that we say we’re going to deliver.

Laurie Segall: Right. Okay. So go back to that day. So, and so you guys had put together this PowerPoint and, and what did it have on it? It had all these different, what the future of warfare looks like. What does the future of warfare look like?

Trae Stephens: I, I think future of warfare is probably the wrong term because-

Laurie Segall: What’s the right term?

Trae Stephens: … the idea is actually to prevent warfare, right?

Laurie Segall: Mm-hmm.

Trae Stephens: Like you want to deter violence, you don’t want to have more violence.

Laurie Segall: Yeah.

Trae Stephens: Um, and so the, the PowerPoint was kind of talking about what are the near term things that we can do using existing technology? What are the medium term things that we can work on that are probably going to be possible in, you know, a five to 10 year time horizon and what are like the science fiction ideas that would completely change the game and force a paradigm shift in international affairs?

Trae Stephens: And we just had this kind of crazy brainstorming session where everyone was pitching in their ideas saying, “Actually, that’s not possible,” or, “Actually, that’s more possible than we think that it is.” And we were trying to come to a rough agreement on what the first couple of years of a company would look like if we ended up starting it. And uh, I think everyone left that day feeling not only really excited about the vision and commitments in hand for many of them, but also an idea of the types of things we were going to be able to build and a rough prioritization of what that would look like.

Laurie Segall: And, not to state the obvious though, when we’re talking about the superhero bit, but you, you are coming with a founder who was pushed out of Facebook for controversial comments he made and, like, alt-right forums. You have Palantir, which you are a part of, which has come under fire for surveillance issues and that kind of thing. And it’s associated with Peter Thiel, who is most certainly a controversial figure. Um, so are you guys the superheroes to do this, and how did you prepare to respond to people looking at this team and saying, “Are we gonna trust this team to build and defend our future?”

Trae Stephens: Yeah. Uh, I think everyone on the team has wildly diverse ideological, political beliefs. You know, Palmer’s political beliefs are not indicative of the rest of the founding team, nor of the company. You know, kind of the, the crazy thing about this whole conversation is that by U.S. standards, Palmer is pretty boring in his political beliefs. Like he’s a, he’s a libertarian, he believes in limiting government regulation. He believes in, you know, a free market economy. And his engagement in the political sphere is kind of limited to pursuing those things. Um, I think he’s gotten a lot of really unfair press treatment that does not resemble reality. And, you know, I, I think the company by and large is very aware of that; from the founding team to the employees, we’re, we’re all very aware of that. The important thing about the association with all of those kind of different aspects that you just mentioned is that we have open, harsh dialogue internally about everything: about the products that we work on, about the programs within the government that we work around, about the countries that we work with. And that would only be possible if the people involved, particularly in leadership, were open and receptive to that rigorous debate. And I think this is something that Palmer found at Facebook, is that that was not a culture that was open to rigorous debate. In fact, it was only open to a single ideological bent, that is militarized in a way that, you know, I think the American people should all be pretty concerned about.

Laurie Segall: I think it’s interesting, the last time I interviewed Palmer, um, I had just finished doing a series on conservatives undercover in Silicon Valley, and like literally interviewed conservatives who didn’t feel comfortable coming out and talking about being conservative in Silicon Valley. And we had to do it in shadow. And this is before, I mean, there’s certainly a culture war playing out, um, which is a whole separate conversation, but we’re seeing that on a grand scale now.

We’re gonna take a quick break to hear from our sponsors, but when we come back, what exactly is Anduril building? A look inside the technology. Also, if you like what you’re hearing, make sure you hit subscribe to First Contact in your podcast app so you don’t miss another episode.

Laurie Segall: I wanna talk about Anduril, just in, in general, like what you guys are doing. So Palmer’s like the product guy and he’s like building out crazy, interesting technology. Um, and you’re thinking, I think, a lot about these ethical issues and some of the more philosophical issues as we kind of head into this, this future. For folks who don’t know, Anduril, first of all, isn’t this a Lord of the Rings reference, if I, if I’m getting it correct?

Trae Stephens: It is. Yeah. It’s Elvish, in Lord of the Rings, for Flame of the West.

Laurie Segall: Okay.  Um, reasoning behind that?

Trae Stephens: It seemed really appropriate.

Laurie Segall: Okay.

Trae Stephens: Uh, still seems really appropriate. I mean, Anduril was the sword Narsil reforged. Narsil was the sword that cut the One Ring from the hand of Sauron in the early ages. And it was reforged as Anduril, uh-

Laurie Segall: Mm-hmm.

Trae Stephens: … to be welded by Aragorn, during the fellowship. So, it has kind of a storied history in, in the series we’re Tolkien fans, it seemed to make a lot of sense.

Laurie Segall: And so the point and so explain kind of the premise of what you guys are building?

Trae Stephens: Yeah. So, uh, again, our kind of view of the future is that if we’re not very intentional, in the West, with, uh, building the technology that’s required to protect and preserve our values and our way of life, these technologies are going to be built by our adversaries. And we shouldn’t trick ourselves into believing that our adversaries are in some way our moral equals. They’re not. And we want the best technology that is going to deter conflict to be controlled and operated by the people that have the value system that we share. And so, you know, anything that we can build that fulfills that mission, while being very cognizant of the ethical considerations, the very real, very meaningful ethical considerations that are required as those things are built, is something that we’re really interested in working on.

Laurie Segall: And so, I mean, in a most baseline thing, you guys are building out consumer tech products generally with artificial intelligence that you sell to, to our allies and the government.

Trae Stephens: Yeah. Not everything is, is consumer.

Laurie Segall: Mm-hmm.

Trae Stephens: In fact, a lot of the hardware that we integrated into our systems is not intended for consumers.

Laurie Segall: Mm-hmm.

Trae Stephens: It’s enterprise grade equipment. But yeah, basically the way that you can think about it is that we’re building both hardware and software, to achieve mission ends at the lowest cost possible for the taxpayer. And so rather than building a bunch of bespoke technology on, as I said before, these cost-plus contracts like the F-35, the Ford-class aircraft carrier, or even these bespoke software programs, um, like the Defense Travel Service-

Laurie Segall: Hmm.

Trae Stephens: … and all sorts of stuff like that, we’re trying to take the best of class that exists off the shelf,  integrate it with some things that we do have to build ourselves because it’s not available commercially, and then turn it into a product that works today – as well as we say that it works and turn that over to the government customers, to use for their mission.

Laurie Segall: Can you give us a sense of the products you’re building that the stuff that’s out to market now and who you’re working with?

Trae Stephens: Yeah. So our first product is what we call a Sentry Tower. It’s a 30-some-foot tower, uh, that has integrated sensors. So instead of a security officer sitting in a dark room with hundreds of little CCTV feeds, the tower is just telling the operator when something is happening that they need to look at. This is really critical for all sorts of critical infrastructure, whether it’s military bases, oil and gas facilities, um, national borders, whatever it might be. A second product that we built is basically a tower that flies; we call it Ghost. It’s a helicopter that has many of the same sensors. It’s fully autonomous, so it takes off, executes a mission, returns to base, and provides that same level of autonomous operation and computer vision.

Laurie Segall: And what is that meant to do and what, where do you expect to, to see that deployed?

Trae Stephens: … yeah, same thing as the tower, except it, it moves. and so a tower by its very nature has a range that it can operate in.

Laurie Segall: Mm-hmm.

Trae Stephens: And if something is moving outside of that range or you expect that things are moving at more velocity, you might want a helicopter to be able to track and pursue. There are a lot of remote operations like Special Forces Units in the field that might want forward notice of things that are-

Laurie Segall: Right.

Trae Stephens: … that they are going to be encountering on the road ahead of them. And so there are a lot of potential applications for-

Laurie Segall: Hmm.

Trae Stephens: … for Ghost. The third product, uh, we call, Anvil, which is a kinetic interceptor for unmanned aerial systems. So I’ll explain that. So, most people, I’m assuming, have probably heard about the threat of drones in airspace, particularly in military environments where, you know, ISIS, other adversaries are taking standard consumer drone technology like DGI  Maddix and stuff like that.

Laurie Segall: Mm-hmm.

Trae Stephens: Or just flying them for surveillance operations. Uh, and this is become a terrible risk to our service members, uh, abroad. Taking them down, initially is as easy as jamming their radio signal and forcing them to return to where they came from. Or denying GPS in some environments, things like that. But as the drones become more and more autonomous where they don’t really emit any signals that you can jam, it becomes harder and harder to remove them from your airspace. And so the kind of the crazy idea that, that Palmer and the rest of the team had was to, use another drone, our own drone, with a terminal guidance system on it to lock on to the adversarial drone and fly into it at high speed, kind of like a, a flying bowling ball that’s intended to knock them out of the sky.

Laurie Segall: Wow.

Trae Stephens: And so all of these things kind of paired together where you have static passive surveillance from the towers. You have a dynamic mobile surveillance capability from the helicopters. You have the interceptor that can take out aerial platforms that are approaching your facility or whatever it might be, are like layers in an overall platform that we call, Lattice. Which is the battlefield management command and control software that sits behind all of it.

Laurie Segall: And fast forward from this conversation you guys were having, you know, in, Orange County, and I’m talking about all this kind of stuff to now, like who are you doing business with, Department of Homeland Security, like who, you know, who are you guys selling your technology to?

Trae Stephens: Yeah, predictably our largest customer is the U.S. Department of Defense.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, numerous branches. So we’re working with Special Forces, we’re working with the Marine Corps, we’re working with the Air Force, the Navy. So that represents the, the majority of our business. We do also have some work with, uh, Federal Law Enforcement and the Department of Homeland Security.

Laurie Segall: Mm-hmm.

Trae Stephens: And then we’re working with some of our international partners, close allies like the United Kingdom –

Laurie Segall: Mm-hmm.

Trae Stephens: … and very similar defense mission sets.

Laurie Segall: The last time I interviewed Palmer, it must’ve been, I would say like a, over a year ago. Um, the headline grabbing thing I remember for you guys was, the first product you guys were building. It’s like the virtual border wall. And I think that’s the thing that, people looked at as, are you building the digital wall for Trump? Um, where do you guys stand on that? Because this was the first thing you guys put out publicly. A lot of people looked at Anduril, and looked at what you were doing, um, and asked you some of these questions about, “Is this ethical what you’re building?” And, “Is this the right thing?” So, I’d be curious to know where that stands now.

Trae Stephens: Yeah. The, the work that we’re doing with Customs and Border Protection-

Laurie Segall: Mm-hmm.

Trae Stephens: … within the Department of Homeland Security is not a policy initiative. We don’t have policy views on how border control should be managed. But we do believe that within the constructs of a democratic government, um, we should be supportive of the things that the government is doing, the mission that they’re executing. And I think in this case, Democrats, Republicans, everyone kind of agrees that you have to have some form of border security, just like literally every other country in the world. And that when you’re making decisions about what policies you’re going to accept, uh, with regards to that mission set, you need to have data. You need to know what’s actually happening. Um, you need to know what kind of people are coming across the border. Is there actually the flow of drugs and black market weapons that people think there is? Are there unmanned aerial systems that are posing a threat to our civilian populations? You know, there’s all these questions that you would wanna h- have answers to. And I think that kind of an indefensible position would be, actually, we don’t, we don’t need information. We don’t need any data. We don’t need to know what’s happening. We should just let everything happen that’s happening. And so I think this is kind of why we’ve gotten such bipartisan support for the technology, is that we’re not making a political assertion.

Laurie Segall: How do you think ethically about the data? So the data that you guys gather, I’m sure you’re gathering a lot of this data, right? From everything it’s picking up and so what’s the standard there?

Trae Stephens: Yeah. We do not own the data.

Laurie Segall: Right.

Trae Stephens: The data belongs to the customer and by the way that’s in every case.

Laurie Segall: Yeah.

Trae Stephens: Every use case, um, aside from our own technology that we’re using in test environments. So yeah, basically they get the feeds off of the towers, the helicopters or whatever else they’re using. Human beings, actual people who have a job, uh, and a mission in that job, are making decisions about how they respond to that data. Um, so if you see an unmanned aerial vehicle with drugs attached to it, which is something that happens periodically, they can decide, “Are we going to pursue that? Are we gonna try to shoot it down? Are we going to try to follow it until it runs out of battery? Are we going to let it go?” You know, these are decisions that human beings are making at the end of the day. So yes, absolutely there are ethical questions that are involved with each of the products that we build. Um, and there’s a process by which I think you go through thinking about how that mission is realized and who are the people that are involved in making that decision.

Laurie Segall: So can you explain, you talk a lot about this just war theory.

Trae Stephens: Yeah.

Laurie Segall: Can you explain to us, like what’s the just war theory?

Trae Stephens: Yeah, so just war theory. This is a many-centuries-old framework for evaluating how and when you go to war and how war is conducted. And, you know, the, the general concept is that there are all of these principles that we can apply to any technology. They apply the same to a knife, an arrow, a bayonet as they do to, um, you know, autonomous robots in battle. And so I had written an essay that it seems like you’ve probably read-

Laurie Segall: Mm-hmm.

Trae Stephens: … um, about defense ethics. So there are four specific principles that I talked about in the essay. The first is last resort, so the principle of war as a last resort. The second and third are the principles of discrimination and proportionality. And the last is right intent, or just authority.

Laurie Segall: So, so this is how you look at the ethical decisions that you guys are making and the products you’re building and what kind of framework you’re gonna think about when you’re, when you’re building things and deploying them.

Trae Stephens: That’s right. So, oftentimes, even some of your previous guests, say, uh, Professor Cerf-

Laurie Segall: Uh-huh.

Trae Stephens: … who you had on talk about, uh, kind of brain, uh, modifications and things like that.

Laurie Segall: Uh-huh.

Trae Stephens: He talked about some of the ethical and regulatory structures that should be put in place for this.

Laurie Segall: Yeah.

Trae Stephens: People often talk about how we need some sort of ethical framework for evaluating autonomous systems, artificial intelligence, quantum computing, neuro modifications, things like that.

Laurie Segall: Yeah.

Trae Stephens: And my argument, the core argument is that actually this frame, there are frameworks that exist today just war theory happens to be a really good one for applying these principles.

Laurie Segall: Okay.

Trae Stephens: And you just have to think critically about how those new technologies fit into each of those paradigms.

Laurie Segall: First of all, a little birdie told me that, uh, you do these dinners at your place.

Trae Stephens: Ha.

Laurie Segall: I have a feeling that this theory, you guys jam on this. Like maybe like these dinners are just some sort of … Like, can you give us like specifics around, the technology you’re building now? And if you don’t mind, just like be as honest as possible because I think that, you know, I understand that these are hot button issues. And that like, you know, that, that there can be headlines out of them and everything, but the stuff you’re dealing with is really fascinating. Right? And the issues that you’re talking about I like what you said about this idea of having rigorous debate and saying things we can’t say out loud and just kind of like yelling at each other about theories and this and that. So like, can you take us to this compound? It’s like two hours outside of LA or right, or where, where exactly is Anduril?

Trae Stephens: It’s, depending on traffic, 40 minutes.

Laurie Segall: Okay.

Trae Stephens: Five hours, I don’t know.

Laurie Segall: Oh my god! There’s so much traffic there I’m just gonna go take five hours. Um, but, but take me to like how you apply this theory into something really specific that you’ve worked on. Because I can imagine for you, it’s challenging and interesting and you really are kind of thinking ethically about building out this technology that, um, you guys are gonna be deploying and selling to governments and putting in the hands of other people and seeing how it will be used. So like w- how has this theory worked specifically? Because when you talk about my interview with Moran, like he talks about a lot of this stuff in theory-

Trae Stephens: Mm-hmm.

Laurie Segall: … you’re actually doing a lot of this stuff in reality.

Trae Stephens: Yeah. That’s a really good point. Um, this is something that has to be approached with a high degree of seriousness.

Laurie Segall: Mm-hmm.

Trae Stephens: Because it does have real world implications. This isn’t a theoretical conversation around, you know, some defense technology that might exist in 10 years.

Laurie Segall: Right.

Trae Stephens: It’s like these are the things that are being deployed today, not only by Anduril-

Laurie Segall: Yes.

Trae Stephens: … but by a lot of other companies as well. 

Laurie Segall: And so going to the technology that you guys are building specifically say like Lattice, right?

Trae Stephens: Mm-hmm.

Laurie Segall: This, this technology that’s already been deployed at the border, right? Take us through the ethical,  framework. What did you decide not to do, where did you guys decide to draw the line?

Trae Stephens: I would say that deciding where to draw the line implies that there was some sort of conversation about accelerating beyond what we were building as a product, which wouldn’t be an honest description of what happened. You know, I think you could extrapolate a million different things, ranging, from like, some sort of laser, in kind of a one to one way or, uh, you know, a nuclear weapon and everything in between-

Laurie Segall: Hmm.

Trae Stephens: … with literally any technology that you built. Anything that has a camera on it, you could say like, “That could be used to target for firing many nuclear warheads.” I dunno, it’s like-

Laurie Segall: Right.

Trae Stephens: … that, that was never part of the conversation. The entire concept behind the Sentry Tower was around data collection. That was it. From the very beginning it was never about anything more than data collection.

Laurie Segall: I guess I, I look at like maybe the inevitable, did you have conversations about, um, facial recognition or, “We wouldn’t go near that.” I mean, w- w- where, like, where do you draw the line ethically or even when we talk about Ghost, right, something else that you guys are deploying. Like what are the other ethical conversations you guys have about those specific technologies?

Trae Stephens: Yeah, no, that is a really good question and I think this is like one of these examples where the values that we hold, as, as a country, are wildly divergent from our adversaries. Like facial recognition. Really, really good point. Is it necessary to the mission to be able to, at long range, determine who people are algorithmically? No, not really. Like what our military wants to do, what our officers in the Department of Homeland Security want to do is they want to know when something is happening. They don’t need information about who everyone is as identified by some sort of, by some sort of algorithm. This is the thing that, you know, China is doing and this is the ideology and technology that they are exporting to dozens and dozens of, of countries in partnership with companies like Hikvision and Dahua and SenseTime and Megvii, Huawei. I mean this is, that is the Chinese ideology that, uh, that they’re exporting. That’s not something that we’re interested in doing.

Laurie Segall: Does it worry you that you build out this technology? I’m sure you get this question a lot. You know, you develop this technology that you believe you’ve built in an ethical way, but it could be used in a way that is not ethical, that someone could take what you’re doing and then add facial recognition to the technology that you’ve deployed?

Trae Stephens: Yeah. I think this is why it’s really important when you’re making sales of technology in the defense space to consider the end user of that technology and what processes exist to control that. One of the really important things about this ethical conversation to us at Anduril in the context of the United States is that, you know, we shouldn’t take the democratic process for granted. Um, we are blessed to be in a place that has a system of checks and balances. A very large nonpolitical civil servant bureaucracy that exists, that makes these decisions. And we have an ability to change policy, at the ballot on a, every two year basis, basically at a federal level. And so, we don’t believe that the, quote “enlightened,”  technocrats in Silicon Valley, um, should have the ability to decide for the American people what technology our government should be using and what technology our government shouldn’t be using. You know, we believe people are really smart, and that the government has the ability to make changes as needed, as this tech is deployed.

Laurie Segall: One of the most interesting questions I think when it comes to the future of tech and war, um, is this, I guess, I come back to this line, maybe it’s a little dramatic, but it’s like AI to kill or not to kill, right? Like this idea that you guys could be building out autonomous systems that can make the decision to, to kill. Um, will you do that?

Trae Stephens: Uh, I mean, our policy internally has always been that in almost every case a human in the loop makes a ton of sense. There are certainly cases, which might not even involve human casualties, where you, you really can’t have a human in the loop. For example, if you have hypersonic missiles flying at you, you have like a split second to make a decision about whether or not you’re going to shoot it down. And these are the types of things, again, like Iron Dome, that have been driven by computers. And so, there’s this constant conversation that seems to be happening about, well, in what future world will we just have to make these decisions. Actually, like, we’ve been doing this for over a decade. There are computers that are making kinetic decisions on a, on a regular basis today. When it deals with human life, I think it raises the stakes quite a bit, in the engagement of humans in the loop, in that decision making. Uh, which does feel really important. I think one of the conversations that doesn’t seem to get enough air time is the idea that you, you can’t just wait for all of the theory around the ethics to be worked out before you build something, because our adversaries will build it. And if we look back at history, uh, you can see that the wielder of the technology, the person that builds the technology and owns the technology, is really in control of the standards and norms that are used to deploy that technology. You can see this with the way that China is approaching regulation around 5G with the International Telecommunication Union. They have a seat at the table. We do not have a seat at the table. And if you go into these conversations assuming that, that we’re going to somehow be able to, you know, push our agenda, I think you will find in history that has been the wrong assumption.
Another example of this is the Intermediate Range Nuclear Forces Treaty, INF, that we had with the Soviet Union, where we agreed with the Soviet Union that we wouldn’t build intermediate range ballistic missiles. China was never a party to this treaty. They moved forward with building intermediate range ballistic missiles, and then Russia, when they realized that that was happening, they also began building intermediate range ballistic missiles. And so we put not only ourselves at a disadvantage, but we put our service members in the Pacific at risk for their lives because we were beholden to a treaty that was not being followed, that was not being taken seriously. And so whether or not we build AI for defense, whether or not we build autonomous systems for defense, whether or not we build better precision fires for defense, whether or not we build quantum computers for defense, other people are going to build these things. And we want to be in a position where we have a seat at the table talking about how those technologies are being used.

Laurie Segall: So, take me to that seat at the table ‘cause you have a seat at the table. You guys are sitting there talking about these things. I remember the last time I interviewed Palmer, like asking him that same question about like, “Will you deploy technology that can make the decision to kill?” And, and I remember, I think I remember him, you know, saying, “Right now, no, but that doesn’t mean in the future we won’t.” You know, and I thought that was really interesting. Because I do think it comes with all these really interesting, ethical issues of at what point and who’s coding the decisions it’s making and AI is so flawed in general. But, um, so have those conversations moved forward with you guys? I mean, what, when’s the last time you spoke about it or what was the nature of it?

Trae Stephens: Uh, I can’t think of any specific examples of tech that we’re building right now-

Laurie Segall: Mm-hmm.

Trae Stephens: … where that has been an issue. Um, but I think Palmer’s answer is correct. I mean, there are a lot of versions of just war, uh, application that do involve lethality. Um, i- it’s very, very hard to predict the future, to say like what the conflicts of tomorrow will be. And you know, the types of decisions technologists will have to make in order to sustain an advantage in those conflicts. But to the extent that, uh, tech, the tech is deployed as the last resort, to the extent that it is more discriminant, to the extent that it is, uh, more proportional, to the extent that it ensures right intent and just authority, um, and it ensures that human flourishing can continue in a more abundant way. Absolutely. There are, I’m sure there are applications of technology that will have lethal intent, that fulfill and check all of those boxes. That said, I’m sure there are also technologies that will not, and those are the technologies that not only would I not build, but I also would not invest in.

Laurie Segall: When we’re talking about the future of autonomous weapons and making this … AI, making the decision to kill, not to kill. Are you worried that, if it makes a decision in a split second, like back in, you know, traditionally it’s been a human making that decision and we can put that on somebody that in the future that this, uh, if the AI makes the wrong decision, that could, the liability could fall on you guys?

Trae Stephens: I think liability is a complex issue with all technology, whether it’s, you know, self-driving cars in the consumer world or you know, military technology in the defense world. Of course like the liability needs to be worked out at some point. Whether it’s through regulation, whether it’s through some sort of legislative action, I don’t view that as something that would deter me from wanting to work in the space. Um, because again, I believe in the democratic process and I believe that there will be some sort of fair reckoning, um, for these things. I think one of the things that has kind of always been inspiring to me in this is that, Science Fiction has kind of thought through a lot of these ethical challenges well ahead of its time. Like, well, well, well ahead of its time.

Laurie Segall: Hmm.

Trae Stephens: And so, Science Fiction is an awesome place to go to start talking through some of these really complex challenges. Like, for example, the Prime Directive in Star Trek. Like, you know, Elon’s over here talking about interstellar travel and things like that. It’s like Star Trek has worked through more academic research on the impact of visiting neighboring civilizations than any like, university has. And I think these are the types of things that we can and should be looking to fiction to partially inform, so that we can be more prepared, you know, at the eventual moment that those technologies come to fruition, if they come to fruition at all. But I think Star Trek is a great example of that.

We’re gonna take another quick break to hear from our sponsors but when we come back, a mishap with light sabers. Yep. You heard me correctly, light sabers. And if you have questions about the show, comments, really anything, you can text me on my new Community number: (917) 540-3410.

Laurie Segall: So what, what would you say like as an ethicist and someone who’s kind of thinking about these things and you say like, let’s go specifically to Anduril, like if this is the kind of thing that could come down the pipeline, like what would you want people to keep in mind when thinking about deploying these systems with AI that have the ability to make these decisions? Like what is the conversation we should be having nationally? Like, because you talk about the government, you know, needing to regulate this, but oftentimes the government is, you know, this probably better than anyone, light years behind the technology. So what is the conversation that we should be having about this kind of thing?

Trae Stephens: Well, certainly within the government, the DOD, the Department of Defense, has very detailed rules of engagement that they’ve followed for a very, very long time. And this goes back to, you know, the Geneva Conventions and United Nations agreements about use of force. And so I think these types of conversations come naturally to the defense community. They know how to think about it. In the tech community, I think it comes way less naturally, and that’s where I have to engage more with people on it. Because, you know, these are generally very strongly held beliefs with very little data to back them up. People that say like, “In theory, I would, you know, I would never work on these things, but I also haven’t considered any of the implications that lead to those decisions.”

Laurie Segall: Hmm.

Trae Stephens: Um, and so it becomes more of a conversation about presenting scenarios and saying, “What is the most ethical way to move forward with this?” This is what the dinners at my house are about. These are not, I’m not hosting Anduril employees at my house. That would be kind of a waste.

Laurie Segall: Yeah.

Trae Stephens: Um, it’s like engaging this … Yeah.

Laurie Segall: Can you give me an example, I think, I guess that’s what I’m interested in.

Trae Stephens: Sure.

Laurie Segall: Like give me an example of one of those, like those scenarios that you guys talk about.

Trae Stephens: Sure. So let’s imagine that, North Korea, either has like humans or robots or humans in robots like MechWarrior style, um, and they just like flood into the demilitarized zone, just like thousands and thousands of objects that are pushing forward. You have the, the option of, A) taking a like serious kinetic, one-to-many action and, you know, firing very large bombs, uh, missiles, nukes, whatever, to eliminate them, not knowing what’s good, what’s bad or otherwise, not knowing if there’s like a zombie plague that’s like forcing everyone to flee the country.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, or you can do some sort of AI-assisted, like, auto turret. So there’s like, you know, I don’t know, guns on a swivel that you can kind of control and they automatically lock on target. Or if there was an AI that said, “Differentiate between robots, people, and people that have weapons; only shoot the robots and the people with weapons; don’t shoot any people that are running towards the border without weapons.” That’s an AI driven technology and there is a lethal kill decision involved. But you could save thousands and thousands and thousands of lives by executing that strategy instead. A human could never make decisions that rapidly with that much data flooding into the system, and deconflict across all those targets at the same time. There’s just no way they could do that.

Laurie Segall: Hmm. I mean, and the idea is that even when, when humans do make these decisions, oftentimes they are tired, fatigued, stressed, and under all of these, and in these different situations-

Trae Stephens: Yeah.

Laurie Segall: … of when to decide to make the decision to kill or not to kill.

Trae Stephens: Yeah, I think if you go and you talk to the soldiers that served in, in the last few international conflicts, the decisions that torment them, that keep them awake at night are decisions that they had to make in the blink of an eye. You know, a vehicle driving at high speed towards one of their bases. You don’t know if that’s a sick child and the father is just trying to like get them to medical care as quickly as possible or, you know, a car full of, you know, explosive material that’s gonna run into your base and kill service members. And they, they have to make these split second decisions about what to do. They want more information. They want to be able to make better triage decisions. And by withholding that technology from them, we’re putting people’s lives at risk, both service members as well as civilians.

Laurie Segall: I only play the devil’s advocate on AI because then I think about how flawed AI can be, how biased it can be, and how sometimes the algorithm makes these decisions and someone’s on the other side of it and you’re like, “Wait a second, how did that happen?” And you, and you question that as well. So I think they’re, they’re equally as very different issues, but issues when it comes to AI making these decisions. And I think, no doubt the future will be, uh, autonomous systems that are AI driven when it comes to the future of war. So like that seems complicated to me as well since AI is biased…

Trae Stephens: Yeah, there’s bias in both directions for sure. Human beings have biases that they don’t even realize they have that cause them to make decisions. Computers have different sets of biases, and to the extent that we can understand the, the way that these models are working, we can correct a lot of those over time. I don’t think there has ever in the history of technology been a, something that was perfect at the outset. Like there’s always room to improve. There are things that we can do to make the models more accurate, reduce the bias that’s implicit in them. And I think that is important work.

Laurie Segall: But like, so draw a line for me. I mean just draw, I’m just asking for one line, you know? Like, just give me something, a certain type of defense technology that you will not build. Like just like, what is your no zone? Like, what is your like someone says something and you’re like, “Huh, not that,”?

Trae Stephens: I actually think these are really easy. Uh, and there’s a bunch of them. Like one example would be, um, some sort of like bio plague that you can release into an enemy territory.

Laurie Segall: Okay.

Trae Stephens: Like coronavirus, like pretty sure, that’s ethically bad.

Laurie Segall: Hmm.

Trae Stephens: If that was an adversary launching that attack. Anything that is disproportionately affecting non-combatants is to be avoided in my mind. So, so I think that’s one. Another is technology that conceals information rather than making it more accessible to decision makers. You can look at things like the Great Firewall in China, or el picante in Cuba. Blo- blocking out information, or Russia’s denial of its incursions into Ukraine and Georgia. I think these are all like examples of places where the government has made a concerted effort to hide information from their population to defend potentially a crazy decision that they might make in the future that’s super bad and not something that I would wanna be involved in ethically.

Laurie Segall: Has there been something at Anduril that kinda got left on the cutting room table, like you just decided, “We’re not, we’re not building this,”?

Trae Stephens: Huh, none for ethical reasons, because we haven’t really like edged into any of these crazy territories yet. There are some that have, you know, been left on the cutting table because they didn’t work like we thought they should and weren’t worth pursuing. We’ve had some, some crazy ideas around like real-life light sabers and stuff like that. And it turns out that with some of these things, the science just isn’t ready yet. By the way, the same can be said for augmented reality. For decades, soldiers have been really interested in a heads-up display that they can wear in combat.

Laurie Segall: Mm-hmm.

Trae Stephens: That gives them information on mission objectives, on where friendly actors are to prevent them from getting involved in friendly fire accidents. Um, so on and so forth. Uh, map overlays.

Laurie Segall: I remember Palmer saying that to me and I thought that was really interesting the last time we did an interview, he was saying the future almost feels like a bit of a video game, right?

Trae Stephens: Totally.

Laurie Segall: Like, and he, and he has the video gaming background from Oculus, but like, you know, you have this almost virtual or augmented reality layer that shows you everything around you.

Trae Stephens: That’s right, yeah.

Laurie Segall: Now, you’ve just added like the idea of a light saber to, so my mind is blown a little bit.

Trae Stephens: Well, I think we can probably press pause on the light saber. I don’t, I don’t think that’s gonna happen any time soon.

Laurie Segall: How far, how far did you get?

Trae Stephens: Uh, it got far enough that Palmer accidentally cut himself and had to get medical care. Uh, so that is-

Laurie Segall: What do you guys build?

Trae Stephens: That’s a question for Palmer. I don’t, I don’t understand the exact physics of what happened, but, um, but needless to say, we are no longer building it. By the way, that would have been for use in breaching. So, instead of putting C4 on doors, using a plasma cutter, essentially, to get into denied environments.

Laurie Segall: Oh, man! So not like actually like, hand to hand combat?

Trae Stephens: No, no, no, no, not hand to hand combat. Um, so-

Laurie Segall: Okay, just, just doors? Okay.

Trae Stephens: And uh, so let’s see, what, what, where’s the question?

Laurie Segall: It’s pretty far. It’s pretty hard to come back from that.

Trae Stephens: Yeah. We, we’ve just like gone down this path.

Laurie Segall: Yeah, should we go further?

Trae Stephens: Oh, uh, augmented reality. And so this is, this has been a dream for soldiers for a really long time and, uh, I think it would be incredible. Like there are all sorts of like ethical goods that can come out of this. Primarily, I think most people would be shocked by how many of the casualties caused in theater are friendly fire related.

Laurie Segall: Hmm.

Trae Stephens: Like it’s just, deconflicting all of the stuff that’s going on is really hard. Uh, and it will only become harder as there are more autonomous assets deployed. And, uh, you know, obviously our team is the best in the world for building heads-up displays for defense purposes. There’s, I don’t think there’s anyone in the world that has more talent than we do around this specific topic.

Laurie Segall: I mean, if anyone’s gonna build out something like this, it could be the Oculus team, right?

Trae Stephens: Totally, yeah.

Laurie Segall: Or the team that has built Oculus.

Trae Stephens: Yeah, no, I mean there’s, there is no, there is no better team for doing this.

Laurie Segall: Right.

Trae Stephens: Um, and our decision, uh, was that it’s just not there yet. The science is not there yet to make it credible in deployment. There’s a DoD contract called IVAS, I-V-A-S, um, and Microsoft won the contract to deploy HoloLens. But it’s like super test and prototype driven. Obviously like, it’s hard to imagine a soldier walking into combat with like a, you know, really heavy, blocky HoloLens hanging over their head with a really tiny field of view. And so I think that there’s some like test and evaluation that’s happening to figure out what the future might look like, but you know, we’ll be there and ready when the science is ready to make it possible. And the software that we’re building, this Lattice platform, is at the end of the day the backend for feeding all of that data to the operator in the field, whether it’s on their mobile device, you know, over radio communication or eventually in an augmented reality headset.

Laurie Segall: And have you had any qualms with working with the, the current administration? I mean, I know you were part of the, the transition team for Trump in helping with defense and giving recommendations on what they should do. Have you had any hesitations? And I know that you guys aren’t political. You say you’re not political, but do you take any of that into account to an extent? I mean, beyond the fact that, yes, Silicon Valley is notoriously liberal, but, but I mean, even for you personally, now that we’re in an enclosed room, um, and, you know, you can’t really get out. I mean, do you think about any of this stuff personally? Like what you’re building, who you’re building it for, where it’s gonna go, where it’s gonna end up? Talk to me.

Trae Stephens: Are you implying that the door is locked and I’m not allowed to leave?

Laurie Segall: Did I do that? I don’t think, honestly I don’t think it’s locked. I’m not sure.

Trae Stephens: Look, the, the reality is like, I work at the intersection of tech, finance and government.

Laurie Segall: Mm-hmm.

Trae Stephens: Which are like the three most hated industries in America. So I’m not shying away from conflict or controversy. The way to state our engagement with the government is that we are working with the United States government. Administrations come and go, some of them faster than others, and certainly they have influence in things like budget, policy execution, presidential authorizations. Like, there, there are things that presidential administrations are responsible for. But by and large, the enormous population of civil servants are the ones that are responsible for executing the mission that they’ve been given to execute. And we are not going to withhold technology from the people that are responsible for this mission because of who’s in the White House.

Laurie Segall: But you are talking about a White House that has come under controversy for an immigration ban. And these are technologies that you’re building that could directly have an impact in, in that kind of way. So, I mean, does that ever come into account for you? Does that ever play into, into how you feel about it?

Trae Stephens: I mean, I think this is, again, the beauty of the democratic process. If there were some sort of policy that we had a strong disagreement with internally, unanimously, or at least unanimously across the leadership of the company, would we push back and say, like, “We’re shutting the business down and not selling to the government because, you know, we’re afraid of where it’s going?” I can’t say that it would be impossible that that would happen because, you know, obviously crazy things have happened to nation states in distress over the history of the world. I don’t think we’re there. I don’t think we’re, like, on the precipice of some sort of revolution that’s going to lead to some terrifying world war of significant ethical proportion. So in that regard, the decision really comes down to, you know, do you trust the democratic process? Do you trust that if a policy oversteps, there are ways that that policy can be corrected through the legislative process, through the judicial process? And right now, I think the answer is unequivocally, “Yes.” We have a Democratic-controlled House of Representatives. They have the ability to legislate. The judicial branch is, you know, functioning in America, where the constitutionality of the policies that are laid out by the executive and legislative branches is constantly being evaluated. And as long as that’s the case, I think, you know, it is the right moral, ethical thing to do to continue working with the government of the United States of America.

Laurie Segall: I didn’t mean to almost start laughing, but my follow-up was gonna be, would you deploy lightsabers to this White House?

Trae Stephens: Um, wow! I mean, we could talk about the, like, ideology of the last three Star Wars movies, because I think Luke Skywalker in The Last Jedi would certainly say that the answer should be no, because Jedi can’t be trusted.

Laurie Segall: Right.

Trae Stephens: And they should be eradicated from, from the earth because it’s too much power. Um, but I don’t think we’re talking about that. And I also think J.J. Abrams clearly disagrees-

Laurie Segall: Mm-hmm.

Trae Stephens: … and made that very clear in Rise of Skywalker.

Laurie Segall: Okay.

Trae Stephens: Which as a Star Wars fan, I really appreciated.

Laurie Segall: Um, okay. I wanna know, I’m sorry, I totally derailed that. Um, what is it like being involved in a business that a lot of folks in Silicon Valley kind of disagree with? Because you’re looking at, I think it was 2018, you have Amazon employees sending a letter to Jeff Bezos saying, you know, “Stop selling facial recognition services to the U.S. government.” You had, I think, 4,000 Google employees protesting Project Maven, uh, this contract work with the government. This is a very sensitive issue, tech companies doing business with the government, and especially this current administration. Do you feel, at Anduril, that it’s not, not a popular thing you’re doing?

Trae Stephens: Two answers to the question. First is external to the company. And the second one I think is internal to the company.

Laurie Segall: Mm-hmm.

Trae Stephens: External to the company, first and foremost: it is incredibly important that I am available, accessible and having serious dialogue and conversation around what I believe are some of the world’s most important questions around the ethics of defense technology. And this goes for all of the co-founders of the company, Palmer, uh, included. And that’s basically, like, you know, are we going to be dismissive of people’s concerns? No, we should not be dismissive of people’s concerns. Should we be involved in the dialogue around how these technologies are built, how the regulations are formed? Absolutely. We should be involved in that. And I think that’s kind of key to, you know, ha- having this conversation with you, Laurie, and to hosting dinners at my house and going down to Washington, D.C., which I’m doing later this afternoon, to engage in conversations with policy makers. It is really important to think through all of the implications of these decisions and make the right ethical decisions to the extent possible in every case. Internal to the company, I think, is slightly different. So, you can see Brad Smith at Microsoft, Jeff Bezos at Amazon came out and took strong stances of support for the U.S. government, for the Department of Defense. Google kind of went in a different direction, and I think has started backtracking a little bit on, on those decisions, um-

Laurie Segall: Mm-hmm.

Trae Stephens: … based on just a realization of the control of the business that the activists had internally. But at Anduril, you know, we’re in a unique position because, unlike Google, unlike Amazon, unlike Microsoft, our employees signed up to work on defense technology. And so when you read those letters that were written at other tech companies, the key point of contention was that they didn’t, quote, “sign up” to work on defense. Well, at Anduril, they “signed up” for defense.

Laurie Segall: Right.

Trae Stephens: They knew that from day one. And so regardless of political persuasion, you know, whether they’re socialists or libertarians or big-R Republicans or little-r, it doesn’t matter. Everyone at the company, uh, is bought into the mission. And so when we have these discussions, we have ethical discussions, we have mission discussions. But it doesn’t come down to the nature of the company or the vision for the future that we have.

Laurie Segall: May I make a suggestion?

Trae Stephens: You may.

Laurie Segall: Okay. It’d be interesting, because you guys are having all these ethical conversations about the future. Having covered Silicon Valley all these years, it’s a lot of dudes, right? And I don’t know about these dinners, and I don’t know about, you know, what it was like with Palmer when you guys were talking about the future, but whether it’s women or diversity of mindset, you know, that’s like a killer app, right? It’s, it’s like thinking about empathetic mediums, thinking about the human element in ways that I don’t think a bunch of the same types of folks will be able to think about. So I don’t know what the, the breakdown is at Anduril, and I don’t wanna judge it, but I hope you guys have a diversity of thought in the, in the room when you’re thinking about these types of things. Just because I, I do think you guys are very, you have a big play at the future. And so do you guys have diversity of thought in the room?

Trae Stephens: Well, I mean, diversity of thought. Absolutely. Other forms of diversity is something that-

Laurie Segall: In the form of diversity of thought that translates into the diversity of, like, skin color-

Trae Stephens: Other forms of diversity, yeah.

Laurie Segall: You know, all of that.

Trae Stephens: Yeah. Um, you know, this is something that we work really hard at every day, um.

Laurie Segall: But don’t give me the Silicon Valley line on it. Like, you know what I mean?

Trae Stephens: Yeah.

Laurie Segall: And I know everyone in Silicon Valley is working really hard to change it, blah, blah, blah. And it hasn’t really changed. 

Trae Stephens: I agree. We, and we, we do have people that are critical to our mission internally that do not look like standard Silicon Valley people, whether it’s skin color or gender or whatever it might be. Um, but of course not enough, and that’s something that we are working hard at, which is, I know, as you said, the Silicon Valley answer. The other thing that I would, I would say here is that oftentimes people say that the defense community, um, is like, uh, m- more unbalanced than other industries. I think that is partly true, but there are some really, really credible, uh, minority populations-

Laurie Segall: Mm-hmm.

Trae Stephens: … uh, that already exist within defense, that I think are always very important for us to engage with. Um, you know, Undersecretary Ellen Lord is the Head of Acquisition and Sustainment, uh, at the Pentagon. She’s, you know, one of the top four people in the department, uh, and we engage with her frequently. Um, uh, there are startup founders like Rachel Olney at Geosite.

Laurie Segall: Mm-hmm.

Trae Stephens: And, uh, Daniela Perdomo at goTenna that are active in the defense community. There are academics like Renée DiResta at Stanford, uh, that are, that are really strong. Um, in the journalism space, some of the strongest defense reporters are, are female, I think. Uh, Lara Seligman at Foreign Policy is incredibly strong in defense, knows what she’s talking about. Um, Morgan Brennan at CNBC is really strong. And so, you know, this, this like rejection, or some sort of suggestion that that doesn’t exist, I think is, is just false, and that should be tapped into for sure.

Laurie Segall: I’d be curious to know when we look at Russia and China, as someone who’s been deeply involved and looks at these types of threats, what’s like the scariest thing that Russia and China are doing that you’re a little bit concerned that in the United States we’re not going to be able to keep up with or we have to keep an eye out on?

Trae Stephens: Uh, by the very nature of the conversation, the scariest things are the ones that we don’t know about.

Laurie Segall: Mm-hmm.

Trae Stephens: That’s the first thing that I would say. The second is, is that China has an actual strategy around exporting ideology and technology in tandem with one another. And so, you know, if you look at the deployment of, or the export of surveillance technologies globally, as I mentioned before, you have these companies like Hikvision and Dahua, that are camera companies, Chinese camera companies. Uh, you, you have iFlytek, um, DJI that are drone companies. Um, you have AI companies like Megvii and SenseTime. SenseTime is the most valuable AI company in the entire world.

Laurie Segall: Mm-hmm.

Trae Stephens: That’s interesting to, to note. Um, and then you have all of the networking and communications infrastructure behind Huawei. And when China is going out to countries that, you know, it’s engaging with for Belt and Road, or engaging with for, uh, loan programs, they’re coming in and they’re saying, “Venezuela, Ecuador, Egypt, Iran, so on and so forth. Here is not only the technology but also the ideology that wraps up the surveillance state that we’ve built in China, and that we’ve used to systematically oppress entire people groups, using the technology that’s being built by our tech community.” That type of stuff is really scary to me. Not only because, you know, it’s being exported at a rate that is really unbelievable in the last five to 10 years, but also because this is not the engagement in the U.S. Our defense department does not work closely with the tech community. And so, post Cold War, our best and brightest engineers, uh, kind of stopped going to the DOD directly, or to the defense primes like Lockheed, Raytheon, General Dynamics, so on and so forth. And they started working on optimizing ads at Google and Facebook. You know, that’s where the, the bulk of computer science talent is in the United States. And so, you know, our exports, if you look at the exports of United States technology internationally, particularly related to the military, um, like foreign military financing, foreign military sales facilitated by the Department of State, it’s F-35s, it’s AMRAAM missiles. It’s all, all this really high-end military equipment that’s built by primes, and it’s not the, like, low-level stuff that actually runs the countries that we’re, that we’re sending it to.

Laurie Segall: Hmm.

Trae Stephens: Um, and that, that’s really concerning to me.

Laurie Segall: Yeah. I saw you, you wrote, um, in an op-ed, “Like, Putin, in September 2017, said that AI leadership was a means to become the ruler of the world. China has vowed to achieve AI dominance by 2030. Meanwhile, much of the top AI talent in the United States is working on things such as ad optimization.” So, that definitely seems like it- it’s something you think a lot about.

Trae Stephens: Yeah, absolutely. You know, some of these countries, uh, for what it’s worth, um, China, Russia, they actually have the ability to conscript their top talent into, uh, working on national priorities.

Laurie Segall: Hmm.

Trae Stephens: I’m not suggesting, by the way, that we should conscript our talent in the United States. I’m just stating that it is an obvious advantage that they have. Which means that we need to figure out ways to appeal to the talent that we need domestically, to get them to work on those national priorities.

Laurie Segall: But how do you do that? Because there’s been so much tension between Silicon Valley and the government for a while, and, and the government certainly has a reputation for moving slow and not really getting things done, and this current administration isn’t exactly, um, a place that a lot of liberal-leaning Silicon Valley wants to be a part of. So it certainly creates a, a conundrum of sorts.

Trae Stephens: You know, I think there are a lot of people in the tech community that not only would be open to working on national priorities and defense, but would be excited to work on national priorities for defense. They just need a place to do it. And we have a model for this. If you look back over the last 30 years, since the end of the Cold War, there are two, unfortunately only two, but there are two multi-billion-dollar venture-backed companies that have done the majority of the work with the government: Palantir and SpaceX. Palantir and SpaceX have been able to recruit the people they needed to execute their mission.

Laurie Segall: Mm-hmm.

Trae Stephens: Like they, SpaceX has a collection of the best, uh, rocket scientists in the world working on the Falcon platforms. Palantir has access to the best, you know, data engineers in, in the world to build, um, the Palantir Gotham and Foundry products. At Anduril, you know, recruiting hasn’t been the, the problem. It hasn’t been one of the core problems. Like, we’re able to recruit and retain top talent. The problem is in the government working with those companies.

Laurie Segall: Hmm.

Trae Stephens: If you look at Palantir and SpaceX, um, it took them such a long time to actually crack into the, the government industrial base, that they literally needed to have billionaire co-founders.

Laurie Segall: Right.

Trae Stephens: Like they needed to have a billionaire working at the company to make sure that the business was able to be financed through meaningful contract volume. You know, something that we realized at the outset, like something that, uh, I’m sorry, at the onset when we first started Anduril, um, you know, you can debate whether or not Palmer’s a billionaire. But, um, I think having the ability to raise capital at, uh, good terms, at reasonable terms, for, uh, a capital-intensive business, as well as potentially finance the business through the slowness of government, is incredibly important. And right now, because of this ecosystem that’s been created around the defense primes in the military, you really have to have this kind of Howard Hughes style of entrepreneurship where, like, only the, the, you know, ragtag bunch of billionaires are actually able to build a business, and that needs to be fixed.

Laurie Segall: Hmm.

Trae Stephens: We need to get to the point where the government is willing and able to deploy meaningful contracts to companies that are working on things that are important to them, um, rather than just, like, writing a bunch of little $200,000 grants, which is their strategy today.

Laurie Segall: Can you give us, like, a very visual, like, I, I want this very specific, if you don’t mind. Like, what does the future of warfare look like? I mean, you’re in the center of it. You can throw in some of your technologies, some of the ones you’re building coming down the pipeline that you’re not talking about yet. You can throw in those. But, like, what does the future of warfare look like? How do we battle this out? How do we defend ourselves? What are soldiers using? Just, like, take us there.

Trae Stephens: We’ve talked about some of these already.

Laurie Segall: Hmm.

Trae Stephens: I think some of it is real time situational awareness of the entire battlefield. Um, so knowing everything that’s going on in an environment.

Laurie Segall: So these augmented spaces?

Trae Stephens: Uh, yeah, i- augmented for soldiers on the ground to the extent that we require soldiers on the ground.

Laurie Segall: Mm-hmm.

Trae Stephens: Though, I expect that that number will go down significantly. But i- if you, if any of the listeners have read the book or seen the movie Ender’s Game, you kind of know this concept of basically putting yourself in virtual reality and then kind of having a top-down view of a battle space and being able to manipulate the assets that exist in that space. That type of interaction is very likely going to be more and more common. The Air Force is starting to play around with this in a program that they’re calling the Advanced Battle Management System. And I think that over the next 10 years that’s gonna become super commonplace. So there’s that. I think, you know, as far as engagement, like kinetic engagement with the enemy, we’ll be much more driven by autonomous systems, whether they’re remotely controlled or fully autonomous, so that we’re not putting humans in harm’s way where they’re not required. Another great example of this is, there’s a venture-backed company called Shield AI that has built a small drone, uh, unmanned aerial system, that can be tossed into a building and it will do a survey of the building-

Laurie Segall: Mm-hmm.

Trae Stephens: … and allow the operators, the Special Forces guys, whatever, that are about to kick down the door to know what’s behind the door. This type of information collection-

Laurie Segall: Yeah.

Trae Stephens: … that will save people’s lives, I think, will become more and more pervasive across all of the different battlefields. But really, the summarization of all of these categories is that I think if we can stay ahead, the future of warfare is no warfare. That is, the intent is that you get to a place where your information dominance, your battlefield dominance, your weapons-platform dominance are all so, so real and so large, that the gap is insurmountable and the enemy won’t want to engage in combat because they know that they’ll lose. And, you know, there’s all sorts of, like, crazy science fiction versions of this. Um, one of my favorite science fiction authors is this guy named Vernor Vinge. And he has, uh, a series called The Peace War. And in it he talks about this, like, force field that he calls a bobble, and inside the bobble time is frozen and it’s impenetrable. And so, if you had two, um, two enemies that were fighting one another, you just bobble one of them, and then you go to the other one and say, “If you don’t stop, we’re going to bobble you.” And then you bobble them, and you unbobble the other one and you say, “I’m gonna say the same thing to you. If you don’t stop, we’re going to bobble you for 10,000 years.”

Laurie Segall: Mm-hmm (affirmative).

Trae Stephens: And then you unbobble them and you say, like, “It’s your choice. You can engage in combat or you can be bobbled for 10,000 years.” And, and basically the moral of the story is that conflict basically just stops.

Laurie Segall: Hmm.

Trae Stephens: Because people realize the cost of engaging in conflict is way too high. And I’m not saying that we’re, like, building a bobble or that we’re anywhere near building one, but I think every piece of technology that you build that continues to build upon your advantage gives us the ability to control, to some extent, the amount of conflict that’s happening globally.

Laurie Segall: Could the future of war be fought solely by artificial intelligence?

Trae Stephens: Uh, I mean, I don’t think so. Eh-

Laurie Segall: Mm-hmm.

Trae Stephens: … this conversation is so far out there that it’s always so hard to engage in like a credible way.

Laurie Segall: Mm-hmm.

Trae Stephens: Um, you know, there’s movies like WarGames where the computer decides that the fate of humanity will be better off if it just, like, nukes everything to oblivion. You know, I think that humans are responsible for making human ethical decisions, uh, and it will be that way for a very, very long time. And to the extent that computers are responsible for making decisions, we should be working on ways to counter that threat, to prevent that from becoming the way, th- kind of like the standard for how conflict is, is managed.

Laurie Segall: I only push back on that to say, well, it’s… isn’t it y’all’s job to think kind of far into the future? Because the, the problem in technology is that it seems as though some of the, the folks in Silicon Valley haven’t thought far enough into the future, and we see all these human problems that have arisen.

Trae Stephens: Yeah, I would say that there’s some of this that is academic and true.

Laurie Segall: Hmm.

Trae Stephens: Like, we should be having academic conversations about these things. But this is not how conflict has been managed over the course of history.

Laurie Segall: Hmm.

Trae Stephens: Like we didn’t have a detailed discussion about the atom bomb, uh, and come up with like ethical frameworks for how we think about it. And then like, onl- only then after we’ve like perfected the ethical framework decided to build it. Same thing with like Kim bio, same thing with precision-guided weapons, same thing with cyber. Like, these things get litigated regulatorily litigated by the people that hold the technology. If we sit back and just say like, “We’re gonna spend the next 30 years, like just having a bunch of like fireside chats at conferences with a bunch of academics about what each of these defense technologies could mean.” And we don’t build it and guess who’s going to build it? All of the other countries that are not having that academic dialogue and we’ll be sitting on our hands when they have a critical national security advantage against us that puts our own lives at risk. That seems like a really bad trade off.

Laurie Segall: Speaking of all of the other countries, who will you not do business with?

Trae Stephens: Yeah, I mean, this is a, it’s a, it’s a great question. Um, and, you know, it, it, it’s case dependent, it’s process dependent, it’s governmental-system dependent. Um, certainly we’ll work with the close allies, so the Five Eyes community: Australia, UK, um, Canada, uh, New Zealand. And there are no questions about our close allies, but we have to have rigorous conversation about really anyone else.

Laurie Segall: So no China.

Trae Stephens: No China. Although I, I think the only reason China would want our technology at this point is to steal the IP and, and develop it on their own. Um, I don’t think that they would actually be interested in being, like, a paying customer.

Laurie Segall: Have you turned anyone down?

Trae Stephens: Uh, you know, uh, we have so much inbound interest right now in what we’re building that, you know, we turn down 99% of what comes into our, uh, into our funnel.

Laurie Segall: But have you turned anyone down for those types of reasons? Like, uh, maybe not-

Trae Stephens: Like an ethical reason?

Laurie Segall: … ya, an ethical reason.

Trae Stephens: We have decided not to follow up with people because we thought that the use case violated some ethical principle.

Laurie Segall: Like who?

Trae Stephens: I wouldn’t, I wouldn’t want to like start throwing people under the bus-

Laurie Segall: Mm-hmm.

Trae Stephens: … who I never even responded to an email from. That seems, that seems cold-hearted and unnecessary.

Laurie Segall: Fair. Um, lately I’ve become obsessed with this idea of spies, and this idea that, um, you know, there’s so much valuable IP in Silicon Valley. You guys aren’t Silicon Valley based, but, I’m assuming, you know, you guys are a valuable company. I’m assuming you guys do background checks and all sorts of stuff, but are you seeing, eh, you know… Do you ever think about, um, the employees you guys hire, or even Silicon Valley as a whole, being worried about people infiltrating these companies? I know it’s, it’s kind of, it’s not necessarily Anduril-focused, but just in general, having your experience in the government and now in Silicon Valley, you were at Palantir. Do you ever worry about, um, nation-states kind of infiltrating these companies for valuable data?

Trae Stephens: Of course. Yeah. I mean, I think our adversaries, particularly China, have made no secret of their interest in disrupting our defense industrial base and stealing intellectual property to the extent possible. The similarities of their fifth-generation fighter to the F-35 are striking. Like, it, it seems to me like they’re actually doing a pretty good job at stealing IP when they, when they want it. This is a huge concern. Um, I mean, if you look at, like, the impact of the tariffs tha- that have been recently implemented, compare that to the cost to the American economy of IP theft from China, and it’s like, there’s not even a comparison. Like they’re just, just like ripping so much. And so yeah, of course, it would be crazy to not assume that they’re trying to get at the personal data of the people that are working on these top priorities-

Laurie Segall: Hmm.

Trae Stephens: … as well as information proprietary to the companies that are working on these priorities.

Laurie Segall: Have you had discussions about it at Anduril?

Trae Stephens: Oh, sure. I mean, information security is a critical piece of the pie for everyone working in the defense industrial base. We have a crack team of InfoSec professionals that spend their entire day thinking about how to lock down the edges of the network, how to think through insider threat. Y- you know, it’s a core competency that I think you have to have.

Laurie Segall: And I know that for the border control tech, you guys actually, like, went out there. I don’t know if you went out there, but I know Palmer went out there and actually, like, was looking at this technology, deploying this technology, playing around with it. You know, you’re out, um, on the border, and I’m sure there are certain dangers. Have you ever worried about your own safety?

Trae Stephens: Um, I wouldn’t say that I’ve worried about my, like, physical safety. I’ve definitely spent a lot of time thinking about my digital footprint and making sure that, uh, I’m not exposing myself or my family, uh, to undue risk. And so there’s al- always this, like, kind of digital hygiene exercise that you can go through-

Laurie Segall: Hmm.

Trae Stephens: … to try to protect against that. 

Laurie Segall: Uh, last question, why do you do this?

Trae Stephens: It’s a sense of duty, to be honest. I mean, it, it would be crazy to m-make it about some sort of, like, sacrifice, because I think, you know, being an entrepreneur is a ton of fun, and I think if we’re successful, you know, I think there’s financial reward for our employees, for our investors, certainly for the founding team. So I, you know, I’m not gonna act like a martyr. Um, but at the same time, like, these are really hard problems, and, you know, we’re not building an app to share 140 characters in a slightly better way with our friends. Like, this isn’t popular. And I think w- we have to unify around the idea that it’s really important. And going back to that September 11, 2001, sitting in my principal’s office, I knew at that moment that this was going to be the career that I was going to work on. I didn’t know that this is what I would be doing specifically. Um, but I can’t imagine, you know, going to work every day and not thinking about how I can be helpful to our national security, to the priorities that are set forward and the values that our national security stands for.

Laurie Segall: Why did you know that, that, that was gonna be all you were gonna do the rest of your life?

Trae Stephens: I, I think, you know, part of the, the lie that’s being told to the world, particularly by the kind of modern culture, whether it’s, like, Millennial, Gen Z, whatever, is that there’s absolute equivalence in, like, all things: morality, culture, whatever. And I think events like 9/11 kind of stuck with me as this realization that there’s a real world out there, and we can’t just hide in this little bubble of comfort and say, like, “Actually, everyone’s the same, like everyone believes the same things, everyone values the same things,” because I just fundamentally think that there’s something about democracy, there’s something about capitalism, there’s something about the freedom that we’re, that we’re afforded, that’s worth defending. And without that, you end up living in these authoritarian, oppressive societies where none of those values can be exercised. And, you know, I don’t really care if I can open Twitter on my phone and, you know, yap at people about the political issue du jour, but I do care a lot about my ability to exercise freedom of speech, and that’s something that is not protected in many places in the world.

So this episode had a lot to take in. I’m guessing you guys might have some thoughts. What do you think of Anduril and the technology they’re building? Where should we draw the line? Text me on my new community number: 917-540-3410. It goes directly to my phone. I promise I’m not just saying that. And here’s a personal request: if you like the show, I want to hear from you. Leave us a review on the Apple Podcasts app or wherever you listen, and don’t forget to subscribe so you don’t miss an episode. Follow me, I’m @lauriesegall on Twitter and Instagram, and the show is @firstcontactpodcast on Instagram; on Twitter we’re @firstcontactpod.

First Contact is a production of Dot Dot Dot Media executive produced by Laurie Segall and Derek Dodge. This episode was produced and edited by Sabine Jansen and Jack Regan. Original theme music by Xander Singh. 

First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeart Radio.