An Inside Look into DARPA’s RACER Program
Executive Summary
Stuart Young, Program Manager at DARPA’s Tactical Technology Office, joined Grayson Brulte to discuss the revolutionary RACER (Robotic Autonomy in Complex Environments with Resiliency) program. Moving beyond the breadcrumb approach of early challenges, RACER pushes autonomous vehicles to navigate unstructured, off-road environments at tactical speeds without reliance on pre-existing maps or GPS. The conversation covers the evolution of military robotics, the successful deployment of this technology with the 11th Armored Cavalry Regiment, and the dual-use potential for industries like mining and search and rescue.
The Road to Autonomy Episode: Key Questions Answered
The RACER (Robotic Autonomy in Complex Environments with Resiliency) program aims to develop high-speed autonomous vehicles that can navigate unstructured, off-road terrain without relying on pre-existing maps or GPS. Unlike previous challenges that followed breadcrumbs, RACER forces robots to generalize across environments at speeds faster than manned formations, enabling them to move ahead of soldiers and change the risk calculus of missions.
While traditional autonomous vehicles such as Waymo rely on high-definition maps, road rules, and predictable infrastructure, RACER vehicles operate in “off-road” environments where trails may not exist or are blocked by obstacles like vegetation or terrain features. The RACER system must calculate driveability in real-time using onboard sensors (lidar, cameras) without GPS, adapting to everything from desert rocks to dense forests.
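As a rough illustration of the idea of computing driveability in real time from onboard sensing (this is a toy sketch, not the actual RACER stack; the function name, weights, and thresholds are invented for illustration), a local terrain heightmap can be scored cell by cell, combining slope and roughness into a traversal cost:

```python
import numpy as np

def driveability_cost(heightmap, cell_size=0.5, max_slope=0.6, rough_weight=2.0):
    """Score each cell of a local terrain heightmap (meters) for driveability.

    Cost combines local slope (gradient magnitude, rise/run) and roughness
    (deviation from a locally smoothed surface). Cells steeper than
    `max_slope` are marked impassable (np.inf). Toy illustration only.
    """
    # Slope: finite-difference gradient of elevation over cell spacing.
    dz_y, dz_x = np.gradient(heightmap, cell_size)
    slope = np.hypot(dz_x, dz_y)

    # Roughness: residual after subtracting a 3x3 mean-filtered surface.
    rows, cols = heightmap.shape
    pad = np.pad(heightmap, 1, mode="edge")
    smooth = sum(pad[i:i + rows, j:j + cols]
                 for i in range(3) for j in range(3)) / 9.0
    roughness = np.abs(heightmap - smooth)

    cost = slope + rough_weight * roughness
    cost[slope > max_slope] = np.inf  # untraversable: too steep
    return cost
```

A planner could then steer toward the lowest-cost corridor in the map each cycle, recomputing as new sensor data arrives, with no prior map required.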
Stuart Young highlighted that the technology developed for RACER has significant civilian applications, particularly in mining, agriculture, and search and rescue. For example, autonomous vehicles could navigate dangerous or inaccessible disaster zones to aid rescue efforts without putting human responders at risk, or operate in mining environments where GPS signals might be obstructed.
The Road to Autonomy Topics & Timestamps
[00:00] The History of Autonomy at DARPA: From the Grand Challenge to Today
Stuart Young discusses DARPA’s long history with autonomy, highlighting the Grand Challenge 20 years ago as a pivotal moment that aimed to change the world with autonomous vehicles. He reflects on the “DARPA hard” mantra of pushing boundaries until failure occurs to ensure true innovation.
[6:54] How RACER Differs from The Grand Challenge
While the Grand Challenge relied on “breadcrumbs” and pre-defined paths, RACER separates these checkpoints by kilometers, forcing robots to make complex decisions in unstructured environments. The goal is to move beyond following a path to handling dynamic obstacles like blocked roads or washed-out banks.
[11:59] Operating Without Maps or GPS
To ensure resilience against jamming or lack of data, RACER vehicles are tested without maps or GPS, relying instead on onboard sensing to navigate. This approach aims to solve the “communications problem” by allowing robots to close the loop locally rather than depending on high-bandwidth data links.
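One standard way to keep navigating when GPS is jammed or unavailable is dead reckoning from onboard sensors, integrating wheel speed and gyro yaw rate into a pose estimate. A minimal sketch (the function name and midpoint-heading approximation are illustrative assumptions, not the program's actual localization method):

```python
import math

def integrate_odometry(pose, v, yaw_rate, dt):
    """Propagate a (x, y, heading) pose from wheel speed v (m/s) and
    gyro yaw rate (rad/s) over dt seconds -- dead reckoning, no GPS fix.
    """
    x, y, th = pose
    # Use the midpoint heading for a slightly better arc approximation.
    th_mid = th + 0.5 * yaw_rate * dt
    return (x + v * dt * math.cos(th_mid),
            y + v * dt * math.sin(th_mid),
            th + yaw_rate * dt)
```

Drift accumulates without an absolute fix, which is why such estimates are typically fused with lidar or visual odometry onboard rather than corrected over a data link.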
[14:00] Managing Heat, Acoustic, and Visual Signatures in Autonomy
The discussion turns to the complexities of signature management, where autonomy must account for thermal, acoustic, and visual detection by adversaries. Young explains that while initial tests focused on making the system work, future steps involve refining these signatures against multimodal sensors.
[19:43] Testing in the Mojave, Central California, and Texas
Young details the rigorous testing schedule, moving from the geometric rock formations of the Mojave Desert to the vegetation-heavy environments of Camp Roberts and Fort Hood. This variety tests the system’s ability to adapt to compressible terrain like tall grass and bushes versus rigid obstacles.
[25:11] Building the RACER Brain and Spawning New Companies (Overland AI, Field AI)
The program’s academic roots at institutions like UW and JPL led to the organic creation of companies such as Overland AI and Field AI. This ecosystem allows private capital to further develop the technology for both military and dual-use applications.
[27:12] The Rules of RACER: Speed Metrics and “No Maps” Constraints
To simulate combat tempos, RACER vehicles were required to match or exceed the speeds of manned formations like the M1 tank. The “no maps” rule was strictly enforced to prevent robots from simply finding low-cost trails, forcing them to navigate complex terrain autonomously.
[33:36] The Hardware: Modifying Polaris RZRs and Textron M5 Tracked Vehicles
The program utilized modified Polaris RZRs with by-wire kits and Carnegie Robotics’ compute stacks, later expanding to 12-ton Textron M5 tracked vehicles. This progression proved that the autonomy software was platform-agnostic, working effectively on both light utility vehicles and heavy tracked systems.
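Platform-agnostic autonomy typically depends on a common by-wire command interface that the planner targets, with each platform translating the same command into its own actuators. A hypothetical sketch (these class names and the (speed, curvature) command are assumptions for illustration, not the Carnegie Robotics or Textron interfaces):

```python
import math
from abc import ABC, abstractmethod

class ByWireBase(ABC):
    """Command interface an autonomy stack could target, so one planner
    output -- (speed m/s, curvature 1/m) -- drives very different platforms."""
    @abstractmethod
    def apply(self, speed, curvature):
        """Translate a (speed, curvature) command into actuator values."""

class AckermannVehicle(ByWireBase):
    """Steered-wheel platform (e.g., a side-by-side): curvature -> steer angle."""
    def __init__(self, wheelbase):
        self.wheelbase = wheelbase
    def apply(self, speed, curvature):
        steer = math.atan(self.wheelbase * curvature)  # bicycle-model steering
        return {"speed": speed, "steer_angle": steer}

class TrackedVehicle(ByWireBase):
    """Skid-steer tracked platform: curvature -> differential track speeds."""
    def __init__(self, track_width):
        self.track_width = track_width
    def apply(self, speed, curvature):
        delta = speed * curvature * self.track_width / 2.0
        return {"left_track": speed - delta, "right_track": speed + delta}
```

Under this kind of abstraction, the perception and planning layers never change between a light wheeled vehicle and a 12-ton tracked one; only the thin adapter at the bottom does.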
[37:37] Requirements vs. Possibilities
Young explains DARPA’s role in the acquisition process: rather than waiting for specific military requirements, DARPA demonstrates what is technologically possible. This helps de-risk advanced capabilities and informs the services (Army, Marines) on how to write more ambitious future requirements.
[40:01] Field Testing with the 11th Armored Cavalry Regiment at the National Training Center
In a significant field test, the 11th Armored Cavalry Regiment used RACER vehicles as an opposition force, validating the technology’s tactical relevance. The experiment highlighted the value of unmanned systems for high-risk reconnaissance, allowing commanders to send robots much further than manned teams.
[44:43] Deploying RACER in the Field
Looking to the future, the vision is for a single operator to command a platoon of vehicles, shifting from simple movement to coordinated maneuver. This “football coach” style of control allows for complex, multi-agent tactics that enhance operational effectiveness.
[46:12] The Legacy of RACER: Dual-Use Applications and Saving Lives
The ultimate legacy of RACER is saving lives—both by keeping soldiers out of harm’s way and enabling safer search and rescue operations in hazardous environments. By changing the risk calculus, these autonomous systems open new doors for humanitarian and industrial applications alike.
Full Episode Transcript
Grayson Brulte: Today’s a special day on The Road to Autonomy. A dream came true: we have a representative from DARPA today. Stuart Young, a program manager in the Tactical Technology Office, is here. Stuart, I am honored to have DARPA on, and this is the one get that I always wanted to have. So thank you so much for making this happen. To kick things off, when did DARPA first get interested in autonomy? Because for all practical purposes, DARPA paved the way.
Stuart Young: So the history of autonomy at DARPA is quite long. We’ve been focusing on autonomy in all domains, really. I’m more of a ground guy, and somewhat in the air. But the pivotal moment was when we did the DARPA Grand Challenge 20 years ago. In the ground domain there was some work done before that, but the Grand Challenge was really a vision to try to change the world with autonomous vehicles. So to me, in my domain, that’s when it really started.
Stuart Young: So that was, what, 20 years ago? I remember being at the first race, if you will. It was a little challenging; things didn’t go perfectly well. And then they conquered it the next year. That’s kind of how DARPA does it: throw down a really hard problem. Sometimes it goes great, sometimes it doesn’t go well. If it goes perfectly every time, then we’re not pushing hard enough. That’s the mantra around here: if everything is successful, then we reevaluate, like, are we really pushing that hard? So that was, to me, the big thing.
Stuart Young: A friend of mine runs the NoMars program; they built an autonomous ship, a ship that doesn’t need any people on board. There’s a lot of work going on in the undersea space, too; during my tenure, quite a few of my colleagues have been doing that. Probably about 10 years ago, we looked at air vehicles collaborating together, which goes beyond a single agent into multiple agents, which is pretty cool. But the ground domain is my passion. One of my friends, a former DARPA PM, said the hard thing about ground robotics is the ground. It really speaks to this vehicle-terrain interaction problem that we have. There are a lot of things on the surface of the earth, and that makes the problem really difficult. So that’s why I came to DARPA to work on that problem, after working on it for about 15 to 20 years in the labs.
Grayson Brulte: There are so many great, interesting historical stories from that race in Primm, Nevada. There was the vehicle that ran through the fence, the one that tumbled down the side of the hill, and that terrain out there, for the audience, is difficult to say the least. But you never gave up, and obviously the Stanford team went on to win it. And then a few years later, DARPA rolls out the Urban Challenge: oh, you conquered this? We’ve got another challenge for you. Is it in the mission of DARPA to always keep pushing the possible forward?
Stuart Young: Yeah, I mean, the mission is to create strategic surprise for the United States and to prevent tactical or strategic surprise against us. In order to do that, we are always pushing the boundaries of what’s possible, demonstrating what is possible, understanding what’s not possible, and seeing if there’s something we can do about it. So yeah, that’s the spirit; that’s what everybody lives towards here. And it is about taking risk. We take measured risk; we don’t do it recklessly, we do it with intention, and we have ideas of how we can actually overcome those risky areas when we embark on programs. But yeah, the Grand Challenge really showed this idea of, can we drive autonomously over long distances? That environment is super tough.
Stuart Young: I actually went back to that environment quite a few times during the RACER program. I like to say that we went off-road, whereas they were on trails out in that environment. As you said, the environment out there is super unforgiving. And we went even further off-road in the RACER program, to push the boundaries beyond where they were in the Grand Challenge. Then, because it’s not too far from there, we went to where they did the Urban Challenge. I didn’t go to the Urban Challenge personally; I saw all the videos. That environment is out in the middle of the Mojave Desert as well, at an old Air Force base. It was really exciting to see how they interacted with other vehicles, and it exposed this whole new problem that had to be solved. It’s one thing for the vehicles to operate in a static world, but the world’s not static. So you have to start looking at how you interact with other agents or other people or other things in the environment. I think those two challenges combined really set the conditions for the self-driving community and the problems they’re still working on.
Stuart Young: And, and it’s really, I. Gratifying that to be part of an organization that not only goes after a hard problem, but then it stimulates others to like, um, keep working on it. And so that’s, you know, what we try to do with Racer and a lot of us try to do the same thing with our programs is not just solve the technical problem or, or at least start, uh, solving the technical problem, but it stimulate others to want to invest and realize, oh yeah, yeah, we need to do that. And, um, it, it gets other people motivated to work on those problems as well.
Grayson Brulte: If my memory serves me correctly, the Urban Challenge was at the base in Victorville, California.
Stuart Young: Yes it was. That’s right.
Grayson Brulte: Because if you look at what happened in Victorville and what happened in Primm, DARPA incubated and funded, we can’t forget about that, the self-driving car industry. I can make a very analytical argument, but you already know the truth of this: it is my true belief that if it wasn’t for DARPA, we would not have self-driving cars operating at the level they’re operating at today, because of what you, the fine men and women at DARPA, did to accelerate this. So there’s a huge thank you for that. The impact DARPA has had on society and on the US economy is tremendous, and it’s not just self-driving. If you go through the historical archives of DARPA, all the inventions that, if you want to say, were seeded through DARPA, it is pretty amazing. I would encourage our audience to go through the DARPA archive at darpa.mil, because it’s absolutely fascinating. So you have the challenges, the Grand Challenge and the Urban Challenge. Obviously those wrapped up and, as I just said, started an industry. Did you take learnings from those challenges into the RACER program?
Stuart Young: Absolutely. In fact, when I was pitching the idea, the deputy director of DARPA said, well, we already solved this problem, Stuart; what are you doing here with the RACER program? And I said, well, you know, there are subtleties to the problem, and I explained those to him, and he was like, yeah, good point. I like to characterize it simply, not to diminish anything; you’ve got to start somewhere, and it’s always about knocking down the next technological barrier. But in the Grand Challenge, the route was known, not too far ahead of time, and the teams had essentially breadcrumbs that they had to follow. So when you think about it, the path was provided and they had to follow that path, generally. And when those breadcrumbs are really close to each other, a few meters apart or whatever, the robot doesn’t have to do a lot of decision making outside of following the breadcrumbs.
Stuart Young: And so we, we took that, we took that idea and they were like, well, what happens if we separate the breadcrumbs like over the distance of kilometers? And it’s the distance between these checkpoints or, uh, that, that the robot has to make all these other decisions. The other thing is when you exquisitely define the route. The human is essentially saying, this is where I want you to go, and that’s fine until it’s not right. So what happens if there’s a cow blocking the road or a creek is over washed the banks and the road is not available? There would’ve been limitations in how we could have done, and we’re seeing these in the, some of the challenges that the self-driving community is addressing, you know, occlusions, pedestrians, all these types of things. So it’s really a, it’s a, it’s a much bigger problem, but you have to start somewhere. And so the Grand Challenge was amazing in that it really got people starting to think about these problems and solving part of the problem, and then really uncovering like, what are the other problems that still remain?
Stuart Young: So that was kind of what led to the, um, the Urban Challenge is like, okay, well this is great, but how do we deal with like. Dealing with other vehicles and structure and rules of the road and all these types of things. And so you uncover this next problem. Um, Racer, we wanted to focus on, um, really getting robots out in front of the formation. So my, my premise was that I wanted robots to be in front of the man formation, uh, which means I don’t want them following humans because that’s dangerous inherently in a lot of missions. So what does it take to get robots out in front? And robots were too slow, which means if the robots aren’t fast enough, then people are waiting for them. That’s unsatisfactory. Um, and if they can’t handle the environment and they constantly need help, then they’re not very autonomous. And so we kind of used that as our principles and we separated the checkpoints. We wanted to do it in any environment, not just in the desert, which is hard, but it’s not the only environment on earth.
Stuart Young: Right. So what does it take to have that more generalizable capability of going in completely complex off-road, cross country style terrain at speeds faster than like the Army or the Marines, you know, on the ground domain need to maneuver so that we can get robots out front. And then that opens up this whole other opportunity for how you can employ. Ground vehicles, um, for military applications and even, you know, dual use civilian applications. So that was the premise. Um, we learned a lot of lessons from the Grand Challenge and built off of those. So we stand on those shoulders of those giants and, you know, really kind of kept pushing it. And we also recognized it to solve the problem for Racer, that really you needed to solve the problem in a different way. The Grand Challenge and the way the self-driving community does it, you know, they have a lot of a priori information, you know, information about the world. They know where the, all the vehicles, or excuse me, all the buildings are, all the road networks. You know, they have these exquisite maps and which is very important for them in the approaches that they follow.
Stuart Young: Um, we said, well, what happens if we don’t give them maps? Well, they give them no maps, like, which is probably over. Complicating things, but we did it for a reason, because we wanted to drive to being able to solve the problem in a different manner than the Grand Challenge did, and then we knew we could always bring them together. Like, if we can solve the problem without maps, then how much better can the systems be when we do add, you know, give them the ability to use maps. Uh, similarly, what happens if you don’t have GPS? You know, it could be jammed, it could be out that day. Um, so you, you have to start looking at this, but this all. You know, builds from what we learned from the grand challenge. Um, and we knew that, you know, solving it a different way was gonna try to create another industry. And that’s a little bit distinct and different from the self-driving community’s problem With that huge amount of a priori information before you start the mission.
Grayson Brulte: You open up a lot of opportunity, in my opinion, in the mining segment and the agriculture sector in the private sector. But in the military sector, to me, no maps and no GPS, and I’m not a military historian or military technologist, but from a soldier’s perspective it would seem to make it a lot harder to find those robots if there are a lot fewer communications and signals going out of them. Was that one of the things that you looked at?
Stuart Young: Yeah, absolutely. In my research before I came to DARPA, one of the big things we focused on was this huge communications problem that we face. Command and control is a big thing, and one of the reasons you can’t always have it is because the physics is hard, and then there are techniques adversaries can use to make it even more complex than that. So we wanted to make sure we could still achieve the missions without being dependent on comms. Even before I came to DARPA, I saw autonomy as part of the solution to the comms problem, and similarly, comms can be helpful for the autonomy. Instead of depending on high-bandwidth 4K video, what if we just pass a little bit of situational awareness? Now I have less information to push through, so the communications pipe, whatever mechanism it is, can be smaller. I don’t require as much, because I can let the robot close the loop for itself onboard the vehicle. And we actually found that the robots do a better job than humans do teleoperating, for example, even with pristine communications pipelines. So yeah, absolutely, I think autonomy is part of the solution not only to the comms problem but also to scaling robots. You can’t have one person controlling every robot, or you need lots of people, which just makes your bandwidth problem even worse. So the more you can put on the robots to solve, the better, which is the definition I use for autonomy: no human having to do a specific task. It doesn’t mean the humans aren’t involved; the humans can say what they want the robots to do, but they don’t necessarily tell them how to do it. And that really opens up a lot of opportunities for how to use these systems in all parts of society, not just the military.
Grayson Brulte: Let’s look at all parts of society, and I’m going to throw you a curveball here: heat. Going back to the military, if you’re in theater, an adversary, or to use a simple term, a bad guy, can use thermal imaging and see the heat coming off of these vehicles and their sensors. Did heat play an issue as you developed this all-encompassing autonomy stack? Did you have to limit the heat, given that you have both civilian and military application uses for this?
Stuart Young: So the short answer is yes, we have to consider it. But as I mentioned earlier, the more you do, the more you learn about all the other problems you have to solve. I would characterize it as: we spent a lot of time making sure our systems could handle the heat of the environment before we even got into thermal signature and signature management, just dealing with the fact that the compute systems run hot. And the engines are hot too. Yes, you can detect them with thermal imagers, but there are things even our soldiers do on manned vehicles that you can do: how you maneuver, and that type of thing. From a dual-use perspective, being sensed with thermal imagers may not be as important, but there’s always this game of figuring out what I have to do to manage my signature. And the first thing we wanted to do was focus on: can we solve this problem at all? With a lot of the success we’ve had in the RACER program, people say, but why do you use lidar and other things; these are active sensors, they emit. And I’m like, yes, but when we started this program, we didn’t know how to solve this problem; it had never been done. So you have success, and then people say, but you didn’t do this and that. I understand, but it wasn’t done four years ago. We weren’t driving fast through the desert, or through woods, or anything.
Stuart Young: Um, so we started with that and then we started relaxing our constraints. So we started not using GPS. We started, um, seeing what we could do, um, at night, uh, with lidar. Then we started doing what can we do at night without lidar? What can we do with just imagers? And so we started understanding where the technical boundaries are. Uh, um, there’s off ramps of success that the systems can use now. Um, for a lot of utility in search and rescue and other civilian dual use domains or mining as you talked about. Um, and then there’s also the opportunity for understanding like, okay, where do we go next in the military domain regarding this con concept of, you know, signature management. Like what does camouflage even mean in a world where you have to manage? From EO sensors, IR sensors, acoustic sensors, radar, like how, you know, you know, we, even the director of DARPA even talked about stealth. Um, you know, stealth was about, you know, being not visible on radar, but there’s other sensors now, right? So this, this multimodal perspective is, uh, a problem that we started looking at. How do we manage our signature against all of these modalities of sensing? So. It really just revealed that this complex problem is even more complex than we first anticipated. Um, it’s more than just heat. Um, you know, you can’t have a really loud vehicle, you know, a hundred yards away and expect to be hidden, right? Like it, so it’s acoustic is a problem. So it’s not just heat, it’s acoustic, it’s visible. Um, and so it really is exciting because then you now uncover this whole new area of problem that maybe DARPA needs to get after next. So, um. Just like the Grand Challenge helped in stimulate me. I think Racer Will has a lot of benefits, but it also stimulate, um, new and exciting research for the people who follow me.
Grayson Brulte: Putting on my engineering hat here, and thinking of all the different components inside one of these vehicles and the things you potentially have to modify, from bearings to brakes and all sorts of things: how close do you think we are to having the B-2 stealth moment for ground autonomy?
Stuart Young: a stealth perspective, I think it kinda goes back to that point I just was talking about, like. Against what sensors, right? Like stealth kind of evolved out of like, can we make a lower radar cross section for a vehicle? And so it’s not, it’s less detectable. Um. But the modern world, there are so many other sensor modalities that are available to us, especially like space assets and imaging from space and stuff like that. Um, so you gotta look at the problem more holistically. And that’s the super cool thing about DARPA is we look at it from those broader perspectives and sometimes we uncover that the problem is much, much harder than we actually envisioned originally. Um, but I think autonomy, um. I believe autonomy is the secret sauce in the ground domain. I think that it will allow opportunities for. Going beyond like the status quo that you see in Russia, Ukraine, or with first person video drones. Like this idea of scaling and putting multiple vehicles together to do more complicated tasks. It’s akin to, you know, what can I do by myself versus what can you and I do together versus what can you and I plus 10 other people do together? Like, you know, the combinatorics are really exciting about what you can do, and we’re trying to. Solve a foundational problem that enables that kind of conversation to start occurring. So I think autonomy is a key enabler to the leapfrogging of, um, technologies that have been, you know, demonstrated over the last couple conflicts. Uh, whether it’s Nagorno-Karabakh or Russia, Ukraine, or others. Um, it really realizes that there’s something needs to be done that is not easily, um, defeatable. It also gets into what do we think about doing next? Where, you know, our systems in Racer can do quite a few things, but um, there are still edge cases of problems that we don’t have completely solved. You know, how do we make the systems more adaptive and even more quick to react to other things in the environment? 
So it really is an opportunity to explore those new things. Now that we’ve solved one of the problems that has had been a problem up until Racer.
Grayson Brulte: Because what you described in those conflicts are two different sets of terrain. With RACER you’re building a stack to go on everything from, if you want to call it, a mine-clearing vehicle all the way up to an Abrams tank. How do you prepare the autonomy for the different environments? Right now in Russia-Ukraine you have snowy weather; you mentioned Primm, Nevada, where you have desert weather; or go to the Middle East, where there’s conflict now, and it’s desert weather. How do you prepare the autonomy system for those different types of weather environments and terrains they’re going to operate in?
Stuart Young: Yeah, that’s a great question, and one I really wanted to handle in the program, so you’re hitting a sweet spot for me. We started in the Mojave Desert at the National Training Center (NTC) at Fort Irwin, California. I went there because, before we started RACER, a general told me, hey, that’s tough; if you can do it there, you can do it other places. And I’m like, can you? But we started there anyway. Largely we found that the desert was a very geometric problem: lots of rocks, not a lot of vegetation out there, so your semantic classification was a little simpler. We did really well there and learned a lot. Then we went to Camp Roberts, California, for our second experiment, on the central coast of California. Still California, but completely different, as you know: lots of oak trees, lots of rolling hills, lots of vegetation. We learned, okay, now I have to figure out what kind of autonomy I need to operate in this environment, so I could build off what I learned in the Mojave Desert. And then we started understanding how much data we really need to collect and how we adapt. One of the things we learned, which was super exciting for me: when I started the program, we had budgeted to do six experiments, sorry, six data collects, to facilitate our experiments. And then we learned it’s not about collecting data blindly; we need to collect the right kind of data to solve the specific problem. It seems obvious now, maybe, but it wasn’t what we had anticipated at the beginning.
Stuart Young: And of course the cool thing about DARPA is we pivoted very quickly and you know, we didn’t waste any money. We collected the data we needed and then we started asking questions To your point. I’m going to this new environment. How much data do I need to train my models to be able to act in that new environment? And so by the time we got to experiment four and five, we went to Fort Hood, Texas, which is, you know, um, live Oaks and, you know, much more vegetation than, um, in California at the two locations we went there. Then now you start having compressible terrain. You know, you can drive over some little trees, you know, a oak tree that’s, you know, two feet in diameter you can’t drive through. But an oak tree, that’s a sapling you can drive over, um, tall grass. What happens when you have three foot high grass and you can’t see the ground plane? How do you deal with that? Um, you know, the bush is in front of you. Like maybe you want to avoid the bush, if there’s a better alternative, like of no terrain. But what happens if the only thing I have is a bush? Am I in a cul-de-sac or can I actually just push through it like a human might? And so sometimes you have to learn like what are the costs of that compressible terrain? And so all this is a way to say we, we exposed our teams to these different environments and we challenge them to bring the system that they had before. And then we measured how fast could they adapt to that, a new environment.
Stuart Young: So imagine, when we first went to Texas, I said, great, we're going to Texas. You can't collect any data from the Hill Country of Texas; all you can collect is data from other parts of the world. We didn't even tell them we were going to Texas until pretty close to the end, so they wouldn't have had time to go collect there, and I told them they couldn't even collect next door; they had to stay away from that environment entirely. That let us measure how fast we could adapt to those new environments, to really get after your question. And that became the model we used for the last four experiments of RACER: we didn't let them collect in environments they had previously been to, and we kept going to new environments, until the last one, which we did a little differently. We went back to NTC, but we really wanted to understand how fast it takes us to adapt, which is somewhat like what humans do when they go to a new environment. They already know how to drive, but this dirt here is a little softer or a little muddier, a little harder to navigate, and they learn very quickly how to deal with it. Snow, for example: we always dealt with rain and snow in the experiments, to complicate the problem even further. That was one of the big things we did with RACER. We did a lot on automatic adaptation, but there's still a lot of hand tuning required. We think we have some new ideas, and we did solve some problems on how we can automatically adapt. And then it got down to how fast we can adapt and how many human touchpoints are required to adapt.
We want to get this from a bunch of PhDs having to adapt the system to a bunch of forest firefighters or search and rescue personnel or soldiers or Marines being able to adapt it, without requiring someone with amazing expertise to do it. So, long answer, but that's a super exciting thing we really focused on in RACER.
Grayson Brulte: Because if you look at the breakthroughs in reinforcement learning over just the last three months, some of the papers on arXiv and some of the other scientific papers being published, you're seeing technological breakthroughs I've never seen in my lifetime. My wife's like, "Again? There's another breakthrough? Again?" And I'm like, yes, these technical breakthroughs are continuing to happen. And obviously it's very public that there were teams, some from the private sector, some from academia. Was the overarching goal of RACER, or one of the goals, if you want to call it that, to build the RACER brain that you could deploy in these various scenarios?
Stuart Young: Yeah, very much. The program started out largely as an academic exercise. We had the University of Washington, NASA JPL, and Carnegie Mellon as our performers. We were trying to take the best of what was available in academia to get after this problem, because again, we didn't know how to solve it. So we basically provided the best possible vehicle we could to our performers and then let them go after it. Then that started to morph: they began creating companies organically. Out of JPL came Field AI. UW created Overland AI. And this is super exciting, because now you're creating an industry ecosystem that not only the military can tap into, but private capital can get involved in. Both of those companies have raised more than DARPA invested in this. So it's exciting to work not just on the technical problem but also on creating the conditions for industry to grow and continue to build on it. Of course they do that because there are, hopefully, military opportunities at some point, but there are also dual-use applications for this technology. That was a huge part of the plan we wanted to enable.
Grayson Brulte: But at the core of it, it's American ingenuity. You gave them the seed, if you want to call it that, to build these companies. Look at Overland, for example. They went through the program, then they ended up getting money from Steve Cohen's Point72 Ventures, and they're doing very well. Other companies got funded too. I mean, look at the 10,000-pound gorilla that came out of the Grand Challenge: Google's self-driving car program, now known as Waymo. And that's because of you and the men and women at DARPA. If it wasn't for DARPA, there wouldn't be Waymo; I'll stand by that all day long. So obviously the Grand Challenges had rules that were documented and well known in the industry. Now we're going to flip the switch. Did RACER have rules? And if so, what were the rules that were put in place, and why did you put them in place?
Stuart Young: Yeah. So I've already alluded to a few of the rules. The first thing we wanted to do was show that we could meet speeds faster than a manned formation. Just as a data point, we used an M1 tank and how fast it can go, because that's a pacing speed, right? What is the speed we need to go? I don't know, so let's use something. We wanted to go faster than a manned formation, and maybe even go past what an M1 can do in the same type of terrain. That was our speed metric. We also wanted to show that we can do cross-country but aren't limited to it; we can take trails and roads, and so we had faster metrics on those. If there's a trail and I want to go really fast, and that makes sense, I don't need to beat my vehicle up through really wicked terrain if there's a trail I can find. So we had metrics on that, and we found that the robots tended to start looking for trails because they could meet the speed metrics better. Then we had to start saying, well, on some courses you can't go on the trails and on some courses you can, to really get after the problem.
Stuart Young: And then the interventions: how much time, or how many times, did the human have to interact? Whether we flipped the vehicle, crashed into a rock, or the robot just didn't know what to do, we counted those against the robots and the teams. So they basically had to reduce interventions while simultaneously trying to increase speed. Those were the metrics we used for the program. We also had a few other rules, like no maps ahead of time. We would basically just give you a grid coordinate, and they had to go to the next grid coordinate. This makes the problem much harder, because you don't have prior information like "oh, there are trails on the other side of the hill." The robots pretty much just have to figure it out. That was very deliberate, and yes, it made the problem harder, and RACER platforms had to deal with more difficult terrain, but that was very much by design. If we hadn't done that, we would have had robots constantly trying to find lower-cost paths, trails if you will, through the environment.
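The scoring Young describes — beat a pacing speed while driving interventions down, with the clock stopped during human interaction — can be sketched roughly as below. The 40 km/h pacing value and the sample run are illustrative assumptions, not the program's actual thresholds or data.

```python
# Rough sketch of RACER-style run scoring: average speed with intervention
# time excluded, plus an interventions-per-kilometer rate. All numbers
# (pacing speed, sample run) are invented for illustration.

from dataclasses import dataclass

@dataclass
class Run:
    distance_km: float
    total_time_s: float
    intervention_time_s: float  # clock stops while humans interact
    interventions: int          # flips, crashes, "robot stuck" events

PACING_SPEED_KPH = 40.0  # hypothetical stand-in for the manned-formation pace

def average_speed_kph(run: Run) -> float:
    """Average speed over moving time only, since the clock stops on interventions."""
    driving_time_h = (run.total_time_s - run.intervention_time_s) / 3600.0
    return run.distance_km / driving_time_h

def interventions_per_km(run: Run) -> float:
    return run.interventions / run.distance_km

run = Run(distance_km=10.0, total_time_s=1200.0,
          intervention_time_s=300.0, interventions=2)
print(average_speed_kph(run))     # 40.0 km/h over the moving time
print(interventions_per_km(run))  # 0.2
```

The two numbers pull against each other just as described: pushing speed up tends to push the intervention rate up, so a team only "wins" a run by improving both at once.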
Stuart Young: And there had been a lot of work on that in prior DARPA programs from 10, 15 years ago, like PerceptOR and others. In fact, the PerceptOR PM was on my team, so going back, that was another DARPA program in autonomy. We wanted to learn from those lessons and figure out how to address the next technological problem. Sometimes you have to force the problem to be more complicated than it needs to be so you can achieve your overarching goal. I remember we had to redo some courses because the robots were finding trails we didn't anticipate they would find, especially when we ran the courses backwards. We also found terrain that was impossible: on one course, if you ran it the wrong way, you would end up in a canyon that was a cul-de-sac, and basically no one, not even a human, could get out. It was really cool, though, as we progressed, because we started seeing the robots actually doing better than the human baseline drivers in some of the terrain, which I was not expecting quite so early in the program, because humans are pretty good.
Stuart Young: Everything had to be done onboard the robots, so we didn't allow them to use off-board computing. And humans couldn't have more information than we thought was reasonable. When they interacted with the system, the time stopped; they were able to do it safely, and it got counted as an intervention. Then we also learned that maybe there's another metric of interaction: how bad is the intervention? Do I have to send out a recovery vehicle to fix this thing? Do I abandon it because it really crashed? Or does it just need a bump start and a system reset? We learned those lessons, and those are lessons we can transfer to performers, or to future customers developing systems, whether for the Army or the Marines. Collecting data: how much data do we actually need for these systems? How much compute do we actually use? We measured those as well. Those are some of the rules we don't really talk about. Obviously I can't put a supercomputer or a quantum computer in my robot and have something that's affordable, right? So we put things in place that kept one eye toward what we could actually do with existing, reasonable compute. And same with sensors: we used existing sensors. We didn't allow the teams to modify the sensors at the beginning; we forced them all to use the same sensors and the same vehicles, because we really wanted to find out who were the best software performers. Then as the program progressed and we got down to just one company, we started relaxing that a little, because there was no more competition in the program. It was more the vehicle against the terrain, and if you need a better lidar, fine. And of course, over the course of the program, technology keeps coming out.
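The "how bad was the intervention" idea — a bump-start reset versus sending out a recovery vehicle versus abandoning the platform — suggests a severity-weighted score rather than a flat tally. A minimal sketch: the three tiers come from the discussion above, but the weights are invented for illustration.

```python
# Sketch of a severity-weighted intervention score. The tiers mirror the
# discussion (reset / recovery / abandon); the weights are hypothetical.

from enum import Enum

class Severity(Enum):
    RESET = 1      # bump-start and reset the system
    RECOVERY = 5   # send a recovery vehicle out to fix it
    ABANDON = 20   # platform crashed badly enough to write off the run

def weighted_score(events: list[Severity]) -> int:
    """Flat counts treat a reboot and a write-off the same; weighting doesn't."""
    return sum(e.value for e in events)

run_a = [Severity.RESET, Severity.RESET, Severity.RESET]  # 3 minor events
run_b = [Severity.ABANDON]                                # 1 catastrophic event
print(weighted_score(run_a))  # 3
print(weighted_score(run_b))  # 20 -> fewer events, but the worse run
```

Under a flat count, run A looks three times worse than run B; the weighted score captures that one catastrophic failure outweighs several trivial resets.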
Like, we didn't have foundation models at the beginning of the program the way we have them now, and then everybody's like, oh man, we could use foundation models to solve this problem more efficiently. That was never against the rules; it just wasn't the state of the art at the time. So all of those combined to build our rules, if you will.
Grayson Brulte: is awesome. What was the vehicle platform that the teams were allowed to use?
Stuart Young: Yeah, that's actually another exciting point. We used a Polaris RZR S4 Turbo. We had to modify it, so we worked with Polaris as our OEM and they put a drive-by-wire kit on it. Then we got Carnegie Robotics, a private company in Pittsburgh, to modify the vehicle: they put the compute on it for us, and they put the sensors on it. So before we even selected our other performers, we were already working with Carnegie Robotics to help us build the vehicle we needed. We started by looking at RZRs and other Polaris vehicles, other brands, other companies, and the problem was that no vehicle was fast enough to get after the problem we wanted to solve. So we actually had to build our own robots. We looked at using existing hybrid vehicles, and we found we were just going to have to build what we needed for our testing. So we took the route I just explained. We also forced the teams to use the sensors, where they had input into the sensors and could request sensor changes, but those changes would then propagate across all the teams, so every team used the same sensors. Again, I was trying to measure who has the best autonomy, not who can put the best robot together.
Stuart Young: And we learned that from Tim Chung's Subterranean Challenge program, because some of the best-performing teams there were the ones with the best robots, not necessarily the best algorithms. So we really wanted to focus on that problem. Then as the program progressed, we wanted to prove that the vehicle wasn't the main thing; it was about the brain, as you mentioned. So we wanted to put it on a tracked vehicle to show that the autonomy can work not only on a two-ton, side-by-side, utility-style vehicle but also on a 12-ton tracked vehicle that looks kind of like a tank. We put it on a Textron M5; we got four of those and had the algorithms ported onto them. It was more complicated than I thought, for a couple of reasons. The reasons we expected, like low-level controls, obviously we knew would be different. But there were a few perception and planning things we learned, like we would throw tracks if we tried to turn too sharply. So there were a few nuances, but it wasn't really too much of a problem. What it did do is prove our point, which is that the autonomy is somewhat agnostic to the platform. It can go on small vehicles, it can go on big vehicles, it can go on skid-steer vehicles, it can go on Ackermann-steer vehicles, and this is what we really wanted to prove. So that's a little bit of the genesis of the platforms. We got three of the platforms from the Army, so we actually went with those on the heavy side. And then we partnered with the Army on some testing as well, in the RCV program, where they also used some of our Polaris vehicles for some of their experimentation. So it became a bit of an apples-to-apples comparison.
Stuart Young: And the rigor with which we tested, I think, was a model for all of the DOD and other organizations on how to test in this off-road environment. Because the state space is so big, you have to constrain some things in order to get after the questions you want. I think we did a lot on that, and it's another legacy of RACER, which I think will help in the future when ATEC, the Army Test and Evaluation Command, starts testing autonomous systems. They're going to need to know how to test these systems, and you can't just test every possible situation; the world is too complex for that. And when the systems are actually learning, that's another complication, because you can test them the way the Army, or the Marines, or the DOD does now, but the robot is getting better every day, just like you and I get better. We can drive better tomorrow than we did today, theoretically. So how you test a system really means something different. That was another legacy of RACER: we borrowed some things from prior programs and enhanced them, and I think those will be really helpful for organizations testing these kinds of systems in the future as well.
Grayson Brulte: How does the relationship work between the Department of War and the Army with DARPA? Do they come to you with ideas? Do you go to them with ideas and collaborate together? Because reading a lot of the defense news, it seems to me, and this is just my opinion, that the Department of War and the Army, the Marines, the Navy are all moving very fast toward autonomy and automation.
Stuart Young: The way DARPA works, we like to say that we don't require requirements. If you look at the existing acquisition process, money is usually obligated when you have a requirement. Somebody writes a requirement, hey, we need some thing, and it's got to have certain specs, and money is then allocated based on that. Part of the problem is, yes, they want to move fast, but part of this endeavor in autonomy is to figure out what autonomy is good for, so they don't necessarily write the requirements with all of the information. DARPA comes at it from a different angle: what's possible? What can we do technologically? We're not burdened by what someone said we have to do. This interaction is really important, because we can de-risk these hard technological challenges and allow the services to say, hey, wait, I'm only asking for this, and you're showing you can do that; maybe we should make our requirements more ambitious. So there's this interplay between the services and DARPA. We inform them; we're not beholden to their requirements, because our job is to figure out what is possible. But that is definitely informative to them as they develop their requirements, their acquisitions, and how they spend their money. Part of doing that is to help them accelerate the quality of the products they can have, and we can take more risk than they can. So we use that opportunity to push the boundaries, sometimes beyond what the limits are, and that in turn informs them on what's possible. That's generally the way it works. Sometimes we want to add features to their existing programs, but we don't really wait for the programs. We're asking, how do we create new industry? How do we create new possible programs? That kind of thing.
Grayson Brulte: DARPA creates the impossible; you build the future. And staying on the military theme, I want to ask you this question, because in November 2025 the 11th Armored Cavalry Regiment used RACER technology as part of the opposition force in live-force training at the National Training Center. How did that go? And the big question is, how did you build trust with the soldiers using this technology?
Stuart Young: Yeah. One of the things we tried to do when we did that with them: we recognized that we had, to some extent, largely solved this autonomy problem. Part of getting to the next, higher level as a DARPA PM is: okay, great, Stuart, you solved this problem, but what is it good for? And "what is it good for" translates into how it can make the soldiers' lives better, or the Marines' lives better. How can they accomplish their tasks more effectively, more efficiently, more safely, whatever that might be? So we have to go from working on this program of autonomy to asking what autonomy is good for. The reason we did this experiment with them was to facilitate that transition conversation, which goes back to your previous question: how do we lead the witness, if you will? Let them uncover what is possible. They may not do it on their own, so maybe we can work with them, give them a new tool, and see how they use that new tool to solve their problem more effectively.
Stuart Young: So here's one of the cool things, and this isn't the first time they've had autonomous systems, but my perspective on the systems we provided was that they were very effective. There were a few lessons I learned from this. One is that they were super excited that the technology worked. They seemed to embrace it, but they also fell back on the ways they had done things in the past. As an example: they could have told the robot to go to a point, say 12 kilometers away, and set up an observation point. But they still wanted much more positive control over where the robot was going, not trusting the robot to pick its own route. That's part of why we did the experiment: so they could experience it and gain trust, and so we could uncover how they're using it and what the next technical problems are that we need to address. How can we make, borrowing from Apple, the user experience more pleasant for them? Just because I have a cell phone doesn't mean I know everything it can do, and it's supposed to make my life better, right? Similarly here, the autonomy should make their life better, but does it? We've got to really test that, which is the next step. And that also gets into, instead of using scientific autonomy-speak, now I'm talking in mission terms: how does this make their mission of setting up an observation point, or doing reconnaissance, or whatever the task might be, better? One of the things we found was that they were able to send the robots much further, because they were unmanned; they were willing to send them much further than they would have if they had had to send a manned formation.
Stuart Young: And I think it was a positive experience for us in learning; it validated some of my hypotheses. But they also uncovered other ways they might want to use the system, which was really great, because now they can use their creativity and their expertise as warfighters to use this tool in different ways. Ultimately, they didn't want to expose themselves to risk; they were able to use the robots and accomplish their objective with lower risk to the human. It was really intended to be just the first domino we wanted to knock over, not a be-all, end-all. It was meant to fit into the Army's mindset: they have these transformation-in-contact, or TiC, brigades where they're trying out technology and bringing it in. And the 11th ACR, they are the opposing force at the National Training Center, so they have the mission to uncover new technologies that our adversaries might use, use them against our forces, and see how our forces react. It really allowed this force-on-force innovation cycle to begin, which was super exciting for us. It's a campaign, not a one-time event, but it was a really great starting point, and we hope the Army will pick it up from there and continue it, which I believe they will.
Grayson Brulte: Because at the end of the day, autonomy and automation are going to save lives, especially when you're in theater. Do you imagine a scenario where a commander is operating a platoon of these autonomous ground vehicles in the future, especially for forward missions?
Stuart Young: A hundred percent. That's why I do this. I think that's just the progression we'll go after, as we said when we started this conversation today. That is how you will solve a lot of these problems: it's not one person to one robot. The robots have the mobility to get somewhere, but they also have a payload that does something for you, right? So it's not just "can it get there?" The question is, can it do something useful for you, and what is that useful thing? You can think of it like football: each individual player does their own thing, but the beauty is when they do it together. The coach calls a play and is now impacting the behaviors of 11 people on the field, to use that metaphor as we get into Super Bowl weekend here. The idea is that you can interact with the system at a higher level of abstraction. So absolutely, my goal is: maybe one person controls a couple of vehicles, then a whole platoon of vehicles. Then maybe a couple of us: you could be controlling a platoon and I could be controlling a platoon, and I say, hey, I'm going to do this and you're going to do that, and now our platoons are working together. That just continues to scale up, and your operational effectiveness can be enhanced even further.
Grayson Brulte: And DARPA will continue to have a positive impact on both the military sector and the civilian sector. Stuart, the RACER program has now concluded. What are you hoping the lasting legacy of this program will be?
Stuart Young: Yeah, so I'm hoping the lasting legacy will ultimately be that we accomplished some significant technological objectives on the path toward fielding robots for our soldiers, so we can save our soldiers' and Marines' lives on the ground and make their lives better. War is not a pleasant experience, but can we make it safer for our soldiers? I also think there are a lot of opportunities in what we've done to better humanity in general through dual-use capabilities. It tears me apart when you have search and rescue missions and you can't get in there with existing tools because it's too risky. Now you might be able to change the calculus of how you get in there. So I think ultimately it's fielding robots that can make lives better, whether in search and rescue or mining, make things safer for humans, and also allow our soldiers and Marines to be more effective and come home.
Grayson Brulte: That's well said. It's 2026: foundation models are scaling, technology is scaling, and DARPA, as I said earlier, is building the future. So what's next for Stuart, and what's next for DARPA and autonomy?
Stuart Young: Well, I'll say that what's next for me is that I'll be leaving DARPA soon as my six-year clock ends, probably going into industry and continuing to work in the technology space. What's next for DARPA is always up to the next program manager, but I think there's a lot of opportunity now that we've solved this problem of platforms being able to move. Now you can get into higher levels of abstraction. We like to say there's movement, and then there's maneuver. Maneuver on the battlefield is more about what all these things can do for you. I've alluded to that: what can this technology we've uncorked do for us now? How does it make our ability to do contested logistics more effective, or safer, or over longer distances? How do we enable our soldiers to accomplish missions better than they previously could, and possibly more safely? I think that's where DARPA is going to go. I think they're going to focus, even in the NOMARS program and other autonomy domains like air and ACE and prior air programs, on the question of what the autonomy is good for. I think that's the next frontier for us in the warfighting domain, but it also has the dual uses I've emphasized quite a bit.
Grayson Brulte: DARPA is building the future and making the world a better place at the same time. The future is bright, the future is autonomous, the future is DARPA. Stuart, sir, thank you so much for coming on The Road to Autonomy; it was an absolute honor to have you here. And if you go into the private sector, we wish you well, because you played a key role in building the future. So thank you so much for coming on The Road to Autonomy today.
Stuart Young: Thank you so much. It was a pleasure, I really appreciate the invitation, and it was wonderful to talk to you and your audience. So thank you very much.
