Qualcomm’s Snapdragon - The Road to Autonomy

Transcript: Qualcomm’s Snapdragon Autonomy Strategy

Executive Summary

In this episode of The Road to Autonomy podcast, Grayson Brulte sits down with Anshuman Saxena of Qualcomm to explore the company’s transformative impact on the automotive industry. Anshuman details the Snapdragon Digital Chassis, a comprehensive platform unifying the digital cockpit, connectivity, cloud services, and the Snapdragon Ride system for ADAS and autonomous driving. He discusses how a safety-first, scalable approach, highlighted by a major co-development partnership with BMW, is driving Qualcomm’s significant growth and shaping the future of in-vehicle experiences and autonomous capabilities.

Key Topics & Timestamps

[0:00] Qualcomm’s Approach to Autonomy

Qualcomm’s automotive journey started in 2002 with a focus on telematics for safety and emergency services. Today, its entire approach to autonomy is built on the central pillar of safety, which serves as the foundation for all its hardware and software development.

[01:10] The Snapdragon Digital Chassis

The Snapdragon Digital Chassis is Qualcomm’s all-encompassing platform for a car’s digital systems and electronics. A critical component is Snapdragon Ride, which provides solutions ranging from Advanced Driver-Assist Systems (ADAS) to higher levels of automated driving. The entire platform is built on a safety-first development process that adheres to rigorous regulatory and certification requirements.

[05:10] ADAS for Elderly Drivers & User Adoption

Automated driving technology offers a significant opportunity to assist elderly drivers who may have slower reaction times or impaired vision. Practical features like automated parking and driveway assistance can add immense value and comfort. To encourage user adoption, the technology must be extremely reliable and avoid false alerts. Building trust requires a seamless user interface where the driver can easily interact with and understand the system through the digital cockpit.

[11:50] The Evolution of the Digital Cockpit

The in-vehicle experience is shifting from analog dials to large, pillar-to-pillar digital displays. The next frontier is making AI the new User Interface (UI), enabling natural voice commands to control vehicle functions. To improve latency and privacy, Qualcomm is bringing these powerful natural language and AI capabilities directly onto the vehicle’s hardware, reducing the reliance on a constant cloud connection.

[17:20] The Role of Adreno GPUs in Vehicles

Qualcomm’s Adreno GPUs are already a core component in many vehicles, powering the graphics-intensive Snapdragon cockpit solutions for gaming, 4K/8K displays, and other visual experiences. For the complex tasks of automated driving, Qualcomm uses a “right tool for the job” approach, utilizing a mix of high-performance CPUs, power-efficient Hexagon NPUs for AI processing, and Adreno GPUs for parallel processing to achieve optimal system performance.

[24:10] Co-Developing with BMW

Qualcomm and BMW have a public partnership to co-develop an ADAS system. Technology teams from both companies are working together on the system design, which is built upon Snapdragon Ride processors and the Snapdragon Ride Vision Stack. This solution is scheduled to premiere in BMW’s 2026 model year vehicles, and the resulting platform can be offered by Qualcomm to other carmakers.

[26:20] Automotive Revenue Growth

Qualcomm’s automotive business is seeing impressive growth, with revenues reaching $959 million in Q2 2025, a 59% increase year-over-year. This growth is driven by the entire Snapdragon Digital Chassis platform, including telematics, digital cockpit, and now increasingly, ADAS solutions. The company is on track to meet its financial goals of $4 billion in revenue by FY26 and $8 billion by FY29.

[29:10] Qualcomm’s Vision Stack

Qualcomm has developed a complete, in-house “vision stack,” a system that uses only cameras for perception. This active safety solution has been validated in 60 countries and will launch in BMW vehicles before expanding to 100 countries. While powerful on its own, the vision system is designed to be fused with other sensors to enhance safety and capability, especially in difficult conditions.

[35:15] Building Driver Trust with System Reasoning

Allowing the vehicle to explain the “why” behind its decisions is a crucial feature for building driver confidence and trust. When the car communicates its reasoning—for instance, changing lanes because an adjacent truck was too close—it helps the driver feel comfortable and understand the system’s intelligence. This must be carefully balanced with safety measures to prevent drivers from becoming complacent and ensure they remain attentive as required.

[40:30] Qualcomm’s Scalable Chip Design Strategy

Qualcomm’s chip design strategy starts with a comprehensive understanding of the entire automated driving system’s requirements. The company develops a scalable family of processors, enabling automakers to deploy a common software architecture across their full range of vehicles, from entry-level to premium. This approach leverages Qualcomm’s vast R&D investments in other areas, such as mobile, to bring the most cutting-edge and efficient technology to the automotive market.

Subscribe to This Week in The Autonomy Economy™

Join institutional investors and industry leaders who read This Week in The Autonomy Economy every Sunday. Each edition delivers exclusive insight and commentary on the autonomy economy, helping you stay ahead of what’s next. 

Full Episode Transcript

Grayson Brulte: Anshuman, Qualcomm’s a great company. We were talking offline; ever since you had that first CES presentation around automotive, you caught my attention. I’ve been following the company for a long time. You continue to make headlines in and around autonomy. How is Qualcomm approaching autonomy?

Anshuman Saxena: First and foremost, thanks Grayson for giving me an opportunity to talk about Qualcomm and Qualcomm’s automotive and autonomy solutions. Always good to talk to you, so let’s dive right in. Qualcomm, as you know, is known for our communications solutions, the modems, the connectivity. That’s actually how we started the automotive play as well, back in 2002. Like it or not, our first need for anything automotive was to get to safety and emergency support, eCall. And even before that we had OmniTRACS, which was about building a safety solution primarily based on telematics. Since then, over the last 23 years, we have grown into a full-blown Snapdragon Digital Chassis solution, where we talk about every piece of electronics and the digital footprint in the car, so that automakers can bring much newer experiences to their consumers and users. Out of that, one important pillar is Snapdragon Ride, which is what you asked about: autonomy and how we see autonomy solutions. If you look at it, the central piece of anything to do with autonomy is first ensuring safety. Safety is the central piece of everything that we do, from our SoC development to our stack and systems development. That’s front and center of any deliverable. So for the last ten-plus years we have been putting in place, brick by brick, the whole foundation of what is now called Snapdragon Ride: ADAS, advanced driver-assist systems, all the way to automated driving solutions. What does that mean? When we thought about it, the SoCs have to have a foundation of safety development. And this safety development does not happen such that I decide today and deliver tomorrow. It is a process. It’s a certification, a regulatory requirement. Outside assessors come and say, yes, this is good; that’s when the automakers will say, yes, we can take it to market. Then comes the question of the system solution: why are you building an automated driving solution? Again, back to the first point, which is ensuring that you meet the regulatory requirements for these automated driving systems that work across the globe, in hundreds of countries. Can you ensure the regulatory requirements of all of them? Emergency braking has to happen, irrespective of whether it is the United States or Europe or somewhere in India or Southeast Asia. All of them have to have their regulatory requirements met. A big exercise, you can imagine: you have to collect a lot of data, test it extensively, and ensure that it works every time. And then it comes to the next levels, which are called comfort driving. In SAE terms, it’s called the levels of autonomy: Level 2, Level 3, Level 4, Level 5. You might have heard all those names. Most passenger vehicles are in the tier where a lot more comfort driving is coming in. But Level 3 onwards, you can say eyes off, and that’s not where the auto industry, from a passenger vehicle perspective, has reached, beyond one or two automakers doing it. Our focus is: bring safety, bring comfort driving for these Level 2 tiers, expanding the use cases.
So that’s why you might hear about Level 2+ and 2++ and 2+++, depending upon whom you talk to. That is basically saying: can I do highway navigation in an autopilot kind of mode, where I’m hands off and I want to drive from point A to point B? From the moment I ramp onto the freeway to the ramp off, and in between, if I have to do interconnects between the interchanges on the freeways, or whatever routing exits and entries, it should all be automated. Then expanding to urban navigation on autopilot. I mean, say I drive from home to work. That kind of use case is a thing I do at least twice a day, right? If I can bring comfort into that driving and take my hands off, it’s a big deal. That’s the urban navigation on autopilot kind of use case. So that’s how we are looking at it: where is the biggest value coming in, at a reasonable cost and a reasonable footprint from a power-performance standpoint, and how do we deliver it to every possible automaker? That’s what Qualcomm is doing from an automated driving standpoint.

Grayson Brulte: You’re right about the value aspects, and I want to ask you about one aspect of the value chain that I’ve been having a lot of conversations about. I live in Florida; there are a lot of elderly people, and you hear the conversations: oh, I gotta talk to mom, I gotta talk to dad, we have to have the keys conversation. I’m beginning to look into this and speak to elderly individuals, and when I say that, I mean 77, 78 and older. Some have bad vision or bad hearing, and they’re asking questions like: well, if the car had some form of automation, do you think it could find where I need to get home? Do you think I could have a better chance of turning into my driveway, perhaps? You’re starting to get these questions. Do you see that potentially as a growing market for these technologies, and, depending on how advanced the technologies are, potentially a growth market?

Anshuman Saxena: Wonderful question, and that’s exactly how we should look at it, and I’m glad many automakers look at it that way. I’ll give an example. We are coming out of a Snapdragon automotive day in India, and this example was discussed. It has nothing to do with driving assistance, just a technology standpoint: the sensors in the car, lots of sensors. Mahindra, which is an automaker in India and a great customer of ours, their leadership team was explaining to us: in their car, they have these in-cabin sensors. An elderly lady who had to rush to a hospital was getting into that car, and her daughter, who lives in the US, had to make that phone call. Through the in-cabin sensing and the overall audio-video system they had put in as a feature, because the car allows it, the technology could do it. Of course you can do it on the phone, but you had the whole experience, and the daughter in the States was able to connect and calm her down through the whole journey to the hospital. That’s one simple example, but they were so happy about what a carmaker had brought in from a tech perspective. Now, coming to your question, which is a great question: you might not need to drive in a hands-off, maybe eyes-off, mode all the time. That’s the ultimate technology solution, but it’s not really the first thing people are looking for. The kind of problem statements from the elderly people you talked about: tight parking, like California’s tight parking spots, and I don’t know how bad the parking is where you are. Can I solve parking the car in that parking spot in an automated way while I’m sitting in the car? The technology exists. The sensors exist. The computing exists. Can I find the parking spot and park? Then the driveway thing: that’s something we do every day, we drive into our driveways every day. Can I help elderly people with, again, a tight spot, driving into the garage? This is a known thing; train it over time and, going forward, take it all the way in, or be an assistant. You know, elderly people might not be able to react with the same latency as you and I can. The car has sensors that are always active, and a sensor does not have an age; it doesn’t matter whether it’s a 20-year-old driving or a 75-year-old. It’s a sensor; it’s always going to be there. So those are the kinds of things that make a difference, and that’s what we are focusing on, keeping the cost contained. The safety is required for your kid who just started driving and is going to college. They need the same level of safety as, like you said, the 75-year-old who is driving and might not be able to react the way they did when they were younger. They need the safety. This is what we are thinking about even when the cost of the car might not allow putting in a lot of sensors for automated driving. Safety is still super important.

Grayson Brulte: You need the safety, you’re right. Because, to stay on the elderly, they might not be thinking clearly, or the reflexes might not be quick enough to react, and that’s where sensors come in. I was doing some research the other day and I was shocked at the number of accidents, or crashes if you want to use that term, that happen in grocery store parking lots on a regular basis; you know, the bumps and stuff. The automated parking, as you clearly stated, can solve that, and then you can give people more mobility. You’re building these technologies, but, and this goes to your customers, I would love your view: how do you get the customers to want to use the technology, to adapt and have that willingness to use it?

Anshuman Saxena: That’s another wonderful question. Put the technology in, and if it keeps beeping every now and then with a false alert, the obvious reaction, whoever you are, is: switch it off. The tech has to be aligned with user acceptability, the user interaction. So first and foremost it has to be robust; it should work 99.999-whatever percent of the time. And by the way, that’s the kind of requirement when you talk about emergency braking: it is not okay to run at 99 percent, it has to be multiple nines over 99. But the other important thing is: how do you interact with the user? How do you tell the user what is happening? For example, even with automated parking, you would like to see how the car is moving around and getting into the parking spot, whether it is perpendicular or parallel parking; these are tight maneuvers, and you should be able to take over if there’s a real need. That is where the interaction with the displays and the cockpit starts coming in. So think about it: the sensors are doing their job, and the tech has to be, like I said, very robust, but there still has to be that interaction piece, where, whatever age we are, we have an interaction with the car. That’s the reason for the so-called Snapdragon Digital Chassis: you are bringing in the driver assistance systems, connecting them to the user experience, the HMI, in the cockpit systems, and creating an experience that starts making a difference versus a button-press kind of thing.
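
The gap between 99 percent and multiple nines is easy to underestimate, so here is a minimal back-of-the-envelope sketch in Python. The driving hours and event rate are hypothetical numbers chosen only to make the arithmetic concrete; they are not Qualcomm requirements.

```python
# Illustrative only: why "multiple nines" matters for false alerts.
# Assumes a hypothetical system that evaluates braking-relevant events
# at some rate while driving; all figures below are invented for the
# arithmetic, not Qualcomm targets.

HOURS_DRIVEN_PER_YEAR = 300   # typical private-car usage (assumption)
EVENTS_PER_HOUR = 120         # hypothetical braking-relevant evaluations

def false_alerts_per_year(reliability: float) -> float:
    """Expected spurious alerts per year given per-event reliability."""
    events = HOURS_DRIVEN_PER_YEAR * EVENTS_PER_HOUR
    return events * (1.0 - reliability)

for r in (0.99, 0.999, 0.99999):
    print(f"reliability {r:.5f}: ~{false_alerts_per_year(r):,.1f} false alerts/year")

# reliability 0.99000: ~360.0 false alerts/year -> driver switches it off
# reliability 0.99999: ~0.4 false alerts/year   -> driver keeps trusting it
```

Under these assumed numbers, 99 percent reliability means roughly a false alert per drive, while five nines means less than one per year: the difference between a feature that gets switched off and one that earns trust.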

Grayson Brulte: Let’s unpack the digital cockpit, the in-vehicle experience. I’ve had voice in several of my cars, and in the new one I have now the voice works really, really well, but it’s limited. Does the ultimate tool inside the vehicle become voice as this technology advances?

Anshuman Saxena: As a user, before I become a tech guy: you and I were used to so many buttons. Now buttons are going away onto screens, and the beauty of a touchscreen-based user interface is how easy things are to reach. For example, I go to the car wash, and cars are smart enough now; they all have these E/E architectures that can control the mirrors electronically, the windows electronically, and all. But I have to go and figure out: can I go into that menu and set car wash mode before I put the car into neutral and let it drive in? Think about it: put the car in car wash mode. That’s the interaction. That’s why this “AI is the new UI” concept comes in. It should be that interactive: instead of digging for a specific mode, can you just say, move it into car wash mode? Or can the car suggest it to you: I’m in the car wash area, I’m going to turn on car wash mode, unless you, Grayson, don’t want it.

Grayson Brulte: You’re right. The car should simply know: when I get home, the mirrors fold and it pulls into the driveway. So if I’m at the car wash, it should just automatically know.

Anshuman Saxena: Exactly. Exactly. So these are the kinds of things. Now, bringing in the tech part of it, you asked what is happening, and this is where the cockpit is moving. Originally there used to be meters, analog dials, et cetera. We all lived through those, and those small bright amber lights: oh, the temperature’s increasing, or something else is going on. With EVs, those things have automatically gone, but new kinds of things have come in; you and I have had to learn what is relevant for an EV versus a combustion-engine car, et cetera. If you look at it now, the meters have gone; the analog dials are not there anymore. It’s one display. Actually, many cars have a pillar-to-pillar display where you can move content around, say from YouTube playing on the center screen to the right screen, because the passenger wants it. All those concepts are there. Now comes the next level, which is voice interaction. People have tried voice interaction before; it’s not new. A 2015 car probably had voice interaction, but it was very raw, it was not effective, and if you had to try it 15 times, you would say, you know what, let me just press the button and get going. That has all gone away because, one, a lot of data exists now, so you can train the models on a lot of data. Second, the whole advancement in AI over the last three or four years, when we started talking about generative AI much more openly, with OpenAI coming in and whatnot. The elements are there: you can interact with phones or whatever using your natural language, in any language. So why would the car be any different? We have brought all of that into the cockpit. Now comes the other logic. Today, phones connect to the cloud and do a lot of the interaction based on the cloud. There is going to be latency, and there is going to be a question around privacy. Cars have a lot of information in the cockpit systems, and the high-performance compute systems have a lot of capability in the car. So we are bringing these natural language interactions right into the car as much as possible. Of course, you still need to go to the cloud sometimes, because you cannot replicate the cloud’s data into the car, but whatever can be serviced inside the car as a personalized solution is done inside the car, and then it goes out to the cloud as needed. That’s what we are developing as part of our digital cockpit solutions in the Snapdragon Digital Chassis.
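
That on-device-first split is straightforward to picture in code. Below is a minimal sketch of the routing idea, assuming a hypothetical intent classifier and handler names; none of this is Qualcomm’s actual cockpit API.

```python
# Hypothetical sketch of on-device-first voice handling: personalized,
# latency- and privacy-sensitive requests stay in the car; only requests
# that genuinely need cloud-scale data leave the vehicle.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    intent: str             # assumed output of an on-device language model
    needs_world_data: bool  # e.g. live traffic, web search

def handle_on_device(u: Utterance) -> str:
    # Serviced locally: low latency, data never leaves the vehicle.
    return f"[local] executing '{u.intent}' for: {u.text}"

def handle_in_cloud(u: Utterance) -> str:
    # Escalated: the car cannot replicate cloud-scale data on board.
    return f"[cloud] answering '{u.intent}' for: {u.text}"

def route(u: Utterance) -> str:
    return handle_in_cloud(u) if u.needs_world_data else handle_on_device(u)

print(route(Utterance("put the car in car wash mode", "vehicle.mode.set", False)))
print(route(Utterance("is there traffic on I-95?", "traffic.query", True)))
```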

Grayson Brulte: Is the Snapdragon Digital Chassis an overarching, if you want to call it, holding company or holding division for all of the assets that Qualcomm views going into the future of automotive?

Anshuman Saxena: In a way, you can say that. It’s our umbrella for everything electronics, everything digital, the digital footprint of individuals or the carmakers that we bring into the car. Of course, it’s not a holding company or anything of that sort. It’s our view of what a car’s digital chassis looks like: you have a lot of computing in the cockpit, which is where the users interact; a lot of computing and software around the safety systems, which is where the Ride solutions come in; connected vehicles, because they have to connect to the cloud, which is again a big compute and connectivity element in the car; and then of course, at the back end, how do you bring in the experiences, which is the cloud and services part of what we do. So, end to end: how does an automaker bring a new digital footprint for their users into the car? That’s what we do as part of the Snapdragon Digital Chassis.

Grayson Brulte: Really great technology. Are the Adreno GPUs going into vehicles today, or could they go into vehicles in the future?

Anshuman Saxena: They are already in the vehicles; for every Snapdragon that we ship out, GPUs are a core part. I feel good and proud to say that a lot of vehicles are driven by Snapdragon cockpit solutions, and all of them excel because of the GPU capabilities we bring to the cars. Every experience, if you or your kids like to play games in the car, is driven by the Adreno GPU. Moving content around, the 4K/8K displays that are in cars nowadays, driven by the Adreno GPU. The pillar-to-pillar display, where you can do the tiling and move content around: a lot of it is driven by the Adreno GPU that Snapdragon cockpit solutions have, bringing that level of displays and shaders into the mix.

Grayson Brulte: Could you eventually see the Adreno GPUs powering a full SAE Level 4 autonomous driving system?

Anshuman Saxena: That’s an interesting question. Two parts. One: SAE Level 4 is not just a compute-element problem, it’s a whole system concept. Why am I saying that? The Level 4 expectation is no driver in the car. It’s a driverless, eyes-off, hands-off, mind-off kind of solution, which basically means the car has to have the capability to do the maneuvers, come what may, in a very predictable fashion. So that’s a whole top-down safety concept; safety of the intended functionality kinds of questions come in: what will happen in this scenario, that scenario, et cetera. So it’s way more than just the compute element, which could be a GPU or anything else; it’s a whole system concept. Can our Snapdragon solutions be part of those kinds of systems? The answer is very much yes, because we design them for all of the safety needs. But the safety concepts basically mean sometimes you have to have redundant paths; you have to have a secondary path available, which automatically means you might have multiple computers, multiple connected solutions, et cetera. So that’s a much deeper discussion. Can Snapdragons be part of Level 4 systems? Of course, yes. Now, coming to your first question about Adreno GPUs, and I will make it a little simpler, if you agree: are Adreno GPUs powering driver assistance and automated driving systems, rather than talking about Level 4 specifically? The answer is yes, but GPUs for us are not the only solution for all of the intelligent computing you need in the car. Why? Qualcomm brings in a lot of different, varied processing blocks. We have really high performance; we talked about these high-tier SoCs called Snapdragon Ride Elite. We announced those at Snapdragon Summit last year, if you followed us. In those we brought in high-performance CPUs, the central processing unit cores, the Oryon CPU, the homegrown Qualcomm CPU, to deliver a really high-performance single- and multi-threaded computing element. That is very important for decision-making trees that have to be implemented in a latency-critical fashion. You don’t want to put those on a GPU; it’s not the right place. CPUs are the right place. Then Qualcomm has what we call the Hexagon NPU, multiple of them in our Ride Elite platform. Why? Because of all the sensors we are talking about now; cars have almost 40 sensors, different kinds of sensors coming in. All that sensor processing has to happen in an intelligent way, data-driven AI processing. Can you do it on Qualcomm Adreno GPUs? Of course, yes. Is that the right place to do all AI processing? We have developed these Hexagon NPUs to do more performant, more optimized AI processing, so that the power consumption is smaller and your back-and-forth data movement is lower, while the Adreno GPUs complement that AI processing really well for the kind of parallel processing they are designed for. So, in a roundabout way, what I’m trying to tell you is: instead of saying the Adreno GPU is the solution for everything, Qualcomm focuses on the right solution for each part of the system required for automated driving. Sensor processing happens on a sensor processor.
Like I said: the central processing unit for sequential processing elements, GPUs for a lot of parallel processing, and a lot of AI processing on the Hexagon NPU. And there is a big orchestration mechanism that makes it simpler to deploy multiple things in parallel on the different blocks. That’s how we operate automated driving systems.
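
As a mental model, this “right tool for the job” split can be pictured as a dispatch table from workload type to compute block. This is a minimal, hypothetical sketch; the workload names and the mapping are illustrative, not the Snapdragon Ride SDK.

```python
# Hypothetical sketch: route each automated-driving workload to the
# compute block it suits best, per the discussion above.

from enum import Enum, auto

class Block(Enum):
    CPU = auto()   # latency-critical, branchy decision logic
    GPU = auto()   # wide parallel work (image pre-processing, rendering)
    NPU = auto()   # quantized neural-network inference at low power

WORKLOAD_TO_BLOCK = {
    "behavior_planning":   Block.CPU,  # sequential decision trees
    "image_preprocessing": Block.GPU,  # per-pixel parallel math
    "object_detection":    Block.NPU,  # CNN/transformer inference
    "sensor_fusion":       Block.NPU,
}

def dispatch(workload: str) -> Block:
    """Pick a compute block for a workload; default to CPU if unknown."""
    return WORKLOAD_TO_BLOCK.get(workload, Block.CPU)

for w in ("behavior_planning", "object_detection", "image_preprocessing"):
    print(f"{w:>20} -> {dispatch(w).name}")
```

In a real system an orchestration layer would schedule these in parallel and manage data movement between blocks; the table only captures the placement idea.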

Grayson Brulte: Is it a fair statement to say that when you’re working with your partners, the overarching goal is to optimize for performance and safety?

Anshuman Saxena: Absolutely, yes. Safety, performance, security, power consumption, cost: all of these are equally important, and each one is a real, critical need that cannot be compromised. Meaning, you cannot bring in an umpteen level of performance and raise the cost to where it’s not affordable; what is the point of doing that? And you cannot throw in a really power-hungry, PC-like system and say, yes, it works, when it cannot really be deployed because it brings the range down to half. Why would you add ADAS by taking your EV’s range to half? That doesn’t make sense. And safety cannot be compromised. So all the things I just mentioned are the central piece of everything required for deploying an automated driving system at scale.

Grayson Brulte: Let’s go back to the whiteboard. You have a very public partnership with BMW where you’re co-developing an ADAS system. Do you and the teams, the BMW team and the Qualcomm team, sit down at a whiteboard, take all the possibilities of what could be, and then narrow it down to what makes the most sense? Or how does that process work?

Anshuman Saxena: It’s, again, a great question; glad that you asked. So, the BMW relationship with Qualcomm on this ADAS system: of course, BMW has been a great customer of ours for much longer, but the ADAS system started, I think, in 2021, and we formally started talking about it publicly in the 2022 timeframe. Qualcomm and BMW teams, the technology teams, worked together to develop this system, which is now going to be launching and premiering in the BMW Neue Klasse, the 2026 BMW vehicles. The overall engagement is where we look at the system design, look at what we call ODDs, operational design domains, or the functions that we have to deploy and develop. BMW has a view of what their users need; Qualcomm, as a technology company, contributes and plans how the implementation and the system design can be done. We worked together for the last three to four years to bring it onto the roads: built on top of Snapdragon Ride processors, using the Snapdragon Ride Vision Stack, and the collaborative work between BMW and Qualcomm to deploy it on BMW vehicles. Now, once that is done, the same solution is available, because it’s co-developed between Qualcomm and BMW, to take to many other automakers, where Qualcomm can bring the same level of experience to other vehicles based on those automakers’ requirements.

Grayson Brulte: I want to highlight something here from your Q2 2025 earnings call. The Snapdragon Digital Chassis is growing; it’s growing bonkers. Your automotive revenues reached $959 million in Q2 2025, up 59% year over year. Impressive growth. Is this an example of you building a platform to continue that automotive growth?

Anshuman Saxena: So, the Snapdragon Digital Chassis. As I said, we started this journey 23 years back. Telematics has been our core starting point; telematics includes all of our connected-car solutions, with transitions onto the digital cockpit, and now Snapdragon Ride, as the ADAS solutions start to contribute to the growth of the automotive business. We had committed to a $4 billion revenue target for automotive in fiscal year 2026, and an $8 billion target in FY29. That is something we are tracking every day to ensure we have a path to grow. And this whole BMW engagement, where we are bringing in this whole platform so that we can take it to multiple automakers, is a core component of our growth path for the ADAS business contributing to the overall automotive business income.
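
For context on what those targets imply, a quick back-of-the-envelope in Python: doubling from $4 billion in FY26 to $8 billion in FY29 works out to roughly 26% compound annual growth. The targets are from the conversation; the growth rate is just derived arithmetic.

```python
# Implied growth rate between the stated targets (figures from the
# conversation; the CAGR is derived here, not a Qualcomm statement).

fy26_target_b = 4.0   # $B, fiscal year 2026
fy29_target_b = 8.0   # $B, fiscal year 2029
years = 3

cagr = (fy29_target_b / fy26_target_b) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~26.0%
```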

Grayson Brulte: Jumping forward into the future, do you see a point, and I’m asking this because your company’s getting a lot of buzz in the autonomous driving developer community, seemingly every week somebody’s mentioning Qualcomm, and I’m very thankful you’re here. Do you see, at some point in the future, and I don’t need you to give a timeline on this, Qualcomm developing a full autonomous driving platform, like a robotaxi platform, that you then go out and commercialize?

Anshuman Saxena: Great question again. Right now our focus, to be very, very clear, is up to what we call Level 3 systems, where we can go eyes off, hands off, in the passenger vehicle segment. Of course, the technology we are developing, both the software stack and the SoCs, can scale to, or contribute components into, robotaxi solutions as well. But right now our focus is to make sure we can bring safety and comfort driving to every possible carmaker on earth, to bring it to the users. That’s what we are focusing on at this point in time. And by the way, just to extend on that point: with ADAS and automated driving systems, we are starting to push the boundaries of what is called physical AI, with robots of course being the great ultimate goal of completely automated physical intelligence. For us, everything we do is becoming the blueprint for taking intelligence and AI to more vectors, including robotics, and Qualcomm leadership has been talking about that. So the overall idea of AI is getting driven by the ADAS and automated driving use cases inside Qualcomm, and now more toward how we can use some of it in the robotics world too.

Grayson Brulte: Robotics is interesting, because you and I know that opens up a whole logistics opportunity, warehouses and all those opportunities, and that’s growing. You mentioned something earlier that I want to dive into here. You said Vision Stack. Has Qualcomm developed a vision-only system internally?

Anshuman Saxena: Yes, yes, we have developed a complete vision-only system. Look at it this way: traditional systems have had a front camera, a vision camera, and some sensors to do the fusion and get to the minimum regulatory active safety solutions. Qualcomm has developed a full vision stack for active safety solutions, validated in 60 countries already as we launch the BMW vehicles, and it will be launching in a hundred countries next year. That’s the scope of the vision perception solution we have been bringing, which is getting deployed in the BMW vehicles first, and then other OEMs will follow very soon.

Grayson Brulte: Could we see that vision-only system scale to higher levels of automation as the technology advances and your data set gets more robust?

Anshuman Saxena: Yes, yes. That is how it continuously grows. See, you always need multiple sensors to get a better result about the things around you. Now, with vision only, we have already got multiple cameras; in the BMW vehicles there are multiple cameras, and all of that vision perception, the vision-only system, is done by Qualcomm. That already brings a lot of capability. Now, think about complementing it with additional sensors: you can bring much more value to the output of the system, for detecting vehicles even at night, with an additional modality, which adds to the safety case. That’s how we look at it: vision-only systems are there today, and eventually it will become multi-modality sensor perception coming together.
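
One common way to read “complementing vision with additional modalities” is late fusion: keep the camera’s rich semantics and let a second sensor raise confidence where vision is weak, such as at night. Here is a minimal hypothetical sketch; the thresholds, weights, and matching rule are invented for illustration, not Qualcomm’s fusion design.

```python
# Hypothetical late-fusion sketch: a camera track, optionally matched
# with a radar track, produces a fused detection whose confidence
# reflects modality agreement and lighting conditions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    x: float            # longitudinal distance, meters
    confidence: float   # 0..1
    source: str         # "camera", "radar", or "camera+radar"

def fuse(cam: Detection, radar: Optional[Detection], night: bool) -> Detection:
    """Combine a camera track with a matched radar track, if any."""
    if radar is None:
        # Vision-only: discount confidence at night (illustrative factor).
        conf = cam.confidence * (0.8 if night else 1.0)
        return Detection(cam.x, conf, "camera")
    # Agreeing modalities: average the range, raise the confidence.
    conf = min(1.0, cam.confidence + 0.5 * radar.confidence)
    return Detection((cam.x + radar.x) / 2, conf, "camera+radar")

print(fuse(Detection(42.0, 0.6, "camera"),
           Detection(41.2, 0.9, "radar"), night=True))
```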

Grayson Brulte: Staying on the growth trajectory here, I’m really liking this. Do you see that vision-only system potentially going into robotics, and eventually humanoid robotics?

Anshuman Saxena: Yeah. So, the vision systems, and I want to call them vision systems versus vision only, just to clarify, because vision is such a brilliant sensor. The camera sensor can do so many things; it can see so many things. And with the AI technologies, the data being available, and the simulations being available, you can generate a lot of information around either a robot or a car or anything of that sort, which can be used to make decisions. Many other sensors can get you some other useful information, like depth, et cetera. But the value that the vision sensor, the camera, brings, where you can detect colors, detect signs, multiple different things, makes it a very differentiated sensor, which is very much useful in the robotics world too.

Grayson Brulte: It all comes back to growth. Touching on another market, which we haven’t touched on, and GM finally went public with this a few weeks ago: they’re now pivoting from robotaxis to personally owned autonomous vehicles. You have a lot of the world’s leading automotive companies as customers, and I want to also point out that you have leading Tier 1s as customers. As your customers’ customers demand higher and higher levels of automation, and eventually, I believe, demand personally owned autonomous vehicles, is that a natural extension for Qualcomm, if that’s where your customers want to go?

Anshuman Saxena: Absolutely. And that starts expanding from my original comment: first solve hands-off systems for highway, then urban, so that it starts adding value toward eyes off, which is where Level 3 comes in, and then that starts going into the personalized AV systems, which is exactly what automakers are looking at, versus claiming it can be a robotaxi, which has a different implication, a different use case, and whatnot.

Grayson Brulte: Yeah. Because when you get to personally owned, and this goes back to your digital cockpit, that cockpit’s gonna change; that vehicle experience is gonna change. And the thing I like about Qualcomm: you can power every step of that journey.

Anshuman Saxena: Absolutely. And that’s the key difference, right? How do you bring in the experiences? Because you can connect what the car is sensing to how the car has to interact. I mean, the biggest thing is, of course, the technology to drive the car in an eyes-off mode; that is super important, difficult, and critical. But even more important is how you and I gain confidence in the car: yes, it is seeing everything; yes, I can really be eyes off for some time and the car will be fine. That’s when the car has to gain your trust and confidence, and for that it has to bring in a lot of the interactions, communicating what it saw to you in normal language. “Yeah, I did this turn,” or “I passed this 18-wheeler because I wasn’t feeling comfortable.” If the car tells me it did a lane change because it was really close to the 18-wheeler and the 18-wheeler was coming close to the edge, you and I will feel comfortable: okay, the car is seeing things exactly how I would have, because I don’t feel comfortable driving next to an 18-wheeler either. So this is where the interactions of an eyes-off system, getting more confidence and trust in the car, are going to be important, and the digital cockpit plays a big role; the digital chassis as a whole plays a big role in bringing it all together.

Grayson Brulte: Okay, you opened up Pandora’s box, and I’m stepping into it here. It raises the question of HMI and reasoning. We went from ChatGPT-4o to 5, and the reasoning’s taking longer; we’re seeing it with Claude, the reasoning’s getting longer. From an automotive perspective, with that example you just gave, do you feel the system should give its reasoning to the passenger, the attendant, or the driver for why the vehicle made that decision?

Anshuman Saxena: In my mind, it helps to convey the decisions; otherwise you might be left wondering why the car did what it did. And it is, again, to my point of building up confidence, or you can override it: I’m not okay with you doing this. But knowing why the car did it is a super important feature set, which is going to help you and me get used to these automated driving systems. And by the way, we are in the middle of the technology: you write about the technology, you read about it, you hear about it from everyone. Think about people who are not tuned in to technology. Many people don’t even understand what ChatGPT-4 or 5 would mean, but if they press that button on the car and say, yes, I am approving you to drive automated, and the car tells them, “Ms. or Mr. X, I’m doing this turn because I don’t feel comfortable next to that car,” as that person, I would feel: okay, my car has got some intelligence, and I can trust it a little bit. Now, we have to make sure it does not go so far that people start trusting it blindly. So that safety piece has to be there in the system; you have to give guidance. That’s why people talk about Level 2++ systems: you as a user still have to keep your attention on the road. The car is telling you it can take care of it, but you still have to be in control if something goes wrong.
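
To make the “why” concrete: the planner already knows which constraint triggered a maneuver, so surfacing it is largely a templating problem. A minimal hypothetical sketch follows; the event fields and wording are illustrative, not a production HMI specification.

```python
# Hypothetical sketch: turn a planner decision into a short,
# plain-language, driver-facing explanation, per the discussion above.

from dataclasses import dataclass

@dataclass
class ManeuverEvent:
    action: str           # e.g. "lane change to the left"
    reason: str           # the triggering constraint, from the planner
    lateral_gap_m: float  # measured gap that tripped the decision

def explain(event: ManeuverEvent) -> str:
    """Render the planner's trigger as one short sentence."""
    return (f"I made a {event.action} because {event.reason}, "
            f"leaving only {event.lateral_gap_m:.1f} m of space.")

evt = ManeuverEvent(
    action="lane change to the left",
    reason="an 18-wheeler was drifting toward the lane edge",
    lateral_gap_m=0.8,
)
print(explain(evt))
# I made a lane change to the left because an 18-wheeler was drifting
# toward the lane edge, leaving only 0.8 m of space.
```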

Grayson Brulte: Those are valid points. And do you think that has to be voice? Or does it go to something like putting the Unreal Engine in the vehicle, where it could show you, say, an animation of why, and kind of explain it? What do you think that should look like or feel like?

Anshuman Saxena: Again, it depends on the safety cases and on the automakers. There are guidances, I don’t know whether they are regulations or not, but guidances to not have a lot of distraction while driving. So this information we’re talking about, the reasoning, displayed as additional information versus continuously speaking in your ears: it depends on how automakers look at it. Our job is to provide the technology so that it can generate that information. After that, how it interacts with the user can differ from automaker to automaker. For example, for your attentiveness to the road, some automakers might just have a blinking light telling you to pay attention, while some go to the extent of a little buzz in the seat itself, and some have it on the steering wheel, which starts glowing with a red light telling you to put your hands on. So depending on the user interface each automaker has, these things could be very different.
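
Since the same attention signal maps to different channels per automaker, one way to picture it is a per-OEM policy table. A minimal hypothetical sketch, with invented OEM keys and channel names drawn from the examples above:

```python
# Hypothetical sketch: the same "driver attention needed" signal
# surfaces through different channels depending on OEM policy.

ALERT_POLICY = {
    "oem_a": ["cluster_blink"],                     # blinking light only
    "oem_b": ["cluster_blink", "seat_haptics"],     # add a buzz in the seat
    "oem_c": ["steering_wheel_glow_red", "voice"],  # red glow plus spoken prompt
}

def raise_attention_alert(oem: str, escalation: int) -> list:
    """Return the alert channels to fire, escalating one step at a time."""
    channels = ALERT_POLICY.get(oem, ["cluster_blink"])
    return channels[: max(1, min(escalation, len(channels)))]

print(raise_attention_alert("oem_b", escalation=1))  # ['cluster_blink']
print(raise_attention_alert("oem_b", escalation=2))  # ['cluster_blink', 'seat_haptics']
```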

Grayson Brulte: At the end of the day, do you see automakers following the regulations around the world, or are they doing hands-on case studies? How do you see that eventually playing out? And then you’re giving them the tools, I assume, to do what they choose to do.

Anshuman Saxena: Yeah. From our perspective, of course, we are the technology provider. We develop and deliver everything per the automotive standard requirements; we deliver our components based on the security and safety requirements. The automakers tie it all together, because in the end they own how the system is given to their end users. What is happening now is that even for these highly automated use cases, there are regulatory bodies putting out guidance, because, again, people should not end up misunderstanding and misinterpreting the capabilities of the system. So it’s important that the right guidance is put in place, and automakers, in their jurisdictions, decide how they lay it out in front of their users. We are a technology provider; we provide all the information with clarity about what safety and security levels it stands for, and then the automakers tie it all together.

Grayson Brulte: From the Qualcomm perspective, as we’ve talked about throughout this podcast, the technology’s changing. You need to become more energy efficient, and yet you need more power. From a chip design perspective, how is Qualcomm staying ahead of a rapidly moving market to ensure you have the right chips that your customers need?

Anshuman Saxena: That’s, again, a brilliant question, and it actually lets me talk about how we bring this all together. In this space of automated driving, understanding the system is very, very important. Understanding the system basically means: what do you want to achieve? You’re talking about urban driving, with people crossing, and you have to stop at a traffic light and make a big left turn at a big intersection. You need to see a lot of things around you. You need to understand what kind of sensing is required, what performance of sensing is required, how many cameras, what resolution, a bunch of these things. Once you understand that, then you talk about the processing. AI is growing very fast; we need to be on the cutting edge, we need to understand what is happening. Plus, with all of these AI-driven, data-driven things, people have talked about hallucinations, et cetera; you have to have very good data to train the models, which will lead to actions on the car, which is driving on the road. There has to be a safety guardrail: whatever happens, this is the safety guardrail. All of these things have to be understood. Then it translates into what to design as a chip. And the chip has to be designed for different kinds of use cases: some of them are ultimate urban driving, and some of them might be just stay in the lane and follow a car. It depends on who the user base is, what the cost of the car is, which regulatory requirements are there, et cetera. But from an automaker’s perspective, whichever automaker you pick, whether in North America or Japan or Europe, they have a scale of variations of vehicles. They cannot put a really expensive urban navigation system in every car. Ideally they would want to, but it comes at a cost, and the user base might not need it. So it’s always the economics that comes in. Now, when we look at it, we have to have a family of processors, so that I can go back to my automaker customer and say we can solve all the different tiers. Because at the end of the day, software cost becomes supreme. They cannot keep developing new software for one tier, then a new chip and new software for another tier, and again for the next. They have to have scalable software, and scalable software comes with scalable SoCs. So Qualcomm supports them across the whole scale, from an entry vehicle to a premium vehicle. Now, doing this means you have to be on the cutting edge; you have to spend a lot of money on development. Qualcomm is in a differentiated position because of all the assets we develop in multiple other businesses, including mobile, a huge business for us that drives a lot of technology. We are playing in CPUs and data-center kinds of solutions; we have compute solutions; we have low-power solutions; of course mobile is there, and XR kinds of solutions. So the DNA is there, which keeps driving multiple different vectors every year of enhancements and advancements in technology: in CPUs, in graphics cores and GPUs, in NPUs for AI, in camera pipelines, multiple things that allow us to stay on the cutting edge, put automated driving front and center, push the limits on AI capabilities, and bring it all into an SoC which, by design, has been proven from a system perspective: this is the system it has to target. Not the other way around, where we build an SoC and then ask, can you do something with it? No. We understand the system, we build the SoC, and we scale it down to all the different levels so that we can provide an answer to the automaker. That’s how we approach the problem, and Qualcomm’s bigger horizontal R&D investment allows us to bring the latest technology into the automotive market.
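
The “one software stack, scalable SoC family” idea can be sketched as a capability config: the same codebase everywhere, with features gated by the tier it runs on. The tier names, sensor counts, and feature flags below are hypothetical illustrations, not Snapdragon Ride product definitions.

```python
# Hypothetical sketch: same software across a scalable SoC family;
# capability is gated by a per-tier configuration rather than by
# rewriting the stack for each chip.

SOC_TIERS = {
    "entry":   {"cameras": 5,  "npu_tops": 20,
                "features": {"aeb", "lane_keep"}},
    "mid":     {"cameras": 8,  "npu_tops": 60,
                "features": {"aeb", "lane_keep", "highway_pilot"}},
    "premium": {"cameras": 11, "npu_tops": 200,
                "features": {"aeb", "lane_keep", "highway_pilot", "urban_pilot"}},
}

def feature_enabled(tier: str, feature: str) -> bool:
    """One codebase everywhere; the tier config decides what runs."""
    return feature in SOC_TIERS[tier]["features"]

for tier in SOC_TIERS:
    status = "urban_pilot enabled" if feature_enabled(tier, "urban_pilot") \
        else "highway features or below"
    print(f"{tier:>8}: {status}")
```

The point of the pattern is the economics described above: the automaker writes and validates the software once, and the tier table, not a rewrite, decides what ships on an entry car versus a premium one.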

Grayson Brulte: And you’re going to continue to invest, which raises the question: what is the future of Qualcomm’s automotive division?

Anshuman Saxena: We are marching on the path to deliver what we have been promising and to increase the pipeline as fast as we can. That’s exactly what we are working on.

Grayson Brulte: Anshuman, it’s been wonderful having you on The Road to Autonomy today. We can’t wait to have you back, when you come and say: look, we grew again, we’re continuing to grow, we’re introducing new chip architectures, we’re introducing new partners. Because something tells me that Qualcomm is going to play a very large role in the future of autonomy. The future is bright. The future is autonomous. The future is Qualcomm. Anshuman, thank you so much for coming on The Road to Autonomy today.

Anshuman Saxena: Thank you. Thanks Grayson. It’s really nice to talk to you and hope to see you soon somewhere else. Take care.

Key Questions from This Episode of The Road to Autonomy, Answered

What is the Snapdragon Digital Chassis? 

The Snapdragon Digital Chassis is Qualcomm’s umbrella brand for its comprehensive suite of automotive electronics and digital solutions. It is built on four main pillars: the Snapdragon Cockpit Platform for in-vehicle experiences, the Snapdragon Ride Platform for safety and automated driving, Snapdragon Car-to-Cloud services for connectivity, and Snapdragon Auto Connectivity for telematics.

How is Qualcomm ensuring the safety of its autonomous driving systems? 

Qualcomm places safety as the central principle of its development process. This begins with building a foundation of safety into their System on Chip (SoC) designs and software stacks. The company adheres to strict international regulatory requirements and certifications, ensuring features like emergency braking are extensively tested with collected data to work reliably across the globe.

What is the nature of Qualcomm’s partnership with BMW? 

Qualcomm and BMW have a public, co-development partnership to create an advanced driver-assist system (ADAS). The technology, which will premiere in BMW’s 2026 model year vehicles, is built on Snapdragon Ride processors and utilizes the Snapdragon Ride Vision Stack. The resulting platform is designed to be scalable and can be offered to other automakers as well.

Now that you have read this The Road to Autonomy transcript, discover how our market intelligence and strategic advisory services can empower your next move.