No Lidar, No HD Maps, Six Cameras, One Chip, Autobrains
Executive Summary
Igal Raichelgauz, Founder & CEO of Autobrains, joined Grayson Brulte on The Road to Autonomy podcast to discuss the company’s strategic partnership with VinFast to deploy L2 through L4 autonomous systems. He details the company’s Thinking AI architecture, which uses a skill-based agentic approach to reach human-level driving performance on low-compute, vision-only hardware, and explains why affordability is the ultimate key to global scaling and to the future of robocars in the consumer market.
Key The Road to Autonomy Episode Questions Answered
Autobrains utilizes a skill-based agentic approach rather than a monolithic end-to-end model. By using an orchestration process that only activates specific agents or skills (such as a school zone agent or a monsoon agent) relevant to the current driving scenario, the system remains highly efficient, running on just 20 teraflops of compute.
Air-to-Road is a signature-based localization technology that uses frequently updated satellite imagery instead of traditional HD maps. The system compresses these images into signatures that are stored on the vehicle, allowing the car to localize itself to within 10 centimeters of accuracy, even in areas with low connectivity.
Because the underlying technology platform is the same for L2+ and L4, Autobrains envisions a “robocar” that serves the consumer daily but can be opted into a robotaxi fleet when not in use. The goal is to keep the total vehicle cost around $30,000, making the autonomous capability a marginal cost that lets vehicle owners monetize their car’s downtime.
The Road to Autonomy Topics & Timestamps
[00:00] How the VinFast Deal Came Together
The partnership began several years ago when VinFast invested in Autobrains’ Series C round after conducting deep technical due diligence. The relationship evolved into a production deal after Autobrains successfully bid on a Level 2 program, proving their solution was more performant and cost-competitive than the market leader.
[03:16] Skills-Based Agentic AI Architecture
Unlike monolithic black box models that require total retraining for new features, Autobrains uses a Thinking AI approach based on individual “skills” or agents. This allows the system to scale from basic cruise control to complex urban navigation by simply adding new agents to the existing platform.
[07:16] Six Cameras, 360° Coverage, Low Compute
The VinFast VF 8 and VF 9 utilize a six-camera setup (front, rear, and four sides) to provide full 360° coverage. Despite processing high-resolution pixels, the system uses an adaptive signature-based technology to maintain high performance with very low computational requirements.
[09:37] Air-to-Road: Satellite Imagery Replaces HD Maps
To provide redundancy without the cost of HD mapping, Autobrains uses Air-to-Road technology which indexes satellite imagery as signatures. This allows the car to localize itself with 10-centimeter accuracy by comparing its environment to global satellite coordinates.
[12:40] Robocar Vision
The Robocar concept leverages the same underlying platform for both consumer vehicles and robotaxis. This design allows personal vehicles to be integrated into a robotaxi fleet when not in use, enabling owners to monetize their vehicle’s idle time.
[15:10] The $30K Fully Autonomous Car
Affordability is the primary driver for mass adoption; Autobrains aims to enable fully autonomous vehicles in the $30,000 price range. By keeping the autonomous hardware costs marginal, the technology can be deployed across high-volume vehicle segments rather than just luxury cars.
[20:20] The Thinking Layer
The Thinking Layer moves beyond simple reaction by simulating the environment on the fly to predict possible futures. It uses an internal computational model to imagine different scenarios and make transparent, optimal decisions based on those simulations.
[24:22] 20 Teraflops, Sub-20ms Latency, Edge Computing
The current system runs entirely on the edge using a Texas Instruments SOC with roughly 20 teraflops of compute. It achieves an end-to-end latency of less than 20 milliseconds, which is critical for high-speed driving and emergency braking.
[27:58] No Lidar: The Vision-Only Thesis
Autobrains focuses on a vision-only approach because the global driving infrastructure was designed for human eyes and brains. While Lidar can provide superhuman redundancy for specific use cases like robotaxis, Autobrains believes vision is sufficient to achieve human-level driving performance.
[28:59] The Future of Autobrains
The company anticipates an inflection point where autonomy will reach mass scale on roads within the next five years. Autobrains intends to be the “brain” that bridges the gap between high-end autonomy and affordable, high-volume production vehicles.
Full Episode Transcript
Grayson Brulte: Igal, it’s great to have you on The Road to Autonomy. Autobrains got a big deal, a deal with VinFast for the VF 8 and the VF 9. How did that deal come about?
Igal Raichelgauz: Thank you, Grayson. We started the journey with VinFast quite a few years ago. They first became an investor when we were raising our Series C round. They did quite a deep technical due diligence on the company at that time. We were still early stage in terms of readiness for mass production, but the technology was quite interesting to them, so they made a decision to invest and we stayed in touch. Then the right opportunity came along, a year-plus ago: a chance to bid on a specific Level 2 program against the market leader in the space. We had to prove that our solution was competitive with the alternatives price-wise, in terms of the hardware and sensors we are using, but most importantly in performance. At the time we convinced VinFast on the performance of our product, so we started with Level 2. Level 2, as you know, is a basic smart camera enabling safety features for every car: emergency braking, lane departure, lane centering, and basic comfort functions like adaptive cruise control. I think we delivered the milestones quite well, so they saw it was good, and why not extend the partnership to L2+, which is a more interesting product. We are talking about navigation-on-pilot on the highway. For us as a company, it was a great opportunity to scale from using our Thinking AI technology for perception to end to end, from perception to navigation, control, and decision making, and still provide a very low-compute, vision-only, affordable solution for the car to navigate completely while it is on the highway. This worked well too. We are hitting milestones consistently, and we started to think about how we can take this technology forward, why not push it to the limit. We built a few cars that are driving in very complex urban scenarios in Hanoi. As you know, that is one of the toughest driving conditions in the world, and it worked quite well. So we are now taking this partnership to the next level: to productize this L2++, which does both urban and highway NOP and can scale all the way to full self-driving for consumer vehicles while still being extremely affordable for every car, and in parallel to use those cars in robotaxi programs, where it goes beyond consumers to actually drive passengers in additional geographies.
Grayson Brulte: Because Autobrains, you’re known for building a highly scalable, cost-efficient autonomous driving model. And as your company says, you’re the brain, the brain of autonomous driving. From a technical architecture perspective, how did you build the brain, or if you want to call it the virtual driver, to scale from an L2 system to an L4 system? How did you technically architect that? Did you have to build a foundational model first, knowing where you eventually wanted to go to get to L4?
Igal Raichelgauz: I think one of the unique fundamentals of our Thinking AI technology is what we call the skills, or agentic, approach. We have the ability to scale as we add more functionality, unlike classical end-to-end solutions, which are monolithic models, like the foundational model you mentioned, that have to be retrained for new functionality or designed ahead of time. We can start from a specific ODD, a specific functionality, and then scale it by adding more agents and more skills. So the same product, the same platform that started from adaptive cruise control on the highway can scale to urban, to navigation-on-pilot, to passing junctions, to very complex overtake scenarios in urban driving, by reusing the same platform, the same hardware, the same sensors, and adding this additional functionality.
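As a purely illustrative sketch of that extension model, a fixed platform that grows by registering skills rather than retraining a monolith, here is a minimal, hypothetical skill registry in Python. Every name in it is invented for illustration; Autobrains has not published its internal APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: a platform that scales by registering new skills,
# rather than retraining one monolithic end-to-end model.

@dataclass
class Skill:
    name: str
    odd: str                       # the operational design domain it covers
    policy: Callable[[dict], str]  # maps an observation to a driving action

class SkillRegistry:
    def __init__(self) -> None:
        self._skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        # Adding a skill leaves every existing skill untouched: no global
        # retraining, no re-homologation of the full system.
        self._skills[skill.odd] = skill

    def for_odd(self, odd: str) -> Skill:
        return self._skills[odd]

registry = SkillRegistry()
registry.register(Skill("acc", "highway", lambda obs: "hold_gap"))
# Later, urban capability ships as an additional skill on the same platform:
registry.register(Skill("urban_nop", "urban", lambda obs: "negotiate_junction"))

print(registry.for_odd("urban").policy({"speed_kph": 30}))
```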
Grayson Brulte: The company’s based in Tel Aviv, and you’ve tested in Hanoi: two highly congested cities, different driving environments, different cultures, different customs. How do you get the system to adapt to those different driving environments, cultures, and customs?
Igal Raichelgauz: One of the very important aspects of the approach is that we don’t require a huge amount of data. It’s not a brute-force approach where you need to feed in millions and millions of hours. We believe that the essentials, the basics, are reusable and generalizable: it’s the same pedestrians, the same cars. At the same time, of course, there are a lot of nuances, a lot of specific driving patterns, driving scenarios, behaviors, even of regular pedestrians. And those, again, do not require retraining or redefining the whole architecture. It’s more about adding additional resources to the architecture, which we call skills, which we call agents.
Grayson Brulte: That’s fantastic. So you’re adding new skills. Let’s dive more into the technical architecture, because you clearly articulated you’re a vision-only system. Why did you take that approach? From my standpoint I look at it and say, okay, vision only, highly scalable. But from your technical perspective, why did you take that approach?
Igal Raichelgauz: In general, our Thinking AI approach is heavily based on and inspired by human perception, human thinking, and overall human decision making in driving. As we know, the driving task was built for humans: we use our eyes to understand the environment and our brains to make decisions. So instead of trying to solve the problem by brute force, more compute, more memory, more sensors, we are following the path of trying to reverse engineer human driving. And this starts from vision, from adaptive or active vision, where it’s not just about sampling a fixed amount of pixels, as many vision systems do, but thinking about where and what to see, where to allocate the resources, and doing it in a very adaptive way. It’s a merge between perception and planning and thinking and reasoning, and this allows us to be super efficient in terms of compute; it’s not processing all those pixels. Now, in terms of additional sensors, of course it’s a decision of the OEM. Our task is to get to human-level driving with vision only. Additional sensors, additional compute, additional redundancies can take it beyond the human level, which is of course needed for many cases like robotaxis.
Grayson Brulte: When you’re working with the design team, or the engineering teams, at VinFast for the VF 8 and the VF 9, is it a collaborative process to determine where the cameras are going to go and what type of compute is going to go inside those vehicles?
Igal Raichelgauz: Absolutely, especially on projects like these, which are considered crazy in terms of timeline in OEM terms. We have to work like one team and solve the problems very creatively. Even the design phase was very collaborative; it’s essentially one team in order to be able to deliver such aggressive products.
Grayson Brulte: How many cameras are the VF 8 and the VF 9 going to have?
Igal Raichelgauz: We’re talking about six cameras: a front camera, four side cameras, and a rear camera, which together provide the 360°. We are using our signature-based perception technology to do the fusion in a very efficient way, so the car essentially covers the whole field of view with those cameras. The front camera is very high resolution, so while we use low compute, we still process a lot of pixels. As I mentioned, we do it in a very adaptive way, and this gives us the ability to really see through many complicated scenarios: urban traffic congestion, very high speeds, heavy rain, fog. We even had a very significant heavy rain during one of our tests in Hanoi, which the technology passed. It’s a very robust solution.
Grayson Brulte: When you reference low compute, should we think about that as an Arm architecture, or what type of architecture should we have in mind?
Igal Raichelgauz: Today the SoC, the system on chip that we are using, is from Texas Instruments, pretty much an off-the-shelf solution. We are using classical neural network accelerators, kind of small GPUs, that run our agents, our small neural networks, in parallel. There is an orchestration process on the fly within this SoC, so we can pick the right agent for the right scenario. While we have a lot of available computational resources, we are picking the right brain for the right scenario, and this allows the system to be super efficient in terms of compute.
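To make the orchestration idea concrete, here is a deliberately toy Python sketch of a router that activates only the agent relevant to the current scenario, so most of the compute stays idle. The scenario keys, the agents, and the classify_scenario heuristic are all invented for illustration; in practice the routing would be learned, not hand-written rules.

```python
# Illustrative only: a router that activates one small agent per tick,
# the "pick the right brain for the right scenario" idea from the interview.

AGENTS = {
    "highway_clear":  lambda frame: "cruise",
    "urban_junction": lambda frame: "yield_and_creep",
    "school_zone":    lambda frame: "slow_to_limit",
}

def classify_scenario(frame: dict) -> str:
    # Stand-in for the on-SoC orchestration process; purely hypothetical.
    if frame.get("zone") == "school":
        return "school_zone"
    return "urban_junction" if frame.get("urban") else "highway_clear"

def step(frame: dict) -> str:
    agent = AGENTS[classify_scenario(frame)]  # only this agent runs this tick
    return agent(frame)

print(step({"urban": False}))                   # -> cruise
print(step({"urban": True, "zone": "school"}))  # -> slow_to_limit
```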
Grayson Brulte: To me, looking at this from a VinFast perspective, or from a VinFast potential customer perspective, it seems the system’s highly scalable and highly cost-effective, where if you’re the end consumer, you’re not looking at a $10,000 or $15,000 upgrade for the system. Is that the right way to think about it?
Igal Raichelgauz: Yeah, this system is dramatically lower cost than that. As I mentioned, we’re talking about six cameras and a very basic SoC. As a redundancy there might be additional radars, but that’s all; that’s the whole system. Of course, as a redundancy we are also using an additional layer of technology, which we call Air-to-Road, which doesn’t add cost. Instead of HD mapping, we use satellite imagery, which is highly accessible today and frequently updated. Those satellite images are indexed as signatures and cached in the car, and wherever the car drives, it basically localizes itself on a satellite image in global coordinates to the level of 10 centimeters of accuracy, similar to how a human being would localize himself on a map. That’s how the car does it. And then, as a redundancy, we get additional layers of information: lanes, construction sites, road boundaries, many additional layers, and traffic patterns, all coming as a redundancy from the air. Essentially there is a kind of supervision from the air that makes our cars much safer than being reliant only on the perception from the cameras.
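Autobrains hasn’t published the signature format, but the general shape of signature-based localization can be sketched: offline, satellite tiles are compressed into compact signatures indexed by position; online, the car computes a signature of what it sees and looks up the best match in its onboard cache. The hash-based exact matching below is a stand-in toy, not the real method; real signatures would be learned embeddings matched approximately and refined to sub-meter accuracy.

```python
import hashlib

# Toy sketch of Air-to-Road-style localization. A hash of an "appearance"
# string stands in for a learned signature, and a dict stands in for the
# onboard cache, so no connectivity is needed while driving.

def signature(appearance: str) -> str:
    return hashlib.sha256(appearance.encode()).hexdigest()[:16]

# Offline: index satellite imagery tiles by signature (computed in the
# cloud, then cached in the vehicle per region).
satellite_index = {
    signature("roundabout;3 lanes;white building NW"): (21.0278, 105.8342),
    signature("t-junction;2 lanes;canal east"):        (21.0285, 105.8360),
}

def localize(camera_appearance: str):
    # Online: match the live signature against the cached index.
    return satellite_index.get(signature(camera_appearance))

print(localize("roundabout;3 lanes;white building NW"))  # -> (21.0278, 105.8342)
```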
Grayson Brulte: It’s fascinating, because when you use satellites, from a geographical perspective you’re not limited in where you can go; satellites are covering the entire space. Which gets us back to engineering architecture questions: the over-the-air updates. Obviously you’re pushing these new coordinates from the sky, and you’re also updating the brain. Did you engineer the over-the-air update system? Did VinFast do it? Was it a collaborative system? And how will the brain update over time as you push new versions of the brain forward?
Igal Raichelgauz: It’s a close collaboration. VinFast is very advanced in these approaches to over-the-air updates and, in general, in software-defined vehicle design, so we’re very happy to work with them on that. Of course, we keep this whole layer of Air-to-Road and pushing skills to the vehicles as part of our solution. And the idea is to be able to work in extremely low-connectivity areas as well, where there is no connection at all, so the car doesn’t need connectivity all the time. When we are moving between cities we can update those signatures, but the signatures are so compressed that they can be stored in the car for very large geographical areas.
Grayson Brulte: If you just look at the geography of Vietnam, if we go from Ho Chi Minh City, for example, down to the south of Vietnam when your system’s fully deployed, no problem taking it there without any interventions?
Igal Raichelgauz: Yes. Today we have very robust performance, both in urban scenarios like Hanoi, which is one of the most complicated, as well as on highways, where we are using this Air-to-Road, which is a big challenge given the very significant areas to cover, and both are performing very robustly.
Grayson Brulte: So you’re building, if you want to call it that, consumer performance, because I believe that over time consumers are going to demand an L2 system, an L2++ system, and eventually an L4 system in the car. It’s going to come, you’re seeing global OEMs going there, and your partnership with VinFast is going there. The next logical step, which you announced in the press release, was a robocar system. That’s really interesting. Talk about that: should we think about it from a design perspective? Will it be similar to VinFast’s designs, or how should we start thinking about this robocar system that you’re going to develop with VinFast?
Igal Raichelgauz: I think the foundations are the same. But when we’re talking about consumer cars, we still assume an intermediate phase where there is a person in the car, so the person can take control after an alert, maybe of a few minutes, maybe of a few seconds; it depends on the specific ODD and specific scenario. The idea of the consumer robocar is to give the driver time back, so people can not just take their hands off the steering wheel but also take their eyes off the road and check emails, watch videos, and basically get the time back to be productive or enjoy it. On the robotaxi side, the idea is to take the driver completely out of the car, and regulation-wise, and of course in terms of the use case, there has to be a much higher bar in terms of redundancy. But the platform and the technology are the same, and this allows those cars that are deployed on the consumer side to be reused for the robotaxi as well. That’s why we call it the robocar. The vision is to have a similar solution, a mass-production solution that is affordable, that can be used by consumers, can be used for robotaxis, and can even let consumers add their cars to a robotaxi fleet when they’re not using them and monetize this kind of wasted time of the car.
Grayson Brulte: You hit the nail on the head, because we’re going to figure out the insurance and liability aspects of taking your personal vehicle and putting it in a fleet. I believe that in the future, the traditional buy or lease is going to become a subscription: you’re going to have your autonomous driving bundled in, your charging bundled in, and then you’re going to have the option to put the vehicle in a fleet, and your lease at the base price will become zero and then become a revenue generator for you. I think that gets really interesting, because liability is going to be figured out. Have you started to think about that from a design architecture perspective?
Igal Raichelgauz: I think the key to all of these use cases is affordability. Once you can make a fully autonomous car that can still be in the range of $30,000, where the autonomous part is marginal to the cost of the vehicle, and the autonomous vehicle can drive anywhere, not limited to a specific geo-fenced area, then those use cases really become possible. And that’s what we enable.
Grayson Brulte: And you’re enabling a lot of things; you’re enabling low-cost scale. How much is the agentic system that you built enabling you to scale this quickly, with the cost efficiencies that you’ve been able to achieve?
Igal Raichelgauz: The idea of a skill-based agent system is that once we have it in the car, additional ODD coverage can come from adding more skills. If you look at today’s systems, what you essentially have are end-to-end systems; we call them monolithic systems. The challenge with those systems is that once they finish the training phase, they’re fixed and static. They react to everything you have on the road, and when you have a new situation, a new scenario to deal with, there is no way but to take the system off the car, bring it to the lab, and start retraining. The whole idea of the agent approach is continuous learning and continuous update over the air, which doesn’t incur dramatic additional cost. You don’t need to collect billions of miles, and you don’t need to pass additional homologation on the full system. You have this granular representation of agents and skills, and they can just be added and granularly tested, and the cost will always be marginal.
Grayson Brulte: Well, let’s take a step back for a moment. How did you come up with the concept of the architecture for Autobrains?
Igal Raichelgauz: The inspiration was always the human brain. If you think about the big gap today between human driving capability, and cognitive abilities in general, and what we have now, it’s the gap with one giant neural network which tries to solve the whole task at once and is pretty static after the training phase. There is no thinking. Once you build this neural network and train it, and I’m talking about other approaches, this neural network only has a chance to respond. It doesn’t have a chance to think; it doesn’t have a chance to adapt to different scenarios. In contrast, if you think about the human brain, first of all, the human brain doesn’t use all the neurons all the time. We use a small piece of our brain each time. So this routing approach, this orchestration, is key: always use the right resource for the right scenario. That’s how we came to the agentic approach, which starts from orchestration and routing to the right resources, the right agents, and the right skills. By the way, this agentic approach has today become the mainstream for non-autonomous-driving AI tasks; companies like Anthropic are all based on the agentic approach. It’s still not in the autonomous driving domain, so we believe autonomous driving goes through this kind of evolution: autonomous driving 1.0 was ADAS-based compound systems of perception, planning, and actuation; AV 2.0 is the monolithic end-to-end approach, and there are many companies in this domain; and we believe Autobrains can be a pioneer in this Thinking AI agentic approach, which brings adaptivity and the thinking layer to autonomous driving. It allows very efficient, low compute, because we don’t need to use all resources all the time, and most importantly, instead of trying to optimize the whole system on average and then missing the long tail of edge cases, we have this divide-and-conquer approach where every agent and every skill can be super optimized for each specific task, which provides the additional accuracy needed and a solution for the edge cases.
Grayson Brulte: Anthropic is a great analogy, so I’ll give you two examples. I think Sonnet 4.5 is a heck of a reasoning model, and Opus 4.6 is the best coder out there, one heck of a worker. The stuff that I’ve been able to do in Opus 4.6 is just mind-blowing. But the overarching thing, if you play enough with Claude, is that you can give it a personality and you can train it to understand you and do things. And I can tell you, from a workflow perspective, Opus 4.6 has made the biggest difference, because it knows everything I want, all the colors and everything. Do you envision, from an Autobrains perspective, giving your model, your brain, the reasoning capabilities to make, say, at first low-level decisions and then eventually high-level decisions? I’ll give you an example: if you’re at a crosswalk and the crossing guard is waving you through, pointing to the right, the vehicle knows, okay, it’s safe, the crossing guard is giving me the signal to go. And then eventually get into higher levels of reasoning. Is that where you’re going to take Autobrains?
Igal Raichelgauz: Absolutely. That’s why we call it the thinking layer. The agent layer is step one, the first layer: it builds, on top of the fixed model, this kind of army of optimized small drivers for every scenario. But then the second layer enables thinking, and thinking is all about mapping the real world to those agents, to those digital twins that we can simulate on the fly to predict the different possible futures. That’s the essence of this technology; I can elaborate more, but the idea is to have an internal computational model that is not just reacting to the environment but simulating the environment on the fly, thinking about the environment and the possible futures. That allows it to go beyond the training set, to imagine those different futures, and to make a transparent, optimal decision based on the different rollouts.
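A schematic way to picture the “simulate possible futures” loop he describes: roll a simple internal model forward under each candidate action and choose the mildest action whose imagined future stays safe. The dynamics, thresholds, and action set below are invented toys, not Autobrains’ planner.

```python
# Toy "thinking layer": imagine each candidate action's future through an
# internal model, then decide transparently. All numbers are invented.

def rollout_gap(gap_m: float, closing_mps: float, action: str,
                horizon_s: float = 2.0) -> float:
    """Predicted gap to the lead vehicle after `horizon_s` under `action`."""
    accel = {"accelerate": 1.5, "coast": 0.0, "brake": -4.0}[action]
    avg_closing = closing_mps + 0.5 * accel * horizon_s  # accel changes closing speed
    return gap_m - avg_closing * horizon_s

def decide(gap_m: float, closing_mps: float):
    # Simulate every future first; the decision is inspectable, not a
    # black-box reaction.
    futures = {a: rollout_gap(gap_m, closing_mps, a)
               for a in ("accelerate", "coast", "brake")}
    for action in ("accelerate", "coast", "brake"):  # mildest-first preference
        if futures[action] > 5.0:                    # keep >5 m predicted gap
            return action, futures
    return "brake", futures

print(decide(gap_m=40.0, closing_mps=2.0))  # room ahead -> accelerate
print(decide(gap_m=12.0, closing_mps=6.0))  # imagined futures force braking
```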
Grayson Brulte: Let’s take this one step further, and this is all hypothetical. You give Autobrains the reasoning, the thinking level. When do we see a point where you start inputting real-world weather data so the vehicle could make a decision? If you look at Vietnam, they have tropical downpours, monsoons in some cases, and the vehicle could say, okay, we know in 15 minutes the monsoon is going to stop, we’re going to pull over for safety purposes and then restart when the monsoon is over. Are we going to get there at some point?
Igal Raichelgauz: This data is actually available today as part of our Air-to-Road solution. These APIs are available to everyone, not just Autobrains. The point is that we can leverage them at a more personalized level: the agent for a monsoon will be a different agent than for sunny conditions. Those are very different drivers, and they can be switched on the fly based on this data.
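In code, the switch he describes could be as simple as conditioning agent selection on a weather feed. The API shape and threshold below are invented for illustration.

```python
# Invented sketch: external weather data (the freely available APIs mentioned
# in the interview) selects which driver "personality" is active on the fly.

def pick_driver(forecast: dict) -> str:
    if forecast.get("rain_mm_per_h", 0) >= 30:   # hypothetical threshold
        return "monsoon_agent"                   # longer headways, earlier braking
    return "clear_weather_agent"

print(pick_driver({"rain_mm_per_h": 45}))  # -> monsoon_agent
print(pick_driver({"rain_mm_per_h": 0}))   # -> clear_weather_agent
```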
Grayson Brulte: And that changes the game. So let’s take it one step further: let’s take it out of Vietnam and go to Europe, where it can get cold, damp, and even snow in some cases.
Igal Raichelgauz: Exactly, and it goes beyond even the country level. There might be a specific zone, say a school zone, where there is specific behavior at certain hours. A general model will be completely blind to that, because it tried to optimize over billions and billions of hours and this small piece of data was diluted. But we have a dedicated agent for this area that has absorbed all the knowledge of the foundational model but was optimized, so to speak, at the right point in time for this specific scenario. And this really allows us to get to perfection in terms of accuracy and not stay at the generic level, which converges a bit to the mean.
Grayson Brulte: If you use school zones as the example, there are hundreds of millions of school zones around the world. How do you model for that, and also avoid the bloat of all the excess data you don’t need, in order to maintain the speed and accuracy of your system?
Igal Raichelgauz: The data that we leverage is always incremental. The idea is that we keep the foundational level, which is generic, and then we train the agents. Now, there can be an agent for a specific school zone and an agent for the general school zone. They can use the same data, but the specific one can be further optimized as it gets additional data for that particular school zone, because there might be additional patterns. The idea is that this model really allows you to scale, to optimize for these narrow scenarios, without paying the cost of more compute, because having additional agents that are not used in real time at a specific moment lets you run a single agent while keeping a lot of resources available. It’s very similar to our brain: a lot of available resources, but only a small portion of them being used for the right scenario.
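One way to picture that incremental structure: resolve the most specific agent available for a location, falling back to the generic skill and finally to the foundation model. Again, a hypothetical sketch with invented names, not Autobrains’ implementation.

```python
from typing import Optional

# Invented sketch: agents resolve most-specific-first, so one school zone can
# carry its own optimized agent while every other school zone falls back to
# the generic skill, and everything else to the foundation model.

AGENT_TABLE = {
    ("school_zone", "hanoi_district_1"): "school_zone_hanoi_d1",  # site-specific
    ("school_zone", None):               "school_zone_generic",   # generic skill
    (None, None):                        "foundation_model",      # base driver
}

def resolve_agent(scenario: Optional[str], site: Optional[str]) -> str:
    for key in ((scenario, site), (scenario, None), (None, None)):
        if key in AGENT_TABLE:
            return AGENT_TABLE[key]
    return AGENT_TABLE[(None, None)]

print(resolve_agent("school_zone", "hanoi_district_1"))  # site-optimized agent
print(resolve_agent("school_zone", "haiphong"))          # generic school-zone skill
print(resolve_agent("highway", None))                    # foundation model
```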
Grayson Brulte: So if you take the agent model, what type of latency do you need to achieve? And can you just run this over 4G? Do you need 5G? How should we think about that?
Igal Raichelgauz: In today’s deployments, L2, L2+, L2++, everything runs on the edge. We are just loading the different Air-to-Road signatures from city to city; the agents and the skills are on the car. Of course, we plan to scale this platform and to soon introduce a solution in the direction you’re asking about, but currently everything is on the edge. The compute we’re using is about 20 teraflops, which as mentioned is very low compute, and the latency is below 20 milliseconds end-to-end, so it solves the most difficult problems of high speeds and emergency braking. We essentially have two processes. One is the orchestration, or routing, which happens on a scale of hundreds of milliseconds; it’s kind of a context switch, picking the right agent. Sometimes it can be on a scale of seconds as well, because the environment doesn’t change that much; if you’re in a school zone, you still need the same agent half a second later. Then the agents that we pick are sufficient to deal with this micro-ODD, and they are of course much lower compute and much more optimized for the scenario.
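The numbers in that answer sketch out neatly: the sub-20 ms budget bounds how far the car travels before it can react, while the router only needs to re-select agents every few hundred milliseconds. A small back-of-the-envelope check, with the constants taken from the interview and the two-loop framing as an assumption:

```python
# Constants from the interview; the two-timescale framing is an illustration.
AGENT_BUDGET_MS = 20     # fast loop: end-to-end perception-to-action latency
ROUTER_PERIOD_MS = 300   # slow loop: agent re-selection, "hundreds of ms"

# Distance travelled during one latency window at various speeds:
for speed_kph in (60, 100, 130):
    metres = (speed_kph / 3.6) * (AGENT_BUDGET_MS / 1000)
    print(f"{speed_kph:>3} km/h -> {metres:.2f} m per {AGENT_BUDGET_MS} ms window")
# 60 km/h -> 0.33 m, 100 km/h -> 0.56 m, 130 km/h -> 0.72 m: why sub-20 ms
# matters for emergency braking, while routing can afford to run ~15x slower.
```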
Grayson Brulte: Because at the end of the day, it’s highly scalable. So you have the public partnership with VinFast. Are there other OEMs that could be coming down the pipe at some point as well?
Igal Raichelgauz: Yes, of course we’re working with additional OEMs. And we’re very happy to have VinFast as our first customer, because we believe this partnership can bring this solution very fast and in a very broad way, not just a specific car line or a specific product, but really the full spectrum from L2 all the way to full autonomy. That’s why we are so focused on it, but we are of course working with additional players.
Grayson Brulte: With the VinFast partnership, will the Autobrains software and the hardware requirements be done fully on the factory line, so that when it comes off the factory line, after a certain period of time, 15 minutes, 10 minutes, the system is fully operational?
Igal Raichelgauz: Yes, that’s the idea. Once we solve the affordability piece, cars that are mass produced will already be embedded with the right architecture to provide this full autonomy.
Grayson Brulte: So you’ve got the consumer side, and overarching it you have the brain. Are there any limitations in terms of vehicle size or different classes of vehicles where you can deploy Autobrains?
Igal Raichelgauz: We are actually, on purpose, focusing more on the segments that are high volume, where cost is a big pain even for lower levels of autonomy like L2+ and L2++. For these vehicles, we believe that starting next year almost every vehicle will have the basic sensors: the front, rear, and side cameras. Some of them are parking cameras, part of the parking solution. Essentially the vision of our company is to make every vehicle autonomous, and that’s why we focus today on the volume segment and not on premium or luxury cars.
Grayson Brulte: It’s smart, because then you’re getting more technology into more individuals’ hands. You’re going to improve the driving experience by making it a riding experience, you’re going to save lives and do a lot of good. And staying on the low-cost angle here: your system has no lidar, and I can tell you that’s one of the biggest debates we have here. Why did you not use lidar to build Autobrains?
Igal Raichelgauz: I think, as an AI player, our task is to get to human level with vision only. Lidar should get us from human to superhuman. If you look at the brain, and that’s what Autobrains provides, those brains for the cars to drive autonomously, we must be able to get to human-level and above performance with vision only. Then lidar should create additional redundancy for robotaxis, for additional ODDs, and so on. That’s why we laser-focus on vision; that’s the human sensor.
Grayson Brulte: I would say you’re focused on practicality. I have a vision-only system in my vehicle, and it works flawlessly. I’ve gotten hundreds of thousands of miles with it, it has worked really well, and it drives better than I do, frankly. Igal, this has been a fascinating conversation. Putting it all together, what is the future of Autobrains?
Igal Raichelgauz: I believe the future of autonomous vehicles in general is at kind of an inflection point now, and we will see autonomy on the roads at mass scale within five years. There is already proof on one side, from players like Waymo, that autonomy is possible, unlike a few years ago or even two years ago. And on the other hand, there is a lot of scale from very advanced ADAS systems. So we still have this disconnect between the autonomy and the scale. The vision of Autobrains, what we call affordable autonomous driving based on this Thinking AI technology, is really to make the two happen in one vehicle, the scale and the autonomy, making it affordable, and doing that based on technology that brings us a few steps closer to how humans think.
Grayson Brulte: I’ll summarize it this way: Autobrains is building the brain to power autonomous driving, and autonomous driving is scaling globally. The future is bright. The future is autonomous. The future is Autobrains. Igal, thank you so much for coming on The Road to Autonomy today.
Igal Raichelgauz: Thank you. Thank you very much.
Subscribe to This Week in The Autonomy Economy™
Join institutional investors and industry leaders who read This Week in The Autonomy Economy every Sunday. Each edition delivers exclusive insight and commentary on the autonomy economy, helping you stay ahead of what's next.
