In a country where road realities are complex and technology often struggles to keep pace, Deepgrid Semi is building solutions grounded in how India actually moves. Founded in 2024 by Aravind Prasad and based out of Hyderabad's T-Hub Phase 2, the semiconductor startup is developing next-generation AI-powered System-on-Chip (SoC) solutions for Advanced Driver Assistance Systems (ADAS) and smart mobility. At the core of its innovation is the DGrid-SoC, a low-power, high-parallelism chipset designed for real-time edge processing that enables sensor fusion and intelligent decision-making in high-intensity environments. The startup's approach has already begun to gain recognition: it was selected among the Top 50 at TiE Hyderabad, the Top 8 at VLSID, and most recently among the Top 10 startups out of 600+ applicants for the HDFC Parivartan programme with T-Works, which came with a Rs 25 lakh grant. These milestones mark a strong validation of its technology and direction as it works to advance vehicle safety and next-generation mobility infrastructure in India. CE speaks with Aravind Prasad to understand the technology, how it works, and more.
Excerpts:
Could you take us through what led to starting Deepgrid Semi?
I’ve been an entrepreneur from the very beginning. I built my first company, followed by Evolgence Telecom around 2007–08. I then moved on to build other companies, including an algorithm company and, around 2014, another venture focused on AI algorithms and data centres. By 2018, I was working on edge solutions. Later, during COVID, something caught my attention: there were reports suggesting that COVID had been ‘good’ in one respect, because there were fewer road accidents. That startled me. Looking back, I found that around 2007, accident-related deaths were about 30,000–40,000. But over 15 years, those numbers had doubled. When I checked the data more closely, I saw nearly five lakh accidents and around two lakh deaths annually. Even 15 years after my cousin’s death in an accident, the problem had only worsened. I realised that this issue had largely been addressed in countries like the US, where deaths are around 16,000. So why is India so different? The key issue is that nearly 80% of deaths involve trucks, which lack systems like lane detection, forward collision warning, and adaptive cruise control. These technologies are largely missing because chipsets are expensive. I also came across research by professors stating that 80% of accidents could be avoided with basic ADAS systems. That’s when I began to question whether the cost barrier could be solved, and that’s where the journey began.
How did you go about solving this problem?
During COVID, I had the time to go deep into semiconductor architecture. Being an electronics engineer, I went back to hardware. I worked with senior experts: Satyanarayana from ISRO and Subramaniam sir, a veteran from Thermo Fisher in the semiconductor space. They helped architect the solution and confirmed that it was possible. We realised that competing with companies like NVIDIA in general-purpose computing is extremely difficult. But if you focus on a very specific problem, like ADAS at the edge, you can innovate. Instead of using large transformer models designed for the cloud, we created a transformer-native, domain-specific chipset. We brought non-linear equations directly into the chip. It works like a lookup table: you pre-calculate values and retrieve them instantly instead of computing them every time. With this approach, we reduced the cost from about $3,000 to $30 at scale. This is critical because India is extremely price-sensitive. If the solution costs more than Rs 30,000–40,000, the trucking industry won’t adopt it. Many wouldn’t even install a camera otherwise.
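To make the lookup-table idea concrete, here is a minimal sketch in Python. It is illustrative only, not Deepgrid’s implementation: the chosen non-linearity (a sigmoid), the input range, and the table size are all assumptions for the example. The principle is the one Prasad describes: compute the function once at design time, then answer every later query with a memory read.

```python
import numpy as np

# Illustrative sketch of the "pre-calculate and retrieve" idea described above.
# The function, range, and table size are assumptions, not DGrid-SoC details.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_lut(fn, lo=-8.0, hi=8.0, entries=1024):
    """Precompute fn over [lo, hi] once, at design time."""
    xs = np.linspace(lo, hi, entries)
    return xs, fn(xs)

def lut_eval(x, xs, table):
    """Evaluate fn(x) by a table read instead of computing it each time."""
    idx = np.clip(np.searchsorted(xs, x), 0, len(xs) - 1)
    return table[idx]

xs, table = build_lut(sigmoid)                  # one-time cost
activations = np.random.uniform(-8, 8, 10_000)
approx = lut_eval(activations, xs, table)       # per-inference cost: a lookup
exact = sigmoid(activations)
print("max approximation error:", np.abs(approx - exact).max())
```

In silicon, such a table would sit in on-chip memory, so evaluating a non-linear equation costs roughly one read instead of a chain of multiply and exponentiate operations, which is broadly where designs like this save power and cost.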
Can you explain the development and validation process of your chip?
We started with a 130-nanometer tape-out for validation. This stage, known as silicon validation, tests whether the model actually works on the chip. The cost of this stage is around Rs 20 lakh. Once validated, we move to a full tape-out. The $30 cost I mentioned applies when we manufacture at scale — around a million units. The key innovation is that we built a transformer-native chipset specifically for ADAS, making real-time edge processing both efficient and affordable.
What exactly does your system offer?
We provide a complete system: compute, model, and display. The display could be a smart mirror, an Android screen, or a head-up display — we integrate those, but our core is the compute and the model. We first built an AD0 system, which is an alert system. It detects lanes, monitors distance from other vehicles, and gives warnings — both visual and voice — if you get too close. It’s designed not to distract the driver but to assist. We then built more advanced capabilities. The system works even without lane markings, which is very common in India. It predicts driving behaviour using inputs from cameras and LiDAR. It generates waypoints for the next few seconds — essentially predicting what the vehicle should do — and adjusts speed, braking, and steering accordingly. For example, if it detects an obstacle, it slows down, stops, and then proceeds when safe.
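As a rough illustration of the alert-and-waypoint loop described above, consider the Python sketch below. The thresholds, units, and interfaces are invented for the example and are not Deepgrid parameters; it simply shows warn-then-brake decisions driven by time-to-collision, plus a short waypoint horizon of the kind the answer describes.

```python
from dataclasses import dataclass

# Toy sketch of the alert/waypoint loop described above. Thresholds and the
# controller interface are invented for illustration only.

@dataclass
class Perception:
    gap_m: float       # distance to the vehicle or obstacle ahead, metres
    speed_mps: float   # own speed, metres per second

WARN_TTC_S = 3.0       # warn below 3 s time-to-collision (assumed)
BRAKE_TTC_S = 1.5      # brake below 1.5 s (assumed)

def plan(p: Perception, horizon_s: float = 3.0, steps: int = 6):
    """Return an action plus (time, distance) waypoints for the next few seconds."""
    ttc = p.gap_m / p.speed_mps if p.speed_mps > 0 else float("inf")
    if ttc < BRAKE_TTC_S:
        action, target_speed = "brake", 0.0
    elif ttc < WARN_TTC_S:
        action, target_speed = "warn", p.speed_mps * 0.7   # ease off
    else:
        action, target_speed = "cruise", p.speed_mps
    dt = horizon_s / steps
    waypoints = [(round(i * dt, 2), round(target_speed * i * dt, 1))
                 for i in range(1, steps + 1)]
    return action, waypoints

# 20 m gap at 12 m/s gives a TTC of about 1.7 s, so this prints a "warn" plan.
print(plan(Perception(gap_m=20.0, speed_mps=12.0)))
```

In the actual system, these decisions would be fed by the fused camera and LiDAR perception rather than a single distance number.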
Why don’t global ADAS systems work well in India?
Systems in cars like Mercedes or BMW depend heavily on structured inputs — lane markings, traffic signals, signs, and HD maps. These are not consistently available in India. Driving in India is behaviour-driven. People adapt based on how others are driving. A child here can cross a road easily, but a foreigner struggles. So instead of relying on structured inputs, our model learns from driver behaviour and sensor data. That’s the core difference. The model is trained on how people actually drive in India.
You demonstrated multiple applications. Where else can this chipset be used?
Initially, our starting point was emergency braking for trucks. But the chipset evolved into a full autonomous stack. We’ve demonstrated it in autonomous mobile robots (AMRs), AGVs in industrial environments, and even humanoids. For instance, robots can move from one point to another, detect obstacles, wait, and then proceed. We’ve even tested humanoids performing tasks such as yoga. The same chipset can be adapted for different use cases by modifying sensors and system configurations. However, our primary focus remains very clear: reducing road accidents, especially in the trucking sector. That’s the problem we set out to solve.
How are you approaching multiple verticals like healthcare and robotics?
We are a use-case-first company. Once we built the foundational transformer-native chipset, it became adaptable across verticals. For example, for drones, we need to modify the hardware slightly. Healthcare is a potential area, but we’ve paused it because it requires FDA approvals and has a long regulatory cycle. Our immediate next focus is on radar SoCs and software-defined vehicles (SDVs). Beyond that, we have plans for the H100 (healthcare-focused) and T100 (data-centre-class) chips, where we are targeting very high compute performance, even up to supercomputing levels. We’ve planned our roadmap for the next 15 years, with each chip taking about two years.
What is the core design philosophy behind your chipset?
Traditionally, chipsets are built first, and then software is written on top. Our philosophy is different — we build domain-specific chipsets tailored to applications. We created a domain-specific accelerator that became a transformer-native architecture. This allows transformer models — similar to LLMs — to run directly on the edge in real time. That’s the fundamental shift.
How does your chip handle real-time processing from multiple sensors?
We use high-speed interfaces like PCIe for computation and MIPI for camera input. These enable fast data transfer from sensors. The transformer model performs sensor fusion — it combines data from cameras, LiDAR, radar, and ultrasonic sensors depending on the use case. Because the accelerator is specifically designed for this transformer, it processes data extremely fast — around 24 to 30 frames per second. That means real-time inference, matching camera speed.
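A toy PyTorch sketch of what transformer-based sensor fusion looks like in principle is shown below. The dimensions, token counts, and layer sizes are invented for the example, and the DGrid-SoC itself is a hardware accelerator, not Python code. The point it illustrates is that tokens from different sensors share one attention stack, which is what lets the model fuse them.

```python
import torch
import torch.nn as nn

# Minimal, illustrative sensor-fusion transformer. All sizes are assumptions
# for the example; this is not the DGrid-SoC's model or architecture.

D = 128  # shared embedding width (assumed)

class FusionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.cam_proj = nn.Linear(256, D)    # camera features -> tokens
        self.lidar_proj = nn.Linear(64, D)   # LiDAR features  -> tokens
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, 3)          # e.g. cruise / warn / brake logits

    def forward(self, cam_feats, lidar_feats):
        # Concatenate tokens from both sensors; self-attention mixes them
        # freely, which is what sensor fusion means in a transformer setting.
        tokens = torch.cat([self.cam_proj(cam_feats),
                            self.lidar_proj(lidar_feats)], dim=1)
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))  # one decision per frame

model = FusionModel().eval()
cam = torch.randn(1, 50, 256)    # 50 camera patch features (illustrative)
lidar = torch.randn(1, 30, 64)   # 30 LiDAR cluster features (illustrative)
with torch.no_grad():
    print(model(cam, lidar).shape)  # torch.Size([1, 3])
```

At 30 frames per second, the full pipeline has a budget of roughly 33 ms per frame, which is the real-time constraint the answer refers to; an accelerator designed specifically for this one workload is what lets the chip keep up with camera speed.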
What challenges have you faced building semiconductors in India?
The biggest challenge is the lack of an ecosystem. If I were in the US, I could just build a chip and the ecosystem would take over — validation, system integration, scaling, and market adoption. In India, we have to do everything ourselves. I have to build the chipset, the software, the system, and even the use case. For example, to prove my chip works, I end up building AGVs or UGVs myself. There is no forward integration where someone takes the chip and builds on it. So we are forced to backward integrate everything. This increases cost, effort, and time. What could take a few years elsewhere ends up taking much longer here. It’s not just my challenge — it’s something everyone in the semiconductor space in India faces.