

The recent AI Impact Summit in Delhi was more than a diplomatic gathering—it was a declaration of intent. For years, the global conversation on artificial intelligence has been monopolised by the Global North—nations that, for all their computational prowess, lack firsthand experience of the challenges faced by the developing world. India’s singular achievement at this summit was to force a reckoning with that asymmetry.
The landmark New Delhi Frontier AI Impact Commitments—signed by nearly all major AI companies, Indian and international alike—commit to context-aware evaluation and multilingual deployment tailored to local environments. This is no symbolic gesture. For a country as linguistically and culturally diverse as India, it is a structural necessity. Unless the entities that control AI power are willing to bring the rest of the world into the conversation, demands for inclusion will go unheard. That they signed is therefore significant.
But the most striking revelation was not in a policy document—it was in the expo’s hallways. Attendees were quite literally pushing at doors to get into sessions. Global tech leaders, accustomed to cautious or hostile public reactions in Western capitals, were blindsided by the sheer enthusiasm. Young Indians do not view AI through the lens of fear—of displacement, control or existential risk. They see it as a partner: a tool for personal uplift and national progress. That is a civilisational mindset shift, and the world should pay attention.
Enthusiasm, however, is not policy. Converting this energy into genuine AI sovereignty begins with a hard question: do we have enough people who can build AI from the ground up? Not just fine-tune it or wrap it in an app, but construct it from first principles?
The answer today is: barely. Building foundational AI for India requires a fundamentally different way of thinking. Indian languages demand novel approaches to tokenisation that cannot simply be borrowed from English-centric architectures. The constraint of limited compute is not merely a handicap; it is a design challenge that forces us to innovate.
Clear proof of this potential was on display at the summit. Sarvam AI, the Bengaluru-based startup, unveiled sovereign foundational models like Sarvam-30B and Sarvam-105B, built specifically to reason and code within India’s unique constraints. But Sarvam’s impact goes beyond the models it releases; the company itself operates like a foundry for talent. By tackling the hardest problems in AI engineering—training large-scale models on constrained infrastructure—it is forging a new generation of research engineers with rare, high-value skills. We will soon see new companies launched by engineers trained at this ‘Sarvam foundry’, carrying that expertise into the wider ecosystem.
But a few companies acting as a training ground is not enough. We need a concerted, nationally coordinated effort to expand this talent pool—through universities and industry partnerships that rethink what engineering education in an AI-native India should look like.
Gaps no one is talking about
Even the best engineers hit a ceiling without deep fundamental research. Pushing the boundaries of what AI can do—particularly under severe compute and data constraints—requires sustained investment in academic enquiry that does not strictly aim for a product launch in six months.
This is precisely where India's AI ambitions are most vulnerable. Building models that genuinely serve the population—reason about land records in regional scripts, or interpret a health worker's voice note—requires original research. The solution lies in structured academia-industry joint ventures: partnerships where university rigour meets industry scale. These exist informally today; they need to be formalised, funded and multiplied.
The Safe and Trusted AI working group at the Summit proposed setting up a Trusted AI Commons—an open platform for safe AI tools across the Global South. But ‘safe and trusted AI’ in India is far more complex than the global discourse suggests. The standard challenges—bias, opacity, non-determinism—take on an entirely different dimension here.
Consider farmer advisory bots. While celebrated as a way to democratise agricultural expertise, we lack a standardised benchmark to evaluate if they actually work—not even in English, let alone in Kannada or Bhojpuri. Without rigorous evaluation frameworks, we risk deploying tools that are untested in the wild.
The bias problem is even more fraught. India has axes of discrimination, such as caste, that are invisible to models trained on Western data. While early efforts from IIT Bombay and IIT Madras have begun surfacing examples of such bias, we lack datasets large enough to actually de-bias models during training. Addressing this is not technically intractable, but it requires resources we have not yet allocated.
There are subtler nuances—like how AI represents India's geographic boundaries or regional identities—that are political and cultural landmines. These are not problems AI researchers can solve in isolation; they require historians and sociologists in the loop.
Perhaps the most important framing shift is viewing AI not as a disruptor, but as a co-creator. Realising AI's potential for social uplift requires carefully integrating it into existing workflows rather than imposing it over them.
Take India's ASHA health workers. If AI is deployed as a ‘replacement’, it threatens the livelihoods of the very women who form the backbone of rural healthcare. But if designed as an augmentation—helping them triage high-risk pregnancies or digitise records—it transforms them into AI-assisted healthcare providers. The difference between rejection and adoption is entirely a function of design and respect.
Finally, on data sovereignty, the position must be unambiguous: Indian data should be processed on Indian hardware, preferably by Indian models. Investment is welcome, but not at the cost of digital colonisation.
India’s demographic dividend and startup ecosystem are structural advantages, but they are not guarantees. Urban youth are already training themselves, finding resources, and building skills outside any formal system. That is admirable, but it means the dividend will pay out only in cities if we are not deliberate. Ensuring that AI bridges rather than widens existing inequalities is a prerequisite for inclusive growth.
The summit has ended. The real work—unglamorous, granular, generational—has just begun.
B Ravindran | Head, Wadhwani School of Data Science and AI, IIT Madras
(Views are personal)