

NEW DELHI: Army commanders will increasingly need to become “technocrats” capable of understanding algorithms and questioning machine-generated outputs as artificial intelligence becomes embedded in battlefield systems, Deputy Chief of Army Staff (Information Systems and Training) Lt Gen Vipul Shinghal said Wednesday.
“More and more commanders have to start becoming technocrats and understand what is happening inside, what data is being used, and how it is being manipulated to arrive at a decision,” he said while speaking on trust gaps in AI systems at the Synergia Conclave in the national capital.
The growing use of AI-enabled decision-support tools is also sharply shrinking the time available for battlefield decisions, he said. “The time between spotting a target and the system suggesting action is very small,” he noted, adding that AI-driven systems can process real-time inputs from drones, satellites, and ground sensors and recommend strikes within seconds.
This creates a dilemma for commanders operating in high-tempo combat environments. “If he doesn’t press that button and doesn’t act on what the AI has told him, he may lose the opportunity and may be questioned about it later. Whereas if he does it and goes wrong, then where is the moral buffer?” he said.
“The AI system cannot provide a moral buffer to the commander. He is still morally responsible since lives are at stake.” Trust in such systems is therefore critical in military applications of artificial intelligence, Lt Gen Shinghal added.
Illustrating the risks of relying solely on AI outputs, he described a scenario where a system detects movement and recommends a strike.
“The sensor is seeing movement. It assumes it is adversary troops because it is only supposed to be troops there,” he said. But the system may not know that a civilian evacuation is underway in the same area.
“The commander pauses and asks, ‘What does the system not know?’” he said, underlining the role of human judgement in preventing a mistaken strike.
The senior officer emphasised that the Indian armed forces are adopting AI-enabled systems across areas such as surveillance, reconnaissance, logistics, and inventory management. “As far as the Indian Army and armed forces are concerned, we are fully aligned with the transformational nature of AI,” he said.
However, he stressed that safeguards are necessary. “We should be sure which decisions can be delegated to AI and which must remain with the human in the room. And this must be built into law,” he said.
Even highly accurate systems cannot be allowed to make autonomous lethal decisions, he cautioned. “Even with 90 per cent accuracy… that 10 per cent is too dangerous to be allowed to operate automatically.”
He also underlined what he called a critical requirement for AI in military systems: technological sovereignty.
“Data, models, networks, and hardware need to be there. Otherwise the commander cannot have trust in the system,” he said, stressing that control over the entire technological stack is essential if the armed forces are to rely on AI in combat situations.
Transparency in how AI systems arrive at their recommendations is equally important, he added. “The black box has to become a glass box.”
The Deputy Army Chief also emphasised the need for what is increasingly being described globally as “meaningful human control” over AI-enabled weapons and decision-support systems.
“Commanders must have sufficient situational awareness, adequate time to take a decision, and the ability to override, abort, and intervene,” he said.
Ultimately, he said, the ethical framework guiding the use of force cannot be delegated to machines.
“In the Indian context, we have always believed that shakti must go hand in hand with dharma, force must go hand in hand with righteousness.”