Opinion

Risks and realities of killer robots

Several countries are developing lethal autonomous weapon systems that function without moral reasoning and situational awareness. Current international legal frameworks can’t address the deep ethical dilemmas they pose. Work towards a global treaty on their use is hobbled by geopolitical divisions.

Aditya Sinha

In his short story ‘Runaround’, Isaac Asimov introduced the Three Laws of Robotics to explore the moral boundaries of machine intelligence. His robots were programmed to preserve human life, obey ethical constraints, and act only within a tightly defined moral architecture. These laws forced readers to grapple with the limits of delegation and the necessity of conscience in decision-making. That insight is especially relevant today, as warfare increasingly incorporates unmanned systems.

In recent conflicts, including India’s Operation Sindoor, Azerbaijan’s use of Turkish drones against Armenian forces, and Ukraine’s deep drone strikes into Russian territory, all offensive systems remained human-operated. Humans directed target selection, authorisation and engagement. But now, as the global defence landscape shifts toward lethal autonomous weapon systems (LAWS), Asimov’s warning grows more relevant. Unlike the author’s fictional robot Speedy, these systems will not hesitate when ethical ambiguities arise. They will not wait for human correction. They will act without the possibility of a moral pause.

LAWS are weapons that can select, track, and engage targets without real-time human control. They rely on AI, sensor fusion, and machine learning algorithms to make independent targeting decisions. This autonomy dramatically accelerates response times and expands operational reach, but at significant ethical and legal cost. The development of LAWS is already underway in multiple countries.

The US, China, Russia, Israel and South Korea have invested heavily in autonomous platforms ranging from loitering munitions to swarming drones and autonomous ground systems. The US military has demonstrated autonomous swarms in exercises like Project Convergence; China is integrating AI into hypersonic systems and naval platforms; and Russia has tested robotic combat vehicles like the Uran-9. Although fully autonomous systems capable of making unsupervised kill decisions have not yet been officially deployed, the technological threshold is fast approaching.

These systems come with six kinds of risk. First, LAWS rely on AI models for target recognition; however, these models are vulnerable to adversarial attacks, such as infrared decoys or altered clothing patterns, which can cause civilians to be misidentified as combatants and thereby violate the principle of distinction under international humanitarian law.

Second, as reaction times shrink to milliseconds, human oversight becomes impractical. Decisions will be driven by opaque AI models like deep neural networks, undermining transparency and making post-strike accountability nearly impossible.

Third, when deployed in swarms or decentralised formations, LAWS may exhibit emergent behaviour (actions not explicitly programmed), leading to unpredictable escalation. In high-stakes conflicts, such as those involving missile defence or critical infrastructure, unintended escalation could have strategic consequences.

Fourth, delegating lethal decisions to non-sentient machines erodes the ethical core of warfare, as LAWS cannot interpret surrender, intent, or moral nuance, thus violating principles like proportionality and humanity.

Fifth, the ‘simulation-to-real’ gap remains a persistent risk, as AI systems trained in constrained simulations may fail amid real-world battlefield complexity, sensor noise, or unexpected tactics.

Sixth, the global spread of LAWS will lower the threshold for autonomous warfare. Cheap, dual-use AI components will empower both states and non-state actors, while the absence of verification norms will make it easy to embed autonomy covertly, destabilising arms control.

Thus, there is a need for a regulatory framework on LAWS. While existing international legal regimes (including international humanitarian law, international criminal law, and Article 36 of Additional Protocol I to the Geneva Conventions) may formally apply, they are technically inadequate for regulating the unique risks these weapons pose. These frameworks presuppose predictable and human-controllable weapons, whereas LAWS make both ex ante legal review and ex post accountability nearly impossible. Core principles such as distinction, proportionality and precaution require moral reasoning and situational awareness that machines lack.

Critically, current law offers no enforceable technical metrics, audit standards, or verification protocols. Ethical safeguards such as the Martens Clause are too abstract to effectively constrain systems that are incapable of recognising intent or emotions. In effect, the existing legal architecture, while symbolically comprehensive, is practically obsolete in the face of adaptive autonomy in warfare.

In 2023, the US put forward the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which 50 countries had endorsed by November 2024. While the effort marks a welcome attempt at global norm-building, it remains non-binding and lacks any enforcement or verification mechanisms.

What is needed, therefore, is a legally binding instrument on LAWS, along the lines of the Nuclear Non-Proliferation Treaty. While momentum is building for such a treaty, its adoption remains stalled by persistent geopolitical divisions and procedural gridlock within the Convention on Certain Conventional Weapons.

A meaningful treaty must go beyond existing proposals and impose a categorical ban on fully autonomous weapons. No amount of regulation can fix the accountability vacuum, algorithmic bias, or escalation risk these systems pose. The treaty should outlaw pre-delegated authority to kill and mandate human decision-making in every lethal engagement. Ultimately, the goal must be to prevent the normalisation of machine-driven violence and uphold human dignity as a non-negotiable baseline in conflict.

Aditya Sinha | Public policy professional

(Views are personal)

(On X @adityasinha004)
