

In the ‘Sauptika Parva’ of the Mahabharata, after the war at Kurukshetra, Ashvatthama and Arjuna each unleash the Brahmashirastra, a weapon of such totality that Vyasa himself must intervene. Arjuna, at Krishna’s counsel, withdraws his. Ashvatthama cannot, and in bitterness turns his towards the womb of Uttara. The sequence is instructive. Certain divyastras were withheld because their release would unmake the world, and the danger lay not in the weapon alone but in the asymmetry of those who held it.
In 2019, OpenAI announced that a large language model it had trained was too dangerous to release. It was GPT-2. It was released within months. Seven years on, the same warning has returned, and this time the warners are harder to dismiss.
On April 7, Anthropic—now valued at roughly $800 billion—declared its latest model, Claude Mythos, “substantially beyond” anything previously trained. CEO Dario Amodei decided against a general release. The UK’s AI Security Institute, the only non-US entity granted access, confirmed that Mythos completed a 32-step simulated cyber intrusion no prior model had accomplished.
Anthropic claims Mythos has identified severe ‘zero-day’ security flaws across every major operating system and web browser, among them a flaw that had evaded detection for 27 years, and others in Unix-family systems such as FreeBSD. CrowdStrike reports that AI-enabled cyberattacks rose 89 percent in 2025, and that the average time between initial access and malicious action fell to 29 minutes, a 65 percent acceleration over the 2024 average.
That is the controversy, shorn of dramatisation. A private American firm has built an autonomous agent capable of chaining multi-step intrusions—privilege escalation, lateral movement, exfiltration—against the commodity software stack on which the world’s banks, hospitals, power grids and payment systems rest. It has then decided, unilaterally, who may use it to patch themselves.
The 40-odd early-access firms in Anthropic’s Project Glasswing include Apple, Microsoft, Google, JPMorgan, Goldman Sachs, Cisco, CrowdStrike and the Linux Foundation. Eleven of the named partners are American. The only non-US door opened has been to the UK. The European Commission has met Anthropic three times and not been given the model. Germany’s BSI has received briefings, not access. Canada’s finance minister has compared the threat to a closure of the Strait of Hormuz. The Bank of England’s governor has said Anthropic may have “cracked the whole cyber-risk world open”.
India has no seat at this table. India’s scheduled commercial banks, NPCI’s UPI rails, the core banking stacks of public-sector lenders, GSTN, DigiLocker, FASTag, state load-despatch centres and CBDT servers run on precisely the commodity operating systems and browser runtimes in which Mythos has identified flaws. It was reported on April 22 that the Reserve Bank is reduced to consulting the Federal Reserve and Bank of England about a model it cannot examine. NPCI is attempting to secure early access and is being told that the servers are American and that local-data testing in foreign jurisdictions is not feasible. Britain’s worst-case bank-hack modelling, published before Mythos existed, described direct debits failing, rents and wages unpaid, ATMs frozen, point-of-sale payments rejected at petrol stations and a run beginning on rival lenders.
The challenges for the Indian government are of three kinds. First, access. The proposition that a private corporation may decide who is permitted to defend themselves is one no sovereign should accept.
Second, regulatory scaffolding. The IT Act of 2000 was drafted when broadband penetration in India was below 1 percent. Section 70A of that Act constitutes the National Critical Information Infrastructure Protection Centre (NCIIPC), but confers no evaluation power over foreign-trained models. The DPDP Act of 2023 addresses personal data, not frontier-model dual use. CERT-In’s directions of April 2022 mandate 6-hour incident reporting, but are silent on pre-incident model evaluation. MeitY’s AI advisories are, well, advisory.
Third, time. Anthropic itself expects comparable models to proliferate, including to less safety-conscious open-source labs, within 18 months.
What is to be done? Several concrete measures, each of them overdue. One, statutory evaluation access. The IT Act should be amended (whether by a new dedicated provision or by an extension of CERT-In’s powers under Section 70B) to require any frontier-model developer offering commercial services in India to submit the model for capability and misuse evaluation by a designated national evaluator, on the template the UK’s AI Security Institute has already established. No evaluation, no licence to operate. The UK, a jurisdiction of almost 70 million, secured such access in under two years. India has 1.4 billion, and a payments system processing over 18 billion transactions a month.
Two, an Indian AI Safety Institute, with statutory backing and the technical apparatus to conduct independent capability, jailbreak and agentic-behaviour evaluations. Not another advisory body. An institute with teeth.
Three, live red-team capacity at CERT-In and NCIIPC, running frontier-model-generated exploits against the critical information infrastructure sectors already notified (power, banking, telecom, transport, strategic public enterprises) rather than producing compliance checklists.
Four, a mandatory Software Bill of Materials (SBOM) for every critical information infrastructure operator, on the lines of US Executive Order 14028 of May 2021, with machine-readable SBOMs deposited with NCIIPC. Without an SBOM, a bank hearing that Mythos has found a FreeBSD zero-day cannot tell whether it runs FreeBSD, and where.
Five, revision of the RBI Cyber Security Framework for Banks of June 2016. Its 30-day patching window for critical vulnerabilities is obsolete. For zero-days scored 9.0 or above on the Common Vulnerability Scoring System (CVSS), the ‘critical’ band and the highest severity rating, the standard should be 72 hours, with direct regulatory reporting and secondary escalation to CERT-In. The Securities and Exchange Board of India’s parallel framework for market infrastructure institutions requires the same upgrade.
Six, a diplomatic track. The India-US Initiative on Critical and Emerging Technology (iCET) already covers AI cooperation on paper. Frontier-model safety evaluation access should be folded into its operative agenda, not left to the next ministerial communiqué. A parallel mechanism with the UK, given its privileged access, is more urgent. This is a bilateral question ex ante, not a multilateral aspiration ex post.
Seven, liability. Section 43A of the IT Act, which addresses corporate negligence with sensitive personal data, is silent on harm arising from frontier-model capability deployment. An equivalent strict-liability regime for foundation-model developers whose systems cause damage to notified CII is overdue.
India’s sovereignty over the computational substrate of its own financial system is, for the present, contingent on the goodwill of a San Francisco boardroom.
In the Mahabharata, Arjuna could withdraw his weapon. Ashvatthama could not. The lesson of that passage was never about weapons. It was about who held them, and on what terms. Project Glasswing contains no withdrawal clause for those not invited.
Aditya Sinha | Public policy professional
(Views are personal)
(On X @adityasinha004)