The five pillars of AI governance in India

Technologies evolve within interdependent, non-linear ecosystems where adaptability and interactivity are constant, underscoring the need for human intervention. Embedding human agency in AI oversight directly confronts the pitfalls of algorithmic governance.

The US experience with unregulated growth in social media and finance illustrates the dangers of delayed regulation. Social media platforms initially expanded without oversight, entrenching practices that later fuelled misinformation and privacy harms, while unchecked financial innovation led to the 2008 crisis, necessitating reactive measures such as the Dodd-Frank Act.

These examples underline the value of ex-ante regulation—proactive rules that mitigate risks before they mature. In a complex field like AI, policymakers must rely on theoretical frameworks to anticipate risks and close regulatory gaps.

Robert Baldwin contended that regulation in rapidly evolving sectors must incorporate adaptive mechanisms, enabling iterative updates as knowledge and technology develop. This aligns with Hood’s concept of ‘regulatory intelligence’, which encourages regulators to monitor the sector continuously. Finally, David Collingridge’s dilemma—the notion that early intervention is hampered by insufficient information, while later intervention is hampered by entrenchment—underlines the importance of scalable, flexible regulatory structures that evolve alongside the sector.

AI regulation in India should rest on five pillars. The first is the precautionary principle and ethical guardrails. India could take cues from the tiered risk framework of the EU AI Act, but with a tweak: risk in AI is inherently dynamic. A robust regulatory framework should therefore incorporate adaptive risk categorisation, under which AI applications are periodically reassessed as new information and technologies emerge.

This could be achieved through a regulatory evolution mechanism that regularly updates risk levels based on real-world performance and emerging evidence. High-risk AI applications could initially enter a regulatory sandbox for controlled deployment, where risks are closely monitored and restrictions adjusted in real time. Additionally, an AI oversight committee of interdisciplinary experts that periodically reassesses and reclassifies AI risks would allow India to maintain a responsive, evidence-based regulatory environment.
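To illustrate the idea, adaptive risk categorisation can be thought of as a periodic reclassification routine. This is a toy sketch only; the tier names, thresholds, and function below are assumptions for illustration, not proposed rules.

```python
# Toy sketch of adaptive risk categorisation: an application's risk tier
# is recomputed from observed evidence at each periodic review. Tier
# names, thresholds, and this function are illustrative assumptions.

RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

def reassess_risk(current_tier: str, incident_rate: float, scope_expanded: bool) -> str:
    """Move an application up or down one tier based on new evidence."""
    idx = RISK_TIERS.index(current_tier)
    if incident_rate > 0.05 or scope_expanded:   # evidence of rising risk
        idx = min(idx + 1, len(RISK_TIERS) - 1)
    elif incident_rate < 0.01:                   # sustained safe operation
        idx = max(idx - 1, 0)
    return RISK_TIERS[idx]

# Example: a sandboxed high-risk system with a clean monitoring record
# is stepped down one tier at its review.
print(reassess_risk("high", incident_rate=0.004, scope_expanded=False))  # limited
```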

The second pillar is human oversight, as advocated by an EAC-PM working paper. This emphasis is integral to AI governance frameworks, especially when considered through the lens of Complex Adaptive Systems (CAS) theory. CAS, advanced by Holland (1992), suggests that technologies evolve within interdependent, non-linear ecosystems where adaptability and interactivity are constant, underscoring the need for human intervention to manage AI’s unpredictable effects. Collingridge’s (1980) “control dilemma” further reinforces this need, highlighting how regulatory intervention becomes increasingly challenging once technologies are entrenched.

Embedding human agency in AI oversight directly confronts the pitfalls of algorithmic governance, where self-reinforcing feedback loops can produce rigid path dependencies that not only intensify systemic biases but also compound unintended consequences. This adaptive regulatory framework thereby aligns with governance theories that advocate for flexibility and accountability in managing evolving systems.

The third pillar should focus on data sovereignty and strategic data ethics, explicitly targeting the protection of critical datasets within a geopolitical framework. This pillar advocates for data governance models that align with the objectives of the Digital Personal Data Protection Act and extend further, embedding principles of “contextual integrity” to address cross-border data flows and the strategic implications of genomic data localisation.

Fourth, a robust framework with actionable guidelines is essential to address bias and promote fairness in AI systems. AI developers should implement structured protocols grounded in the principles of fairness, accountability, and transparency in machine learning. Key actions include conducting pre-deployment bias audits and establishing fairness metrics as standard requirements, particularly in public-sector applications where equitable outcomes are critical. Ensuring demographic representativeness in training datasets can further help address systemic disparities, and regular bias impact assessments should be mandated so that emerging biases are identified and corrected.
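As a concrete example of what a pre-deployment bias audit could measure, here is a minimal sketch of one common fairness metric, the demographic parity gap. The function and data are hypothetical, and demographic parity is only one of several candidate metrics an audit might report.

```python
# Minimal sketch of one fairness metric a bias audit might report:
# the demographic parity gap, i.e. the spread in positive-outcome
# rates across groups (0.0 means perfect parity). All names and data
# here are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Return max minus min positive-outcome rate across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n, positives = counts.get(group, (0, 0))
        counts[group] = (n + 1, positives + (1 if pred == 1 else 0))
    rates = {g: p / n for g, (n, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: a model's binary decisions for applicants from two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap: {demographic_parity_gap(preds, grps):.2f}")  # 0.75 - 0.25 = 0.50
```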

Fifth, AI systems must adhere to rigorous standards for explainability, robustness, and traceability to achieve algorithmic accountability and transparency. Grounded in frameworks by Frank Pasquale and Brent Daniel Mittelstadt, an accountability structure can be built by requiring developers and operators to document critical information such as training data sources, preprocessing methods, and the results of post-deployment audits.

To standardise transparency, ‘model cards’ and ‘datasheets for datasets’ should be mandated, ensuring users and regulators can assess AI systems’ objectives, limitations, and potential impacts. In a country like India, this can be achieved by embedding these standards within the regulatory framework for AI applications, prioritising high-stakes sectors such as healthcare, finance, and public services.
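To show what such documentation might look like in practice, here is a simplified sketch in the spirit of Mitchell et al.’s model cards. The schema is an assumption for illustration, not a mandated format, and every field value is hypothetical.

```python
# Simplified, hypothetical 'model card' expressed as structured data.
# The schema loosely follows the spirit of model cards; it is an
# illustrative assumption, not a prescribed standard.
model_card = {
    "model_name": "loan-eligibility-screener",  # hypothetical system
    "intended_use": "Pre-screening of loan applications; not a final decision-maker",
    "training_data": "Anonymised application records (sources documented)",
    "preprocessing": "Missing-value imputation; income normalised to annual INR",
    "limitations": "Not validated for applicants under 21; rural data under-represented",
    "fairness_evaluation": {"demographic_parity_gap": 0.04},
    "post_deployment_audit": "Quarterly bias impact assessment filed with the regulator",
}

# Print the card so a user or regulator can review each field.
for field, value in model_card.items():
    print(f"{field}: {value}")
```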

In short? Let’s ensure AI remains intelligent and transparent, not covertly overstepping its bounds. We want systems solving problems, not plotting global takeovers or making unsolicited choices on our behalf.

Aditya Sinha

Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister of India

(Views are personal)
