Minds, machines & morality: Principles & paradoxes in regulating AI

A narrow definition of AI risks regulatory blind spots, while a broad one may stifle innovation. Without clarity on what is being regulated, no noble principle can be effectively applied

Artificial Intelligence (AI) is no longer emerging; it is embedding. From cancer diagnosis to surveillance and legal drafting, AI has become a crucial decision-making layer in our lives. Yet while innovation accelerates, regulatory frameworks lag. Across regions, lawmakers invoke a familiar set of principles: fairness, explainability, accountability, safety and human oversight. But behind this apparent consensus lie deeper issues: these ideals are difficult to define, harder to measure and often impossible to enforce. The result is not just a regulatory lag, but a growing mismatch between the complexity of modern AI systems and the simplicity of legal frameworks.

Another foundational challenge is definitional: what exactly is AI? Too many definitions exist across the world. The term can refer to everything from statistical classifiers and chatbots to autonomous drones and artificial general intelligence. Until we reach a consensus, we risk a fragmented approach to regulation. A narrow definition risks regulatory blind spots, while a broad one may stifle innovation. Without clarity on what is being regulated, no noble principle can be effectively applied.

Fairness is the most cited principle in AI ethics. But what does “fair” mean? Metrics like demographic parity, equal opportunity and equalized odds can end up contradicting each other, as the sketch below illustrates. The idea of fairness is also culturally contingent: a gender-neutral hiring algorithm designed in the UK might completely miss caste or regional disparities in India.
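To see why these metrics can collide, consider a minimal sketch in Python. All the counts below are invented for illustration; the point is that when two groups have different underlying qualification rates, a model can satisfy equal opportunity while violating demographic parity, and vice versa.

# Minimal sketch with hypothetical screening numbers for two groups.
# Demographic parity compares selection rates; equal opportunity
# compares true positive rates (TPR) among the qualified.

def rates(tp, fp, fn, tn):
    """Return (selection_rate, true_positive_rate) for one group."""
    total = tp + fp + fn + tn
    selection_rate = (tp + fp) / total   # basis of demographic parity
    tpr = tp / (tp + fn)                 # basis of equal opportunity
    return selection_rate, tpr

# Hypothetical confusion-matrix counts: Group A has a 50% base rate
# of qualified candidates, Group B only 25%.
group_a = rates(tp=40, fp=10, fn=10, tn=40)
group_b = rates(tp=20, fp=5, fn=5, tn=70)

print(f"Group A: selection={group_a[0]:.2f}, TPR={group_a[1]:.2f}")
print(f"Group B: selection={group_b[0]:.2f}, TPR={group_b[1]:.2f}")

# Output: equal opportunity holds (TPR is 0.80 for both groups), yet
# demographic parity fails (selection rates 0.50 vs 0.25). Forcing
# equal selection rates here would instead force unequal TPRs.

The conflict is not a flaw in any single metric: when base rates differ across groups, these definitions are mathematically incompatible except in trivial cases, so regulators must decide which notion of fairness a given context demands.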

Transparency and explainability are essential for building trust in AI. Yet deep learning models like GPT-4 are fundamentally “black boxes”: they generate outputs by computing complex probabilistic relationships across billions of parameters. Forcing such systems to produce human-understandable explanations for their decisions is not just difficult; it can also be misleading and counterproductive. Accountability, too, becomes murky in distributed AI systems.

Meanwhile, the push for risk-based regulation, most visible in the EU AI Act, 2024, offers a pragmatic pathway but suffers from rigidity. A tool categorized as low-risk today might be repurposed into a high-risk context tomorrow. Generative models that began as everyday assistants now offer professional-level medical and legal advice. Without continuous and rigorous reassessment, risk tiers may become outdated, allowing ‘risk-washing’ by actors who underreport capabilities to evade scrutiny.

So, where do we go from here to ensure that AI is regulated and serves humanity? First, AI regulations must be adaptive: built-in sunset clauses, regularly updated risk tiers and regulatory sandboxes that test innovation under supervision can keep the framework dynamic. Second, accountability should combine traceable causality with tiered liability across the AI lifecycle: developers for design flaws, deployers for misuse and auditors for systemic bias. Third, laws must embrace ethical pluralism: fairness cannot be standardized globally, but it can be localized. Fourth, AI oversight could be integrated with sectoral regulators in finance, health and transport, rather than building new siloed verticals. And fifth, we need global governance coalitions that bridge nations on core values and build consensus.

Regulating AI is not just a legal exercise; it is an ideological balancing act for human civilization. Like the samudra manthan in Hindu mythology, AI law must separate promise from peril, efficiency from exploitation and progress from predation. To do this, we must move towards responsive, adaptive, rights-anchored and risk-sensitive regulation.

Kunal Srivastava | Director (Finance), Department of Telecommunications

(Views are personal)
