Why we need to be proactive on AI laws

Without proactive regulations, artificial intelligence can exacerbate inequality, displace segments of the workforce and concentrate power in the hands of a few.

The Indian IT sector’s rise in the late 1990s and 2000s presents a fascinating case in regulatory studies, particularly concerning the role, or notable absence, of ex ante regulations. During its formative years, while global regulators were entangled in understanding and governing the complexities of IT, Indian policymakers largely remained disengaged. This regulatory vacuum, analysed in works such as Robert Baldwin’s 2012 research, provided fertile ground for innovation unburdened by the stifling constraints of premature regulation. The sector’s success can be partially attributed to this.

However, this does not suggest that the absence of ex ante regulations is inherently beneficial across all sectors. George J Stigler’s 1971 model of economic regulation highlights how, in the absence of effective regulation, industries often co-opt regulatory bodies to serve their own interests; self-regulation, in other words, sometimes fails. This phenomenon, known as regulatory capture, can allow influential firms in emerging sectors like AI to dominate the market, stifling competition while imposing negative externalities on society.

Daron Acemoglu and Pascual Restrepo built on this in 2020 by examining the broader macroeconomic and societal impacts of automation and AI. Their research shows how, without proactive regulatory intervention, these technologies can exacerbate inequality, displace large segments of the workforce, and concentrate economic power in the hands of a few dominant firms. The absence of regulation can trigger societal disruptions, as the unchecked deployment of AI systems might entrench biases, amplify disparities, and even erode democratic institutions by undermining public trust in digital governance.

Ex ante regulations for AI require regulators to anticipate risks before they materialise. These regulations set forth legally enforceable standards to pre-emptively protect against systemic failures, unethical practices, and threats to market integrity. However, regulators face significant challenges, including information asymmetry, where they may lack knowledge that industry players possess. This disparity necessitates that ex ante regulations be not only legally robust but also highly adaptable.

We now have some understanding of the risks AI can generate. In 2024, Peter Slattery and others at MIT developed an AI risk repository that builds on previous efforts to classify risks by synthesising diverse perspectives into a unified classification system. Their approach identified two distinct types of classification: high-level categorisations of the causes of AI risks (for example, why or when the risks occur) and mid-level hazards or harms (for example, misuse or noisy training data).

The main domains identified include discrimination and toxicity (unfair AI treatment, exposure to harmful content), privacy and security (compromise of personal data, system vulnerabilities), misinformation (spread of false information, erosion of consensus reality), malicious actors and misuse (AI-driven disinformation, cyberattacks, weaponisation), human-computer interaction (overreliance on AI, loss of human autonomy), socioeconomic and environmental harms (power centralisation, job quality decline, environmental impact), and AI system safety and limitations (misaligned goals, dangerous capabilities, lack of robustness and transparency).

This comprehensive system highlights the wide range of risks and emphasises the complexities in regulating AI. The implications for regulations are significant, as this tool can help policymakers identify, prioritise, and address these multifaceted risks.

The recently enacted EU AI Act takes a risk-oriented approach that is less comprehensive. It regulates AI through a risk-based framework that categorises AI systems by risk level, from unacceptable to high to lower-risk tiers, so that applications are governed according to their potential for harm. However, the Act’s reliance on broad risk categories has been criticised for oversimplifying the complex and rapidly evolving nature of these risks. A regulatory system that fails to be adaptive becomes rigid. This can result in outdated oversight, allowing harmful applications to slip through the cracks.

The US has taken a more sector-specific approach, with agencies like the Federal Trade Commission issuing guidelines on AI usage, particularly concerning fairness and transparency in consumer protection. The US has also proposed the COPIED Act, which aims to enhance transparency around AI-generated content by requiring disclosures about its role in creating or altering media. The UK has opted for a principles-based framework, emphasising ethical AI development and innovation without rigid compliance requirements.

China has a more stringent, centralised framework that prioritises national security and social stability. The government mandates strict oversight of AI technologies, with regulations like the Algorithm Recommendation Service Management Regulations requiring companies to disclose the operational details of their algorithms. Enforcement of these rules is robust and closely integrated with the nation’s surveillance apparatus. Ironically, despite this, between 2014 and 2023, an astonishing number of GenAI-related inventions, the highest in the world, were filed from China.

These varying approaches point to the challenges India faces in crafting effective regulations. It’s evident existing regulations and regulatory bodies are insufficient to address the complexities of AI. The next column in this series will explore what India’s regulations should encompass.

Aditya Sinha

Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister of India

(Views are personal)

(On X @adityasinha004)

The New Indian Express
www.newindianexpress.com