The new, revolutionary frontier of technology—artificial intelligence or AI—is being debated intensely with fresh calls for regulation. The European Union already has a law in place, while the US and India are still mulling how to move forward.
The debate in India got a shot in the arm with the recent visit of chipmaking giant Nvidia’s co-founder Jensen Huang. He argued that rather than regulating the technology itself and restricting its growth, governments should regulate AI’s deployment.
“We should regulate AI in the context of every application. When you use AI as an accountant, that accountant should be regulated. When you use AI as a lawyer, that lawyer should be regulated.” AI is a major technological leap, providing tools to improve analytics and resource use through automation.
On the other hand, AI can become invasive. Using algorithms, it can break into the private domains of individuals and companies, and harvest data illegally.
Aadhaar and tax identities submitted in good faith have been used to harvest mountains of personal data from cyber storage sites. At another level, spyware such as Pegasus, planted on mobile phones for sophisticated surveillance, has reached commercial proportions.
Worried about the breach of privacy, the EU was the first to come out with a comprehensive law categorising different AI systems based on the levels of risk they pose and allotting commensurate obligations.
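The EU law’s tiered structure can be sketched as a simple mapping from risk category to obligation. The four tier names below reflect the EU framework; the one-line obligation summaries are illustrative simplifications, not the Act’s legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories broadly following the EU AI Act's tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative summaries only; the actual obligations run to many articles.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "deployment prohibited",
    RiskTier.HIGH: "conformity assessment, documentation, human oversight",
    RiskTier.LIMITED: "disclose to users that they are interacting with AI",
    RiskTier.MINIMAL: "no specific obligations",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the (simplified) obligation attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the tiered design is that the same rule book scales obligations with risk, rather than treating all AI systems alike.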
As a technology, AI is neutral; its risk depends on how it is coded and used. Does it take away jobs? Like computers before it, AI will initially displace some human jobs, but over time it will create new opportunities.
On the regulatory front, the government has adopted a wait-and-watch approach, issuing a series of advisories since March this year directing large platforms deploying “unreliable AI / large language models” to notify the government.
However, sooner rather than later, pending and existing laws such as the Digital Personal Data Protection Act 2023 and the Information Technology Rules 2021 will have to be brought in sync with AI deployment.
In this, not only the users of AI—as suggested by Jensen Huang—but also the developers will have to be answerable for any violation of individual privacy. One crucial question remains: how will the law regulate the actions of the state as an AI user?