BENGALURU: The ethical concerns around Artificial Intelligence (AI) are serious and genuine, yet they are not being addressed to the extent they should be.
“We are in an AI hype cycle with scant empirical data on how it will help us mitigate deep-seated and critical societal and developmental issues. The perceived benefits of AI and Large Language Models (LLMs) at the moment are more in the nature of conjectures and need backing with solid evidence of impact on the ground, especially when it comes to under-served and marginal social groups,” Prof Amit Prakash, head of the Department of Digital Humanities and Societal Systems (DHSS) at the International Institute of Information Technology - Bangalore (IIIT-B), told TNIE.
On November 26, former OpenAI employee Suchir Balaji (26), who had openly criticised his former company’s practices, was found dead in his San Francisco apartment. Balaji had raised concerns about the ethics of using copyrighted materials to train generative AI models like ChatGPT.
“His (Balaji’s) concerns around the sourcing of data and ethics are genuine and not unfounded. What data is being used to train generative AI models? Is there a robust consent mechanism? The issue of possible copyright infringement is real and needs to be addressed,” said Prakash.
The senior researcher elaborated that while there are laws in India and several other countries on IT and data protection as well as guidelines on ethical AI by multilateral organisations like Unesco (UN Educational, Scientific and Cultural Organisation) and professional bodies like IEEE (Institute of Electrical and Electronics Engineers), the “institutional arrangements to implement and enforce them are still in infancy. There is a huge vacuum there”.
In India, the rules for the Digital Personal Data Protection (DPDP) Act, 2023, are still to be formulated.
“The unqualified and almost universal claims that AI, especially LLMs, will change things for the better for entire humankind need to be subjected to greater critical scrutiny. It is not entirely clear how and what kind of AI will help mitigate maternal deaths, child malnutrition, and lead to improvements in the state of primary health centres (PHCs) and anganwadis etc. in India’s hinterland.
“We don’t have solid examples of benefits yet. These are intense people-centric domains for which the state has to devise and implement policies, keeping people at the centre. Moreover, there is a need for greater transparency and acknowledgement of diverse perspectives, which is lacking in current brand AI,” added Prakash.
“The fascination with AI appears to derive more from a top-down and centralised approach. Questions Balaji is said to have raised on the source and use of data for creating generative AI models often go unattended, perhaps because of the hype of inevitability being created around AI. It looks like many people are joining the bandwagon more out of fear of being left out.
“Technologies promising to help the marginal sections of society and mankind at large are generally put under stringent scrutiny and regulatory oversight; so should AI and LLMs. Doubts and dilemmas around them need clear answers and resolution strategies. They should not be brushed aside in the name of protecting ‘innovation’,” added Prakash.