Sebi proposes five-point rulebook for responsible use of AI, machine learning

In the new draft, Sebi has proposed that market participants disclose their use of AI and ML tools in operations like algorithmic trading, asset management, portfolio management, and advisory services.
Securities and Exchange Board of India (Sebi). (File Photo | PTI)

MUMBAI: Markets watchdog the Securities and Exchange Board of India (Sebi) has released a consultation paper proposing a five-point regulatory framework for the responsible use of artificial intelligence (AI) and machine learning (ML) in the securities markets. The guidelines specify procedures and control systems to ensure responsible usage, transparency, fairness, data security, and risk controls, aiming to balance innovation with investor protection.

The proposed guidelines cover several key parameters, including governance, investor protection, disclosure, testing frameworks, fairness and bias, and data privacy and cybersecurity measures. AI and ML tools are widely used by stock exchanges, brokers, and mutual funds for purposes such as surveillance, social media analytics, order execution, KYC processing, and customer support.

In the new draft, Sebi has proposed that market participants disclose their use of AI and ML tools in operations like algorithmic trading, asset management, portfolio management, and advisory services. Disclosures should include information on risks, limitations, accuracy results, fees, and data quality, says the draft paper.

The draft says market participants using AI and ML will have to designate senior management with technical expertise to oversee the performance and control of these tools. They must also maintain validation, documentation, and interpretability of these models, and share accuracy results and audit findings with Sebi on a periodic basis.

The regulator has emphasised the importance of defining data governance norms, including data ownership, access controls, and encryption. It has also noted that these technological tools should not favour or discriminate against any group of customers.

“Market participants should think beyond traditional testing methods and ensure continuous monitoring of AI/ML models as they adjust and transform,” says the Sebi draft paper.

On cybersecurity and data privacy, the Sebi paper highlights risks such as the use of generative AI to create fake financial statements, deepfake content, and misleading news articles. To mitigate these risks, Sebi recommends human oversight of AI systems, monitoring of suspicious activities, and the implementation of circuit breakers to manage AI-driven market volatility.

Sebi has formed a working group to prepare these guidelines and address concerns related to AI and ML applications. The regulator has suggested a ‘lite framework’ for business operations that do not directly impact customers.

The draft paper is open for public comments until July 11.
