Opinion

Financial fraud | Battling threats with AI-driven compliance

With the growth of digital transactions, fraud has also grown manifold, including fraud driven by AI tools. While the RBI and the National Payments Corporation of India are deploying AI to detect and prevent threats, AI can also help fix systemic compliance lapses and strengthen the system

Sasmit Patra

India’s digital payments landscape has witnessed exponential growth, with over 18,000 crore transactions recorded in 2024-25. UPI transactions alone surged by 137 percent to ₹200 trillion in 2023-24. 

However, this surge has been accompanied by a significant rise in digital financial frauds. Between April 2024 and January 2025, the country reported 24 lakh digital fraud incidents, amounting to losses of ₹4,245 crore, a 67 percent increase from the previous year. High-value cyber fraud cases, involving sums exceeding ₹1 lakh, have also escalated, with 29,082 such incidents causing losses of approximately ₹175 crore.

The sharp spike in financial fraud in India can be traced to a range of contributing factors. One major driver has been the rapid shift to mobile-based and UPI platforms, which, while transformative, has outpaced user awareness and digital literacy. As a result, a large section of users remains highly susceptible to fake payment links, fraudulent apps and phishing attempts. At the same time, fraudsters are becoming more sophisticated, increasingly relying on AI-generated content, deepfakes and other advanced techniques to manipulate and mislead people.

But it’s not just about individual vulnerabilities or evolving fraud tactics. Beneath the surface lies a deeper issue: systemic compliance lapses. Weak enforcement of onboarding norms, gaps in merchant verification and inconsistent application of regulatory protocols are creating blind spots across the payments ecosystem.

Addressing these structural flaws is just as important as strengthening frontline defences, especially if we hope to build a secure and resilient digital financial system. Against this backdrop, institutions are turning to artificial intelligence for proactive threat detection, automated incident response and adaptive risk modelling.

Financial institutions and regulatory bodies are already turning to AI and machine learning (ML) technologies. The Reserve Bank of India has introduced MuleHunter.AI to detect and eliminate mule accounts, which are often instrumental in fraudulent financial schemes.

Additionally, the National Payments Corporation of India (NPCI) has launched a pilot project implementing a federated AI model in collaboration with leading banks to enhance fraud detection and risk assessment across the banking ecosystem. Mastercard’s decision intelligence platform analyses 16,000 crore transactions annually, assigning risk scores in milliseconds to block unauthorised activity. 

AI-driven models play several key roles. Threat detection and prevention relies on anomaly detection and behavioural analysis to recognise suspicious actions: ML algorithms continuously monitor transactions and flag unusual activity. Automated incident response is another: AI-powered security orchestration, automation and response (SOAR) systems can quickly recognise cyber incidents and execute predefined responses, significantly limiting the damage before it escalates.
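To make the idea of anomaly detection concrete, here is a minimal, illustrative sketch in Python. It uses a simple robust statistic (median absolute deviation) to flag a transaction that deviates sharply from an account's usual pattern; real fraud-detection systems use far richer ML models and features, and the function name and threshold here are assumptions for illustration only.

```python
# Minimal sketch of transaction-anomaly detection using the
# median absolute deviation (MAD). This is a toy stand-in for
# the ML models described in the text; threshold and function
# names are illustrative, not any institution's actual system.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of transactions whose amount deviates
    strongly from the account's typical (median) amount."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:          # all amounts identical: nothing to flag
        return []
    # 0.6745 scales the MAD so the score is comparable to a z-score
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Typical small UPI payments with one outsized transfer at the end.
history = [250, 120, 300, 180, 90, 210, 150, 275, 160, 50000]
print(flag_anomalies(history))  # flags the final transaction: [9]
```

A median-based score is used instead of a plain mean/standard-deviation z-score because a single large outlier inflates the standard deviation enough to hide itself; production systems face the same robustness concern at far greater scale.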

Further, AI-powered endpoint security and antivirus solutions provide protection from phishing attacks. AI models continuously adapt to detect new and emerging threats, ensuring real-time protection. For instance, the RBI’s AI/ML-based system has been designed to detect mule accounts that are used for phishing scams, enhancing accuracy and speed in detection and ultimately preventing fraudulent transactions.

Complementing these are capabilities such as natural language processing, which can recognise phishing emails and malicious web links, and deep learning, which can identify advanced persistent threats.
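As a rough illustration of the kind of signal a phishing filter examines, the sketch below scores a URL with a few rule-based red flags. This is a deliberate simplification of the NLP and deep-learning models the text refers to; the rules, word list, and example URL are all hypothetical.

```python
# Toy rule-based scorer for suspicious payment links. Real systems
# use trained NLP/deep-learning models; these hand-written rules
# and the example domain are purely illustrative assumptions.
from urllib.parse import urlparse

SUSPICIOUS_WORDS = {"verify", "kyc", "refund", "lottery", "urgent"}

def phishing_score(url: str) -> int:
    """Count simple red flags in a URL; higher means riskier."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if parsed.scheme != "https":
        score += 1                      # no TLS
    if any(ch.isdigit() for ch in host.split(".")[0]):
        score += 1                      # digits in the leading label
    if host.count(".") >= 3:
        score += 1                      # deeply nested subdomains
    text = (parsed.path + parsed.query).lower()
    score += sum(word in text for word in SUSPICIOUS_WORDS)
    return score

print(phishing_score("http://secure1.bank-update.example.co.in/verify-kyc"))
# → 5 (http, digit in subdomain, nested subdomains, "verify", "kyc")
```

In practice such hand-written rules serve only as features; a learned model weighs thousands of them, which is why the article's emphasis on large, well-governed training datasets matters.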

The RBI’s MuleHunter.AI and the NPCI’s federated AI model trial signify a pivotal shift toward collaborative, data-driven security frameworks. Coupled with the Indian Computer Emergency Response Team’s mandated real-time incident reporting and the RBI’s exclusive ‘.bank.in’ domain directive, these initiatives illustrate how AI can fill critical visibility gaps.

However, AI adoption faces a few key challenges. AI-generated threats are not easy to spot even with advanced detection models. The large datasets needed to train AI models raise privacy concerns, while training on limited data can produce false positives that unfairly flag legitimate activity, or false negatives that let fraud through. There is also a growing risk of adversarial AI, wherein attackers manipulate AI models by feeding them deceptive inputs.

In the future, financial institutions must adopt strategies such as AI-driven zero trust architecture, an approach that demands rigorous verification of all users, systems and processes, with trust never assumed but consistently earned. Additionally, a robust multi-stakeholder approach, with integrated efforts spanning regulators, financial institutions and technology providers, would be crucial to ensuring that technological innovation is not undermined by vulnerabilities and that trust in the financial system remains intact.

Sasmit Patra | Rajya Sabha MP from Odisha

(Views are personal)
