Future of crime: AI & deepfakes

Cybercrime trends show that in the coming years, fraudsters will blur reality to loot vulnerable sectors, by targeting their staffers

BENGALURU: In 2025, scams won’t just knock on your door asking for OTPs or requesting you to click suspicious links — they will come straight to you, looking like people you know and trust. Whether it is a video call from your boss or an official-looking government app offering benefits, cybercriminals will use Artificial Intelligence (AI) and deepfakes (manipulated videos or audio that appear real) to create lifelike traps. With convincing voices, realistic messages and fake platforms, they will make it almost impossible to tell what’s real and what’s a scam.

A recent report by the Data Security Council of India (DSCI) and Seqrite outlines the alarming evolution of these scams, fuelled by AI and deepfake technology. The report, which projects AI- and deepfake-generated cyberattacks to double in 2025, warns that cybercriminals are poised to target critical sectors like healthcare and finance with tailored phishing campaigns, fake video messages, and adaptive malware. It highlights how these attacks will exploit trust, using realistic content and advanced techniques to compromise systems, steal sensitive data, and disrupt essential services.

Upcoming trends

Experts highlight that fraudsters always opt for vulnerable sectors. For example, the next trend could be a hospital’s staff receiving a video call that appears to be from their director, urgently instructing them to transfer funds to purchase critical medical equipment. In finance, scammers often target employees handling money or sensitive accounts. For instance, a deepfake of a CEO could be used to instruct a finance team member to urgently transfer money to a specific account, claiming it is for a confidential deal.

These scams are dangerous because they blur the line between reality and deception. It is therefore critical for organisations to implement strong verification processes, such as multi-factor authentication and cross-checking requests through independent channels, to safeguard against such threats, experts highlight.

Cyber experts also warn of a rise in fake applications posing as official government services, and in cryptojacking (hijacking someone’s computer to mine cryptocurrency). They advise the government to adopt machine learning, a type of AI that learns from data, for better threat detection, and to emphasize cyber resilience, the ability to recover and adapt after cyberattacks.

Impossible to track

Kuldeep Kumar Jain, Deputy Commissioner of Police, Traffic - East, Bengaluru, who headed the team set up to exclusively deal with courier-related scam cases in 2023, acknowledged that deepfake technology will get more realistic in the coming year. “As deepfakes evolve, tracking and identifying scams will become nearly impossible,” he said, adding that the challenges are already significant.

He also pointed out that scams are already at a professional level, with money being transferred to foreign accounts using cryptocurrency or other methods, all hidden under multiple layers. “The key to staying safe is awareness, and always questioning anything that comes from the internet,” the senior officer emphasized.

The senior official also mentioned that India has made significant progress in digitization, with more people and businesses using digital platforms for services and transactions. However, he stressed that as digital services grow, it’s important for the authorities to set clear rules. He explained that while private banks often try to make things easier for customers, they sometimes bypass regulations. This can lead to easier access to personal information, which can be risky. The official stressed that with these trends expected to continue, central bank authorities need to create stricter rules to protect customer data.

What do cyber experts say?

Independent researcher and technologist Rohini Lakshane, who was formerly with The Centre for Internet and Society, mentioned that the use of AI-manipulated or generated media, including deepfakes, is already widespread and expected to increase significantly in the coming year.

She pointed out that sectors like healthcare and finance are particularly vulnerable to exploitation through deepfakes and AI, as scams in these industries often create a sense of urgency or alarm in victims’ minds. This urgency makes it difficult for individuals to think clearly and make informed decisions. When deepfakes are used, the situation becomes even more convincing, leaving the victim with little time to determine whether it is genuine or a scam. As a result, people often end up losing critical personal information or falling into traps without realizing it, she said.

Zubair Chowgale, head of engineering for the APMEA region at Securonix, a firm that offers solutions to cyber attacks, said that while AI will continue to be rampant in cyberattacks, deepfake-based threats are expected to rise alongside phishing and malware.

He explained that Distributed Denial of Service (DDoS) attacks, which flood systems with so much traffic that they crash and were once difficult to mount manually, are now being carried out using AI tools and large language models (LLMs). “These tools study large amounts of data to make the attacks more effective,” he said, adding that LLMs are also being used to create fake phishing emails and other scams.

Zubair warned that even malware designed to escape detection is being developed using these same technologies, making online security even harder to manage.

Staff shortage, lack of cyber training

Across Karnataka, cybercrime cases in 2021 totalled around 11,000, but this figure surged to 22,000 in just two years. In Bengaluru, one in four reported crimes is a cybercrime. With cases rising and recovery rates struggling to keep pace, Karnataka recently appointed a dedicated DGP for cybersecurity, becoming the first state to do so.

Police officials believe that post-pandemic, cybercrime has become more complex, with malware evolving into more sophisticated threats. To combat this, they say the department plans to strengthen its cyber forensic capabilities with AI and machine learning. However, officers also reflect on significant gaps as cybercrime police stations are staffed at less than 40 per cent of their required strength, and junior officers, who often handle cases, lack regular training.

Possible trends

Fake government official apps

Malware attacks

Phishing

Lifelike AI-generated content and deepfakes

Cryptojacking

ON YOUR GUARD

Be sceptical of unsolicited offers: Whether it’s a new investment opportunity or an urgent request for a donation, always question unsolicited offers, especially if they are delivered via video or audio

Check official sources directly: If you receive a message or call about a government scheme, bank alert or medical issue, visit the official website or call the customer support number to verify its authenticity before taking action

Limit voice and video data online: Be mindful of personal video or voice recordings you share online, as deepfake technology can use them to create fake content; reduce digital footprint when possible to minimize the risk

Passwords: Use a strong, unique password for each app. Avoid public Wi-Fi for any transactions

Suspicious apps or websites: If an app or website offers a service that seems too good to be true, avoid downloading or interacting with it unless you have verified its legitimacy through trusted sources

Don’t ignore minor inconsistencies: If a video looks slightly off — perhaps the background is blurred or speech is out of sync with lip movement — don’t brush it off as these could be signs of deepfake technology in use

Software updates: Don’t ignore software updates, as they include patches for security vulnerabilities

The New Indian Express
www.newindianexpress.com