Mandar Pardikar
Chennai

Caught in the cyber cobweb: How TN women are fighting the deepfake battle

A Tamil Nadu entrepreneur's battle against AI deepfakes reveals how technology has turned women's dignity into downloadable content and why the fight for justice has only just begun

Diya Maria George

August 21, 2025, started like any other day for Rajathi Kamalakannan, until a familiar face popped up on her phone while she was scrolling: an AI-morphed picture of her, circulated on Facebook along with her contact details. “I was shocked,” says Rajathi, who operates a food business, Raji’s Kitchen, in Dindigul, and runs a popular YouTube channel, RK Family Vlogs. When she checked the profile, it had 16K followers. “Photos of several other women were misused similarly,” says the 35-year-old.

What followed was a nightmare that consumed three months of Rajathi’s life. “I received more than a hundred calls from old friends, distant relatives, all enquiring about it. Many blamed me for having an ‘indecent’ photo on Facebook. Having to respond to each one of them was really painful,” she says.

Rajathi was determined to fight the case. She filed a complaint against Facebook, seeking the removal of the AI-generated image. In an email, Facebook refused to take action. The response read, “We’ll remove photos and videos reported through this channel if it infringes or violates your privacy. Unfortunately, based on the information you provided, we were unable to determine how the reported content infringes or violates your privacy. We recommend reaching out to the owner of this content to resolve this issue (sic).”

The email infuriated Rajathi. Along with her husband, she went to the Dindigul cybercrime office and filed a complaint on August 25. She recounts at least 10 visits to the police station, even on her husband’s birthday, giving statements and signing documents. The accused was eventually arrested on November 14, but Rajathi is yet to get full closure. “I couldn’t work for a week due to the fear of being shamed,” she says. But the larger question she pondered was why she was targeted.

Meanwhile, her business continued, and her YouTube channel grew in popularity. Her online presence remained essential to her livelihood, even as it left her vulnerable to abuse. “Such people get involved in crime since they think no one will complain,” she says, asserting, “You [perpetrator] hid your face while doing this. I will not hide mine while seeking justice.”

Awareness of such crimes is growing among women across Tamil Nadu, who realise that their faces, taken from cooking videos, travel photos, business profiles, and even passport-size images, are being weaponised by strangers. While some of them find the courage to speak out, many remain silent due to a wide range of social factors.

Systemic issue

Tamil Nadu recorded 1.75 lakh reported cybercrimes last year alone, with financial scams exceeding ₹1,600 crore, according to retired director general of police (DGP) C Sylendra Babu. The sheer scale of the losses shows how cybercriminals keep finding new ways to evade police action, despite the presence of cybercrime police stations in each of the 38 districts.

“If you put together crimes like chain snatching, motorcycle theft, robbery, murder for gain, these cannot cross ₹100 crore. But in cybercrime, money is lost just like that. A small percentage of these are women paying money due to harassment. With an overwhelming number of complaints, we can’t cope,” he states. He also observes that a vast majority of women never come forward. “They change their number or block the caller, but the damage is already done. They live in perpetual fear of ‘who will receive the photo or video next?’”

The methods have evolved beyond photo-morphing. The retired DGP describes a new pattern where criminals target women with “fake romantic or professional contacts”, then use AI to create explicit content when the women refuse to engage. They access the victim’s contact lists and send sample images to relatives, demanding anywhere from ₹10 lakh-₹20 lakh from women earning ₹30,000-₹40,000 monthly. “Women silently pay, but how long can they keep paying?” he asks.

Such crimes have links across the world, and it makes the job difficult for the police. “Gangs operating from Laos, Vietnam, the Philippines, and Thailand lure women through apps disguised as dating platforms. Indian scam hubs operate from Jamtara in Jharkhand, Mathura in UP, and Neem at the Rajasthan-UP-MP border,” Sylendra Babu notes. “It’s a new generation of organised crime with no borders.”

Easy access

Priyanshu Ratnakar, a security researcher who has been tracking AI-generated abuse, believes that the tools for carrying out such crimes have become more accessible. “Some widely available open-source image models that come with safeguards can be easily bypassed,” he says. Girithar Ram Ravindran, a cybersecurity specialist, seconds the opinion. “Most of these suspects do not have a strong technical background. If access becomes difficult, they will give up,” he says.

Comparing the models available, Priyanshu is shocked by the manner in which Chinese models manipulate images. “The shadows, skin texture, background details, everything is convincing. The people in the images don’t even exist in real life,” he states.

While the technology is getting better, the infrastructure for abuse online is already laid out. “You have click farms, fake followers and bots run from China. With one click, they can send you 1,000 likes; with one phone, they can manage 10,000 profiles. Now imagine this infrastructure combined with powerful AI models,” he explains.

On Reddit, Priyanshu even found numerous forums dedicated to AI-generated explicit content, including deepfakes of celebrities being sold commercially. “Some people in India are already producing non-consensual content and selling it,” he says. The crisis has gone to an extent where high-profile celebrities like actor Rashmika Mandanna and dancer Anita Ratnam had to move court to protect their digital identity.

Battles beyond deepfakes

AI-generated images represent only one facet of a broader crisis of image-based abuse. Swetha Shankar, executive director of programmes at the International Foundation for Crime Prevention & Victim Care (PCVC), works with survivors of various forms of digital violence and sees patterns that extend far beyond deepfakes. “A lot of abuse happens when intimate pictures shared during a relationship are used against the person after a breakup,” Swetha says.

PCVC works with the cyber cell and uses a resource called Stop NCII (Stop Non-Consensual Intimate Image sharing), a website that partners with large platforms like Meta and even Pornhub. “When someone uploads an image there, the site creates a digital ID for it. If anyone tries to upload the same image anywhere else, it automatically gets flagged. We help survivors create this digital ID so they can track and prevent their images from being re-uploaded.”
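The “digital ID” Swetha describes is, in essence, a hash: a short fingerprint computed from the image, which platforms can match against future uploads without storing the image itself. The sketch below is a simplification to illustrate only the matching idea; StopNCII and its partner platforms use perceptual hashing, which tolerates resizing and re-compression, whereas the cryptographic hash used here flags only byte-identical copies. The function and registry names are illustrative, not StopNCII’s actual API.

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    # Stand-in "digital ID": a SHA-256 digest of the raw image bytes.
    # Real systems use perceptual hashes that survive resizing and
    # re-compression; exact-match hashing is shown only to illustrate
    # the flag-on-reupload idea.
    return hashlib.sha256(image_bytes).hexdigest()

flagged_fingerprints = set()  # hypothetical platform-side registry

def register(image_bytes: bytes) -> None:
    # The survivor submits only the fingerprint; the image itself
    # never has to leave their device.
    flagged_fingerprints.add(image_fingerprint(image_bytes))

def is_flagged(upload_bytes: bytes) -> bool:
    # At upload time, the platform checks the registry and can block
    # or review a match before the image goes live.
    return image_fingerprint(upload_bytes) in flagged_fingerprints
```

The key design point is privacy: because only fingerprints are shared, a survivor can enlist platforms’ help without handing over the intimate image itself.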

The fear extends beyond the immediate violation. “A lot of survivors are terrified that their parents or husbands will come to know. For married women whose ex-boyfriends are threatening them, there is fear that their marriages will end. Younger women who have moved away for work are petrified that the police will involve their parents. They worry that if families find out, their mobility and freedom will be cut off; they may be taken back home, forced to marry, and even have their phones taken away,” she says, adding, “They constantly check messages because they’re scared something might be leaked. It affects them very deeply.”

Police responses can compound the problem. “Even when a woman is above 18, the police say things like, ‘Call your parents’, ‘Are you married?’ This makes women even more worried that approaching the police will lead to further control,” she adds.

For minors, the situation is even more complex. “It’s hard for them to say they’re in an abusive relationship when being in a relationship itself is a taboo,” Swetha says.

The pattern of abuse is also consistent. “In many of the cases we work with, intimate images are used to blackmail individuals. There is a lot of assault, and the person feels helpless because they are scared the images will be leaked. It gets held over them repeatedly,” Swetha adds.

Never-ending trauma

Priyanshu shares the story of a minor girl whose Instagram account was hijacked by a gang. “They demanded ₹50,000, and when she couldn’t pay, the man forced her to send a nude video. Once she sent it, he leaked it anyway, then asked for more money,” he says, adding that the case exposes the relationship between financial blackmail and sexual exploitation.

For content creators and entrepreneurs, the threat creates a chilling effect. Many are afraid to maintain the online presence their livelihoods require.

Once the content spreads, tracking it becomes nearly impossible. Girithar describes how metadata, the digital fingerprints attached to photos, gets erased when images are shared. “When sent through WhatsApp or shared multiple times, metadata gets stripped away. Once it spreads through thousands of groups, connecting the dots becomes very hard,” he says.
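The metadata Girithar refers to includes fields like EXIF blocks embedded in image files, which can record the camera, timestamps, and sometimes GPS coordinates. Messaging apps typically re-encode images on sending, which drops these blocks. As a minimal sketch (not any investigator’s actual tooling), the function below checks whether a JPEG still carries an EXIF segment by walking its internal marker structure:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's segment markers for an APP1/Exif block."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed segment stream
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            break  # start of scan: no more metadata segments follow
        # Each segment records its own length (including the 2 length bytes)
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xFFE1) segments beginning with "Exif\0\0" hold EXIF data
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

Running this on an original photo versus the same photo after a trip through a messaging app would typically show the EXIF block present in the first and gone from the second, which is exactly why tracing a widely forwarded image back to its source is so hard.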

Priyanshu adds that investigators rely on open-source intelligence tools to track where the content first appeared, but many platforms lack robust traceability. “People download leaked or generated videos from one platform, then re-upload them across many others. For each view and engagement, they get paid. That’s their business model.”

For women like Rajathi, the crisis takes a heavy personal toll. Girithar warns, “Keep accounts private unless necessary. But even from a private account, photos give enough data for criminals to create fake images or videos.” Priyanshu is blunt about the reality: “Once you post anything online, you lose control over it. It’s your image, but if someone downloads and shares it, you can’t really do much.”

The courage to fight

The Facebook profile that violated Rajathi's privacy remains active, with its last post dated November, and is still accessible despite her repeated complaints and the accused's arrest. Meta's own Oversight Board has noted that the platform often fails to remove AI-generated explicit images unless cases receive significant media attention, leaving ordinary victims to hunt down and report every instance themselves.

Having gone through the ordeal, Rajathi says women should not suffer in silence, and points out how screenshots, URLs, and other evidence of such abuse could help their case. She also sought faster police action. “Response time shouldn’t be too late. Women need supportive and swift systems so that they don’t lose courage halfway,” she says.

Priyanshu suggests large-scale awareness campaigns. “Every household seems to have at least one creator or influencer now. We need the media, lawmakers and police to join together for nationwide education, in schools, colleges, and communities.”

While Sylendra Babu believes social media poses a threat to privacy, and that laws alone cannot help prevent these crimes, Swetha notes that gender-based violence has always adapted to whatever tools are available. “Long before AI, controlling devices was already a part of domestic violence, monitoring phones, knowing passwords, controlling mobility, and isolating women. Technology is just another tool. The deeper issue is how we view women and how we normalise abuse. We often elevate the idea of ‘family’ over the safety and well-being of women.”

The legal labyrinth

  • Current laws provide some recourse but lack teeth, specifically for AI-generated abuse. Citing gaps in existing law, Sylendra Babu suggested that a separate clause for AI-enabled offences could be added to the Bharatiya Nyaya Sanhita. Priyanshu wants authorities to create a specific, standalone offence. “From here, it will only get worse if we don’t,” he says.

  • The IT Act 2000 contains relevant sections, particularly Section 67 with subsections for sexually explicit content and child protection, with potential punishments of 5-7 years. But Girithar notes these provisions were written before AI: “These open-source models must have stricter guardrails or should be banned.”

  • Updated IT rules from 2021-22 require large platforms to appoint officers responsible for handling takedown requests under Indian law. Priyanshu says success stories remain exceptions, as most survivors face a difficult and uncertain path to justice.

    If you or someone you know is affected by deepfake abuse, file a complaint at the National Cyber Crime Reporting Portal (cybercrime.gov.in) or call 1930. Tamil Nadu has cybercrime police stations in all 38 districts.

    (The author is a Laadli Media Fellow. The opinions and views expressed are those of the author. Laadli and UNFPA do not necessarily endorse the views.)
