Deepfakes on Instagram, X Cast Shadow Over Women’s Dignity and Privacy — Price of Digital Freedom?

globaleyenews

By Anushka Verma
Updated | November 3, 2025


Introduction

The internet was meant to empower voices, connect communities, and offer creative freedom. But in 2025, that same power has become a weapon for deception. Across Instagram and X (formerly Twitter), deepfakes — digitally manipulated videos created using artificial intelligence — are spreading faster than ever, leaving a trail of humiliation and trauma, particularly for women.

In one viral clip, a renowned Hindi film actress appears to smile and dance, but her movements and expressions have been entirely fabricated. Digitally animated, her body performs suggestive gestures she never made. Within hours, the clip is shared thousands of times — liked, commented on, remixed. For many who view it, the distinction between fake and real blurs instantly. For the victim, the damage to her dignity feels irreversible.

The Centre’s recent proposal to make labelling of AI-generated content mandatory on all social media platforms marks an urgent attempt to control this crisis. But experts warn that regulation alone cannot repair what deepfakes destroy — trust, consent, and the very sense of digital safety.


Table: The New Face of Digital Manipulation

| Category | Platform | Nature of Deepfake | Primary Victims | Impact on Reputation | Possible Action |
|---|---|---|---|---|---|
| Celebrity-targeted deepfakes | Instagram, X | Explicit morphs, AI lip-sync videos | Female actors, influencers | Severe mental trauma, image distortion | Labelling, reporting, and takedown within 24 hrs |
| Revenge-based edits | Telegram, Reddit, X | Face-swap in private scenes | Common women, ex-partners | Blackmail, public shaming | Criminal complaint under IT Act, IPC 354D |
| Political deepfakes | X, Facebook | Fake speeches or endorsements | Public figures (both genders) | Misinformation, reputation loss | Election Commission monitoring, mandatory watermark |
| Satirical/entertainment AI edits | YouTube Shorts, Instagram Reels | "Funny" swaps with popular faces | Mostly celebrities | Ethical debate, soft reputational harm | Mandatory AI-labelling tag |
| AI-audio manipulations | WhatsApp, Telegram | Voice cloning for scams or blackmail | Women and professionals | Financial loss, privacy violation | Cyber Cell FIR, AI origin tracing |

The Rise of a Digital Monster

Deepfakes have existed for years, but 2025 has seen an unprecedented explosion in their accessibility. Today, free mobile apps and AI websites allow anyone with a smartphone to generate a realistic fake video in minutes. A single selfie and 15 seconds of audio are often enough.

The result: a disturbing new frontier in online harassment, where women’s bodies and identities are digitally violated without their consent.

According to cybercrime officials in Delhi, reports of AI-morphed videos involving women have risen by over 300% in the last six months. “What’s dangerous is not just the technology,” says a senior cyber cell officer, “but the social appetite for such content. People click, share, and react — without realizing they’re amplifying a digital assault.”

This appetite for sensationalism has transformed deepfakes from fringe curiosities into viral weapons of humiliation.


Women in the Crossfire of AI Misuse

While deepfake scams affect both men and women, the gendered impact is unmistakable. Women face a unique form of exploitation — one that attacks their dignity, morality, and social reputation.

Psychologist and gender researcher Dr. Rachita Sen explains,

“When a woman’s image is morphed into a sexually suggestive video, it’s not just defamation — it’s a form of digital rape. Society still tends to judge women through moral lenses, so even if the video is proven fake, the psychological and social scars remain.”

Victims describe a chilling cycle: disbelief, shame, and helplessness. Many are forced to prove their innocence for something they never did. Families are torn apart, careers halted, and public figures find themselves trapped in endless clarification statements.

In smaller towns and rural areas, where digital literacy remains low, the stigma is even harsher. A fake video is often accepted as “truth,” pushing victims into isolation or depression.


The Government’s Move: Mandatory AI Labelling

On October 22, 2025, the Central government proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, aiming to make labelling of AI-generated content mandatory across platforms like Instagram, X, Facebook, and YouTube.

This move, officials say, is designed to restore digital accountability and reduce misinformation.

Under the draft proposal:

  • All AI-generated or synthetic content must carry a visible “AI-generated” label.
  • Social media intermediaries will be responsible for flagging and removing unlabelled or misleading deepfakes within 24 hours.
  • Failure to comply could attract penalties under the Information Technology Act, 2000, and the IT Rules, 2021.

But the implementation raises complex challenges. How will platforms identify every deepfake in the flood of daily uploads? How will smaller creators distinguish harmless filters from potentially misleading manipulations?


The Price of Digital Freedom

Freedom of expression is the soul of social media — yet it’s this very freedom that deepfakes exploit. The “price” of this digital liberty, many argue, is the erosion of privacy and trust.

Social commentator Arjun Malhotra notes:

“We once believed the internet would democratize information. Now, it’s democratizing deception. The cost is borne disproportionately by women, who are turned into unwilling content.”

The irony is painful: tools originally built for creativity — AI art, filters, entertainment — have become instruments of exploitation. And while technology evolves every day, legal systems and awareness lag behind.


Behind the Screen: The Technology and Its Traps

Deepfakes operate on deep learning models — typically Generative Adversarial Networks (GANs) — where one neural network creates fake data and another verifies it. This iterative process leads to ultra-realistic visuals indistinguishable from reality.
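The adversarial training loop described above can be illustrated with a deliberately tiny, self-contained sketch: a toy GAN that learns to mimic a one-dimensional bell curve rather than a face. All parameters, distributions, and learning rates here are illustrative assumptions, not real deepfake code — but the structure (one network faking, one network judging, each update sharpening the other) is the same idea at miniature scale.

```python
import numpy as np

# Toy GAN on 1-D data: the "real" distribution is N(4, 0.5).
# Generator:     g(z) = a*z + b        (affine map of random noise)
# Discriminator: D(x) = sigmoid(w*x+c) (probability that x is real)
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0          # generator parameters (start far from the target)
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.1, 128

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: push D(fake) toward 1 (non-saturating loss),
    # i.e. adjust a and b so the fakes fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    g_fake = (1 - d_fake) * w          # d(log D)/d(fake)
    a += lr * np.mean(g_fake * z)
    b += lr * np.mean(g_fake)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))  # should drift toward the real mean of 4
```

The same tug-of-war, scaled up from single numbers to millions of pixels and trained on photos of a real person, is what makes a deepfake look photographic: the generator keeps improving precisely until the discriminator can no longer tell fake from real.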

However, what makes this technology frightening is not just its sophistication, but its availability. Anyone can use free tools like “DeepFakeApp,” “FaceFusion,” or “AI Swap” to merge faces seamlessly into videos, without needing technical skill.

A digital rights activist from Bengaluru remarks,

“This isn’t about a hacker sitting in a dark room anymore. It’s your neighbor, your ex-partner, or a random teenager online with access to AI tools. That’s how normalized it’s become.”


Case Studies: When Deepfakes Turn Destructive

  1. The Actress Incident:
    A popular Bollywood actress discovered an explicit deepfake of herself circulating on X. Even after public denial and police complaints, the video kept reappearing on new accounts. The delay in takedown amplified her trauma.
  2. Corporate Blackmail:
    A Bengaluru-based marketing professional was blackmailed with a deepfake video created using her LinkedIn profile picture. The extortionist demanded ₹5 lakh to “prevent” its release. The case is under Cyber Cell investigation.
  3. Student Humiliation:
    In Lucknow, a 19-year-old student’s morphed video went viral among classmates. Though proven fake, she faced suspension and social ostracism — a stark reminder that truth often arrives too late online.

Psychological Fallout: A Silent Epidemic

The emotional impact of deepfake abuse is far-reaching. Victims experience anxiety, social withdrawal, insomnia, and fear of being online. Many delete their social media profiles altogether, losing personal and professional opportunities.

Dr. Sen adds,

“The internet never forgets. Even after deletion, people keep screenshots, copies, and memes. The psychological recovery takes years.”

This invisible suffering rarely gets the attention it deserves. Unlike physical assault, deepfake abuse often lacks visible evidence — yet its emotional wounds run just as deep.


The Role of Platforms: Responsibility or Evasion?

Social media companies, often caught between free speech and regulation, claim they are improving their detection tools. Meta, Instagram's parent company, says it is working on AI-based watermarking, while X's latest update promises automated identification of synthetic media.

However, enforcement remains weak. Reports suggest that takedown requests are often delayed by weeks, giving fake content enough time to spread uncontrollably.

Digital policy researcher Megha Arora argues,

“Platforms profit from engagement — even when that engagement comes from fake or harmful content. So there’s a moral conflict built into their algorithms.”

The proposed government mandate could pressure these platforms to act faster — but critics warn it must be balanced carefully to avoid stifling artistic or political expression.


The Legal Landscape: Laws Racing to Catch Up

India’s Information Technology Act (2000) and the Indian Penal Code already provide some protection — including sections on cyber defamation, identity theft, and sexual harassment. But these laws were written long before AI manipulation became mainstream.

Currently, victims can file complaints under:

  • Section 66E of the IT Act (violation of privacy)
  • Section 67A of the IT Act (publishing sexually explicit material)
  • Section 354D of the IPC (stalking, including cyberstalking)

However, legal experts note that conviction rates remain below 10%, largely because of slow forensic verification and jurisdictional confusion (where was the content created, uploaded, or hosted?).

A Delhi High Court lawyer explains,

“The law is still chasing technology. Until we have AI-specific legislation, deepfake perpetrators will continue slipping through the cracks.”


The Feminist Perspective: Reclaiming Digital Agency

Beyond regulation and punishment, feminist activists emphasize education and empowerment. Campaigns like “My Face Is Not Your Canvas” and “She Owns Her Image” are gaining momentum, urging women to report, not retreat.

Technology educator Kavita Bhattacharya says,

“We must teach digital consent like we teach physical consent. Young people, especially men, must understand that sharing a fake clip is an act of violence, not humor.”

Grassroots digital literacy programs are also helping women understand privacy settings, legal rights, and reporting mechanisms. But these initiatives remain underfunded and localized.


International Comparisons: Learning from the World

Countries like the UK, South Korea, and the US have already criminalized the creation and distribution of sexually explicit deepfakes. South Korea’s “Digital Sex Crime Act”, for instance, imposes up to 5 years in prison for offenders.

The EU’s AI Act (2024) requires AI-generated media to carry visible watermarks — a framework India’s proposal seems to draw inspiration from. However, the challenge lies in scale: India’s internet population exceeds 900 million, with millions of videos uploaded daily.

Without automated, transparent, and accountable mechanisms, even the strongest policy can turn symbolic.


Future of AI Ethics in India

AI is not inherently evil. In fact, it has revolutionized art, education, and communication. The problem arises when ethics fail to evolve alongside innovation.

Experts call for a “Digital Ethics Code” — a multi-stakeholder charter involving tech firms, educators, and citizens. It would focus not only on punishment but also prevention, through transparency, traceability, and awareness.

“AI should assist creativity, not assassinate character,” says policy analyst Ritesh Dubey.
“Our challenge now is to build a society that can tell the difference.”


The Human Cost

Behind every viral fake lies a real person — often a woman — whose confidence, family, and professional life are shattered. The damage is not just digital; it’s deeply human.

One survivor of deepfake abuse told this reporter:

“People tell me to ignore it because it’s fake. But when thousands see that video, do they all know it’s fake? I still walk on the street wondering who believes it and who doesn’t.”

Her words capture the paradox of the digital age — where visibility is power, but also peril.


Conclusion: The Path Forward

As India stands at the intersection of technological progress and ethical reckoning, the fight against deepfakes is more than a legal battle — it’s a moral one.

The price of digital freedom cannot be women’s dignity. The internet must evolve into a space that values consent as much as creativity, truth as much as innovation.

While the government’s labelling mandate is a step forward, the real change will come when every user recognizes their role — to verify before sharing, to report before judging, and to respect before reacting.

In the end, the responsibility is collective. Because in the era of AI illusions, protecting women’s privacy is not just a gender issue — it’s a test of our humanity.
