By Anushka Verma | Updated: November 3, 2025
Introduction: India’s Growing Concern Over Deepfakes and Synthetic Media
In a landmark move to address the rising menace of synthetic media and deepfakes, the Government of India has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft regulations seek to make labeling of AI-generated content mandatory across major social media platforms such as YouTube, Instagram, Facebook, and X (formerly Twitter).
This proposal comes at a time when the country — and indeed the world — is grappling with the unprecedented speed at which artificial intelligence (AI) can create and spread misleading, hyper-realistic content. From fake celebrity endorsements to doctored political speeches, deepfakes have evolved into a major challenge for both governance and social trust.
According to the draft, every piece of AI-generated visual or audio content uploaded to a platform must carry a prominent label or embedded metadata clearly identifying it as “synthetically generated information.” For visual content, the label must cover at least 10% of the surface area; for audio, it must be audible during the initial 10% of the total duration.
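To make the arithmetic concrete, here is a minimal Python sketch of how a platform might compute the two 10% thresholds. The draft sets the thresholds but prescribes no code; the function names and the 1080p example are illustrative assumptions.

```python
# Illustrative only: the draft specifies thresholds, not an implementation.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum visual label size: at least 10% of the frame's surface area."""
    return (width_px * height_px) // 10

def audio_label_window(duration_s: float) -> float:
    """The audio label must play within the first 10% of the clip."""
    return duration_s * 0.10

# A 1080p frame needs a 207,360 px^2 label (e.g. a 648x320 banner);
# a 3-minute audio clip must carry the label in its first 18 seconds.
print(min_label_area(1920, 1080))   # 207360
print(audio_label_window(180.0))    # 18.0
```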
The move signals a decisive step in the Centre’s ongoing effort to regulate AI’s social and ethical impact. It follows global trends, with jurisdictions such as China, the European Union, and the United States establishing frameworks to govern AI transparency.
As AI tools such as ChatGPT, Midjourney, and Sora continue to blur the lines between human and synthetic creativity, the government’s push for transparency aims to safeguard users from misinformation while preserving technological innovation.
Inside the Draft Amendment: What the New Rules Propose
The proposed amendment to the IT Rules, 2021, fundamentally alters how digital intermediaries and content platforms manage AI-generated material.
Under the draft amendment, any social media intermediary that allows AI-generated content must:
- Seek user declaration: Before uploading, users must confirm whether the content is “synthetically generated information.”
- Ensure visible labeling: Platforms must add clear, permanent labels or embedded identifiers to mark such content.
- Introduce metadata identifiers: Every AI-generated post must carry a unique metadata tag that remains intact even if the content is re-uploaded, shared, or edited (one possible tag format is sketched after this list).
- Display guidelines publicly: Platforms must inform users about the new rules through public notices, terms of service, and creator policies.
- Remove unlabeled synthetic content: Failure to comply may invite penalties or loss of “safe harbour” protection under Section 79 of the IT Act.
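The draft names the metadata requirement but not a format. As a purely illustrative sketch, assuming a JSON tag that binds a random identifier to a hash of the media bytes (all field names here are hypothetical, not from the draft), a platform-side tagger might look like this:

```python
import hashlib
import json
import uuid

def make_synthetic_content_tag(media_bytes: bytes) -> str:
    """Mint a JSON tag declaring these media bytes as synthetically generated."""
    tag = {
        "label": "synthetically generated information",  # wording from the draft
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "tag_id": str(uuid.uuid4()),  # unique per upload
    }
    return json.dumps(tag)

def tag_matches(media_bytes: bytes, tag_json: str) -> bool:
    """Check that a tag still refers to these exact media bytes."""
    expected = hashlib.sha256(media_bytes).hexdigest()
    return json.loads(tag_json).get("content_sha256") == expected
```

Note that an exact hash like this survives re-sharing of the identical file but not re-encoding or editing, which is precisely the enforcement difficulty discussed later in this piece.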
These provisions are intended to make the origin of digital content verifiable, enabling viewers to distinguish between authentic and AI-generated media.
The draft is currently open for public consultation, inviting feedback from tech companies, creators, and civil society groups. Once finalized, the Ministry of Electronics and Information Technology (MeitY) is expected to issue a notification under the IT Act, bringing these changes into immediate effect.
According to ministry officials, the draft rules are designed not to stifle innovation but to “ensure accountability and transparency in AI-assisted creativity.”
“AI is here to stay. But so is responsibility,” said a senior MeitY official. “The objective is to build an ecosystem where creativity thrives without compromising authenticity.”
Why Deepfakes Have Become a Policy Priority
The move comes in the wake of several viral deepfake incidents that raised concerns about privacy, consent, and digital ethics in India.
In late 2024, multiple fake videos of popular actors and politicians circulated across social media, mimicking their voices and facial movements with uncanny realism. In one instance, a deepfake of a Bollywood actress promoting a cryptocurrency scam went viral before being debunked.
Experts warn that such misuse could have serious implications for democracy, especially during elections. Deepfakes can be used to distort public opinion, spread disinformation, or damage reputations within minutes.
Cyberlaw expert Ritesh Rajan explains:
“What makes deepfakes particularly dangerous is their psychological effect. People tend to believe what they see and hear, even when it’s false. By the time fact-checkers intervene, the damage is often irreversible.”
India, with over 800 million internet users, is one of the largest digital markets in the world. The rapid integration of AI tools into daily content creation — from influencer videos to news edits — has made synthetic media regulation an urgent necessity.

Comparing Global AI Labeling Standards
India’s proposed framework closely mirrors China’s AI labeling rules, introduced in 2024, under which all AI-generated text, images, and videos must display clear identifiers. Chinese regulators mandate that platforms embed persistent “AI-generated” watermarks or digital signatures to trace origins.
Similarly, the European Union’s AI Act requires clear disclosure when AI systems generate or manipulate content, with heightened obligations for “high-risk” systems and for material used in political communication or journalism. Meanwhile, the U.S. Federal Trade Commission (FTC) is preparing guidelines to penalize deceptive synthetic media used for marketing or misinformation.
By aligning with these international precedents, India seeks to position itself as a global leader in responsible AI governance, balancing innovation with ethical oversight.
“India’s approach is pragmatic,” says digital policy analyst Dr. Shreya Narayan. “We’re not banning AI creativity — we’re simply ensuring that users know when they’re viewing AI content.”
The Price of Transparency: Compliance Costs and Industry Adjustments
Here, “price” extends beyond monetary cost: it captures the broader economic and social trade-offs of enforcing transparency.
Implementing AI labeling comes with both direct and indirect costs:
1. Compliance Costs for Platforms
Social media companies will need to upgrade their content moderation systems, integrate AI-detection algorithms, and maintain metadata verification infrastructures.
- Estimated one-time compliance cost: ₹80–120 crore for large platforms.
- Estimated annual maintenance: ₹20–30 crore per major company.
Startups and mid-size creator platforms may face additional pressure, which may prompt the government to explore financial incentives or phased compliance timelines.
2. Market Impact on AI Startups
AI startups engaged in content generation, advertising, or entertainment will need to embed automated watermarking tools and declare AI involvement in every digital asset.
While this might increase operational costs by 10–15%, experts argue it will enhance brand trust and consumer confidence in the long run.
3. Creator Economy Adjustments
Influencers and content creators who use AI for scriptwriting, visuals, or editing will have to self-declare AI use before uploading content. This may slightly reduce production efficiency, but it could also raise ethical standards in influencer marketing.
4. Price of Trust
Ultimately, the “price” of transparency is an investment in long-term credibility. Brands that clearly label AI-generated elements are likely to enjoy higher consumer engagement rates, as audiences appreciate honesty in digital storytelling.
A recent study by the Digital India Foundation (2025) found that 73% of users prefer clearly labeled AI-generated content, citing it as “more credible” and “less manipulative.”
Expert Reactions: Applause, Concerns, and Constructive Debate
The proposed rules have drawn a mixed response from stakeholders.
Support from Digital Ethics Advocates
AI ethicist Dr. Meenal Sinha welcomes the move as “a proactive step toward restoring truth in digital ecosystems.”
“We are entering an era where seeing is no longer believing. India’s labeling rule is a digital seatbelt — it won’t stop the car, but it will make the journey safer,” she said.
Caution from Tech Industry
On the other hand, several tech firms warn that overregulation may slow innovation.
“Startups already face multiple compliance burdens,” said Rahul Ghosh, CEO of an AI marketing firm. “Instead of blanket labeling, the focus should be on misuse detection and accountability.”
Some experts suggest that the rules should be gradually implemented, allowing smaller firms to adopt low-cost watermarking solutions.
Voices from Creators
Influencers, too, have mixed views. Many fear that “AI label” tags could discourage engagement if audiences perceive their content as less authentic. Others, however, believe it’s time for transparency.
“AI helps me edit faster, but I’m happy to label it,” said Ritika Sharma, a travel vlogger with 1.2 million followers. “Audiences appreciate honesty — it builds loyalty.”

Implementation and Enforcement Challenges
While the intent is commendable, the practical enforcement of such rules presents several hurdles:
- Detection Technology: Even advanced AI detectors fall well short of perfect accuracy, especially once content has been re-encoded or compressed (the toy sketch after this list shows why).
- Cross-Platform Compliance: Content frequently moves between platforms — from Instagram reels to WhatsApp forwards — making label retention difficult.
- Privacy and Metadata Tampering: Persistent identifiers could expose user metadata, raising privacy concerns.
- Legal Ambiguities: Defining what qualifies as “AI-generated” remains complex, especially in hybrid cases where human and AI inputs coexist.
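A toy example makes the first two hurdles concrete. The 4x4 “image” below is invented data: a single brightness shift, of the kind routine recompression introduces, changes a cryptographic hash completely, while a simple perceptual “average hash” is unaffected. Real systems use far more robust fingerprints, but the trade-off is the same.

```python
import hashlib

def average_hash(pixels: list[int]) -> int:
    """One bit per pixel: brighter than the mean -> 1, else 0."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = [200, 198, 60, 58, 201, 197, 61, 59,
            62, 60, 199, 200, 61, 63, 198, 202]
recompressed = [p + 2 for p in original]  # mimic slight compression noise

# Exact hashes diverge entirely after the tiny change...
print(hashlib.sha256(bytes(original)).hexdigest()[:12])
print(hashlib.sha256(bytes(recompressed)).hexdigest()[:12])
# ...but the perceptual hash sees the same picture.
print(average_hash(original) == average_hash(recompressed))  # True
```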
Policy experts recommend establishing a Digital Content Labeling Authority (DCLA), an independent body that would oversee compliance, certification, and dispute resolution.
Public Awareness and Education: The Missing Link
For labeling rules to be effective, citizen awareness is crucial. Merely tagging content as “AI-generated” won’t prevent harm unless users understand what it means.
The government plans to launch Digital Literacy 2.0, a campaign under the Digital India initiative, to educate users about recognizing AI content, understanding metadata labels, and verifying authenticity.
“Technology cannot solve misinformation alone — awareness must accompany regulation,” said Priya Mehta, a public policy advisor.
This multi-pronged approach — combining law, technology, and education — could transform how India manages its digital future.
Policy Alignment with India’s AI Mission
The proposed labeling mandate fits within the broader framework of IndiaAI Mission 2025, which emphasizes ethical AI deployment, data privacy, and transparency.
Under this mission, the government aims to:
- Establish national AI computing centers for research and innovation.
- Develop ethical AI standards aligned with global norms.
- Encourage startups to build trust-based AI solutions.
Labeling synthetic content thus becomes a foundational step in responsible AI governance — one that strengthens India’s digital credibility globally.
The Global Stakes: Balancing Innovation and Regulation
Across the world, governments are grappling with the same dilemma — how to regulate AI without crushing innovation.
If overregulated, the AI sector risks losing momentum and global competitiveness. If underregulated, society risks falling victim to large-scale manipulation.
India’s balanced approach — label, don’t ban — could become a model for other democracies seeking to maintain this equilibrium.
“This is India’s moment to lead in digital ethics,” said Karan Bhagat, founder of PolicyEdge Think Tank. “The challenge is to implement smart regulation, not hard regulation.”
A Glimpse into the Future: The Next Phase of Digital Authenticity
Looking ahead, AI labeling may evolve beyond visual tags. Emerging technologies like blockchain authentication, content hashing, and digital provenance tools could automate content verification at scale.
Global consortia such as the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA) are already developing open standards that embed “truth certificates” directly into media files.
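Conceptually, a “truth certificate” binds a claim about a file’s origin to the file itself and makes tampering detectable. The sketch below uses an HMAC with a shared secret purely as a stand-in; real C2PA manifests rely on X.509 certificates and COSE signatures, and every name here is hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in; production systems use asymmetric keys

def issue_certificate(media: bytes, generator: str) -> str:
    """Attach a signed provenance claim to the given media bytes."""
    claim = {
        "generator": generator,
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return json.dumps({"claim": claim, "sig": sig})

def verify_certificate(media: bytes, cert_json: str) -> bool:
    """True only if the claim is untampered and matches these exact bytes."""
    cert = json.loads(cert_json)
    payload = json.dumps(cert["claim"], sort_keys=True).encode()
    expected_sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(cert["sig"], expected_sig)
            and cert["claim"]["content_sha256"] == hashlib.sha256(media).hexdigest())
```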
India’s participation in such initiatives could further strengthen its commitment to truth, transparency, and innovation.

Conclusion: Towards an Ethically Intelligent Future
The Centre’s proposal to amend the IT Rules marks a defining moment in India’s digital evolution. By mandating AI content labeling, the government sends a clear message — innovation must walk hand in hand with integrity.
While challenges remain — from enforcement to compliance — the long-term benefits outweigh the short-term costs. The “price” of transparency may seem high today, but it’s a small investment for a more trustworthy and resilient digital ecosystem tomorrow.
As the lines between real and artificial blur, India’s initiative reminds the world that the future of AI depends not just on intelligence — but on honesty.
Table: Overview of India’s Proposed AI Content Labeling Rules (2025)
| Parameter | Proposed Requirement | Impact Area |
|---|---|---|
| Declaration | Users must declare whether uploaded content is AI-generated | Accountability |
| Visual Labeling | Label covering at least 10% of surface area | Visual transparency |
| Audio Labeling | Label audible for initial 10% of total duration | Audio transparency |
| Metadata Embedding | Permanent unique identifier embedded in content | Traceability |
| Platform Liability | Non-compliance may remove “safe harbour” protection | Legal responsibility |
| Compliance Cost | ₹80–120 crore setup; ₹20–30 crore annual maintenance | Economic impact |
| Public Consultation | Open until December 2025 | Democratic process |
| Global Alignment | Mirrors China’s and EU’s transparency standards | International parity |

