
On February 10, 2026, the Ministry of Electronics and Information Technology (“MeitY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”), introducing comprehensive regulations to combat deepfakes and AI-generated misinformation. The Amendment Rules come into force on February 20, 2026, giving intermediaries barely 10 (ten) days to overhaul their content moderation systems, deploy automated detection tools, and implement mandatory labelling infrastructure. The amendments follow the public consultation on draft rules released in October 2025 and introduce a detailed framework for regulating synthetically generated information (“SGI”).
Synthetically Generated Information
The Amendment Rules introduce a new definition of SGI under Rule 2(1)(wa): “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event”. This broad definition captures deepfakes, AI-generated content, and algorithmically manipulated media designed to deceive viewers. However, recognizing industry concerns, the Amendment Rules carve out three critical exclusions:
- Routine editing activities such as formatting, colour adjustment, noise reduction, transcription, or compression that do not materially alter the substance, context, or meaning of the content;
- Routine creation of documents, presentations, PDFs, educational materials, and research outputs using illustrative or template-based content, provided no false document is created; and
- Use of computer resources solely to improve accessibility, clarity, quality, translation, or searchability without manipulating the underlying content.
Compliance Obligations on Intermediaries
The Amendment Rules impose a two-tier compliance framework depending on whether intermediaries merely host SGI or actively enable its creation, generation, modification, alteration, publication, transmission, sharing or dissemination.
Prohibited and Labelled SGI
Under new Rule 3(3) of the Amendment Rules, intermediaries offering computer resources that “enable, permit, or facilitate the creation, generation, modification, alteration, publication, transmission, sharing, or dissemination of information as synthetically generated information” must deploy “reasonable and appropriate technical measures, including automated tools” to prevent users from creating, generating, modifying, altering, publishing, transmitting, sharing or disseminating unlawful SGI (a sketch of one possible gating step follows the list below). Prohibited categories include:
- Child sexual exploitative and abuse material, non-consensual intimate imagery, and obscene or sexually explicit content;
- Content creating false documents or false electronic records;
- Content relating to explosives, arms, or ammunition; and
- Content falsely depicting individuals or events in a manner likely to deceive.
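The Amendment Rules do not prescribe how these automated tools must operate. Purely by way of illustration, the Python sketch below shows one shape a pre-publication gate could take; every classifier, threshold, and name here is a hypothetical placeholder for an intermediary's own detection models, not anything mandated by the Rules.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ProhibitedCategory(Enum):
    """Shorthand for the prohibited SGI categories listed above."""
    CSAM = auto()
    NON_CONSENSUAL_INTIMATE_IMAGERY = auto()
    FALSE_DOCUMENT = auto()
    EXPLOSIVES_ARMS_AMMUNITION = auto()
    DECEPTIVE_DEPICTION = auto()

@dataclass
class Detection:
    category: ProhibitedCategory
    score: float  # model confidence in [0, 1]

# Hypothetical registry: the intermediary plugs in one scoring function
# per category, e.g. {ProhibitedCategory.FALSE_DOCUMENT: model.score}.
CLASSIFIERS: dict = {}

BLOCK_THRESHOLD = 0.9  # illustrative only; the Rules set no benchmark

def gate_upload(content: bytes) -> list:
    """Return detections that block publication; an empty list means
    the upload may proceed to the labelling step."""
    hits = []
    for category, score_fn in CLASSIFIERS.items():
        score = score_fn(content)
        if score >= BLOCK_THRESHOLD:
            hits.append(Detection(category, score))
    return hits
```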
For SGI not falling within the prohibited categories, intermediaries must ensure prominent labelling, with visible markers for visual content and audio disclosures for audio content. Additionally, intermediaries must embed permanent metadata and unique identifiers that trace the computer resource used to create, generate, modify or alter the SGI, to the extent technically feasible. Critically, intermediaries must not enable users to remove or suppress these labels or metadata.
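As a purely illustrative sketch of what the labelling and metadata obligations could look like in practice, the snippet below (using the Pillow imaging library) stamps a visible marker on an image and writes provenance fields into PNG text chunks. The field names and wording are our own assumptions; and because PNG text chunks are trivially strippable, a production deployment would more plausibly rely on a tamper-evident provenance standard such as C2PA content credentials.

```python
import uuid
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_sgi_png(src_path: str, dst_path: str, tool_id: str) -> None:
    """Stamp a visible SGI marker and embed provenance metadata."""
    img = Image.open(src_path).convert("RGBA")
    draw = ImageDraw.Draw(img)
    # Visible marker; the placement, size, and wording of a "prominent"
    # label are not specified by the Rules and are illustrative here.
    draw.text((10, 10), "AI-GENERATED CONTENT", fill=(255, 255, 255, 255))

    meta = PngInfo()
    # Embedded metadata: a unique identifier plus the identity of the
    # computer resource used to generate the content. All field names
    # are hypothetical.
    meta.add_text("sgi-label", "synthetically-generated")
    meta.add_text("sgi-content-id", str(uuid.uuid4()))
    meta.add_text("sgi-tool-id", tool_id)
    img.save(dst_path, format="PNG", pnginfo=meta)
```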
User Notifications and Periodic Reminders
Amended Rule 3(1)(c) mandates that intermediaries inform users at least once every 3 (three) months of the consequences of non-compliance with the intermediary’s rules and regulations and applicable law, including (i) the intermediary’s right to terminate access or to remove or disable access to non-compliant information; (ii) potential penalties under the Information Technology Act, 2000 (“IT Act”), the Bharatiya Nyaya Sanhita, 2023, and the Protection of Children from Sexual Offences Act, 2012; and (iii) mandatory reporting obligations for cognizable offences.
For intermediaries enabling SGI, additional notices under Rule 3(1)(ca) must warn users that violations may attract penalties under the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, and the Immoral Traffic (Prevention) Act, 1956, and can lead to immediate disabling of access to or removal of such information, account suspension, identity disclosure to victims, and reporting to appropriate authorities.
Enhanced Duties for Significant Social Media Intermediaries
Under new Rule 4(1A), significant social media intermediaries that enable the displaying, uploading, or publishing of any information on their computer resource must do the following (one possible workflow is sketched after the list):
- Require users to declare whether content is SGI before publication;
- Deploy automated tools to verify such declarations; and
- Ensure verified SGI is clearly labelled with appropriate notices.
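One possible shape for such a declaration-and-verification workflow, assuming a hypothetical in-house SGI detector and an illustrative threshold (the Rules prescribe neither), is sketched below:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    declared_sgi: bool  # user's pre-publication declaration

def sgi_likelihood(content: bytes) -> float:
    """Stub for the intermediary's own SGI detector, returning a
    synthetic-likelihood score in [0, 1]; hypothetical placeholder."""
    return 0.0

SGI_THRESHOLD = 0.8  # illustrative; the Rules set no benchmark

def moderate(upload: Upload) -> dict:
    detected = sgi_likelihood(upload.content) >= SGI_THRESHOLD
    return {
        # Label if the user declared SGI or the detector flags it.
        "label_as_sgi": upload.declared_sgi or detected,
        # A detector hit contradicting the declaration suggests a false
        # declaration and is routed to human review.
        "flag_for_review": detected and not upload.declared_sgi,
    }
```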
The Amendment Rules explicitly provide that intermediaries knowingly permitting, or failing to act upon, SGI in violation of the Amendment Rules shall be deemed to have failed to exercise due diligence.
Consequences for Non-Compliance
Intermediaries failing to comply with these due diligence obligations will lose safe harbour protection under Section 79(1) of the IT Act, exposing them to direct liability for user-generated content. Users creating or disseminating prohibited SGI may attract penalties under multiple statutes, ranging from fines to imprisonment.
Takedown and Grievance Timelines
The most consequential change for day-to-day operations is the drastic reduction of response timelines. Intermediaries must now remove or disable access to unlawful information within 3 (three) hours (reduced from the earlier 36 (thirty-six) hours) of receiving a court order or an authorized government notice. The Amendment Rules also tighten authorization procedures: notices must be issued ‘by order in writing’, and, in the case of the police, the authorizing officer must be of at least Deputy Inspector General rank.
For user grievances, general complaints must be resolved within 7 (seven) days (reduced from 15 (fifteen) days). Complaints relating to intimate images or content of an individual require action within 36 (thirty-six) hours (reduced from 72 (seventy-two) hours). For the most sensitive category (content exposing private areas, nudity, or sexual acts, or artificially morphed images), intermediaries must act within 2 (two) hours (reduced from 24 (twenty-four) hours).
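For compliance teams wiring these windows into ticketing or escalation systems, the arithmetic is simple; the sketch below maps each complaint category (the category names are our own shorthand, not statutory terms) to its window and computes the action deadline.

```python
from datetime import datetime, timedelta, timezone

# Response windows under the Amendment Rules, as described above.
RESPONSE_WINDOWS = {
    "court_or_government_takedown": timedelta(hours=3),
    "general_grievance": timedelta(days=7),
    "intimate_image_complaint": timedelta(hours=36),
    "nudity_or_morphed_image": timedelta(hours=2),
}

def action_deadline(category: str, received_at: datetime) -> datetime:
    """Latest time by which the intermediary must act on a complaint."""
    return received_at + RESPONSE_WINDOWS[category]

# Example: a morphed-image complaint received now must be actioned
# within 2 hours.
print(action_deadline("nudity_or_morphed_image",
                      datetime.now(timezone.utc)))
```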
Ambiguities Despite Stakeholder Feedback
Despite incorporating industry feedback, the Amendment Rules retain interpretive challenges. The standard of ‘reasonable and appropriate technical measures’ for detecting prohibited SGI remains undefined, with no performance benchmarks or acceptable error-rate thresholds. The verification mechanism required for user declarations is likewise unspecified. Further, the exclusions for ‘routine editing’ and ‘good-faith creation’ remain subject to interpretation, particularly for satire, parody, or artistic expression.
Conclusion
The Amendment Rules mark a watershed moment in India’s regulation of AI-generated content. By introducing mandatory labelling, proactive detection obligations, and drastically compressed takedown timelines, MeitY has signalled its intent to address the issues caused by synthetic media with urgency.
For intermediaries, the ten-day implementation window demands immediate action: deploying automated detection tools, implementing labelling infrastructure, developing user declaration workflows, and training content moderation teams on the compressed timelines. Given the breadth of the obligations and the interpretive uncertainties around ‘reasonable technical measures’ and verification standards, close engagement with regulators will be essential as the framework matures.
For users and businesses leveraging AI tools, the Amendment Rules underscore the importance of transparency. Consequences of creating prohibited SGI or failing to label permitted SGI extend beyond account termination to potential criminal liability. As India establishes guardrails for generative AI, the success of this framework will depend on balancing regulation of harmful content with protection of innovation and expression.