Drafting deepfake defences: comments on India’s proposed amendments to the IT Rules

On October 22, 2025, the Ministry of Electronics and Information Technology (“MeitY”) proposed significant amendments (“Proposed Amendments”) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Rules”). The Proposed Amendments aim to regulate synthetically generated information, including deepfakes, artificial intelligence (“AI”) generated imagery, manipulated audio, and other forms of synthetic digital content, in response to rising concerns around misinformation, impersonation, and non-consensual imagery created using rapidly advancing generative AI technologies.

1.  Rationale behind the Proposed Amendments

MeitY’s explanatory note to the Proposed Amendments cites a series of alarming trends: deepfake videos impersonating public figures, synthetic audio used in financial fraud, and non-consensual intimate deepfakes. Globally and domestically, fabricated media has begun eroding public trust and exposing individuals to reputational, financial, psychological, and physical harms.

Concerns have also been raised in both houses of the Parliament of India, following which advisories were issued to social media intermediaries to curb deepfakes. The Proposed Amendments seek to go a step further and provide a clear legal backbone for detection, labelling, and accountability in relation to synthetic content.

2.  Key Features of the Proposed Amendments

(i) Defining Synthetically Generated Information (“SGI”)

The Proposed Amendments insert Rule 2(1)(wa), which defines SGI as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true”. The definition has been kept intentionally broad to capture artificially as well as algorithmically altered images, audio, and video.

(ii) Mandatory Labelling and Metadata Embedding

Perhaps the most far-reaching obligation under the Proposed Amendments is the requirement that any SGI created must carry a visible or audible label covering at least 10% of the visual area or the first 10% of an audio file’s duration, in addition to a permanent, unique metadata identifier that cannot be removed, altered, or suppressed. This aims to ensure that users are immediately aware that the content is synthetically generated or modified.
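To make the 10% thresholds concrete, the arithmetic can be sketched as below. This is purely illustrative: the Proposed Amendments state the thresholds but do not prescribe how a platform should compute label dimensions, and the helper names here are hypothetical.

```python
import math

def min_label_height(frame_width: int, frame_height: int,
                     coverage: float = 0.10) -> int:
    """Minimum height (in px) of a full-width banner label that covers
    at least `coverage` of the frame's visual area."""
    required_area = coverage * frame_width * frame_height
    return math.ceil(required_area / frame_width)

def audio_label_window(duration_seconds: float,
                       coverage: float = 0.10) -> float:
    """Length (in seconds) of the initial audio segment over which an
    audible disclosure would run to cover `coverage` of the duration."""
    return coverage * duration_seconds

# A full-width banner on a 1920x1080 frame must be at least 108 px tall;
# a 60-second clip carries the disclosure over roughly its first 6 seconds.
print(min_label_height(1920, 1080))  # → 108
print(audio_label_window(60.0))
```

As the example shows, even a modest-looking 10% floor translates into a visually prominent banner on standard video resolutions, which is relevant to the proportionality concerns discussed in Section 3 below.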

(iii) New Obligations for Significant Social Media Intermediaries (“SSMI(s)”)

A significant social media intermediary, i.e., a social media intermediary with over 50 lakh registered users, that enables the display, uploading, or publishing of any information on its computer resource must, prior to such display, uploading, or publication, require users to declare whether the uploaded content is synthetic. It must further deploy reasonable and appropriate automated tools to verify the submitted declaration and display a clear label on content identified as SGI. If an SSMI knowingly permits or fails to act on synthetic content in violation of the Rules, it may be deemed to have failed its due diligence obligations and may, in turn, lose its safe-harbour protections.
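The declare-verify-label sequence described above can be sketched as a simple decision flow. Everything in this sketch is a hypothetical illustration: the class and function names, the 0.8 detector threshold, and the three outcomes are assumptions, not anything prescribed by the Proposed Amendments.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the user's pre-publication declaration
    detector_score: float     # output of an automated SGI classifier (0..1)

def pre_publication_action(upload: Upload, threshold: float = 0.8) -> str:
    """Decide what a hypothetical SSMI does before display/publication."""
    if upload.declared_synthetic:
        return "label_as_sgi"        # honour the user's declaration
    if upload.detector_score >= threshold:
        return "flag_for_review"     # tooling contradicts the declaration
    return "publish_unlabelled"      # no declaration, nothing detected

print(pre_publication_action(Upload("v1", declared_synthetic=True,
                                    detector_score=0.1)))
# → label_as_sgi
```

Note that the middle branch is where the feasibility concerns in Section 3 bite: the reliability of `detector_score`, and hence of the whole flow, depends on detection technology that is still nascent and error-prone.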

(iv) Good-Faith Removal Protection

The Proposed Amendments shield intermediaries from liability that may arise when they remove or disable access to SGI or other harmful information as part of reasonable efforts or in response to grievances, ensuring that such removals do not compromise their safe-harbour protections under Section 79 of the Information Technology Act, 2000 (“IT Act”).

3.  Concerns with the Proposed Amendments

(i) Overbroad definition of SGI

The definition of SGI under the Proposed Amendments could capture almost all modern digital content, since nearly everything online is artificially or algorithmically modified to some extent. It would sweep in even basic filters, colour correction, AI-assisted writing and editing, computer-generated imagery, and animation.

Additionally, the phrase “reasonably appears to be authentic or true” introduces a subjective assessment and would exclude AI-modified content that does not appear authentic but is nonetheless harmful. It may also expose users and platforms to frivolous claims.

(ii) The 10% labelling requirement is unwarranted

Mandating a label covering at least 10% of the visual or audio display for all forms of SGI places an onerous and unwarranted burden on intermediaries. It undermines creative expression, hinders everyday AI workflows, and fails to differentiate between the different kinds of SGI. Further, a blanket labelling requirement could result in “banner blindness”, i.e., users becoming inured to labels over time. If such labelling requirements are to be introduced, they would be better restricted to SGI that poses a reasonable likelihood of risk or harm to a person or a group of people.

(iii) Possible Dilution of Safe-Harbour Protections

Although the Proposed Amendments attempt to preserve the protections provided under Section 79 of the IT Act, they also impose supervisory roles that could expose intermediaries to liability if any synthetic content escapes detection or labelling. This may result in over-cautious censorship and mass takedowns to avoid legal risk.

(iv) Compliance Burden and Technological Feasibility

The requirements under the Proposed Amendments in relation to metadata embedding, automated verification and user declarations may be extremely difficult for platforms to implement. The Rules assume the availability of advanced AI detection systems, which are still nascent and often inaccurate. A regulatory impact assessment is therefore strongly recommended before finalisation.

(v) Need for Broader Consultation

Given the disproportionate harm that deepfakes inflict on women, children, and marginalised groups, consultations should be undertaken with the relevant ministries (Women & Child Development, Information & Broadcasting, etc.), civil society, technologists, and digital rights organisations. The Proposed Amendments should be finalised only after such a consultation process with all stakeholders, so that the issues sought to be addressed and the practicalities of implementing the proposed solutions are fully understood.

4. Conclusion

The Proposed Amendments represent a bold step towards regulating deepfakes and synthetic media at a time when generative AI poses unprecedented risks to social trust, privacy and individual dignity. Their emphasis on transparency, traceability, and platform accountability as the key pillars of a safe and trustworthy digital ecosystem, is a step in the right direction.

However, the Proposed Amendments do raise concerns around overbreadth, feasibility, potential over-censorship, and burdens on smaller platforms. A more calibrated, risk-sensitive approach, aligned with global standards and shaped by broad stakeholder consultation, may be more effective in achieving the dual goal of safeguarding users and supporting innovation.
