The DPIIT Working Paper on AI and Copyright: Regulatory Signals and Practical Implications

Context and Regulatory Background

Artificial intelligence (“AI”) is no longer at the margins of business strategy. For many organisations, it has become embedded in product design, customer engagement, internal workflows and long-term planning. As AI moves closer to core operations, regulatory attention has shifted away from the narrow questions of permissibility and towards broader concerns around governance, accountability and economic impact.

In India, that shift is beginning to take clearer shape. The recent Working Paper on Generative AI and Copyright (Part 1), released by the Department for Promotion of Industry and Internal Trade (“DPIIT”) on December 08, 2025 (“Working Paper”), is a case in point.[1] While the Working Paper is framed around copyright and AI training, its relevance extends well beyond that subject. It offers an early indication of how Indian regulators are thinking about AI as an economic activity and how they may seek to structure regulation for emerging technologies more generally.

AI as a Governance and Business Issue

Until recently, decisions around AI were often treated as technical or product-led choices. Questions around data sourcing, model development and deployment typically sat with engineering or product teams, with legal input sought only after the fact.

AI systems today have a direct bearing on revenue models, pricing strategies, cost structures and regulatory exposure, often across multiple jurisdictions at once. They also carry reputational and operational risk at a scale that few other technologies do. The Working Paper reflects this reality. By treating AI training as an economic activity with system-wide implications, rather than a narrow technical process, it aligns with a broader regulatory trend in India where technology is being assessed through its business and market impact, not merely through formal legal compliance.

A Preference for Predictability Over Permissions

One of the more telling aspects of the Working Paper is what it suggests about regulatory design. Rather than relying on models that involve individual licences, opt-outs or transaction-by-transaction approvals, the Working Paper points towards standardised, statutory mechanisms. This reflects a growing perception that permission-based regulation struggles in complex digital ecosystems, and it suggests that future technology regulation in India is likely to prioritise scalability over theoretical individual control.

Commercial Impact of Compliance

AI-related regulatory exposure increasingly affects commercial decisions such as pricing, monetisation, investment and market entry, and not just legal risk assessments. For multinational organisations, the point is even sharper: AI systems trained in one jurisdiction may trigger regulatory consequences when deployed or commercialised elsewhere, and the DPIIT framework recognises this obstacle.

Where the Assumptions Start to Fray

That said, the Working Paper also rests on assumptions that may prove difficult to operationalise:

  1. To begin with, AI systems rarely follow neat, linear lifecycles. Training, testing, fine-tuning and deployment often overlap, and models may be retrained continuously or embedded incrementally into larger products. The Working Paper assumes a relatively clear distinction between training and commercial exploitation, a distinction that is not always easy to draw in practice.
  2. There are also practical challenges around attribution and valuation. AI systems typically draw on multiple datasets and models, and generate value only as part of broader platforms or services. Assigning economic value to specific training activities, particularly across jurisdictions, is unlikely to be straightforward.
  3. A further complication arises from reliance on third-party or pre-trained models. Many organisations have limited visibility into how such models were trained or what data was used. If regulatory obligations are linked with training-related activities, this places greater pressure on contractual controls, diligence and internal documentation, all of which are areas that are still evolving for many businesses.
  4. Finally, the Working Paper operates alongside, rather than within, other regulatory regimes. Issues relating to personal data protection, automated decision-making, platform accountability and competition law continue to sit under separate frameworks. As a result, AI governance in India is likely to remain multi-layered, requiring companies to navigate overlapping obligations rather than rely on a single regulatory solution.

The Data Protection Overlay

One area the Working Paper leaves open is how AI training interacts with personal data protection. This is a gap that companies should not underestimate.

Published works frequently contain personal data, including data relating to identifiable third parties. While authors may choose to disclose their own personal information, third parties mentioned in books, memoirs or biographies have not necessarily done so. The fact that information appears in a published work does not automatically remove it from the scope of India’s data protection regime. Moreover, AI models trained on books may reproduce identifiable personal facts, generate behavioural or psychological inferences, or re-express sensitive information. Such outcomes raise questions under the Digital Personal Data Protection Act, 2023 (“DPDP”), particularly around whether the input constituted personal data at the training stage, and around liability, lawful basis, purpose limitation and safeguards.

Section 3(c)(ii) of the DPDP, which excludes personal data made publicly available by the data principal or by a person under a legal obligation to do so, may be cited in this context.[2] However, it affords limited clarity: it does not clearly extend to third-party data embedded in published works, or to situations where individuals become contextually identifiable only after AI processing.

Conclusion

The Working Paper represents an attempt to regulate AI without stifling innovation. By focusing on AI training and the economics involved, it offers a workable starting point, but it also makes the limitations of that approach evident. It is now clearer than ever that AI governance will not be solved by a single policy or licence. It will require sustained engagement across functions, and a willingness to adapt as India’s approach to regulating emerging technologies continues to evolve.

[1] https://www.dpiit.gov.in/static/uploads/2025/12/ff266bbeed10c48e3479c941484f3525.pdf

[2] “3(c)(ii) Subject to the provisions of this Act, it shall not apply to personal data that is made or caused to be made publicly available by—

(A) the Data Principal to whom such personal data relates; or

(B) any other person who is under an obligation under any law for the time being in force in India to make such personal data publicly available.”
