
AI in the Spotlight: Insights from MeitY’s Latest Advisory

The Ministry of Electronics and Information Technology (“MeitY”), on March 1, 2024, issued an advisory to all intermediaries under the Information Technology Act, 2000 (“IT Act”) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The advisory is in continuation of the advisory dated December 26, 2023, issued by MeitY to address growing concerns surrounding misinformation spread through deepfakes. MeitY, while pointing out that intermediaries are failing to carry out their due diligence obligations under the IT Act and the IT Rules, advised all intermediaries and platforms which use and host Artificial Intelligence (“AI”) systems to comply with the following obligations:

  1. Use of AI Models

Intermediaries that use AI models, Large Language Models (“LLMs”) and/or generative AI models must undertake steps to ensure that, through the use of such models, users do not upload any unlawful content as set forth in the IT Act or violate any provisions of the IT Act. Furthermore, intermediaries must clearly inform users, through their user agreements, of the consequences of hosting such unlawful content.

The advisory also included an enormously contentious provision on the use of under-testing and/or unreliable AI models, LLMs and generative AI models. It stipulated that such technologies could be deployed on the Indian internet only with the explicit approval of the Central Government. Additionally, the provision of such under-testing models to citizens should be accompanied by a ‘consent popup’ informing users about the inherent fallibility of the AI system’s output. This provision was met with immense backlash from industry stakeholders, especially start-ups, who claimed that it could be detrimental to AI innovation in the country.

In response to this pushback, on March 15, 2024, MeitY reportedly issued a revised advisory to 8 (eight) of the largest intermediaries, namely Facebook, Instagram, WhatsApp, Google/YouTube, Twitter, Snap, Microsoft and ShareChat. The revised advisory toned down the language of the March 1 advisory and clarified that the requirement of government approval for providing under-testing and/or unreliable AI models to the general public would apply only to these 8 (eight) platforms.

  2. Prevention of Discrimination and Bias

As argued by several industry experts and as recently demonstrated by Google’s generative AI model, Gemini, AI models remain susceptible to bias and discrimination that can skew their predictions and ultimately produce biased output. Through the advisory, MeitY has called on all intermediaries to ensure that no AI models or LLMs that permit bias or discrimination are used. The ministry has also emphasized the potential consequences of a discriminatory AI system for a country’s electoral process. In this regard, the IT Secretary, Mr. S. Krishnan, said that the advisory was issued primarily because AI tools produced differing outcomes depending on the territory from which they are used, and that this could become a significant concern ahead of the upcoming general elections in the country. Consequently, the advisory directed intermediaries to ensure that their AI models do not threaten the integrity of India’s electoral process.

Moreover, intermediaries and platforms that use their computer resources to provide audio-visual content were advised to ensure that content which could potentially be deemed misinformation or a deepfake is not hosted or displayed on the platform. Any content generated from such computer resources and hosted on a platform must either be labelled or embedded with a permanent, unique metadata tag or identifier so that the origin of such information or content can be identified, as illustrated in the sketch below.
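By way of illustration only, labelling of this kind could take the form of embedding a unique provenance identifier in the metadata of a generated file. The field names, function name and approach below are hypothetical assumptions for the purpose of this sketch and are not prescribed by the advisory:

```python
# Hypothetical sketch: tagging an AI-generated PNG with a permanent,
# unique identifier so that the origin of the content can be traced.
from uuid import uuid4

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_generated_image(src_path: str, dst_path: str, platform: str) -> str:
    """Embed provenance metadata into a generated PNG and return the identifier."""
    content_id = str(uuid4())                     # permanent unique identifier
    meta = PngInfo()
    meta.add_text("ai-generated", "true")         # flags the content as synthetic
    meta.add_text("origin-platform", platform)    # identifies the generating service
    meta.add_text("content-id", content_id)       # traceable unique ID
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)          # write the PNG with text metadata
    return content_id
```

In practice, platforms may rely on dedicated content-provenance standards rather than ad hoc metadata fields; the sketch is intended only to show the kind of labelling the advisory contemplates.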

While the IT Minister, Mr. Ashwini Vaishnaw, clarified that the advisory is not meant to be a regulatory framework but rather a set of guidelines for entities to test their AI products before launch, there is still uncertainty over whether the advisory is legally binding, since intermediaries have been reminded of their due diligence obligations in order to avoid prosecution under the IT Act and to retain safe harbour protection under it.

In conclusion, the advisory issued by MeitY regarding the use of AI models by intermediaries marks a significant step towards addressing concerns surrounding misinformation and bias in digital content. While the revised advisory aims to strike a balance between regulatory oversight and fostering innovation, it has sparked debate within the industry. Future developments in this area will undoubtedly shape the trajectory of AI governance and innovation in India, as stakeholders navigate the complexities of ensuring compliance with regulatory standards while pushing the boundaries of technological advancement.
