The government of India has introduced stricter compliance requirements for online platforms in handling artificial intelligence (AI)-generated and synthetic content, including deepfakes. Platforms such as X and Instagram will now be required to remove such content within three hours if flagged by a competent authority or court.

Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have been notified, formally defining AI-generated and synthetic content. The revised rules will come into effect from February 20, 2026.

Further, the amendments define the terms "audio, visual or audio-visual information" and "synthetically generated information", covering AI-created or AI-altered material that appears real or authentic. However, routine editing, accessibility enhancements, and good-faith educational or design-related uses are excluded.

Under the revised framework, synthetic content will now be treated as "information" for the purpose of determining unlawful acts under the IT Rules. Intermediaries must comply with government or court orders within three hours, a significant reduction from the earlier 36-hour window, as per a gazette notification issued by the Ministry of Electronics and Information Technology (MeitY). Timelines for user grievance redressal have also been shortened.

Furthermore, platforms that enable the creation or sharing of synthetic content must ensure that such content is clearly and prominently labelled. Where technically feasible, permanent metadata or identifiers must be embedded to signal AI generation.

The rules also require platforms to deploy automated tools to prevent the circulation of illegal, deceptive, sexually exploitative or non-consensual AI content, as well as material related to false documentation, child abuse, explosives or impersonation. Additionally, intermediaries will not be permitted to remove or suppress AI labels or associated metadata once applied.