India has ordered social media platforms to step up enforcement of deepfakes and other AI-generated impersonations, while significantly reducing the time they have to comply with takedown orders. It’s a move that could reshape the way global tech companies manage content in one of the world’s largest and fastest-growing internet services markets.
The changes (PDF), announced on Tuesday as amendments to India’s 2021 IT Rules, bring deepfakes under a formal regulatory framework and mandate labeling and traceability of synthetic audio and video content, while shortening compliance timelines for platforms, including a three-hour deadline for official takedown orders and a two-hour window for certain emergency user complaints.
India’s importance as a digital market will amplify the impact of the new rules. With more than 1 billion internet users and a predominantly young population, the South Asian country has become a key market for platforms such as Meta and YouTube, and compliance measures adopted in India are likely to influence products and moderation practices worldwide.
Under the revised rules, social media platforms that allow users to upload or share audiovisual content will have to require disclosure of whether the material is synthetically generated, deploy tools to verify those claims, and ensure that deepfakes are clearly labeled and embedded with traceable provenance data.
Certain categories of synthetic content (such as deceptive impersonation, non-consensual intimate images, and material related to serious crimes) are completely prohibited by the rules. Non-compliance, especially if flagged by authorities or users, can jeopardize safe harbor protections under Indian law and expose companies to greater legal liability.
The rules lean heavily on automated systems to meet these obligations. Platforms are expected to deploy technological tools to verify user disclosures, identify and label deepfakes, and prevent the creation and sharing of synthetic content that is prohibited outright.
“The revised IT rules signal a more tailored approach to regulating AI-generated deepfakes,” said Rohit Kumar, founding partner at New Delhi-based policy consulting firm Quantum Hub. “Significant reductions in complaint resolution timelines, such as two- to three-hour response windows, significantly increase compliance burdens and merit closer scrutiny, especially given that noncompliance can lead to loss of safe harbor protection.”
Arajita Rana, a partner at Indian corporate law firm AZB & Partners, said the rules focus on AI-generated audiovisual content rather than all online information, and make exceptions for routine, superficial, or efficiency-related uses of AI. Still, Rana cautioned that the requirement for intermediaries to remove content within three hours of becoming aware of it departs from established free speech principles.
“However, the law still requires intermediaries to remove content upon learning or receiving actual knowledge, even within three hours,” Rana said, adding that labeling requirements apply across formats to curb the spread of child sexual abuse material and deceptive content.
The Internet Freedom Foundation, a New Delhi-based digital advocacy group, said the rules risk accelerating censorship by significantly shortening takedown timelines, leaving little room for human review and pushing platforms toward automated over-removal.
“These impossibly short timelines preclude meaningful human review,” the group said, warning that the changes could undermine free speech protections and due process.
Two industry sources told TechCrunch that the changes went through a limited consultation process, with only a small number of suggestions reflected in the final rules. While the Indian government appears to have accepted a proposal to narrow the scope of covered information (focusing on AI-generated audiovisual content rather than all online material), other recommendations were not adopted. The changes between the draft and final regulations were significant enough that another round of consultation was warranted to give businesses clear guidance on compliance expectations, the sources said.
The government’s removal powers are already contested in India. Social media platforms and civil society groups have long criticized the breadth and opacity of content removal orders.
Meta, Google, Snap, X and India’s IT Ministry did not respond to requests for comment.
The latest changes come just months after the Indian government, in October 2025, reduced the number of authorities empowered to order content removed from the internet, in response to a legal challenge by X over the scope and transparency of those powers.
The revised rules take effect on February 20, giving platforms little time to adjust their compliance systems. The rollout will coincide with the AI Impact Summit, which India is hosting in New Delhi from February 16 to 20 and which is expected to draw senior technology executives and policymakers from around the world.
