India's New IT Rules: Deepfake Labels Mandatory, Platforms Face 2-3 Hour Takedown Windows
New regulations require AI-generated content to be clearly labelled and give platforms just 2-3 hours to remove flagged synthetic media.
The Ministry of Electronics and Information Technology (MeitY) has notified sweeping new IT rules that mandate clear labelling of all AI-generated synthetic content, including deepfakes, AI-assisted articles, and voice clones.
Under the new framework, platforms must implement automated detection systems capable of identifying synthetic media and tagging it with a visible "AI Generated" watermark. Content reported to platforms or flagged by government agencies must be taken down within a window of just two to three hours.
The rules come amid growing concerns about electoral misinformation, with multiple instances of deepfake videos featuring politicians going viral ahead of state assembly elections in Tamil Nadu, Kerala, and Puducherry.
"This is not about censorship โ it's about transparency," said MeitY Secretary S. Krishnan. "Citizens have a right to know when the content they're consuming is generated by a machine rather than a human."
Tech companies have raised concerns about the operational feasibility of the 2-3 hour takedown window, calling it "technically impractical at scale." Industry body NASSCOM has requested a phased implementation timeline.
Priya Sharma
Editor