The Government of India has proposed new rules requiring clear labelling of all AI-generated content on social media. The draft, issued by the IT Ministry, aims to curb the growing misuse of deepfakes and synthetic media used to spread misinformation.
Under the draft, platforms with more than five million users would be required to verify and label AI-generated material. Facebook, YouTube, Instagram, and X would need to deploy tools to detect and mark synthetic visuals, audio, and text.
According to the proposal, AI-generated visuals must carry a visible label covering 10 per cent of the screen, while audio clips should include a notice during the first 10 per cent of playback. Users would also need to declare whether their uploads are synthetic, and platforms must prevent any removal or modification of these labels.
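The two quantitative thresholds in the draft can be expressed as a short sketch. This is purely illustrative and not an official compliance tool; the function names, and the assumption that "10 per cent" refers to total screen area and total playback time respectively, are this author's reading of the proposal.

```python
def required_visual_label_area(screen_area_px: float) -> float:
    """Minimum visible label size for AI-generated visuals:
    10 per cent of the screen, per the draft proposal.
    (Interpretation of 'screen' as total area is an assumption.)"""
    return 0.10 * screen_area_px


def required_audio_notice_duration(clip_seconds: float) -> float:
    """Audio clips should carry a notice during the first
    10 per cent of playback, per the draft proposal."""
    return 0.10 * clip_seconds
```

For example, a 60-second audio clip would need the notice during its first 6 seconds of playback.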
Accountability and Legal Action
IT Minister Ashwini Vaishnaw said the move responds to rising public concern about deepfakes that harm reputations and violate privacy. “It is important that users know whether something is synthetic or real,” he stated.
The obligations will apply once synthetic content is posted publicly. Messaging platforms like WhatsApp will need to act when such material is reported to prevent further circulation.
Recent deepfake cases involving Sadhguru and Aishwarya Rai Bachchan have already prompted court action, highlighting the urgency of regulation.
PM Modi’s Consistent Focus on Responsible AI
The proposed rules align closely with Prime Minister Narendra Modi’s repeated warnings about the dangers of unchecked AI use. During Bill Gates’ 2024 visit to India, PM Modi had flagged the deceptive potential of deepfakes, calling for transparent labelling and source disclosure for all AI-generated content.
He also emphasised the need for a strong legal framework to govern AI technologies, ensuring innovation does not come at the cost of public trust and safety. The government’s latest proposal reflects that same approach — focusing on responsibility, awareness, and accountability in the use of artificial intelligence.
The draft further clarifies that responsibility for compliance rests with both users and platforms. The deepfake incidents that prompted litigation included fake ads using Sadhguru's image and manipulated videos of Aishwarya Rai Bachchan and Abhishek Bachchan, all of which strengthened calls for tighter regulation and transparency.
Stakeholders have until November 6, 2025, to submit feedback on the draft. The proposal is being seen as a decisive, proactive step to ensure ethical AI use and protect digital integrity, consistent with the government's stated emphasis on action over rhetoric.