Govt moves to tighten AI labels on social media as platforms face poor compliance

New rules may force YouTube, Instagram and X to keep AI tags visible throughout videos as Centre pushes for stronger transparency.

EPN Desk 22 April 2026 06:45

The government has moved to tighten rules for labeling artificial intelligence-generated content on social media platforms, citing what it described as “unsatisfactory compliance” by major intermediaries such as YouTube, Instagram and X.

The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to the IT Rules, 2021, making it mandatory for platforms to ensure a continuous and clearly visible display of labels on AI-generated content for the full duration of the visual material.

The move is expected to directly impact AI-generated videos, where disclosures may now need to remain visible from start to finish.

A senior government official said the amendments were prompted by repeated failures by platforms to properly implement rules notified earlier in February, which had required AI-generated content to carry labels “prominently” but did not define how long those labels should remain visible.

“Compliance has not been satisfactory. We have repeatedly pointed out that labeling has been inconsistent, and many AI-generated videos still carry no disclosures at all,” the official said.

According to the proposed changes issued on April 22, intermediaries must ensure labels remain visible throughout the duration of the content in any visual display, leaving little room for temporary or easily missed notices.

Another official said the ministry had sent multiple examples to social media companies where AI-generated videos were either not labeled or the disclosure appeared only briefly.

“It should not happen that a user watches an AI-generated video and the label flashes only for a few seconds. People must clearly know they are viewing content created through AI,” the official added.

The draft amendments have now been opened for public consultation, with comments invited until May 7. Queries sent to MeitY, Google and Meta had not received responses at the time of publication.

Under existing IT Rules, the definition of synthetically generated information (SGI) excludes assistive and quality-enhancing uses of AI, along with routine good-faith editing of audio or video content.

However, when platforms become aware that their services are being used to create, share or host unlawful SGI, they are required to take “appropriate” and “expeditious” action. This can include disabling access, removing content, suspending accounts or terminating users.

Platforms that enable users to create, modify or distribute SGI must also deploy reasonable technical safeguards to prevent misuse, particularly where such content misrepresents real-world events or impersonates individuals.

Big tech companies are further required to ask users to declare when uploaded content is AI-generated, verify such claims through technical means where possible, and ensure accurate labels are prominently displayed.

Concerns around AI misuse intensified earlier this year after Grok, X's AI chatbot, generated images of women in revealing clothing in response to user prompts, drawing criticism on privacy and dignity grounds.

The backlash drew international scrutiny, including from India. Following the controversy, and bans in some countries, X later revised Grok's safeguards to block such image generation.
