Regulating online speech: Risky ‘cure’ for a genuine problem

The Supreme Court’s concern over the unchecked spread of harmful user-generated content is legitimate and timely. It directed the Ministry of Information and Broadcasting on November 13 to work on guidelines for user-generated content to protect innocents from becoming victims of obscene, even perverse, “anti-national” or personally damaging online content.

A bench of Chief Justice of India Surya Kant and Justice Joymalya Bagchi cautioned that user-generated content, potentially disastrous to reputations or even containing “adult content”, goes viral even before social media intermediaries can take it down. The bench considered the idea of an “impartial and autonomous authority”, bound neither to private broadcasters nor to the government, to vet “prima facie permissible” content.

What is UGC?

To understand the problem of regulating social media, one must first know what user-generated content (UGC) is. UGC is any original material, such as text, photos, or videos, created and published by users on platforms like social media, rather than by a brand or company itself. It can include customer reviews, social media posts, and testimonials, and it is a valuable marketing tool because it builds trust and authenticity through real-life experiences and perspectives. A customer posting a picture of a product on Instagram, a video review on YouTube, or a testimonial on a company’s website are all examples of UGC.

Consumers often trust content from other users more than traditional advertising because it feels more genuine. UGC can increase social media engagement and drive traffic to a website, as it is often more relatable and persuasive. Seeing real people use and enjoy a product or service acts as powerful social proof, helping to influence purchasing decisions. Companies can therefore leverage UGC as an authentic and cost-effective way to showcase real-life customer experiences in their marketing campaigns.

SC’s suggestion

The Supreme Court’s primary concern is the “24-hour gap”—the critical lag between content going viral and its removal. The merits of establishing an impartial, autonomous authority include:

Speed of redressal: A dedicated authority could act faster than the current court processes, preventing reputational destruction before it becomes irreversible. As Justice Kant noted, “By the time they rush to court, the damage is done.”

Accountability for ‘self-styled’ channels: Currently, individual creators act as broadcasters without the regulatory compliance required of traditional media. An authority can help standardise accountability.

Protection of the Vulnerable (Article 21): It prioritises the ‘Right to dignity’ and the ‘Right to reputation’ for victims who lack the resources to fight defamation suits against viral mobs.

Curating “Adult” Access: The suggestion of age verification (via Aadhaar or otherwise) attempts to create a digital barrier protecting minors from perverse content, filling the gap where simple “warnings” fail.

Worries over violation of Article 19(1)(a):

The “preventive” nature of the proposal raises significant constitutional concerns regarding the freedom of speech and expression: a) Pre-censorship (prior restraint): In democratic jurisprudence, preventing speech before it occurs is far more dangerous than punishing illegal speech after it occurs; b) Vagueness of “anti-national”: Advocate Prashant Bhushan highlighted that terms like “anti-national” are ambiguous; c) Chilling effect: If creators know their content must be “vetted” or that they are being tracked via Aadhaar, they may self-censor valid opinions out of fear, leading to a sterile digital environment.

Protecting millions of victims:

Voicing his apprehension about the millions who are victimised, the Chief Justice of India said: “Dissent is part of democracy. Every day, people write against the government. But the problem arises when you suddenly put something on YouTube and there are millions and millions who are victimised. They do not have a voice. They do not have a platform, and by the time they rush to court, the damage is done.”

When guidelines restraining free speech are proposed, prior and extensive public consultations must be made mandatory. Arguing for such mandatory consultation, Prashant Bhushan cautioned that the term ‘anti-national’ was both over-broad and ambiguous.

According to media reports of the Bench’s remarks, the Chief Justice said there were enough laws to turn to after the damage was done, but there was nothing to protect victims before the post went online. “A takedown consumes at least 24 hours. By the time it is effectuated, the harm is already done... This preventive exercise is not to throttle anyone but to have a certain degree of stick. Technology with AI makes you (social media intermediaries) enormously powerful, to curate your material, assess its impact. Platforms are monetising content,” Justice Bagchi observed.

The judge termed prosecution of the creator of the offending social media post a “post-occurrence penalty”, saying “we must have preventive mechanisms to ensure there is no spread of misinformation, loss of property as well as sometimes lives”.

Similarly, senior advocate Amit Sibal, for the Indian Broadcasting and Digital Foundation, expressed reservations about the court using the term ‘preventive’ to describe the proposed guidelines. ‘Preventive’ could be read as ‘pre-censorship’, he said. He suggested changing the prefix to ‘effective’.

Technical prevention vs. pre-punishment:

The goal is to stop the harm without stopping the speech in advance. Technical solutions can curb the spread of harmful content without acting as a ‘censor’.

Virality circuit breakers: Instead of blocking uploads, platforms can technically limit how fast a piece of content can be shared. If a video violates “velocity checks” (going viral too fast without verification), its reach is temporarily throttled until reviewed.
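A velocity check of this kind can be sketched in a few lines. The sketch below is illustrative, not any platform’s actual implementation: the class name, the share threshold and the time window are all assumptions chosen for the example.

```python
import time
from collections import defaultdict, deque

class ViralityCircuitBreaker:
    """Illustrative sketch: throttle content whose share velocity
    exceeds a limit until a human reviewer clears it.
    Threshold and window are hypothetical values."""

    def __init__(self, max_shares=100, window_seconds=60):
        self.max_shares = max_shares
        self.window = window_seconds
        self.events = defaultdict(deque)  # content_id -> share timestamps
        self.reviewed = set()             # content cleared by review

    def record_share(self, content_id, now=None):
        """Log a share; return True if the share may propagate."""
        now = time.time() if now is None else now
        q = self.events[content_id]
        q.append(now)
        # Drop timestamps that fell outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if content_id in self.reviewed:
            return True
        # Velocity check: too many shares too fast -> throttle until reviewed.
        return len(q) <= self.max_shares

    def mark_reviewed(self, content_id):
        self.reviewed.add(content_id)
```

The point of the design is that nothing is deleted: shares beyond the velocity limit are merely held back until review, so lawful content regains full reach once cleared.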

Hashing and fingerprinting: Technologies like PhotoDNA create a digital “fingerprint” of known illegal content (e.g., child abuse material). This prevents the re-upload of known criminal content automatically without needing human vetting for every new post.
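The fingerprinting idea can be illustrated with an exact cryptographic hash. Note the simplification: real systems such as PhotoDNA use proprietary *perceptual* hashes that survive re-encoding and cropping, whereas the SHA-256 sketch below only catches byte-identical re-uploads; the function names here are invented for the example.

```python
import hashlib

# Illustrative blocklist of SHA-256 digests of known illegal files.
BLOCKED_HASHES = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_known_illegal(data: bytes) -> None:
    """Add a known illegal file's fingerprint to the blocklist."""
    BLOCKED_HASHES.add(fingerprint(data))

def is_reupload_blocked(data: bytes) -> bool:
    """True if the upload matches a known-illegal fingerprint,
    so it can be rejected automatically without human vetting."""
    return fingerprint(data) in BLOCKED_HASHES
```

Because the check runs against a fixed list of already-adjudicated material, it blocks only known criminal content and never evaluates new speech.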

Contextual labelling: AI can detect controversial claims and append “Context Cards” or “Unverified” labels immediately, warning viewers without removing the video. This mitigates misinformation without violating free speech.

Trusted flagger programs: Prioritizing reports from certified civil society organizations allows for near-instant review of dangerous content (like doxxing) without subjecting all users to pre-screening.

Abuse of power:

The “impartial authority” carries risks of becoming a tool for state control: a) Surveillance state: Linking Aadhaar to social media accounts eliminates online anonymity. This allows the state to profile citizens based on their viewing habits and political posts, potentially targeting critics “in advance”; b) Administrative overreach: An “autonomous” body appointed by the government may lack the judicial independence of a court. If this body has the power to ban users or content without a trial, it bypasses the “due process” promised by the Constitution; c) Guilty until proven innocent: A “preventive” vetting system assumes that user content is likely harmful until proven safe. This reverses the burden of proof, treating citizens as potential offenders rather than free agents.

(The writer is Professor and Advisor, School of Law, Mahindra University, Hyderabad)
