Authors: Ujval Mohan and Salil Ahuja
Published: 12th February, 2024 in Hindu BusinessLine
The Bombay High Court’s split verdict on the constitutionality of the Indian Government’s proposed fact check unit (FCU) exemplifies the tension between countering the threat of misinformation and involving the government in fact checks. In 2023, FCUs emerged as the favoured policy intervention, with governments in Karnataka, Tamil Nadu, and Uttarakhand each citing the need to control misinformation.
Safeguarding the integrity of civic discourse from manipulative disinformation campaigns is paramount, especially as India enters a pivotal election season. In principle, fact-checks can effectively counter false narratives that mislead users and cause real-world harm. While social media platforms have long partnered with third-party fact-checkers to warn users of false information, the threat of ‘fake news’ has grown in scale and sophistication.
However, FCU proposals mark a novel trend in which governments seek to fact-check misleading narratives themselves. The idea of governments emerging as official arbiters of truth has drawn widespread scepticism. As more governments pour already scarce resources into setting up their own FCUs, addressing systemic limitations becomes crucial.
Who watches the watchdog?
With easy access to generative AI technologies, information pollution is becoming more abundant, powerful, and deceptive. At the same time, a large share of ‘false information’ online is likely innocuous and often a form of satire or artistic expression.
FCUs face the daunting task of sifting through this digital haystack to handpick harmful narratives that deserve their attention. This entails identifying information emerging from suspicious or inauthentic sources while analysing trends to spot harmful content. Justice Patel, who led the Bombay HC bench, raised a concern about “how few things are immutably black-or-white, yes or no, true or false”, warning that such ambiguity could lead to an untenable system of coercive censorship of alternative views by the government.
Government actors ultimately answer to political incentives, which skews how they select narratives. Consequently, government FCUs may disproportionately target content critical of the government while ignoring falsehoods that support its outlook. For instance, government-run FCUs in Malaysia and Thailand conspicuously stayed away from narratives about controversial regime changes and protests. In Singapore, the Minister empowered to issue directions against ‘fake news’ has overwhelmingly used this power to target dissenting voices.
Unsurprisingly, the FCUs proposed by both the Union Government and Tamil Nadu target only misinformation about the governments themselves. Other FCUs are less clear about which narratives they will prioritise and how those choices will be made. In this format, FCUs will morph into tools of government counter-speech, deviating from their intended purpose of debunking the falsehoods that pose the greatest risk of harm. The Court itself questioned the public interest in scrutinising claims solely about the government.
State action is often disproportionate
Each proposed Indian FCU has a different structure, but all are designed to label content as misleading, facilitate takedowns, or prosecute errant social media users. Owing to inherent conflicts of interest, fact checks by the state are prone to public distrust, as well as to legal challenges arising from free speech concerns.
For example, not all fake posts warrant penalties, but once an FCU flags a post as ‘false’, the user who shared it faces the real possibility of prosecution. With instances of Indian police overriding legal safeguards to arrest users for innocuous social media content, citizens and journalists will be discouraged from speaking online for fear of FCU action. This is precisely the issue in another case before the Madras High Court, where petitioners argue that the FCU will muzzle voices critical of the state government.
Designing an effective FCU
The structural conflict of interest resulting from government intervention necessitates institutional independence and transparency in narrative selection. For example, proposed FCUs should insulate editorial decisions from government influence, regularly publish transparency reports, and decentralise fact-checking functions across numerous independent fact-checkers.
While the design of state-led FCUs can be improved, government efforts to counter misinformation would be far more effective if they focussed on enabling partnerships between social media platforms and a vibrant ecosystem of independent third-party fact-checkers, rather than on conducting fact checks themselves.
That said, even independent fact-checkers need time to curate priority narratives, gather precise evidence, and verify claims, all before dangerous falsehoods mutate and gain traction. Fact checks alone therefore cannot counter the harm from misinformation unless we also slow the spread of unverified or unsafe content. Creating ecosystem incentives that deprioritise virality in favour of trust should thus be another goal for policymakers.
Fact checks already battle online polarisation and the ‘backfire effect’, where users double down on their belief in falsehoods after they are debunked. Saddling fact-checking with the limitations that come with state control would deal another blow to its efficacy.
—
Salil Ahuja is an Analyst and Ujval Mohan is a Senior Analyst working on technology policy issues at The Quantum Hub (TQH) – a public policy firm.