Tackling Deepfakes Requires All Hands on Deck

Authors: Rohit Kumar and Mahwash Fatima

Published: 8th January 2024 in the Hindustan Times

What would your elderly father's response be if he received an emergency video message from you requesting a large sum of money? With rapid advances in artificial intelligence, the normal human reaction to such a situation can easily be exploited through the creation of deepfakes.

Deepfakes are undoubtedly one of the biggest threats our society is likely to face in 2024. No wonder the Union government has taken up this issue on priority. It has already sent an advisory to social media intermediaries asking them to strengthen their systems for detecting and taking down deepfakes. News reports also suggest that the Ministry of Electronics and IT is considering fresh amendments to the Information Technology (IT) Rules to include specific obligations for intermediaries to contain the deepfake menace.

It was in 2017 that deepfake content made its first notable appearance, with a Reddit user named ‘deepfakes’ posting fake videos of celebrities. Over the years, as the underlying technology has developed, these videos have become increasingly realistic and deceptive. Between 2019 and 2020, the amount of deepfake content online increased by over 900%, with some forecasts predicting that as much as 90% of online content may be synthetically generated by 2026.

The most worrying societal harm from the rise of misinformation and deepfakes is the erosion of trust in our information ecosystem. Not knowing who or what to believe can do unimaginable damage to how humans interact and engage with each other. A recent empirical study has in fact shown that the mere existence of deepfakes feeds distrust in any kind of information, whether true or false.

In India, while no legislation specifically governs deepfakes, existing laws such as the IT Act and the Indian Penal Code already criminalise online impersonation, malicious use of communication devices, publication of obscene material, and similar offences. Social media platforms are also obligated under the IT Rules to take down misinformation and impersonating content; failing to do so puts their ‘safe harbour’ protection at risk and leaves them liable for the harm that ensues.

Unfortunately, while these legal provisions already exist, executing what the law demands is challenging. First, identifying deepfakes is a massive technical challenge. Currently available options – AI-powered detection and watermarking/labelling techniques – are inconsistent and inaccurate. Notably, OpenAI pulled its own AI detection tool in July 2023 due to its ‘low accuracy’.

Second, the technologies used to create deepfakes have positive use-cases too. For instance, these same technologies can be used to augment accessibility tools for persons with disabilities, deployed in the entertainment industry for more realistic special effects, and even used in the education sector. Essentially, the fact that a piece of content has been digitally edited does not necessarily make it harmful. This further complicates the job of content moderation.

Third, the volume of content uploaded every second makes meaningful human oversight difficult. Unfortunately, by the time problematic content is detected, it has often already spread.

Policymakers around the world are struggling to find a good solution to the problem. The US and the EU seem to have taken some initial steps, but their efficacy remains untested. In the US, President Biden signed an executive order in October 2023 to address AI risks. Under this order, the Department of Commerce is creating standards for labelling AI-generated content. Separately, states like California and Texas have passed laws criminalising the dissemination of deepfake videos that influence elections, while Virginia penalises the distribution of non-consensual deepfake pornography. In Europe, the Artificial Intelligence Act will categorise AI systems into unacceptable, high, limited, and low risk tiers. Notably, AI systems that generate or manipulate image, audio or video content (i.e. deepfakes) will be subject to transparency obligations.

Technologists are also working on ways to accurately trace the origins of synthetic media. One such attempt, by the Coalition for Content Provenance and Authenticity (C2PA), aims to cryptographically link each piece of media with its origin and editing history. However, the challenge with C2PA’s approach lies in securing widespread adoption of its standards by devices and editing tools, without which unlabelled AI-generated content will continue to deceive.

Therefore, while watermarking and labelling may help, what we need urgently is a focused attempt to reduce the circulation of deepfake content. Slowing the spread of flagged content until its veracity is confirmed can be crucial in preventing real-world harm. This is where intermediaries such as social media platforms can perhaps be required to step in more strongly. If an uploaded piece of content is detected to be AI-modified, or is flagged by users, platforms should mark it for review before allowing unchecked distribution.

Finally, there is no substitute for building resilience among audiences. Fostering media literacy, so that people of all ages better understand the threat of misinformation and become more conscious consumers of information, is the need of the hour.

Navigating a new digital era where ‘seeing is no longer believing’ is undoubtedly challenging. We need a multi-pronged regulatory approach that nudges all ecosystem actors not only to prevent and detect deepfake content, but also to engage with it more wisely. Anything less is unlikely to preserve our trust in the digital world.


Rohit Kumar is a Founding Partner and Mahwash Fatima is a Senior Analyst at The Quantum Hub (TQH), a public policy firm.