Laying the foundation for a future-ready digital India

The proposed ‘Digital India Bill’ holds out the promise of not only upgrading the current legal regime but also redefining the contours of how technology is regulated.

Authors: Rohit Kumar and Kaushik Thanugonda
Published: June 27, 2023 in The Hindu

The Ministry of Electronics and Information Technology has been actively organising consultations on the proposed “Digital India Bill” to build conceptual alignment on a new law that will replace India’s 23-year-old Information Technology (IT) Act.

The goal is to upgrade the current legal regime to tackle emerging challenges such as user harm, competition and misinformation in the digital space. The Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, has said that the first draft of the Bill should be out by the end of June. This is a much-anticipated piece of legislation that is likely to redefine the contours of how technology is regulated, not just in India but also globally. Among the changes being proposed is a categorisation of digital intermediaries into distinct classes such as e-commerce players, social media companies, and search engines, so that different responsibilities and liabilities can be placed on each kind.

Why the present regime is untenable

The current IT Act defines an “intermediary” broadly, to include almost any entity that sits between a user and the Internet, and the IT Rules sub-classify intermediaries into three main categories: “Social Media Intermediaries” (SMIs), “Significant Social Media Intermediaries” (SSMIs) and the recently notified “Online Gaming Intermediaries”. SMIs are platforms that facilitate communication and the sharing of information between users, and SMIs with a very large user base (above a specified threshold) are designated as SSMIs. However, the definition of SMIs is so broad that it can encompass a variety of services such as video communications, matrimonial websites, email and even online comment sections on websites.

The rules also lay down stringent obligations for most intermediaries, such as a 72-hour timeline for responding to requests from law enforcement agencies and resolving ‘content take-down’ requests. Unfortunately, ISPs, websites, e-commerce platforms, and cloud services are all treated much the same way.

Consider platforms such as Microsoft Teams or customer management solutions such as Zoho. Because access to these services is licensed, they have a closed user base and present a lower risk of harm from information going viral. Treating such intermediaries like conventional social media platforms not only adds to their cost of doing business but also exposes them to greater liability, without meaningfully reducing the risks presented by the Internet.

Globally, not much to build on

So far, only a handful of countries have taken a clear position on the issue of proportionate regulation of intermediaries, so there is not too much to lean on. The European Union’s Digital Services Act is probably one of the most developed frameworks for us to consider. It introduces some exemptions and creates three tiers of intermediaries — hosting services, online platforms and “very large online platforms”, with increasing legal obligations. Australia has created an eight-fold classification system, with separate industry-drafted codes governing categories such as social media platforms and search engines. Intermediaries are required to conduct risk assessments, based on the potential for exposure to harmful content such as child sexual abuse material (CSAM) or terrorism.

Focus areas for India

While a granular, product-specific classification could improve accountability and safety online, such an approach may not be future-proof: as technology evolves, the specific categories we define today may no longer fit tomorrow’s services. What we need, therefore, is a classification framework that creates a few defined categories, requires intermediaries to undertake risk assessments, and uses that information to place them in the relevant category. As far as possible, the goal should also be to minimise obligations on intermediaries and ensure that regulatory requirements are proportionate to their ability and size.

One way to do this would be to exempt micro and small enterprises, as well as caching and conduit services (the ‘pipes’ of the Internet), from any major obligations, and to clearly distinguish communication services (where end-users interact with each other) from other forms of intermediaries (such as search engines and online marketplaces). Given the lower risks, the obligations placed on intermediaries that are not communication services should be lighter, but they could still be required to appoint a grievance officer, cooperate with law enforcement, identify advertising, and take down problematic content within reasonable timelines.

Intermediaries that offer communication services could be asked to undertake risk assessments based on the number of their active users, risk of harm and potential for virality of harmful content. The largest communication services (platforms such as Twitter) could then be required to adhere to special obligations such as appointing India-based officers and setting up in-house grievance appellate mechanisms with independent external stakeholders to increase confidence in the grievance process. Alternative approaches to curbing virality, such as circuit breakers to slow down content, could also be considered.

For the proposed approach to be effective, metrics for risk assessment and appropriate thresholds would have to be defined and reviewed periodically, in consultation with industry. Overall, such a framework could help establish accountability and online safety while reducing legal obligations for a large number of intermediaries. In doing so, it could create a regulatory environment that achieves the government’s policy goal of a safer Internet ecosystem while also allowing businesses to thrive.

Rohit Kumar is Founding Partner and Kaushik Thanugonda is Senior Analyst at The Quantum Hub (TQH).