Authors: Rohit Kumar & Sidharth Deb
Published: 29th March, 2024 in The Economic Times
On March 15th, the Ministry of Electronics and Information Technology (“MeitY”) issued a fresh AI advisory reversing key provisions of the March 1st version. It withdrew a controversial requirement for intermediaries to obtain government approval before publicly launching ‘under-tested’ or ‘unreliable’ generative AI models or other AI-based deployments. The unclear scope and applicability of the original advisory, and the control the government was assigning itself, had triggered widespread concerns about its legality and its implications for AI innovation.
Both advisories reflect the government’s concern about the rushed public launch of generative AI solutions. While the new advisory’s shift from approval-seeking towards labelling represents greater balance, the episode holds structural lessons on aligning India’s approach to AI regulation with its ambition to lead the global frontier of AI development.
Considerations for Balanced AI Regulation
Firstly, regulation should avoid one-size-fits-all prescriptions. The language of the advisories does not appropriately differentiate between various use cases and deployments, clubbing together all market participants and actors across the AI value chain. For instance, the advisories (especially the original one) fail to distinguish between software, content-recommendation algorithms, generative AI deployments and larger foundation models. AI’s complexity means that each layer of the value chain poses a different level of risk and consequently requires a different, targeted intervention. Classification is therefore needed to facilitate proportionate, risk-based and fit-for-purpose regulation.
Secondly, regulation should be rooted in AI’s technical realities. For example, the advisories state that AI deployments should not permit any bias or discrimination. While well-intentioned, this is inconsistent with the technical consensus that completely eliminating AI bias is nearly impossible. Such requirements make innovators risk-averse, cause widespread non-compliance, and invite arbitrary enforcement. Bias may be better tackled through standards on platform design and requirements around transparency, testing with diverse user groups, human oversight, and the weightage of training data.
Thirdly, without nuance, government permissions can stifle innovation. While concerns of under-testing and unreliability are valid, approvals prior to product rollout – akin to those in aviation, automobiles and pharmaceuticals – may be incompatible with fast-moving digital markets and unnecessary for most use cases. In sectors where such controls exist, they are usually designed to prevent immediate risks to public health and safety, including injury and death. With digital technologies, however, product safety is often an iterative process in which businesses adapt to live feedback loops from the market. For that reason, approval-based regimes or special regulatory sandbox frameworks should be reserved for the highest-risk cases, with clear demarcations between domains that are low risk and those that pose risks to human life or public safety, e.g. the military, protected systems, critical information infrastructure and biosecurity.
Fourthly, while watermarking and labelling can be advised, we should not over-index on any single technology. The March 15th advisory asks platforms to adopt watermarking technologies along the lines of open protocols developed by initiatives like the C2PA (Coalition for Content Provenance and Authenticity). It also suggests that platforms build capabilities to identify which users or systems make changes to any piece of content. While there is merit in exploring these ideas, watermarking remains an experimental technology that is prone to circumvention.
Way Forward
To ensure the development of India’s AI ecosystem, regulation must strike a balance between erecting appropriate safeguards and preserving market agility. We must urgently commence a comprehensive discussion on AI regulation. Regulating emerging technology through advisories and amendments to India’s IT Rules is unsustainable; these instruments are often untethered from the parent IT Act and not grounded in adequate evidence.
The first pillar of reform should prioritise inclusive regulation through public consultations that marshal the collective intelligence of government, industry and civil society. These would help produce solutions that alleviate the burden of responsibility on the government’s shoulders. Consultations would also help avoid reactive directives like the original advisory, which can unintentionally erode market value.
Reform should also entail suitable investments in setting up an independent regulator, empowered through staffing, resources and tools that facilitate evidence-based regulation. India should minimise the discretionary involvement of the political executive in the next cycle of AI regulation; instead, an independent regulator should promote standardisation, transparency, consumer redressal and public accountability.
Next, interventions on bias, trust and safety must serve local contexts. AI regulation should encourage international businesses to form local partnerships that address localised harms arising from discrimination and exclusion. India’s diversity lends itself to competing narratives; consequently, deep local partnerships are essential to building for its socio-cultural heterogeneity.
Finally, similar to the US Executive Order on AI, India should develop guidelines and benchmarks for AI assurance audits. Other tools worth exploring include AI impact assessments and databases for security-incident and vulnerability reporting.
At the end of the day, AI regulation needs robust future-proofing so that it can swiftly adapt to a rapidly evolving technology landscape. A fragmented approach tied to outdated legislation won’t cut it.
—
Rohit is the Founding Partner and Sidharth is a Manager at The Quantum Hub (TQH) – a public policy firm