Lead the Way in AI Governance

Author: Sidharth Deb
Published: 12th December 2023 in the Economic Times

India’s Minister of State for the Ministry of Electronics and Information Technology (MeitY), Rajeev Chandrasekhar, made two notable observations at the UK’s international AI Safety Summit last month. First, he argued that governments must collaborate on AI governance. Second, he contended that authorities must learn from earlier experiences with social media, where regulation struggled to keep pace with the ecosystem’s evolution. Mr Chandrasekhar concluded his speech by inviting participants to the Global Partnership on Artificial Intelligence (GPAI) summit, which India will host in December. India was also one of 29 signatories to the Summit’s Bletchley Declaration, which largely addresses strategies to mitigate existential risks emanating from ‘frontier’ AI models.

GPAI: An Opportunity for Enduring Policy Leadership

As GPAI’s global chair, India has an opportunity to contribute progressively to the international AI governance discourse. This will require shifting away from traditional notions of command-and-control regulation premised on prescriptive compliance and liability.

When technologies like AI evolve at exponential rates, there is a high risk of widespread non-compliance. Enforcement becomes challenging, and regulations can quickly become redundant. This creates widespread uncertainty and undue liability risks. Ultimately, prescriptive regulation can inhibit competition, since only those market participants with an adequate risk appetite will continue to innovate.

Instead, India should favour partnerships that pursue flexible safeguards, transparency, knowledge sharing, accountability, economic growth, and development. To ensure balance, governments must attempt to dynamically mitigate AI’s multifaceted risks and create a framework for responsible innovation. The framework should constructively engage with substantive issues without getting bogged down in challenges like the feasibility of prescriptive regulation. This can be viewed as phase one in the life cycle of AI governance, in which India lays sound foundations that advance state capacity.

India’s GPAI stewardship could echo contemporary international developments like the US Presidential executive order (EO) on AI safety and security, the G7 Hiroshima AI Process, and voluntary commitments made by major technology companies in earlier government engagements.

Six Ideas for India’s AI Stewardship

First, governments must raise their capacity to engage with AI’s wide applicability across domains like healthcare, climate change, financial services, education, agriculture, housing, and urban development. Such broad applicability requires knowledge exchange. MeitY, under its IndiaAI initiative, should facilitate a whole-of-government approach to AI oversight. Different sectoral authorities should collaborate with stakeholders to develop a publicly accessible repository of AI deployments and use cases. This will equip sectoral authorities with better information to commence dialogues on developing sector-specific codes of practice for responsible AI development.

Second, robust standards development will assist with quality assurance. India should grant appropriate resources to technical institutions like the Bureau of Indian Standards (BIS) and the Standardisation Testing and Quality Certification (STQC) Directorate to pursue such conversations across AI use cases. India should leverage government-to-government channels to facilitate MoUs through which these institutions can collaborate with international counterparts like the US Department of Commerce’s National Institute of Standards and Technology. In due course, MeitY, BIS and STQC could codify standards for AI safety and trustworthiness, which could serve as nutrition-label equivalents for India’s AI ecosystem.

Third, India should commence an international project to explore scientific solutions for navigating the negative impacts of deepfake technologies. India’s current criminal and intermediary legal systems only offer after-the-fact remedies. However, the damage from malicious deployments commences as soon as content is created and distributed. The US EO discusses examining digital watermarking technologies as a possible solution. India should commence dialogue with international initiatives like the Coalition for Content Provenance and Authenticity. Decision makers need to better understand the capabilities and limitations of these technologies, and commence a dialogue to reorient how the public recognises artificial content on the internet.

Fourth, the US and UK have announced national AI Safety Institutes which will work with companies to monitor and ensure the safety of ‘frontier’ AI models. This is intended to manage the unintended consequences of powerful AI models and the risks of malicious actors misusing them to carry out cyber-enabled attacks against critical information infrastructure. India should consider setting up a similar AI safety institute which works closely with cybersecurity institutions like CERT-In and the NCIIPC. Such an institution should also be pushed to interface with the aforementioned international equivalents.

Fifth, governments must proactively address AI’s impact on labour markets. This impact is not uniform across sectors and varies substantially depending on the nature of deployment. Relevant ministries should support studies that quantify AI’s impact on labour markets and estimate job substitution and adaptation. Such studies will inform policymakers on appropriate social security and upskilling interventions.

Finally, AI’s risks are well documented across criminal justice/policing, housing, financial services and healthcare. These risks intersect with issues like accuracy, bias, discrimination, exclusion, and citizen privacy. As governments explore how AI can improve public service delivery and other government functions, public trust will be imperative for long-run sustainability. India should establish legislation which safeguards citizens’ rights against the risks of government AI deployments. Such legislation will bring more certainty to government projects, minimise unforeseen litigation risks, and position India as an international exemplar for government use of AI.


Sidharth Deb is Public Policy Manager at The Quantum Hub (TQH Consulting).