India’s AI Safety Institute Should Tap into Parallel International Initiatives

Author: Sidharth Deb
Published: 2nd December 2024, in The Hindu

Last month, India’s IT Ministry convened meetings with industry and experts to discuss setting up an AI Safety Institute under the IndiaAI Mission. Curiously, this came on the heels of PM Modi’s visit to the US, which was punctuated by the Quad Leaders’ Summit and the UN’s Summit of the Future. AI appeared high on the agenda in the run-up to the Summit of the Future, with a high-level UN advisory panel producing a report on Governing AI for Humanity.

Policymakers should build on India’s recent leadership at international fora like the G20 and the GPAI, and position the country as a unifying voice for the global majority in AI governance. As the IT Ministry considers the new Safety Institute, its design should prioritise raising domestic capacity, capitalising on India’s comparative advantages, and plugging into international initiatives.

Notably, the UN’s Summit of the Future yielded the Global Digital Compact, which identifies multistakeholder collaboration, human-centric oversight and the inclusive participation of developing countries as essential pillars of AI safety and governance. As a follow-up, the UN will now commence a Global Dialogue on AI. It would be timely for India to establish an AI Safety Institute that engages with the Bletchley process on AI safety. If executed correctly, India can deepen the global dialogue on AI safety and bring human-centric perspectives to the forefront of discussions.

Decoupling Institutional Capacity from Regulation Making

In designing the institute, India should learn from the concerns levelled against MeitY’s AI advisory of March 2024. The advisory’s proposal requiring government approval prior to the public rollout of experimental AI systems was met with widespread criticism. A fundamental critique was whether India’s government possesses the institutional capability to suitably determine the safety of novel AI deployments. Other provisions in the advisory on bias and discrimination, and its one-size-fits-all treatment of all AI deployments, further indicated that it was not grounded in technical evidence.

Similarly, India should be cautious and avoid the prescriptive regulatory controls proposed in the EU, China and the recently vetoed California bill. The threat of regulatory sanction in a rapidly evolving technological ecosystem quells proactive information sharing between businesses, governments and the wider ecosystem, nudging labs to undertake only the minimum steps towards compliance. Yet each of these jurisdictions demonstrates a recurring recognition of the value of specialised agencies, e.g. China’s algorithms registry, the EU’s AI Office, and California’s scrapped proposal to set up a Frontier Models Board. To maximise the promise of institutional reform, however, India should decouple institution building from regulation making.

The Promise of the Bletchley Process and Shared Expertise

The Bletchley process is anchored by the UK AI Safety Summit of November 2023 and the South Korea AI Safety Summit of May 2024. The next summit is set for France, and the process is already yielding an international network of AI Safety Institutes.

The US and the UK were the first two countries to set up such institutes and have already signed an MoU to exchange knowledge, resources and expertise. Both institutions are also signing MoUs with AI labs, receiving early access to large foundation models and establishing mechanisms to share technical inputs with the labs prior to public rollouts. These Safety Institutes facilitate proactive information sharing without being regulators. They are positioned as technical government institutions that leverage multistakeholder consortiums and partnerships to augment their capabilities for testing and assessing the risks that frontier AI models pose to public safety. However, these institutes largely consider AI safety through the lens of cybersecurity, critical infrastructure security, biosecurity and other national security threats.

These safety institutes aim to improve government capacity and mainstream external third-party testing, risk assessments and mitigations, red-teaming protocols, and standardisation’s role in shaping responsible AI development. They aim to deliver insights that can transform AI governance into an evidence-based discipline, a prerequisite for proportionate, fit-for-purpose regulation. The Bletchley process thus presents India with an opportunity to collaborate with governments and stakeholders from across the world. Shared expertise will be essential to keep pace with AI’s rapid innovation trajectories.

Charting India’s Approach

India should establish an AI Safety Institute that integrates into the Bletchley network of safety institutes. For now, the Institute should remain independent of rulemaking and enforcement authorities, operating exclusively as a technical research, testing and standardisation agency. This would allow India’s domestic institutions to tap into the expertise of other governments, local multistakeholder communities and international businesses. While upscaling its AI oversight capabilities, India can also use the Bletchley network to advance the global majority’s concerns about AI’s individual-centric risks.

The Institute could champion perspectives on risks relating to bias, discrimination, social exclusion, gendered harms, labour markets, data collection and individual privacy. In doing so, it could deepen the global dialogue around harm identification, big-picture AI risks, mitigations and standards. If done right, India could become a global steward for forward-thinking AI governance that embraces multistakeholderism and collaboration between governments. Moreover, the AI Safety Institute would demonstrate India’s scientific temper and its willingness to implement globally compatible, evidence-based and proportionate policy solutions.

Sidharth is Associate Director, Public Policy at The Quantum Hub (TQH), a leading public policy firm based in Delhi.