The Global AI Race and India’s Regulatory Dilemma
By Rohit Kumar and Sidharth Deb
Artificial intelligence is fast becoming one of the most consequential technologies of our time. Like steam engines, electricity, and the internet before it, AI is reshaping productivity, labour markets, public administration, and state power, with applications spreading rapidly across key sectors.
Alongside this promise, however, lie significant risks, including misinformation, privacy erosion, and labour displacement.
The geopolitical dimension of AI
Against this backdrop of unprecedented opportunity and risk, AI has become central to geopolitical competition. Countries are not merely competing to innovate first, but to shape the rules of the game. Leadership in AI development, infrastructure, and governance now underpins economic power and strategic leverage, and this reality is defining regulatory and policy choices across jurisdictions.
Three issues at the heart of AI advances
At the heart of the global AI race are three hard questions: who controls access to frontier technologies, how states hedge against geopolitical risk, and how rapidly scaling harms are contained. These tensions are no longer abstract. They have already shaped India’s policy choices in 2025, and will define the contours of its AI strategy going forward.
Europe’s emphasis on hard regulation meets implementation challenges
Much like it did with the GDPR, the European Union sought to create a “Brussels effect” through the EU AI Act, passed in 2024. The Act introduced a risk-based framework that banned certain AI uses outright, imposed heightened compliance obligations on “high-risk” systems, and established transparency requirements for general-purpose AI models.
In practice, however, the EU’s rapid push toward comprehensive AI legislation produced unintended consequences. Industry flagged unclear standards and compressed timelines as risks to innovation, alongside growing concerns that Europe could become a market for foreign AI products rather than a hub for building them. These pressures have prompted a recalibration over the past year, reflected in initiatives such as the Digital Omnibus, which softens certain obligations and compliance timelines.
The lesson from Europe is clear: regulatory ambition untethered from industrial competitiveness risks economic marginalisation.
American pivot from safeguards to industrial acceleration
Unlike the EU, the United States has shifted course on AI policy under the Trump administration, moving away from its earlier emphasis on safeguards and oversight. Regulatory efforts around frontier model evaluations, AI safety, civil rights protections, and national standards have been scaled back, most notably through an executive order aimed at pre-empting state-level AI safety laws. These measures signal a clear preference for a light-touch national framework that prioritises innovation over precaution.
In parallel, the US has accelerated large-scale investment in AI compute, energy capacity, data centres, and domestic semiconductor production. The prevailing view in Washington is that heavy regional regulation could weaken the country’s core advantage of hosting the world’s leading AI firms. Rather than relying on domestic regulation, American AI governance is now increasingly being shaped through trade and industrial policy, including expanded export controls that restrict technology access to strategic rivals, particularly China.
Much like earlier regimes governing nuclear and satellite technologies, AI is now being managed through international controls rather than traditional technology regulation.
The dragon lurks in the background
China is navigating global AI shifts through a mix of strategic regulation and targeted public investment. It has streamlined approvals, boosted capital flows into domestic AI firms, and allowed experimentation with large models at scale.
International attention sharpened with the release of DeepSeek’s reasoning model, which demonstrated China’s ability to deliver globally competitive systems at relatively low cost despite US efforts to restrict access to advanced chips. This has further reinforced Beijing’s push to rapidly diffuse AI use cases across the economy and government, underscoring its view of AI as central to national competitiveness.
At the same time, China is advancing AI regulation on an issue-by-issue basis. Regulatory efforts have so far focused on algorithmic recommendation systems, synthetic and generative content, and broader AI service management, with an emphasis on building bureaucratic capacity to govern AI in line with technical realities. The result is an ecosystem designed to develop and scale cost-efficient AI models, at speed.
India’s pragmatic middle path
In this context, India has tried to chart its own path among the competing pressures. It brings important strengths, including large-scale digital public infrastructure, a deep technology talent pool, and market scale that could support AI breakthroughs. At the same time, it lacks mature AI research ecosystems, domestic compute capacity, and semiconductor supply chains.
Recognising these constraints, the government has opted to build industrial capability first rather than pursue a comprehensive AI law. Through the IndiaAI Mission, the focus is on developing domestic compute stacks, datasets, foundation models, and chip manufacturing. This approach also reflects a deliberate effort to strengthen domestic resilience and mitigate vulnerabilities arising from geopolitical uncertainty. Taken together, it signals a clear “innovate first, regulate later” posture.
This sequencing is broadly sensible. Yet it also carries a familiar risk in Indian policymaking: the tendency to defer hard regulatory questions until crises force reactionary responses.
The case for clear direction and early guardrails
AI governance does not require heavy-handed restrictions at this stage. What India needs instead are light, principled guardrails that set expectations early without freezing innovation. The absence of such direction risks creating uncertainty for developers while leaving users exposed to avoidable harms.
Some areas cannot wait. These include red-teaming and safety testing for high-risk use cases, protections for vulnerable users such as children, clarity on human-in-the-loop requirements for consequential decisions, and liability standards for harmful AI outputs. Clearer boundaries are also needed on what is permitted, including anonymisation thresholds for personal data and the use of copyrighted content in model training. While discussions on these issues have begun, they remain far from settled.
Relying solely on legacy instruments such as the IT Act and IT Rules is inadequate for addressing emerging challenges, given their poor fit for the risks posed by generative and agentic AI systems. Without reform, India risks imposing mismatched compliance obligations that undermine both innovation and safety.
While new AI legislation will ultimately be necessary, 2026 could serve as a bridge year to develop standards and guardrails that later anchor proportionate regulation. The AI race is already underway, and its winners will not be determined by who regulates first or least, but by who governs best. For India, the choice is no longer between innovation and regulation, but between deliberate leadership and reactive catch-up.
Rohit Kumar is Founding Partner and Sidharth Deb is Associate Director at The Quantum Hub (TQH), a public policy consulting firm focused on technology governance and regulatory strategy.