Submissions on the Draft Online Gaming Rules, 2025

Date: 29th Oct, 2025

The Union Cabinet cleared the new online gaming bill in August 2025, following which it passed both Houses of Parliament within the same week. The newly christened “Promotion and Regulation of Online Gaming Act, 2025” provides India’s online gaming sector decisive regulatory clarity. It explicitly promotes e-sports and online social games; and after decades of legislative uncertainty and courtroom battles, it comprehensively bans all kinds of online money games, irrespective of whether they are predominantly shaped by skill or chance.

Under the parent Act, the Ministry of Electronics & IT (MeitY) has now issued “The Promotion and Regulation of Online Gaming Rules, 2025” (hereafter the “Draft Rules”) for public consultation. Here we present TQH’s submission on the Draft Rules. Our goal is to support regulatory clarity and balanced implementation – ensuring that digital enterprises, both gaming and non-gaming, can comply effectively with the law while continuing to innovate and enhance user experiences.

The linked document comprises: (i) our concise submissions to MeitY; and (ii) a detailed clause-by-clause analysis of the impacts of the Draft Rules in the annexure. Specifically, our submission highlights the need for greater clarity and procedural efficiency on specific aspects.

Key recommendations include clearly distinguishing between online game service providers (OGSPs) and online gaming intermediaries under the IT Act, defining the boundary between online games and general gamified experiences, and specifying whether indirect partners such as sponsors fall within the OGSP ambit. To ensure efficiency, we recommend a self-declaration and deemed approval mechanism for non–money games, along with defined timelines for registration and recognition to avoid delays. Further, reporting of “material changes” to the regulator should be limited to monetisation shifts that could convert a game into an online money game. Lastly, MeitY should clarify permissible monetisation models, including whether user participation fees can be used to fund e-sport prize pools without violating wagering prohibitions.

The Draft Rules reflect a noteworthy effort by MeitY to create a structured and accountable framework for India’s fast-growing online gaming ecosystem, but a few definitional and procedural ambiguities risk introducing uncertainty for legitimate businesses. Addressing key concerns can help balance regulatory oversight with industry agility, fostering consumer trust while enabling the Indian gaming and e-sports industries to grow sustainably and competitively within the global digital economy.

Access the submission here

Improving Online Safety for Women and Children in India

Published October 2025
A TQH Study

India’s digital revolution has opened doors to unprecedented opportunities for women and children – enabling learning, expression, entrepreneurship, and civic participation. But it has also brought with it new and complex online risks that mirror and amplify the country’s offline inequalities.

This report examines the evolving landscape of online harms faced by women and children in India, exploring how anonymity, permanence of content, and cross-platform abuse heighten the impact of technology-facilitated gender-based violence and child exploitation. While India now has over 965 million internet users, its mechanisms for online safety remain fragmented and reactive, often mobilising only after high-profile incidents.

Drawing from multi-stakeholder consultations – including civil society organisations working with survivors, listening sessions with young users, and extensive secondary research – the report maps the risks, gaps, and systemic drivers shaping online harm in India. It finds that official data captures only a fraction of the problem: weak taxonomies, poor data disaggregation, and inconsistent reporting frameworks make it difficult for policymakers and enforcement agencies to understand or address the full scope of harm.

The study also identifies key structural issues in India’s legal and institutional frameworks. Current laws fail to clearly distinguish between cyber-enabled and cyber-dependent offences, complicating enforcement. Meanwhile, platforms face significant informal pressures without being sufficiently incentivised to adopt safety-by-design measures, and civil society groups – despite their reach – remain under-supported.

Building on global lessons, the report offers a systemic roadmap for India’s online safety ecosystem. Its recommendations call for:

  • Modernised definitions of cybercrime and “legal but harmful” online risks
  • Robust data collection and risk-assessment requirements
  • Strengthening platform accountability through systemic risk assessment and disclosure requirements
  • Gender-sensitive and child-specific legal reforms
  • Standardised law-enforcement protocols and training
  • Formal collaboration with civil society in prevention and survivor support
  • Industry-led codes of practice on safe and inclusive platform design

Ultimately, the report argues that safeguarding women and children online cannot rest on law enforcement alone. It requires a coordinated, prevention-first approach – combining legal clarity, institutional capacity, platform accountability, and survivor-centred support. By embedding these principles into policy and practice, India can move from reacting to crises toward building a safe, inclusive, and trusted digital environment for all.

Access the report here

The Uneven Odds of India’s Digital Gaming Marketplace

Authors: Deepro Guha & Aarathi Ganesan
Published: August, 2025 in Tech Policy Press

There’s an old saying that goes, ‘you only know what you have, or don’t have, when it gets taken away.’ It’s a universal axiom, one finding relevance in love, health, and of late, in India’s digital economy. For the better part of the last three years, most real money gaming (RMG) companies offering skill-based games—ranging from chess to checkers—have found themselves operating on a deeply uneven playing field. Google, which serves as the oxygen for India’s digital businesses through its ads and app store platforms—offering access to a universe of potential users, and their billions of eyeballs—opened its ecosystem to them selectively. While a handful of RMG categories were granted privileged access, the vast majority remained excluded.

The root cause is an innocuous RMG pilot program launched by Google in India in 2022, mirroring a similar initiative in Mexico the same year. In India, the pilot went on to bring all but two RMG sub-sectors to their knees. The pilot (later extended indefinitely) allowed only daily fantasy sports (DFS) and rummy on Google’s Play Store, with similar changes implemented in its advertising policies later on. Operators offering non-DFS and non-rummy skill games were indefinitely excluded from the world’s most powerful discoverability engine, on inscrutable grounds. User acquisition metrics and profit margins nosedived for many of those excluded, while they catapulted for a chosen few.

It is perhaps no surprise then that the past year bore witness to protracted arguments before India’s antitrust regulator, the Competition Commission of India (CCI), on a singular question: did Google abuse its market dominance to selectively distort India’s online gaming market? In its initial observations on the complaint filed by gaming company WinZO, the CCI answered yes, directing an investigation into the matter. However, before courtrooms could heat up, Google had a change of heart. The tech giant recently informed the CCI that it would amend its app store and advertising policies, so that all RMG formats offering skill games can access Google’s digital ecosystem.

The regulatory landscape for online real money gaming is shifting in India, most notably through the passing of a recent bill banning the sector altogether. Irrespective of that fact, Google’s submission before the CCI marks an important leap forward for India’s digital economy. What emerges from the ongoing case is a clear principle—that irrespective of the technology in question, antitrust regulators are paying closer attention to the anticompetitive actions of digital gatekeepers. The discussion around ex ante digital competition laws in India, which aim to address potential harms proactively rather than retrospectively, is therefore gaining renewed significance.

The perils of the Play Store pilot

Google’s RMG pilot program in India extended well beyond its intended cut-off date of September 2023 (with an indefinite extension post July 2024), even though the simultaneous test drive for online gaming in Mexico ended by 2024. Among other arguments recorded in the CCI order, Google justified the extension by citing delays in the implementation of a regulatory framework for online gaming in India, which left little clarity on which skill games were legal. Although online real money gaming now stands to be banned in India, with companies expected to challenge the law in court and seek a stay, there has so far been no unified federal framework clarifying which skill-based games are legally permissible. In the absence of clear laws, Google followed what India’s courts had to say on the matter, inevitably platforming the distribution of skill games like DFS and rummy, both of which had judicial verdicts confirming them to be skill-based.

However, Google’s approach raises a fundamental question: does a game’s character cease to exist if it has not been ruled upon by a constitutional court? Chess undeniably remains a game of skill, irrespective of whether it is hosted on an RMG platform or not. So do many other new-age RMG formats—many of which have been unable to access Google’s platforms, and compete effectively, simply because they haven’t been taken to court. To that end, court orders, while critical instruments shaping policy, are not the only way in which the ‘skill’ of a game can or should be determined.

Google’s approach, in its commitment application with the CCI, is therefore a welcome recognition of an alternative route to determining whether a game is skill-based. As per its submissions to the CCI, Google will allow all games for which developers provide evidence that a “recognized third party” (like a gaming industry association) has certified the game as skill-based, in line with Google’s specified standards and prevailing law. While this could open up the vast digital ecosystem for India’s game developers, concerns remain around how certifiers will be selected and how fairly the policy will be enforced. If implemented with genuine transparency and consistency, however, it could mark a significant shift in platform access for the sector.

Looking ahead

Google’s remedial efforts are a step in the right direction and will, hopefully, level the competitive field. However, one cannot help but wonder about the scale of what may have been lost. India’s gaming market has visibly concentrated in favor of some formats, a tale best told by cricket jersey sponsors, bus stop ads, and non-stop SMS promotions. The pilot also went on for two years—a long time in the world of business, and certainly enough time for DFS and rummy operators to leap ahead in the highly competitive gaming sector. Those excluded will have to make up for lost time, and whether they will be able to remains uncertain.

This also lays bare the clear discretionary power that gatekeepers hold over a privately-owned digital economy—digital businesses are operating at the mercy of opaque infrastructure and policies. Fixing the imbalance requires re-laying the foundations upon which digital innovation is built. On the infrastructure side, India’s early (although not faultless) attempts at solving these issues include Digital Public Infrastructure (DPI), wherein the Indian state has taken on the responsibility of building core digital infrastructural services embedded in principles of openness, transparency, and interoperability, amongst others. Payments services like the Unified Payments Interface, and open e-commerce platforms like the Open Network for Digital Commerce, provide a fillip to Indian-made distribution channels that every business can equally access and participate in.

However, while good examples of how platforms can be designed to counter private monopolies, DPIs can only do so much—sometimes the law needs to incentivize good behavior too. If anything, the gaming-Google showdown is a case study in why India needs ex ante antitrust approaches for digital sectors, which preemptively ensure that the private builders of the digital economy do not give preferential access to services favored by them. In a digital economy built on private pipes, ex ante competition regulation should act as a valve—ensuring fair, open access to essential platforms from the start.

Deepro Guha is an Associate Director, Public Policy and Aarathi Ganesan is a Manager, Public Policy at TQH

When walking, seeing and hearing are taxed: The case against GST on disability aids

Authors: Nipun Malhotra & Harshita Kumari
Published: August, 2025 in Scroll

For most people, walking, seeing, and hearing are natural, untaxed parts of everyday life. You do not get taxed for turning the page of a book, crossing the street or listening to a conversation.

Yet in India, persons with disabilities are effectively taxed for those very same rights to mobility, communication and independence. The Goods and Services Tax rate of 5% on disability aids and rehabilitation services is not merely an economic burden – it is a structural injustice that has persisted far too long in our tax system.

To put it plainly: this is the equivalent of taxing someone for walking, seeing or hearing. Assistive devices – wheelchairs, crutches, hearing aids, Braille paper, screen readers – are not luxuries. They are fundamental enablers of daily life, education and employment. Taxing them undermines the very idea of equality.

Households with a member with disability already face much higher living costs than others. Global research shows they require at least 17% more income to maintain the same standard of living. When the additional costs of disability are properly accounted for, poverty rates nearly double.

In India, this strain is compounded by the absence of accessible infrastructure and the prohibitive cost of assistive devices and rehabilitation services. For many families, the GST burden is simply indefensible. The GST Council has argued that the concessional 5% rate benefits manufacturers through Input Tax Credit, but this benefit never reaches the end user. With raw materials taxed at 18%, costs are passed on to consumers, leaving people with disabilities to pay more for basic accessibility. The result is stark: those who already face barriers to employment and income are further penalised for seeking independence.

Necessities, not optional goods

This inequity has been flagged before. A recent report by the Parliamentary Standing Committee on the Ministry of Social Justice and Empowerment, chaired by MP Rama Devi, stated unambiguously: disability aids are necessities, not optional goods. They enable independence, literacy, and employment with dignity.

The fiscal case is clear. The size of the assistive devices market is tiny compared to overall GST collections. Exempting these devices from tax will not destabilise revenue, but the social return will be profound: easing household burdens, expanding access to assistive technologies, and affirming that persons with disabilities are not second-class citizens.

The constitutional case is even stronger. By continuing to tax disability aids, the government undermines guarantees under Articles 14, 15, 19, and 21, which enshrine equality, non-discrimination and dignity. The Supreme Court has repeatedly affirmed that taxation cannot infringe fundamental rights. Taxing the very tools of survival and mobility runs afoul of this principle.

There is also a gendered dimension. Disability prevalence is higher among women (19%) than men (12%), and in low- and middle-income countries women comprise nearly three-quarters of persons with disabilities. Women with disabilities are already among the most excluded, facing barriers in education, healthcare, and work. Continuing to tax disability aids pushes them further to the margins, stripping away the chance of a dignified life.

Internationally, the direction of reform is obvious. Countries like Australia and Canada exempt disability aids entirely, recognising that taxing disability is both unjust and inefficient. India should not lag behind. As one of us writes this from lived experience as a wheelchair user, this issue is not abstract. A wheelchair is not an add-on to life, it is life. Paying a tax on it feels like being penalised for existing. For every person with a disability who uses a hearing aid, a prosthetic or a Braille kit, the story is the same: these devices are the bridge to equality, and taxing them only widens the gap.

Zero-rating

What we’re asking for is simple and urgent: disability aids and rehabilitation services must be zero-rated under GST. Zero-rating ensures that the entire value chain goes untaxed. This can be implemented through proper classification of disability aids under separate HSN (Harmonised System of Nomenclature) codes, which categorise goods based on their characteristics and facilitate uniform taxation. The government should act on the Standing Committee’s recommendation and make this change at the next GST Council meeting. This is a fiscally modest reform with outsized social impact.

In the larger conversation about equity, inclusion and growth, tax reform may seem technical. But here, the issue is moral clarity. No society that aspires to dignity for all its citizens can justify a system where people without disabilities move freely, while people with disabilities pay extra to do the same. Exempting disability aids from GST is not charity. It is recognition of rights. It is acknowledgement that independence and mobility are not privileges to be taxed, but freedoms to be enabled.

Nipun is the Founder & CEO, Nipman Foundation and Director, Disability Rights & Inclusion at TQH and Harshita is an Analyst at TQH

From Risk to Resilience: Rights-based DPI Readiness in the Global Majority

Published July 2025
As Digital Public Infrastructure (DPI) gains global momentum as a tool for inclusive digital transformation, public conversations can help ensure that it is deployed in ways that safeguard human rights and keep citizen interests at the center. In this context, a closed-door workshop was convened at RightsCon 2025 (Taipei, Taiwan), bringing together a diverse group of stakeholders from civil society, academia, and the private sector across over 10 countries. The workshop was co-organized by The Quantum Hub (TQH), the National Institute of Strategic Resilience (NISR), and Access Now.

The objective of the workshop was to deepen international discourse around trust and DPIs through candid reflection, dialogue, and shared learning across countries. Through a critical approach, the workshop aimed not to dismiss the potential of DPIs, but to strengthen it. By interrogating gaps in existing systems, participants hoped to advance a more resilient and inclusive DPI ecosystem that truly serves public interest.

The workshop explored four key themes:

  • User Choice, Grievance Redressal, Accountability, and Safety: DPI systems often lack robust accountability mechanisms, particularly in areas with low digital literacy and fragmented regulatory systems. The mandatory use of technologies in welfare delivery can therefore result in exclusion and data misuse. Participants emphasized the need for effective grievance redressal systems, clear institutional structures, and user-centric design.

  • Data Privacy Concerns and Responsible Data Sharing: Current consent-based privacy models place a disproportionate burden on individuals, limiting their effectiveness. Participants recommended a shift towards use-based governance and proportional data sharing, along with systemic safeguards and greater public accountability, especially where intermediaries manage consent. They emphasized that privacy and security by design must become default standards in DPI systems, requiring solution providers to embed protections from the outset.

  • Fair Private Sector Participation: Private innovation is important, but unregulated market entry can lead to monopolistic control, market concentration, and conflicts of interest. Participants raised concerns about “open-washing”, where claims of openness are not backed by transparent or interoperable systems. The discussions called for clear governance standards, sustainable open-source models, and capacity building to balance market participation with public interest.

  • Responsible Rollout and Equitable DPI Design: Trust in DPI systems can be strengthened through community engagement, transparency, and inclusive governance. Participants explored how ownership models of DPIs – state-led, public-private partnerships, or nonprofit – impact citizen trust. They advocated for citizen-led consultations and proactive accountability mechanisms.

Overall, the workshop highlighted the need to move beyond techno-solutionism and to center local realities, especially those of marginalized groups, in decision-making on DPIs. It called for public debate on key questions: How should DPI grievances be addressed? Who safeguards data privacy in systems of mandatory participation while mitigating the risks of exclusion? What ownership and governance models uphold citizen rights? When is DPI truly the right solution?

These questions offer a roadmap for public debate and policymaking that can help ensure that DPI systems are not only scalable but also measurable, accountable, and firmly rooted in people-centric values.

Read the full discussion paper here

Transforming India’s Inter-City Bus Transport System

Published April 2025
A TQH and FlixBus Study on the All India Tourist Permit System

India is a country on the move, and bus services are a key cog in the transportation machine. In light of this, our report explores the challenges and opportunities for transforming India’s inter-city bus transport system, which plays a critical role in the nation’s economic and social mobility. The focus is primarily on the regulatory, operational, and enforcement issues within the All India Tourist Permit (AITP) framework, which governs inter-state bus transport.

The report identifies the key issues facing bus operators, passengers, and governments in the bus transport sector, and suggests actionable steps for reform. For bus operators, the key issues include a fragmented regulatory framework, multiple layers of taxation, and ambiguity in the rules governing the AITP. States, meanwhile, face revenue losses due to lacunae in these rules. Passengers, on the other hand, face significant inconvenience due to inadequate facilities, such as poor washrooms and limited seating, which pose additional risks for women and persons with disabilities.

Based on extensive study and stakeholder consultations, the report suggests certain changes to the current regulatory framework for inter-city bus transport to assuage key concerns, including:

  • Clarifying unclear provisions in the AITP Rules, such as those on multi-stop operations
  • Renaming the AITP to the All India Passenger Permit (AIPP)
  • Introducing an incentive-based home-state touch requirement for bus operators
  • Providing private bus operators access to state-run terminals
  • Adopting digital enforcement and monitoring to reduce harassment of bus operators
  • Introducing an India Bus Operations Index (IBOI)

Access the report here

Revisit digital search powers under the I-T Bill 2025

Author: Mahwash Fatima
Published: June, 2025 in The Hindu

The proposal to access an individual’s ‘virtual digital space’ raises significant concerns about privacy, overreach, and surveillance.

The Finance Minister recently introduced a proposal in Parliament to allow tax authorities to access, under the Income-Tax Bill, 2025, an individual’s “virtual digital space” during search and seizure operations. The justification is straightforward: as financial activity moves online, so must enforcement. However, this glosses over the far-reaching implications of such a shift, which raises significant concerns about privacy, overreach, and surveillance.

A blurring, open-ended provision

Currently, India’s tax law already provides for search and seizure under Section 132 of the Income-Tax Act, 1961. But those powers are limited to physical spaces such as a house, office, or locker. Since such operations are based on suspicion of undisclosed income or assets, there is a clear connection between the objective, namely finding undisclosed income, and the means, namely access to physical assets.

The new Bill, however, blurs this link by bringing in an individual’s digital presence, which is not only vast but often contains much more than what is relevant to a tax investigation. Without clear limits, such access can lead to disproportionate intrusion. For example, under the existing regime, what could be searched concerned only the individual under investigation. In contrast, digital spaces involve multiple stakeholders: accessing a social media account also exposes friends, family, and professional contacts, through photographs and posts.

The proposed definition of ‘virtual digital space’ covers emails, personal cloud drives, social media accounts, digital application platforms, and more. Crucially, the phrase “any other semi-permanent nature” makes the list open-ended, potentially covering a wide range of digital platforms. Additionally, the proposed provision empowers tax authorities to override passwords and encryption to unlock devices or virtual digital spaces. It remains unclear, though, how this power will be operationalised in practice, particularly in cases involving encrypted messaging apps such as WhatsApp, which the Finance Minister explicitly cited in Parliament.

The problem is even more concerning when the individual involved is a professional whose work requires confidentiality: journalists, for instance, whose devices and emails hold sensitive information, including confidential sources, unpublished material, and protected communications. If a search is conducted on flimsy or overly broad grounds, it not only violates their privacy but also endangers their ability to report. Recognising these risks, the Supreme Court of India in 2023 circulated interim guidelines on the seizure of digital devices and directed the Union Government to consider formulating the necessary protocols. Moreover, judicial interpretation of “reason to believe” emphasises the need for tangible material beyond mere suspicion. Even under existing law, courts have held that the power ought to be exercised strictly, acknowledging that search and seizure is a serious invasion of privacy.

A violation of transparency, accountability

Yet the proposed provision goes against these principles: it is devoid of guardrails and judicial oversight, and suggests a lack of understanding of the stakes. It fails to acknowledge, let alone address, the sheer breadth and layered sensitivity of information stored on electronic devices. In line with the current law, the proposed provision also prohibits disclosure of the “reason to believe”, clearly violating principles of transparency and accountability.

Globally, privacy and transparency standards in search and seizure, especially where digital devices are involved, are grounded in statutory protections and procedural safeguards. In Canada, section 8 of the Charter of Rights and Freedoms guarantees the right to be secure “against unreasonable search or seizure”. It is designed to prevent unjustified searches and sets a three-part default standard: prior authorisation; approval by a neutral and impartial judicial authority; and reasonable and probable grounds. In the United States, the Taxpayer Bill of Rights, adopted by the Internal Revenue Service, affirms that taxpayers have the right to expect that any inquiry or enforcement action will be legally compliant and will not be more intrusive than necessary following due process rights, including search and seizure protections. The U.S. Supreme Court’s decision in Riley vs California also necessitated a warrant before accessing digital data, given the deeply personal nature of information stored on phones and devices.

Contradiction of the proportionality test

In contrast, India’s proposed Income Tax provision grants sweeping access to digital personal data without warrants, relevance thresholds, or any distinction between financial and non-financial information. This directly contradicts the proportionality test upheld by the Supreme Court in Justice K.S. Puttaswamy (Retd.) vs Union of India. The Court has held that any restriction to an individual’s privacy must meet a four-fold test, of which proportionality was key, requiring state action to pursue a legitimate aim, satisfy necessity and adopt the least intrusive means available. Allowing unfettered access to personal digital data, in the absence of judicial oversight or guardrails, fails this standard.

The way forward is not to abandon digital search and seizure, but to root it firmly in principles of proportionality, legality, and transparency. The right to be free from surveillance must not be eroded under the garb of tax compliance; unchecked surveillance in the name of compliance is not oversight but overreach. There is hope that the Select Committee of Parliament currently reviewing the Bill will re-examine the notion of ‘virtual digital space’, require disclosure of the grounds for such searches, and recognise the risks to digital rights, and course correct by establishing mechanisms of redress for aggrieved individuals.

Mahwash Fatima is a Manager, Public Policy at TQH Consulting’s Policy Tech practice in Delhi.

Disinformation in the digital age cannot be fought by taking down content

Authors: Rohit Kumar & Paavi Kulshreshth
Published: 20th May, 2025 in The Indian Express

India’s military strength was on display in the recent conflict with Pakistan, where the Air Force responded with precision and resolve. But even as our forces have returned to base, a parallel battle has continued to rage online: one of narratives, falsehoods, and influence. This front – digital and unrelenting – requires not just speed, but strategy since conventional tools of control offer little defence against the evolving nature of information warfare.

Amid the conflict, reports flagged a surge in disinformation from pro-Pakistan social media handles, including absurd claims such as India attacking its own city of Amritsar. This pointed to deliberate, coordinated efforts to systematically weaponise disinformation on digital platforms. In response, India was quick to hold press conferences, present visual evidence, and have the PIB fact-checking unit debunk false claims, while also issuing an unprecedented number of account blocking orders. All of this put together, however, was not enough to prevent falsehoods from gaining traction.

Disinformation is not a new phenomenon – it has long been used as a tool in warfare and diplomacy. What’s changed is the scale, speed, and precision with which it now spreads through digital platforms, transforming old tactics into persistent and formidable threats. Around the world, policymakers have struggled to keep pace. In India, one of the recurring proposals has been to weaken safe harbour protections for online platforms. But this is a misdiagnosis of the problem – and a potentially counterproductive one.

Why Safe Harbour Isn’t the Problem

Today’s disinformation is not just about individual false posts; it is about coordinated influence operations that weaponise platform features to shape public perception at scale. Blocking a few posts or suspending some accounts is unlikely to stop narratives from being replicated and recirculated across the digital ecosystem. Nor does it disrupt the underlying dynamics – like trending algorithms or recommendation engines – that give such content disproportionate visibility.

In this context, calls to dilute safe harbour reflect a fundamental misunderstanding. Safe harbour, as it currently operates, holds platforms liable only if they have actual knowledge of illegal material and choose to keep it up. This framework exists because requiring platforms to pre-screen every post is not just technically infeasible given the sheer volume, but would also lead to over-censorship and weakening of the digital public sphere.

Crucially, much of the disinformation we see during geopolitical conflicts is not technically illegal. For instance, when a Chinese daily reportedly shared false information on X amid the India-Pakistan conflict, X was under no clear legal obligation to act, since the content did not violate the law. This would remain unchanged even if safe harbour were weakened.

Blunt instruments like safe harbour dilution are therefore unlikely to be effective against systemic challenges such as disinformation.

Shift from reactive content moderation to systemic resilience

To effectively counter disinformation, we must shift from reactive content moderation to a systems-level approach rooted in platform accountability and design resilience. This means recognising that disinformation thrives not only because of bad actors, but because of how platforms are built. Regulatory and platform responses must therefore focus on preventing exploitation of platform features, rather than merely responding to viral falsehoods.

A key step toward prevention is mandating periodic risk assessments for platforms that host user-generated content and interactions. These assessments should identify which design features – such as algorithmic amplification or low-friction, high-reach sharing – contribute to the spread of disinformation. Platforms should then be required to develop mitigations and strengthen internal systems to slow the speed and breadth of the spread of disinformation.

This approach matters because platform architecture directly influences how disinformation spreads. Bad actors exploit different services in different ways – gaming open feed algorithms to promote manipulative content on one platform, while leveraging mass forwards and group messaging on another. Risk assessments must capture these distinctions to inform tailored, service-specific mitigation strategies.

On public platforms, safety-by-design measures can include fact-checking nudges, community notes, and content labelling (especially for AI-generated content). In encrypted messaging environments, where direct moderation is not possible, design interventions such as limiting group sizes, restricting one-click forwards, or introducing forwarding delays can reduce virality without compromising user privacy.

Equally important is the ability to detect and attribute coordinated disinformation activity – campaigns orchestrated by networks of actors often disguised as ordinary users. Addressing this requires both platforms and regulators to invest in tools and intelligence capabilities that go beyond flagging individual posts. Network analysis and behaviour-based detection systems can help identify the source and structure of such campaigns, rather than focusing only on visible front actors.

When platforms fail to act despite foreseeable risks, remedies should take the form of specific penalties calibrated to the severity and impact of the violation. This approach holds platforms responsible for system design and risk management – not for individual pieces of user content – and thus remains separate from content-level liability under safe harbour.

A Future-Ready Approach

While disinformation is especially dangerous during sensitive geopolitical moments, it festers even in peacetime, distorting everything from health to gender politics. The rapid evolution of technology, especially the rise of AI-generated content, is further blurring the line between fact and fiction. Regulation must start with a clear-eyed understanding of these dynamics – because if we misdiagnose the problem, we’ll keep fighting the wrong battle.

Rohit Kumar is the founding partner, and Paavi Kulshreshth a senior analyst at the public policy firm The Quantum Hub (TQH)

DPDP Act leaves Persons with Disabilities Vulnerable

Authors: Nipun Malhotra & Senu Nizar

Published: 6th January, 2025 in The Print

Around half a decade ago, one of the co-authors of this article was invited by the then secretary, Department of Disability Affairs, for a vision-setting exercise for the department for the next 25 years. The co-author made a strong argument that while the Department continues to function as a nodal agency, disability cannot be limited to just one department. Disability is an intersectional issue, and we need disability experts in each ministry.

Data Protection Laws Reinforcing Stereotypes and Limiting Autonomy

The need for this was reinforced when the Digital Personal Data Protection Act 2023 was notified, which clubbed children and Persons with Disabilities under the same provision titled “Processing of personal data of children”. This effectively infantilised them by taking away their right to provide consent for the processing of their data. Those drafting this Act clearly overlooked the fact that not all PwDs have guardians and assumed that PwDs are not capable of making their own decisions, violating their autonomy.

The recently released draft Digital Personal Data Protection Rules 2025 (Draft Rules) reflect an attempt by the Ministry of Electronics and Information Technology to address some of the previously raised concerns. However, there is still much to be desired in adequately meeting the needs of PwDs.

The Rules limit the requirement of a lawful guardian’s consent to only two sets of PwDs. The first group includes those who have “long term physical, mental, intellectual or sensory impairment… and who, despite being provided adequate and appropriate support, is unable to take legally binding decisions”. However, the inclusion of ‘physical impairment’ in this category appears flawed, as physical disability does not automatically imply a lack of mental capacity to make decisions.

Conflict with Rights-Based Legal Frameworks and Risks of Regression

Moreover, the Rules also conflict with the Rights of Persons with Disabilities Act 2016 (RPwD Act) which provides a limited guardianship model. Under this model, a court-appointed guardian makes decisions—limited to a “specific period”, “specific decision”, and “situation”—in consultation with PwDs. It is therefore unclear how a limited guardian can make decisions concerning PwDs’ data continuously over an indefinite period. Or would it be the case that lawful guardians must repeatedly approach the court to obtain guardianship arrangements tailored specifically for data processing? In that case, would it not render limited guardianship into a lifelong one, as PwDs inevitably need access to the digital space throughout their lives?

The second group comprises individuals “suffering from any of the conditions relating to autism, cerebral palsy, mental retardation… and includes an individual suffering from severe multiple disabilities”. This mirrors the definition of PwDs under the National Trust Act 1999 (NT Act)—which predates the UN Convention on the Rights of Persons with Disabilities (UNCRPD) 2006 and is rooted in a medical model of disability (using terms like ‘suffering’). However, not all persons on the autism spectrum necessarily require a guardian to make decisions. Similarly, having one or more disabilities, even if severe (benchmarked at 80 per cent or more), does not automatically indicate that the person is unable to make decisions or needs a guardian. While some might need a guardian for financial planning, surely most can and should be able to choose which burger to order from which restaurant app.

Besides, with the implementation of the RPwD Act, all guardianships for PwDs shall be deemed to be limited guardianship. Therefore, even guardians appointed under the NT Act cannot exceed their specified mandates.

Erroneous assumptions

The DPDP Act has done away with the distinction between personal data and sensitive personal data and consequently offers no special protection for disability data. This is unlike other jurisdictions such as Australia, where disability data is classified as health data and afforded the higher level of protection usually granted to sensitive personal data.

The absence of safeguards for sensitive personal data raises concerns about potential misuse as PwDs are more vulnerable to prejudicial treatment, with instances of disability disclosures leading to discrimination. For instance, declaring a disability to seek exam accommodations could result in an insurance company raising the premium amount or cab aggregators hiking ride charges. Further, these Rules lack clarity on technical measures and means to verify PwD status or court-appointed guardianship, adding more user friction and exacerbating existing barriers to digital accessibility for PwDs.

The crux of the problem lies in equating disability with the inability to consent. This is an erroneous assumption. Even the Contract Act 1872 does not use disability as the standard. Instead, it requires that a person be of sound mind—i.e. that the person is capable of understanding the contract and its effects. If at all special protection for PwDs is desirable, then Australia provides a useful framework. In Australia, data fiduciaries are required to provide assistive resources—like interpreters and translators—to enable consent from PwDs. Only if consent is still not possible can the right be assigned to a nominated guardian, while involving the PwD in the decision-making process.

The Draft Rules, if implemented in their current form, will turn the clock back for PwDs by denying them the most basic right to make decisions. Ironically, while this law has been created to protect data privacy, it does exactly the opposite for PwDs who wish to keep their disability data private. It is unfortunate that in many ways it has created further confusion when it comes to guardianship provisions in India. It also clearly goes against the spirit of the UNCRPD. With the draft rules now open to public consultation, we hope these provisions are revisited and our fears allayed.

Nipun Malhotra is the Founder & CEO, Nipman Foundation and Director, Disability Rights & Inclusion at The Quantum Hub. Senu Nizar is a lawyer and Senior Analyst, Public Policy at The Quantum Hub.

Centering Care in India’s Economic Policy

Authors: Sreerupa & Harshita Kumari
Published: 4th March, 2025 in The Hindu

Budget Allocation and Gaps in Care Infrastructure

The Union Budget for 2025 allocated a record ₹4,49,028.68 crore to the Gender Budget (GB), marking a 37.3% increase from FY24 and accounting for 8.86% of the total Budget. This rise is primarily due to the inclusion of the PM Garib Kalyan Anna Yojana, which accounts for 24% of the GB, rather than being driven by substantial investments in care infrastructure or new gender-responsive schemes. Despite this increase, critical investments in care infrastructure remain absent, reinforcing the persistent invisibilisation of care work in India’s economic planning. While the Economic Surveys of 2023-24 and 2024-25 highlight care infrastructure as central to women’s empowerment, the current Budget misses the opportunity to make tangible investments to strengthen India’s care economy in line with its socio-economic realities.

The Burden of Unpaid Care Work on Women

Globally, women spend an average of 17.8% of their time on unpaid care and domestic work (UCDW), with women in the Global South bearing higher burdens. The situation in India is especially concerning, as Indian women shoulder 40% more of this burden than their counterparts in South Africa and China. The International Labour Organization reports that 53% of Indian women remain outside the labour force due to care responsibilities, compared to just 1.1% of men, underscoring entrenched inequities. For poor and marginalised women, this burden is severe: women in low-income families often juggle 17–19 hours of daily tasks, balancing paid work with domestic duties, intensifying ‘time poverty’ and eroding their well-being. Feminist economists from the Global South emphasise that unpaid work in these regions encompasses a broader range of tasks than in the Global North, extending beyond household caregiving to include work on family farms, water and fuel collection, cleaning, and cooking. Limited access to essential infrastructure — such as water, clean energy, and sanitation — means women spend up to 73% of their time on these unpaid activities. For example, women spend nearly five hours daily collecting water, compared to 1.5 hours for men. Climate change exacerbates this burden, with the value of water-related unpaid labour in India projected to reach $1.4 billion by 2050 under a high-emissions scenario. This stems from low public investment in care infrastructure and entrenched social norms that assign care work to women.

National-Level Solutions: Applying the ‘Three R + 1’ Approach

The Economic Survey 2023-24 highlights that direct public investment equivalent to 2% of GDP could generate 11 million jobs while easing the care burden. Applying the expanded ‘Three R framework’—Recognise, Reduce, Redistribute, and Represent — can ensure policies are both contextually relevant and transformative.

The first step is recognising the full spectrum of UCDW women perform. India’s 2019 Time Use Survey marked a milestone in acknowledging this issue, revealing that women spend an average of seven hours daily on UCDW. Despite the policy benefits that these surveys carry, their costs can make implementation challenging. One solution is to integrate time-use modules into existing household surveys.

The second step is reducing the UCDW burden through time-saving technologies and expanded access to affordable care infrastructure. The Centre has acknowledged gaps in access to essential services by extending the Jal Jeevan Mission (JJM) until 2028, aiming for 100% potable water coverage. However, funding delays and underutilisation hinder implementation. While the scheme’s Budget declined by 4.51% from last year’s Budget Estimates (BE), it saw a 195% jump over Revised Estimates (RE), highlighting allocation-spending gaps. With less than half of villages having functional household tap connections, JJM requires stronger implementation and water sustainability measures. Expanding childcare centres, eldercare support, and assistive technologies would ease women’s care burden, and boost their workforce participation.

The third key step is redistributing care work — from the home to the State and within households. The newly announced ₹1 lakh crore Urban Challenge Fund, with ₹10,000 crore allocated for FY 2025-26, can be transformative. It will finance up to 25% of bankable projects, encouraging private and public sector participation in urban redevelopment, water, and sanitation initiatives. By leveraging this fund, India can scale up the pilot care infrastructure models already under way through the Smart Cities Mission. Taking inspiration from Bogotá’s Care Blocks, which centralise caregiving services to reduce women’s unpaid work, this approach aligns with the fund’s broader goal of sustainable urban development.

Women’s Representation

Finally, women’s representation in decision-making and implementation is crucial for creating gender-transformative policies. Excluding women from this leaves them vulnerable to policies that fail to address their lived realities. In fact, involving women in decision-making processes enhances their effectiveness significantly, sometimes by six to seven times.

With the Centre’s emphasis on Nari Shakti as a driver of economic growth, India has the opportunity to set a global example for a gender and care-sensitive economy. However, the current Budget falls short by not prioritising care as a central pillar. A more deliberate, well-funded strategy is necessary to ensure that care work is not treated as an afterthought but as a core component of inclusive growth.

Sreerupa is a Research Fellow and Program Lead at Institute of Social Studies Trust, and Harshita is an Associate at The Quantum Hub (TQH)


A Seat at the Digital Table: Centering Disability in Digital Public Infrastructure

Published: May 2025

This paper examines the intersection of disability and digital public infrastructure (DPI). Why disability? Persons with disabilities stand to benefit the most from the inclusive potential of DPI technologies. They stand to suffer the most when these technologies are designed without taking their needs into account. They stand to offer the most to economies and societies when new technologies enable their full participation. Nevertheless, to date, disability has largely remained at the periphery of the DPI conversation.

Three case studies from India—Aadhaar (identity), UPI (payments), and ONDC (e-commerce)—shed light on the reality of DPI and disability, as well as the possibility of building a more fully inclusive “Purple Stack.” Each of these case studies highlights different aspects of disability inclusion, reflected through different roles of government, civil society, and the private sector. Lessons include:

  • Speed and scale alone do not guarantee inclusion—accessibility must be an intentional design choice from the outset.
  • Processes are as important as products—user journeys, not just discrete technologies, determine real-world accessibility.
  • Governance has a critical role to play—just as security and privacy are embedded into DPI governance, accessibility must be codified through policies and standards.
  • Accessibility must exist at every level of the DPI stack—from frontend applications to backend protocols.

To translate these lessons into action, the community of DPI architects and advocates should take steps to build an open-source repository of DPI accessibility solutions. An additional recommendation is to develop a structured research agenda to assess the impact of DPI on persons with disabilities—including by filling in data gaps and mapping user journeys.

Disability is a complex and evolving concept. After defining key terms such as “accessibility” and “universal design,” this paper puts forward a working definition of a Purple Stack: a suite of digital public technologies that (a) embody the philosophy of universal design such that (b) the technologies themselves are accessible in ways that lead to (c) inclusive outcomes for persons with disabilities in key social, economic, and political domains.

Though a Purple Stack benefits persons with disabilities, disability inclusion is not the only reason to build one. Disability-inclusive DPI technologies are good for growth and will benefit everyone, eventually. Moreover, a Purple Stack is a powerful argument in favor of the DPI approach to decentralization and modularity.

Access the paper here

Navigating the Future of Work

Published: May 2025
Authors: Swathi Rao, Shubham Mudgil & Kaushik Thanugonda under the guidance of Deepro Guha

Rapid technological advancements in recent decades have significantly altered how and where we work. These changes are transforming humanity’s relationship with labour, marking a fresh phase in human progress. Advancements like artificial intelligence (AI) and machine learning (ML) are blurring the lines between physical and digital domains, presenting a new landscape of both immense opportunity and potential risk.

India is at the forefront of this revolution. The economy-wide adoption of advanced technologies like AI, together with expedited digitisation driven by initiatives like the Digital India Mission, is transforming industries. With the rapid uptake of frontier technologies, human-machine interaction is likely to dictate the skills and competencies employees require to remain competitive and employable. As automation increasingly augments and, in some cases, substitutes human labour, the workforce must adapt by acquiring new proficiencies and embracing interdisciplinary collaboration. An example is the growth of the gig economy. Enabled by technology and characterised by task-based work, short-term contracts, and freelance work, the gig economy offers both opportunities and challenges for workers and employers alike.

While the adoption of technology is changing the nature of work, other “non-tech” factors like climate change are also redefining how we work. There is growing emphasis on sustainability and on reducing the carbon footprint of human activity. This has led to the emergence of “green jobs” that require specialised “green skills”. This shift towards sustainability aligns with global efforts to combat climate change and promote environmental conservation.

Additionally, societal shifts have brought forth new priorities in the workplace. The COVID-19 pandemic accelerated the adoption of flexible work arrangements, prompting companies to prioritise employee-centric approaches that enhance work-life balance and productivity. Flexible work models empower employees to dictate their work schedules, enable organisations’ access to diverse talent pools, and also aid in minimising environmental impact. This evolving nature of work is also altering established concepts of the workplace – redefining it beyond physical boundaries to encompass private spaces like homes and even virtual spaces that are being enabled via the use of technologies like augmented and virtual reality.

This transformed landscape necessitates a re-evaluation of our definition of work and the regulations governing it. Definitions and regulations governing workplaces and workers have hitherto been driven by infrastructure-heavy businesses and classical manufacturing or IT/ITeS models, and overlook the agility demanded by today’s work ecosystem.

This report sheds light on the trajectory of these developments, and offers policy recommendations to navigate the complexities of the “Future of Work” in India.

Access the research here