Revisit digital search powers under the I-T Bill 2025

Authors: Mahwash Fatima
Published: 6th January, 2025 in The Hindu

The proposal to access an individual’s ‘virtual digital space’ raises significant concerns about privacy, overreach, and surveillance.

The Income-Tax Bill, 2025, recently introduced in Parliament by the Finance Minister, proposes to allow tax authorities to access an individual’s “virtual digital space” during search and seizure operations. The justification is straightforward: as financial activity moves online, so must enforcement. However, this glosses over the far-reaching implications of such a shift, which raises significant concerns about privacy, overreach, and surveillance.

A blurring, open-ended provision

India’s tax law already provides for search and seizure under Section 132 of the Income-Tax Act, 1961, but those powers are limited to physical spaces such as a house, office, or locker. Since such operations are premised on suspicion of undisclosed income or assets, there is a clear connection between the objective (finding undisclosed income) and the means (access to physical assets).

The new Bill, however, blurs this link by bringing in an individual’s digital presence, which is not only vast but often contains far more than is relevant to a tax investigation. Without clear limits, such access can lead to disproportionate intrusion. Under the existing regime, for example, a search touched only material concerning the individual under investigation. Digital spaces, in contrast, involve multiple stakeholders: accessing a social media account also exposes friends, family, and professional contacts through photographs and posts.

The proposed definition of ‘virtual digital space’ covers emails, personal cloud drives, social media accounts, digital application platforms, and more. Crucially, the phrase “any other semi-permanent nature” makes the list open-ended, potentially covering a wide range of digital platforms. The proposed provision also empowers tax authorities to override passwords and encryption to unlock devices or virtual digital spaces. It remains unclear, though, how this power will be operationalised in practice, particularly for encrypted messaging apps such as WhatsApp, which the Finance Minister explicitly cited in Parliament.

The problem becomes even more acute when the individual involved is a professional whose work requires confidentiality. Consider journalists, whose devices and emails hold sensitive information, including confidential sources, unpublished material, and protected communications. A search conducted on flimsy or overly broad grounds not only violates their privacy but also endangers their ability to report. Recognising these risks, the Supreme Court of India, in 2023, circulated interim guidelines on the seizure of digital devices and directed the Union Government to consider formulating the necessary protocols. Moreover, judicial interpretation of “reason to believe” emphasises the need for tangible material beyond mere suspicion. Even under existing law, courts have held that the provision must be exercised strictly, acknowledging that search and seizure is a serious invasion of privacy.

A violation of transparency, accountability

Yet the proposed provision goes against these principles: it is devoid of guardrails and judicial oversight, and suggests a lack of understanding of the stakes. It fails to acknowledge, let alone address, the sheer breadth and layered sensitivity of information stored on electronic devices. In line with the current law, the proposed provision prohibits disclosure of the “reason to believe”, clearly violating principles of transparency and accountability.

Globally, privacy and transparency standards in search and seizure, especially where digital devices are involved, are grounded in statutory protections and procedural safeguards. In Canada, section 8 of the Charter of Rights and Freedoms guarantees the right to be secure “against unreasonable search or seizure”. It is designed to prevent unjustified searches and sets a three-part default standard: prior authorisation; approval by a neutral and impartial judicial authority; and reasonable and probable grounds. In the United States, the Taxpayer Bill of Rights, adopted by the Internal Revenue Service, affirms that taxpayers have the right to expect that any inquiry or enforcement action will comply with the law and be no more intrusive than necessary, respecting due process rights, including search and seizure protections. The U.S. Supreme Court’s decision in Riley vs California likewise required a warrant before accessing digital data, given the deeply personal nature of information stored on phones and devices.

Contradiction of the proportionality test

In contrast, India’s proposed income-tax provision grants sweeping access to digital personal data without warrants, relevance thresholds, or any distinction between financial and non-financial information. This directly contradicts the proportionality test upheld by the Supreme Court in Justice K.S. Puttaswamy (Retd.) vs Union of India. The Court held that any restriction of an individual’s privacy must meet a four-fold test, of which proportionality is key, requiring state action to pursue a legitimate aim, satisfy necessity, and adopt the least intrusive means available. Allowing unfettered access to personal digital data, in the absence of judicial oversight or guardrails, fails this standard.

The way forward is not to abandon digital search and seizure, but to root it firmly in principles of proportionality, legality, and transparency. The right to be free from surveillance must not be eroded under the garb of tax compliance; unchecked surveillance in the name of compliance is not oversight but overreach. There is hope that the Select Committee of Parliament currently reviewing the Bill will re-examine the notion of ‘virtual digital space’, require disclosure of the reasons for such searches, and take seriously the risks to digital rights, course correcting by establishing mechanisms of redress for aggrieved individuals.

Mahwash Fatima is a Manager, Public Policy at TQH Consulting’s Policy Tech practice in Delhi.

Disinformation in the digital age cannot be fought by taking down content

Authors: Rohit Kumar & Paavi Kulshreshth
Published: 20th May, 2025 in The Indian Express

India’s military strength was on display in the recent conflict with Pakistan, where the Air Force responded with precision and resolve. But even as our forces have returned to base, a parallel battle has continued to rage online: one of narratives, falsehoods, and influence. This front – digital and unrelenting – requires not just speed but strategy, since conventional tools of control offer little defence against the evolving nature of information warfare.

Amid the conflict, reports flagged a surge in disinformation from pro-Pakistan social media handles, including absurd claims such as India attacking its own city of Amritsar. This pointed to deliberate, coordinated efforts to systematically weaponise disinformation on digital platforms. In response, India was quick to hold press conferences, present visual evidence, and have the PIB fact-checking unit debunk false claims, while also issuing an unprecedented number of account blocking orders. All of this put together, however, was not enough to prevent falsehoods from gaining traction.

Disinformation is not a new phenomenon – it has long been used as a tool in warfare and diplomacy. What’s changed is the scale, speed, and precision with which it now spreads through digital platforms, transforming old tactics into persistent and formidable threats. Around the world, policymakers have struggled to keep pace. In India, one of the recurring proposals has been to weaken safe harbour protections for online platforms. But this is a misdiagnosis of the problem – and a potentially counterproductive one.

Why Safe Harbour Isn’t the Problem

Today’s disinformation is not just about individual false posts; it is about coordinated influence operations that weaponise platform features to shape public perception at scale. Blocking a few posts or suspending some accounts is unlikely to stop narratives from being replicated and recirculated across the digital ecosystem. Nor does it disrupt the underlying dynamics – like trending algorithms or recommendation engines – that give such content disproportionate visibility.

In this context, calls to dilute safe harbour reflect a fundamental misunderstanding. Safe harbour, as it currently operates, holds platforms liable only if they have actual knowledge of illegal material and choose to keep it up. This framework exists because requiring platforms to pre-screen every post is not just technically infeasible given the sheer volume, but would also lead to over-censorship and weakening of the digital public sphere.

Crucially, much of the disinformation we see during geopolitical conflicts is not technically illegal. For instance, when a Chinese daily reportedly shared false information on X amid the India-Pakistan conflict, X was under no clear legal obligation to act, because the content was not illegal. This would remain unchanged even if safe harbour were weakened.

Blunt instruments like safe harbour dilution are therefore unlikely to be effective against systemic challenges such as disinformation.

Shift from reactive content moderation to systemic resilience

To effectively counter disinformation, we must shift from reactive content moderation to a systems-level approach rooted in platform accountability and design resilience. This means recognising that disinformation thrives not only because of bad actors, but because of how platforms are built. Regulatory and platform responses must therefore focus on preventing exploitation of platform features, rather than merely responding to viral falsehoods.

A key step toward prevention is mandating periodic risk assessments for platforms that host user-generated content and interactions. These assessments should identify which design features – such as algorithmic amplification or low-friction-high-reach sharing – contribute to the spread of disinformation. Platforms should then be required to arrive at solutions and strengthen internal systems to slow the speed and breadth of spread of disinformation.

This approach matters because platform architecture directly influences how disinformation spreads. Bad actors exploit different services in different ways – gaming open feed algorithms to promote manipulative content on one platform, while leveraging mass forwards and group messaging on another. Risk assessments must capture these distinctions to inform tailored, service-specific mitigation strategies.

On public platforms, safety-by-design measures can include fact-checking nudges, community notes, and content labelling (especially for AI-generated content). In encrypted messaging environments, where direct moderation is not possible, design interventions such as limiting group sizes, restricting one-click forwards, or introducing forwarding delays can reduce virality without compromising user privacy.
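To make these design levers concrete, here is a minimal sketch (in Python) of how a messaging client might apply metadata-only forwarding friction of the kind described above. The thresholds, field names, and return messages are hypothetical illustrations, not any platform's actual limits:

```python
from dataclasses import dataclass
import time

# Hypothetical thresholds, loosely inspired by limits some platforms have
# publicised; the exact numbers here are purely illustrative.
MAX_FORWARD_RECIPIENTS = 5      # one-click forward cap
FREQUENTLY_FORWARDED_HOPS = 5   # label threshold
FORWARD_DELAY_SECONDS = 60      # cooling-off delay for highly viral content

@dataclass
class Message:
    forward_hops: int       # how many times this message has been forwarded on
    last_forward_ts: float  # when this message was last forwarded

def check_forward(msg: Message, recipient_count: int, now: float) -> str:
    """Metadata-only virality friction: no message content is inspected."""
    if recipient_count > MAX_FORWARD_RECIPIENTS:
        return "blocked: too many recipients in one action"
    if msg.forward_hops >= FREQUENTLY_FORWARDED_HOPS:
        # Highly-forwarded content gets a cooling-off delay and a label.
        if now - msg.last_forward_ts < FORWARD_DELAY_SECONDS:
            return "delayed: frequently-forwarded content, try again shortly"
        return "allowed: deliver with 'forwarded many times' label"
    return "allowed"

if __name__ == "__main__":
    viral = Message(forward_hops=7, last_forward_ts=time.time())
    print(check_forward(viral, recipient_count=3, now=time.time()))
```

The point of the sketch is that all three checks operate on forwarding metadata alone, which is why such interventions can work even where end-to-end encryption rules out content moderation.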

Equally important is the ability to detect and attribute coordinated disinformation activity – campaigns orchestrated by networks of actors often disguised as ordinary users. Addressing this requires both platforms and regulators to invest in tools and intelligence capabilities that go beyond flagging individual posts. Network analysis and behaviour-based detection systems can help identify the source and structure of such campaigns, rather than focusing only on visible front actors.
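As an illustration of the behaviour-based detection described above, the sketch below builds a simple co-sharing graph, linking accounts that post identical content almost simultaneously, and flags tightly knit clusters. The data, time window, and thresholds are invented for illustration and do not describe any platform's actual systems:

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

# posts: (account_id, content_hash, timestamp). In practice these would come
# from platform logs; the values below are illustrative.
posts = [
    ("acct_a", "h1", 100), ("acct_b", "h1", 130), ("acct_c", "h1", 160),
    ("acct_a", "h2", 400), ("acct_b", "h2", 420), ("acct_c", "h2", 430),
    ("acct_d", "h3", 900),
]
WINDOW = 120  # seconds; identical posts this close together link two accounts

# Group posts by content, then weight edges by how often a pair of accounts
# shares the same content within the window.
by_hash = defaultdict(list)
for acct, h, ts in posts:
    by_hash[h].append((acct, ts))

G = nx.Graph()
for h, items in by_hash.items():
    for (a1, t1), (a2, t2) in combinations(items, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW:
            w = G.get_edge_data(a1, a2, {"weight": 0})["weight"]
            G.add_edge(a1, a2, weight=w + 1)

# Repeated coordination across multiple pieces of content is a stronger signal
# than a single coincidence, so prune one-off edges before clustering.
G.remove_edges_from([(u, v) for u, v, d in G.edges(data=True) if d["weight"] < 2])
suspected = [c for c in nx.connected_components(G) if len(c) >= 3]
print(suspected)  # -> [{'acct_a', 'acct_b', 'acct_c'}]
```

The design choice worth noting is that the detection signal is relational (who acts in lockstep with whom), not content-based, which is what lets it surface the network behind a campaign rather than only its visible front accounts.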

When platforms fail to act despite foreseeable risks, remedies should take the form of specific penalties, calibrated to the severity and impact of the violation. This approach targets platform responsibility for system design and risk management, not for individual pieces of user content, and thus remains separate from content-level liability under safe harbour.

A Future-Ready Approach

While disinformation is especially dangerous during sensitive geopolitical moments, it festers even in peacetime, distorting everything from health to gender politics. The rapid evolution of technology, especially the rise of AI-generated content, is further blurring the line between fact and fiction. Regulation must start with a clear-eyed understanding of these dynamics – because if we misdiagnose the problem, we’ll keep fighting the wrong battle.

Rohit Kumar is the founding partner, and Paavi Kulshreshth a senior analyst at the public policy firm The Quantum Hub (TQH)

DPDP Act leaves Persons with Disabilities Vulnerable

Authors: Nipun Malhotra & Senu Nizar
Published: 6th January, 2025 in The Print

Around half a decade ago, one of the co-authors of this article was invited by the then Secretary, Department of Disability Affairs, for a vision-setting exercise charting the department’s course for the next 25 years. The co-author argued strongly that while the Department continues to function as a nodal agency, disability cannot be confined to just one department: disability is an intersectional issue, and we need disability experts in each ministry.

Data Protection Laws Reinforcing Stereotypes and Limiting Autonomy

The need for this was reinforced when the Digital Personal Data Protection Act 2023 was notified, clubbing children and Persons with Disabilities (PwDs) under the same provision, titled “Processing of personal data of children”. This effectively infantilised PwDs by taking away their right to consent to the processing of their data. The Act’s drafters clearly overlooked the fact that not all PwDs have guardians, and assumed that PwDs are incapable of making their own decisions, violating their autonomy.

The recently released draft Digital Personal Data Protection Rules 2025 (Draft Rules) reflect an attempt by the Ministry of Electronics and Information Technology to address some of the previously raised concerns. However, there is still much to be desired in adequately meeting the needs of PwDs.

The Rules limit the requirement of a lawful guardian’s consent to two sets of PwDs. The first group includes those who have “long term physical, mental, intellectual or sensory impairment… and who, despite being provided adequate and appropriate support, is unable to take legally binding decisions”. However, the inclusion of ‘physical impairment’ in this category appears poorly thought out, as physical disability does not automatically imply a lack of mental capacity to make decisions.

Conflict with Rights-Based Legal Frameworks and Risks of Regression

Moreover, the Rules conflict with the Rights of Persons with Disabilities Act 2016 (RPwD Act), which provides a limited guardianship model. Under this model, a court-appointed guardian makes decisions—limited to a “specific period”, “specific decision”, and “situation”—in consultation with PwDs. It is therefore unclear how a limited guardian can make decisions concerning a PwD’s data continuously over an indefinite period. Or must lawful guardians repeatedly approach the court to obtain guardianship arrangements tailored specifically to data processing? In that case, would it not turn limited guardianship into lifelong guardianship, since PwDs inevitably need access to the digital space throughout their lives?

The second group comprises individuals “suffering from any of the conditions relating to autism, cerebral palsy, mental retardation… and includes an individual suffering from severe multiple disabilities”. This mirrors the definition of PwDs under the National Trust Act 1999 (NT Act), which predates the UN Convention on the Rights of Persons with Disabilities (UNCRPD) 2006 and is rooted in a medical model of disability (note the use of terms like ‘suffering’). However, not all persons on the autism spectrum require a guardian to make decisions. Similarly, having one or more disabilities, even severe ones (benchmarked at 80 per cent or more), does not automatically mean that a person is unable to make decisions or needs a guardian. While some might need a guardian for financial planning, surely most can and should be able to choose which burger to order from which restaurant app.

Besides, with the implementation of the RPwD Act, all guardianships for PwDs shall be deemed to be limited guardianship. Therefore, even guardians appointed under the NT Act cannot exceed their specified mandates.

Erroneous assumptions

The DPDP Act has done away with the distinction between personal data and sensitive personal data, and consequently offers no special protection for disability data. This is unlike jurisdictions such as Australia, where disability data is classified as health data and afforded the higher level of protection usually granted to sensitive personal data.

The absence of safeguards for sensitive personal data raises concerns about potential misuse as PwDs are more vulnerable to prejudicial treatment, with instances of disability disclosures leading to discrimination. For instance, declaring a disability to seek exam accommodations could result in an insurance company raising the premium amount or cab aggregators hiking ride charges. Further, these Rules lack clarity on technical measures and means to verify PwD status or court-appointed guardianship, adding more user friction and exacerbating existing barriers to digital accessibility for PwDs.

The crux of the problem lies in equating disability with the inability to consent. This is an erroneous assumption. Even the Contract Act 1872 does not use disability as the standard; instead, it requires that a person be of sound mind, i.e. capable of understanding the contract and its effects. If special protection for PwDs is at all desirable, Australia provides a useful framework: data fiduciaries there are required to provide assistive resources—like interpreters and translators—to enable consent from PwDs. Only if consent is still not possible can the right be assigned to a nominated guardian, while involving the PwD in the decision-making process.

The Draft Rules, if implemented in their current form, will turn the clock back for PwDs by denying them the most basic right to make decisions. Ironically, while this law has been created for data privacy, it does exactly the opposite for PwDs who wish to keep their disability data private. It is unfortunate that in many ways it has created further confusion around guardianship provisions in India, and it certainly goes against the spirit of the UNCRPD. With the draft Rules now open to public consultation, we hope these provisions are revisited and our fears addressed.

Nipun Malhotra is the Founder & CEO, Nipman Foundation and Director, Disability Rights & Inclusion at The Quantum Hub. Senu Nizar is a lawyer and Senior Analyst, Public Policy at The Quantum Hub.

Centering Care in India’s Economic Policy

Authors: Sreerupa & Harshita Kumari
Published: 4th March, 2025 in The Hindu

Budget Allocation and Gaps in Care Infrastructure

The Union Budget for 2025 allocated a record ₹4,49,028.68 crore to the Gender Budget (GB), marking a 37.3% increase from FY24 and accounting for 8.86% of the total Budget. This rise is primarily due to the inclusion of the PM Garib Kalyan Anna Yojana, which accounts for 24% of the GB, rather than being driven by substantial investments in care infrastructure or new gender-responsive schemes. Despite this increase, critical investments in care infrastructure remain absent, reinforcing the persistent invisibilisation of care work in India’s economic planning. While the Economic Surveys of 2023-24 and 2024-25 highlight care infrastructure as central to women’s empowerment, the current Budget misses the opportunity to make tangible investments to strengthen India’s care economy in line with its socio-economic realities.
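As a rough back-of-envelope check, the headline figures above are internally consistent; the short sketch below simply derives the implied totals from the numbers cited in this article (the derived values are illustrative, not official estimates):

```python
# Consistency check of the Budget figures cited above.
# All rupee values are in crore; derived numbers are illustrative only.
gb_fy26 = 449_028.68        # Gender Budget allocation cited above
gb_share_of_total = 0.0886  # GB as 8.86% of the total Budget
yoy_increase = 0.373        # 37.3% rise over the previous year
pmgkay_share = 0.24         # PM Garib Kalyan Anna Yojana's share of the GB

implied_total_budget = gb_fy26 / gb_share_of_total  # ~50.7 lakh crore
implied_previous_gb = gb_fy26 / (1 + yoy_increase)  # ~3.27 lakh crore
implied_pmgkay = gb_fy26 * pmgkay_share             # ~1.08 lakh crore

print(f"Implied total Budget     : Rs {implied_total_budget:,.0f} crore")
print(f"Implied previous-year GB : Rs {implied_previous_gb:,.0f} crore")
print(f"Implied PMGKAY in the GB : Rs {implied_pmgkay:,.0f} crore")
```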

The Burden of Unpaid Care Work on Women

Globally, women spend an average of 17.8% of their time on unpaid care and domestic work (UCDW), with women in the Global South bearing higher burdens. The situation in India is especially concerning: Indian women shoulder 40% more of this burden than their counterparts in South Africa and China. The International Labour Organization reports that 53% of Indian women remain outside the labour force due to care responsibilities, compared to just 1.1% of men, underscoring entrenched inequities. For poor and marginalised women, the burden is severe: women in low-income families often juggle 17–19 hours of daily tasks, balancing paid work with domestic duties, intensifying ‘time poverty’, and eroding their well-being.

Feminist economists from the Global South emphasise that unpaid work in these regions encompasses a broader range of tasks than in the Global North, extending beyond household caregiving to include work on family farms, water and fuel collection, cleaning, and cooking. Limited access to essential infrastructure — such as water, clean energy, and sanitation — means women spend up to 73% of their time on these unpaid activities. For example, women spend nearly five hours daily collecting water, compared to 1.5 hours for men. Climate change exacerbates this burden, with the cost of water-related unpaid labour in India projected to reach $1.4 billion by 2050 under a high-emissions scenario. All of this stems from low public investment in care infrastructure and entrenched social norms that assign care work to women.

National-Level Solutions: Applying the ‘Three R + 1’ Approach

The Economic Survey 2023-24 highlights that direct public investment equivalent to 2% of GDP could generate 11 million jobs while easing the care burden. Applying the expanded ‘Three R + 1’ framework — Recognise, Reduce, Redistribute, and Represent — can ensure policies are both contextually relevant and transformative.

The first step is recognising the full spectrum of UCDW women perform. India’s 2019 Time Use Survey marked a milestone in acknowledging this issue, revealing that women spend an average of seven hours daily on UCDW. Despite the policy benefits that these surveys carry, their costs can make implementation challenging. One solution is to integrate time-use modules into existing household surveys.

The second step is reducing the UCDW burden through time-saving technologies and expanded access to affordable care infrastructure. The Centre has acknowledged gaps in access to essential services by extending the Jal Jeevan Mission (JJM) until 2028, aiming for 100% potable water coverage. However, funding delays and underutilisation hinder implementation. While the scheme’s Budget declined by 4.51% from last year’s Budget Estimates (BE), it saw a 195% jump over Revised Estimates (RE), highlighting allocation-spending gaps. With less than half of villages having functional household tap connections, JJM requires stronger implementation and water sustainability measures. Expanding childcare centres, eldercare support, and assistive technologies would ease women’s care burden, and boost their workforce participation.

The third key step is redistributing care work — from the home to the State and within households. The newly announced ₹1 lakh crore Urban Challenge Fund, with ₹10,000 crore allocated for FY 2025-26, can be transformative. It will finance up to 25% of bankable projects, encouraging private and public sector participation in urban redevelopment, water, and sanitation initiatives. By leveraging this fund, India can scale up pilot care infrastructure models already under way through the Smart Cities Mission. Taking inspiration from Bogotá’s Care Blocks, which centralise caregiving services to reduce women’s unpaid work, this approach aligns with the fund’s broader goal of sustainable urban development.

Women’s Representation

Finally, women’s representation in decision-making and implementation is crucial for creating gender-transformative policies. Excluding women from these processes leaves them vulnerable to policies that fail to address their lived realities. In fact, involving women in decision-making enhances the effectiveness of interventions significantly, sometimes by six to seven times.

With the Centre’s emphasis on Nari Shakti as a driver of economic growth, India has the opportunity to set a global example for a gender and care-sensitive economy. However, the current Budget falls short by not prioritising care as a central pillar. A more deliberate, well-funded strategy is necessary to ensure that care work is not treated as an afterthought but as a core component of inclusive growth.

Sreerupa is a Research Fellow and Program Lead at the Institute of Social Studies Trust, and Harshita Kumari is an Associate at The Quantum Hub (TQH) – a public policy firm

A Seat at the Digital Table: Centering Disability in Digital Public Infrastructure

Published: May 2025

This paper examines the intersection of disability and digital public infrastructure (DPI). Why disability? Persons with disabilities stand to benefit the most from the inclusive potential of DPI technologies. They stand to suffer the most when these technologies are designed without taking their needs into account. They stand to offer the most to economies and societies when new technologies enable their full participation. Nevertheless, to date, disability has largely remained at the periphery of the DPI conversation.

Three case studies from India—Aadhaar (identity), UPI (payments), and ONDC (e-commerce)—shed light on the reality of DPI and disability, as well as the possibility of building a more fully inclusive “Purple Stack.” Each of these case studies highlights different aspects of disability inclusion, reflected through different roles of government, civil society, and the private sector. Lessons include:

  • Speed and scale alone do not guarantee inclusion—accessibility must be an intentional design choice from the outset.
  • Processes are as important as products—user journeys, not just discrete technologies, determine real-world accessibility.
  • Governance has a critical role to play—just as security and privacy are embedded into DPI governance, accessibility must be codified through policies and standards.
  • Accessibility must exist at every level of the DPI stack—from frontend applications to backend protocols.

To translate these lessons into action, the community of DPI architects and advocates should take steps to build an open-source repository of DPI accessibility solutions. An additional recommendation is to develop a structured research agenda to assess the impact of DPI on persons with disabilities—including by filling in data gaps and mapping user journeys.

Disability is a complex and evolving concept. After defining key terms such as “accessibility” and “universal design,” this paper puts forward a working definition of a Purple Stack: a suite of digital public technologies that (a) embody the philosophy of universal design such that (b) the technologies themselves are accessible in ways that lead to (c) inclusive outcomes for persons with disabilities in key social, economic, and political domains.

Though a Purple Stack benefits persons with disabilities, disability inclusion is not the only reason to build one. Disability-inclusive DPI technologies are good for growth and will benefit everyone, eventually. Moreover, a Purple Stack is a powerful argument in favor of the DPI approach to decentralization and modularity.

Access the paper here

Navigating the Future of Work

Published: May 2025
Authors: Swathi Rao, Shubham Mudgil & Kaushik Thanugonda under the guidance of Deepro Guha

Rapid technological advancements in recent decades have significantly altered how we work and where we work. These changes are transforming humanity’s relationships with labour, marking a fresh phase in human progress. Advancements like artificial intelligence (AI) and machine learning (ML) are blurring the lines between physical and digital domains, presenting a new landscape of both immense opportunity and potential risk.

India is at the forefront of this revolution. The economy-wide adoption of advanced technologies like AI and expedited digitization due to initiatives like the Digital India Mission is driving transformation across industries. With the rapid uptake of frontier technologies, human-machine interaction is likely to dictate the skills and competencies required of employees in order to remain competitive and employable. As automation increasingly augments and, in some cases, substitutes human labour, the workforce must adapt by acquiring new proficiencies and embracing interdisciplinary collaboration. An example is the growth of the gig economy. Enabled by technology and characterised by task-based work, short-term contracts or freelance work, the gig economy offers both opportunities and challenges for workers and employers alike.

While adoption of technology is changing the nature of work, other “non-tech” elements like climate change are also redefining how we work. Increasingly there is growing emphasis on sustainability and reducing the carbon footprint of human activity. This has led to the emergence of “green jobs” that require specialised “green skills”. This shift towards sustainability aligns with global efforts to combat climate change and promote environmental conservation.

Additionally, societal shifts have brought forth new priorities in the workplace. The COVID-19 pandemic accelerated the adoption of flexible work arrangements, prompting companies to prioritise employee-centric approaches that enhance work-life balance and productivity. Flexible work models empower employees to dictate their work schedules, enable organisations’ access to diverse talent pools, and also aid in minimising environmental impact. This evolving nature of work is also altering established concepts of the workplace – redefining it beyond physical boundaries to encompass private spaces like homes and even virtual spaces that are being enabled via the use of technologies like augmented and virtual reality.

This transformed landscape necessitates a re-evaluation of our definition of work and the regulations governing it. Definitions and regulations governing workplaces and workers have hitherto been driven by infrastructure-heavy businesses and classical manufacturing or IT/ITeS models, and they overlook the agility demanded by today’s work ecosystem.

This report sheds light on the trajectory of these developments, and offers policy recommendations to navigate the complexities of the “Future of Work” in India.

Access the research here

The DPI Decade: A Review of Research on India’s Digital Public Infrastructure

Published: May 2025

The rapid advancement of technology – driven by increasing affordability and accessibility – is giving rise to a generation whose identities and interactions are primarily digital. This widespread integration of technology has led to the adoption of digital tools as essential components of governance and public service delivery around the world. These tools are collectively referred to as Digital Public Infrastructure (DPI). DPI refers to foundational digital platforms – along with the supporting institutional and legal frameworks – that enable society-wide functions and services. Globally, there is growing optimism about the transformative potential of DPI across sectors and use cases, including its ability to spur economic growth, improve access to justice, and support climate goals in low- and middle-income countries.

The Indian example is often cited as a successful model of the development and deployment of DPIs for large-scale impact. India’s digital ID program, Aadhaar, has near-universal coverage, with over 1.42 billion Aadhaar numbers issued as of March 2025. Authentication of identity through Aadhaar is plugging leakages in the government’s targeted subsidy schemes and reducing the costs businesses incur in customer acquisition and verification. The peer-to-peer payment system UPI (Unified Payments Interface) recorded a peak of 18 billion monthly transactions in March 2025. Other services are being built on top of the underlying DPI protocols, improving residents’ access to government and commercial services. India has also launched the Global Digital Public Infrastructure Repository to export the systems that underpin IndiaStack, with the promise of transformational change.

While the aggregate platform adoption and usage statistics are impressive, the impact of DPIs has not been studied and evaluated extensively, particularly their contribution to financial inclusion and overall economic growth. Our conversations with researchers reveal a lack of granular and disaggregated data on DPIs that could facilitate the estimation of their overall impact on society. This report provides an overview of existing research on DPIs and the data currently available, while strongly advocating for greater public access to disaggregated and open data.

Access the research here

Quick Policy Review: Laws on Obscenity

Authors: Senu Nizar & Manan Katyal
Published: May 2025

Amid growing public and political demands for new legislation to tackle obscene content online, this quick policy review by TQH contends that India already has a robust legal framework in place to address obscenity, particularly in the digital era.

Triggered by recent controversies surrounding OTT platforms and influencers, the brief reviews existing laws including the Bharatiya Nyaya Sanhita (BNS), the Indecent Representation of Women Act, and the IT Act and Rules.

Rather than proposing more laws, the brief highlights the real issue: uneven and politicised enforcement. It emphasises that existing provisions, when interpreted through progressive judicial precedents, are robust enough to deal with obscenity without compromising free expression. The brief also critiques the overuse of obscenity laws as moral policing tools and urges a shift in focus toward tackling serious online harms like non-consensual image sharing and gender-based abuse.

Through a review of legislation and landmark case law, the brief calls for a more nuanced and rights-based approach to digital safety.

Read the brief here

Rethinking Workplace Accommodations for Persons with Disabilities

Published: January, 2025

Multiple legislative and policy reforms have been undertaken in the past decades, both domestically and internationally, to recognise and strengthen the rights of persons with disabilities. These efforts, although certainly not infallible, mark a first step towards addressing and redressing years of discrimination, by ensuring inclusion and participation in society.

Many of these efforts zero in on promoting dignified and meaningful employment. However, global workforce participation presents a striking disparity: while persons with disabilities constitute 16% of the world’s population, with 80% being of working age, only one-third actively participate in the workforce. This challenge is particularly acute in India, where official statistics from the late 2010s indicate that just 2.2% of the population has disabilities, a proportion significantly below global averages, and that merely one-fifth of them are employed. These statistics underscore the urgent need to address workplace accessibility and inclusion barriers.

Alongside the equity and rights-based concerns this situation raises, India also loses out from under-employing persons with disabilities—the economic case for inclusive employment is particularly compelling. Research indicates that countries can raise their GDP by three to seven percentage points by increasing the employment rate of persons with disabilities to match that of persons without disabilities. Despite common perceptions, workplace adjustments often involve minimal or one-time costs while yielding significant benefits in employee retention and productivity.

In light of these gaps, The Quantum Hub (TQH) and the disability rights NGO Youth4Jobs (Y4J) have undertaken research and released two reports on workplace accessibility for persons with disabilities in India. Supported by Zoom India, the reports examine how providing reasonable accommodations at the workplace, such as assistive technologies and flexible work, has proven effective in removing infrastructural barriers and enhancing workforce participation. While TQH’s report highlights best practices emerging from a comprehensive review of Indian and global laws covering workplace accommodations, Y4J’s report provides quantitative insights into the state of employment for this demographic in India, through a detailed survey of over 200 employees with disabilities.

The path forward requires integrating accessibility considerations into all aspects of workplace planning. While the initial investment in creating inclusive workplaces may seem challenging, the long-term benefits—both social and economic—make it an imperative for modern organizations and societies. A comprehensive approach, supported by clear policy frameworks and organizational commitment, will help create workplaces where persons with disabilities can participate fully and meaningfully in the workforce.

Read the reports here:
Y4J’s Survey Report on Accessibility Challenges in the Workplace
TQH’s Global Policy Review of Reasonable Accommodations

Social Media Bans for teens will not succeed: India needs finer policy interventions

Authors: Aparajita Bharti & Sidharth Deb
Published: 6th January, 2025 in The Economic Times

In an unprecedented move aimed at protecting children from online harms, Australia amended its Online Safety Act to prohibit those under the age of 16 from accessing social media platforms. Last week, this was a trending topic in many Indian parent communities, with some arguing that India should consider a similar policy. However, it is likely that well-meaning parents have not considered the unintended consequences of such a move.

Bans are the bluntest instruments in public policy. Within the Australian establishment itself, the Human Rights Commissioner, the National Children’s Commissioner and the Privacy Commissioner have all argued that the ban will not effectively protect children online and will cut them off from essential resources and communities. Beyond these concerns, India’s sociocultural dynamics, stretched state capacity and the opportunity to grow through digital transformation complicate the situation further.

Limitations in Australia’s Approach

The Australian law introduces the concept of “age-restricted social media platforms”, defined as those whose sole or significant purpose is to enable online social interaction. This vague classification is designed to allow exemptions for gaming, messaging, health and education apps, among others, acknowledging the need to retain teens’ access to the internet. However, the approach fails to account for the shape-shifting quality of digital services and children’s ability to adapt quickly. Many ‘messaging’, ‘streaming’ and ‘gaming’ apps increasingly exhibit characteristics associated with social media, such as community interactions and user statuses. Children are likely to find alternatives to banned platforms to communicate with their peers, while parents nurse a false sense of security about their children’s safety. A greater risk is that lesser-known apps disguised as “non-social media” will fill this void, and adults will find out much later, much as spurious liquor proliferates in areas under prohibition.

Second, as argued by Carly Kind, Australia’s Privacy Commissioner, the legislation will oblige age-restricted services to collect sensitive information from all users for age assessment, increasing data security and privacy risks. Recognising these concerns, the Australian law itself does not allow companies to use government ID systems for age verification; instead, it requires platforms to take “reasonable steps” towards compliance. These undefined “reasonable steps” taken by each platform will be benchmarked against detailed age assurance trials that Australia’s eSafety Commissioner will undertake by September 2025. Perhaps recognising these uncertainties and the limitations of age assurance technologies, the law mandates a performance review within two years of implementation.

Third, the Australian Senate committee received representations on how social media improves accessibility for children with disabilities and provides peer support and solidarity for those facing marginalisation due to their gender, sexuality, cultural or other identities. A lack of such access can lead to social isolation and encourage riskier behaviour. A decision to ban young people from these platforms is thus riddled with multifaceted risks.

Indian Considerations and Tradeoffs

Now let’s come to India. Indian parents’ concerns typically include excessive screen time, access to inappropriate content, unsafe contact with strangers, and pressure to conform. These are all legitimate concerns, but they call for nuanced solutions. First, norms around parenting and children’s autonomy differ across cultures. While Australia has seen debate over mobile phone usage in schools, with no outright bans until 2023, Indian school boards across states have been proactive in imposing strict limits on device access in schools. Second, outside the metros, children usually share devices with their families, so their access to social media is often through an adult account on a mixed-use phone. A ban on social media for teens in India would struggle to account for this complexity. Third, in low digital literacy households, children help their parents navigate the internet; a family’s access to the digital economy is often tied to its children’s familiarity with digital platforms.

Despite these complexities, the discussions around children’s safety online are a wake-up call. Regulatory vacuums and ambiguities become a hotbed for bad policy ideas, like the Australian legislation, which emerged in an emotive political environment. We need interventions that preserve children’s access to digital services while addressing these concerns.

A middle path could be to incentivise and provide concrete guidance to platforms on safer design for children through ‘child safety codes’. Inspiration can be drawn from the age-appropriate design codes (AADC) in the UK and California. These codes identify common platform design principles around children’s best interests, age appropriateness, data processing, default safety settings, and parental controls. A May 2024 report observed that the implementation of the UK’s AADC triggered several positive platform design changes, such as safer default settings and parental oversight features, making children’s online experience safer. The Indian government should facilitate large-scale surveys and consultations with industry and organisations working with children, to better understand the unique challenges Indian children face and design a code that defines platform responsibility.

Apart from regulation, the education system needs to re-think its increasing dependence on devices that makes it difficult for parents to supervise their children’s activities online. We need to invest resources in building children’s own resilience and wisdom in navigating the internet safely by adding these topics in the curriculum.

There is no doubt that parents across the world are struggling to strike a fine balance with their children around internet access. However, policy interventions should be tailored to help parents achieve their goals instead of lulling them into a false sense of security. We fear that Australia’s social media ban will do exactly that.

Aparajita is the Founding Partner and Sidharth is an Associate Director at The Quantum Hub (TQH) – a public policy firm

India’s AI Safety Institute Should Tap into Parallel International Initiatives

Author: Sidharth Deb
Published: 2nd December, 2024 in The Hindu

Last month, India’s IT Ministry convened meetings with industry and experts to discuss setting up an AI Safety Institute under the IndiaAI Mission. Curiously, this came on the heels of PM Modi’s visit to the US, which was punctuated by the Quad Leaders’ Summit and the UN’s Summit of the Future. AI appeared high on the agenda in the run-up to the Summit of the Future, with a high-level UN advisory panel producing a report on Governing AI for Humanity.

Policymakers should build on India’s recent leadership at international fora like the G20 and GPAI, and position it as a unifying voice for the global majority in AI governance. As the IT Ministry considers the new Safety Institute, its design should prioritise raising domestic capacity, capitalise on India’s comparative advantages and plug into international initiatives.

Notably, the UN’s Summit of the Future yielded the Global Digital Compact, which identifies multistakeholder collaboration, human-centric oversight and the inclusive participation of developing countries as essential pillars of AI safety and governance. As a follow-up, the UN will now commence a Global Dialogue on AI. It would be timely for India to establish an AI Safety Institute that engages with the Bletchley Process on AI Safety. If executed correctly, India can deepen the global dialogue on AI safety and bring human-centric perspectives to the forefront of discussions.

Decoupling Institutional Capacity from Regulation Making

In designing the institute, India should learn from the concerns levelled against MeitY’s AI advisory of March 2024. The advisory’s proposal requiring government approval prior to the public rollout of experimental AI systems was met with widespread criticism. A fundamental critique questioned what institutional capability resides within India’s government to suitably determine the safety of novel AI deployments. Other provisions in the advisory on bias and discrimination, and its one-size-fits-all treatment of all AI deployments, further indicated that the advisory was not grounded in technical evidence.

Similarly, India should be cautious of the kind of prescriptive regulatory controls proposed in the EU, China and the recently vetoed California bill. The threat of regulatory sanction in a rapidly evolving technological ecosystem quells proactive information sharing between businesses, governments and the wider ecosystem, and nudges labs to undertake only the minimum steps towards compliance. Yet each jurisdiction demonstrates a recurring recognition of the value of specialised agencies, e.g. China’s algorithm registry, the EU’s AI Office, and California’s scrapped proposal for a Frontier Models Board. To maximise the promise of institutional reform, however, India should decouple institution building from regulation making.

The Promise of the Bletchley Process and Shared Expertise

The Bletchley process is anchored by the UK AI Safety Summit of November 2023 and the South Korea summit of May 2024. The next summit is set for France, and the process is yielding an international network of AI Safety Institutes.

The US and UK were the first to set up such institutes and have already signed an MoU to exchange knowledge, resources and expertise. Both institutions are also signing MoUs with AI labs, receiving early access to large foundation models, and have installed mechanisms to share technical inputs with the labs prior to public rollouts. These safety institutes facilitate proactive information sharing without being regulators. They are positioned as technical government institutions that leverage multistakeholder consortiums and partnerships to augment their capabilities for testing the risks frontier AI models pose to public safety. However, these institutes largely consider AI safety through the lens of cybersecurity, critical infrastructure security, biosecurity, and other national security threats.

These safety institutes aim to improve government capacity and mainstream the ideas of external third-party testing, risk assessments and mitigations, red-teaming protocols, and standardisation’s role in shaping responsible AI development. They aim to deliver insights that can transform AI governance into an evidence-based discipline, a prerequisite for proportionate, fit-for-purpose regulation. The Bletchley process presents India with an opportunity to collaborate with governments and stakeholders from across the world; shared expertise will be essential to keep up with AI’s rapid innovation trajectory.

Charting India’s Approach

India should establish an AI Safety Institute that integrates into the Bletchley network of safety institutes. For now, the Institute should be independent of rulemaking and enforcement authorities, operating exclusively as a technical research, testing, and standardisation agency. It would allow India’s domestic institutions to tap into the expertise of other governments, local multistakeholder communities and international businesses. While upscaling its AI oversight capabilities, India can also use the Bletchley network to advance the global majority’s concerns about AI’s individual-centric risks.

The Institute could champion perspectives on risks relating to bias, discrimination, social exclusion, gendered harms, labour markets, data collection and individual privacy. In doing so, it could deepen the global dialogue around harm identification, big-picture AI risks, mitigations and standards. If done right, India may become a global steward of forward-thinking AI governance that embraces multistakeholderism and intergovernmental collaboration. Moreover, the AI Safety Institute can demonstrate India’s scientific temper and willingness to implement globally compatible, evidence-based and proportionate policy solutions.

Sidharth is Associate Director, Public Policy at The Quantum Hub (TQH) – a leading public policy firm based in Delhi.

The State of Disability in India

Authors: Nipman Foundation, Young Leaders for Active Citizenship (YLAC) in collaboration with Hyundai India and NDTV
Published: November, 2024

The Samarth Initiative by Hyundai India, in partnership with NDTV, is advancing the conversation on disability inclusion by building awareness and advocating for meaningful, systemic change. Over the past year, the initiative has launched important discussions on assistive technology, inclusive education, and accessible infrastructure, challenging outdated perceptions and promoting equity for Persons with Disabilities (PwDs).

A whitepaper titled ‘The State of Disability in India’, authored by the Nipman Foundation and Young Leaders for Active Citizenship (YLAC), a sister organization of The Quantum Hub (TQH), was launched at Samarth this year. Unveiled by the Hon’ble Union Minister of Social Justice, Shri Virendra Kumar, the whitepaper sheds light on critical issues facing Persons with Disabilities (PwDs) in India, including accessibility, social security, employment opportunities, and societal attitudes. It also presents a Charter of Recommendations aimed at guiding policy, raising public awareness, and strengthening protections for PwDs.

The paper traces the history of the disability discourse, which has evolved from outdated charity and medical models to a rights-based approach that views PwDs as individuals entitled to full and equal rights. Groundbreaking international and Indian legislation, including the UN Convention on the Rights of Persons with Disabilities (CRPD) and the Rights of Persons with Disabilities Act (RPwD Act) of 2016, has enshrined these rights into law. However, real-world application has lagged, leaving significant gaps in accessibility, representation, and equity across public life.

The RPwD Act set important precedents, expanding the definition of disability and securing rights such as job reservations and accessibility standards for PwDs. Yet, its impact remains limited due to implementation challenges, inadequate social security support, and lack of political prioritization. The invisibility of PwDs persists in educational institutions, workplaces, healthcare, and public spaces, pointing to an urgent need for data-driven policy interventions that accurately capture the experiences and needs of PwDs.

In this context, The State of Disability in India calls for a collaborative approach to overcoming these barriers. Through the Samarth Initiative, proactive steps have begun to raise the profile of disability issues in mainstream media, highlight success stories of PwDs, and challenge prevailing misconceptions. Additionally, the initiative has hosted sensitization programs in schools across major Indian cities, inspiring the next generation to embrace inclusivity. By featuring the journeys of para-athletes and organizing the Samarth Championship for Blind Cricket, Samarth celebrates the capabilities of PwDs and illustrates the transformative potential of support and recognition.

As a critical contribution to this ongoing effort, The State of Disability in India paper offers recommendations to enhance accessibility, foster inclusive education and employment opportunities, and integrate PwDs into all facets of society. The whitepaper represents a roadmap for achieving a truly inclusive India—one that acknowledges and celebrates the capabilities of all its citizens, regardless of physical or cognitive differences.

You can read the whitepaper here.

Research Team: Nipun Malhotra, Rohit Kumar, Jayashankar Vengathattil, Senu Nizar, Arushi Chopra, and Shivangi Tyagi