Designing Gender Responsive Apprenticeship Programs

Authors: Swathi Rao, Avi Krish Bedi, Aparna G

Published: September 2023

It is widely known that women’s labour force participation in India needs policy attention. Although 66.8% of women in India are of working age, their labour force participation rate stands at a mere 35.6%, compared to 81.8% for men, and their employment is mostly confined to the informal sector.

Skill development is an important lever for increasing female labour force participation and meeting the targets of full and productive employment and decent work set by the United Nations Sustainable Development Goals (SDGs). However, skilling without the means to transition into an occupation cannot enhance economic prospects for women. Apprenticeships, therefore, offer the right mix of job-relevant skill training and a career pathway.

In the context of the future of work, apprenticeships, and specifically quality apprenticeships, can improve the employability of youth and adults through skilling, reskilling, and upskilling, regardless of age or gender. They can also help governments keep learning systems aligned with the job market. From a gender lens, apprenticeships become even more important: they not only impart technical skills to women but also help participants build the life skills and sense of agency needed for more permanent employment.

Government initiatives like the National Apprenticeship Promotion Scheme (NAPS) and the National Apprenticeship Training Scheme (NATS) aim to enhance skill development and employment prospects. However, a marked gender imbalance persists in these programs, with the majority of apprentices being male. This underscores the urgent need for reforms to establish gender-inclusive apprenticeship programs in India.

To address these gaps, this brief proposes several recommendations:

    • Firstly, collecting gender-disaggregated data can provide insights into women’s choices and open new avenues for their participation.
    • Secondly, incentivizing employers to hire more female apprentices and offering additional allowances can stimulate greater female engagement.
    • Thirdly, targeted awareness campaigns for women can enhance understanding and interest in apprenticeship programs.
    • Fourthly, creating gender-sensitive infrastructure and challenging social norms inhibiting female participation will foster inclusivity.
    • Finally, integrating NAPS into the DESHStack portal can improve women’s access to employment opportunities and streamline their entry into the labour market.

Implementing these recommendations promises a more gender-inclusive apprenticeship system, fostering economic growth and prosperity for women.

Read the full brief on Gender Responsive Apprenticeship Schemes here.

Policy Dialogue on the upcoming Digital India Act

Authors: Mahwash Fatima

Published: November 2023

The Quantum Hub (TQH) organised a policy dialogue on the proposed Digital India Act (DIA) – an upcoming legislation that aims to replace the Information Technology Act, 2000 (IT Act) to provide a comprehensive principle-based legal framework for the digital sector in India. Held in partnership with the US-India Strategic Partnership Forum (USISPF) on October 12, 2023 in New Delhi, the event featured two panel discussions on the contours of DIA and the principle of safe harbour thereunder.

Attended by a diverse group of stakeholders, the event provided a platform for nuanced discussions that highlighted the opportunities and challenges of the DIA and offered some key insights and recommendations for the government and the stakeholders to consider while drafting and implementing the legislation.

The first panel explored the scope of the proposed law against the backdrop of the pressing concerns necessitating the introduction of the DIA. The discussions focused on ensuring user safety in the wake of harms arising out of existing and emerging technologies, regulatory approaches, and the establishment of effective institutional bodies. The panel also examined the delicate balance between innovation and regulation. A key point highlighted by the panel was the need for an adaptive, risk-based regulatory approach rather than an exhaustive enumeration of user harms, since a principles-based framework is better placed to adapt to the dynamic nature of emerging technologies.

The second panel scrutinised the intricacies of the safe harbour principle. Discussions revolved around the potential impact of rethinking safe harbour on user safety, free speech and the fundamental functioning of digital platforms. While complete immunity for platforms was challenged, the panel underscored the importance of safeguarding safe harbour principles and cautioned that eliminating safe harbour could dampen innovation and reduce platforms’ flexibility in responding to changing market dynamics.

The points that emerged from this policy dialogue highlight the need for a principled, adaptive framework to navigate the dynamic digital landscape in India, fostering innovation while safeguarding user safety.

Read the full event report here.

Navigating Children’s Privacy and Parental Consent Under The DPDP Act 2023

Towards a safe and enabling ecosystem for India’s young digital nagriks. 

Authors: Aparajita Bharti, Nikhil Iyer, Rhydhi Gupta, & Sidharth Deb

Published: November 2023

In August 2023, the Indian government enacted the landmark Digital Personal Data Protection Act, 2023 (“DPDP Act”) after six years of consultation. Section 9 of the DPDP Act is one of the act’s most notable provisions and outlines a mechanism through which data fiduciaries (platforms, browsers, OS providers, etc.) can process the personal data of “children”. It requires all data fiduciaries to obtain ‘verifiable parental consent’ if they process the data of users aged below 18 years. Any mechanism to fulfil this legal requirement must look to satisfy three elements (a simple illustrative sketch follows the list below):

– Verify the user’s age with reasonable accuracy,
– Ascertain the legitimacy of the relationship between the user and the parent or guardian, and
– Record evidence of their consent.
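
To show how these three elements could fit together in practice, the sketch below assembles them into a single auditable consent record. It is a hypothetical, simplified illustration: the field names, helper functions and the self-declaration age check are our own assumptions, not mechanisms prescribed by the DPDP Act or any draft rules.

```python
# Hypothetical sketch of the three elements of verifiable parental consent.
# Field names and helpers are illustrative assumptions, not requirements of
# the DPDP Act or its rules.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ConsentRecord:
    child_user_id: str
    parent_id: str
    age_assurance_method: str   # e.g. self-declaration, age estimation, document check
    relationship_evidence: str  # how the parent/guardian link was established
    consented_purposes: list    # data-processing purposes the parent agreed to
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def verify_relationship(child_user_id: str, parent_id: str) -> tuple:
    """Placeholder check for the parent-child relationship.

    A real deployment might rely on a government-recognised mechanism or a
    consent manager; this stub simply accepts and returns dummy evidence."""
    return True, "stub-evidence"


def obtain_parental_consent(child_user_id: str, parent_id: str,
                            declared_age: int, purposes: list) -> Optional[ConsentRecord]:
    # 1. Verify the user's age with reasonable accuracy (self-declaration here).
    if declared_age >= 18:
        return None  # adult user; parental consent is not required
    # 2. Ascertain the legitimacy of the relationship between user and parent.
    ok, evidence = verify_relationship(child_user_id, parent_id)
    if not ok:
        return None
    # 3. Record evidence of the parent's consent for the stated purposes.
    return ConsentRecord(child_user_id, parent_id,
                         age_assurance_method="self-declaration",
                         relationship_evidence=evidence,
                         consented_purposes=purposes)
```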

This paper delves into pathways via which Indian authorities can implement the required provision. It provides a summary of the global regulatory and technical experience with age verification, while drawing on insights from the ‘YLAC Digital Champions’ program that runs in schools across the country. Run by TQH’s citizen engagement arm Young Leaders for Active Citizenship (YLAC), the Digital Champions program engages with young adults between the ages of 13 and 18 around various facets of online safety, risks and potential threats on the internet, conscious consumption of information, and fostering a healthy and meaningful relationship with technology.

Both age verification and parental consent have been discussed extensively in other jurisdictions. It is widely acknowledged that any regulation to safeguard children’s privacy requires balancing the safety of children and adolescents against the limitations and trade-offs of available technical methods. Our research shows that hard verification mechanisms (i.e. those based on documentary evidence such as government IDs), which have been proposed across countries, face criticism that they create inequity in internet access, introduce new privacy risks, and impose costs and other practical barriers to children’s access to online services and platforms. In India, implementation will also have to navigate concerns around circumvention by children and the feasibility of verifying parental consent at scale. Further complications may arise owing to our gender digital divide, low digital literacy, linguistic heterogeneity, and shared device usage in low-income households.

Keeping this digital reality and our digital inclusion goals in mind, our recommendations propose that the Ministry of Electronics and Information Technology (MeitY) should avoid a prescriptive one-size-fits-all mandate of hard verification across all digital products and services. Instead, we urge authorities to suggest a list of methods that adequately fulfil the underlying objective of parental consent for most data fiduciaries. To give effect to this approach, we recommend that the Government of India pass rules which help develop a code of practice for age assurance that prescribes a range of mechanisms, corresponding to the level of risk involved in data processed by a particular data fiduciary. We envisage that this approach will enable India’s youth to meaningfully engage with the growing digital economy while keeping them safe online. Our proposals envisage a vital role for civil society, organisations working with children, academia and media in getting this regulatory framework right.
 

Relevant links:
1. Full research study
2. Presentation highlighting key issues
3. Short video introduction to the YLAC Digital Champions Program
4. Digital usage patterns – Findings from a children’s survey

Need To Algo The Distance

Author: Deepro Guha

Published: October 5, 2023 in The Economic Times.

Meta recently made a groundbreaking announcement for its European users, offering them the option to opt out of its recommendation algorithm. This move signals a potentially pivotal shift in how social media services are offered in Europe and was necessitated by the implementation of the Digital Services Act (DSA) in the EU, which mandates algorithmic transparency by digital intermediaries. In this article, I aim to delve deeper into the concept of algorithmic transparency and explore other avenues of algorithmic regulation.

Ubiquity of algorithms

But let’s start with a simple question: Have you ever found yourself endlessly scrolling through social media, wondering why you can’t seem to stop? The answer likely lies in the algorithm that powers your social media feed. These algorithms have the remarkable ability to curate content that keeps you hooked on the platform. Not only do algorithms decide the content shown on social media feeds, they also influence consumer choice by controlling suggestions on e-commerce websites, and are even used by governments to process data for the provision of citizens’ benefits. In essence, algorithms, which are fundamental instructions governing how specific sets of information are treated, have become potent tools for shaping society.

However, these powerful tools also create a host of complex issues that need careful consideration and perhaps even regulation. First, algorithms employed by digital intermediaries are often so complex that they are inscrutable to the average person and sometimes even to regulators. This creates a stark information asymmetry problem. Moreover, certain algorithms, such as those used to train generative AI, are adaptive, offering little control over the models they create, even to their own creators.  An example of problems created by such models was highlighted in the recent episode of Microsoft’s AI software professing love to a New York Times journalist, and also attempting to convince him to leave his wife. Microsoft in response admitted that it may not know the exact reason behind the AI software’s erratic behaviour.

Second, there is a constant risk of bias creeping into algorithmic decision-making, especially when algorithms are used for targeting or identifying specific individuals. If left unaddressed, this can exacerbate socioeconomic inequalities. For instance, Meta recently settled with U.S. authorities in a case where its algorithms displayed bias against certain communities when showing housing ads for specific localities.

Third, when bias-related problems emerge, there should ideally be a human point of contact for grievance redressal. However, many companies employing algorithms offer limited recourse in such instances. For example, recent reports shed light on how Instagram’s algorithms often flag content posted by influencers as “violating community guidelines,” limiting their ability to monetize such content, without offering a robust grievance redressal system or even an explanation of which specific community guideline has been violated.

Global movement towards algorithmic regulation

As these issues gain global attention, there is a growing movement towards preparing for a future regime of algorithmic regulation. In the United Kingdom, digital regulators have outlined a vision document for the future of algorithmic regulation. The European Union has established the European Centre for Algorithmic Transparency (ECAT). Even in India, the earlier draft of the Data Protection Bill (2022) proposed algorithmic transparency in the treatment of personal data.

Challenges in Mandating Transparency

However, while the need to regulate algorithmic decision-making is urgent, the effectiveness of mandating algorithmic transparency remains questionable. Firstly, there is the issue of proprietary concerns. Companies may be hesitant to share such information because these algorithms often form the foundation of their business, as argued by Google when asked for more information about its algorithms by its own shareholders. Secondly, as Microsoft argued before the European Parliament, knowing how an algorithm is coded can be useless without knowledge of the data fed into it. This was also highlighted in Twitter’s recent move to make its source code public, with experts pointing out that while Twitter’s source code reveals the underlying logic of its algorithmic system, it tells us almost nothing about how the system will perform in real-time.

Alternative Approaches

Given these challenges with mandating algorithmic transparency, experts have suggested some alternative solutions that could alleviate the problems with algorithmic decision making. For instance, stakeholders can collaborate to create algorithmic standards with the objective of mitigating the adverse consequences of algorithmic decision making. For example, ALGO-CARE, a standard created in the UK, sets out a model of algorithmic accountability in predictive policing. The standard provides for measures such as supplementing the algorithm with other decision-making mechanisms and creating additional oversight to identify bias.

Additionally, there is a growing movement toward mandating algorithmic choice. This could involve companies offering users the option to choose which algorithms are used to provide services (similar to Meta’s move in Europe). Alternatively, third-party algorithm services could give users more options in terms of the information they receive. For instance, consumers could select services that adjust their e-commerce search results to favour domestic production or refine their Instagram feed to focus only on specific topics of interest.

While these interventions may create their own complications and need substantial capacity building, they are undoubtedly worth exploring. Therefore, as the Indian government works on the Digital India Bill, it would be prudent to keep a focus on algorithms and create capacity to allow for future regulation.

Deepro is Senior Manager at The Quantum Hub (TQH Consulting), a public policy firm in Delhi

Children, a key yet missed demographic in AI regulation

Authors: Rhydhi Gupta and Sidharth Deb

Published: September 26, 2023 in The Hindu

The Indian Government is poised to host a Global Summit on Artificial Intelligence (AI) this October. Additionally, as Chair of the Global Partnership on Artificial Intelligence (GPAI), Delhi will also host the GPAI global summit this coming December. These events underscore the strategic importance of AI, which is projected to add $500 billion to India’s economy by 2025, accounting for 10 percent of the country’s GDP target.

Against this backdrop, PM Modi recently called for a global framework on the ethical expansion of AI. Given the sheer volume of data that India can generate, it has an opportunity to set a policy example for the Global South. Observers and practitioners will track closely India’s approach to regulation and how it balances AI’s developmental potential against its concomitant risks.

One area where India can assume leadership is how regulators address children and adolescents who are a critical – yet less understood – demographic in this context. The nature of digital services means that many cutting edge AI deployments are not designed specifically for children but are nevertheless accessed by them.

The Governance Challenge

Regulation will have to align incentives to address issues of addiction, mental health, and overall safety. In the absence of such alignment, data-hungry AI-based digital services can readily deploy opaque algorithms and dark patterns to exploit impressionable young people. Among other things, this can lead to tech-based distortions of ideal physical appearance(s), which can trigger body image issues. Other malicious threats emerging from AI include misinformation, radicalisation, cyberbullying, sexual grooming, and doxxing.

The next generation of digital nagriks must also grapple with the indirect effects of their families’ online activities. Enthusiastic ‘sharents’ regularly post photos and videos of their children online to document their journeys through parenthood. As these children move into adolescence, we must equip them with tools to manage the unintended consequences. For instance, AI-powered deepfake capabilities can be misused to target young people, with bad actors creating morphed sexually explicit depictions and distributing them online.

Beyond this, India is a melting pot of intersectional identities across gender, caste, tribal identity, religion, linguistic heritage, etc. Internationally AI is known to transpose real world biases and inequities into the digital world. Such issues of bias and discrimination can impact children and adolescents who belong to marginalised communities.

Alleviate the Burden On Parents

AI regulation must improve upon India’s approach to children under its newly minted data protection law. The data protection framework’s current approach to children is misaligned with India’s digital realities. It transfers an inordinate burden onto parents to protect their children’s interests and does not facilitate safe platform operations and/or platform design. Confusingly, it inverts the well-known dynamic where a significant percentage of parents rely on the assistance of their children to navigate otherwise inaccessible UI/UX interfaces online. It also bans tracking of children’s data by default, which can potentially cut them off from the benefits of personalisation that we experience online. So how can the upcoming Digital India Act (DIA) better protect children’s interests when they interact with AI?

Shift the Emphasis to Platform Design, Evidence Collection, and Better Institutions

International best practices can assist Indian regulators in identifying standards and principles that facilitate safer AI deployments. UNICEF’s guidance for policymakers on AI and children identifies nine requirements for child-centred AI, drawing on the UN Convention on the Rights of the Child, to which India is a signatory. The guidance aims to create an enabling environment that promotes children’s well-being, inclusion, fairness, non-discrimination, safety, transparency, explainability and accountability.

Another key feature of successful regulation will be the ability to adapt to the varying developmental stages of children from different age groups. California’s Age Appropriate Design Code serves as an interesting template. The Californian code pushes for transparency to ensure that digital services configure default privacy settings; assess whether algorithms, data collection, or targeted advertising systems harm children; and use clear, age-appropriate language for user-facing information. Indian authorities should encourage research which collects evidence on the benefits and risks of AI for India’s children and adolescents. This should serve as a baseline to work towards an Indian Age Appropriate Design Code for AI.

Lastly, better institutions will help shift regulation away from top-down safety protocols that place undue burdens on parents. Mechanisms of regular dialogue with children will help incorporate their inputs on the benefits they derive and the threats they face when interacting with AI-based digital services. An institution similar to Australia’s Online Safety Youth Advisory Council, which comprises people aged 13 to 24, could be an interesting approach. Such institutions will make regulation more responsive to the threats young people face when interacting with AI systems, while preserving the benefits they derive from digital services.

The fast evolving nature of AI means that regulation should avoid prescriptions and instead embrace standards, strong institutions, and best practices which imbue openness, trust, and accountability. As we move towards a new law to regulate harms on the internet, and look to establish our thought leadership on global AI regulation, the interest of our young citizens must be front and centre.

Rhydhi and Sidharth are, respectively, Analyst & Manager, Public Policy at The Quantum Hub (TQH Consulting)

Building Digital Ecosystems for India: From Principles to Practice

An Implementation Blue Book

Authors: Aishwarya Viswanathan, Deepro Guha and Bhavani Pasumarthi
Research Lead: Rohit Kumar

Published: 2022

Over the last decade, India has pioneered a new approach to building GovTech – one which prioritises the creation of technology ‘building blocks’ that multiple innovators can leverage to build citizen-centric solutions: in other words, an approach that focuses on creating open ecosystems instead of closed systems. This approach recommends the use of Free and Open Source Software (FOSS), open standards, and open APIs and encourages interoperability. By doing so, it allows different systems to talk to each other seamlessly, empowers stakeholders, distributes the ability to solve complex societal problems and unleashes innovation to enhance service delivery. Starting with Aadhaar, India has built a menu of such digital solutions that today includes eKYC, DigiLocker, a Unified Payments Interface (UPI) and many other sector-specific solutions.

Three interrelated concepts: NODEs, Public Digital Platforms and IndEA

The Ministry of Electronics and Information Technology (MeitY), on behalf of the Government of India (GoI), has been a key advocate and custodian of this approach, putting forth three interrelated concepts – India Digital Ecosystem Architecture (IndEA) Framework, Public Digital Platforms and National Open Digital Ecosystems (NODEs).

The India Digital Ecosystem Architecture (IndEA) Framework provides a set of architectural principles, reference models and standards to support the seamless flow of data across government departments. Leveraging these principles, India has made tremendous strides in building critical Public Digital Platforms such as Aadhaar and UPI which have also facilitated the creation of National Open Digital Ecosystems (NODEs).

All three concepts adopt architecture thinking and interoperability – the IndEA Framework at a ‘whole of government’ level, and Public Digital Platforms & National Open Digital Ecosystems at the sectoral or segment-specific level. They build on common tech elements and strive for one common outcome – namely, adopting a de-siloed approach to GovTech to unlock greater economic and societal value for the citizen.

The consultation white paper on the Strategy for NODEs, released in early 2020, and the latest IndEA 2.0 draft framework have both generated wide public interest and engagement. It is now timely to take this approach forward by codifying the details into an implementation blue book so that the adoption of the IndEA & NODE approaches can be mainstreamed across various sectors and simplified for all government departments. This is what our research aims to do.

 
Detailed documents

Implementation Blue Book
Case Study – Ayushman Bharat Digital Mission

The Political Journey of Healthcare in Select Indian States

Authors: Sandhya Venkateswaran, Mayank Mishra and Nikhil Iyer
Published: October 2022

To comprehend the advancements in health within India, it is essential to analyse the progress achieved in different states. While many factors, including fiscal health, governance, and institutional capacity, influence health progress, the political priority accorded to health by a state’s leadership remains a key driver.

How the leadership views and positions healthcare within its vision for the state and its development path, and the incentives it sees accruing from healthcare improvements, form the basis of the attention given to this issue in a state.

In light of this context, The Quantum Hub worked with the Lancet Citizens’ Commission on Reimagining India’s Health System to examine the socio-political determinants of attention to health in five Indian states – Tamil Nadu, Rajasthan, Andhra Pradesh, Bihar and Jharkhand.

The analysis surfaces four insights. One, political ideology plays a role in driving attention to health, but political legitimacy can be linked with healthcare to drive attention in the absence of an ideological driver. External stakeholders can create an environment where legitimacy is linked with healthcare.

Two, sensitizing politicians to electorally rewarding policies elsewhere, and relevant to the state’s development journey, can motivate them to act.

Three, state capacity is a key variable in the confidence to undertake reforms and the choice of reforms.

Four, both the Central government and external stakeholders such as civil society can contribute to agenda setting at the state level.

Go back to a clean slate on data protection for children

Children can use platforms with parental consent but child users cannot be tracked, making safety measures hard to apply, even as the law’s exemptions could have unintended adverse effects.

Author: Nikhil Iyer
Published: August 24, 2023 in Livemint

After nearly a decade of discourse around a data protection law for India, a requirement that was given urgency by a landmark Supreme Court judgement in 2017 on the fundamental right to privacy, the Indian Parliament has finally passed the Digital Personal Data Protection Bill, 2023. This is the third version; previous drafts of the Bill were circulated in 2019 and 2022. However, in each draft, the law’s approach to protecting children’s privacy has remained hazy.

To recap, the current law sets the age of consent to use online services at 18 years. If you are younger than 18, then the online platform has to obtain “verifiable parental consent,” failing which the platform can incur massive penalties of up to ₹200 crore. While a child can use an internet platform once a parent provides consent, the platform is completely prohibited from tracking and monitoring the behaviour of child users, irrespective of the purpose for which such data processing is to be conducted.

This is where this approach becomes problematic. How can online platforms prevent a child from being exposed to harmful, risky or illegal content, interactions and experiences without tracking or monitoring their behaviour? How are they expected to take precautionary measures, such as alerting parents or law enforcement agencies, if the child is getting drawn towards self-harm, bullying, harassment, hate speech or other dangers? While other jurisdictions have chosen to place high responsibility on platforms for keeping children safer, the Indian law takes a diametrically opposite approach.

The law looks at “verifiable parental consent” as an end-all solution. This is in a country where less than 40% of Indians are digitally literate, as per the National Sample Survey’s 78th Round (2020-21) data, and where there is a distinct possibility of children gaming the system by using their parents’ phones/email IDs to provide consent without their knowledge. The mere fact of parental consent is presumed to take care of any harm or risk which may befall children after they begin using the platform.

To add to this, the law is willing to provide exemptions from parental consent requirements for certain platforms that will be certified as being “verifiably safe,” allowing them to process data of children above a certain age (16 years) without parental consent. As per an interview of the IT minister of state, this certification could be reserved for platforms that ensure “100% KYC,” through identity-proofs such as government ID cards. The exemption may be available to specific entities such as “education, skilling, some vocational music websites where children are learning music and they [platforms, i.e.] take all kinds of precautions” and “certainly not social media.”

This exemption carrot is riddled with issues as well. One, it is prima facie in conflict with the data minimization principle: platforms should only collect data necessary for achieving specific purposes. It is unclear how collecting parents’ IDs will help in keeping children safe while using the platform. Two, by laying down a white-listing process, where platforms have to apply for ‘verifiably safe’ certification, the law will increase bureaucratic entanglement in a dynamic digital economy. It is unclear if entities will have to apply to a government authority for every incremental change by which they seek to create more value for children using their products or services.

Further, the rationale for singling out certain categories of platforms is not immediately clear. Today, the lines between platforms’ purposes are blurred. For instance, YouTube is perhaps the world’s biggest ed-tech platform, with its invaluable and democratized repository of knowledge on everything from exam preparation to art and music lessons and personal development; but the government may categorize it as a social media or a streaming platform. Through this certification, the law would end up discriminating among entities that may be offering equally strong protections while processing children’s data, but may either be shut out at the door itself or not have applied for ‘verifiably safe’ certification for other reasons. This may also reduce the incentive of online platforms to innovate for children, as they may want to avoid an over-regulated market segment. India’s young netizens would then be at a disadvantage vis-a-vis their global peers in all these situations.

An alternative approach could have been to uphold ‘best interests of the child’ obligations under an internationally recognized standard of the UN convention on child rights, to which India is a signatory. In practice, the design of platforms would have to uphold this standard in terms of default settings, nudges, location tracking, publishing regular risk self-assessments, issuing prescriptions against detrimental use of data and so on. Beyond this, the government could blacklist any platform found to be violating the rules and stop it from processing children’s data. This will push all platforms to adhere to high data protection standards based on associated risk, while children would be able to freely access the internet based on varying levels of maturity.

How to protect children online is a global debate and there are no easy answers. However, the approach that we have taken to this issue suits neither the realities of India nor the challenges of cyberspace.

Nikhil Iyer is Senior Analyst, Public Policy at The Quantum Hub.

Tackling multidimensional poverty must for inclusive development

Tackling the root causes of deprivation, investing in education, improving nutrition and food security, and accelerating access to clean energy can help transform the lives of millions.

Authors: Mayank Mishra and Swathi Rao
Published: July 11, 2023 in Outlook

As the world commemorates World Population Day today, it is important to look at the most pressing issue faced by millions in India: poverty. While poverty is often used to describe monetary poverty, this definition is often too simplistic. Therefore, the United Nations recommends a different measure of poverty – the multidimensional poverty index (MPI), which is a measure of deprivation that encompasses a web of interconnected disadvantages, including limited access to basic infrastructure, education, healthcare, housing, and more. In fact, the Sustainable Development Goals (SDGs) aim to address precisely this complex nature of poverty.

In simple terms, the MPI measures deprivation across three equally weighted aspects of a person’s life – health, education, and living standards – each assessed through specific indicators (10 in total). A person is considered multidimensionally poor if they are deprived in at least one-third of these weighted indicators.
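
To make the arithmetic concrete, the short sketch below computes a weighted deprivation score following the standard global MPI structure (two indicators each for health and education, six for living standards, with each dimension weighted one-third). The household data in the example is hypothetical.

```python
# Illustrative computation of an MPI deprivation score, following the global
# MPI's standard dimensions and weights; the household below is hypothetical.
WEIGHTS = {
    # Health (1/3 of the total, split across 2 indicators)
    "nutrition": 1 / 6, "child_mortality": 1 / 6,
    # Education (1/3, split across 2 indicators)
    "years_of_schooling": 1 / 6, "school_attendance": 1 / 6,
    # Living standards (1/3, split across 6 indicators)
    "cooking_fuel": 1 / 18, "sanitation": 1 / 18, "drinking_water": 1 / 18,
    "electricity": 1 / 18, "housing": 1 / 18, "assets": 1 / 18,
}

def deprivation_score(deprivations: dict) -> float:
    """Sum the weights of the indicators in which a household is deprived."""
    return sum(w for indicator, w in WEIGHTS.items() if deprivations.get(indicator))

# Hypothetical household deprived in nutrition, cooking fuel and sanitation.
household = {"nutrition": True, "cooking_fuel": True, "sanitation": True}
score = deprivation_score(household)   # 1/6 + 1/18 + 1/18 ≈ 0.28
is_mpi_poor = score >= 1 / 3           # poverty cutoff: one-third of weighted indicators
print(f"score = {score:.2f}, multidimensionally poor: {is_mpi_poor}")
```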

Globally, over a billion people are reported to be multidimensionally poor. In the context of India, a country with a diverse population and varying socio-economic landscapes, measuring and addressing multidimensional poverty takes on even greater significance. It necessitates an understanding of the nuanced challenges faced by individuals and communities in different regions.

The story of India’s poverty alleviation in the decade between 2005-06 and 2015-16 is nothing short of extraordinary. Over the last 15 years, the number of multidimensionally poor people in India has fallen by more than 415 million. According to the SDG Tracker developed by the India Policy Insights (IPI) initiative at Harvard University, a significant majority of districts (52 per cent) have achieved their SDG targets, demonstrating commendable efforts in alleviating multidimensional poverty. Encouragingly, 41 per cent of the districts are on track to meet these targets by 2030, indicating positive momentum towards sustainable development. However, despite notable progress, there are still areas that demand urgent attention.

Data on MPI from the UN reveals that nutrition, years of schooling, and cooking fuel, in particular, are areas of high deprivation in India, compared to other indicators, implying that a considerable number of people are undernourished, have not completed six years of schooling, and still use dung, agricultural crops, shrubs, wood, charcoal or coal as primary sources of cooking fuel.

Though they comprise only 6 per cent of the total number of districts in India, 43 districts in the northern, eastern, and north-eastern parts of the country are still lagging in the pursuit of poverty alleviation. Although a relatively small proportion, in population terms this represents a substantial number of communities grappling with multidimensional poverty, and it calls for immediate attention and investigation.

The MPI treats a household as nutritionally deprived if any member is undernourished, assessed through age-specific BMI, weight-for-age, and stunting (for children aged 5 years and under). According to the IPI Districts Tracker, the aforementioned 43 districts have some of the country’s highest prevalence of stunting and underweight women. Additionally, over 62 per cent of districts in the country are not on track to meet the SDG target for nutrition, serving as a grim reminder of the burden of undernutrition and highlighting the urgent need to address the nutritional crisis that continues to plague India.

According to the latest National Family Health Survey (NFHS-5) data, the number of years of schooling among urban and rural areas was 7.5 years versus 4.0 years among women, and 8.8 years versus 6.5 years among men. While this is a remarkable improvement from the previous NFHS round (2015-2016), it is also important to note that men consistently have a higher average number of years of schooling compared to women. Further, 27 of the 43 districts falling behind have female attendance rates below the national average (71.76 per cent). This disparity not only perpetuates inequality but also hampers the nation’s progress in achieving sustainable development.

In terms of access to clean cooking fuel, although India has made significant progress through schemes like the Pradhan Mantri Ujjwala Yojana (PMUY), nearly 68 per cent districts in the country are still unlikely to meet the SDG target for clean cooking fuel. At the current rate of progress, most districts in central, northern, and north eastern India are unlikely to have 100 per cent access to clean fuel by 2030. This means that a significant portion of the population may continue to rely on primitive cooking methods, exposing them to hazardous fumes and compromising their health.

Amidst these concerning statistics, it is also worth noting that Indian districts have shown exceptional performance on living standard indicators like sanitation and electricity: 43 per cent of districts have already met the SDG target for electricity, 78 per cent of districts are on target to meet the SDG target for sanitation, and an additional 47 per cent are on target for electricity.

The year 2023 is an important year for India. It not only signifies the halfway point to the 2030 Sustainable Development Agenda but also marks a significant demographic shift, with India surpassing China as the world’s most populous country. In light of these developments, it becomes imperative for us to take stock of our development goals and course-correct. India’s journey towards inclusive and sustainable development hinges on our ability to address the multidimensional nature of poverty. By tackling the root causes of deprivation, investing in education, improving nutrition and food security, and accelerating access to clean energy, we can transform the lives of millions.

Swathi Rao and Mayank Mishra are with The Quantum Hub (TQH).

Infocalypse: AI, fake reviews and trust in digital commerce

Authentic reviews, whether positive or negative, serve as a valuable resource by sharing genuine experiences. In contrast, fake reviews distort the marketplace by depriving consumers of accurate information and swaying their decisions with misleading endorsements.

Authors: Mayank Mishra and Salil Ahuja
Published: July 27, 2023 in Hindustan Times

While there have been several discussions on how generative AI may impact our society, we specifically examine the challenges that may emerge in digital commerce. Within a few months of generative AI’s rollout for common use, we’re witnessing the early signs of AI tools, like ChatGPT, being misused to power bots, generate fake reviews, and saturate the web with subpar content. Products and services listed on online marketplaces are already attracting reviews that begin with the telltale words, “As an AI language model…”, which clearly indicates that these reviews have been written with the help of generative AI.

This is a significant threat to trust in digital platforms because when it comes to making purchasing decisions, the opinions of consumers who have used a particular product hold significant sway. According to a survey by an SEO platform, a staggering 76% of consumers regularly consider reviews before making a product choice. Furthermore, nearly 75% admit that a positive review is the key factor influencing their perception of the product. However, the proliferation of fake reviews jeopardises consumer trust in online reviews as a whole. Authentic reviews, whether positive or negative, serve as a valuable resource by sharing genuine experiences. In contrast, fake reviews distort the marketplace by depriving consumers of accurate information and swaying their decisions with misleading endorsements. Fake reviews also hold the power to either damage or bolster a business’s reputation, with far-reaching consequences. Research conducted by Harvard Professor Michael Luca suggests that a mere one-star change in a company’s Yelp rating can impact its revenue by 5-9%.

In the early days of digital commerce, there wasn’t significant recognition of the need to regulate platforms to ensure the genuineness of reviews. Unfortunately, this led to the rise of paid-review farms, which exploit businesses seeking to artificially boost their ratings on review platforms. A different study by Luca highlighted that in 2015, fake reviews could be bought for as little as 25 cents per review, indicating that the cost of posting fake reviews was much lower than the benefit one could reap from them.

Recognising the severity of the problem, the European Commission and national consumer protection authorities conducted an extensive EU-wide website screening to assess the credibility of consumer reviews. The results revealed deep doubts about the authenticity of reviews on the majority of the analysed websites. In India, the proliferation of fake and misleading reviews also violates consumers’ right to be informed, as enshrined in the Consumer Protection Act, 2019. The Department of Consumer Affairs therefore took notice after the release of the EU report and promptly invited key stakeholders to address the issue.

As a result of these discussions, the Department, in collaboration with the Bureau of Indian Standards (BIS), introduced voluntary standards for online reviews in November 2022. However, when these standards were created, they did not proactively account for the generative abilities of AI. It is now crucial to consider the ongoing AI boom and its implications for the industry.

AI is a dynamic field, and regulating fake reviews online requires the support and collaboration of industry stakeholders. The government must encourage innovation by the private sector in collaboration with experts and consumer groups to address this problem at scale as proliferation of fake reviews is likely to increase given the short time required to generate new content. Online marketplaces also have market incentives to address this problem, as they are well aware that fake reviews undermine trust and credibility, leading to decreased user engagement and ultimately people’s reluctance to use their platforms. We must therefore also build systems so that platforms collaborate with each other to identify common bad actors who are operating across online marketplaces.

Further, in India, initiatives such as the Open Network for Digital Commerce (ONDC) which are encouraging interoperability could also play a potential role in building systems of cooperation to check fake reviews. Fixing accountability of fake reviews and law enforcement mechanisms may also need to be revisited as AI’s impact on this aspect becomes clearer.

As the AI revolution unfolds, our ability to preserve the integrity of digital commerce becomes paramount to ensure continued trust in digital markets. We should take proactive steps to regulate and combat the proliferation of fake reviews to enable informed decision-making and protect online commerce from AI-powered manipulative practices.

Mayank Mishra is Manager, Public Policy and Salil Ahuja is Associate, Public Policy at TQH Consulting.

India urgently needs a new policy for the elderly

We need a policy that views senior citizens not just as passive recipients of welfare schemes but as valuable assets who can contribute significantly to society.

Authors: Srijan Rai and Aparajita Bharti
Published: July 24, 2023 in Livemint

The wellbeing of the elderly is mandated by the Constitution, specifically the Directive Principles of State Policy. However, the fact that the existing policy on older people was introduced in 1999 reflects the lack of policy focus on this issue. According to a report by the Parliamentary Committee on Government Assurances, the Ministry of Social Justice and Empowerment has given 11 assurances in Parliament since 2011 about bringing in a new policy. However, a draft of the National Policy for Senior Citizens from 2016 is yet to be finalised.

As we set our sights towards the India of 2047, this lack of attention to the country’s elderly population needs to change. The National Family Health Survey (NFHS) provides valuable insights into the changing needs and dynamics of senior citizens. The survey reveals that the number of senior citizens living with chronic illness is rising every year. It showed that about a quarter of those aged 40-49 already have hypertension and that its prevalence increases sharply with age. The findings show a similar pattern for diabetes. These health challenges demand a well-structured policy that integrates healthcare services, promotes preventive measures, and ensures that all elderly citizens have access to quality medical care.

The NFHS data also shows that a significant proportion of the elderly population faces financial hardships and social isolation. Many are dependent on their families or meagre pensions. For instance, the National Social Assistance Programme provides a pension of ₹500 a month and the Atal Pension Yojana (available to a small proportion of the elderly) offers ₹1,000-6,000 a month. These sums are insufficient to cover even basic needs.

Several other changes – such as a rise in the number of nuclear families, a decline in family size, and migration to urban areas and abroad – are also displacing the existing system of elderly care within families. India’s average family size decreased from 4.67 members in 2001 to 4.45 members in 2011 and is likely to drop further with fertility rates falling. The Maintenance and Welfare of Parents and Senior Citizens Act, 2007 – the law that allows the elderly to take legal action against their children if they fail to provide for them – is also insufficient. It does not take into account the reality of shame and social pressure that prevents many elders from taking legal recourse even if they are aware of their rights under this act. It is also silent on the government’s responsibilities towards elderly Indians.

Another aspect of India’s ageing population is that the share of women in this group is increasing. In line with global trends, Indian women’s life expectancy is already higher than men’s, and this gap is only expected to widen, according to the UN Population Division. This means the issue must also be seen through the lens of gender. Older women are at greater risk of neglect, abuse and ill-treatment due to their higher financial dependence and lack of agency. Apart from women, we also need to focus on other marginalised groups within the elderly population, such as those without any pension support, and individuals with chronic diseases and disabilities.

Given all these factors, a new policy approach is needed urgently to help enhance financial security through social-security measures, and promote opportunities for productive engagement to combat social isolation. From an infrastructure perspective, we need to start thinking about creating incentives for the private sector to build and maintain facilities, create budget space for public provision and explore community-led models with support from the government.

We also need to set up an institutional owner with a dedicated budget to help us achieve these goals. The Ministry of Social Justice, in its initial draft of the policy, emphasised the need for a dedicated Department for Senior Citizens, with state commissions and nodal officers at the district level to ensure policies are implemented.

Even as we wait for a clearer view of how India’s population has changed in the next census, we must act now based on what we already know. We need a new policy that views senior citizens not just as passive recipients of welfare schemes but as valuable assets who can contribute significantly to society. To enact such a policy, we need political foresight and the will to prepare for this imminent future.

This would entail exploring existing models in other countries, such as Japan’s Community-Based Integrated Care System and Singapore’s ‘Ageing-in-Place’ approach, which focuses on supporting the well-being of the elderly through home care, healthcare, day rehabilitation centres, senior activity centres, and so on.

Srijan Rai is Associate and Aparajita Bharti is Founding Partner, The Quantum Hub (TQH).

Data Reveals Regional Disparities in Menstrual Hygiene Management in India

Addressing menstrual hygiene requires a multi-faceted and holistic approach, as it is both caused by and affects a range of other indicators, including gender equality, health, education, and economic empowerment.

Authors: Shubham Mudgil and Dr. S. V. Subramanian
Published: July 10, 2023 in IndiaSpend

Women menstruate for a total of approximately six to seven years in their lifetime. This makes menstrual hygiene a crucial concern for women and girls. Managing menstruation hygienically and with dignity requires adequate knowledge, appropriate facilities, and a supportive cultural environment. However, taboos related to menstruation continue to dominate the mainstream narrative in India and hinder progress in improving menstrual hygiene practices.

Inadequate access to proper menstrual hygiene management has far-reaching consequences, affecting individuals physically, emotionally, and economically. In some cases, it can even lead to depression due to societal stigmatization and cultural taboos associated with menstruation. Tragically, these circumstances may lead to girls discontinuing their education. Dropping out of school has a strong negative impact on girls as they become much more likely to be married young, bear children before the age of 18, experience poorer health and nutrition outcomes and experience reduced earnings in adulthood.

In recognition of the importance of menstrual hygiene, the Supreme Court deemed it an “important issue of public interest” and directed the Central Government to develop a National Policy specifically targeting school-going girls in April 2023. This decision serves as a potent reminder of the urgent need for comprehensive action to ensure access to quality menstrual hygiene management throughout India.

In this context, this data story delves into the prevailing trends of menstrual hygiene at the level of parliamentary constituencies using data from the NFHS Policy Tracker for Parliamentary Constituencies developed by the India Policy Insights initiative at Harvard University. It also tries to highlight the consequential impact of the inadequate management of menstrual hygiene.

1) Menstrual Hygiene levels vary across regions

Fig 1: NFHS 5 levels of prevalence of menstrual hygiene

The current levels of menstrual hygiene vary significantly across the Indian states. These levels vary from 34.5% to 96.6% across parliamentary constituencies. Areas with low levels of menstrual hygiene (highlighted by shades of red) are concentrated in central India. Bihar (59.9%), Madhya Pradesh (61.5%), Uttar Pradesh (74%) and Rajasthan (85.8%) lie in the bottom deciles while states like Tamil Nadu (99.1%), Goa (98.1%), Punjab (95.3%), Kerala (95.2%) and Haryana (95.2%) lead the way in menstrual hygiene.

2) Menstrual Hygiene levels have improved significantly over the years

Fig 2: Change in the prevalence of menstrual hygiene between NFHS 4 and 5

Significant progress has been made in advancing menstrual hygiene between NFHS 4 and 5. An impressive 97.7% of parliamentary constituencies (PCs) registered an average increase of 20.1 percentage points in the prevalence of menstrual hygiene. Aska PC in Odisha saw the highest improvement of 54.7 percentage points, with prevalence rising from 30.4% in NFHS 4 to 85.1% in NFHS 5. Barmer PC in Rajasthan and Maldaha Uttar PC in West Bengal followed, registering increases of 52.5 and 49.3 percentage points respectively.

Overall, Odisha, Bihar and Rajasthan have led the way by registering an increase of 34.5, 30.1 and 29.0 percentage points respectively. However, in the same time period, 12 PCs across the nation witnessed a decline in menstrual hygiene levels, 5 of which were in Gujarat and 3 in Kerala. For an indicator which saw vast improvement across the nation, the reasons for regression in these PCs are worth investigating and addressing immediately.

3) Access to Sanitation Facilities: An Important Element of Menstrual Hygiene

Fig 3: NFHS 5 levels of Population living in households that use an improved sanitation facility

Inadequate water and sanitation facilities pose a major impediment to maintaining proper hygiene and privacy for menstruating women. A UNICEF study noted that a higher percentage of women and adolescent girls practising adequate MHM live in households with improved sanitation facilities and with safe excreta disposal on site compared to those who practice inadequate MHM. The NFHS data highlights a similar trend for Indian parliamentary constituencies (PCs).

Most PCs with low levels of menstrual hygiene (highlighted by shades of red in Fig 1) are also found to have a low proportion of households using improved sanitation facilities (highlighted by shades of red in Fig 3). States like Uttar Pradesh, Bihar, Jharkhand, Madhya Pradesh and Tripura have notably low levels of sanitation as well as menstrual hygiene. This warrants immediate attention. Interestingly, Tamil Nadu stands out as an outlier, as the NFHS reports very high levels of menstrual hygiene but relatively low levels of households with improved sanitation facilities.

4) Inadequate Menstrual Hygiene aggravates the risk of Anaemia

Fig 4: Anaemia (All Women) Prevalence at NFHS 5 levels

Anaemia is a significant problem among females in India: per NFHS-5, 57% of women of childbearing age in the country are estimated to be anaemic. The government has been promoting initiatives to ensure Iron and Folic Acid supplementation, provide fortified food for women at risk, and run large-scale awareness drives. However, improvements in menstrual hygiene can also play a significant role in India’s fight against anaemia. Improper menstrual hygiene management can lead to urinary or reproductive tract infections and pelvic inflammatory disease, which can cause blood loss and ultimately result in a higher risk of anaemia.

Studies have established “a significant association between poor menstrual hygiene management and anaemia”. This appears to be the case in Fig 1 and Fig 4 as PCs with poor menstrual hygiene overlap with PCs with high levels of anaemia among females across multiple regions in India. Therefore, it is likely that a deeper understanding of the causes of poor menstrual hygiene and addressing related risk factors could also help in curbing anaemia in the country.

Conclusion

Addressing menstrual hygiene requires a multi-faceted and holistic approach, as it is both caused by and affects a range of other indicators, including gender equality, health, education, and economic empowerment. Therefore, a comprehensive policy response to this issue must weave together education, awareness, access to affordable and hygienic menstrual products and improved sanitation facilities. By prioritizing these areas and implementing evidence-based and focused policy strategies, India can make significant strides in ensuring that women and girls have the necessary support, knowledge, and facilities to manage menstruation hygienically and with dignity.

Such efforts would also promote progress on several Sustainable Development Goals (SDGs), including SDG 3: Good Health and Well-being, SDG 4: Quality Education, SDG 5: Gender Equality, and SDG 6: Clean Water and Sanitation. Therefore, by promoting menstrual hygiene management, we can contribute to building a sustainable and equitable future for all.

Shubham Mudgil is a Public Policy Associate at The Quantum Hub and Dr. SV Subramanian is Professor of Population Health and Geography at Harvard University and Principal Investigator of the Geographic Insights Lab at Harvard.

*This data story uses data from the NFHS Policy Tracker for Parliamentary Constituencies developed by India Policy Insights at Harvard University. The tracker is accessible here.