Tackling Deepfakes Requires All Hands on Deck

Authors: Rohit Kumar and Mahwash Fatima

Published: 8th January 2024 in the Hindustan Times

What would your elderly father’s response be if he received an emergency video message from you requesting a large sum of money? With rapid advances in artificial intelligence, the normal human reaction to such situations can easily be exploited through the creation of deepfakes.

Deepfakes are undoubtedly one of the biggest threats our society is likely to face in 2024. No wonder the union government has taken up this issue as a priority. It has already sent an advisory to social media intermediaries asking them to strengthen their systems for detecting and taking down deepfakes. News reports also suggest that the Ministry of Electronics and IT is considering fresh amendments to the Information Technology (IT) Rules to include specific obligations for intermediaries to contain the deepfake menace.

It was in 2017 that deepfake content made its first notable appearance, with a Reddit user named ‘deepfakes’ posting fake videos of celebrities. Over the years, with the development of the underlying technology, these videos have become increasingly realistic and deceptive. Between 2019 and 2020, the amount of deepfake content online increased by over 900%, with some forecasts predicting that as much as 90% of online content may be synthetically generated by 2026.

The most worrying societal harm from the rise of misinformation and deepfakes is the erosion of trust in our information ecosystem. Not knowing who or what to believe can do unimaginable damage to how humans interact and engage with each other. A recent empirical study has in fact shown that the mere existence of deepfakes feeds distrust in any kind of information, whether true or false.

In India, while no legislation specifically governs deepfakes, existing laws such as the IT Act and the Indian Penal Code already criminalise online impersonation, malicious use of communication devices, publishing of obscene material, etc. Social media platforms are also obligated under the IT Rules to take down misinformation and impersonating content; failure to do so means risking their ‘safe harbour’ protection and being liable for the harm that ensues.

Unfortunately, while these legal provisions already exist, executing what the law demands is challenging. First, identifying deepfakes is a massive technical challenge. Currently available options, namely AI-powered detection and watermarking/labelling techniques, are inconsistent and inaccurate. Notably, OpenAI pulled its own AI detection tool in July 2023 due to ‘low accuracy’.

Second, the technologies used to create deepfakes have positive use-cases too. For instance, these same technologies can be used to augment accessibility tools for persons with disabilities, deployed in the entertainment industry for more realistic special effects, and even used in the education sector. Essentially, not every piece of digitally edited content is harmful. This further complicates the job of content moderation.
Third, the volume of content uploaded every second makes meaningful human oversight difficult. Unfortunately, by the time problematic content is detected, it has often already spread.

Policymakers around the world are struggling to find a good solution to the problem. The US and the EU seem to have taken some initial steps, but their efficacy remains untested. In the US, President Biden signed an executive order in October 2023 to address AI risks. Under this order, the Department of Commerce is creating standards for labelling AI-generated content. Separately, states like California and Texas have passed laws criminalising the dissemination of deepfake videos influencing elections, while Virginia penalises the distribution of non-consensual deepfake pornography. In Europe, the Artificial Intelligence Act will categorise AI systems into unacceptable, high, limited, and low risk. Notably, AI systems that generate or manipulate image, audio or video content (i.e. deepfakes), will be subjected to transparency obligations.

Technologists are also working on ways to accurately trace the origins of synthetic media. One of these attempts, by the Coalition for Content Provenance and Authenticity (C2PA), aims to cryptographically link each piece of media with its origin and editing history. However, the challenge with C2PA’s approach lies in achieving widespread adoption of these standards by devices and editing tools; without that, unlabelled AI-generated content will continue to deceive.
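For readers curious how such provenance binding works in principle, here is a minimal, illustrative sketch in Python. It is not the actual C2PA design: real C2PA manifests are embedded in the media file and signed with X.509 certificates and COSE signatures, whereas this sketch uses only the standard library, with a shared-key HMAC as a stand-in for signing.

```python
import hashlib
import hmac
import json

# Stand-in for an issuer's signing key; C2PA uses certificate-based keys.
SECRET_KEY = b"issuer-signing-key"

def sign_manifest(media_bytes: bytes, history: list) -> dict:
    """Bind a media file to its editing history with a keyed digest."""
    manifest = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "edit_history": history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the digest; any change to the media or its history fails."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if claimed["media_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

media = b"raw pixels of a photo"
m = sign_manifest(media, ["captured: camera-x", "edited: crop"])
print(verify_manifest(media, m))        # True: media and history intact
print(verify_manifest(b"tampered", m))  # False: media no longer matches
```

The point the sketch illustrates is the adoption problem discussed above: verification only helps if capture devices and editing tools sign content in the first place, and if platforms check the signatures.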

Therefore, while watermarking and labelling may help, what we need urgently is a focused attempt to reduce the circulation of deepfake content. Slowing down the circulation of flagged content until its veracity is confirmed can be crucial in preventing real-world harm. This is where intermediaries such as social media platforms can perhaps be required to step in more strongly. If an uploaded piece of content is detected to be AI modified or flagged by users, platforms should mark such content for review before allowing unchecked distribution.

Finally, there is no substitute for building resilience among the audience. Fostering media literacy, to help people of all ages better understand the threat of misinformation and become more conscious consumers of information, is the need of the hour.

Navigating the new digital era where ‘seeing is no longer believing’ is undoubtedly challenging. We need a multi-pronged regulatory approach that nudges all ecosystem actors to not only prevent and detect deepfake content, but also to engage with it more wisely. Anything less is unlikely to preserve our trust in the digital world.

Rohit is Founding Partner and Mahwash a Senior Analyst at The Quantum Hub (TQH), a public policy firm. 

The New Telecom Act: A Schrödinger’s cat paradox?

Author: Sumeysh Srivastava
Published: 10th January 2024 in the Economic Times

The Telecommunications Bill 2023 has been notified into law; it is now an Act. While the revamp of the primary 1885 legislation that has so far governed telecommunications in India is a welcome move, the new law has led to mixed reactions.

The treasury benches and some other commentators have indeed welcomed the Act, calling it a future-proof framework that will boost the growth of India’s digital economy. Provisions related to spectrum allocation, right of way, and deployment of infrastructure are being seen as enablers. However, there has also been commentary critiquing the Act for its lack of detail and for increasing government powers at the cost of user rights, specifically calling out the new provisions on interception and internet shutdowns.

The Act is indeed a manifestation of Schrödinger’s thought experiment: it can be seen as both good and bad, depending on how it is interpreted and implemented.

A pivotal aspect of the Act is the introduction of the concept of authorisation for the provision of telecom services. Authorisation is basically like deciding entry to a party; it can manifest in various forms, from the velvet rope of licensing to the full freedom of open entry, with other forms such as general authorisation and registration available as well. However, the Act lacks specificity regarding the type of authorisation that may be mandated, or even the factors that would determine the scope of licensing. This is unlike other countries, which clearly spell out the specifics. For instance, in Europe, Electronic Communication Services are subject to a general authorisation regime, with individual licensing considered on the basis of factors such as usage of scarce resources, threats to public health, etc. Similarly, the Nigerian Communications Commission Act details the principles and considerations to be kept in mind while formulating licensing procedures.

Not only is such detailing desirable, it is also a legal necessity. In the 2001 Kishan Prakash case, for instance, the Supreme Court had explicitly said that the legislature should not delegate its core law-making functions; it must set limits on the power being delegated by declaring the policy behind the law and laying down clear standards for guidance.

In a recent media interaction, the Honourable Minister clarified that the government does not intend for the Act to cover OTT communication services like WhatsApp and Telegram, which are separately regulated under the Information Technology Act. This is much appreciated. However, the broad definitions of terms such as “message” and “telecommunication” still leave room for the Act’s provisions to be extended to internet-based communication services via the Rules, and the Minister’s clarification may hold no legal value in court if a future government were to interpret the Act differently. Here, we could have drawn on best practices from other countries, such as New Zealand’s Telecommunications Act 2001, which also uses a broad definition of “telecommunication” but clarifies that only those telecommunication services explicitly listed under Schedule 1 of the Act are to be regulated. A detailed procedure has also been laid down for adding services to the schedule, which involves a recommendation from New Zealand’s Commerce Commission. This ensures that the scope of the legislation cannot be expanded easily; it also provides more certainty.

A lack of detailing is also seen in other crucial aspects of the Telecom Act. For example, with reference to the measures related to interception or blocking, while the term “safeguards” is mentioned, there is no detail on what these could be, or on the framework to be followed to ensure that the power is not misused. The Honourable Minister has mentioned in Parliament that interception measures under the new law will adhere to the guidelines laid down by the Supreme Court in the 1996 telephone-tapping case brought by the People’s Union for Civil Liberties (PUCL). However, in the PUCL case, the court had explicitly commented on the lack of procedural safeguards in the legislation. The Telecom Act 2023 was an opportunity to remedy this and provide a more secure framework for interception within the law itself.

When it comes to regulating rapidly evolving technology, the need to retain flexibility in implementation is understandable, and perhaps, even beneficial. But, the lack of guiding principles in the parent legislation also means that Rules issued under this Act can be changed easily. Not only can this risk user rights, it can also create uncertainty for businesses by leading to unnecessary litigation.

The Telecom Act 2023, in its current form, can be either a shiny new phone with the same old software or a revolutionary rocket that can turbocharge Bharat’s digital economy. Like Schrödinger’s cat, there is no way to know just yet.

Sumeysh is a Senior Manager at The Quantum Hub (TQH) – a public policy firm

When Paid Period Leave is Mandatory

Authors: Aparajita Bharti and Mitali Nikore
Published: 29th December 2023 in the Hindustan Times

“All women, girls and persons who menstruate are able to experience menstruation in a manner that is safe, healthy and free from stigma”. This is the overarching aim of India’s recent Draft Menstrual Hygiene Policy 2023. But are paid period leaves the best tool to achieve this aim?

Overall, even today, nearly 65% of all working women in India are employed in the agriculture sector, about 40% are helpers in household enterprises, and almost a third run their own small businesses. Only 24% of working-age urban women are employed, as opposed to 40% of rural women. Even amongst urban women, almost half of whom are in regular salaried employment, close to 55% work without a written contract, and 45% are not eligible for any paid leave.

In this scenario, legally mandated paid period leave funded by employers is likely to, first, only be offered to a small subsection of women working in the urban corporate sector, and, second, may create additional barriers for those women who are yet to enter formal employment. By imposing on employers an additional cost linked only to employees who menstruate, the unintended adverse impact on women’s participation in the workforce may end up far outweighing the benefit of any such legislation. Further, such a mandate discriminates against small and medium enterprises, nearly a fifth of which are led by women, as they may lack the financial resources to meet these legal obligations.

The Union Minister Ms. Smriti Irani recently alluded to this risk, which is in fact borne out by evidence from the implementation of laws such as the six months’ paid maternity leave. For this reason, even in countries like Spain, where menstrual leave has been legislated, the bill is footed by the public social security system, and menstruators need a doctor’s note certifying debilitating symptoms to avail of the leave. Further, women who have not paid into Spain’s social security system for the preceding six months are not eligible. All these guardrails are an explicit recognition of the risk of discrimination in hiring, retention, and promotion if employers are legally mandated to pay for unconditional menstrual leave.

So where do we begin in India, given our large informal economy and low female labour force participation? Our answer: look beyond legally mandated paid period leave towards an all-of-society approach.

First, focus on the infrastructure around menstrual hygiene management (MHM). It’s 2023, and workplaces without separate or clean toilets for women continue to exist, even in major metro cities and even in the government’s own offices. Rather than paid period leave, Central and State governments can prioritise establishing minimum legally mandated standards for separate, clean, well-maintained toilets for men, women, and persons with disabilities, as well as gender-neutral toilets, with sufficient provisions for free or subsidised period products and dignified, green menstrual waste disposal facilities.

Second, encourage the private sector to be a partner in menstrual hygiene management. The Draft Policy calls for private sector companies to allocate a portion of their corporate social responsibility funds towards MHM initiatives. Companies can allocate their CSR funds for distribution of free sanitary products, supporting social enterprises or community-based organizations engaged in production of sanitary products, improving sanitation infrastructure, and raising awareness.

Third, enhance government funding for menstrual hygiene management through effective gender budgeting. For truly implementing the Draft Policy objectives and targets, government would need to enhance the financial resource envelope for MHM initiatives. While many states have already launched schemes that involve free distribution of sanitary napkins to schoolgirls, these can be expanded to cover additional locations, such as government offices and construction sites. Moreover, government schemes can be developed to upgrade women’s toilets in public spaces, and offer subsidies to women entrepreneurs for manufacturing MHM products.

Fourth, build voluntary codes and partnerships to uphold existing labour laws. Arguably, better-quality working conditions should be accessible to all Indian women. However, as is widely known, ensuring the provision of minimum working conditions even under existing labour laws is an ongoing challenge. Civil society can galvanise communities and create voluntary codes around minimum wages, paid weekly leave, overtime allowance, and access to toilet and hygiene facilities for marginalised workers. These voluntary codes can be adopted by groups such as private sector organisations, resident welfare associations, market associations, and business chambers to improve enforcement of existing laws.

Fifth, organised sector enterprises can offer flexible work arrangements or leave as a benefit to their employees. Even without legal mandates, some companies are beginning to recognise that offering period leave or work from home for menstruators who require rest or medical attention improves employee morale, increases loyalty, and boosts labour productivity. For instance, after Zomato introduced a menstrual leave policy in 2020, many others followed, such as Swiggy, Byju’s, Orient Electric and Magzter.

There is no argument that workplaces need to accommodate biological differences between co-workers. Further, there is no argument that a large section of menstruators experience a wide range of health complications: cramps, back and muscle pains, bloating, headaches, and nausea, among others. However, it is debatable whether a legal mandate for employer-funded menstrual leave is the right course for India at this juncture. We need an all-of-society approach to ensure better conditions for menstruators in India.

Aparajita Bharti is a Founding Partner at TQH Consulting, a policy research and advisory firm; Mitali Nikore is Founder & Chief Economist, Nikore Associates. 

Lead the Way in AI Governance

Author: Sidharth Deb
Published: 12th December 2023 in the Economic Times

India’s Minister of State in the Ministry of Electronics and Information Technology (MeitY), Rajeev Chandrasekhar, made two notable observations at the UK’s international AI Safety Summit last month. First, he argued that different governments must collaborate on AI governance. Second, he contended that authorities must learn from earlier experiences with social media, where regulation struggled to keep pace with the ecosystem’s evolution. Mr. Chandrasekhar concluded his speech by inviting participants to the Global Partnership on Artificial Intelligence (GPAI) summit to be hosted by India in December. India was also one of 29 signatories to the Summit’s Bletchley Declaration, which largely addresses mitigation strategies against existential risks emanating from ‘frontier’ AI models.

GPAI: An Opportunity for Enduring Policy Leadership

As GPAI’s global chair, India has an opportunity to contribute progressively to the international AI governance discourse. This will require shifting away from traditional notions of command-and-control regulation premised on prescriptive compliance and liability.

When technologies like AI evolve at exponential rates there is an inordinate risk of widespread non-compliance. Additionally, enforcement becomes challenging, and regulations can quickly become redundant. This creates widespread uncertainty and undue liability risks. Ultimately, prescriptive regulation can inhibit competition since only those market participants with the adequate risk appetite will continue to innovate.

Instead, India should favour partnerships which pursue flexible safeguards, transparency, knowledge sharing, accountability, economic growth, and development. To ensure balance, governments must attempt to dynamically mitigate AI’s multifaceted risks and create a framework for responsible innovation. The framework should constructively engage with substantive issues without getting bogged down with challenges like the feasibility of prescriptive regulation. This can be viewed as phase one in the life cycle of AI governance where India lays sound foundational aspects which advance state capacity.

India’s GPAI stewardship could echo some contemporary international developments like the US’ Presidential executive order (EO) on AI safety and security, the G7 Hiroshima AI Process, and other voluntary commitments made by tech majors at prior government interactions.

Six Ideas for India’s AI Stewardship

First, governments must raise their capacity to engage with AI’s wide applicability across domains like healthcare, climate change, financial services, education, agriculture, housing, and urban development. Such broad applicability requires knowledge exchange. MeitY, under its IndiaAI initiative, should facilitate a whole-of-government approach to AI oversight. Different sectoral authorities should collaborate with stakeholders to develop a publicly accessible repository of AI deployments and use cases. This will empower sectoral authorities with better information to commence dialogues around developing sector-specific codes of practice on responsible AI development.

Second, robust standards development will assist with quality assurance. India should grant the appropriate resources to technical institutions like the Bureau of Indian Standards (BIS) and the Standardisation Testing and Quality Certification (STQC) Directorate to pursue such conversations across AI use cases. India should leverage government-to-government channels to facilitate MoUs through which these institutions can collaborate with international counterparts like the US Department of Commerce’s National Institute of Standards and Technology. In due course MeitY, BIS and STQC could codify standards for AI safety and trustworthiness which could serve as nutrition label equivalents for India’s AI ecosystem.

Third, India should commence an international project to explore scientific solutions to navigate the negative impacts of deepfake technologies. India’s current criminal and intermediary legal systems only offer after-the-fact remedies. However, the damage from malicious deployments commences as soon as content is created and distributed. The US EO discusses examining digital watermarking technologies as a possible solution. India should open a dialogue with international initiatives like the Coalition for Content Provenance and Authenticity. Decision-makers need to better understand the capabilities and limitations of these technologies, and begin reorienting how the public recognises artificial content on the internet.

Fourth, the US and UK have announced setting up national AI Safety Institutes which will work with companies to monitor and ensure the safety of ‘frontier’ AI models. This is to manage the unintended consequences of powerful AI models and the risks stemming from potential misuse by malicious actors to carry out cyber-enabled attacks against critical information infrastructures. India should consider setting up a similar AI safety institute which closely works with cybersecurity institutions like CERT-In and the NCIIPC. Such an institution should also be pushed to interface with the aforementioned international equivalents.

Fifth, governments must proactively address AI’s impact on labour markets. This impact is not uniform across sectors and varies substantially depending on the nature of deployment. Relevant ministries should support studies that quantify the impact of AI on labour markets to estimate job substitution and adaptation. Such studies will inform policymakers on appropriate social security and upskilling interventions.

Finally, AI’s risks are well documented across criminal justice/policing, housing, financial services and healthcare. The risks intersect with issues like accuracy, bias, discrimination, exclusion, citizen privacy, etc. As governments explore how AI can improve public service delivery and other government functions, public trust will be imperative for long run sustainability. India should establish legislation which safeguards citizens’ rights against the risks of Government AI deployments. Such legislation will bring more certainty to Government projects, minimise unforeseeable litigation risks, and position India as an international exemplar for government use of AI.

Sidharth Deb is Public Policy Manager at The Quantum Hub (TQH Consulting).

Efficient State Intervention Can Help Prevent Gender Based Violence in India

Authors: Devika Oberai and Ujval Mohan
Published: 13th December 2023 in The Mint

UN data over the past decade has maintained that as many as one in three women globally have experienced physical and/or sexual violence. Indian women too reel from exposure to risks of gender-based violence (GBV), exacerbated by deeply entrenched patriarchy and limited state capacity to intervene.

To its credit, India has enacted strong legislative frameworks to instil deterrence against GBV and provide protective support to survivors. Aside from stringent penalties under the Indian Penal Code for sexual assault and harassment, dedicated gender-responsive laws address intimate partner and familial violence (PWDVA 2005), workplace sexual harassment (PoSH 2013), and female foeticide (PCPNDT 1994). Going further, the government has proactively conceptualised policies that set up one-stop crisis centres (OSCs), fund safety upgrades in public spaces (Nirbhaya Fund), and set up women’s shelter homes (Swadhar Greh).

Multi-pronged efforts and rising public awareness have helped India dent the under-reporting problem to a certain extent, with recent trends indicating that more survivors are coming forward to report GBV. However, even accounting for this, GBV casts an ominous shadow on India’s aspirations to foster women-led development. The International Day for the Elimination of Violence Against Women, observed on 25th November, and the ensuing 16 Days of Activism are thus an opportune time to reflect on what India can do better to prevent GBV and protect survivors.

Allocative Efficiencies

First, we need to re-examine how resources to combat GBV are allocated. Today, state functionaries who respond and enforce anti-GBV laws often have to overcome inadequate resources, limited bandwidth, and a lack of meaningful supervision and coordination. For instance, laws related to domestic violence, female foeticide, and sexual violence are administered by different officials, who typically undertake these functions as ‘additional charges’ over and above their existing revenue/administration functions.

Re-imagining the administrative machinery with an eye on allocative efficiencies can optimise the use of available funding. Clubbing resources in the hands of a single motivated agency tasked solely with GBV response is likely to be more effective than thinly spreading resources out amongst a number of uncoordinated officials. This would unlock the potential to enable closer monitoring of GBV data and response efforts, prioritisation of more vulnerable groups or areas, and meaningful oversight and training for protection officers, police, and other functionaries. As an example, the US noted increased efficiency in combating GBV when it concentrated resources under the Office on Violence Against Women.

Aligning incentives 

Second, we must take stock of the deep trust deficit that has taken root between the state and its citizens in enabling justice. Despite continuous and appreciable efforts, survivors are still met with reluctance and scepticism when filing complaints, advised to ‘reconcile’ with aggressors, or find themselves in understaffed or ill-equipped crisis centres or shelter homes.

Turning the tide on these trends requires us to double down on gender sensitization training to augment responders’ capacity, which a vast civil society network is already engaged in.

At the same time, there is a need to strengthen the functionaries’ incentive to act in the best interests of the victim. Equipping survivors and watchdog organisations with a cause of action against officials for failing to discharge their duties can be an empowering tool for survivors to demand that the officials act as the law promises. The ‘carrot’ of meaningful capacity building, coupled with the ‘stick’ of consequences for inaction, can signal a reset in the relationship between survivors and the state.

Investing in Social Norm Change

Finally, it is imperative to acknowledge that, at its core, GBV is driven by deep-seated patriarchal norms that have been resilient against decades of state counter-efforts. Thus far, GBV response has largely remained reactive, and has even involved heightened surveillance of women. Without targeted interventions, the legacy of gender inequality is inherited by each successive generation. India therefore needs to operationalise the equality objective of the National Education Policy, 2020 by implementing comprehensive gender-norm corrective interventions at all levels of school education.

There is now emerging evidence that early-age interventions reduce propensity towards GBV, thus preventing a problem well before it takes root. It is encouraging that states like Odisha (in partnership with UNICEF) are using gender-responsive modules to strengthen inclusive learning outcomes in students. Such initiatives recognise that while investing in social norm change is a long game that demands commitment and patience, it could by far be the most effective in protecting Indian women.

India correctly identified women-led development as a priority during its G20 presidency. However, preventing GBV and acting against it is an absolute prerequisite for women to realise their full potential.

Ujval Mohan and Devika Oberai are, respectively, Senior Analyst and Public Policy Associate at The Quantum Hub (TQH Consulting).

Making Apprenticeship Schemes Women Friendly

Authors: Swathi Rao, Aparajita Bharti, Sona Mitra
Published: 30th October 2023 in the Hindustan Times

The year 2023 marks a milestone as India becomes the world’s most populous country, accompanied by the promise of a burgeoning working-age population (15-59 years) that could drive an economic boom. However, this could be hampered by the low number of women in the workforce. Government surveys reveal a decade of low workforce participation rates (WPR) among women, at a mere 26.6% (2021-22), further exacerbated by women’s confinement to low-paying, low-productivity jobs in the informal sector.

Skilling is an important lever for improving employability, and therefore, policymakers are prioritising these initiatives. Some of these include revamping Industrial Training Institutes (ITIs), setting up National Skill Training Institutes, and creating 5,000 skill hubs, aligned with the aim of the National Education Policy 2020 to integrate practical vocational training into school curricula. There are also focused initiatives to improve the uptake of such programmes among girls. The Ministry of Skill Development and Entrepreneurship signed an MoU with the Ministry of Women and Child Development last year to improve skills of girls in non-traditional livelihoods and to ensure a smooth transition from skills to jobs. In such an ecosystem, apprenticeships offer a transition pathway by extending job-relevant training.

Under the Apprentices Act 1961, firms in India with six or more employees can engage apprentices constituting 2.5% to 10% of their workforce. A 2015 government assessment reported that a skilled workforce of over 20 lakh could be created if central public undertakings, the central government, banking, and eligible MSMEs were to engage the minimum prescribed number of apprentices. Currently, the Indian government administers two primary apprenticeship programs: the National Apprenticeship Promotion Scheme (NAPS) and the National Apprenticeship Training Scheme (NATS). However, the gender-neutral design of both initiatives has played a role in fostering gender bias, which, in turn, has led to the underrepresentation of women in apprenticeships.

Gender Disparities in India’s Apprenticeship Programs

As of June 2023, NAPS had engaged 20,49,297 apprentices, of whom 80% were male (16,44,071) and only 20% female (4,07,568). NAPS has leveraged monthly apprenticeship melas to spread awareness and recruit apprentices over the last two to three years. Even though these melas successfully doubled the number of apprentices between 2020-21 and 2021-22, the engagement of female apprentices remains disproportionately low.

Ensuring gender-inclusive apprenticeship programs and expanding their impact calls for a comprehensive approach: tailoring initiatives to women's needs while extending apprenticeship opportunities across sectors. We must formulate apprenticeship programs that are friendly to women and girls by creating appropriate supporting and enabling infrastructure, providing access to mentorship, incentivising participation in male-dominated vocations, and encouraging a role-model approach to skilling and apprenticeship. Using government apprenticeship initiatives to drive demand in other sectors can also be effective in bringing in more women and girls.

Enhancing NAPS also requires a commitment to gathering gender-disaggregated data so that interventions can be tailored effectively. It also demands laying out a clear choice of programs, highlighting success stories, and supportive budgetary allocations with specific targets for women and girls. The key is also to reimagine skill development, integrating foundational, transferable, and vocational skills into secondary and higher education as women's enrolment in formal institutions across education levels increases. Industry engagement is pivotal to the success of these interventions.

Notably, Switzerland’s Vocational and Professional Education and Training (VPET) model stands as a remarkable example of effective integration of the education and skilling ecosystem. About two-thirds of Swiss youth engage in apprenticeships by age 15. VPET offers impressive flexibility, enabling seamless transitions between professions and bridging academic and vocational paths. The dual VET system is recognised as instrumental to the nation’s economic prosperity.

Switzerland's VPET system is part of a larger trend in Europe, where several countries embrace "dual" vocational education and training, effectively combining classroom learning with practical workplace experience. In Switzerland, 41% of upper secondary VET students are women; this is lower than the OECD average, where women account for 45% of enrolments in vocational programs. However, even where enrolments are more equitably distributed overall, men tend to dominate STEM fields (Science, Technology, Engineering, and Mathematics) across countries, while women are more prominent in areas like business, administration, law, services, health, and welfare. To address this, some countries have implemented measures to encourage greater female participation. For instance, in Ireland, employers who hire female craft apprentices can receive a financial incentive in the form of a bursary for each female apprentice they register. This initiative has recently been expanded to cover all programs in which a single gender has more than 80% representation.

The overall success of the European apprenticeship models underlines the importance of data-driven policy design, which can be transformative. Apprenticeships are powerful mechanisms for the transition from education and training into jobs, and they need to be gender inclusive in their design. While the European examples may not be directly applicable to the Indian context, the principles of leveraging high-quality data, embracing comprehensive and inclusive program designs, and harnessing the power of incentives can serve as valuable lessons for enhancing the gender responsiveness of Indian apprenticeship initiatives.

Swathi Rao is an Analyst and Aparajita Bharti is a Founding Partner at The Quantum Hub (TQH Consulting), and Sona Mitra is Principal Economist at IWWAGE.

Designing Gender Responsive Apprenticeship Programs

Authors: Swathi Rao, Avi Krish Bedi, Aparna G

Published: September 2023

It is widely known that women's labour force participation in India needs policy attention. Although 66.8% of women in India are of working age, their labour force participation rate stands at a mere 35.6%, compared to 81.8% for men; and their employment is mostly confined to the informal sector.

Skill development is an important lever for increasing female labour force participation and meeting the targets set by the United Nations Sustainable Development Goals (SDGs) of full and productive employment and decent work. However, skilling without the means to transition to an occupation cannot enhance economic prospects for women. Apprenticeships, therefore, offer the right mix of job relevant skill training with a career pathway.

In the context of the future of work, apprenticeships, and specifically quality apprenticeships, can improve the employability of youth and adults through skilling, reskilling, and upskilling, regardless of age or gender. They can also help governments keep learning systems in step with the job market. From a gender lens, apprenticeships become even more important: they not only impart technical skills to women but also give participants the life skills and sense of agency needed for more permanent employment.

Government initiatives like the National Apprenticeship Promotion Scheme (NAPS) and the National Apprenticeship Training Scheme (NATS) aim to enhance skill development and employment prospects. However, a marked gender imbalance persists in these programs, with the majority of apprentices being male. This underscores the urgent need for reforms to establish gender-inclusive apprenticeship programs in India.

To address these gaps, this brief proposes several recommendations:

    • Firstly, collecting gender-disaggregated data can provide insights into women’s choices and open new avenues for their participation.
    • Secondly, incentivizing employers to hire more female apprentices and offering additional allowances can stimulate greater female engagement.
    • Thirdly, targeted awareness campaigns for women can enhance understanding and interest in apprenticeship programs.
    • Fourthly, creating gender-sensitive infrastructure and challenging social norms inhibiting female participation will foster inclusivity.
    • Finally, integrating NAPS into the DESHStack portal can improve women’s access to employment opportunities and streamline their entry into the labour market.

Implementing these recommendations promises a more gender-inclusive apprenticeship system, fostering economic growth and prosperity for women.

Read the full brief on Gender Responsive Apprenticeship Schemes here.

Policy Dialogue on the upcoming Digital India Act

Authors: Mahwash Fatima

Published: November 2023

The Quantum Hub (TQH) organised a policy dialogue on the proposed Digital India Act (DIA), an upcoming legislation that aims to replace the Information Technology Act, 2000 (IT Act) and provide a comprehensive, principle-based legal framework for the digital sector in India. Held in partnership with the US-India Strategic Partnership Forum (USISPF) on October 12, 2023, in New Delhi, the event featured two panel discussions on the contours of the DIA and the principle of safe harbour thereunder.

Attended by a diverse group of stakeholders, the event provided a platform for nuanced discussions that highlighted the opportunities and challenges of the DIA and offered some key insights and recommendations for the government and the stakeholders to consider while drafting and implementing the legislation.

The first panel explored the scope of the proposed law against the backdrop of the pressing concerns necessitating the introduction of the DIA. The discussions focused on ensuring user safety in the wake of harms arising out of existing and emerging technologies, regulatory approaches, and the establishment of effective institutional bodies. The panel also examined the delicate balance between innovation and regulation. A key point highlighted was the necessity of an adaptive, risk-based regulatory approach over an exhaustive enumeration of user harms, since a principles-based framework is better able to adapt to the dynamic nature of emerging technologies.

The second panel scrutinised the intricacies of the safe harbour principle. Discussions revolved around the potential impact of rethinking safe harbour on user safety, free speech, and the fundamental functioning of digital platforms. While complete immunity for platforms was challenged, the panel underscored the importance of safeguarding safe harbour and raised concerns that eliminating it could negatively impact innovation and platforms' flexibility in responding to changing market dynamics.

The points that emerged from this policy dialogue highlight the need for a principled, adaptive framework to navigate the dynamic digital landscape in India, fostering innovation while safeguarding user safety.

Read the full event report here.

Navigating Children’s Privacy and Parental Consent Under The DPDP Act 2023

Towards a safe and enabling ecosystem for India’s young digital nagriks. 

Authors: Aparajita Bharti, Nikhil Iyer, Rhydhi Gupta, & Sidharth Deb

Published: November 2023

In August 2023, the Indian government enacted the landmark Digital Personal Data Protection Act, 2023 ("DPDP Act") after six years of consultation. Section 9, one of the Act's most notable provisions, outlines a mechanism through which data fiduciaries (platforms, browsers, OS providers, etc.) can process the personal data of "children". It requires all data fiduciaries to obtain 'verifiable parental consent' before processing the data of users aged below 18 years. Any mechanism to fulfil this legal requirement must satisfy three elements:

– Verify the user’s age with reasonable accuracy,
– Ascertain the legitimacy of the relationship between the user and the parent or guardian, and
– Record evidence of their consent.

This paper delves into pathways through which Indian authorities can implement this provision. It summarises the global regulatory and technical experience with age verification, while drawing on insights from the 'YLAC Digital Champions' program that runs in schools across the country. Run by TQH's citizen engagement arm, Young Leaders for Active Citizenship (YLAC), the Digital Champions program engages young adults between the ages of 13 and 18 on various facets of online safety, risks and potential threats on the internet, conscious consumption of information, and fostering a healthy and meaningful relationship with technology.

Both age verification and parental consent have been discussed extensively in other jurisdictions. It is widely acknowledged that any regulation to safeguard children's privacy must balance children's and adolescents' safety against the limitations and tradeoffs of available technical methods. Our research shows that hard verification mechanisms (i.e. those based on documentary evidence such as government IDs), which have been proposed across countries, draw criticism for creating inequity in internet access, inadvertently causing privacy concerns of their own, and imposing costs and other practical barriers to children's access to online services and platforms. In India, implementation will also have to navigate concerns around circumvention by children and the feasibility of verifying parental consent at scale. Further complications may arise owing to our gender digital divide, low digital literacy, linguistic heterogeneity, and shared device usage in low-income households.

Keeping this digital reality and our digital inclusion goals in mind, we recommend that the Ministry of Electronics and Information Technology (MeitY) avoid a prescriptive, one-size-fits-all mandate of hard verification across all digital products and services. Instead, we urge authorities to suggest a list of methods that adequately fulfil the underlying objective of parental consent for most data fiduciaries. To give effect to this approach, we recommend that the Government of India pass rules that help develop a code of practice for age assurance, prescribing a range of mechanisms corresponding to the level of risk involved in the data processed by a particular data fiduciary. We envisage that this approach will enable India's youth to engage meaningfully with the growing digital economy while keeping them safe online. Our proposals also envisage a vital role for civil society, organisations working with children, academia, and the media in getting this regulatory framework right.

Relevant links:
1. Full research study
2. Presentation highlighting key issues
3. Short video introduction to the YLAC Digital Champions Program
4. Digital usage patterns – Findings from a children’s survey

Need To Algo The Distance

Author: Deepro Guha

Published: October 5, 2023 in The Economic Times.

Meta recently made a groundbreaking announcement for its European users, offering them the option to opt out of its recommendation algorithm. This move signals a potentially pivotal shift in how social media services are offered in Europe and was necessitated by the implementation of the Digital Services Act (DSA) in the EU, which mandates algorithmic transparency by digital intermediaries. In this article, I aim to delve deeper into the concept of algorithmic transparency and explore other avenues of algorithmic regulation.

Ubiquity of algorithms

But let's start with a simple question: have you ever found yourself endlessly scrolling through social media, wondering why you can't seem to stop? The answer likely lies in the algorithm that powers your social media feed. These algorithms have a remarkable ability to curate content that keeps you hooked on the platform. Not only do algorithms decide the content shown on social media feeds, they also influence consumer choice by controlling suggestions on e-commerce websites, and are even used by governments to process data for the provision of citizen benefits. In essence, algorithms, the fundamental instructions governing how specific sets of information are treated, have become potent tools for shaping society.

However, these powerful tools also create a host of complex issues that need careful consideration and perhaps even regulation. First, the algorithms employed by digital intermediaries are often so complex that they are inscrutable to the average person, and sometimes even to regulators. This creates a stark information asymmetry. Moreover, certain algorithms, such as those used to train generative AI, are adaptive, offering little control over the models they create, even to their own creators. The problems such models can create were highlighted in the recent episode of Microsoft's AI software professing love to a New York Times journalist and attempting to convince him to leave his wife. Microsoft admitted in response that it may not know the exact reason behind the software's erratic behaviour.

Second, there is a constant risk of bias creeping into algorithmic decision-making, especially when algorithms are used for targeting or identifying specific individuals. If left unaddressed, this can exacerbate socioeconomic inequalities. For instance, Meta recently settled with U.S. authorities in a case where its algorithms displayed bias against certain communities when showing housing ads for specific localities.

Third, when bias-related problems emerge, there should ideally be a human point of contact for grievance redressal. However, many companies employing algorithms offer limited recourse in such instances. For example, recent reports shed light on how Instagram's algorithms often flag content posted by influencers as "violating community guidelines", limiting their ability to monetise such content, without offering a robust grievance redressal system or even an explanation of which specific community guideline has been violated.

Global movement towards algorithmic regulation

As these issues gain global attention, there is a growing movement towards preparing for a future regime of algorithmic regulation. In the United Kingdom, digital regulators have outlined a vision document for the future of algorithmic regulation. The European Union has established the European Centre for Algorithmic Transparency (ECAT). Even in India, the earlier draft of the Data Protection Bill (2022) proposed algorithmic transparency in the treatment of personal data.

Challenges in Mandating Transparency

However, while the need to regulate algorithmic decision-making is urgent, the effectiveness of mandating algorithmic transparency remains questionable. Firstly, there are proprietary concerns: companies may be hesitant to share such information because these algorithms often form the foundation of their business, as Google argued when asked for more information about its algorithms by its own shareholders. Secondly, as Microsoft argued before the European Parliament, knowing how an algorithm is coded can be useless without knowledge of the data fed into it. This was also highlighted in Twitter's recent move to make its source code public, with experts pointing out that while the source code reveals the underlying logic of Twitter's algorithmic system, it tells us almost nothing about how the system will perform in real time.

Alternative Approaches

Given these challenges, experts have suggested alternative solutions that could alleviate the problems of algorithmic decision-making. For instance, stakeholders can collaborate to create algorithmic standards with the objective of mitigating the adverse consequences of algorithmic decision-making. ALGO-CARE, a standard created in the UK, sets out a model of algorithmic accountability in predictive policing. It prescribes measures such as supplementing the algorithm with other decision-making mechanisms and creating additional oversight to identify bias.

Additionally, there is a growing movement toward mandating algorithmic choice. This could involve companies offering users the option to choose which algorithms are used to provide services (similar to Meta’s move in Europe). Alternatively, third-party algorithm services could give users more options in terms of the information they receive. For instance, consumers could select services that adjust their e-commerce search results to favour domestic production or refine their Instagram feed to focus only on specific topics of interest.

While these interventions may create their own complications and need substantial capacity building, they are undoubtedly worth exploring. Therefore, as the Indian government works on the Digital India Bill, it would be prudent to keep a focus on algorithms and create capacity to allow for future regulation.

Deepro is Senior Manager at The Quantum Hub (TQH Consulting), a public policy firm in Delhi

Children, a key yet missed demographic in AI regulation

Authors: Rhydhi Gupta and Sidharth Deb

Published: September 26, 2023 in The Hindu

The Indian government is poised to host a Global Summit on Artificial Intelligence (AI) this October. Additionally, as Chair of the Global Partnership on Artificial Intelligence (GPAI), Delhi will also host the GPAI global summit this December. These events underscore the strategic importance of AI, which is projected to add $500 billion to India's economy by 2025, accounting for 10 percent of the country's target GDP.

Against this backdrop, PM Modi recently called for a global framework for the ethical expansion of AI. Given the sheer volume of data that India can generate, it has an opportunity to set a policy example for the Global South. Observers and practitioners will closely track India's approach to regulation and how it balances AI's developmental potential against its concomitant risks.

One area where India can assume leadership is how regulators address children and adolescents who are a critical – yet less understood – demographic in this context. The nature of digital services means that many cutting edge AI deployments are not designed specifically for children but are nevertheless accessed by them.

The Governance Challenge

Regulation will have to align incentives to reduce issues of addiction, mental health, and overall safety. In the absence of such alignment, data-hungry AI-based digital services can readily deploy opaque algorithms and dark patterns to exploit impressionable young people. Among other things, this can lead to tech-driven distortions of ideal physical appearance that trigger body image issues. Other malicious threats emerging from AI include misinformation, radicalisation, cyberbullying, sexual grooming, and doxxing.

The next generation of digital nagriks must also grapple with the indirect effects of their families' online activities. Enthusiastic 'sharents' regularly post photos and videos of their children online to document their journeys through parenthood. As these children move into adolescence, we must equip them with tools to manage the unintended consequences. For instance, AI-powered deepfake capabilities can be misused to target young people, with bad actors creating morphed sexually explicit depictions and distributing them online.

Beyond this, India is a melting pot of intersectional identities across gender, caste, tribal identity, religion, linguistic heritage, etc. Internationally, AI is known to transpose real-world biases and inequities into the digital world. Such issues of bias and discrimination can impact children and adolescents who belong to marginalised communities.

Alleviate the Burden on Parents

AI regulation must improve upon the approach to children under India's newly minted data protection law. The data protection framework's current approach to children is misaligned with India's digital realities. It transfers an inordinate burden onto parents to protect their children's interests and does not facilitate safe platform operations or safe platform design. Confusingly, it inverts the well-known dynamic in which a significant percentage of parents rely on their children's assistance to navigate otherwise inaccessible UI/UX interfaces online. It also bans tracking of children's data by default, which can cut them off from the benefits of personalisation that adults experience online. So how can the upcoming Digital India Act (DIA) better protect children's interests when they interact with AI?

Shift the Emphasis to Platform Design, Evidence Collection, and Better Institutions

International best practices can assist Indian regulators in identifying standards and principles that facilitate safer AI deployments. UNICEF's guidance for policymakers on AI and children identifies nine requirements for child-centred AI, drawing on the UN Convention on the Rights of the Child, to which India is a signatory. The guidance aims to create an enabling environment that promotes children's well-being, inclusion, fairness, non-discrimination, safety, transparency, explainability, and accountability.

Another key feature of successful regulation will be the ability to adapt to the varying developmental stages of children from different age groups. California’s Age Appropriate Design Code serves as an interesting template. The Californian code pushes for transparency to ensure that digital services configure default privacy settings; assess whether algorithms, data collection, or targeted advertising systems harm children; and use clear, age-appropriate language for user-facing information. Indian authorities should encourage research which collects evidence on the benefits and risks of AI for India’s children and adolescents. This should serve as a baseline to work towards an Indian Age Appropriate Design Code for AI.

Lastly, better institutions will help shift regulation away from top-down safety protocols that place undue burdens on parents. Mechanisms for regular dialogue with children will help incorporate their inputs on both the benefits and the threats they face when interacting with AI-based digital services. An institution similar to Australia's Online Safety Youth Advisory Council, which comprises people between the ages of 13 and 24, could be an interesting approach. Such institutions will help regulation become more responsive to the threats young people face when interacting with AI systems, whilst preserving the benefits they derive from digital services.

The fast evolving nature of AI means that regulation should avoid prescriptions and instead embrace standards, strong institutions, and best practices which imbue openness, trust, and accountability. As we move towards a new law to regulate harms on the internet, and look to establish our thought leadership on global AI regulation, the interest of our young citizens must be front and centre.

Rhydhi and Sidharth are, respectively, Analyst & Manager, Public Policy at The Quantum Hub (TQH Consulting)

Building Digital Ecosystems for India: From Principles to Practice

An Implementation Blue Book

Authors: Aishwarya Viswanathan, Deepro Guha and Bhavani Pasumarthi
Research Lead: Rohit Kumar

Published: 2022

Over the last decade, India has pioneered a new approach to building GovTech, one which prioritises the creation of technology 'building blocks' that multiple innovators can leverage to build citizen-centric solutions: in other words, an approach that focuses on creating open ecosystems instead of closed systems. This approach recommends the use of Free and Open Source Software (FOSS), open standards, and open APIs, and encourages interoperability. By doing so, it allows different systems to talk to each other seamlessly, empowers stakeholders, distributes the ability to solve complex societal problems, and unleashes innovation to enhance service delivery. Starting with Aadhaar, India has built a menu of such digital solutions that today includes eKYC, DigiLocker, the Unified Payments Interface (UPI), and many other sector-specific solutions.

Three interrelated concepts: NODEs, Public Digital Platforms and IndEA

The Ministry of Electronics and Information Technology (MeitY), on behalf of the Government of India (GoI), has been a key advocate and custodian of this approach, putting forth three interrelated concepts – India Digital Ecosystem Architecture (IndEA) Framework, Public Digital Platforms and National Open Digital Ecosystems (NODEs).

The India Digital Ecosystem Architecture (IndEA) Framework provides a set of architectural principles, reference models and standards to support the seamless flow of data across government departments. Leveraging these principles, India has made tremendous strides in building critical Public Digital Platforms such as Aadhaar and UPI which have also facilitated the creation of National Open Digital Ecosystems (NODEs).

All three concepts adopt architecture thinking and interoperability: the IndEA Framework at a 'whole of government' level, and Public Digital Platforms and National Open Digital Ecosystems at the sectoral or segment-specific level. They build on common tech elements and strive for one common outcome, namely a de-siloed approach to GovTech that unlocks greater economic and societal value for the citizen.

The Strategy for NODEs consultation white paper, released in early 2020, and the latest IndEA 2.0 draft framework have both generated wide public interest and engagement. It is now timely to take this approach forward by codifying the details into an implementation blue book, so that the adoption of the IndEA and NODE approaches can be mainstreamed across sectors and simplified for all government departments. This is what our research aims to do.

Detailed documents

Implementation Blue Book
Case Study – Ayushman Bharat Digital Mission