Policy Dialogue on Artificial Intelligence

Authors: Mahwash Fatima and Srijan Rai

Published: March 2024

Against the backdrop of AI’s swift integration across various sectors, The Quantum Hub (TQH) convened a roundtable in New Delhi on 21st February 2024, titled ‘Towards safe and trustworthy Artificial Intelligence.’

Deliberating on AI governance and regulation in India, the participants explored challenges and considerations shaping policy frameworks, with an emphasis on enabling innovation. Topics included operationalizing international principles of fairness and responsible use in the Indian context, the benefits and risks of legal regulation versus self-regulation, unintended consequences of AI regulation on security and civil liberties, prioritizing implementation efforts, and adapting existing standards. The role of sectoral regulators in ensuring AI compliance was also discussed. With regard to global regulations, the discussion stressed the importance of learning from diverse jurisdictions and adopting a multi-stakeholder approach.

The conversation on responsible AI underscored the need for AI companies to prioritize local sensitivity and awareness alongside efforts towards red-teaming AI systems for risk identification. Initiatives such as self-regulation frameworks and transparency tools were proposed to foster accountability and transparency. Furthermore, investment priorities in AI within India were discussed, and it was suggested that focus should be placed on enhancing functionality through Learning, Labeling, and Monitoring systems. The conversation also addressed algorithmic bias and the need to promote diversity in AI development.

The discussion also explored the intersection of AI with copyright and data regulations, highlighting the need for nuanced approaches. Concerns around copyright maximalism and data protection were raised, urging a balanced approach that fosters innovation while safeguarding individual rights.

Furthermore, the discourse delved into the challenges posed by misinformation and deepfakes, and their implications for electoral integrity. Strategies to raise awareness, particularly in relation to women’s online safety, and novel approaches to combat misinformation were discussed, emphasizing transparency and accountability in content creation.

The points that emerged from the roundtable discussion acknowledged the numerous opportunities and vast positive potential of AI while underscoring the imperative of responsible AI development, transparency, and collaboration to effectively address its inherent challenges and associated risks.

Read the full event report here

Inputs on the MCA Fact Checking Network Framework

Authors: Ujval Mohan, Salil Ahuja and Sidharth Deb

Published: Apr 2024

Misinformation presents a growing threat in India, with significant implications for law and order and the broader health of public discourse. The World Economic Forum (WEF)’s Global Risks Report 2024 underscores the nation’s susceptibility to misinformation and disinformation, ranking it as the country most exposed to such risks.

As policymakers discuss strategies to address the inundation of misinformation in the information ecosystem, fact-checking emerges as a powerful tool for combating falsehoods. While not a panacea, it has proven effective in debunking false beliefs, particularly among audiences less entrenched in partisan narratives. India benefits from a robust ecosystem of fact-checkers capable of operating in regional languages, flagging false narratives as they emerge in local discussions. Leveraging these resources could greatly enhance efforts to combat misinformation. However, it is crucial to establish and adhere to robust standards that uphold the highest levels of integrity in the fact-checking process.

In this regard, TQH welcomes the genesis of the Misinformation Combat Alliance (Alliance) – a collaborative cross-industry effort to combat misinformation in the Indian context. We believe that this initiative holds significant promise in driving a whole-of-ecosystem approach. Additionally, we believe that the Alliance can help uphold the integrity of the fact-checking ecosystem in India, akin to the role played globally by the International Fact Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN).

On the occasion of International Fact-Checking Day on April 2, 2024, the Misinformation Combat Alliance released its Oversight Board (Fact Checking Network Board) Charter and Code of Principles for public review.

The Quantum Hub (TQH) team has closely analyzed these documents and synthesized our insights into a submission. Through our submission, we attempt to provide inputs on how international frameworks may be adapted to better suit Indian realities, while creating the right impetus and incentives for all relevant stakeholders. We also suggest measures to increase the robustness of the evaluation process for verified signatories, and to optimize operations to facilitate smoother functioning for the MCA.

Read the submission here

Snooping isn’t a good way to ensure child safety online

Author: Aparajita Bharti

Published: 24th April 2024 in the Mint

Recently, reports emerged that India’s Ministry of Electronics and IT has been working on an app, Safenet, that links parents’ phones with those of their children so that they can monitor the online activities of their offspring. While parental controls on internet platforms typically offer options for granting app or website access and placing limits on time spent, Safenet is said to go further by sharing call details and SMS logs, apart from data on all content viewed on platforms like YouTube. The Internet Service Provider Association of India has suggested that this app should be made available by default on all personal devices. This proposal is a classic example of techno-solutionism, an attempt to use technology to solve a complex social problem.

Online safety for kids is a complicated issue, with debates over the overall impact of internet usage on children. Since this impact is highly context-dependent, policymakers present a strong argument that any ‘duty of care’ in the online environment should rest primarily with parents, just as it does in the physical environment. However, the devil always lies in the details.

First, even in the physical environment, parents do not have complete control over a child’s information ecosystem. Parents are, in fact, often surprised to discover how much their children know, because they do not control all of their children’s interactions at school, whether with peers or with the wider environment. In the digital realm, by contrast, parents can potentially get complete visibility of their child’s online interactions, but this could conflict with a teenager’s need for independence, as child psychologists point out. There is a delicate balance to be struck between parental oversight and teen autonomy.

Second, tools that allow an intimate invasion of privacy are very likely to be misused. This is particularly concerning for children who face abuse from their own families. Further, one-third of women in India experience intimate partner violence, according to the National Family Health Survey-5. Abusive partners can also use such tools to monitor and exert complete control over their victims. Identity verification, often proposed as a solution to this, is far from foolproof, given low digital literacy among women. In the gender context, another unintended impact of such tools would be parents exerting more control over the activities of adolescent girls than boys, a phenomenon observed routinely in the offline world. Prime Minister Narendra Modi, in a recent interview with Bill Gates, spoke about the immense power of technology in the hands of women. However, in a deeply patriarchal society, tools that allow such control over the information ecosystem of girls would not sync with that vision. They could widen the gap between boys and girls in access to information, on top of an already prevalent digital divide.

As is evident, over-indexing on one part of the ‘online kids safety’ puzzle leads us to newer problems. We therefore need an all-of-ecosystem approach.

First, we need to update our laws. For example, the new Indian law that replaces the IT Act, 2000 should make room for codes of practice similar to the UK’s Age Appropriate Design Code and the Aotearoa New Zealand Code of Practice for Online Safety and Harms. These codes give platforms guidance on features that make them safer for children. The newly passed Online Safety Act in the UK also requires platforms to conduct a risk assessment from the perspective of children.

Second, we need platforms to design behavioural ‘nudges’ to drive uptake of the parental controls already available. Many popular platforms have parental controls or family centres that aim to maintain a balance which lets parents know about their child’s usage patterns without granting them the power to eavesdrop. Platforms should come forward to co-design child safety codes with the government that would suit the Indian context.

Third, we need education institutions to chip in. Among parents’ key concerns are screen time and their inability to distinguish legitimate uses from unwanted activities. However, even schools (especially affluent ones) have been driving up screen time by making education more tab- or screen-based in the post-pandemic world. This perhaps requires a rethink.

Fourth, every educator and parent would acknowledge that every solution, technological or otherwise, is prone to circumvention by children. Children are creative, often more adept at using the internet, and have networks with peers that adults often know little about. Therefore, we need to invest in fostering children’s own ability to navigate the online world safely. We should focus on inculcating self-responsibility, so that kids are able to tell good apart from bad, feel free to seek support when needed, and develop mature relationships with technology where they are in control and not the other way around. For this, we should update school syllabi, introduce this as a life skill, revise civic education curriculums and also create space for discussions on tech and society.

Finally, a survey by Young Leaders for Active Citizenship revealed that 80% of parents seek guidance from their children off and on to navigate the internet. We perhaps ought to flip the entire household dynamic on its head, so that today’s up-to-date teenagers can become coaches for safer internet usage at home. After all, we know that many millennial parents themselves are hooked to Reels and may be in need of help too!

Aparajita Bharti is the co-founder of The Quantum Hub, a public policy firm, and Young Leaders for Active Citizenship (YLAC).

TQH Submission to Meta’s Oversight Board on Cases involving Explicit AI Images

Authors: Aparajita Bharti, Ujval Mohan, and Devika Oberai

Published: May 2024

On April 16, Meta’s Oversight Board announced its review of two cases involving Meta’s content moderation of AI-generated explicit images of women (#Deepfakes). Nicknamed the “Supreme Court” of Facebook at its launch, the Board invited public comments to scrutinize the initial uneven handling of cases in the US and India, aiming to ensure fair protection for women globally with the advent of GenAI.

In response, The Quantum Hub (TQH) has submitted recommendations to Meta’s Oversight Board. We recommend that Meta improve the design of reporting tools available on its platforms to allow users to give more context to their reports. Additionally, we recommend slowing the spread of content that starts to be reported by users (particularly in this category) as an interim protective measure and discontinuing its policy of automatically closing appeals within 48 hours.

Various studies indicate that a significant majority of deepfake content (over 90%) targets women with sexualized or derogatory material. Our response also considers contextual factors within India’s social norms, where online abuse and reputational harms can readily translate into real-world consequences for women. We have also drawn on insights from both our gender practice and technology practice to inform this submission.

Read the submission here

Women in #Elections2024

Authors: Akshat Sogani, Arun Sudarsan, Manas Pathak, Sohinee Thakurta, Teesta Shukla

Published: April–June 2024

The number of women electors and their share of participation have been increasing steadily in Indian elections. In the 2019 general elections, the polling percentage among women electors was 67.2%, only marginally lower than the national average of 67.4%. Political parties have also been targeting women electors with promises specifically aimed at them. Meanwhile, the Parliament of India in 2023 passed the 128th Constitutional Amendment Bill, 2023, or the Nari Shakti Vandan Adhiniyam, which reserves 33% of directly elected seats in the Lok Sabha and state legislatures for women. The reservation of seats will come into effect after the next delimitation exercise, scheduled to be held after 2026.

To shed light on the role and participation of women in the General Elections 2024, TQH is publishing a four-part series of factsheets. We analyse the number and share of “Women Electors” in Part #1, “Women Candidates” in Part #2, “Women in Manifestos,” covering key promises for women by different parties qualitatively in Part #3, and lastly, voter turnout among women and any other significant trends observed during polling in Part #4.

You can find the factsheets in this series below:

1. Women Electors

2.1 Women Candidates in Phase 1 & 2

2.2 Women Candidates in Phase 3

2.3 Women Candidates in Phase 4

2.4 Women Candidates across Phases 1-7

2.5 Women MPs of the 18th Lok Sabha

3. Women in Manifestos

4. Women Electors’ Participation in Elections 2024

How to future-proof AI regulation

Authors: Rohit Kumar & Sidharth Deb

Published: 29th March, 2024 in Economic Times

On March 15th, the Ministry of Electronics and Information Technology (“MeitY”) issued a fresh AI advisory reversing key provisions of the March 1st version. It overturned a controversial requirement for intermediaries to obtain government approval before publicly launching ‘under-tested’ or ‘unreliable’ generative AI or other AI-diffused deployments. The unclear scope and applicability of the original advisory, and the control the government was assigning itself, triggered widespread concerns about its legality and the overall prospects for AI innovation.

Both advisories reflect the government’s concern about the rushed public launch of generative AI solutions. While the new advisory’s shift from approval-seeking towards labelling represents greater balance, this episode holds structural lessons on the need to align India’s approach to AI regulation with its ambition to lead the global frontier on AI development.

Considerations for Balanced AI Regulation

Firstly, regulation should avoid one-size-fits-all prescriptions. The language in the advisories does not appropriately differentiate between various use cases and deployments, clubbing together all market participants and actors across the AI value chain. For instance, the advisories (especially the original advisory) fail to make any distinction between software, content recommendation algorithms, generative AI deployments and larger foundation models. AI’s complexity means that each layer of the value chain poses a different level of risk and consequently requires a different, targeted intervention. Classification is therefore needed to facilitate proportionate, risk-based and fit-for-purpose regulation.
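
To make the idea concrete, here is a minimal sketch, in Python, of what a risk-based classification mapping value-chain categories to obligations might look like; the categories, tiers and duties are illustrative assumptions, not a proposed legal taxonomy.

```python
# Illustrative only: a toy mapping from AI value-chain categories to risk
# tiers and graded obligations (the names and duties here are assumptions).
RISK_TIERS = {
    "general_software": ("low", ["no ex-ante obligations"]),
    "content_recommender": ("medium", ["transparency reports", "user controls"]),
    "generative_ai_deployment": ("medium", ["labelling of synthetic output"]),
    "foundation_model": ("high", ["pre-release red-teaming", "systemic-risk evaluation"]),
    "critical_infrastructure_ai": ("highest", ["prior approval or regulatory sandbox"]),
}

def obligations_for(category: str):
    """Return the (risk tier, obligations) pair for a value-chain category."""
    return RISK_TIERS.get(category, ("unclassified", ["case-by-case review"]))

print(obligations_for("foundation_model"))
# -> ('high', ['pre-release red-teaming', 'systemic-risk evaluation'])
```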

Secondly, regulation should be rooted in AI’s technical realities. For example, the advisories state that AI deployments should not permit any bias or discrimination. While well intentioned, this is inconsistent with the technical consensus that completely eliminating AI bias is nearly impossible. Such regulations make innovators risk-averse, cause widespread non-compliance, and invite the risk of arbitrary enforcement. Bias can perhaps be better tackled through standards on platform design and requirements of transparency, testing with diverse groups, human involvement, and weightage in training data.
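
As a toy illustration of the “weightage in training data” lever, the Python sketch below computes inverse-frequency sample weights so that under-represented groups carry proportionate influence during training; the dataset and group labels are hypothetical.

```python
# A minimal sketch, assuming a dataset where each example carries a group
# label: weight each example by the inverse of its group's share so every
# group contributes equally to the training loss in aggregate.
from collections import Counter

def inverse_frequency_weights(group_labels):
    counts = Counter(group_labels)
    total = len(group_labels)
    return [total / (len(counts) * counts[g]) for g in group_labels]

# Hypothetical skewed sample: four examples from group "A", one from "B".
labels = ["A", "A", "A", "A", "B"]
print(inverse_frequency_weights(labels))
# -> [0.625, 0.625, 0.625, 0.625, 2.5]; each group now sums to 2.5
```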

Thirdly, without nuance, government permissions can stifle innovation. While concerns about under-testing and unreliability are valid, approvals prior to product rollout – akin to aviation, automobiles and pharmaceuticals – may be incompatible with fast-moving digital markets and may not be required in most use cases. In sectors where such controls exist, they are usually meant to prevent immediate risks to life, health and public safety. With digital technologies, however, product safety often entails iterative processes where businesses adapt to live feedback loops from the market. For that reason, approval-based regimes or special regulatory sandbox frameworks should be reserved only for the highest-risk cases, with clear demarcations between domains that are low risk and those that pose risks to human life or public safety, e.g. the military, protected systems, critical information infrastructure and biosecurity.

Fourthly, while watermarking and labelling can be advised, we should not over-index on any one technology. The March 15th notice advises platforms to adopt watermarking technologies along the lines of open protocols developed by initiatives like the C2PA. The advisory also suggests that platforms build capabilities to identify which users or systems make changes to any piece of content. While there is merit in exploring these ideas, watermarking remains an experimental technology and is prone to circumvention.

Way Forward

To ensure the development of India’s AI ecosystem, regulation must strike a balance between erecting appropriate safeguards and preserving market agility. We must urgently commence a comprehensive discussion on AI regulation. Regulating emerging technology through advisories and amendments to India’s IT Rules is unsustainable; these proposals are often untethered from the parent IT Act and are not grounded in adequate evidence.

The first pillar of reform should prioritise inclusive regulation through public consultations which marshal the collective intelligence of government, industry and civil society. These would help produce solutions which alleviate the burden of responsibility on government’s shoulders. Consultations would also help avoid reactive directives like the original advisory which can unintentionally erode value from the market.

Reform should also entail suitable investments in setting up an independent regulator. Such a regulator should be empowered through staffing, resources and tools which facilitate evidence-based regulation. India should minimise the discretionary involvement of the political executive in the next cycle of AI regulation. Instead, an independent regulator should promote standardisation, transparency, consumer redressal and public accountability.

Next, interventions on bias, trust and safety must serve local contexts. AI regulation should facilitate international businesses in forming local partnerships to solve for localised harms arising out of discrimination and exclusion. India’s diversity lends itself to competing narratives and consequently, deep local partnerships are essential to build for its socio-cultural heterogeneity.

Finally, similar to the US Executive Order, India should attempt to develop guidelines and benchmarks for AI assurance audits. Other tools worth exploring include AI impact assessments, security incident and vulnerability reporting databases etc.

At the end of the day, AI regulation needs robust future-proofing, capable of swiftly adapting to the rapidly evolving tech landscape. A fragmented approach tied to outdated legislation won’t cut it.

Rohit is the Founding Partner and Sidharth is a Manager at The Quantum Hub (TQH) – a public policy firm.

Analysing India’s progress towards the elimination of child marriages

Authors: Suhani Pandey & Sonakshi Chaudhry

Published: 22nd February 2024 in Hindustan Times

India has made remarkable strides in reducing child marriage over the last few decades and has led global progress towards eliminating the practice. Child marriage is prohibited in India, and the Prohibition of Child Marriage Act, 2006 envisions protecting the fundamental rights and liberties of minor girls and boys. Despite this, 1 in 3 of the world’s child brides live in India, and these girls face a heightened risk of violence and poverty, along with violations of their rights to education, health, and protection. Illustrating this, UNFPA analysis of NFHS-V data suggests that 48% of girls married below 18 years of age had received no education, compared to only 4% of those who had attained higher education.

There is also a discrepancy in reported numbers: despite the high prevalence of child marriage recorded in NFHS-V (23.3%), the National Crime Records Bureau’s (NCRB) annual ‘Crime in India’ report recorded an average of only around 360 incidents of child marriage per year under the Act between 2011 and 2020. Perhaps due to pandemic-induced poverty, a sudden spike in child marriage cases was also reported both across the country and globally. Reported cases per NCRB data stood at 785 in 2020, shot up to 1,050 in 2021, and declined only slightly to 1,002 in 2022. More recently, in a written response to Parliament in August 2023, the Ministry of Women and Child Development (WCD) suggested that the increase in reported cases under the Act could also be attributed to awareness initiatives and enhanced reporting mechanisms undertaken by various states. While this may be the case, the issue remains pernicious and multilayered, and several areas need to be addressed to solve the problem, ranging from social norms and beliefs to the implementation of policies.

Implementation Gaps

Research by the Kailash Satyarthi Children’s Foundation highlights high underreporting; even among reported cases, more than 90% of child marriage cases were pending trial as of 2021, and the overall conviction rate under the Act has been found to be poor. Wide variations in state capacity and institutional support have also impeded the effective implementation of the Act. For instance, as per the WCD website, Arunachal Pradesh, Nagaland, Uttar Pradesh and Uttarakhand had, as of 2020, either not formulated or not uploaded their rules under the law, leaving crucial roles like Child Marriage Prohibition Officers (CMPOs) unappointed. In late 2021, Uttar Pradesh’s Directorate of Women Welfare circulated draft rules for public consultation, but the outcome of this exercise is unknown as the rules have still not been notified.

UNICEF data suggests that over half of India’s child brides reside in Uttar Pradesh, Bihar, West Bengal, Maharashtra and Madhya Pradesh, and civil society organizations have sought the courts’ support to ensure the effective implementation of the Act. In response to a petition filed before the Supreme Court last year, the Ministry of Women and Child Development has been asked to detail the steps taken for the effective implementation of the provisions of the Act. In Maharashtra, emphasizing the need for effective implementation of laws against child marriage, the Bombay High Court has also requested information on the appointment of Child Marriage Prohibition Officers.

Addressing the issue

To address the issue of child marriage in India, a comprehensive and cross-cutting approach is imperative. Prioritizing the fast-track trial of reported incidents under the Act, the immediate notification of rules, and the appointment of Child Marriage Prohibition Officers (CMPOs) equipped with the necessary infrastructure is crucial. However, legal measures alone can prove insufficient. Holistic socio-economic solutions must be implemented to raise awareness among girls and their families and facilitate improved access to education and support institutions, including financial networks.

In addition to socio-economic solutions, comprehensive measures will be required to streamline coordination and promote convergence among key stakeholders, including those charged with implementing the provisions of the Juvenile Justice Act (JJA), the Protection of Women from Domestic Violence Act (PWDVA), the Protection of Children from Sexual Offenses Act (POCSO), as well as police officers, district magistrates, child helpline coordinators, and shelter homes, who collectively strive to achieve the common objective of preventing child marriages.

The government can lead the way by prioritizing the swift implementation of rules under the Act and devising key policy measures for the empowerment of adolescent girls. Socio-behavioural change campaigns can go a long way in raising awareness across society, challenging social norms and ensuring better reporting of incidents.

Suhani is Analyst, Public Policy & Sonakshi is Manager, Strategic Partnerships & Communications, at The Quantum Hub (TQH) – a public policy firm.

Women in STEM – Challenges and Opportunities in India

Authors: Devika Oberai, Sayak Sinha, Srijan Rai

Published: February 2024

Women’s participation in Science, Technology, Engineering and Mathematics (STEM) is a global concern. According to a UNESCO report, only 35% of STEM students in higher education worldwide are women. In India, by contrast, the AISHE report finds that women comprise 43.2% of STEM enrolment across UG, PG, and PhD programmes. This upward trend, however, must be read with caution, as evidence suggests that even though women enter the STEM ecosystem in higher proportions than in the rest of the world, their retention faces several challenges. The “leaky pipeline” metaphor illustrates the gradual attrition of women and individuals from minority groups within STEM fields as they move from entry to employment and towards leadership.

Given that the labour market is constantly changing and evolving, especially owing to automation and Artificial Intelligence, a STEM education can enable women to keep pace with this transformation by giving them transferable skills and helping close the gender pay gap. Women in India who take up science are more likely to be employed and earn about 28% more than women who take up non-technical subjects. Moreover, the critical thinking skills developed through such programmes can prove invaluable for problem-solving.

While women’s participation in STEM is a complex puzzle, targeted interventions by key actors could help address some critical pieces of it. The government is already working on the problem: the GATI (Gender Advancement for Transforming Institutions) charter, a voluntary, signatory charter that nudges research institutions to support diversity and inclusion across 30 pilot institutions, and programmes like Vigyaan Jyoti, which encourages high-school girls through experiential learning, are promising. Some other recommendations that the brief highlights include:

  • Beginning interventions early, at the primary school level, to develop girls’ interest in STEM.
  • Promoting affirmative action and specific provisions (such as supernumerary seats at IITs) to actively make space for women in the ecosystem.
  • Mandating charters like GATI to provide flexible and safe workplaces that help retain women in STEM.
  • Providing targeted support to women re-entering the workforce through returnship programmes after a break, so that they can transition back smoothly into their roles.

Implementing these recommendations promises a more gender-inclusive STEM ecosystem, fostering economic growth and prosperity for women.

About the series

IWWAGE, an initiative of LEAD at Krea University, and The Quantum Hub have worked together to compile and present ‘Women in STEM: Challenges and Opportunities in India’ — the third policy brief in the ongoing ‘Women and Future of Work’ series. This brief explores the limitations within both the education and employment structures in India and addresses the issues that affect women’s participation in STEM.

The brief suggests that the challenges affecting women’s participation begin at the entry level when it comes to STEM subjects. These challenges are compounded at every subsequent stage, with multiple obstacles at the employment, retention, and leadership levels hindering women’s progress in these fields. The brief then maps programmes and schemes across the public and private sectors, at national and international levels and across stages of education and employment, in order to make recommendations to increase women’s participation in STEM fields.

Read the full report on Women in STEM here

Additionally, based on this work, a data story on Women in STEM in India was published in ‘Vigyan Dhara’, the newsletter from the Office of the Principal Scientific Adviser to the Government of India. This data story can be found here.

Don’t go overboard on ‘fact checking’

Authors: Ujval Mohan and Salil Ahuja

Published: 12th February, 2024 in Hindu BusinessLine

The Bombay High Court’s split verdict on the constitutionality of the Indian Government’s proposed fact check unit (FCU) exemplifies the tension between countering the threat of misinformation and involving the government in fact checks. In 2023, FCUs emerged as the favoured policy intervention, with governments in Karnataka, Tamil Nadu, and Uttarakhand each citing the need for government intervention to control misinformation.

Safeguarding the integrity of civic discourse from manipulative disinformation campaigns is paramount, especially as India enters a pivotal election season. In principle, fact-checks can effectively counter false narratives that mislead users and cause real-world harm. While social media platforms have long partnered with third-party fact-checkers to warn users of false information, the threat of ‘fake news’ has grown in scale and sophistication.

However, FCU proposals denote a novel trend, where governments seek to fact check misleading narratives. This idea of governments emerging as official arbiters of truth is the subject of widespread scepticism. As more governments pour already scarce resources into setting up their own FCUs, addressing systemic limitations becomes crucial.

Who watches the watchdog?

With easy access to generative AI technologies, information pollution is becoming more abundant, powerful, and deceptive. At the same time, a large share of ‘false information’ online is likely innocuous and often a form of satire or artistic expression.

FCUs face the daunting task of sifting through this digital haystack to handpick harmful narratives that deserve their attention. This entails identifying information emerging from suspicious/inauthentic sources while analysing trends to look for harmful content. Justice Patel, who led the Bombay HC bench, raised a concern about “how few things are immutably black-or-white, yes or no, true or false”, which could lead to an untenable system of coercive censorship of alternative views by the government.

Government actors are ultimately swayed by political incentives, which skews their outlook on narrative selection. Consequently, government FCUs may disproportionately target content critical of the government while ignoring falsehoods that support its outlook. For instance, government-run FCUs in Malaysia and Thailand conspicuously stayed away from narratives about controversial regime changes and protests. In Singapore, the minister empowered to issue directions to counter ‘fake news’ has overwhelmingly used that power to target dissenting voices.

Unsurprisingly, the FCUs proposed by both the Union Government and Tamil Nadu target misinformation only about themselves. Other FCUs are less clear about which narratives they will prioritise and how these choices will be made. In this format, FCUs will morph into tools of government counter-speech, deviating from their intended purpose of debunking the falsehoods that bear the greatest risk of harm. The Court also questioned the public interest in scrutinising claims solely about the government.

State action is often disproportionate

Each proposed Indian FCU has a different structure, but all of them are designed to either label content as misleading, facilitate takedowns, or prosecute errant social media users. Owing to inherent conflicts of interest, fact checks by the state are prone to public distrust, as well as legal challenges arising from free speech concerns.

For example, not all fake posts warrant penalties, but once content is flagged as ‘false’ by FCUs, users posting it face the real possibility of prosecution. Given instances of Indian police overriding legal safeguards to arrest users for innocuous social media content, citizens and journalists will be discouraged from speaking online for fear of FCU action. This is precisely the issue in another case before the Madras High Court, where petitioners argue that the FCU will muzzle voices critical of the state government.

Designing an effective FCU

The structural conflict of interest resulting from government intervention necessitates institutional independence and transparency in narrative selection. For example, proposed FCUs should insulate editorial decisions from government influence, regularly publish transparency reports, and decentralise fact checking functions to numerous independent fact checkers.

While the design of state-led FCUs can be improved, government efforts to counter misinformation would be far more effective if they focused on enabling partnerships between social media platforms and a vibrant ecosystem of independent third-party fact-checkers, rather than conducting fact checks themselves.

That said, even independent fact checkers need time to curate priority narratives, gather precise evidence, and fact-check claims, all before dangerous falsehoods mutate and gain traction. Therefore, fact checks alone cannot effectively counter the threat of harm from misinformation unless we slow down the spread of unverified/unsafe content. Creating ecosystem incentives that deprioritise virality in favour of trust should thus be another goal for policymakers.
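
To illustrate what deprioritising virality in favour of trust could look like mechanically, here is a hedged Python sketch of a feed-ranking score that discounts raw engagement for unverified or flagged posts; the fields and coefficients are illustrative assumptions, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float    # e.g. shares and likes per hour
    source_trust: float  # 0.0 (unknown source) to 1.0 (verified source)
    flags: int           # user reports pending fact-check

def feed_score(p: Post) -> float:
    # Engagement is dampened unless the source is trusted; each pending
    # flag further reduces reach until a fact-check resolves it.
    dampened = p.engagement * (0.3 + 0.7 * p.source_trust)
    return dampened / (1 + p.flags)

viral_unverified = Post(engagement=1000, source_trust=0.1, flags=3)
steady_verified = Post(engagement=200, source_trust=0.9, flags=0)
print(feed_score(viral_unverified))  # 92.5
print(feed_score(steady_verified))   # 186.0 -- outranks the viral post
```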

Fact checks already battle the challenges of online polarisation and the ‘backfire effect,’ where users double down on their belief in falsehoods after they are debunked. Saddling fact-checking with the limitations that come with state control can deal another blow to its efficacy.

Salil Ahuja is an Analyst and Ujval Mohan is a Senior Analyst working on technology policy issues at The Quantum Hub (TQH) – a public policy firm.

Leaving No One Behind: Did Budget 2024 fulfil its promise for Persons with Disabilities?

Authors: Nipun Malhotra & Rohit Kumar

Published: 1st February 2024

Budgets in India have often been criticised for completely ignoring the rights of Persons with Disabilities. The country often seesaws between budgets considered “populist”, where the disabled are ignored because they aren’t seen as a big enough vote bank, and budgets considered “growth oriented”, where the disabled are ignored because they are not seen as an engine of growth.

This was a major reason why we were elated at the Finance Minister’s focus on inclusive growth as a theme for this budget. Within the first five minutes, under the section Garib Kalyan, Desh ka Kalyan, she went on to say, “The schemes for empowerment of Divyangs and Transgender persons reflect firm resolve of our Government to leave no one behind”. Our hopes had been raised.

In the speech, which lasted slightly under an hour, disability would not be mentioned again. The devil, as they say, is in the details. The allocations for the Department of PwDs have remained largely the same for several years now. This year again, the government has budgeted a mere 1,225 crore for a department that is supposed to cater to the accessibility and other needs of Persons with Disabilities. This is just 0.02% of the total budget outlay, for a population that the World Health Organisation estimates at 16% of the total. In past years, the actual spend by the department has been even lower. For example, in 2022-23 (two years ago), only around 990 crore of the allocated budget of 1,212 crore was spent.

The budget for the “Scheme for implementation of Persons with Disability Act” has actually been reduced from 150 crore to 135 crore, perhaps because the revised estimate for last year (2023-24) was a mere 67 crore. This is extremely unfortunate considering the struggles faced in implementing the RPwD Act. When the Act came into force in 2017, government departments were given five years to make themselves accessible. Unfortunately, in the last year alone we have seen incidents like the one where wheelchair user and model Virali Modi had to be carried up a flight of steps to complete her wedding registration. The fact is, Virali was vocal and based in the buzzing metropolis of Mumbai. It is unlikely that such an incident would even be reported if it were to happen in the local office of a much smaller city.

Accessibility is both a stock and a flow problem. Many government departments and buildings hide behind the excuse of ‘lack of budgets’ to not make themselves accessible. The Scheme for implementation of Persons with Disability Act needs to have a dedicated budget to ensure retrofitting of solutions for accessibility in old buildings and infrastructure.

It is also sad that this year again, there has been no allocation for the “Artificial Limbs Manufacturing Corporation of India” (ALIMCO) through the budget. This is in contrast to 2019–20 when 60 crore was allocated; budgetary allocations have only been cut in subsequent years. ALIMCO is the government’s premier disability aid manufacturer. This decrease in support to ALIMCO becomes particularly disappointing because on the one hand, GST is being charged on disability aids manufactured by the private sector and on the other, investments are not being made for ALIMCO to expand at a fast enough rate to keep pace with requirements.

There is a planned investment of 80 crore in ALIMCO, but this is coming from IEBR (Internal and Extra Budgetary Resources – the resources raised by PSUs through profits, loans and equity). It is great that ALIMCO is raising these funds, but a stimulus from the government would have helped, considering India’s rapidly ageing population, which needs these disability aids as well.

We realise this was only an interim budget. However, it is unfortunate how successive Finance Ministers across governments have failed in setting a vision for the disabled community. Health insurance for what is in any case a very vulnerable community is not easily accessible, and private companies are not incentivized to develop more products for PwDs, even though courts have repeatedly pushed them to. While benefits under Ayushman Bharat can be availed, the scheme is not extended by default to people holding disability certificates and UDID cards, despite repeated demands by the community to institute this change.

Finally, beyond the budget figures, what was missing in the speech today was a roadmap to promote entrepreneurs with disabilities, a vision for assistive technology, and a stimulus for accessible infrastructure and jobs for the disabled. But more importantly, it was a stark reminder that the promise of “sabka saath, sabka vikas” is predicated on collective action. We hope this serves as a reminder to India’s disabled community that we must mobilise, vocalise and channel our energies to ensure that our interests are taken into account when successive governments lay out the year’s accounts.

Nipun Malhotra is the founder of Nipman Foundation. Rohit Kumar is the co-founder of Young Leaders for Active Citizenship (YLAC) and The Quantum Hub (TQH) – a multi sectoral public policy firm.

Why Are Women Missing in STEM Spaces?

Authors: Sona Mitra, Devika Oberai, and Sayak Sinha

Published: 19th January 2024 in Hindu BusinessLine

According to the Global Gender Gap Report 2023, women make up only 29.2% of all STEM (Science, Technology, Engineering and Mathematics) workers across 146 countries. In India, research by Muralidhar and Ananthanarayanan (2023) highlighted that across 100 Indian universities, only 16.6% of the overall STEM faculty were women. Meanwhile, the latest All India Survey on Higher Education (2020-21) reports that women make up 42.3% of enrolment in STEM education, including undergraduate, postgraduate, MPhil, and PhD courses. Within this, however, girls are concentrated in the life sciences, with women comprising only 28.7% of B.Tech enrolment. Across premier institutions such as the IITs, women constitute about 20% of enrolment.

The gaps in STEM arise from factors that operate early in girls’ education. Social conditioning, arising from existing norms and perceptions about the roles of girls and women in society, often shapes the choices that girls make while enrolling in higher education. This conditioning continues within school systems, where curricula and pedagogical practices can undermine the self-esteem and confidence of girls.

In order to address these gaps at the entry level, interventions that inspire younger girls to engage meaningfully with STEM need to take shape early. Programmes like Vigyaan Jyoti, implemented by the Department of Science and Technology and comprising activities such as counselling and role-model interactions, currently target high-school students. Similar interventions implemented at an earlier stage, targeting younger students, could prove useful. Similarly, fortifying Foundational Literacy and Numeracy outcomes through increased financial investment, gender-responsive teacher-training modules and robust assessment and monitoring frameworks can all contribute to improved higher-education outcomes for girls.

The other major challenge is the retention of women within the STEM ecosystem. Data from the Key Global Workforce Insights Report (2015) suggest that even when women choose STEM careers, 45% report challenges in upward mobility and as many as 81% believe there is gender bias in internal evaluation processes. Further, the government’s labour force survey for 2020-21 suggests a gender pay gap, with men earning 35% more than women across all sectors, which discourages women from staying in the labour force. Research by professors at IIT Kanpur found that women scientists in lab-based occupations face isolation in male-dominated labs, which often manifests in a lack of support from colleagues and lost networking opportunities that hinder upward mobility. Such trends also end up undervaluing women’s research and findings within labs.

Promoting women and retaining them through targeted interventions by key actors therefore becomes critical. At an institutional level, policies that afford flexibility of time, comprehensive child-care provisions, and supportive infrastructure are crucial in creating an environment conducive to the sustained participation of women. Addressing the gender pay gap in STEM also holds potential to incentivize women to persist in STEM careers. The Department of Science and Technology has introduced the GATI (Gender Advancement for Transforming Institutions) charter, a voluntary, signatory charter that nudges research institutions to support diversity and inclusion. The charter encourages gender-agnostic hiring, maternity leave, non-discriminatory appraisals, etc., and has shown promising results across 30 pilot institutions such as IIT Delhi, the University of Delhi, and Jamia Millia Islamia. Making these charters mandatory rather than voluntary thus has the potential to retain more women in prestigious institutions.

Facilitating re-entry is essential for retaining women in the field. Returnship programmes, adopted by a few companies, have demonstrated promise in facilitating the reintegration of women into workplaces after career breaks, thereby allowing them to resume their professional trajectories. Mahindra’s ‘Back to Mahindra’ initiative is specifically designed to help former women employees transition back to work. The Federal Bank recently introduced the ‘Maternity Work Buddy’ initiative, offering support to expectant mothers by providing updates on the workplace during their maternity leave. The HCL-Tech Returnship and Microsoft’s Leap Program offer short-term professional engagements to those out of the workforce.

While increasing women’s participation in STEM is a challenge layered with several dimensions, it can be meaningfully addressed through the kinds of targeted interventions discussed above. The state has an important role: as the most powerful actor, it can not only raise awareness and improve its own initiatives but also activate private sector participation to enable the entry and re-entry of women and girls into STEM education and occupations.

Sona and Sayak are at the Initiative for What Works to Advance Gender Equality (IWWAGE), and Devika is an Associate at The Quantum Hub (TQH).

Tackling Deepfakes Requires All Hands on Deck

Authors: Rohit Kumar and Mahwash Fatima

Published: 8th January 2024 in the Hindustan Times

What would your elderly father’s response be if he received an emergency video message from you requesting a large sum of money? With rapid advances in artificial intelligence, normal human reactions to such situations can easily be exploited through the creation of deepfakes.

Deepfakes are undoubtedly one of the biggest threats our society is likely to face in 2024. No wonder the Union government has taken up this issue on priority. It has already sent an advisory to social media intermediaries asking them to strengthen their systems for detecting and taking down deepfakes. News reports also suggest that the Ministry of Electronics and IT is considering fresh amendments to the Information Technology (IT) Rules to include specific obligations for intermediaries to contain the deepfake menace.

Deepfake content made its first notable appearance in 2017, when a Reddit user named ‘deepfakes’ posted fake videos of celebrities. Over the years, with the development of the underlying technology, these videos have become increasingly realistic and deceptive. Between 2019 and 2020, the amount of deepfake content online increased by over 900%, with some forecasts predicting that as much as 90% of online content may be synthetically generated by 2026.

The most worrying societal harm from the rise of misinformation and deepfakes is the erosion of trust in our information ecosystem. Not knowing who or what to believe can do unimaginable damage to how humans interact and engage with each other. A recent empirical study has in fact shown that the mere existence of deepfakes feeds distrust in any kind of information, whether true or false.

In India, while no legislation specifically governs deepfakes, existing laws such as the IT Act and the Indian Penal Code already criminalise online impersonation, malicious use of communication devices, the publishing of obscene material, etc. Social media platforms are also obligated under the IT Rules to take down misinformation and impersonating content; failure to do so means risking the loss of their ‘safe harbour’ protection and becoming liable for the harm that ensues.

Unfortunately, while these legal provisions already exist, it is challenging to execute what the law demands. First, identifying deepfakes is a massive technical challenge. Currently available options – AI-powered detection and watermarking/labelling techniques – are inconsistent and inaccurate. Notably, OpenAI pulled its own AI detection tool in July 2023 due to ‘low accuracy’.

Second, technologies that are used to create deepfakes have positive use-cases too. For instance, these same technologies can be used to augment accessibility tools for persons with disabilities, deployed in the entertainment industry for more realistic special effects, and even used in the education sector. Essentially, not every piece of digitally edited content is harmful. This further complicates the job of content moderation.

Third, the volume of content uploaded every second makes meaningful human oversight difficult. Unfortunately, by the time problematic content is detected, it has often already spread.

Policymakers around the world are struggling to find a good solution to the problem. The US and the EU seem to have taken some initial steps, but their efficacy remains untested. In the US, President Biden signed an executive order in October 2023 to address AI risks. Under this order, the Department of Commerce is creating standards for labelling AI-generated content. Separately, states like California and Texas have passed laws criminalising the dissemination of deepfake videos that influence elections, while Virginia penalises the distribution of non-consensual deepfake pornography. In Europe, the Artificial Intelligence Act will categorise AI systems into unacceptable, high, limited, and low risk tiers. Notably, AI systems that generate or manipulate image, audio or video content (i.e. deepfakes) will be subject to transparency obligations.

Technologists are also working on ways to accurately trace the origins of synthetic media. One of these attempts by the Coalition for Content Provenance and Authenticity (C2PA) aims to cryptographically link each piece of media with its origin and editing history. However, the challenge with C2PA’s approach lies in widespread adoption of these standards by devices and editing tools, without which unlabelled AI-generated content will continue to deceive.
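
For intuition, the Python sketch below mimics this chaining idea: each edit appends a manifest that commits to the previous manifest, the media bytes and the action taken, so tampering anywhere breaks verification. This is a simplification for illustration (using an HMAC as a stand-in for certificate-based signatures), not the actual C2PA specification.

```python
import hashlib, hmac

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def sign(payload: bytes) -> str:
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def append_manifest(chain, media_bytes: bytes, action: str):
    # Each manifest commits to the previous manifest's signature,
    # the current media bytes, and the action performed.
    prev = chain[-1]["signature"] if chain else ""
    payload = prev.encode() + hashlib.sha256(media_bytes).digest() + action.encode()
    chain.append({"action": action, "signature": sign(payload)})

def verify(chain, history):
    # history: the claimed list of (media_bytes, action) steps, in order.
    prev = ""
    for entry, (media_bytes, action) in zip(chain, history):
        payload = prev.encode() + hashlib.sha256(media_bytes).digest() + action.encode()
        if entry["signature"] != sign(payload):
            return False
        prev = entry["signature"]
    return True

chain = []
append_manifest(chain, b"original-photo", "captured")
append_manifest(chain, b"cropped-photo", "cropped")
print(verify(chain, [(b"original-photo", "captured"), (b"cropped-photo", "cropped")]))    # True
print(verify(chain, [(b"ai-swapped-photo", "captured"), (b"cropped-photo", "cropped")]))  # False
```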

Therefore, while watermarking and labelling may help, what we need urgently is a focused attempt to reduce the circulation of deepfake content. Slowing down the circulation of flagged content until its veracity is confirmed can be crucial in preventing real-world harm. This is where intermediaries such as social media platforms can perhaps be required to step in more strongly. If an uploaded piece of content is detected to be AI-modified or is flagged by users, platforms should mark such content for review before allowing unchecked distribution.
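
A minimal sketch of such a review gate, with the detection signal, flag threshold and actions all being illustrative assumptions, might look like this:

```python
def distribution_decision(ai_modified: bool, user_flags: int,
                          flag_threshold: int = 5) -> str:
    """Hold content for review if it is detected as AI-modified or has
    accumulated enough user flags; otherwise distribute normally."""
    if ai_modified or user_flags >= flag_threshold:
        return "hold_for_review"  # limit reach until veracity is confirmed
    return "distribute"

print(distribution_decision(ai_modified=True, user_flags=0))   # hold_for_review
print(distribution_decision(ai_modified=False, user_flags=2))  # distribute
```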

Finally, there is no substitute for building resilience among audiences. Fostering media literacy to help people of all ages better understand the threat of misinformation, and to make them more conscious consumers of information, is the need of the hour.

Navigating a new digital era where ‘seeing is no longer believing’ is undoubtedly challenging. We need a multi-pronged regulatory approach that nudges all ecosystem actors to not only prevent and detect deepfake content, but also to engage with it more wisely. Anything less is unlikely to preserve our trust in the digital world.


Rohit is Founding Partner and Mahwash a Senior Analyst at The Quantum Hub (TQH), a public policy firm.