Children, a key yet missed demographic in AI regulation

Authors: Rhydhi Gupta and Sidharth Deb

Published: September 26, 2023 in The Hindu

The Indian Government is poised to host a Global Summit on Artificial Intelligence (AI) this October. Additionally, as Chair of the Global Partnership on Artificial Intelligence (GPAI), Delhi will also host the GPAI global summit this December. These events underscore the strategic importance of AI, which is projected to add $500 billion to India’s economy by 2025, accounting for 10 percent of the country’s target GDP.

Against this backdrop, PM Modi recently called for a global framework on the ethical expansion of AI. Given the sheer volume of data that India generates, it has an opportunity to set a policy example for the Global South. Observers and practitioners will closely track India’s approach to regulation and how it balances AI’s developmental potential against its concomitant risks.

One area where India can assume leadership is how regulators address children and adolescents, a critical yet less understood demographic in this context. The nature of digital services means that many cutting-edge AI deployments are not designed specifically for children but are nevertheless accessed by them.

The Governance Challenge

Regulation will have to align incentives so as to curb addiction, protect mental health, and ensure overall safety. In the absence of such alignment, data-hungry AI-based digital services can readily deploy opaque algorithms and dark patterns to exploit impressionable young people. Among other things, this can lead to technology-driven distortions of ideal physical appearance, which can trigger body-image issues. Other malicious threats emerging from AI include misinformation, radicalisation, cyberbullying, sexual grooming, and doxxing.

The next generation of digital nagriks (citizens) must also grapple with the indirect effects of their families’ online activities. Enthusiastic ‘sharents’ regularly post photos and videos of their children online to document their journeys through parenthood. As these children move into adolescence, we must equip them with tools to manage the unintended consequences of this digital footprint. For instance, AI-powered deepfake capabilities can be misused to target young people, with bad actors creating morphed, sexually explicit depictions and distributing them online.

Beyond this, India is a melting pot of intersectional identities spanning gender, caste, tribal identity, religion, and linguistic heritage. Internationally, AI is known to transpose real-world biases and inequities into the digital world. Such bias and discrimination can disproportionately affect children and adolescents who belong to marginalised communities.

Alleviate the Burden on Parents

AI regulation must improve upon the approach to children under India’s newly minted data protection law. That framework’s current treatment of children is misaligned with India’s digital realities. It places an inordinate burden on parents to protect their children’s interests and does not facilitate safe platform operations or platform design. Confusingly, it inverts the well-known dynamic in which a significant percentage of parents rely on their children’s assistance to navigate otherwise inaccessible online interfaces. It also bans tracking of children’s data by default, which can cut them off from the benefits of the personalisation we experience online. So how can the upcoming Digital India Act (DIA) better protect children’s interests when they interact with AI?

Shift the Emphasis to Platform Design, Evidence Collection, and Better Institutions

International best practices can assist Indian regulators in identifying standards and principles that facilitate safer AI deployments. UNICEF’s guidance for policymakers on AI and children identifies nine requirements for child-centred AI, drawing on the UN Convention on the Rights of the Child, to which India is a signatory. The guidance aims to create an enabling environment that promotes children’s well-being, inclusion, fairness, non-discrimination, safety, transparency, explainability, and accountability.

Another key feature of successful regulation will be the ability to adapt to the varying developmental stages of children across age groups. California’s Age-Appropriate Design Code serves as an interesting template. The Californian code pushes for transparency by requiring digital services to configure high default privacy settings; assess whether their algorithms, data collection, or targeted advertising systems harm children; and use clear, age-appropriate language in user-facing information. Indian authorities should encourage research that collects evidence on the benefits and risks of AI for India’s children and adolescents. This should serve as a baseline for working towards an Indian Age-Appropriate Design Code for AI.

Lastly, better institutions will help shift regulation away from top-down safety protocols that place undue burdens on parents. Mechanisms for regular dialogue with children will help incorporate their inputs on both the benefits and the threats they encounter when interacting with AI-based digital services. An institution similar to Australia’s Online Safety Youth Advisory Council, which comprises people aged 13 to 24, could be an interesting approach. Such institutions will make regulation more responsive to the threats young people face when interacting with AI systems, whilst preserving the benefits they derive from digital services.

The fast-evolving nature of AI means that regulation should avoid rigid prescriptions and instead embrace standards, strong institutions, and best practices that embed openness, trust, and accountability. As we move towards a new law to regulate harms on the internet and look to establish our thought leadership on global AI regulation, the interests of our young citizens must be front and centre.

Rhydhi Gupta and Sidharth Deb are, respectively, Analyst and Manager, Public Policy, at The Quantum Hub (TQH Consulting).