Over-reliance on parents for consent may curtail internet access for teenagers. The discussion on what is a good age-verification mechanism has been missing from the discourse.
Authors: Aparajita Bharti & Nikhil Iyer
Published: July 02, 2022 in The Economic Times
Imagine a 16-year-old boy getting his first smartphone in a tier-3 city. He has attended school online for two years of the pandemic. He helps his parents download and use new apps. His primary means of shopping is online and he orders for the family.
Contrast this with his 70-year-old grandmother, also a new smartphone user. Like many women of her age, she has had limited formal education and is learning to use messaging and social media apps to keep in touch with her family. Who is likely to be more vulnerable on the internet? And is age, then, a good indicator of a person’s ability to make decisions when it comes to their privacy and safety online?
This is a point of contention for policymakers across the world. Currently, as the Personal Data Protection Bill 2019 stands, any child below 18 years has to effectively obtain consent from their parent(s) or guardian(s) in all cases of their data being processed on the internet. Further, there is a blanket ban on profiling based on children’s data. If this provision remains unchanged, India will be an outlier globally.
In Britain and the US, for instance, parental consent is needed for those below 13, while in China this threshold is 14. In the EU, the threshold age is 16, with an option for member-states to reduce it to 13. At the other end of the spectrum is Australia. Its Privacy Act, 1988, mentions no age of consent. Instead, consent is valid if the individual has the ‘capacity to consent’. Entities handling individuals’ personal information must decide on a case-by-case basis whether that capacity exists, and obtain parental consent where they see fit.
In comparison, the high threshold of 18 years in India is out of touch with reality, and can seriously hamper Indian teenagers from fully experiencing the digital age. Nearly one-third of all internet users in the country were under 18 as of 2020. This number is likely to have increased in the Covid context.
Over-reliance on parents for consent may curtail access for teenagers due to various reasons, including parents’ lack of exposure, gender bias and unhealthy relationships. Further, the discussion on what is a good age-verification mechanism has been missing from the discourse, even as privacy experts concur that it should not itself lead to collection of more personal data and IDs.
To resolve this, policymakers could turn to the Convention on the Rights of the Child (CRC), 1989. It exhorts states – their legislative, executive and judicial arms – to act in the ‘best interests of the child’ in all matters pertaining to the realisation of their socioeconomic and political rights. India has upheld the principles of CRC in various legislations, such as the Commissions for Protection of Child Rights Act, 2005, the Right of Children to Free and Compulsory Education Act, 2009, and the Protection of Children from Sexual Offences Act, 2012. This approach should also be applied to children’s data protection and privacy.
Britain’s Age-Appropriate Design Code (AADC), in force from September 2021, presents a model. AADC entrusts entities handling children’s data with a positive obligation to give primacy to the interests of the child. It lays down 15 standards, instead of strict dos and don’ts, directing entities to implement ‘age-appropriate’ design. This design should rest on principles of data minimisation, purpose limitation, transparency, avoidance of nudge techniques, default settings that safeguard children’s privacy, and so on.
Virtually all entities providing online products or services – apps, programs, websites, connected toys – are covered. AADC acknowledges that the ‘best interests of the child’ may differ across platforms, depending on each platform’s use-case. For example, risks on a gaming platform may differ from those on a video-streaming platform. The code, therefore, encourages platforms to consider their impact on children and build in mitigation strategies.
For example, while evaluating whether and how to process children’s data, entities must consider risks such as physical harm, mental health issues, excessive screen time, exposure to inappropriate content, etc. AADC also gives guidance on different age-verification mechanisms – including self-declaration, artificial intelligence (AI) that assesses usage patterns, third-party verification, and hard identification through government-issued IDs – which can be applied in proportion to the risks children face on each platform.
While India has a unique socioeconomic context, there are useful lessons in such models. In place of a blanket imposition, the data protection law must make room for a principles-based approach that allows both regulation and innovation to deal with online risks to children. Entrusting all responsibility to adults can prove ineffective, given well-documented consent fatigue and the lack of understanding among adults themselves.
Instead, regulation must make way for honest conversations among developers, regulators and parents on what constitutes the ‘best interests’ of children, and how best these can be enabled on each platform while balancing children’s security and agency on the internet.