Children can use platforms with parental consent, but platforms cannot track child users, making safety measures hard to apply, even as the law's exemptions could have unintended adverse effects.
Author: Nikhil Iyer
Published: August 24, 2023 in Livemint
After nearly a decade of debate over a data protection law for India, a need given urgency by the Supreme Court's landmark 2017 judgement on the fundamental right to privacy, the Indian Parliament has finally passed the Digital Personal Data Protection Bill, 2023. This is the third version of the Bill; previous drafts were circulated in 2019 and 2022. In each draft, however, the law's approach to protecting children's privacy has remained hazy.
To recap, the new law sets the age of consent for using online services at 18 years. For users younger than 18, the online platform has to obtain "verifiable parental consent," failing which it can incur massive penalties of up to ₹200 crore. While a child can use an internet platform once a parent provides consent, the platform is completely prohibited from tracking and monitoring the behaviour of child users, irrespective of the purpose for which such data processing would be conducted.
This is where the approach becomes problematic. How can online platforms prevent a child from being exposed to harmful, risky or illegal content, interactions and experiences without tracking or monitoring their behaviour? How are they expected to take precautionary measures, such as alerting parents or law enforcement agencies, if a child is being drawn towards self-harm, bullying, harassment, hate speech or other dangers? While other jurisdictions have chosen to place significant responsibility on platforms for keeping children safe, the Indian law takes a diametrically opposite approach.
The law treats "verifiable parental consent" as a cure-all. This is in a country where less than 40% of Indians are digitally literate, as per the National Sample Survey's 78th Round (2020-21) data, and where children could plausibly game the system by using their parents' phones or email IDs to provide consent without their knowledge. The mere fact of parental consent is presumed to take care of any harm or risk that may befall children after they begin using the platform.
To add to this, the law is willing to exempt certain platforms certified as "verifiably safe" from parental consent requirements, allowing them to process the data of children above a certain age (16 years) without such consent. As per an interview with the minister of state for IT, this certification could be reserved for platforms that ensure "100% KYC" through identity proofs such as government ID cards. The exemption may be available to specific entities such as "education, skilling, some vocational music websites where children are learning music and they [i.e., platforms] take all kinds of precautions" and "certainly not social media."
This exemption carrot is riddled with issues as well. One, it is prima facie in conflict with the data minimization principle, under which platforms should collect only the data necessary for specific purposes; it is unclear how collecting parents' IDs will help keep children safe while they use the platform. Two, by laying down a white-listing process in which platforms must apply for "verifiably safe" certification, the law will increase bureaucratic entanglement in a dynamic digital economy. It is unclear whether entities will have to apply to a government authority for every incremental change by which they seek to create more value for children using their products or services.
Further, the rationale for singling out certain categories of platforms is not immediately clear. Today, the lines between platforms' purposes are blurred. For instance, YouTube is perhaps the world's biggest ed-tech platform, with its invaluable and democratized repository of knowledge on everything from exam preparation to art and music lessons and personal development; yet the government may categorize it as a social media or streaming platform. Through this certification, the law would end up discriminating among entities that may offer equally strong protections while processing children's data, but are either shut out at the door or have not applied for "verifiably safe" certification for other reasons. It may also reduce the incentive of online platforms to innovate for children, as they may want to avoid an over-regulated market segment. In all these situations, India's young netizens would be at a disadvantage vis-à-vis their global peers.
An alternative approach could have been to uphold "best interests of the child" obligations under the internationally recognized standard of the UN Convention on the Rights of the Child, to which India is a signatory. In practice, platform design would have to uphold this standard in terms of default settings, nudges, location tracking, the publication of regular risk self-assessments, prescriptions against detrimental uses of data and so on. Beyond this, the government could blacklist any platform found violating the rules and stop it from processing children's data. This would push all platforms to adhere to high data protection standards based on associated risk, while children would be able to access the internet freely in line with their varying levels of maturity.
How to protect children online is a global debate with no easy answers. However, the approach India has taken to this issue suits neither the country's realities nor the challenges of cyberspace.
Nikhil Iyer is Senior Analyst, Public Policy at The Quantum Hub.