Author: Deepro Guha
Published: October 5, 2023, in The Economic Times.
Meta recently made a groundbreaking announcement for its European users, offering them the option to opt out of its recommendation algorithms. This move signals a potentially pivotal shift in how social media services are offered in Europe and was necessitated by the implementation of the Digital Services Act (DSA) in the EU, which mandates algorithmic transparency from digital intermediaries. In this article, I delve deeper into the concept of algorithmic transparency and explore other avenues of algorithmic regulation.
Ubiquity of algorithms
But let’s start with a simple question: Have you ever found yourself endlessly scrolling through social media, wondering why you can’t seem to stop? The answer likely lies in the algorithm that powers your social media feed. These algorithms have a remarkable ability to curate content that keeps you hooked on the platform. Not only do algorithms decide the content shown on social media feeds, they also influence consumer choice by controlling suggestions on e-commerce websites, and are even used by governments to process data for the provision of citizens’ benefits. In essence, algorithms, which are fundamentally instructions governing how specific sets of information are treated, have become potent tools for shaping society.
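To make the idea concrete, here is a minimal, stylised sketch of how an engagement-driven feed ranker might work. The signals, weights and field names are hypothetical illustrations, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click_prob: float   # model's estimate that the user will click
    predicted_watch_time: float   # expected seconds of attention
    recency_hours: float          # hours since the post was published

def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement, decayed by age.

    The weights are illustrative; real platforms tune hundreds of such
    signals against business metrics like time spent on the app.
    """
    freshness = 1.0 / (1.0 + post.recency_hours)
    return (0.6 * post.predicted_click_prob
            + 0.4 * (post.predicted_watch_time / 60.0)) * freshness

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: this is what keeps users scrolling.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in such a ranker optimises for what is good for the user; it optimises for what holds attention, which is precisely why regulators have taken an interest.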
However, these powerful tools also create a host of complex issues that need careful consideration and perhaps even regulation. First, the algorithms employed by digital intermediaries are often so complex that they are inscrutable to the average person, and sometimes even to regulators. This creates a stark information asymmetry problem. Moreover, certain algorithms, such as those used to train generative AI, are adaptive, offering little control over the models they create, even to their own creators. The problems such models can create were highlighted in the recent episode of Microsoft’s Bing AI chatbot professing love to a New York Times journalist and attempting to convince him to leave his wife. In response, Microsoft admitted that it may not know the exact reason behind the chatbot’s erratic behaviour.
Second, there is a constant risk of bias creeping into algorithmic decision-making, especially when algorithms are used for targeting or identifying specific individuals. If left unaddressed, this can exacerbate socioeconomic inequalities. For instance, Meta recently settled with U.S. authorities in a case where its algorithms displayed bias against certain communities when showing housing ads for specific localities.
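One way auditors probe for such bias is to compare outcome rates across groups. The sketch below computes a simple disparate-impact ratio against the best-served group; the group labels, numbers and the 0.8 “four-fifths” threshold are illustrative conventions, not details from the Meta settlement.

```python
from collections import Counter

def disparate_impact(decisions: list[tuple[str, bool]],
                     threshold: float = 0.8) -> dict:
    """Compare the rate at which each group receives a favourable outcome
    (e.g. being shown a housing ad) against the best-served group.

    `decisions` holds (group_label, outcome) pairs. A ratio below
    `threshold` (0.8 is the common "four-fifths rule") flags possible bias.
    """
    shown = Counter(group for group, ok in decisions if ok)
    total = Counter(group for group, _ in decisions)
    rates = {group: shown[group] / total[group] for group in total}
    best = max(rates.values())
    return {group: {"rate": rate, "flagged": rate / best < threshold}
            for group, rate in rates.items()}

# Illustrative audit: group B sees the ads far less often than group A.
audit = ([("A", True)] * 80 + [("A", False)] * 20
         + [("B", True)] * 40 + [("B", False)] * 60)
print(disparate_impact(audit))
# {'A': {'rate': 0.8, 'flagged': False}, 'B': {'rate': 0.4, 'flagged': True}}
```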
Third, when bias-related problems emerge, there should ideally be a human point of contact for grievance redressal. However, many companies that employ algorithms offer limited recourse in such instances. For example, recent reports shed light on how Instagram’s algorithms often flag influencers’ content as “violating community guidelines”, limiting their ability to monetise it, without offering a robust grievance redressal mechanism or even an explanation of which specific guideline was violated.
Global movement towards algorithmic regulation
As these issues gain global attention, there is a growing movement towards preparing for a future regime of algorithmic regulation. In the United Kingdom, digital regulators have outlined a vision document for the future of algorithmic regulation. The European Union has established the European Centre for Algorithmic Transparency (ECAT). Even in India, the earlier draft of the Data Protection Bill (2022) proposed algorithmic transparency in the treatment of personal data.
Challenges in mandating transparency
However, while the need to regulate algorithmic decision-making is urgent, the effectiveness of mandating algorithmic transparency remains questionable. First, there are proprietary concerns. Companies may be hesitant to share such information because these algorithms often form the foundation of their business, as Google argued when its own shareholders asked for more information about its algorithms. Second, as Microsoft argued before the European Parliament, knowing how an algorithm is coded can be useless without knowledge of the data fed into it. This was also highlighted in Twitter’s recent move to make its source code public, with experts pointing out that while the source code reveals the underlying logic of Twitter’s algorithmic system, it tells us almost nothing about how the system will perform in real time.
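Microsoft’s point is easy to demonstrate. The toy recommender below is fully “transparent”, its source is a few lines, yet what it actually recommends is determined entirely by the (made-up) interaction data it is fitted to.

```python
from collections import Counter

def train_recommender(interactions: list[str]) -> Counter:
    """'Training' here is simply counting which topics a user engaged with.
    The code is fully open, but its behaviour depends wholly on the data."""
    return Counter(interactions)

def recommend(model: Counter) -> str:
    # Recommend the single most-engaged topic.
    return model.most_common(1)[0][0]

# Identical code, different data, different outcomes:
print(recommend(train_recommender(["politics", "politics", "sports"])))  # politics
print(recommend(train_recommender(["cooking", "cooking", "politics"])))  # cooking
```

Publishing the function tells a regulator nothing about whether a given user will be pushed towards politics or cooking; only the data decides that.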
Alternative approaches
Given these challenges with mandating algorithmic transparency, experts have suggested alternative solutions that could alleviate the problems with algorithmic decision-making. For instance, stakeholders can collaborate to create algorithmic standards aimed at mitigating the adverse consequences of algorithmic decision-making. ALGO-CARE, a standard created in the UK, sets out a model of algorithmic accountability in predictive policing; it prescribes safeguards such as supplementing the algorithm with other decision-making mechanisms and creating additional oversight to identify bias.
Additionally, there is a growing movement toward mandating algorithmic choice. This could involve companies offering users a choice of which algorithms are used to provide services (similar to Meta’s move in Europe). Alternatively, third-party algorithm services could give users more control over the information they receive: consumers could, for instance, select services that adjust their e-commerce search results to favour domestic production, or refine their Instagram feed to focus only on specific topics of interest.
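What might such choice look like in practice? Here is a minimal sketch, assuming a platform that exposes interchangeable ranking strategies behind a common interface; the strategy names, metadata fields and topic filter are hypothetical.

```python
from typing import Callable

# A ranking strategy takes a list of items (dicts of metadata)
# and returns them reordered and/or filtered.
Strategy = Callable[[list[dict]], list[dict]]

def engagement_ranked(items: list[dict]) -> list[dict]:
    return sorted(items, key=lambda it: it["predicted_engagement"], reverse=True)

def chronological(items: list[dict]) -> list[dict]:
    return sorted(items, key=lambda it: it["posted_at"], reverse=True)

def topic_filter(topics: set[str]) -> Strategy:
    # A hypothetical third-party strategy: chosen topics only, newest first.
    def strategy(items: list[dict]) -> list[dict]:
        return chronological([it for it in items if it["topic"] in topics])
    return strategy

STRATEGIES: dict[str, Strategy] = {
    "engagement": engagement_ranked,    # the platform default
    "chronological": chronological,     # broadly the non-profiled option Meta now offers in Europe
    "my_topics": topic_filter({"cricket", "cooking"}),  # user-configured
}

def build_feed(items: list[dict], user_choice: str) -> list[dict]:
    return STRATEGIES[user_choice](items)
```

The point is structural: once ranking logic sits behind a common interface, a regulator can require that users be offered more than one implementation, including implementations the platform itself did not write.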
While these interventions may create their own complications and will require substantial capacity building, they are undoubtedly worth exploring. As the Indian government works on the Digital India Bill, it would therefore be prudent to keep algorithms in focus and build the capacity needed for future regulation.
—
Deepro Guha is a Senior Manager at The Quantum Hub (TQH Consulting), a public policy firm in Delhi.