#DoTheRightsThing: Meta’s HRIA for India

Gearing up for the 2024 general elections, we bring you #DoTheRightsThing. This series aims to assess the reporting on human rights impact undertaken by significant platforms, beginning with Meta's 2023 annual Human Rights Report.

14 October, 2023

tl;dr

As authoritarian threats to user rights on social media platforms increase, the accountability of these platforms to their users must be upheld. A comprehensive, transparent, and accurate assessment of the impact on users' human rights is one way in which platforms can do so. In light of the upcoming 2024 general elections, this series assesses the human rights impact reporting undertaken and released by significant platforms, with a specific focus on the Indian context. The first post in this series covers Meta and its second annual Human Rights Report (“2023 Report”). We summarise the updates on India and reiterate the need to release a complete human rights impact assessment. We also highlight recent reporting on Meta’s inaction against violative content, driven by its inability to resist political pressure, and stress the need for transparent disclosures by platforms.

Background

Meta’s annual Human Rights Report reflects its progress on the commitments made in its Corporate Human Rights Policy, which is based on the United Nations Guiding Principles on Business and Human Rights (UNGPs).

Meta released its first annual Human Rights Report in July 2022 (“2022 Report”), which covered the calendar years 2020 and 2021. The 2022 Report included a brief summary of the independent human rights impact assessment (HRIA), commissioned by Meta in 2019, of potential human rights risks in India related to its platforms. The assessment was initiated following the publication of several reports by civil society groups in 2019 criticising Facebook’s content policy rules and content moderation processes in India.

In stark contrast to the existing documentation of country-specific HRIAs at the time, Meta’s publicly available summary of its HRIA for India in 2022 was barely four pages long. The summary seemed perfunctory and did not reflect the inputs provided by the several civil society organisations that participated in the assessment. Furthermore, Meta failed to publish the full HRIA report, which raised concerns around transparency and accountability.

Ratik Asokan of India Civil Watch International, one of the organisations that provided inputs during the HRIA, expressed deep disappointment with Facebook’s decision not to release the full assessment.

Further, the report gave a rather glowing review of Meta’s performance in India, stating that it had “committed” to “platform safety and integrity in India” in addition to “protecting respect for its users and to protecting user safety”. Read our analysis of the 2022 Report and the HRIA summary for India here.

India due diligence updates as per the 2023 Report

The 2023 Report covers the calendar year 2022. A section of the report includes updates on the action Meta has taken on recommendations from previous human rights due diligence regarding the Philippines, Israel and Palestine, and India. As per the updates on India, Meta reportedly refined its approach to civil society engagement and outreach, made progress on expanding partnerships with Indian civil society (including on digital literacy, women and child safety, and countering extremism), and adapted the principles of the Rabat Plan of Action into active content policy tools for assessing potential hate speech. With respect to transparency reporting on action taken against violative content, Meta pointed to its monthly India transparency reports, which cover multiple policy areas across Facebook, Instagram, and WhatsApp. These reports include information on action taken, grievances received from users in India, and orders received from the recently formed Grievance Appellate Committees (“GAC”) constituted under the IT Amendment Rules, 2022. Meta also admitted to seeking ways to refine its systems for gathering metrics on content removed as a result of government requests, and for public reporting.

In the 2023 Report, Meta stated that its experience in ensuring election integrity informs its preparations for the upcoming state and general elections. Its reported efforts include activating its elections operations centre, ensuring support from content reviewers in 20 Indian languages, expanding its independent fact-checking partners from 7 to 11 (now covering 15 Indian languages), enforcing political ads transparency, and working closely with both electoral authorities and civil society.

Notably, this is in stark contrast to international reporting on the drastic downsizing of Meta’s Trust and Safety teams, which were focused on combating online misinformation. The mass layoffs are happening despite the upcoming 2024 US elections, and mark a shift from the approach adopted after the 2016 US elections, when Meta expanded its efforts to police disinformation in light of extensive reporting on disinformation campaigns and political manipulation on the platform.

Tech accountability in the world’s largest democracy

India has an estimated ~310 million Facebook users, ~230 million Instagram users, ~450 million YouTube users, ~20 million Twitter users, and ~170 million Snap users. Given the country’s wide digital presence, its national character is debated and forged online. Rapidly disseminated information can contribute to narrative-building in ways that indirectly influence perceptions and beliefs. Though latent, such narrative-setting may have significant implications for electoral outcomes. Concerns around, and allegations of, the government's proximity to positions of power across social media platforms become weighty considerations in the run-up to the 2024 elections. Adequate reporting exists to affirm that social media platforms have recently yielded to the government's authoritarian tendencies and turned a blind eye to inauthentic or unfair online electoral campaigns (see here, here, here, here, here, here, and here).

As per a recent Washington Post story, Facebook’s Coordinated Inauthentic Behaviour unit was reportedly unable to take action against inauthentic accounts that were in clear violation of its policies, owing to fear of significant pressure or backlash from the incumbent government and its associated outfits. This story reaffirms that, in order to avoid enforcement action by the executive, Facebook has been unable to resist political pressure and has failed to act against violative content where doing so may be unpalatable to the executive. Sufficient reporting exists, not just for Meta’s platforms but for others as well, highlighting their problematic moderation practices (see here, here, here, here, here, here). Thus, there is adequate reason for all significant social media platforms to undertake and release a complete human rights impact assessment (HRIA) for India.

Looming concerns 

Incomplete metrics: Meta’s chosen metrics of proactive monitoring in its monthly transparency reports paint an incomplete picture. According to the reports, Facebook and Instagram adopt the metrics of (i) ‘content actioned’, which measures the number of pieces of content (such as posts, photos, videos, or comments) that they take action on for going against their standards and guidelines; and (ii) ‘proactive rate’, which refers to the percentage of ‘content actioned’ that they detected proactively, before any user reported it. These metrics only account for content on which action was taken, and exclude content (which may otherwise be an area of concern) on which action was not taken, even if it had been reported. Read more about this here.

In January, February, and March 2023, for example, Meta disclosed that it took action on around 72.7 million pieces of spam content on Facebook (20 million, 20.7 million, and 32 million per month respectively). Over 99% of this content was proactively actioned against. This percentage only accounts for the portion of spam content that Facebook proactively identified. The metric does not reflect the total amount of spam content reported by users, or even the total amount of spam flagged, offering no insight into what percentage of reports Facebook believes do not warrant enforcement action. The same incompleteness extends to other content types, such as hate speech and violent and graphic content. This is particularly concerning because, as reported, the metric does not convey what percentage of reports Facebook rejects, or conversely, whether any content is incorrectly classified and wrongly actioned.
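To make the gap concrete, here is a minimal sketch, using entirely hypothetical figures, of how the two disclosed metrics are computed, and of the denominator that never appears in the reports: the total number of user reports received.

```python
# Hypothetical figures for one policy area in one month. The first two numbers
# mirror what Meta actually discloses; the third is what the reports omit.
content_actioned = 20_000_000        # pieces of content acted on
proactively_detected = 19_900_000    # actioned content found before any user report
user_reports_received = 5_000_000    # total user reports (NOT disclosed; assumed here)

# The disclosed 'proactive rate' is a share of *actioned* content only.
proactive_rate = proactively_detected / content_actioned
print(f"Proactive rate: {proactive_rate:.1%}")  # ~99.5%, as in the reports

# Content actioned because of user reports is just the remainder.
reports_actioned = content_actioned - proactively_detected

# Without the total reports received, this rejection rate cannot be derived
# from the published metrics alone, however high the proactive rate looks.
rejection_rate = 1 - reports_actioned / user_reports_received
print(f"Share of user reports with no action: {rejection_rate:.1%}")
```

Under these assumed numbers, a 99.5% proactive rate coexists with 98% of user reports receiving no action, which is precisely the kind of information the current disclosures cannot reveal.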

Threats to digital electoral conduct: Meta’s decision not to release the entire HRIA for India erodes user trust. In light of the upcoming state and general elections, Meta should publish a complete, unredacted assessment of the rights affected by the misuse of digital platforms in the lead-up to the elections. Such an assessment would address concerns around the government's proximity to Meta officials, which are rooted in the differential enforcement and special treatment afforded to some users. Further, in the report, Meta should make public the efforts undertaken to uphold and defend the integrity of the upcoming elections. An assessment should also be conducted after the elections, and its findings published, to evaluate whether these targets were met.

Inadequate disclosure in transparency reports: As per the notified IT Amendment Rules, 2022, intermediaries are required to publish on their websites reports reflecting their compliance with Grievance Appellate Committee (GAC) orders. The GAC is an executive-appointed body that hears appeals against the decisions of social media platforms to take action against content. Given its composition, it runs the risk of becoming the arbiter of permissible speech on the internet in India, thus incentivising online platforms to remove, suppress, or label any speech unpalatable to the government or to those exerting political pressure. Read more about the secrecy around the GAC’s establishment, functioning, and processes, as well as the lack of transparency in its reporting mechanism, here.

Meta, in its latest monthly transparency report, revealed that it received and complied with 8 orders from the GAC. However, it failed to publish any further information regarding these orders, such as the original decision taken by the intermediary’s grievance redressal officer, the grounds for raising an appeal, and the reasons for the GAC’s final decision. Disclosure of such information, while protecting users' personal information, would serve the public interest: it would reveal a great deal about the intermediaries' and the GAC’s redress mechanisms and facilitate research around them.

The European Union recently launched the Digital Services Act (DSA) Transparency Database, which makes the content moderation decisions (“statements of reasons”) submitted by online platforms operating in the EU publicly accessible. The database reveals the grounds for each decision, the facts relied on in taking it, the category of violation, the involvement of automated means in decision-making, and more. In contrast, the disclosures made by social media platforms with respect to India are severely opaque, creating additional hurdles for users and content creators in ascertaining the legitimacy of government action and being able to challenge it.
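For comparison, here is a minimal sketch of the kind of analysis the DSA database enables. It assumes a daily CSV dump has already been downloaded from the database's website; the file name is hypothetical, and the column names follow the published statement-of-reasons schema but should be treated as assumptions here.

```python
import csv
from collections import Counter

categories = Counter()
automated = Counter()

# Hypothetical file name for one daily dump of statements of reasons.
with open("sor-global-2023-10-01-full.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Each row is one statement of reasons for a moderation decision.
        categories[row["category"]] += 1            # category of violation
        automated[row["automated_detection"]] += 1  # whether detection was automated

print("Decisions by violation category:")
for category, count in categories.most_common(10):
    print(f"  {category}: {count}")

print("Automated detection breakdown:", dict(automated))
```

No equivalent analysis is possible for India today, because neither the platforms' monthly reports nor the GAC's processes disclose decision-level information.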

The way forward for platform governance

Meta must conduct and release a public, unredacted, and complete Human Rights Impact Assessment for India without delay. We believe this must be done proactively by all significant social media platforms, and must become a recurring annual practice in addition to an assessment prior to general and state elections. Platforms should also work in collaboration with policy and technology researchers and civil society organisations to improve transparency standards in India. Our previous recommendations for social media platforms to improve these reporting practices can be found here.

Important Documents

  1. Meta Human Rights Report - Insights and Action 2022, released in 2023 (link)
  2. Meta Human Rights Report - Insights and Action 2020-2021, released in July 2022 (link)
  3. IFF’s post on Meta’s 1st Annual HR Report on India (link)
