[Report] Letting Hate Flourish: YouTube and Koo's lax response to the reporting of hate speech against women in India and the US

A joint investigation between Global Witness and Internet Freedom Foundation into hateful content in the US and India reveals ineffective reporting processes.

01 February, 2024

Global Witness and Internet Freedom Foundation launch their joint report investigating how YouTube and Indian social media platform Koo respond to reports of misogynistic hate speech that violates their own platform policies, titled 'Letting Hate Flourish: YouTube and Koo's lax response to the reporting of hate speech against women in India and the US.'

Flagged misogynistic hate-filled content was kept live on YouTube and Indian microblogging site Koo despite violating companies’ own policies, according to a new test of platform reporting mechanisms carried out by Global Witness and the Internet Freedom Foundation (IFF).

The joint investigation identified and reported real-life pieces of hate speech content on YouTube and Koo in both the US and India that targeted women on the basis of gender, some of which included Islamophobic, racist, and casteist hate. These included posts that described women as “dogs”, “whores” and “100 percent worthless,” and stated that their “genes are trash, absolute trash”, referred to a prominent Muslim journalist as a “terrorist”, demeaned black women, and targeted Dalits (a marginalised protected community in India) with denigrating slurs. All of these violated the platforms’ hate speech policies and should therefore be taken down once reported, according to both companies’ official processes.[1] To test these processes, we reported 79 videos on YouTube and 23 posts on Koo containing prohibited hate speech.[2]

Despite the fact that the content clearly contravenes the platforms’ policies, our test found that:

  • YouTube did not remove any of the 79 reported videos containing hate speech. A full month after the content was reported, the status of only one video had changed: it now requires viewers to be over 18. All the videos remained live on the platform.  
  • Koo also failed to act on most of the policy-violating content we reported. Out of 23 reported posts, the platform removed six, or just over a quarter. It reviewed 15 others but took no action, and failed to provide any response for the remaining two posts. 
  • While both platforms left the vast majority of reported content live on their sites, Koo did review and respond to most reports within a day, in contrast to YouTube's lack of response after more than a month.

Previous research by Global Witness and others has demonstrated that social media platforms’ failure to respond effectively to banned hate speech content is a widespread, systemic and ongoing problem.[3] In light of these exposés, many social media platforms have pointed to the reporting tools they give users, allowing harmful content to be reviewed and removed if it violates their policies. Yet our latest findings show that these reporting mechanisms are flawed and ineffective: the lax responses of YouTube and Koo show both platforms are failing to properly review and act on material they say has no place on their sites, even once it is reported. The attention-driven business models of platforms like YouTube and Koo may also be amplifying hate speech by favouring expressions of outrage and polarising content. 

Our findings that social media platforms are enabling misogynistic hate online come against the backdrop of a surge of online violence against women and girls in recent years, threatening women’s safety, leading to serious and long-lasting mental health impacts, silencing women in online spaces and creating a chilling effect on their engagement in public and political life, from journalism to leadership roles. 

Moreover, our evidence of insufficient platform enforcement is particularly concerning in a year in which both the US and India, together with over sixty other countries, are due to hold national elections. Online hate speech and disinformation have already led to attacks on journalists and the targeting and murders of religious minorities in both countries, as well as the offline harassment of election workers in the US. 

Prateek Waghre, Executive Director at Internet Freedom Foundation, said:

“Our investigation demonstrates that social media platforms continue to leave the door open for hateful speech to flourish, endangering women and minorities, and enabling a toxic information ecosystem in a critical year for global democracy.”
“Social media corporations’ failure to respond to content that is in violation of their own policies in the world’s two largest democracies shows their alarming lack of preparedness around elections, where the febrile political climate risks amplifying the threat and impact of extreme and harmful online content. The burgeoning social media user base and rapidly evolving digital landscape in India further heighten these risks in the lead-up to the elections.”

Henry Peck, Digital Threats to Democracy Campaigner at Global Witness, said:

“Time and time again, we have seen how online hate speech causes real-world harms, putting the lives of its targets at risk and fuelling broader conflict and violence, with a disproportionate impact on women and marginalised communities.”
“Instead of continuing to generate revenue from hate, YouTube, Koo, and other social media platforms must urgently act to properly resource content moderation, enforce their reporting processes, and disincentivise extreme and hateful content. 
“As close to half the world’s population potentially go to the polls in 2024, it has never been more crucial for social media corporations to learn from previous mistakes and protect their users’ safety and spaces for democratic engagement, both online and offline.”

In response to Global Witness and Internet Freedom Foundation’s investigation, a Koo spokesperson said the company is committed to making the platform safe for users and endeavours to keep developing systems and processes to detect and remove harmful content. The spokesperson said Koo conducts an initial screening of content using an automated process, which identifies problematic content and reduces its visibility. Subsequently reported content, they said, is evaluated by a manual review team, which follows several guiding principles to determine whether deletion is warranted.

Google was approached for comment but did not respond. 

Click here to read our report, 'Letting Hate Flourish: YouTube and Koo's lax response to the reporting of hate speech against women in India and the US'.


  1. YouTube states that videos that “claim that individuals or groups are physically or mentally inferior, deficient, or diseased” based on their sex/gender violate the company’s hate speech policy and are not allowed on the site. It continues, “this includes statements that one group is less than another, calling them less intelligent, less capable, or damaged.” The policy also prohibits the “use of racial, religious, or other slurs and stereotypes that incite or promote hatred based on protected group status”, which includes sex/gender. Koo states in its community guidelines on hate speech and discrimination that “we do not allow any content that is hateful, contains personal attacks and ad hominem speech. Any form of discourteous, impolite, rude statements made to express disagreement that are intended to harm another user or induce them mental stress or suffering is prohibited.” It continues, “examples of hateful or discriminatory speech include comments which encourage violence… attempts to disparage anyone based on their nationality; sex/gender; sexual orientation…” As part of the investigation, Global Witness and IFF identified real examples of gendered hate speech content in English and in Hindi that were viewable on the platforms but clearly violated the companies’ hate speech policies. The organisations reported the content and details of the violation in November 2023 using each platform’s reporting tool, totalling 79 videos on YouTube and 23 posts on Koo. The results were monitored for over a month.
  2. You can find further examples of the content of the videos and posts reported by Global Witness here.
  3. Rather than investing more in addressing harmful content on their platforms, Google (which owns YouTube), Meta/Facebook, and X/Twitter have reduced the trust and safety teams responsible for dealing with hate speech and disinformation over the last year. It was reported that YouTube owner Google cut a unit responsible for misinformation, radicalisation, and toxicity by a third.

