
Reclaiming Social Media

Putting hate speech on mute: An Indian project helps social media users filter out abuse to feel safe online

In response to high levels of online abuse and social media platforms’ bias towards English, a new browser plugin for Indian Twitter users offers them the opportunity to engage safely in their native languages.

This article is part of DW Akademie’s Reclaiming Social Media project, which aims to highlight how media outlets and journalists in the Global South develop innovative initiatives to enhance online discussions of public interest. Following the research phase, the project’s researchers and journalists discussed recommendations for various stakeholders on how to improve constructive public dialogue on social media. To gain inspiration from additional case studies and participate in the discussions, explore the Reclaiming Social Media dossier. 

“How do I explain to someone from outside India that being called sweety online amounts to harassment?” wonders 26-year-old Kirti Agarwal, a video editor at the India-based fact-checking news organization Boomlive, when speaking to DW Akademie.

“Arey sweety, aap uth gayi kya?” (Hey sweety, did you wake up already?) was one of the many comments that she came across under selfies posted by other female Twitter users. This, according to her, amounts to cyberstalking, where the troll wants to know what time the user wakes up and whether she is at home. But when she tried to report such comments to Twitter, her efforts were unsuccessful.

Shivani Yadav, a 24-year-old Delhi-based freelance journalist and translator, likewise had no success reporting similar offensive content.

“I reported posts on Twitter and Instagram many times, but received the response that it was not considered offensive, even though the post was very clearly a case of hate speech or a gender/caste-based attack on someone,” she explained in an interview with DW Akademie.  

While browsing social media, both Shivani and Kirti saw a serious problem: existing content moderation on social media platforms is biased towards English and is not effective in a linguistically diverse country such as India. What’s more, exposure to online violence and hateful speech can discourage female social media users from participating.

According to Amnesty International, the UN Special Rapporteur on Violence against Women has warned that online violence and abuse “can lead women and girls to limit their participation and sometimes withdraw completely from online platforms.” In response, Shivani and Kirti decided to take action: They helped develop a tool called Uli.

 

Denormalizing everyday online abuse 

Uli is a free browser plugin. Once installed, it automatically obscures offensive content on Twitter. Users can then enjoy a hate-free timeline.  

Co-developed by the Centre for Internet and Society (CIS) and Tattle Civic Technologies in 2021, Uli is a pilot project funded by the Omidyar Network India and Mozilla’s Digital Society Challenge. For Tattle co-founder Tarunima Prabhakar, Uli is an attempt to “de-normalize everyday violence experienced online”, as she states in an interview with DW Akademie.  

Still in its development stage, Uli is an open source tool whose methodology, annotation guidelines, code, and dataset limitations are stored on the software development service GitHub. This encourages developers, researchers, and activists to provide feedback for improvement, and also allows users to add their local languages. At present, Uli is available for Twitter in Hindi, Tamil, and English. It has been downloaded 80 times since its launch in July 2022.

“Twitter was the most open of all platforms, in that with a login, you could view the highest number of feeds and replies. Collecting data from the platform for research was easier,” explained Tarunima in an interview with DW Akademie. 

 

Marginalized voices affected 

In 2020, Tattle identified two overlapping challenges to social media users in India: First, online abuse in India regularly targets women and trans people, and second, platforms such as Twitter and Facebook fail to moderate such abuse in regional languages.  

A 2020 Amnesty International report revealed that 13.8% of tweets sent to 95 female politicians during the 2019 Indian general elections were abusive, and included sexual, sexist, Islamophobic, violent, and caste-based slurs. An IT for Change study published in 2022 noted 30,460 hateful online comments directed at women between November 26 and December 3, 2020, in English, Hindi, Bengali, Marathi, Punjabi, Gujarati, Tamil, and Urdu. 

To combat such phenomena, the developers initiated Uli, which means “chisel” in Tamil, aiming to “hand the chisel and power over to users most affected by online gender-based violence.” Another goal is to ensure that users do not retreat from online spaces or experience fatigue. Tarunima told DW Akademie that the levels of online abuse result in “reduced input and participation from marginalized voices on issues that affect them the most.”  

 

How does Uli work? 

First, Uli can archive offensive tweets. Once installed, a camera icon appears over an offensive tweet, enabling users to take a screenshot of it. This screenshot can then be stored or emailed as evidence to be shared with the platform and law enforcement.
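For readers curious about the mechanics, the sketch below shows roughly how such an evidence-capture button could be wired into a browser extension. It is not Uli’s actual implementation (which is open source on GitHub); the element selector, message name, permissions, and file name are assumptions made for illustration.

```typescript
// Illustrative sketch only (not Uli's implementation). Assumes a Chrome-style
// MV3 extension with the "<all_urls>" and "downloads" permissions and the
// @types/chrome typings installed.

// content-script.ts: add a capture button to every tweet (<article>) on the page.
document.querySelectorAll<HTMLElement>("article").forEach((tweet) => {
  if (tweet.dataset.captureButtonAdded) return; // don't add the button twice
  tweet.dataset.captureButtonAdded = "true";

  const button = document.createElement("button");
  button.textContent = "📷 Save as evidence";
  button.addEventListener("click", () => {
    // Ask the extension's background service worker to take the screenshot.
    chrome.runtime.sendMessage({ type: "capture-evidence" });
  });
  tweet.appendChild(button);
});

// background.ts (service worker): capture the visible tab and save the image
// locally, so it can later be shared with the platform or law enforcement.
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "capture-evidence") {
    chrome.tabs.captureVisibleTab({ format: "png" }, (dataUrl) => {
      chrome.downloads.download({ url: dataUrl, filename: "tweet-evidence.png" });
    });
  }
});
```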

“I really like the feature that saves posts in case the user wants to file a police report. These things give the user a lot of agency, and in a country like India, this agency is very valuable,” said Shivani Yadav, who helped develop Uli’s dataset in Hindi.

Second, the plugin uses a machine-learning feature that automatically hides slurs in real time for Twitter users in three languages: Hindi, Tamil, and English. Slurs are detected using a crowdsourced database of over 500 offensive terms. Tattle co-founder Tarunima told DW Akademie that this list was developed over a year-long pilot phase, during which she and her team carried out focus group discussions with 30 activists and journalists from various parts of India. Six participants speaking different languages were selected to identify tweets with gender-based violent content over a period of five months. Additionally, input from the Hatebase project was used to crowdsource slurs in English.
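As an illustration of the filtering step, a browser-extension content script might redact matching tweets roughly as sketched below. Again, this is not Uli’s code: it substitutes a simple word-list lookup for Uli’s combination of a crowdsourced database and machine-learning detection, and the selector, placeholder terms, and styling are assumptions.

```typescript
// Illustrative sketch only (not Uli's code). Uli pairs a crowdsourced database
// of 500+ terms with machine-learning detection; this sketch uses a simple
// substring match against a placeholder list to show the basic idea.

// Placeholder terms standing in for the crowdsourced slur database.
const SLUR_LIST: string[] = ["example-slur-1", "example-slur-2"];

function containsSlur(text: string): boolean {
  const lowered = text.toLowerCase();
  return SLUR_LIST.some((slur) => lowered.includes(slur));
}

function redact(tweetText: HTMLElement): void {
  // Blur the tweet text; a deliberate click reveals it, keeping the choice with the user.
  tweetText.style.filter = "blur(8px)";
  tweetText.title = "Hidden by your filter. Click to reveal.";
  tweetText.addEventListener("click", () => (tweetText.style.filter = ""), { once: true });
}

function scan(root: ParentNode): void {
  // Twitter currently marks tweet text with data-testid="tweetText" (subject to change).
  root.querySelectorAll<HTMLElement>('[data-testid="tweetText"]').forEach((node) => {
    if (node.dataset.checkedForSlurs) return; // check each tweet only once
    node.dataset.checkedForSlurs = "true";
    if (containsSlur(node.innerText)) {
      redact(node);
    }
  });
}

// Re-scan whenever the infinite-scroll timeline adds new tweets.
new MutationObserver(() => scan(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
scan(document);
```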

Debarati, a gender activist at the Mumbai-based NGO Point of View, was thrilled that the plugin offered users like her autonomy and choice over what content to filter on Twitter.

“Often they (women, girls, queer and trans persons) say they’d like more control over the content they’re exposed to online, as it has an impact on their mental and emotional well-being,” she shared with DW Akademie. Yet as Uli is still in development, it didn’t filter out all offensive comments during the trial run.  

Debarati continued: “I’ve flagged this problem with Tattle and when it’s fixed, I’d like to explore the extension myself and possibly use it in our work with communities of girls, women, queer and trans persons.”  

For Afrah, however, who also works at Point of View, the plugin has been working well for over four months now. “Lately, Twitter has been showing me tweets from accounts I don’t follow. Given my nature of work, I see content that I don’t want to read and the plugin has helped me hide a few of them,” she said.

 

Next steps for Uli 

Tattle’s two-year roadmap involves expanding the plugin to other platforms, crowdsourcing slurs in more Indian languages, and building the capacity to filter multi-media content online.  

For Tattle, news organizations are potential stakeholders, but they are not currently a priority. This is because Tattle has struggled to generate income through news outlets that could fund the project. Tattle is currently philanthropy-driven and supported by grants, but intends to collaborate with news organizations and journalists in the future to encourage use of the plugin.

Tarunima told DW Akademie that journalists found the redact element – which hides or blurs offensive content – very useful, protecting them from everyday abuse and enabling them to engage with their community better.  

The mental health impacts of online hate speech are real. Shivani recalls that halfway through the Uli trial, one annotator had to drop out because she was disturbed by the offensive content. Yet what bothered Shivani the most was how desensitized she had become to online hate speech.

Given that social media platforms fail to adequately moderate online content, there is a risk that users start accepting offensive behavior. Women and transgender people have repeatedly stressed that freely expressing themselves on Twitter is not worth the risk of violence and abuse. 

Therefore, Shivani hopes that “filtering out offensive words will do wonders for protecting users’ mental health, considering how conservative India is.”  

Tools such as Uli aim to ensure that everyone can freely participate online, without fear of violence and abuse. This is vital to ensure that women and marginalized groups can effectively exercise their right to freedom of expression, and participate in constructive public dialogue. 

 

Insights and lessons learned: 

- Creating an online environment where women and marginalized users are not confronted with hate speech can make them feel safer and encourage them to participate in dialogues

- Language localization offers users more agency in multilingual societies 

- News organizations and social media platforms currently lack strong incentives to ensure constructive online conversations
