Online hate speech is like a window into a society's soul, says Nanjira Sambuli from Kenya's iHub. And monitoring hate speech, rather than rushing to remove it, can help discover how best to combat it.
Speech is a powerful mechanism of communication: it can rally people to action, inspire, challenge and admonish. These capacities, among others, are why freedom of speech is a fundamental human right.
But while speech can be a tool for great good, it can also cause great harm. History is replete with instances in which hate speech – that is, speech used to propagate hatred or to create an environment in which hatred, and even violence, is directed at people based on their race, ethnicity, national origin, religion, sexual orientation or disability – has been a catalyst for violent action against fellow human beings.
Therein lies the power of speech. And this makes hate speech an enormous societal challenge.
Hate speech moving online
Digital communication technologies are multiplying the avenues through which people can exercise their freedom of speech and expression. New media are diversifying the audiences engaging in online communication.
Since these online spaces are a new medium for the dissemination of hate speech, their influence on the audience's actions merits analytical observation.
One possible and emerging result is a vicious cycle in which audiences convene around hateful content, converse within a self-selected group, and form new prejudices or reinforce existing ones, aided by the hateful beliefs of others.
However, a virtuous cycle is also possible. New media spaces can act to neutralize the negative impacts of offline hate speech. (It is important to keep in mind that a hateful comment about an individual does not necessarily constitute hate speech, unless it targets the individual as part of a group.)
Hate speech, especially in this interconnected and globalized era, is a crucial policy issue. How do we identify it? How do we address and mitigate it while protecting and upholding freedom of expression?
Violence prompts research
In Kenya, we have seen hateful speech catalyze an environment in which violence is either condoned or carried out. Events such as elections or terror attacks have triggered surges in hate speech, especially online. In such contexts, hate speech is an emotive and highly subjective reaction.
At iHub Research we observed that in post-independence Kenya, hate speech has fueled violent reactions to election outcomes. In the worst such event in the country, violence flared up after Kenya's 2007 election, resulting in the deaths of about 1,200 people. At the time, there was much anecdotal evidence of hate rhetoric appearing on digital platforms.
iHub set out to investigate the use of Internet platforms to disseminate such messages. In order to effectively address hate speech, however, we needed a way of assessing and understanding its characteristics. And since popular Internet spaces, such as social media, facilitate the right of reply or response, we also needed a means to assess how various actors responded to hate speech.
These considerations resulted in the Umati project. Launched in 2012, the project aimed to monitor and collect examples of online hate speech in six languages, including Kiswahili and English.
Initially, we had six monitors who manually checked blogs written in vernacular languages as well as blogs in English, Facebook pages and groups, Twitter timelines, online newspapers and video streams of the major media houses in Kenya.
Now, iHub has built a collection of tools that have enabled us to partly automate the collection of hate speech on Twitter, Facebook, blogs and forums. (You can read more about the tools we use here and read about the path to developing them here.)
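To give a sense of how such semi-automated collection can work in principle, here is a minimal sketch in Python of keyword-based candidate flagging. The lexicon terms, the `Post` fields and the `flag_candidates` helper are hypothetical illustrations, not the actual Umati tooling: keyword matching only surfaces candidates, which human monitors then assess in context.

```python
# A minimal sketch of keyword-based candidate collection. NOT the actual
# Umati pipeline: the lexicon, fields and helper names are hypothetical.
import re
from dataclasses import dataclass

# Hypothetical lexicon of terms that warrant human review. A real
# deployment would maintain per-language lexicons curated by monitors.
LEXICON = {"vermin", "invaders", "exterminate"}

@dataclass
class Post:
    source: str   # e.g. "twitter", "facebook", "blog"
    author: str
    text: str

def flag_candidates(posts):
    """Yield (post, matched_term) for posts containing lexicon terms.

    Keyword matching only surfaces *candidates*; whether a post is
    hate speech depends on context and is decided by human monitors.
    """
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, sorted(LEXICON))) + r")\b",
        re.IGNORECASE,
    )
    for post in posts:
        match = pattern.search(post.text)
        if match:
            yield post, match.group(0)

if __name__ == "__main__":
    sample = [
        Post("twitter", "@user1", "They are vermin and must go."),
        Post("blog", "anon", "Great match last night!"),
    ]
    for post, term in flag_candidates(sample):
        print(f"[{post.source}] flagged (term: {term!r}): {post.text}")
```

The key design choice in any such pipeline is that automation narrows the stream, while the judgment about whether a flagged post actually constitutes hate speech stays with human reviewers.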
Insights into online hate and rage
In the course of our research, we found monitoring to be a crucial way of informing effective interventions to counter online hate speech. (You can download a PDF of our report here.)
We learned that online hate speech is a symptom of a much more complex issue. It often has its origins in offline socialization, perceptions and prejudices formed before people ever interact online.
An important lesson from our research, therefore, is that online conversations offer insight into the conversations and convictions people have offline. And analysis of these online conversations offers a way to better understand which issues are recurring and are important to address.
At iHub, we have also monitored how "netizens" react to inflammatory online speech and observed the emerging phenomenon of self-regulation of the online space.
Over time, we have come to see the value of monitoring not just the hate speech itself, but also the responses to it – especially those that push back.
This broader approach has helped us to better understand the self-regulation mechanisms employed by online communities. They include ridiculing a speaker or narrative that attempts to inflame hate or spread misinformation, flooding online spaces with positive counter-messages that defuse the tensions arising from hateful messages, and using humor and satire to "hijack" inflammatory narratives.
We have realized that observations of dangerous speech online should be put into the context of other online speech, as such incidents rarely happen in isolation.
We strongly encourage others to use a similar monitoring approach to address the challenge of online hate speech.
Rush to remove hate speech risks losing crucial knowledge
In Germany, for example, the government has enlisted the assistance and compliance of social media companies to take down content flagged as hate speech. While that is one possible measure, I wonder how much insight is lost by removing the content without assessments by other actors, such as those working on freedom of expression and Internet freedom.
As noted above, online hate speech is a window into people's belief systems and into conversations that take place offline. How do we design effective conflict-resolution mechanisms for the digital era without studying the tensions that surface online? How do we provide safe spaces to address grievances without understanding what those grievances might be?
With most nations facing the Herculean task of fostering a cohesive society of people from diverse backgrounds, I believe that, beyond taking down hateful content, civil society actors, researchers and practitioners should monitor online hate speech to explore non-punitive, citizen-centered approaches for reducing it in the long term.
iHub Research has made available the methodology and lessons learned from the Umati online hate speech monitoring project.
We have also released on GitHub the source code for the tools we use to monitor and collect hate speech – these range from Twitter and Facebook collectors, to tools to better label and categorize the data collected.
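As a rough illustration of what a labeling step can look like, here is a minimal Python sketch of a human-in-the-loop annotation pass over a CSV of collected posts. The category names, CSV columns and `annotate` helper are assumptions made for illustration; the actual annotation schemes and tools are in the GitHub repositories mentioned above.

```python
# A minimal sketch of human labeling of collected posts. The categories
# and CSV columns are hypothetical, not Umati's actual annotation scheme.
import csv
import sys

# Hypothetical severity categories an annotator could assign.
CATEGORIES = (
    "not_hate_speech",
    "offensive",
    "moderately_dangerous",
    "extremely_dangerous",
)

def annotate(in_path, out_path):
    """Prompt a human annotator to assign a category to each post."""
    with open(in_path, newline="", encoding="utf-8") as fin, \
         open(out_path, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)  # expects at least a 'text' column
        writer = csv.DictWriter(
            fout, fieldnames=list(reader.fieldnames) + ["label"]
        )
        writer.writeheader()
        for row in reader:
            print("\n" + row["text"])
            for i, cat in enumerate(CATEGORIES):
                print(f"  {i}: {cat}")
            choice = int(input("Category number: "))
            row["label"] = CATEGORIES[choice]
            writer.writerow(row)

if __name__ == "__main__":
    # Usage: python annotate.py collected.csv labeled.csv
    annotate(sys.argv[1], sys.argv[2])
```

Keeping the labels in the same file as the collected posts makes it straightforward to analyze, over time, which categories spike around events such as elections.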
We have also supported similar projects in Nigeria, Myanmar and South Sudan. We are keen to help others, including those in Western societies, interested in effectively addressing online hate speech.
You can reach us via email: umati[at]ihub[dot]co[dot]ke.
Nanjira Sambuli is the Research Lead at iHub Nairobi, where she provides strategic guidance to grow technology research in the East Africa region and supports the team in surfacing information useful for the emerging technology ecosystem in Africa. You can follow Nanjira on Twitter.