#iComment: Debating hate speech and the rise and fall of online comments sections

Over the past few years, media outlets around the world have been restricting their online comments sections, in many cases disabling or removing them entirely. They cite reasons such as a spike in hate speech or a lack of interaction compared with the response on social media.

By Ana Ribeiro and Ingrida Milkaite ~ This article was republished from the ECPMF

The European Union broadly defines “illegal hate speech” as speech that incites violent action or hatred towards certain groups. With political forces urging the moderation of hate speech, and with the volume of such speech growing, maintaining these public interaction spaces is becoming costlier for online outlets.

There is now a distinct possibility that online platforms allowing user comments could face legal action. A crackdown on what is defined as hate speech, or even otherwise offensive comments, is under way at the level of European institutions and governments. This is reflected in recent rulings and reports, as well as in senior politicians’ calls for action and threats to punish platforms that fail to remove such comments, as has been the case in Germany.

According to a report written for the Council of Europe, “hate speech is dangerous not only because it is damaging in itself, but also because it can lead to more serious human rights violations, including physical violence.”

“If unchecked, hate speech online feeds back into the offline world, inciting further racial tension and other forms of discrimination and abuse. The potential for hate to spread quickly in the virtual world increases its potential damage.”

The reactions of writers and commentators in European media have been mixed, even among those who are often targets of readers’ vitriol. At the same time, institutions scramble to find ways to keep the tide of hate speech in check, in a time of increased migration and globalisation in which blatant discrimination and radicalisation are amplified by the reach of the web.

The ECPMF looks into these and other aspects surrounding the issue in its #iComment series.

Why we should discuss this

At the centre of the issue are questions vital to a healthy democracy: How far should free speech be allowed to go? Is regulating any speech a contradiction of “free speech”, a necessary tool for social coexistence, or both? Where should we draw the line?

Negative public sentiment has been growing in Europe and elsewhere in the world. Groups on opposite sides of the political spectrum are becoming more radicalised – partly as counterattacks to each other, partly due to events of seismic social and political proportions (such as the refugee crisis, Brexit and a rash of terrorist attacks).

Meanwhile, the unprecedented degree of access and anonymity that online platforms provide makes it easier to broadcast all types of feelings and opinions. The average person is able not only to join the conversation, but also to make offensive remarks he or she would normally not utter in public or to someone’s face.

Institutions and platforms themselves worry, however, that people thus become emboldened to carry this behaviour offline and, in some cases, turn it into violence; to encourage others to do the same; and to do deep harm to victims, while hampering constructive social dialogue on the issues surrounding the targeted groups or individuals. At the same time, this arguably lends legitimacy to institutional regulation of the Internet, a long-standing point of contention.

In the first part of its #iComment series, the ECPMF focuses on recent institutional and governmental interest, decisions and attempts at regulating online hate speech in the European Union and by the Council of Europe. Other articles will show how regulation efforts and debates on hate speech and comments sections have been playing out in media from different parts of Europe.

Regulating and combating online hate speech

Deleted hate speech comments at lvz.de (screenshot: ECPMF)

The Council of Europe’s definition of hate speech covers “all forms of expression”, in other words, not only speech, but also images, videos, or any form of online activity. “Cyberhate” is therefore also hate speech.

Its Committee of Ministers’ Recommendation No. R (97) 20 defines hate speech as “covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, antisemitism or other forms of hatred based on intolerance. These include intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.”

Efforts at regulation continue and are becoming more institutionalised, as authoritative and legislative bodies seek to draw the line between what kinds of offensive speech are and are not allowed within online public spaces, and to determine how to hold portals accountable for what is posted on them.

In June 2015, and after a different earlier judgment on the case, the ECtHR decided that an online news portal brought to court for failing to remove offensive comments under its articles quickly enough should be held liable for those comments. The ECtHR’s final judgment in Delfi AS v. Estonia was that holding the portal liable for violating the plaintiff’s personality rights would not constitute, in this case, a violation of the portal’s freedom of expression.

Not all lawyers and academics agree with the court’s ruling, and the topic is bound to raise more controversy. Another decision from this summer has divided commentators in the public sphere.

In May 2016, the European Commission released a code of conduct urging IT companies to agree to “the continued development of internal procedures and staff training to guarantee that they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.” Microsoft, Facebook, Twitter and YouTube pledged their support for and compliance with the code in the announcement.

Researching the topic

According to the European Court of Human Rights (ECtHR), “freedom of expression constitutes one of the essential foundations of a democratic society, one of the basic conditions for its progress and for the development of every man. It is applicable not only to ‘information’ or ‘ideas’ that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no ‘democratic society’. This means, amongst other things that every ‘formality’, ‘condition’, ‘restriction’ or ‘penalty’ imposed in this sphere must be proportionate to the legitimate aim pursued.”

The issue has prompted empirical investigations at the institutional level, both nationally and internationally, as well as by media outlets that have to deal with a barrage of comments on a daily basis.

PRISM – “Preventing, Redressing & Inhibiting Hate Speech in New Media” – received funding from the Fundamental Rights and Citizenship Programme of the European Union to research online hate speech in France, Italy, Romania, Spain, and the UK, among other work. The 2015 PRISM report “Backgrounds, Experiences and Responses to Online Hate Speech: A Comparative Cross-Country Analysis” includes material from social media and comments sections, as well as related interviews with a cross-section of society.

Carried out by University of Barcelona faculty, it starts by acknowledging the importance that the issue of online hate speech, or “cyber hate”, has gained in the institutional and governmental arena. It mentions that both UNESCO and the ECRI published reports on the issue in 2015: the former focused on “the existing initiatives to combat online hate speech”, while the latter focused on the worrying upward trend that hate speech had shown across social media over the previous year.

The PRISM report also mentions vigorous initiatives by the French president and German Minister of Justice to get Internet companies to actively monitor and remove hate speech from their spaces.

The report argues that online behaviour has repercussions for offline behaviour as well, possibly as a foundational part of a “pyramid of hate” that reaches from the propagation of stereotypes at the bottom to the extreme of genocide at the top. The argument is that the more a bottom tier becomes normalised, the greater the chance that a higher tier will also come to be socially accepted. Overt discriminatory speech would fall into the second tier, and could lead to more structural discrimination (e.g. exclusion from education and job opportunities), which could then lead to individual- or community-based acts of physical violence, and finally to genocide.

Recommendations from civil society

The PRISM document presents different suggestions for each of the countries under the microscope. Young people in France recommend, among other things, that social media users sign a charter committing them against online hate speech and actively ignore or delete comments that would fall into that category, and that the media bring the far-right “back into society”. In Italy, the police, for instance, recommend that institutions allocate resources for them “to effectively monitor and combat hate speech” and provide training on the topic, and that EU institutions and states come up with common legal mechanisms to tackle it.

Suggestions from professionals in Romania range from promoting and protecting human rights issues such as free speech on a systemic level to a targeted exclusion of hate speech from that category: namely, legal authorities sanctioning hate speech and media outlets insisting “on quality content and quality interactions with their audiences”. In Spain, police, prosecutors, NGOs and young people all agreed that there should be “public campaigns to raise awareness among the population that online hate speech can and should be reported”. Another recommendation by young people in the country was that government institutions and representatives lead by example, refraining from making hateful or discriminatory remarks on or off social media.

Finally, in the UK, professionals and young people suggested that policy and lawmakers clearly define what hate speech means and what the boundaries are. They would also like to see social media providers as well as users being held responsible for the issue.

Keep up-to-date, speak out

Join the conversation on Twitter or Facebook, under the hashtag #iComment. Keep an eye on the ECPMF pages for the next articles in this series.

Meet Franny 

The EU’s Fundamental Rights Agency (FRA) is working with developers to make a software bot that automatically recognises and responds to online hate speech. The team that came up with “Franny, the Fundamental Rights bot” got half of all the votes and a certificate of recognition for their idea at the #RightsHack hackathon, held in June 2016 in Vienna.

At the hackathon, participants were split into four teams, each tasked with identifying a fundamental rights issue and coming up with an innovative solution for it, according to the FRA announcement.

“As the winning team explained, they realised that on Monday, World Refugee Day, people were sharing negative thoughts on social media including hashtags such as #NoRefugees and similar. But to respond individually is impossible given the huge volume of tweets. That’s where Franny comes in – it automatically recognises such negative hashtags and can respond with the relevant Charter Articles or give examples of good practices, as well as offer a virtual hug and a cup of tea.”

You can now find @bot_franny on Twitter.
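The article does not describe Franny’s internals, but the mechanism the winning team outlines – matching negative hashtags and replying with relevant Charter articles – can be sketched in a few lines. Below is a minimal illustrative sketch in Python, not the FRA’s actual implementation: the hashtag list, reply texts and function names are invented for this example, and a real bot would connect to the Twitter API and curate its lists far more carefully.

```python
import re

# Hypothetical mapping of monitored hashtags to replies. The real bot's
# hashtag lists and reply texts are not given in the article; this entry
# is invented for illustration.
NEGATIVE_HASHTAGS = {
    "#norefugees": (
        "Article 18 of the EU Charter of Fundamental Rights "
        "guarantees the right to asylum."
    ),
}

# The "virtual hug and a cup of tea" the team describes.
FRIENDLY_CLOSER = "Here's a virtual hug and a cup of tea."

HASHTAG_RE = re.compile(r"#\w+")


def build_reply(tweet_text: str) -> str | None:
    """Return a reply if the tweet contains a monitored hashtag, else None."""
    for tag in HASHTAG_RE.findall(tweet_text):
        charter_note = NEGATIVE_HASHTAGS.get(tag.lower())
        if charter_note is not None:
            return f"{charter_note} {FRIENDLY_CLOSER}"
    return None


if __name__ == "__main__":
    # Example inspired by the article: a tweet posted on World Refugee Day.
    print(build_reply("No room here! #NoRefugees"))
```

A production bot would stream tweets through Twitter’s API and rate-limit its replies; the matching step itself can stay this simple.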