Twittering #Extremist Propaganda: how to legitimately silence the voices of hate

By: Marco Reguzzoni

Twitter and Facebook, the major social network services (SNS), are usually used for so-called "life streaming". SNS users primarily post status updates, photos, and videos – and/or share this content with online friends – in order to maintain an appealing version of their persona online. Furthermore, in recent years SNS have also acquired a crucial role in the development of public discourse. Thanks to SNS, the 'average citizen' is now able to raise their voice and mobilize social movements, such as the Occupy Movement. At the same time, political figures have discovered SNS as a powerful tool to spread their ideas among prospective voters.

However, SNS do not always work as free speech engines.

Twitter constitutes the favorite (and most successful) instrument of propaganda used by the terrorist organization Daesh to recruit new affiliates, while in the "Western World" leaders of political parties close to fascist ideologies use SNS as megaphones for incitement to religious hatred. This leads to a vicious circle whose ultimate effect is the "weaponization" of SNS.

From a legal standpoint, the question that arises is therefore which content moderation policy provides adequate legal remedies against the hateful propaganda that flourishes in the SNS context.

The Italian Court of Cassation on SNS as a tool of hateful propaganda

With ruling no. 47489 of 1 December 2015, the Italian Supreme Court established that new means of expression, including SNS, must be counted among "the other means of propaganda" through which criminal offences are committed publicly pursuant to art. 266 of the Italian Criminal Code.

This principle was expressed in a case involving an Italian Muslim citizen who published on two different websites a document entitled "The Islamic State, facts I would like to communicate".

According to the Court of Turin, this conduct promoted Daesh's unlawful terrorist-related content on Italian territory. The person was accordingly charged with the criminal offence of "apologia for terrorism" pursuant to art. 414 of the Italian Criminal Code.

Before the Court of Cassation, the person convicted at first instance contested the existence of the material element of publicity required by the criminal offence at hand, arguing that he lacked the will to reach a large and indefinite number of people through the online dissemination of the contested document.

The Court of Cassation rejected the appeal, finding the will to spread the document indiscriminately beyond question. Access to the websites was, in fact, free and unfiltered. Moreover, the dissemination of Daesh propaganda was undoubtedly intentional, since the document published on the websites pleaded for "help to expand (this work) to other brothers or sisters".

Revealingly, the Supreme Court recognized that a website has the same indefinite potential to spread content as the press. Yet the new online means of expression, such as SNS, blogs, and online forums, were declared outside the scope of the legal regime reserved for the press by Law no. 47 of 8 February 1948. At the Italian level, the question of how to regulate the emerging role of SNS in shaping public opinion therefore remains open. In this respect, the EU legal framework becomes relevant.

The EU legal provisions to guarantee content moderation on SNS

SNS fall under a wide variety of EU legislation, the first of which that comes to mind is data protection law. There is, however, no specific set of rules designed to moderate the content displayed in the SNS context.

Yet, however unsophisticated the mechanism, it must be admitted that the regime set forth by the e-Commerce Directive does, in practice, force SNS to police end-users' illegal content.

According to art. 14 of e-Commerce Directive 2000/31, hosting service providers are liable for illegal content stored at the request of users if they have actual knowledge of that content and fail to act expeditiously to remove it. Third parties trigger this removal obligation by serving a notice on hosting providers such as SNS, under the threat of subsequent litigation.

The major SNS have implemented dedicated web pages and forms for reporting illegal content, in order to facilitate this process. However, SNS content removal instigated through this so-called notice and takedown procedure does not always correspond to a legal duty. SNS may decide to remove content flagged in an end-user notice even though that notice contains insufficient information on the alleged violation of law, or may comply with a legally meritless claim because the flagged content breaches their Terms of Use. As a result, due to the non-formalized nature of the notice and takedown process, it is not always easy to distinguish between content removal instigated by third-party notices and content moderation interventions by the SNS themselves.

This demonstrates that SNS are not the neutral carriers of information they are commonly believed to be: they are capable of influencing the flow of content posted by end-users regardless of the application of any legal provision.

The surveillance role of SNS

The rules on content removal are usually established in the Terms of Use (ToU) which govern the contractual relationship between end-users and the SNS.

As for Facebook, removal is triggered by any content that violates the law and/or falls under the categories prohibited by Facebook's policy. Interestingly, these additional prohibitions also include any use of Facebook "to do anything unlawful, misleading, malicious or discriminatory". The grounds for removal thus extend far beyond the requirements of the law. While child pornography or copyright infringement is relatively easy to detect, what constitutes, for instance, a discriminatory post is highly subjective, and could trigger the removal of a broad range of content.

In comparison to Facebook, Twitter's powers of intervention are even less conditional. Article 8 of the Twitter ToU provides as follows: "We reserve the right at all times (but will not have an obligation) to remove or refuse to distribute any Content on the Services, to suspend or terminate users, and to reclaim usernames without liability to you". Article 8 also mentions the 'Twitter Rules', which offer some generic clarifications of Twitter's policy on prohibited content. The rules on impersonation, violent threats, and 'unlawful use' are in fact formulated as open clauses with a potentially very broad application.

Undue (?) governments’ influence on SNS content moderation policies

As stated above, SNS are taking on a quasi-judicial role in determining the limits of public discourse, their ToU compensating for the lack of legislative provisions.

Nevertheless, governments succeed in influencing SNS content policies through a system of informal pressure. This soft-power approach combines close, non-transparent ties between SNS and law enforcement agencies with publicly voiced governmental appeals to corporate responsibility.

This clearly happened after the recent Daesh terrorist attacks in Paris, when heads of government of EU Member States, notably François Hollande, explicitly asked SNS to take any potential Daesh terrorist content offline. In response to these pressures, on 4 January 2016 Twitter rephrased its definition of "hateful conduct" that promotes violence against specific groups and prompts Twitter to delete end-user accounts. The company previously used a more generic warning that banned users from threatening or promoting "violence against others". The revised rule now states: "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability".

The distinction between public and private interventions has thus begun to blur, and this brings a large degree of legal uncertainty. In this respect, it must be considered that the line between legitimate speech and hateful propaganda is often very difficult to draw. For this reason, freedom of speech is most likely better safeguarded by the SNS themselves, which are politically neutral and more reluctant to sacrifice it unreasonably. By contrast, the systematic removal of allegedly unlawful content at governments' request could easily lead to illegitimate acts of censorship.

Conclusion

SNS have taken on a unique importance in setting the tone and topics of public debate.

Furthermore, SNS themselves are well positioned to act as agents of moderation for the content disseminated in the online environment, since public debate on SNS is essentially regulated only by their ToU. As a matter of fact, the current EU framework gives neither end-users nor governments any direct legal instrument with regard to content moderation.

Yet SNS are facing enormous pressure from EU Member States' governments to take down online content associated with terrorist propaganda. Some legal commentators have also taken the firm view that a more effective approach would involve more active and explicit state regulation of SNS ToU. However, it is important to remember that while SNS are undoubtedly useful tools for hateful speech, they remain just tools. SNS should remain a bastion of freedom of speech, free from state intervention.


*Originally posted at Maschietto, Maggiore & Besseguini website.
