Censoring speech online is a mistake - and it won't work


The internet has changed the way we understand and interact with the world. We increasingly conduct our lives online – posting pictures of our holidays and families, broadcasting our thoughts, learning about products, and consuming thousands of hours of content a year. As such, it is only natural that we analyse the impact of online activity.

On Monday, the government released the Online Harms White Paper, which seeks to outline “a programme of action to tackle content or activity that harms individual users, particularly children, or threatens our way of life in the UK”. This is a noble sentiment. In practice, however, tackling this content is much more complicated.

Monitoring ‘content or activity’, in practical terms, means monitoring ‘speech’. But monitoring speech is tricky business. Clamping down on speech is predicated on the idea that there are always absolutely clear ‘right’ and ‘wrong’, ‘good’ and ‘bad’ instances of speech. When the Government says it will target things like ‘trolling’, ‘disinformation’ and ‘offensive material’, you end up with the Government, either ministers or an appointed regulator, making sweeping judgements about what constitutes harm, what constitutes offence, what constitutes untruth.

In addition, the white paper proposes holding social media and online providers accountable for the content posted on their sites. In its exact words, “companies will be held to account for tackling a comprehensive set of online harms, ranging from illegal activity and content to behaviours which are harmful but not necessarily illegal”. How is a company, responsible for monitoring millions of posts every day, meant to quickly and effectively target ‘harmful’ material if the regulatory body cannot even define what it wants targeted, never mind find a legal basis for restricting it? To avoid fines or personal liability, companies will be heavy-handed in restricting content, implementing blanket bans on certain speech or shutting down sites entirely, throwing the baby out with the bathwater.

Any command to crack down on speech online is a slippery slope. This is especially the case in this white paper that links speech that is already explicitly illegal, such as inciting violence, with simply undesirable and anti-social activity like trolling.

Just last week, a British woman was detained in Dubai after calling her ex-husband’s new wife ‘a horse’ on Facebook. She is facing two years in jail and up to a £50,000 fine. While it’s easy to say this won’t happen in the UK, it illustrates the danger of setting a precedent of going after people for what they say online. The UAE Government decided that calling someone a horse on Facebook amounted to a crime, and that is all it took to ruin this woman’s life. When a government proposes regulating an activity without a clear legal definition, we should be worried about the unforeseen implications.

As Olivia Utley wrote here on TheArticle last week, there are instances when online activity can have real-world negative consequences. She used the example of ‘anti-vaxxers’, parents who refuse to vaccinate their children due to the dubious belief that vaccines are harmful. There is no question that the anti-vaxx community is loud and proud on social media, and that its views can have seriously harmful implications. But the idea that restricting their content online will make the anti-vaxx movement evaporate is naïve at best.

People who hold extreme views that fly in the face of logic, reason, and science are likely to identify strongly with an ‘us’ versus ‘them’ mentality – after all, everyone who says they’re wrong or tries to censor them is just part of the conspiracy. Formally censoring their views will only drive these communities further underground, to parts of the internet that are even harder to monitor, and make it harder to respond to their absurd arguments.

Instead, the government – and society in general – should use the tools and legal frameworks already at their disposal to tackle the negative consequences of online activity. Olivia noted that countries around the world are making it harder for anti-vaxxers to participate in society – they are restricted from sending their children to government-funded schools, face travel bans, and are unable to receive government benefits until they immunise their children. Since her piece was published, New York City declared a public health emergency in the face of a measles outbreak and mandated that everyone within the vicinity of the outbreak get vaccinated or face a $1,000 fine. These initiatives are more effective at curtailing societal damage than monitoring who can say what online.

There clearly is harmful material online. But there is a point where the government, in its haste to be seen to be doing something, over-regulates and restricts freedoms for the vast majority of upstanding citizens in order to catch a few baddies. Instead of implementing sweeping reforms of a complicated and diverse digital economy, it should focus on using the tools already at its disposal to target illegal activity with real-world consequences. In the meantime, the rest of us should judge people posting undesirable nonsense online in the court of public opinion.
