In the last decade, social media has become a powerful communications tool that dominates much of our day-to-day lives. However, it has also become a breeding ground for hate speech, and racial abuse online is rampant.
According to the Crown Prosecution Service (CPS), a hate crime is: "Any criminal offence which is perceived by the victim or any other person, to be motivated by hostility or prejudice, based on a person's disability or perceived disability; race or perceived race; or religion or perceived religion; or sexual orientation or perceived sexual orientation or transgender identity or perceived transgender identity".
This is legislated for in sections 28-32 of the Crime and Disorder Act 1998. Sections 145 and 146 of the Criminal Justice Act 2003 allow prosecutors to apply for an uplift in sentence for those convicted. Additionally, the CPS asserts that "There is no legal definition of hostility so we use the everyday understanding of the word which includes ill-will, spite, contempt, prejudice, unfriendliness, antagonism, resentment and dislike".
This term seemingly encompasses a wide range of actions and presumably leaves little room for leniency when cracking down on racial abuse online. If so, why is it that horrific, racist content can be posted on social media platforms with, oftentimes, little to no consequence? Looking again at the CPS definition, it becomes clear why: for an incident to be deemed a hate crime, a criminal offence must be committed. So, when deciding whether a comment or post can constitute a hate crime, it must first be asked whether the law has actually been broken.
A number of statutes are available when dealing with online abuse. Under s.127(1) of the Communications Act 2003, it is an offence to send, via a public electronic communications network, a message that is "grossly offensive or of an indecent, obscene or menacing character", or to cause any such message to be sent.
More specifically, under s.18(1) in Part III of the Public Order Act 1986, using "threatening, abusive or insulting words or behaviour", or displaying written material of the same description, constitutes an offence if (a) there was an intention to stir up racial hatred or (b) having regard to all the circumstances, racial hatred was likely to be stirred up.
Herein lies what I believe to be the loophole: the requirement of an intention or likelihood of stirring up racial hatred. This is especially difficult to prove given the high threshold involved; the CPS guidelines maintain that "likely" does not mean that racial hatred was merely possible.
Looking at examples of racial abuse online, it becomes evident how difficult it is, under the current laws, to secure any legal sanction. In 2019, ex-BBC 5 Live presenter Danny Baker was fired for tweeting a racially offensive image depicting a couple holding hands with a chimpanzee, captioned ‘Royal baby leaves hospital.’ The Metropolitan Police investigated, but the case was soon dropped on the basis that they did "not consider that a criminal threshold has been met", and no further action was taken. Perhaps there was difficulty proving either an intention or a likelihood that the post would stir up racial hatred. However, it is disappointing that the less severe offences under the 1986 Act were not deemed to have been met either; for instance, s.4A(1)(b), under which "A person is guilty of an offence if, with intent to cause a person harassment, alarm or distress, he […] displays any writing, sign or other visible representation which is threatening, abusive or insulting, thereby causing that or another person harassment, alarm or distress".
So how bad does a tweet, comment or post have to be before it is deemed to carry the intention or likelihood of stirring up racial hatred? Using last week’s events as an example, it seems unfathomable that, given the violence on the streets following England’s loss to Italy in the Euro 2020 final on Sunday, the use of racist language online would not be considered likely to stir up racial hatred. This is certainly true when online behaviour goes beyond slurs or name-calling in Instagram comments to outright encouragement of racially motivated violence, as with the note posted on Snapchat calling for ‘Punish a n*gger day’.
Sanctions for offensive material online simply must go further than the typical remedy of content being removed or the account being suspended. Realistically, however, it is impossible and impractical for every potentially criminal incident to be passed through the justice system, especially when so much of this content is posted by anonymous users. In fact, in 2019 The Independent revealed that, according to Scotland Yard data, only 17 cases (0.92% of those reported to its "online hate crime hub", launched in 2017) had resulted in charges, with just seven prosecutions.
We are clearly in urgent need of some form of legislation which places social media companies under an obligation to adequately deal with racial abuse online.
Last week, Prime Minister Boris Johnson announced that those who racially abuse footballers online will be banned from attending matches. To me, this is yet another instance of our government failing to understand and address the real issue: the widespread racism within this country, which is now manifesting itself online. Perhaps, instead of focusing on football and leaving other victims of online abuse by the wayside, the PM should have directed his attention to the long-awaited Online Safety Bill, first proposed two years ago, which the Department for Digital, Culture, Media and Sport promised would “force online companies to protect their users from harmful abuse or face some of the toughest corporate fines ever”. Until this is enacted, the possibility of adequate legal sanction for those publishing racial abuse online remains slim.
Written by Tare Youdeowei