Social media companies have come under criticism in a report on radicalisation published by the House of Commons Home Affairs Committee.
The report accuses social media companies of “consciously failing to combat the use of their sites to promote terrorism and killings”.
The internet has a huge impact in contributing to individuals turning to extremism, hatred and murder. Social media companies are consciously failing to combat the use of their sites to promote terrorism and killings. Networks like Facebook, Twitter and YouTube are the vehicle of choice in spreading propaganda and they have become the recruiting platforms for terrorism.
The report cites the Internet Watch Foundation as an example of the sort of initiative the Committee would like to see in the area of counter-extremism, stating:
We do not see why the success of the Internet Watch Foundation cannot be replicated in the area of countering online extremism.
This would appear rather optimistic given the obvious differences between child abuse content and extremist content. Child abuse content is a clearly defined category, whereas the lines between “extremism” and legitimate political expression are blurred and controversial, to the extent that successive governments have struggled to settle on a workable definition. The removal of child abuse content at source is almost universally supported, whereas the removal of any political speech, even “extremist” speech, is likely to prompt accusations of censorship and victimisation.
Indeed, the Committee would do well to note the IWF’s decision in 2011 to remove from its remit the similarly controversial category of hate speech.
The report goes on to recommend that “large technology companies” should be required to cooperate more fully with the Counter Terrorism Internet Referral Unit (CTIRU).
The UK Government should now enforce its own measures to ensure that the large technology companies operating in this country are required to cooperate with CTIRU promptly and fully, by investigating sites and accounts propagating hate speech, and then either shutting them down immediately, or providing an explanation to CTIRU of why this has not been done. This activity would be facilitated by the companies co-locating staff within the upgraded CTIRU and we recommend that this be part of its enhanced operations.
The Committee goes on to demand greater transparency and reporting from tech companies.
The Government must also require the companies to be transparent about their actions on online extremism; instead of the piecemeal approach we currently have, they should all publish quarterly statistics showing how many sites and accounts they have taken down and for what reason. Facebook and Twitter should implement a trusted flagger system similar to Google’s and all social media companies must be more willing to give such trusted status to smaller community organisations, thereby empowering them in the fight against extremism.
For more information, see the report, Radicalisation: the counter-narrative and identifying the tipping point, in particular the section: Industry response to online radicalisation.