One of the major challenges currently facing policy makers is how to tackle the spread of online disinformation, which has been a particular problem during the COVID-19 pandemic. Disinformation is defined by the UK Government as “the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain.” In response, both the UK and the EU have introduced various initiatives to try to combat the problem.
The UK’s Digital, Culture, Media and Sport (DCMS) Committee carried out a lengthy inquiry into disinformation and ‘fake news’ and published a report in late February 2019, which called for “tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.”
The UK Government subsequently published an Online Harms White Paper in April 2019, in which it stated its aim “to make Britain the safest place in the world to be online”. Legislation will take a “proportionate, risk-based response”, it said, by introducing “a new duty of care on companies and an independent regulator responsible for overseeing this framework”.
The White Paper specifically mentioned disinformation as one of the online harms it was targeting, saying that, “Companies will need to take proportionate and proactive measures to help users understand the nature and reliability of the information they are receiving, to minimise the spread of misleading and harmful disinformation and to increase the accessibility of trustworthy and varied news content.”
In February 2020, the current Government announced that it was “minded” to name Ofcom as a proposed new ‘Online Harms Regulator’. However, a final decision has not yet been made, and there is currently no definitive date for when a Bill will be published.
Meanwhile, in March 2020, the DCMS Committee re-established the Sub-Committee on Online Harms and Disinformation amidst concerns about the extent of COVID-19 disinformation and misinformation. In June 2020, the Sub-Committee published a report, Misinformation in the COVID-19 ‘Infodemic’, which pressed the Government to bring forward online harms legislation and to give Parliament a role in establishing what harms (including disinformation) are in scope.
The European Commission has also been taking action against disinformation. In April 2018, it published a communication to the European Parliament identifying disinformation as a leading threat to democracies across Europe, as well as to the EU itself. To help combat this, in October 2018 it agreed a self-regulatory Code of Practice on Disinformation with online platforms, leading social networks and advertisers.
In September 2020, the Commission published a review of the Code of Practice assessing its first year in operation. It concluded that while the Code has “prompted concrete actions and policy changes by relevant stakeholders aimed at countering disinformation”, it has fallen short in four main areas. These are: the inconsistent and incomplete application of the Code across platforms and Member States, a lack of uniform definitions, the existence of several gaps in the coverage of the Code commitments, and the limitations intrinsic to the self-regulatory nature of the Code.
The Commission has stated that the review’s findings will support its “reflections on pertinent policy initiatives, including the European Democracy Action Plan, as well as the Digital Services Act, which will aim to fix overarching rules applicable to all information society services.”
Read more about the LINX Public Affairs department and the service it provides for our members here.
Article written by Dan Smith, Cybersecurity Policy Manager for LINX