Online harms White Paper

By | Content Issues

The UK government has today published its much anticipated policy paper “Online Harms”. This proposes a regulator for Internet content (whether a new regulator, or a repurposed existing one), to introduce sweeping new regulation of Internet companies covering a broad range of Internet content. The centrepiece obligation for companies is a new “duty of care” to protect their users from broadly described “harm”, the meaning of which will be set out in a series of Codes of Practice developed by the new regulator.

The government describes this as “novel and ambitious” and “world-leading”.

Introducing the White Paper, the Secretary of State for Digital, Jeremy Wright, said:

The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.

Home Secretary Sajid Javid added:

The tech giants and social media companies have a moral duty to protect the young people they profit from.

Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online.

That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.

A twelve-week consultation on the White Paper begins today. A synopsis of some of the stand-out elements follows.

Companies covered

Companies in scope will be those “that allow users to share or discover user-generated content or interact with each other online”, regardless of size.

“A wide variety of organisations provide these services to users. […] The scope will include companies from a range of sectors, including social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.”

The regulator will adopt a “risk based approach” and “the regulator’s initial focus will be on those companies that pose the biggest and clearest risk of harm to users, either because of the scale of the platforms or because of known issues with serious harms”.

Funding

“The regulator will be funded by industry in the medium term, and the government is exploring options such as fees, charges or a levy to put
it on a sustainable footing.”

Harms covered

  • All illegal content; plus
  • Content without clear legal definition, including
    • cyberbullying and trolling
    • extremist content and activity
    • coercive behaviour and intimidation
    • promotion of self-harm and FGM
  • Children’s access
    • to pornography
    • to dating sites
    • to other inappropriate content
    • to social media (under 13)
  • A broad range of legal content including
    • disinformation
    • manipulation
    • abuse of public figures
  • Emerging challenges such as
    • excessive screen time
    • “designed in addiction”

The list is stated to be neither exhaustive nor static.

Out of scope: GDPR, hacking, “dark web”.


Private communications

The extent to which private communications will be included within the scope of the regulator’s ambit isn’t entirely clear. The White Paper suggests one-to-one communications will be out of scope, but “a WhatsApp group with hundreds of members” would be in scope. “Small private channels” used to groom children for sex, or used to encourage terrorism, would be in scope.

There is a consultation question on the definition of private communications.

Technical measures

“Companies should invest in the development of safety technologies to reduce the burden on users to stay safe online.”


Regulation

  •  “The government will establish a new statutory duty of care to make companies take more responsibility for the safety of their users and
    tackle harm caused by content or activity on their services.”
  • “Companies must fulfil their new legal duties. The regulator will set out how to do this in codes of practice.”
  • “Companies will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology.”
  • There will be
    • a complaints mechanism for users to complain about content
    • (as a consultation option) a right for organisations that aim to protect users to make “supercomplaints”
    • strict time limits for companies to act
    • redress mechanisms for users
  • “The scope to use the regulator’s findings in any claim against a company in the courts on grounds of negligence or breach of contract. And, if the regulator has found a breach of the statutory duty of care, that decision and the evidence that has led to it will be available to the individual to use in any private legal action.”
  • “To inform its reports and to guide its regulatory action, the regulator will have the power to require annual reports from companies” covering a broad and detailed range of aspects of what they do in this area, as well as a power for the regulator to make specific enquiries and compel the disclosure of research about harms.

The regulator will be required to follow the following regulatory principles:

  • A duty for the regulator to pay “due regard” to innovation and to protect users’ rights online, including the fundamental right to freedom of expression.
  • A duty for the regulator to respect proportionality
  • “A risk based approach”

Enforcement and sanctions

  • “Substantial fines” for non-compliance
  • ISP blocking could be used to prevent user access to foreign online services that decline to obey the new UK regulation
  • Another option for enforcement against non-compliant foreign entities would be disruption of their business activities (e.g. by getting payment
    services providers to withhold service, removing from search engines, preventing links on social media)
  • The White Paper includes a consultation question on personal liability for senior managers of businesses that fail to discharge their duty of care, including possibly civil fines or even personal criminal liability
  • The government is also consulting on imposing a requirement on companies that offer online services to nominate a representative person who is
    within the UK or the EEA, and so reachable for enforcement action and personal liability

Fulfilling a duty of care

Essentially, the actions a company would have to undertake in order to be considered to have discharged its duty of care are left to be determined through Codes of Practice established by the regulator. The White Paper sets out a range of topics a Code of Practice should cover, broken out by type of harm (terrorism, CSAM, disinformation, harassment, etc.).

Malaysian penalty for “fake news”: 10 years in jail

By | Content Issues, International, News

The Malaysian government has brought forward a bill in Parliament that sets the penalty for publishing so-called “fake news” online at up to ten years in jail plus a fine of 500,000 MYR (about £90,000), Reuters reports.


“The proposed Act seeks to safeguard the public against the proliferation of fake news whilst ensuring the right to freedom of speech and expression under the Federal Constitution is respected,” the government said in the bill.

The bill gives a broad definition to fake news, covering “news, information, data and reports which is or are wholly or partly false”. It seeks to apply the law extra-territorially, to anything published on the Internet provided Malaysia or Malaysians are affected by the article.

“Fake news” has become an increasingly popular target of political attack since Donald Trump popularised the term in his battles with CNN and other major broadcasters. In the UK, a Parliamentary Select Committee recently held its first-ever hearings in Washington DC on the subject, summoning social media platforms to be lambasted for failing to suppress allegedly fake news. The Prime Minister’s office established a new unit to counter fake news in January.

So far, however, no UK government Minister has suggested jailing people for writing something on the Internet that isn’t right.

ECJ to rule on whether Facebook must actively seek out hate speech

By | Content Issues, News

The Austrian Supreme Court has asked the European Court of Justice to rule on whether Facebook should actively search for hate speech posted by users.  The original lawsuit against Facebook was filed by Eva Glawischnig, the former leader of the Austrian Green Party, in 2016, after Facebook refused to take down what she claimed were defamatory postings about her.

Last year, an Austrian appeals court ruled in favour of Glawischnig, ordering Facebook to remove the hate speech postings – both the original posts and any verbatim repostings of the same comments – not just in Austria but worldwide. The Austrian Supreme Court has asked the ECJ to look at two issues: 1. Whether Facebook needs to actively look for similar posts, instead of just reposts, and 2. Whether such content needs to be removed globally.

The case comes amidst concerted pressure in Europe for social media platforms to do more to tackle hate speech. A new hate speech law in Germany, known as the Network Enforcement Act, requires companies to remove or block criminal content within 24 hours of it being reported, or within seven days for complex cases. The law, actively enforced only since 1 January 2018, has already attracted controversy: Twitter deleted a post by the German justice minister, Heiko Maas, dating back to 2010, before he was appointed to the role, in which he called a fellow politician “an idiot”. Twitter has also deleted anti-Muslim and anti-migrant posts by the far-right Alternative for Germany (AfD) party and blocked a satirical magazine’s account after it parodied the AfD’s anti-Muslim comments. The German Government has said that an evaluation will be carried out within six months to examine how well the new law is working.

Meanwhile, the European Commission has kept up the pressure on tech companies calling for them “to step up and speed up their efforts to tackle these threats quickly and comprehensively” and reiterating that it would “if necessary, propose legislation to complement the existing regulatory framework.”

UK Government to set up new unit to tackle fake news

By | Content Issues, News

The UK government has announced that it will set up a new unit to counter “fake news” and disinformation. The government said that the “dedicated national security communications unit”, which is already being dubbed the “Ministry of Truth”, would be charged with “combating disinformation by state actors and others”. As yet, there is no further information on where the unit will be based or who will staff it.

The Digital, Culture, Media and Sport Committee is currently carrying out an inquiry into “fake news” and has requested information from Facebook and Twitter including on Russian activity during the EU referendum campaign.

IPO launches copyright lessons for seven-year olds

By | Content Issues, News

The UK’s Intellectual Property Office (IPO) has launched a new campaign to teach children about online copyright infringement. In a bid to make intellectual property “fun”, the IPO has produced a range of teaching materials for seven- to 11-year-olds, which centres on a series of cartoons following the adventures of Nancy and the Meerkats.

According to the BBC:

The five-minute cartoons tell the story of would-be pop star Nancy, a French bulldog, who battles her ideas-stealing, feline nemesis, Kitty Perry, and teaches friends, including Justin Beaver and a rather dim Welsh sheep called Ed Shearling, about the importance of choosing an original band name and registering it as a trademark.

The IPO, which believes learning to “respect” copyrights and trademarks is a “key life skill”, is spending £20,000 on the campaign, which is part-funded by the UK music industry.

UK Government publishes Internet Safety green paper

By | Content Issues, Malware and DOS attacks, News

The UK Government has announced proposals for a voluntary levy on Internet companies “to raise awareness and counter internet harms”. The government has said that the levy would target issues such as cyberbullying, online abuse and children being exposed to pornography on the Internet.

The levy is one of a series of measures proposed in the Internet Safety Green Paper, which is the result of a consultation launched in February. The other measures include:

  • A new social media code of practice to require more intervention by social media companies against allegedly bullying, intimidating or humiliating content
  • An annual Internet safety transparency report, to help government track how fast social media companies remove material that has been the subject of a complaint
  • Demands for tech and digital startups to “think safety first” – prioritising features that facilitate complaints and content removal as functionality that must be built into apps and products from the very start

All the measures will be voluntary although the government has not ruled out legislating if companies refuse to take part. In remarks that will be of concern to Internet companies, the Culture Secretary Karen Bradley hinted that the government could change the legal status of social media companies, to deem them publishers rather than platforms, which could mean even greater regulation of their users’ content.

“Legally they are mere conduits but we are looking at their role and their responsibilities and we are looking at what their status should be. They are not legally publishers at this stage but we are looking at these issues,” she said.

The consultation will close on 7 December, and the government expects to respond in early 2018.

Amber Rudd focusses on Internet in conference speech

By | Content Issues, News
Home Secretary Amber Rudd focussed on Internet policy issues in her speech to the Conservative Party Conference in Manchester. The Home Secretary reiterated her demands for Internet platforms to do more to combat terrorism and child abuse.
Rudd announced plans to tighten terrorism laws to criminalise merely viewing terrorist content online, as opposed to downloading and keeping a copy, as well as new legislation to criminalise publishing information about the police or armed forces for the purposes of preparing an act of terrorism. Internet companies, however, will be most directly concerned with the demands the Home Secretary made of them.

“But it is not just Government who has a role here. In the aftermath of the Westminster Bridge attack, I called the internet companies together. Companies like Facebook, Google, Twitter and Microsoft. I asked them what they could do, to go further and faster.

They answered by forming an international forum to counter terrorism. This is good progress, and I attended their inaugural meeting in the West Coast.

These companies have transformed our lives in recent years with advances in technology.

Now I address them directly. I call on you with urgency, to bring forward technology solutions to rid your platforms of this vile terrorist material that plays such a key role in radicalisation.

Act now. Honour your moral obligations.”

— Home Secretary Amber Rudd

The Home Secretary announced that the government would be funding Project Arachnid, web-crawler software developed by the Canadian Centre for Child Protection, designed to search out child abuse imagery online.

“It is software that crawls, spider-like across the web, identifying images of child sexual abuse, and getting them taken down, at an unprecedented rate.

Our investment will also enable internet companies to proactively search for, and destroy, illegal images in their systems. We want them to start using it as soon as they can.

Our question to them will be ‘if not, why not’. And I will demand very clear answers.”

— Amber Rudd
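In outline, a system of this kind crawls the web and compares what it finds against a database of digital fingerprints of known illegal images. The sketch below illustrates the simplest form of that idea, matching files against a set of known-bad hashes; the blocklist, paths and function names are hypothetical, and production systems such as Project Arachnid rely on perceptual hashes (e.g. PhotoDNA) that survive resizing and re-encoding, rather than only the plain cryptographic digests used here:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known-bad files. (Shown here
# with the digest of the empty file purely so the sketch is self-testing;
# a real deployment would load a vetted industry hash list.)
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: Path):
    """Yield every file under root whose hash appears on the blocklist."""
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            yield path
```

Exact-hash matching like this only catches byte-identical copies, which is why the perceptual-hashing approach matters in practice: a re-encoded or cropped image produces a completely different cryptographic digest.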

Rudd also doubled down on previous attacks on end-to-end encryption in person-to-person messaging software.

“But we also know that end to end encryption services like Whatsapp, are being used by paedophiles. I do not accept it is right that companies should allow them and other criminals to operate beyond the reach of law enforcement.”

— Amber Rudd

Speaking earlier at a conference fringe event, she hit back at critics who accuse her of fighting a war against mathematics, saying

“I don’t need to understand how encryption works”,

— Amber Rudd

She also accused tech experts of “patronising” and “sneering” at politicians who want to regulate technology.

UK prime minister calls on internet firms to remove extremist content within two hours

By | Content Issues, International, News

The UK prime minister, Theresa May, has told internet companies that they need to go “further and faster” in removing extremist content in a speech to the United Nations general assembly. The prime minister said that terrorist material is still available on the internet for “too long” after being posted and has challenged companies to find a way to remove it within two hours. The material in question can include links to videos glorifying terrorism and material encouraging converts to commit terrorist acts.

In her speech, May said:

“Terrorist groups are aware that links to their propaganda are being removed more quickly, and are placing a greater emphasis on disseminating content at speed in order to stay ahead.

Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions that prevent it being uploaded in the first place.”

The UK, together with France and Italy, is demanding evidence of progress by the time of a meeting of G7 interior ministers in Rome on 20 October.

MSPs warned cyber attack could last for days

By | Content Issues, Hacking, News
A cyber attack has recently impacted the Scottish Parliament. MSPs and their staff have been warned that they will be unlikely to be able to access their email accounts due to hackers launching a “brute force” cyber attack in an attempt to gain their passwords.

A brute force attack is a cyber attack that involves systematically trying as many password combinations as possible until the correct one is found. Parliament chief executive Sir Paul Grice said that Parliament’s cyber systems were still under attack but there was no evidence that any systems had been breached: “At this point there is no evidence to suggest that the attack has breached our defences and our IT systems continue to be fully operational.” He went on to add that: “Staff from the BIT (Business Information Technology) Office are working closely with the NCSC and our suppliers to put in place additional security measures to continue to contain the incident and mitigate against any future attacks.”
It is not yet known which country the cyber attack originates from. It is believed, however, to be similar to the cyber attack launched on MPs earlier in June.
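The mechanics of a brute force attack are simple to sketch. Below is a minimal, illustrative Python example of exhaustive password guessing against a toy hash; the function names, alphabet and SHA-256 target are assumptions for the demonstration, not details of the attack on Parliament. Defences such as rate limiting and account lockouts exist precisely because short passwords fall quickly to this kind of search:

```python
import hashlib
import itertools
import string

def brute_force(target_hash, hash_fn, alphabet=string.ascii_lowercase, max_len=4):
    """Exhaustively try every candidate password up to max_len characters,
    returning the first one that hashes to target_hash, or None."""
    for length in range(1, max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            guess = "".join(chars)
            if hash_fn(guess) == target_hash:
                return guess
    return None

# Toy demonstration: recover a 3-letter password from its SHA-256 hash.
sha256 = lambda s: hashlib.sha256(s.encode()).hexdigest()
print(brute_force(sha256("abc"), sha256))  # prints: abc
```

Note how the search space grows exponentially with password length: each extra lowercase character multiplies the number of candidates by 26, which is why length and character variety matter more than cleverness in a password.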

Cloudflare critiques own decision not to serve Daily Stormer

By | Content Issues, Hacking, News

Yesterday, Cloudflare ceased to provide caching and DDoS protection services for a far-right blog, the Daily Stormer, following claims by the latter that Cloudflare secretly supported their ideology. Cloudflare’s CEO has published a lengthy and thoughtful analysis of the decision, beginning:

Now, having made that decision, let me explain why it’s so dangerous.

One interesting tidbit concerns the nature of the pressure Cloudflare was under:

“In fact, in the case of the Daily Stormer, the initial requests we received to terminate their service came from hackers who literally said: ‘Get out of the way so we can DDoS this site off the Internet.’”

In an internal e-mail obtained by Gizmodo, Prince was blunt about his reasons for terminating Daily Stormer:

This was my decision. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I’d had enough.

Let me be clear: this was an arbitrary decision. It was different than what I’d talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I’m the CEO of a major Internet infrastructure company.

Having made that decision we now need to talk about why it is so dangerous. I’ll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.

Read the whole blog post on Cloudflare.com and Prince’s internal e-mail on Gizmodo.

Update note: This article was updated on 18th August to add the quotes from and link to the e-mail obtained by Gizmodo.