The COVID-19 pandemic has seen many of these problems worsen, with children spending more time online and disinformation and misinformation about the virus and vaccines spreading on social media. The initiative aims to provide incentives for businesses that behave responsibly and to penalise those that do not. It encourages the use of new process-based methods to prevent online abuses including sexual exploitation, terrorist propaganda, bullying and misinformation.
The existing communications regulator, Ofcom, will take on a new role as regulator of the system.
The government will introduce a tiered approach to tackle these issues in a systematic and proportionate manner.
Which organisations will be covered?
In-scope organisations will include:
- search engines
- sites that host user-generated content accessible by UK users
- sites facilitating public or private online interaction between service users, where one or more of them is in the UK.
They need not be based in the UK.
B2B services will be out of scope. Internet service providers will not assume the duty of care, although they must assist with regulatory enforcement measures.
Low-risk organisations will benefit from a range of exemptions, such as internal services used within businesses and retailers offering product review functionality.
The duty of care
At the heart of the proposed regulatory framework is a new duty of care to service users. In-scope providers will need to understand the risks that users are exposed to and implement systems and processes to improve user safety and monitor their effectiveness.
Working out how to achieve compliance with the duty will not be left entirely to organisations – codes of practice will be issued, focusing on the systems, processes and governance that in-scope companies need to put in place. Companies may take alternative steps to those set out in the codes of practice, provided they can demonstrate to Ofcom that these are at least as effective as following the codes.
A general definition of harmful content and activity will appear in the Bill. This will be fleshed out in regulations to provide more detail and certainty.
An “overarching purpose” for the regulatory framework will require regulators and organisations alike to abide by a set of guiding principles. These are:
- Improving user safety: taking a risk-based approach that considers harm to individuals
- Protecting children: requiring higher levels of protection for services used by children
- Transparency and accountability: increasing user awareness about incidence of and response to harms
- Pro innovation: supporting innovation and reducing the burden on business
- Proportionality: acting in proportion to the severity of harm and resources available
- Protection of users’ rights online: including freedom of expression and right to privacy
- Systems and processes: taking a systems and processes approach rather than focusing on individual pieces of content.
A tiered approach
Two categories of services are envisaged. Most services will fall into Category 2 – seen as lower risk and so facing less stringent obligations. The minority of services deemed to present a higher risk to users will face more onerous responsibilities. Relevant factors will include audience size and functionality, with precise thresholds to be set with advice from Ofcom.
- Category 1 – likely to include large social media sites that host user-generated content, such as TikTok, Instagram, Facebook and Twitter.
- Category 2 – the vast majority of in-scope organisations, and might include smaller dating sites and private messaging apps.
Both Category 1 and Category 2 service providers must take action against the proliferation of illegal content such as child pornography and terrorist propaganda. They must also assess the likelihood of children accessing their services, and where this is likely, provide additional protections for child users.
In addition, Category 1 service providers will have to tackle harmful content accessed by adults even if it is legal, such as the spread of misinformation. They will have to undertake regular risk assessments and publish reports on what they are doing to counter online harms.
The regime will be coupled with a two-pronged enforcement approach against companies that fail to meet their obligations and do not respond to warnings. The potential sanctions include:
- civil fines up to £18 million or 10% of annual global turnover, whichever is higher
- measures to disrupt a company’s business activities in the UK, and even block access to their services
- criminal sanctions for senior managers where information requests are not met.
These disruption and blocking powers are intended for only the most serious and persistent cases.
It is possible that additional offences will be included in the Bill. The Law Commission is currently reviewing the criminal law to assess whether more needs to be done to tackle issues like cyber-flashing and ‘pile-on’ harassment.
The introduction of the legislation is likely to be welcomed by the public. Ofcom research indicates that:
“a third of people feel the risks of being online – either to them or their children – have started to outweigh the benefits. Four in five adult internet users have concerns about going online, and most people support tighter rules.”
The fines are set at a level to be taken seriously even by the large players. More important, perhaps, would be the reputational and brand damage that serious adverse findings would entail.
Some have expressed concerns around the wide scope of the legislation, and also the possibility of undermining services that offer end-to-end encryption. Smaller players have said that the regime will be confusing and off-putting for entrepreneurs, and have the unwelcome effect of favouring larger businesses with the resources to implement compliance.
The wider picture
The UK sees itself as a global leader in this space, and emphasises an intention to work alongside international partners to address this as a shared challenge. Details of collaborative working and parallel activity are provided.
It is interesting to see this initiative dovetailing with other activity to combat harmful online material. Payment services companies Mastercard and Visa recently blocked their cards from use on the adult content site Pornhub following allegations of substantial amounts of illegal content. And the EU’s planned Digital Services Act includes measures to police and remove illegal online content.
Takeaway points
The legislation is not yet fixed and will take some months to bring into law. However, it is likely to receive broad political support. The very extensive consultation exercise already undertaken has resulted in tailoring and improvement. We should expect a fairly swift introduction – businesses would do well to keep an eye out for consultation on the details and feed in their views to ensure the legislation hits its target without acting as a brake on innovation.