Making the UK the safest place to be online - a progress review for Safer Internet Day
It was way back in March 2017 that the UK Government first made that commitment - to make the UK the safest place in the world to be online - in the Digital Strategy. Theresa May was Prime Minister; Karen Bradley was the DCMS Secretary of State; TikTok didn’t exist in the UK; and it would be a number of months before the first deepfake was created. Ofcom reported that “most parents continue to say that their child has a good balance between screen time and doing other things”; and the theme of that year’s Safer Internet Day was - rather optimistically - “unite for a better internet”.
Seven Safer Internet Days have come and gone since then - almost as many as the number of Secretaries of State who have had responsibility for delivering on the Government’s commitment to make the UK the safest place to be online. (Notably, that commitment survived the change in Government last year.)
So, how’s it going? Is the UK the safest place to be online, particularly for children? And if it isn’t yet, when will it be? We’re going to try to answer this by looking at three aspects:
- The progress of Online Safety Act implementation
- The gaps and how they might be filled
- The international context.
OSA progress: higher standards of protection for children
Section 1 of the Online Safety Act sets out that:
“Duties imposed on providers by this Act seek to secure (among other things) that services regulated by this Act are—
(a) safe by design, and
(b) designed and operated in such a way that—
(i) a higher standard of protection is provided for children than for adults,”
We’ve written extensively on what “safety by design” means in this regulatory context and will return to it in our analysis of the gaps in protections for children, below. We’ve also published a detailed explainer on the extent of the child safety duties in the OSA. It’s been a long time coming, but 2025 is not only the year in which these duties come into force but also Ofcom’s self-proclaimed “year of action”. So what can we expect to see, and when?
- By March 2025: all providers must produce an illegal harms risk assessment and, once the illegal harms codes have been approved by Parliament, take measures to ensure compliance with them. In relation to children, specific measures of note include safe default settings and user support to protect child users (where a service can identify them) from grooming, including not recommending child user accounts in network expansion prompts (and vice versa) and not allowing strangers to DM a child user. There is also a series of measures to deal with the detection and spread of child sexual abuse material.
- By 16 April 2025: all providers must assess whether their service is likely to be accessed by children (under-18s). Ofcom published its child access assessment guidance last month, which makes clear that, unless services have “highly effective” age assurance mechanisms in place to stop under-18s accessing their sites, they are “likely to be accessed by children”. “Highly effective” is not, however, defined in quantifiable terms.
- April 2025: Ofcom will publish its protection of children statement (which will respond to the consultation launched last year), along with its guidance for children’s risk assessments and its codes of practice for user-to-user and search services. The codes will be laid in Parliament and - after the approval process - come into force in July 2025.
- By July 2025: services which have assessed themselves as likely to be accessed by children must complete a “suitable and sufficient” children’s risk assessment and, once the codes of practice receive Parliamentary approval, regulated services must implement the safety measures required (including age assurance) and Ofcom can enforce against them.
- By July 2025: all services which allow pornography on their platforms (whether commercial providers or user-to-user services) must have effective age checks in place.
What’s covered in the children’s codes of practice?
We won’t know the precise details until Ofcom publishes the final versions in April, but - given the minimal changes between the draft and final versions of the illegal harms codes - there are unlikely to be many material changes from the consultation versions, which many organisations in civil society, ourselves included, have significant reservations about. Code measures that are likely to be included are grouped into a number of buckets:
- Governance and accountability - for example, having a named accountable person responsible for the company’s duties in relation to protecting children.
- Content moderation - for example, having a system that allows for swift action against content harmful to children. The Act includes lists of such content: primary priority content (such as pornographic content and suicide and self-harm material), which children must be prevented from encountering; and priority content (such as bullying and violent content, and abusive and hateful content related to protected characteristics), which children must be protected from encountering.
- Reporting and complaints - a number of performance measures are proposed here relating to the operation and responsiveness of services’ reporting and complaints mechanisms.
- Measures relating to terms of service, user support and recommender systems - where measures include excluding designated content from, or limiting its prominence in, the recommender feeds of child users.
So, while all these measures - taken together - may lead to some noticeable changes in the online experience of child social media users on platforms or services that have so far taken minimal steps to protect children, they sit behind an age-gating requirement that takes precedence as an obligation: stop children accessing your platform, and if you can’t do that, take these measures to make their experience safer. This age-gating is, in effect, the only substantive new measure to protect children: a blunt instrument and - as we said in our response to the consultation - a single point of failure.
If services keep under-18s off their platforms, then that - under Ofcom’s proposals - is enough; the more comprehensive duty to “design and operate” safer services, whether children can access them or not, and so deliver the Act’s intent that “a higher standard of protection is provided for children”, will not be realised under the codes’ measures. Ofcom’s enforcement of compliance with the age assurance/age verification requirements will therefore be crucial in the months ahead: an enforcement programme has already been set up for pornography providers, and enforcement against Part 3 user-to-user and search services can begin from July this year.
The gaps and how they might be filled
Given the length of time it has taken to get the online safety legislation onto the statute book - and the limitations so far of Ofcom’s approach to implementation - it’s no wonder that parents, campaigners and those with lived experience of harm have become frustrated with the pace of progress. Many have been agitating for more extensive action - such as smartphone bans or restrictions on social media access for under-16s - to circumvent the delays in bringing in the promised protections and to respond to the ever-increasing range of threats and risks faced by children.
There are gaps in Ofcom’s implementation of the Act, as well as in the Act itself. Most prominent in the first category is Ofcom’s failure to come up with requirements that deliver on the Act’s “safe by design” objective. Linked to this is the omission from the first set of illegal harms codes of specific measures to mitigate the multiple, evidenced risks to children from livestreaming functionality - risks that Ofcom itself detailed comprehensively in its own risk register back in November 2023. We are promised proposals on this - and a number of other measures - in a further consultation due in April; it’s not clear whether some of the other omissions we flagged in the first set of codes, including measures to reduce the risks to children from location information, large group messaging or ephemeral messaging, will be included in the new proposals. Either way, those measures won’t appear in a new version of the illegal harms codes, nor be enforceable, until well into 2026.
None of these proposals will address the concerns that have fuelled the clamour for more extensive legislative action on the impact of social media and smartphone use on children and young people. The rise of the Smartphone Free Childhood campaign and of grassroots initiatives by parents to delay or ban children’s access to phones and social media is partly a reaction to the very slow pace of change in online protections for children, despite repeated Government assurances. It also reflects an increasing awareness of the addictive, attention-grabbing functionality of the platforms and apps that children access via their smartphones - something the previous Government refused to address in the legislation despite pressure from Peers (see, for example, Lord Russell’s contribution and related amendments at Lords Report stage here). Nor do Ofcom’s measures - or the Act itself, given its overly dominant focus on content - really get at the underlying effects of the social media business model and how they manifest in particular risks to children, for example the financial incentives for influencers propagating harmful content or views.
Josh MacAlister’s Safer Phones Bill (more formally, the Protection of Children (Digital Safety and Data Protection) Bill) - due for its Second Reading on 7 March - aims to get at some of these design features, promising to “make smartphones less addictive for children and empower families and teachers to cut down on children’s daily smartphone screen time”. MacAlister has been holding a series of consultation hearings in Parliament with stakeholders and with children themselves, and it’s likely that his Private Member’s Bill will include changes to the age of digital consent for GDPR purposes and a duty on social media services to enforce minimum age limits. The Government has not ruled out backing MacAlister’s Bill - a prerequisite for its success; this prospect led the MP to drop his initial proposal to ban smartphones in schools, which the DSIT Secretary of State had indicated he could not accept. But it remains to be seen how far the Bill might become a vehicle for Government-backed remedial action to plug wider holes in the online safety regime.
The Government has also not ruled out bringing forward its own online safety legislation - it said in its recent draft Statement of Strategic Priorities for Ofcom that “where it is clear some problems cannot be solved without new legislation, the government will consider this, but our goal is to be innovative and deliver maximum outcomes within the Act’s existing provisions.” And certainly, beyond the gaps in the regime relating to the protection of children, there are many areas where substantive new legislation is needed - to address, for example, the online conditions and access to violent and extremist content which led to the radicalisation of the Southport killer, as well as the mis/disinformation that spread in the wake of the murders and led to days of rioting on British streets. (Our blog on the gaps in the regime exposed by those events is here.)
In the meantime, there have been some welcome recent moves to use other legislative vehicles to bring in new criminal offences addressing emerging gaps and harms. These include the Home Office announcement that a number of new offences relating to the creation of AI-generated CSAM will be introduced via the Crime and Policing Bill, and the (messy) introduction of new offences for creating and soliciting intimate images and deepfakes via the Data (Use and Access) Bill - a process which ultimately concluded in a significant victory for Baroness Owen’s determined campaign. Last week, a new amendment, backed by Baroness Kidron, was also accepted by the Government to introduce “data protection by design” for children into the Data Bill.
The international context
How far the debate over more expansive legislation survives the shifting relationship with the new US administration remains to be seen. But for now, the UK is maintaining a middle-ground position between the more systemic, risk-based approach of the Digital Services Act (where Article 28 requires online platforms to “put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security of minors on their service”, and Very Large Online Platforms must consider risks to children in their risk assessments and mitigations) and the more assertive response of the Australian government, where a Bill to ban under-16s from social media was announced and introduced in the blink of an eye, though it is not yet in force. While there are plenty of critics of this approach (for example, here and here), many countries, including the UK, will be watching its implementation closely.
So, to return to our opening questions: things are definitely moving (albeit slowly and imperfectly) in the UK, with much riding now on Ofcom’s appetite for robust enforcement and its willingness to revisit the importance of design-based measures in future iterations of the codes. But, insofar as any country can claim to be “the safest place to be online”, the jury is still out on when - and whether - the UK will reach that goal.