Response to Ofcom Illegal Harms Consultation
Our full response to Ofcom’s first Online Safety Act consultation on illegal harms has been submitted. It is available as a PDF below, along with some of the supporting material submitted alongside it.
It is to Ofcom’s immense credit that this first consultation (“Protecting people from illegal harms online”) was produced so quickly after the Online Safety Act received Royal Assent. The protracted passage of the Bill through Parliament undoubtedly afforded much time to prepare in some areas – for example, undertaking calls for evidence and commissioning research, recruiting staff and building up expertise. But it also meant that there were many legislative moving parts, political U-turns and last-minute policy shifts that this consultation has been unable to accommodate. This is evident not just in some of the gaps – many of which Ofcom acknowledges – but also in the very different tone and areas of emphasis between some of the initial sections of the consultation (e.g. the overview, approach and background (volume 1) documents) and the detail that follows.
However, in some fundamental areas (such as the differential treatment of large and small services, which we discuss in detail in our full response), the approach Ofcom has taken is at odds with the intent of other parts of the final Act. In many others, the narrative tone and the decisions do not align with the expectations of Parliament (particularly the House of Lords, where scrutiny of the Bill was exemplary in its detail and its cross-party collaborative approach) or with the reassurances offered by the Government in many of the debates, as we demonstrate by reference to Hansard.
Ofcom has stressed – in public stakeholder meetings and in private sessions with our Network – that it has had to trade off speed (getting the consultation out) against comprehensiveness (getting everything right first time). The consultation frequently notes that the draft codes of practice are first iterations and will be updated in light of the evidence received through this consultation and as new information emerges. Ofcom’s information-gathering powers only came into effect via a commencement order from 10 January, and it is clear from statements made by its team and senior management that they see these powers as a route to amassing much more of the evidence needed to fill the gaps and/or provide more evidence-based measures for further versions of the codes. That will take time.
We fully appreciate the challenges here. However, Ofcom’s Chief Executive wrote to Peers at the start of the House of Lords Committee Stage of the Online Safety Bill in April 2023 and reassured them that, in relation to the phase one consultations, including on illegal harms, Ofcom could “move very quickly here because this part of the Bill has remained unchanged for some time and the illegal harms are defined in existing law. The Government’s and Parliament’s intentions about what they want platforms to achieve are clear. We launched a call for evidence on illegal harms in July 2022, and are well-advanced in gathering the necessary evidence, including on consumer experiences of those harms, drivers of risk, and the systems and processes available to services to address them.” (Our emphasis)
These reassurances – particularly on the last point – are not borne out by the material produced. We therefore have significant concerns that the approach that Ofcom has chosen to take – and the codes that have been drafted as a result – amounts to a missed opportunity to start the new online safety regime off on a robust footing. Our concerns are shared by many of those in our network, including those who co-signed our recent public statement.
In fact, we would go so far as to query whether the approach – taken in the round – that is set out in Ofcom’s proposals will even deliver the first step required by Section 1(1) of the Act: providing for a “new regulatory framework which has the general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom”.
In short, Ofcom has not been bold enough. Arturo Bejar, the Meta whistleblower who has recently testified to Congress, observed: “Social media companies are not going to start addressing the harm they enable for teenagers on their own. They need to be compelled by regulators and policy makers to be transparent about these harms and what they are doing to address them.”
Ofcom has made a number of choices in how it is approaching the legislative framework which it has not fully justified and which, we argue, are not required by the language of the Act; there are inconsistencies between its analysis of the harms it has evidenced and the mitigation measures it proposes (see our comparison table here and attached as a PDF below); and there are some significant judgements (such as the primacy of costs in its approach to proportionality) on which it is not consulting but which fundamentally shape the proposals that flow from them.
Moreover, we are concerned that the framework as proposed at this stage will not in fact be “iterated” in subsequent versions of the codes: the combination of the takedown-focused illegal content judgements guidance and the rules-based, tick-box approach to governance and compliance proposed here will become the baseline for the regime for years to come. We are also concerned by the piecemeal basis on which Ofcom has approached the selection of measures contained in the codes – adding only those for which there is enough evidence – rather than stepping back to consider the risk-based outcome the legislation compels companies to strive to achieve. There is a significant risk that the chance to introduce (as Parliament intended) a systemic regulatory approach, rooted in risk assessment and “safety by design” principles, will be lost.
We think, however, that there is a relatively simple solution to many of the inter-related issues we flag in our response, one which builds on the existing proposals without requiring restructuring or revision. That is to add a measure to the draft codes placing a risk-based, outcome-focused requirement on platforms of all sizes to put in place a system for identifying appropriate measures to address the risks arising from the design and functionality of their service, as identified in their risk assessments, proportionate to the size and type of the service and bearing in mind best practice and the state of the art. We set out proposed wording for this in our full response at page 19.
This, we contend, is in line with the parameters of the Act and, just as importantly, justified by Ofcom’s collected evidence of harm. It would provide a stop-gap, catch-all measure while Ofcom continues – via its information-gathering powers – to collect evidence on specific measures that work to mitigate harm, while itself helping to generate some of that evidence through companies’ compliance activities. Adopting this approach will allow Ofcom to bridge the gap between what is evidenced now and our future knowledge base without exposing users to unnecessary risk of harm. We very much hope that it will do so.