Online Safety Act Network

Ofcom's protection of children consultation: our summary response

Ofcom’s protection of children consultation closed on Wednesday 17 July. In this blog post, we summarise our response to its proposals and reiterate the recommendation we made in response to its previous illegal harms consultation: that it deliver a more outcome-focused approach to risk mitigation in its draft codes of practice. Our full response is available here.

Background

Ofcom’s protection of children consultation is the second major plank of its implementation of the regulatory regime that it will be enforcing under the Online Safety Act 2023. The first - the illegal harms consultation - closed in February 2024 and Ofcom’s response has not yet been published. The protection of children proposals relate to the Online Safety Act’s child safety duties (section 12) and children’s risk assessment duties (section 11); Ofcom is consulting on two draft codes of practice (for user-to-user services and for search) along with draft guidance for risk assessment and for children’s access assessments.

Ofcom refers to its attempts to provide alignment and consistency between the two consultations at a number of points in the documentation. For example, they have “sought to align our draft Children’s Risk Assessment Guidance with our draft Illegal Harms Risk Assessment Guidance where possible” (Summary; p10); and “our approach [to governance] is consistent with our Illegal Harms Consultation. This means service providers who must comply with both illegal content safety duties and children’s safety duties can choose to adopt a single process that covers both areas” (Summary; p12). Many of the measures proposed in the children’s codes mirror those in the illegal harms codes. (Proposed codes at a Glance)

We raised a number of concerns about the approach taken by Ofcom in its illegal harms proposals, not least as we felt that the strategic choices they had taken risked setting the regime off on a weak footing that would not be easily revised in subsequent iterations of the codes of practice. Our full response to the illegal harms consultation is here and a public statement, co-signed with a number of the organisations in our network, is here. Those concerns remain - not least, as the mirroring of the approaches and broadly similar measures from the illegal harms consultation bakes the same weaknesses into this one.

Ofcom - in volume 1 - sets out that the feedback it received on the illegal harms consultation may result in a changed approach to some elements of the illegal harms proposals - and, consequently, to the children’s proposals which mirror them. This is necessary if Ofcom is to maintain consistency between the two parts of the regime:

“To ensure a coherent online safety regime and to help services understand their responsibilities, this consultation follows, as far as possible, a consistent approach with the Illegal Harms Consultation and Part 5 Consultation. We are currently carefully considering and analysing the responses received to these consultations.

Some of the feedback we have received on our previous proposals may also be relevant to the approach currently proposed in this consultation. Where that is the case, we will take into account the feedback on our regulatory approach in the round to ensure that our approach remains consistent across our consultations. For example, several respondents to the Illegal Harms Consultation expressed concern that under the Act services which follow our Codes of Practice will be deemed compliant with the relevant safety duties even if there are risks in their risk assessment which are not fully addressed by Ofcom’s proposed measures. We are considering this issue carefully and will provide a detailed response covering both the Illegal Harms and Protection of Children proposals following this consultation.” (Volume 1; p20)

This therefore makes responses to this consultation both more straightforward and more challenging at the same time. Straightforward in the sense that much of our analysis and feedback is the same; we provide cross-references to our previous submission and supporting evidence where appropriate but, in many cases, the substantive commentary and analysis is restated here. It is more challenging, however, in that we do not know how extensive Ofcom’s revisions will be as a result of the illegal harms consultation, nor whether they will be (relatively speaking) superficial (eg additional measures added to the codes of practice) or fundamental and transformative to the regime as a whole (eg a more comprehensive approach to safety by design, or a different approach to governance and risk assessment).

We have therefore chosen in our full response to emphasise, where applicable, the same points we made in response to the earlier consultation, linking them to material from the current consultation to show that using the same (consistent) approach will lead to - in our view - similar (limited) regulatory outcomes. We also question whether this truly does deliver the “strongest protections for children” promised by the Government and enshrined in the Act at section 1(3)(b)(i). We hope that Ofcom will therefore address our feedback in the round when it responds to both consultations later in the year. Our work and advocacy through the legislative process, and now during the implementation phase, has only ever been with the intention of ensuring robust, outcomes-focused regulatory interventions that make the UK the safest place to be online.

Our recommendation

In our previous response, we made a recommendation for an amendment to the illegal harms codes of practice that - we felt - would resolve a number of the structural issues within Ofcom’s approach, including the shortcomings of the evidential threshold it had set itself before measures could be included in the codes, its approach to proportionality, the lack of a true focus on safety by design biting at the level of systems and the limitations of its risk assessment guidance. We do not know whether this suggestion has been taken on board by Ofcom nor whether a measure like this will appear in subsequent iterations of the codes. But we remain of the view that it is the most efficient and effective way to resolve the similar issues we have identified in this consultation and to ensure that there is a step change in the safety of users on all regulated services as soon as practicably possible. This approach is also, in our view, very much aligned with the intentions behind the Government’s policy goals and cross-party Parliamentary support for, and amendments to, the Bill during its passage.

We suggest the following wording is inserted in the draft codes for both illegal harms and protecting children, between the section on governance and accountability and the section on content moderation. This follows the order of the areas in which measures should be taken, as identified in sections 10(4) and 27(4) (illegal harms duties) and sections 12(8) and 29(4) (child safety duties).

Design of functionalities, algorithms and other features

Product testing

For all services, suitable and sufficient product testing should be carried out during the design and development of functionalities, algorithms and other features to identify whether those features are likely to contribute to the risk of harm arising from illegal content on the service.

The results of this product testing should be a core input to all services’ risk assessments.

Mitigating measures

For all services, measures should be taken to respond to the risks identified in the risk assessment, including but not limited to: providing extra tools and functionalities, including additional layers of moderation or pre-screening; redesigning the features associated with the risks; limiting access to them where appropriate; or, where the risk of harm is sufficiently severe, withdrawing the function, algorithm or other feature.

Decisions taken on mitigating measures, as part of the product design process or as a response to issues arising from the risk assessment, should be recorded. (Note: this would be included in the record-keeping duties under section 23 (U2U) and section 34 (search).)

Monitoring and measurement

All services should develop appropriate metrics to measure the effectiveness of the mitigating measures taken in reducing the risk of harm identified in the risk assessment. The results of these measurements should feed back into the risk assessment.

The obligation here is to have a mechanism to consider how to mitigate, rather than requiring the use of particular technologies or the introduction of pre-determined safeguards in relation to technologies. Significantly, and given the proposal is based on the duty of care, the measure of success is not wholly about output measures (though they may indicate whether an effective process is in place) but about the level of care found in outcome-orientated processes and choices. Assessment is about the features taken together and not just an individual item in isolation.

Given that, the outcome may not be wholly successful; what is important, however, is the recognition of any such shortfall and the adaptation of measures in response to it. It may be that the language of the obligation should recognise that the measures proposed should be appropriate or effective in relation to the identified risk, bearing in mind the objective sought to be achieved (in the sense that an arguable claim can be made about appropriateness rather than there being pre-existing specific evidence on the point). We note that Ofcom has proposed criteria for assessing the effectiveness of age assurance (technical accuracy, robustness, reliability and fairness) that are more about outcomes than specific outputs; analogous criteria could be introduced to assess the processes adopted to identify harms and to select appropriate mitigation measures. Significantly, the extent of the testing and assessing obligation should be proportionate, bearing in mind the provider’s resources, reach and the severity of the likely impact on groups of users. A lack of reach and a less complex internal environment should of course mean that, in any event, the process will be less onerous for smaller providers than for larger ones.

Before setting out the detail of our full response, we felt it was important to acknowledge a few particular things that set the children’s consultation apart from the illegal harms consultation that preceded it.

Some positives

There is a greater sense of consistency and coherence between the constituent parts of this consultation. The illegal harms consultation felt like it had been rushed in some places - understandable given how quickly it was published following Royal Assent for the Online Safety Act - leading to gaps, differences in tone and approach, and internal inconsistencies between different parts of the documentation. The children’s consultation - while still overly long and repetitive in places - is more coherent and, as a result, easier to navigate.

Ofcom has worked hard to respond to feedback from civil society on its handling of the first consultation. The summary document is a welcome “way in” for small organisations looking to engage with the detail - though it leaves out the things that Ofcom has not addressed, including underage use, development stages or risk functions - and there is more (though not a huge amount of) acknowledgement of the evidence from civil society organisations, which acts as a counterbalance to the evidence from industry and tech platforms. That said, the consultation is still very long (1300+ pages vs 1900+) and the terms in which feedback is requested are fixed by the questions that Ofcom chooses to ask about the specific proposals, rather than being open in the sense of seeking views on the overall framework (within which those specific proposals sit) and its potential effectiveness.

As mentioned above, many of the issues that were raised in the illegal harms consultation have been acknowledged - though they have not been worked through into the new proposals.

There is evidence in some parts of the consultation (notably the children-specific aspects) of a shift away from prescriptive, “tick-box” approaches to compliance towards one where the responsibility is placed on service providers to exercise a duty of care to the children who are using their platforms. This is a very welcome shift. However, in terms of upholding terms and conditions on age, the proposal is to measure compliance against a tick-box consistency metric rather than against outcomes.

There is also a welcome warning to services - contained in volume 4 on risk assessment - that if they are “already implementing measures such that they assess their risk level to be low or negligible, they should continue doing so. Stopping implementing such measures or changing them may constitute a significant change (see Step 4 below) and may increase their risk level.” (volume 4, pp56-57). This (to an extent) addresses concerns raised in response to the first consultation that the tick-box, prescriptive approach to measures in the codes - aligned with the safe harbour promise - could mean services deciding to stop using existing protective or mitigating measures because they were no longer required for compliance with the regulation. But there is still no incentive for those services to make things any better after the codes come into force.

Some caveats

There is no doubt that the combination of the age assurance measures and the new measures relating to recommender systems are significant steps forward in increasing the protections for children, particularly in reducing their exposure to - and the impact of - Primary Priority Content and Priority Content that is harmful and, in some cases, life-threatening. But the limitations of the measures in addressing wider safety by design factors remain, compounded by the safe-harbour compliance threshold, which does not prioritise overall improvements in the protection of children. For example:

  1. The age gating requirement sits on top of all the other obligations and is the only substantive new measure to protect children (and, as such, a single point of failure). The risk assessment obligations in this consultation are no more stringent than those proposed in the illegal harms consultation, nor are services required to undertake any significant redesign of their services as a result of the risks that may be identified. This means that, by keeping children off their platform, services can obtain “safe harbour” while their obligation - as set out in section 1 of the Act, to “design and operate” safer services so that a “higher standard of protection is provided for children than for adults” - remains largely unaddressed from a “by design” perspective, and not all the identified risks are tackled. This falls short of the section 1 requirement in the Act.
  2. Measures that address the recommender system are quite far down the product development and design process. A more robust “safety by design” approach, allied with rigorous risk assessment and product safety testing, would be looking at many more aspects of the overall service before then. (We would refer here to the four-stage model, developed by Prof Lorna Woods in work for Carnegie UK; see p9 here.)
  3. There is a significant gap in the lack of any measure in the codes relating to livestreaming, not least because the risk register identifies it as a functionality that causes harm in a number of areas covered by the children’s safety duty, and because DCMS, back in 2021, specifically included practical guidance for companies on livestreaming in its “Principles of Safer Online Platform Design”. Similar gaps, which we cover further in our full response, are evident with location information, large group messaging and ephemeral messaging, which Ofcom identifies as carrying specific risks of facilitating harm to children but which are not covered by any measures.
  4. While there is more evidence and commentary presented here by Ofcom than previously on the influence of the business model on harms to children, particularly the financial incentives for influencers propagating harmful content or views, there are no new measures proposed to address this.

Some ongoing concerns

As noted above, because so many of these protection of children proposals mirror those in the illegal harms consultation, we have a number of the same concerns.

We noted that the illegal harms consultation frequently mentioned that the draft codes of practice were first iterations; the same is true here. One of the reasons given for this previously was that Ofcom’s information-gathering powers only came into effect via a commencement order from 10 January - too late for the first consultation - but it was clear from statements by Ofcom senior management during the previous consultation that they saw these powers as a route to amassing much more of the evidence needed to fill in the gaps and/or provide more evidence-based measures for further versions of the codes.

In the short timescales between the commencement of the information-gathering powers and the publication of the children’s consultation, we would not expect material evidence to have been gathered to influence the proposals. However, we were surprised that these information-gathering powers had not even been used by the time the consultation was issued, especially given the number of areas that Ofcom flags as lacking evidence. Given that lack of evidence is frequently cited as a reason for not recommending specific measures (and that lack of evidence does not mean lack of harm), this further delays the production of more robust iterations of the codes. Moreover, as we have noted in our full response, there is much evidence already amassed by Ofcom in relation to harm that does not lead to a requirement on companies to mitigate that harm. We refer Ofcom here to the advisory from the US Surgeon General on the need for urgent action to minimise harms to children and adolescents:

“The current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents. At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents. We must acknowledge the growing body of research about potential harms, increase our collective understanding of the risks associated with social media use, and urgently take action to create safe and healthy digital environments that minimize harm and safeguard children’s and adolescents’ mental health and well-being during critical stages of development.” (Social Media and Youth Mental Health: May 2023, p4)

We remain concerned in that regard that Ofcom has not been bold enough. Arturo Bejar, the Meta whistleblower who testified to the US Congress, observed: “Social media companies are not going to start addressing the harm they enable for teenagers on their own. They need to be compelled by regulators and policy makers to be transparent about these harms and what they are doing to address them.” See also Bejar’s interview at the recent FOSI conference in Paris.

Also as previously, we remain concerned that Ofcom has made a number of choices in how it is approaching the legislative framework that it has not fully justified and which, we argue, are not required by the language of the Act; there are inconsistencies between its analysis of the harms it has evidenced and the mitigation measures it proposes (see an updated version of our table comparing measures proposed in both the illegal harms and children’s codes); and there are some significant judgements (such as the primacy of costs in its proportionality approach) on which it is not consulting but which fundamentally affect the shape of the proposals that flow from them. With regard to Ofcom’s perspective on costs, these are largely based on companies having to change things in response to the need for regulatory compliance (eg existing market participants); they do not take into account the impact on new entrants, who would be in a position to design in better safety at (presumably) lower cost but, under these proposals, would currently have no incentive to do so.

Moreover, until we see evidence to the contrary in Ofcom’s response to the illegal harms consultation, we are concerned that the framework as proposed at this stage will not be “iterated” in subsequent versions of the codes: the combination of the focus on content-moderation and the rules-based, tick-box approach to governance and compliance is likely to become the baseline for the regime for years to come.

The piecemeal basis on which Ofcom has approached the selection of measures contained in the codes – only adding those where there is enough evidence – rather than stepping back to consider the risk-based outcome the legislation compels companies to strive to achieve, continues to concern us. Unless the combined response to the illegal harms consultation and this consultation suggests a significant shift in approach, the chance to introduce (as Parliament intended) a systemic regulatory approach, rooted in risk assessment and “safety by design” principles, will be lost for another generation.

Evidence, risk and the precautionary principle - a case study: Generative AI

There are many studies that identify the risks posed to children by GenAI and immersive technologies. Indeed, Ofcom recognises this and provides the following summary in volume 3, with links to research studies:

“There is evidence which shows that GenAI can facilitate the creation of content harmful to children, including pornography, content promoting eating disorders, and bullying content, which is then shared on U2U services. Evidence shows there has been a pronounced increase in the availability of AI-generated pornography online, particularly on pornography services which are dedicated to AI-generated pornography and which could be accessed by children. We have found evidence showing that GenAI models can create eating disorder content, which has in some instances been shared on U2U services such as eating disorder discussion forums. There is also evidence of GenAI models being used to create content to bully and threaten individuals including ‘fakes’ of individual’s voices, which is shared on U2U services and could be encountered by children.

There is also emerging evidence indicating that GenAI models can create other kinds of harmful content which could be shared on U2U services and encountered by children. For example, audio and language GenAI models can produce racist, transphobic, violent remarks and religious biases (‘abuse and hate’) and engage in self-harm dialogue, even where unsolicited (‘suicide and self-harm')”

Prior to setting out this summary, Ofcom had noted that “children are early adopters of new technologies, and GenAI is no exception”. So, one would expect that there would be a measure requiring companies that use GenAI in their products and services, or that host content that may have been created by GenAI, to take account of their risk assessment relating to the harms that this might cause and take appropriate steps - especially as this would be a new feature and not already built in.

But there is no such measure. Instead, despite the evidence of harm that Ofcom has already provided, it says that “the evidence base for children’s interaction with harmful AI-generated content on U2U and search services will be limited”. It goes on: “We are also aware that the risks associated with GenAI models may not yet be fully known. However, given the rapid pace at which the technology is evolving, we must not underestimate the expected risks associated with GenAI for children. As new evidence emerges over the coming years, we will update this Register appropriately.”

There is evidence of harm occurring now, but Ofcom suggests doing nothing until new evidence emerges over “the coming years”. This is absolutely where a precautionary approach - as proposed by our recommended code of practice measure - would be appropriate, putting the responsibility on services where GenAI might create harm to children to take measures to prevent that harm. This approach would, in itself, then help to create an evidence base on which Ofcom could draw to develop best-practice recommendations for future codified measures, resulting in a positive feedback loop focused on improving safety, rather than a void in which harm will continue to proliferate and evolve until such time as Ofcom has defined the appropriate response. Not only would this limit harm, it would also save Ofcom time and resources down the line.

In the context of the concerns - and case study - above, we note here that DCMS put forward its own definition of “safety by design” in 2021 in its guidance for companies on the “principles of safer online platform design”:

“Safety by design is the process of designing an online platform to reduce the risk of harm to those who use it. Safety by design is preventative. It considers user safety throughout the development of a service, rather than in response to harms that have occurred.”

Finally, given that two-thirds of the 36 U2U measures are direct “lift and shift” copies of those in the illegal harms consultation, it is debatable whether the codes here deliver the “higher protection” to children promised by the Government, particularly (as described above) when the age-gating measures are set to one side. Nor are they sufficiently future-proofed to provide preventative protection as technology evolves.

For all the reasons above, we are urging Ofcom to adopt the measure we set out above in its next iteration of the codes of practice for both illegal harms and the child protection duties: it would return the obligation to service providers to mitigate risk while Ofcom assesses evidence on the effectiveness of new measures for inclusion in future iterations of the codes, and would be an effective way to future-proof those same codes as new evidence of harms continues to emerge.

