Online Safety Act Network

Disinformation and disorder: the limits of the Online Safety Act

Background

The tragic murder of three children – and the injuring of many more – by a 17-year-old attacker at a summer holiday dance class in Southport on 29th July has been the trigger for riots and public disorder in towns and cities across the UK. Initial violence in Southport, London and Manchester was whipped up by mis- and disinformation spread on social media, particularly on X, about the identity and background of the attacker, with far-right groups then using a variety of platforms to organise the riots. (Analysis from Hope Not Hate provides useful context for these initial disturbances, a report from ISD contains further detail, and the BBC reporter Marianna Spring has looked specifically at the role of social media.)

Further unrest followed last weekend (3rd and 4th August) and continued through the week, with violent demonstrations and riots being arranged – again via social media – at mosques and places thought to be connected with immigration around the country. On 8th August, Cheshire Police announced the arrest of a woman on suspicion of publishing written material to stir up racial hatred (section 19 of the Public Order Act 1986) and false communications (section 179 of the Online Safety Act 2023) in relation to a social media post about the identity of the attacker in the Southport murders. Others have been charged with a range of offences, including public order and communications offences; three sentences were handed down on Friday (9th August) for encouraging violence and disorder on social media. (Prof Lorna Woods spoke to ITV News about the latter developments; you can view her comments here.)

In response to the initial violence, the Prime Minister Keir Starmer said in a statement following a meeting with police and security leaders: “to large social media companies, violent disorder, clearly whipped up online: that is also a crime and it’s happening on your premises.”

Several media articles since then – such as this piece from Mark Sellman in the Times and this criticism from Sadiq Khan reported in the Guardian – have looked at whether the Online Safety Act (OSA) provides enough of a route for dealing with the issues here. The Prime Minister subsequently confirmed that the Act will be reviewed.

In this short explainer, we set out the relevant provisions in the Act and highlight the gaps.

Is misinformation and disinformation covered by the Online Safety Act?

The short answer is no. There were calls during the Bill's passage through Parliament – including from the Joint Committee on pre-legislative scrutiny – for misinformation and disinformation to be covered by the legislation. The Report of the Committee observed:

“The viral spread of misinformation and disinformation poses a serious threat to societies around the world. Media literacy is not a standalone solution. We have heard how small numbers of people are able to leverage online services’ functionality to spread disinformation virally and use recommendation tools to attract people to ever more extreme behaviour. This has resulted in large scale harm, including deaths from COVID-19, from fake medical cures, and from violence. We recommend content neutral safety by design requirements, set out as minimum standards in mandatory codes of practice. These will be a vital part of tackling regulated content and activity that creates a risk of societal harm, especially the spread of disinformation.” (page 38)

Content neutral design obligations have been recognised as a more proportionate approach to dealing with content that does not contravene the criminal law – including by two successive UN Special Rapporteurs on Freedom of Expression (David Kaye and Irene Khan). Although the Act does refer to services being safe by design in s 1(3), there was no requirement for a design code. The only express mentions of misinformation in the Act are the establishment of a Committee to advise Ofcom (section 152) and changes to Ofcom’s media literacy duties (section 165). Although the Government’s indicative list of content harmful to children did include health misinformation, this category of content was not specifically included in the Act (though misinformation harmful to children could be caught by the general children’s rules). A new Foreign Interference Offence – introduced in the National Security Act 2023 and included in the list of priority offences in Schedule 7 of the OSA – covers some aspects of electoral disinformation but, as we explain further below, would likely have had limited impact here.

What levers does Ofcom have under the Online Safety Act?

We break down here the social media aspects that played into the unrest in Southport and consider whether Ofcom has any levers to respond. A caveat in terms of the specifics of this case is that, while the regulator has its powers under the OSA, much of the Act is not yet in force: the consultations on implementation are ongoing and the relevant codes of practice against which Ofcom can enforce are not yet in place. Ofcom has, however, written an open letter suggesting that companies take action before the measures come into force.

Given that the Act defines the obligations on companies by reference to types of content (essentially “illegal” and “harmful to children”, both sub-divided into priority and non-designated categories), we need to understand what types of content were being made available in order to determine whether the regime is triggered. We should emphasise, however, that we have not engaged with the social media material directly but have relied on media reports; consequently, our analysis is an estimate of how the regime might apply in similar circumstances. The analysis of the activity that led up to the violence in Southport and elsewhere has pointed to the following components:

  • Unsubstantiated claims about the identity and motives of the attacker
  • Sharing of claims with incitement to action by far-right accounts and influencers with large followings (some previously banned by X, including Tommy Robinson and Andrew Tate)
  • Algorithmic promotion/virality of claims and calls to action
  • Content navigation features, e.g., trending topics and hashtags
  • Anonymous/disposable accounts
  • Livestreaming functionality
  • Soi-disant news outlets, e.g., Channel 3 “News” website
  • Use of Telegram and other closed messaging platforms by far-right groups to share details of planned protests and incite violence

There are a number of issues. The first is the item-by-item approach in the Act. The Act does not deal with situations like this as one event (or one event across many platforms) but as a series of items of content (albeit content potentially seen in context). This means that different types of content that are part of the same story could be treated differently – whether as priority illegal content; non-designated illegal content; content harmful to children or other content. The response may then be patchy or uneven across these content types.

The later content directed at organising protests, if not riots, is more likely to trigger the regime via the priority offences, notably the public order offences listed in Schedule 7 (especially as threats can be implied) or, possibly, the Schedule 5 terrorism offences. Services have a number of duties in relation to priority illegal content: to have systems in place, or to run the service, so as to prevent individuals from encountering it (s 10(2)(a)); to have systems to minimise the length of time it is available (s 10(3)(a)); and to mitigate the risk that the service is being used to facilitate a relevant priority offence (s 10(2)(b)). Racially antagonistic and divisive rhetoric that can stir up conflict and provoke reaction – e.g., Twitter posts urging people to attend a demonstration at a long-established Jewish community under the banner of a ‘#Summer of Hate’ – has been found to violate s 19 of the Public Order Act 1986 (in effect, publishing inflammatory material), which is a priority offence under Sch 7 of the OSA. Provocative but more abstract posts, lacking direct calls to action, even if implying aggression (e.g., ‘mass deportations’), may present a more ambiguous case.

Some abusive content, if not caught by the Public Order Act priority offences, might still be illegal content by virtue of s 127 of the Communications Act 2003. Cases in relation to that offence have caught Nazi themes and imagery; see, for instance, Blagg’s and Meechan’s convictions. The obligations on the service would be lower: a general duty to mitigate and to have a system in place to take content down (ss 10(2)(c) and 10(3)(b)). Some posts may well not trigger the criminal law at all – for example, posts claiming that ‘Diversity is a hate crime against white people’.

The circulation of false statements as such (e.g., the name; the “fact” that the perpetrator was an immigrant) – which could be seen as the starting point for much of this – would only be caught if they fall within either the foreign interference offence (a priority offence) or the new false communications offence (s 179, a non-designated offence, in force since 31 January 2024), which replaces section 1(1)(a)(iii) of the Malicious Communications Act 1988 and (for England, Wales and Northern Ireland) section 127(2)(a) and (b) of the Communications Act 2003.

Foreign interference can apply to disinformation but requires a number of additional elements, not least the involvement of a foreign power and the requirement that the activities have “an interference effect”, some of which may be hard to spot or prove. Although there are some claims of foreign state involvement, as yet it is far from clear that these conditions are satisfied here.

The false communications offence only kicks in when the person posting knows the information is untrue (rather than simply not caring one way or the other). The original poster would probably know they were making things up, but those who shared the content onwards would not be in the same position. This suggests that the false communications offence is not a good fit for situations where a significant part of the problem is the virality of content. Moreover, the person must intend to cause non-trivial harm (an undefined standard) to a likely audience. This raises the question of whether, when posts are directed at one group but capable of being seen by another group likely to be harmed by them, the sender intends that group to be so harmed; the cases so far suggest that this is the case. There are reports that a woman has been arrested in relation to a social media post misidentifying the attacker in the Southport murders, but there is no detail as to how her post maps onto the offence (she is also suspected of having committed public order offences); she has claimed that she did not know the information was false. Of course, the fact that someone is arrested on suspicion of committing an offence does not mean a prosecution will follow and succeed, but the fact that she has been arrested suggests the OSA might be more broadly applicable to disinformation than initially apparent.

One of the significant problems in this area – whether we are talking about priority or non-designated criminal content – is that the criminal law is being used to define the threshold for regulatory intervention. This is an issue because the offences are not directed at the content itself but at the behaviour behind the content; the content does not constitute the entirety of the offence. Rather, there must be a mental element and an assessment of any relevant defences. For example, the mental element required for s 19 of the Public Order Act 1986 requires more than recklessness. This problem has been underlined by Ofcom’s approach in its Illegal Content Judgements Guidance for the OSA, which emphasises ex post assessment in the context of takedown rather than considering how the indicators of likely criminal content might be understood ex ante – an understanding that is important to enable interventions earlier in the distribution chain (e.g., interventions around accounts or weighting in recommender tools). (Our analysis of this issue is here.)

Another issue concerns content that on its own is not problematic but becomes so because of the volume and amplification of that content. This weakness of an item-by-item approach to content has been noted before, especially in relation to children (see, for example, Carnegie UK’s response to the draft Bill, paras 27-28). Ofcom has addressed this issue in the context of content harmful to children, but it is much more difficult to do so in the context of criminal content, where the offences are defined in statute. It would only be possible to bring this content within the regime if the offence (or its interpretation) allows the accumulation to be taken into account in determining whether the criminal threshold is met. This sort of material might, however, have fallen within the “harmful but legal” provisions that were replaced during the Bill’s passage by the Triple Shield.

The regime does not deal with the capabilities of platforms to amplify content and to connect groups. In general, the safety-by-design approach in Ofcom’s consultations has been weak. For example, the impact of recommender algorithms is only addressed in relation to children’s content. Hashtags, anonymous accounts and other functionalities are not covered, although the risk register recognises the role that they can play (and see Ofcom’s research on search engines).

For non-criminal content that is not content harmful to children, the main mechanism for dealing with it is through the provisions on terms of service. Section 72(3) of the OSA in effect provides that Category 1 services must apply their terms of service and do so consistently. We do not yet know which services will fall within Category 1, but clearly this obligation will not apply to all user-to-user services and does not apply to any search services. Furthermore, services may only take action against content if their terms of service so provide (s 71). Notably, the Act prescribes no minimum content for terms of service, so it is possible that services might not deal with issues of misinformation at all. It also does not stop providers from reducing the level of protection (as, for example, X did). Ofcom could do nothing in this scenario.

There is no crisis or emergency response mechanism, though there is a provision dealing with “special circumstances” (s 175). This is an odd provision, as the initiative lies with the Secretary of State and Ofcom may only use its media literacy functions. Nonetheless, the provision does give Ofcom the power to issue a notice requiring a company to make a public statement about how that service is dealing with a specified threat. For example, where there is a threat to the safety of the public, the Secretary of State could issue a direction under s 175 requiring Ofcom to give priority to ensuring that misinformation and disinformation are effectively tackled when exercising its media literacy functions, and to require service providers to report on the action they are taking to address, say, rioting in the streets – assuming such circumstances qualify as “special” under the Act. The public statement requirement does not, however, trump the prohibition in s 71.

What more do we need?

The riots fuelled by social media misinformation following the Southport attack have highlighted issues around algorithmic promotion and content virality. The OSA does not directly cover misinformation and disinformation, calls during its passage for such coverage having been rejected. With consultations ongoing and no codes of practice yet in force, this gap limits Ofcom’s ability to respond. Moreover, the Act’s item-by-item approach to content leads to an uneven regulatory response: not all harmful content triggers intervention, and certain posts may evade the regime altogether. Regulatory intervention is also hampered by its reliance on criminal law thresholds that are difficult to apply ex ante.

The new Government had already indicated that it would look again at the OSA, particularly with regard to the protection of children. In the light of the events of the last week, there is a case for revisiting the wider application of the Act’s provisions, addressing the weaknesses in current coverage that would tie Ofcom’s hands in the event of a similar occurrence, and looking again at the parts of the original OSB that were removed. The Prime Minister has suggested that such a review might now be on the table.

We shall consider further how to improve the OSA in the light of the disorder – and indeed the wider gaps in the Act – to feed into any future review. Initial headline recommendations might include the following:

  • Establish the Disinformation Committee
  • Review how the criminal threshold is understood to allow it to fit better with a systems approach and ex ante design-based mitigations
  • Introduce a general duty addressing social media platforms’ capabilities to amplify content, which are not adequately covered by the Act
  • Introduce comprehensive safety-by-design requirements; and
  • Build stronger crisis mechanisms into the Act, considering both crisis response - as seen in the international sphere already (e.g., GIFCT Content Incident Protocol) - and crisis-specific risk assessment and mitigation processes.