Online platforms must begin assessing whether their services expose users to illegal material by 16 March 2025 or face financial penalties as the Online Safety Act (OSA) begins taking effect.
Ofcom, the regulator enforcing the UK’s internet safety law, published its final codes of practice for how firms should deal with illegal online content on Monday.
Platforms have three months to carry out risk assessments identifying potential harms on their services or they could be fined up to 10% of their global turnover.
Ofcom head Dame Melanie Dawes told BBC News this was the “last chance” for industry to make changes.
“If they don’t start to seriously change the way they operate their services, then I think those demands for things like bans for children on social media are going to get more and more vigorous,” she said.
“I’m asking the industry now to get moving, and if they don’t they will be hearing from us with enforcement action from March.”
But critics say the OSA fails to tackle a wide range of harms for children.
Andy Burrows, head of the Molly Rose Foundation, said the organisation was “astonished and disappointed” by a lack of specific, targeted measures for platforms on dealing with suicide and self-harm material in the guidance.
“Robust regulation remains the best way to tackle illegal content, but it simply isn’t acceptable for the regulator to take a gradualist approach to immediate threats to life,” he said.
Under Ofcom’s codes, platforms will need to identify if, where and how illegal content might appear on their services and ways they will stop it reaching users.
According to the OSA, this includes content relating to child sexual abuse material (CSAM), controlling or coercive behaviour, extreme sexual violence, promoting or facilitating suicide and self-harm.
Ofcom began consulting on its illegal content codes and guidance in November 2023.
It says it has now “strengthened” its guidance for tech firms in several areas.
This includes clarifying requirements to remove intimate image abuse content, and helping guide firms on how to identify and remove material related to women being coerced into sex work.
Ofcom codes
Some of the child safety features required by Ofcom’s codes include ensuring that social media platforms stop suggesting people befriend children’s accounts, as well as warning them about risks of sharing personal information.
Certain platforms must also use a technology called hash-matching to detect CSAM – a requirement that now applies to smaller file hosting and storage sites.
Hash matching is where media is given a unique digital signature which can be checked against hashes belonging to known content – in this case, databases of known CSAM.
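The lookup step described above can be sketched in a few lines. This is an illustrative sketch only: production systems use perceptual hashes (such as Microsoft's PhotoDNA) that tolerate resizing and re-encoding, whereas a plain SHA-256 digest, used here for simplicity, only matches byte-identical files. The database contents and function names are hypothetical.

```python
import hashlib

# Hypothetical database of digital signatures ("hashes") of known
# flagged content. Real deployments check against databases of known
# CSAM maintained by bodies such as the IWF or NCMEC.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-file-bytes").hexdigest(),
}

def is_known_content(file_bytes: bytes) -> bool:
    """Hash the uploaded media and check it against the known-content database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

print(is_known_content(b"example-known-file-bytes"))  # True: signature matches
print(is_known_content(b"some-other-upload"))         # False: not in database
```

Because only hashes are compared, the platform never needs to store or transmit the illegal material itself to perform the check.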
Many large tech firms have already brought in safety measures for teenage users, along with controls giving parents more oversight of their social media activity, in a bid to tackle dangers for teens and pre-empt regulation.
For instance, on Facebook, Instagram and Snapchat, users under the age of 18 cannot be discovered in search or messaged by accounts they do not follow.
In October, Instagram also started blocking some screenshots in direct messages to try to combat sextortion attempts – which experts have warned are on the rise, often targeting young men.
‘Snail’s pace’
Concerns have been raised throughout the OSA’s journey over its rules applying to a huge number of varied online services – with campaigners also frequently warning about the privacy implications of platform age verification requirements.
And parents of children who died after exposure to illegal or harmful content have previously criticised Ofcom for moving at a “snail’s pace”.
The regulator’s illegal content codes will still need to be approved by parliament before they can come fully into force on 17 March.
But platforms are being told to act now, on the presumption that the codes will pass through parliament without issue, and must have measures in place to prevent users from accessing outlawed material by that date.