What is the UK’s Online Safety Act and what powers will it provide?


The Online Safety Act is back in the spotlight after a week of violence driven in part by far-right groups coordinating online, using a mixture of services including Telegram, TikTok and X.

“There need to be amendments to the Online Safety Act,” the mayor of London, Sadiq Khan, told the Guardian on Thursday. “I think it’s not fit for purpose.”

But the act is a sprawling piece of legislation that runs to 286 pages and 241 sections. Amending it would be no easy feat, and even working out what is actually covered by the law today, almost a year since it was passed by parliament, is tricky.

Here we look at the main areas covered by the new laws, which are expected to come into force in the UK in 2025.


Ofcom

The biggest chunk of the Online Safety Act relates to Ofcom, giving the regulator beefy new powers to tame social networks in particular. The act focuses on “category one user-to-user services” – the biggest platforms on the internet that host user-generated content – and gives them a host of new duties and responsibilities. They must, for instance, “protect news publisher content”, giving newspapers and broadcasters special notice before moderating their material, and must similarly protect “the free expression of content of democratic importance” by including provisions in their terms of service that are designed to take that principle into account.

But those requirements are filtered through Ofcom. The legislation is set up to prevent the regulator having to act on individual pieces of content. Instead, Ofcom is supposed to focus on the rules the platforms themselves set, and monitor whether they are doing what they say they will. In theory, this means a platform that wants to be minimally censorious can be, provided it doesn’t give users a false sense of security by claiming otherwise.

For the time being, though, those powers are purely notional. Ofcom still needs to publish its own codes of practice and guidance, later this year, before regulated services can begin to be held to account for what happens on their platforms. Even then, services will have three months to assess the risk of illegal content before being required to act.


Terrorism and child abuse

Ofcom also gets new powers to deal with online content that is already illegal. The regulator can now issue notices requiring companies to proactively respond to terrorist content and child sexual exploitation and abuse (CSEA) content. Those notices can require regulated services to use “accredited technology” to find and remove such content, or to develop or source such technology themselves.

Those clauses have been controversial, since messaging services such as WhatsApp fear the notices could force them to effectively disable end-to-end encryption, or to enable so-called “client-side scanning”, where AI tools running on users’ phones would monitor their communications.

Again, however, the powers will not come into effect until after Ofcom has published guidance and consulted the information commissioner.


Offensive communications

Not every aspect of the new act involves Ofcom. The act also created new communications offences, giving the police direct powers to take action against speech online. Two of these were directly intended to replace the old offence of malicious communications, a very broad law that covered almost any use of a communications system to cause distress. The new offences are much narrower, making it a crime to send false or threatening communications with the intent “to cause non-trivial psychological or physical harm to a likely audience”.

Other offences are wholly new. The act makes it a crime to “encourage or assist” serious self-harm. It also makes it a crime to deliberately send flashing images with the intent of triggering an epileptic fit, whether in a known individual or in the community at large.


Sexual offences

The act introduces new clauses into the Sexual Offences Act to ban so-called “revenge porn”, or non-consensual explicit imagery, and cyberflashing. Each offence can now lead to a jail sentence of up to two years.

Notably, the non-consensual explicit imagery ban is forward-looking enough to cover many instances of deepfake pornography – AI-generated explicit imagery. It covers images that merely “appear” to be a photograph showing another person in an intimate state.


