British media watchdog Ofcom on Wednesday issued new guidelines to technology platforms requiring them to take tougher action against harmful and illegal content.
Ofcom hopes digital giants such as Google, Apple, Meta, Amazon and Microsoft will get on board with its guidelines after King Charles III gave the final green light to the strict new law known as the Online Safety Act.
Ofcom is the main regulator under the UK’s Online Safety Act, with powers to enforce its rules. The law lets the watchdog fine companies as much as 10% of their global annual turnover for breaches, and even threatens senior managers with possible prison sentences for repeated infringements.
Ofcom outlined what it called new codes of practice for digital platforms, which it wants them to follow to limit the harmful and toxic content users – especially children – encounter online.
The codes of practice are non-binding and act as a ‘safe harbour’, meaning services can take a different approach to fulfilling their duties if they wish. However, companies must still comply with the duties set out in the law, which means they have to demonstrate to the regulator that they are taking action to rid their services of illegal content.
“The stick is there,” Ofcom’s Gill Whitehead told CNBC’s “Squawk Box Europe” on Thursday. “The backstops revolve around the ability to fine up to 10% of global turnover, the accountability of senior managers and the ability to prosecute senior managers.”
“We also have the power to disrupt services. We can disrupt their payment providers, or the services themselves, so that they cannot be accessed in Britain. What we want to do is work very closely with the technology companies so that they actually follow the rules, because that leads to safer experiences for our children.”
In the codes, Ofcom recommends that services take a range of measures, including ensuring content moderation teams have the right resources and training, and that content flagging systems are easy to use.
Ofcom also wants platforms to ensure users can block other users, and to carry out risk assessments whenever they make changes to their recommendation algorithms.
In addition, Ofcom wants online platforms to take a range of steps to combat child sexual exploitation and abuse, fraud and terrorism.
This involves using a technology called “hash matching” to detect and remove such material: companies would match digital fingerprints of individual pieces of content, known as “hashes”, against a database of known illegal material.
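For illustration, the following minimal Python sketch shows the basic idea, assuming a hypothetical hard-coded set of known-bad hashes; in practice, platforms match against curated industry databases and typically use perceptual hashes (such as Microsoft’s PhotoDNA) that survive re-encoding and resizing, rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical fingerprint database. Real deployments query curated,
# regularly updated databases of known illegal material rather than
# a hard-coded set.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ("hash") of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def is_known_illegal(content: bytes) -> bool:
    """Match the content's fingerprint against the known-hash database."""
    return fingerprint(content) in KNOWN_ILLEGAL_HASHES

# Usage: screen an upload before publishing it.
upload = b"example uploaded bytes"
if is_known_illegal(upload):
    print("block upload and escalate for review")
```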
Crucially, Ofcom said it had no intention of breaking end-to-end encryption, the mechanism that platforms such as Meta-owned WhatsApp and Signal use to ensure messages can be read only by the sender and the recipient. This has been a major point of contention for those platforms, which had warned they could leave the UK if forced to weaken encryption.
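To make the mechanism concrete, here is a minimal sketch of the end-to-end principle using the PyNaCl library; the keypairs and message are hypothetical, and real messengers such as WhatsApp and Signal layer the Signal protocol’s key ratcheting on top of this kind of public-key encryption.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair; private keys never leave their devices,
# which is what keeps the platform itself unable to read the messages.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"hello, only Bob can read this")

# The platform relays only the ciphertext; Bob decrypts on his device.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"hello, only Bob can read this"
```

Scanning messages against a hash database would require breaking or bypassing this property, which is why the encryption question has been so contentious.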
Google, Apple, Meta, Amazon and Microsoft did not immediately return requests for comment.
Consumer rights group Which? said it hopes Ofcom does not water down its enforcement of the law.
“Social media companies and search engines must be held to a high standard, and Ofcom cannot shy away from taking strong enforcement action, including fines, against companies if they break the law,” Rocio Concha, Which?’s director of policy and advocacy, said in a statement.
The regulator is inviting comments on its proposals from interested parties. The consultation period closes on 23 February 2024, after which Ofcom plans to publish the final versions of its guidance and codes of practice by winter 2024. Once the final codes have been issued, services will have three months to carry out their risk assessments.
The UK Online Safety Act has been in the making for the past four years. It originated in the form of the Online Harms White Paper and was intended to limit harm on social media, such as content that promotes illegal drug use, terrorism, self-harm or suicide.
The European Union has its own law called the Digital Services Act, while several lawmakers in the US want to reform a law called Section 230, which provides platforms with liability immunity for what their users post.