Online Safety Bill: Plan to make big tech remove harmful content axed


By Chris Vallance & Shiona McCallum

Technology reporters

Controversial measures which would have forced big technology platforms to take down legal but harmful material have been axed from the Online Safety Bill.

Critics of the section in the bill claimed it posed a risk to free speech.

Culture Secretary Michelle Donelan denied weakening laws protecting adult social media users and said they would have more control over what they saw.

The bill - which aims to police the internet - is intended to become law in the UK before next summer.

The government argues that the changes do not undermine the protections for children.

Technology companies will still have to stop children - defined as those under 18 - from seeing content that poses a risk of causing significant harm.

Many social media platforms have a minimum age and offer parental controls.

Companies will have to explain how they will check their users' age - some, like Instagram, are using age-verification technology.

But some have criticised the latest changes, including Labour and the Samaritans, which called them a hugely backward step.

Ian Russell, the father of teenager Molly Russell, who ended her life after viewing suicide and self-harm content online, said the bill had been watered down and the decision might have been made for political reasons to help it pass more quickly.

But Ms Donelan said there may have been a "misunderstanding" over what Mr Russell said.

She told BBC Radio 4's Today programme: "Nothing is getting watered down or taken out when it comes to children."

"We're adding extra in, so there is no change to children. This is a very complicated Bill and there's lots of aspects to it, but I wouldn't want any of your listeners to think for a minute that we are removing anything when it comes to children because we're not."

User control

The bill previously included a section which required "the largest, highest-risk platforms" to tackle some legal but harmful material accessed by adults.

It meant that the likes of Facebook, Instagram and YouTube would have been told to prevent people being exposed to content relating to self-harm and eating disorders as well as misogynistic posts.

It was "legislating for hurt feelings", former Conservative leadership candidate Kemi Badenoch said.

That requirement has now been removed from the bill - tech giants will instead have to introduce a system allowing adult users more control to filter out harmful content they do not want to see.

Ms Donelan insisted the legislation was not being watered down - and that tech companies had the expertise to protect people online.

"These are massive, massive corporations that have the money, the knowhow and the tech to be able to adhere to this," she said.

She warned that those who did not comply would face significant fines and "huge reputational damage".

Adults will be able to access and post anything legal, provided a platform's terms of service allow it - although children must still be protected from viewing harmful material.

In July, the former minister David Davis was one of nine senior Conservatives who wrote a letter to then Culture Secretary Nadine Dorries, warning the legal but harmful provision posed a threat to free speech.

He told the BBC he was glad it had now been taken out of the bill but he still had other "serious worries" about the threat to privacy and freedom of expression which could "undermine end-to-end encryption".

In some scenarios the bill permits the government to direct companies to use technology to examine private messages.

"I urge the government to accept the amendments in my name to fix these technology notices so that they no longer pose a threat to encryption, which we all rely on to keep safe online," he said.


Lucy Powell MP, Labour's shadow culture secretary, criticised the decision to remove obligations over "legal but harmful" material.

She said it gave a "free pass to abusers and takes the public for a ride", and that it was "a major weakening, not strengthening, of the bill".

And the boss of charity the Samaritans, Julie Bentley, said "the damaging impact that this type of content has doesn't end on your 18th birthday".

"Increasing the controls that people have is no replacement for holding sites to account through the law and this feels very much like the Government snatching defeat from the jaws of victory."

But Ms Donelan told BBC News the revised bill offered "a triple shield of protection - so it's certainly not weaker in any sense".

This requires platforms to:

  • remove illegal content
  • remove material that violates their terms and conditions
  • give users controls to help them avoid seeing certain types of content, to be specified by the bill

This could include content promoting eating disorders or inciting hate on the basis of race, ethnicity, sexual orientation or gender reassignment - although there will be exemptions to allow legitimate debate.
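In practice, such controls amount to filtering a feed against per-user category preferences, with the hard part being how platforms label content in the first place. Below is a minimal Python sketch of the filtering step; the category names and data structures are illustrative assumptions, not anything specified in the bill.

```python
# Illustrative sketch of per-user content-category filtering.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    categories: set  # labels attached by the platform's moderation systems

@dataclass
class UserSettings:
    # Categories this adult user has opted out of seeing.
    hidden_categories: set = field(default_factory=set)

def filter_feed(posts, settings):
    """Drop posts whose labels overlap the user's hidden categories."""
    return [p for p in posts if not (p.categories & settings.hidden_categories)]

feed = [
    Post("A recipe video", set()),
    Post("A post flagged as promoting eating disorders", {"eating_disorders"}),
]
viewer = UserSettings(hidden_categories={"eating_disorders", "hate"})
for post in filter_feed(feed, viewer):
    print(post.text)  # prints only "A recipe video"
```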

But the first two parts of the triple shield were already included in the draft bill.

At its heart this complicated bill has a simple aim: those things that are criminal or unacceptable in real life should be treated the same online.

But that means reining in the power of the big tech companies and bringing an end to the era of self-regulation.

Getting the bill this far has been a complex balancing act. Dropping the need to define what counts as "legal but harmful" content may have satisfied free speech advocates.

Including new criminal offences around encouraging self-harm or sharing deepfake porn could feel like a win for campaigners.

But it won't satisfy everyone - the Samaritans for example don't feel it adequately protects adults from harmful material.

The Molly Rose Foundation, set up by Molly Russell's family, believes the bill has been watered down. It's not about freedom of speech, it said in a statement - it's about the freedom to live.

And there's much about the bill that is still unclear.

Internet safety campaigner Mr Russell told BBC Radio 4's Today programme: "I think the most harmful content to [Molly] was content that could be described as legal but harmful."

He added: "It is very hard to understand that something that was important as recently as July, when the bill would have had a full reading in the Commons and was included in the bill, this legal but harmful content, it is very hard to understand why that suddenly can't be there."


Campaign group the Centre for Countering Digital Hate (CCDH) said platforms might feel "off the hook" because of the new focus on user controls "in place of active duties to deal with bad actors and dangerous content".

Elon Musk's takeover of Twitter indicated tough rules were needed, it said. Twitter recently reinstated a number of banned accounts, including that of Ye, formerly known as Kanye West, which had been suspended over anti-Semitic posts.

But CCDH chief executive Imran Ahmed added it was welcome the government "had strengthened the law against encouragement of self-harm and distribution of intimate images without consent".

It was recently announced that the encouragement of self-harm would be prohibited in the update to the Online Safety Bill.

Fines

Other changes will require technology companies to assess and publish the risk of potential harm to children on their sites.

Companies must also explain how they will enforce age limits - knowing users' ages will be a key part in preventing children seeing certain types of content.

And users' accounts must not be removed unless they have broken the law or the site's rules.

Dr Monica Horten, a tech policy expert at the Open Rights Group, said the bill lacked detail about how companies would establish the age of their users.

"Companies are likely to use AI systems analysing biometric data including head and hand measurements, and voices," she said.

"This is a recipe for a gated internet, currently subject to minimal regulation and run by third-party private operators."

Much of the enforcement of the new law will be by communications and media regulator Ofcom, which will be able to fine companies up to 10% of their worldwide revenue.
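For example, a platform with worldwide revenue of £10bn could face a fine of up to £1bn.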

It must now consult the victims' commissioner, the domestic-abuse commissioner and the children's commissioner when drawing up the codes technology companies must follow.

Additional reporting by Rachel Russell.


Related Topics

  • Online Safety Bill
  • Child protection

More on this story

  • Self-harm content to be criminalised in online bill (published 27 November 2022)
  • Sharing pornographic deepfakes to be illegal (published 25 November 2022)
  • Online Safety Bill to return as soon as possible (published 20 September 2022)
