Ofcom will have its remit expanded to enforce laws covering firms such as Facebook, Instagram and YouTube, protecting people from harmful and illegal content such as terrorist material and child abuse imagery.
The new powers would allow Ofcom to force social media platforms to remove harmful content quickly and to minimise the risk of it appearing in the first place.
However, there has been no confirmation of what punishments or fines the bolstered regulator will be able to hand out.
Ofcom was set up back in 2003 and already regulates the television, radio, telecoms and postal sectors in Britain.
Digital Secretary Baroness Morgan said: “We will give the regulator the powers it needs to lead the fight for an internet that remains vibrant and open but with the protections, accountability and transparency people deserve.”
She added the measures would only apply to websites that allow sharing of user-generated content – for example, through comments, forums or videos – meaning fewer than 5% of all UK businesses will be affected.
YouTube reacted saying the news was of “great importance” and promising to “work in partnership with the government and Ofcom to ensure a free, open and safer internet that works for everyone”. And the Internet Association, which represents firms such as Amazon, Google and Microsoft, said it was keen to debate “issues of concern that are still under review”, citing questions over content that is legal but potentially harmful.
A government white paper launched in April last year set out plans to fine or ban social media firms if they fail to tackle the publication of harmful material such as terrorist content, child sex abuse, harassment and fake news.
“Regulators are catching up”
Many in the industry feel this is long overdue. Danny Meadows-Klue, the inaugural Digital Commissioner at the Direct Marketing Commission and a former government adviser who helped instigate the creation of the ICO, said: “Regulators are catching up. Social media is the first thing most people check before breakfast, and the last place they look at night. It’s become the key media channel.”
He believes there are limits to letting platforms such as Facebook police themselves: “Self-regulatory frameworks delivered effective results up to a ceiling. With toxic content readily available inside mainstream consumer platforms, it’s clear that without regulatory teeth good social outcomes can’t be achieved.” The key will be ensuring the regulators’ remit is effectively managed and never politicised.