Social media firms to be penalized for not removing violent content in UK

Monday, 8th April 2019

A report by MPs has concluded that social media companies must be subject to a "legal duty of care" to protect the health and well-being of younger users of their sites.

British regulators on Sunday unveiled a landmark proposal to penalize Facebook, Google and other tech giants that don't stop the spread of harmful content online, marking a major new regulatory threat for an industry that's long dodged responsibility for what its users say or share.

The aggressive new plan - drafted by the United Kingdom's leading consumer-protection authorities and blessed by Prime Minister Theresa May - targets a wide array of web content, including child exploitation, false news, terrorist activity, and extreme violence.

The proposed laws have been welcomed by senior police and children's charities.

If the plan is approved by Parliament, U.K. watchdogs would gain unprecedented powers to issue fines and other punishments when social media sites don't swiftly remove the most egregious posts, photos or videos from public view.

Top British officials said their blueprint would amount to "world-leading laws to make the U.K. the safest place in the world to be online." The document raises the possibility that the top executives of major tech companies could be held directly liable for failing to police their platforms. It even asks lawmakers to consider whether regulators should have the ability to order internet service providers and others to limit access to some of the most harmful content on the web.

Reports of child abuse online have risen from 110,000 globally in 2004 to 18.4 million last year.

Experts said the idea could potentially limit the reach of sites where graphic, violent content often thrives - sites that played an important role in spreading images of last month's mosque attacks in New Zealand.

"The Internet can be brilliant at connecting people across the world - but for too long these companies have not done enough to protect users, especially children, and young people, from harmful content," May said in a statement.

The sector's continued struggles came into sharp relief last month, after videos of the deadly shootings in Christchurch, New Zealand, proliferated online despite heightened investments by Facebook, Google, and Twitter in more human reviewers - and more powerful tech tools - to stop such posts from going viral.