Britain Seeks Leading Role In Regulating Social Media

The internet has traditionally been seen as largely “ungovernable” thanks to its decentralised nature and the potential (fast decreasing) for anonymity. A popular description for it has been that it’s like the Wild West, which has struck me as amusing for years, since I remember when the internet really was a lawless and wild place, where anybody could be anyone and do whatever they liked.

These days, it feels far more constrained to me, with its rules and regulations and surveillance, and if the UK gets its way, that may only keep on increasing.

The British government (clearly not busy enough with the ongoing Brexit saga) has revealed its blueprint for new laws intended to regulate social media, search, messaging, and even file-sharing platforms in order to prevent content that causes “online harm” from being posted.

Online Harm

According to the proposal, the term “online harm” covers all sorts of content relating to sexual abuse, violence, hate speech, terrorism, self-harm (including suicide), and underage sexting. The proposal coincides with increasing international pressure on platforms like Facebook and Google to prevent that type of content from appearing on their sites.

It’s a broad term, and one of the concerns that has been raised is that it could expand to cover things which were not part of the original scope. Certainly it walks a fine line between legitimate freedom of expression and what some people may perceive as “harmful” content.

Online Policing

The British proposal includes the appointment of a new, independent regulator, which would monitor platforms for content that has been deemed harmful, be able to issue fines for non-compliance, and even hold company executives liable if their platform did not adhere to these regulations.

According to UK PM Theresa May, the companies who provide these platforms have not done enough to protect users, especially children and young people, from harmful content. The government’s intent is to put a legal duty of care on these companies and force them to take responsibility for keeping users safe.

The paper also proposes that social media companies publish transparency reports, respond quickly to user complaints, build in safety features and educate users on recognising misinformation and malicious behaviour.

More Details Required

It is of course early days for this proposal, which is still a long way from becoming actual law in the UK. An association of internet tech firms, including Facebook, Google, Snap, Reddit and Twitter amongst others, has responded that the proposal needs more detail.

According to the group, they are committed to working with government and civil society to ensure that the UK is a “safe online space,” but they added that in order to do so, they need proposals that are both targeted (more specific, in other words) and practical for platforms of all sizes to implement.

The UK executive director of the group, Daniel Dyeball, added, “We also need to protect freedom of speech and the services consumers love. The scope of the recommendations is extremely wide, and decisions about how we regulate what is and is not allowed online should be made by parliament.” Meanwhile Coadec, a group lobbying on behalf of internet start-ups, pointed out that overly strict regulation could punish smaller firms that don’t have the money and clout of Facebook and Google.

The Practicality

The reality is that any kind of centralised regulation is going to be difficult and expensive to implement, and prone to failures and errors.

Just Facebook by itself (with over 2 billion active users) generates 4 petabytes of data (4,096 terabytes, or roughly 4.2 million gigabytes) every day. 350 million photos are uploaded to it every day. 100 million hours of video are watched on it every day. And that’s only on Facebook.
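To put those figures in perspective, here is a quick back-of-envelope calculation (using the numbers quoted above, which are approximate and vary by source) of what a moderator or filter would face every single second:

```python
# Back-of-envelope scale check using the figures quoted in the article.
# These are approximate headline numbers, not official Facebook statistics.
PETABYTE_IN_GB = 1024 ** 2          # 1 PB = 1,048,576 GB (binary units)

daily_data_gb = 4 * PETABYTE_IN_GB  # ~4 PB of new data per day
daily_photos = 350_000_000          # photos uploaded per day
daily_video_hours = 100_000_000     # hours of video watched per day

seconds_per_day = 24 * 60 * 60      # 86,400

print(f"New data per second:    {daily_data_gb / seconds_per_day:,.0f} GB")
print(f"Photos per second:      {daily_photos / seconds_per_day:,.0f}")
print(f"Video hours per second: {daily_video_hours / seconds_per_day:,.0f}")
```

That works out to dozens of gigabytes and thousands of photos arriving every second, around the clock, before you even count the other platforms.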

Add Google/YouTube, Reddit, and the rest, and an already staggering amount of data increases by unbelievable amounts. The reality is, it’s simply not possible with current technology and resources to police all of this data. Perhaps, as AI improves, most of this work will be farmed out to software, but software is far from perfect. For a long time we will have countless false positives, where harmless material is censored, and as many false negatives, where harmful content slips through the filters, as happened with the recent mosque shootings in New Zealand.
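The false-positive problem is worse than it first sounds, because genuinely harmful content is rare relative to the total volume. A toy calculation (with entirely hypothetical accuracy and volume figures, chosen only for illustration) shows how even a very accurate filter produces errors at enormous scale:

```python
# Illustrative only: all figures below are hypothetical assumptions,
# not real platform or moderation statistics.
daily_items = 1_000_000_000    # assume a platform sees a billion posts a day
harmful_rate = 0.0001          # assume 1 in 10,000 items is actually harmful

true_positive_rate = 0.99      # filter catches 99% of harmful items...
false_positive_rate = 0.01     # ...but also wrongly flags 1% of harmless ones

harmful = daily_items * harmful_rate
harmless = daily_items - harmful

missed = harmful * (1 - true_positive_rate)       # false negatives
wrongly_flagged = harmless * false_positive_rate  # false positives

print(f"Harmful items missed per day:    {missed:,.0f}")
print(f"Harmless items censored per day: {wrongly_flagged:,.0f}")
```

Under these assumptions the filter still misses around a thousand genuinely harmful items a day while censoring nearly ten million harmless ones, which is the base-rate problem any automated moderation scheme has to contend with.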

The truth is, while we’re still a long way from the real lawless days of the internet, we’re almost as far away from an effectively policed internet as well.