How Will Section 230 and 512 Safe Harbour Reform Affect User Generated Content?

Thursday, 21 October 2021
Bob Leggitt
"The public tolerance for Big Tech as an unelected, global government has all but worn through. Big Tech is NOT the government and it needs to be restrained with laws. In the fullness of time, no one will stop those laws from being made, so it's time for us to lobby for our own interests - not the interests of psychopaths who regard us as 'dumb fucks'."
[Image: laptop keyboard spelling STOP]

It's coming. Change is in the air. In the works: a new digital world in which large distributors of online content can no longer rely on naive, 1990s laws to absolve them of responsibility for the content's effects.

We can't yet feel its presence, but we can see the embryo of reform emerging. Instances of massive tech platforms self-serving on the back of third parties' misbehaviour have been way too relentless to write off as some prolonged blip. There's a crisis of public tolerance, and far from cooling off, it's hotting up. Politicians are showing more determination than ever before to tackle Big Tech's systemic abuse of safe harbours like US Section 230 and the DMCA OCILLA Section 512.

Everyone's like: "Duh, how do we control the size of Big Tech?"... DITCH ITS UNCONDITIONAL IMMUNITIES! The reason it can keep sucking in more and more and more of the web on a limitless basis is that it is 100% legally unaccountable for the harms its dissemination systems cause. Ration its lawsuit immunities and its monstrous size diminishes automatically.

DOOM AND GLOOM?

Big Tech, in reply, wants us to imagine the impending legal reforms as a doorway to draconian prohibition. An era in which we, the general public, are not allowed to post anything at all, and Web 2.0 goes out of business. Small forums perish first, then Reddit goes down, then Facebook and Twitter, before Wikipedia ritualistically burns at some Parliamentary stake as ministers grin evil grins into a badly-orientated green stagelight.

And the reality? Big Tech's immunity lobbyists are good at spinning a dramatic yarn, but no one is going to take away the public voice. All society wants is for the people who collect the riches from all of this to take their fair share of the responsibility.

So how will this really play out? Where are new content laws likely to take the digital landscape?

Far from the foretold doom, legal revisions will bring us a much healthier environment. And contrary to the narrative that the tech collective spouts, the reforms can be aimed very squarely at the worst offenders. Most of the harms being caused online at present are due to the vast scale at which mammoth platforms are pumping content onto the global stage. The tech powers know that the real problem is not the content or speech itself, but the incredibly wide visibility that their distribution machines are giving it. And that's a problem they calculatedly created, in the name of profit…

IS "BOOSTING" PUBLISHING?

One of the central issues in apportioning the blame for spreading harmful content is that of so-called "boosting". There's an obvious link between the rise of public concern over the spread of harm, and the rise of boosting on way-too-big websites.

Boosting is using priority algorithms or even human intervention to artificially make certain pieces of organic content or speech immensely more visible than they would be if left to spread naturally. This should, and almost inevitably will, be one of the key areas of legal reform in the coming era.

Boosting is primarily associated with social media, but it's also a feature of search engines. Some search engines no longer serve precisely what the consumer asks for in a regular search. They instead serve priority content, which sometimes does match the query, but sometimes doesn't.

Big Tech's behind-the-scenes answer to all this is: "Oppress the public more, in a way that pushes us a little further towards our goal of ruling the world. Make the public provide us with ID and biometric verification on the pretext that they'll behave better."

It knows that won't work. It just wants YET MORE surveillance collateral to exploit for YET MORE unlimited capital. But this is what happens if we DON'T place new legal responsibilities on tech giants. The entire bottle of bitter medicine will be fed to us.

You can test whether a search engine is artificially boosting by entering a search term, then changing it to a slightly different query within the same genre and searching again. Some search engines will return the same set of results for both queries, which means they're serving priority presets above the specific matches. Additional proof can often be found in the fact that the top result does not actually answer the query or contain the critical keyword(s), whereas something halfway down Page 2 does.
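If you want to make that test a little more systematic, here's a minimal Python sketch of the comparison. The result lists and threshold are purely illustrative - you'd gather the real URLs from your own two searches - but the principle is just to measure how much of the slate is identical across two queries that should produce different matches.

```python
def result_overlap(results_a, results_b):
    """Fraction of URLs shared between two result lists (Jaccard similarity)."""
    set_a, set_b = set(results_a), set(results_b)
    if not (set_a or set_b):
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

def looks_boosted(results_a, results_b, threshold=0.5):
    """Crude heuristic: two different same-genre queries returning near-identical
    results suggests preset priority content rather than genuine matching."""
    return result_overlap(results_a, results_b) >= threshold

# Hypothetical, hand-collected top results for two different queries in the same genre
query_one = ["https://bigsite.example/page", "https://wiki.example/topic", "https://smallblog.example/answer"]
query_two = ["https://bigsite.example/page", "https://wiki.example/topic", "https://video.example/clip"]

print(f"Overlap: {result_overlap(query_one, query_two):.0%}")
print("Possible preset boosting" if looks_boosted(query_one, query_two) else "Results differ as expected")
```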

People imagine that because it's "only Wikipedia, YouTube and similar sites" that enjoy this privileged boosting in search, it's okay. But it's not just Wikipedia, YouTube and similar sites. And even if it were, there have been many assertions that Wikipedia is not politically neutral and does not present a neutral take on information. And since Wikimedia has recently backed a politically-biased lobby, we can no longer dismiss those allegations as hot air.

YouTube is similarly steeped in accusations of moderation bias. And worse, the most prominent sites that unfairly and corruptly dominate the search results are actually platforms with immunity to lawsuits. Would search results not have more integrity if the source sites were subject to libel and copyright law?

Meanwhile, big social platforms will artificially boost the visibility of content on the basis that it generates activity - even when they know that the activity it generates is fractious. It can be argued that, very much like certain search engines, social networks are treating organic content in the same way they treat adverts, placing it into the same priority slots.

Once an algorithm is interfering with the visibility of organic content to this extent, it enters the realm of editorial control, which moves things into the category of publishing. The platform is saying:

"This is what we want people to see, even though it has not specifically been requested".

Advertising content is different from organic content, because it's subject to a set of standards and a level of scrutiny - plus it's identified to the public as promotional matter. It's therefore acceptable for properly identified advertising content to be placed in highly visible locations. But this should not apply to organic commentary about a divisive or political subject, which is artificially made more visible in the same way as an advert. That's exercising editorial control, and exhibiting bias.

No crime in itself. But if the content turns out to harbour libellous speech, then the platform that artificially boosted it is as guilty of causing harm as the party who posted it. Why are the platforms protected from lawsuit? Simply, because arbitrary and opaque algorithmic boosting did not exist when the current laws were made back in the 1990s.

It's interesting that tech companies will endlessly bang us over the head with the metaphorical frying pan of "KEEP YOUR SPYWARE UP TO DATE!!! (Did I say spyware? Meant software, obviously)". But they have a very different ethos when it comes to keeping laws up to date.

SMALL SPEECH VS BIG SPEECH

The tech collective repeatedly pushes the narrative that if their safe harbours are interfered with, it will kill off countless small platforms (like trad forums) who can't afford the legal fees to defend against a stampede of claims. But this deliberately breathless meme ignores the minimal lawsuit stats in currently unprotected areas of online publishing.

It also assumes that factors such as visibility and artificial boosting would not be taken into account in any legal revisions. In a moment, we'll see how a new legal framework could protect small platforms and make things fairer for low-vis publishers on big platforms, whilst aggressively tackling the epicentre of harm on mainstream social media and in search. How it could actually help hand some power back to the smaller players.

Small forums are being brainwashed into supporting Big Tech's immunity mandate on the basis that they couldn't survive without it. But Big Tech's immunity, and the unrestrained, trample-all growth it's fuelled, has killed off more small forums than safe harbour reforms ever could. Even in a worst case scenario, with all safe harbours scrapped, human-moderated forums would face no more lawsuits than multi-author blogs do now. And are multi-author blogs sued offline? No. They've fared very much better than small forums, and they have no safe harbour protection at all. Don't fall for the lies. It's massive platforms' and search engines' anti-competitive exploitation of safe harbours that kills off small communities - not the law.

HOW THIS NEW WORLD WOULD REALLY LOOK

Once you examine the true likelihoods of safe harbour reform, you see how distorted the picture presented by the tech collective's "digital rights" groups really is. It won't turn cyberspace into a barren desert of deletion. Bloggers are not protected by safe harbours and never have been. And oh look, bloggers everywhere! How does that fit into the narrative of "ThE InTeRnEt CaNnOt SuRvIvE wItHoUt LeGaL iMmUnItY"?

And safe harbour reform does not, in any case, mean withdrawal of legal protections. The main developments will probably look something like this…

Revisions to the protections for platforms, rather than the flat-out abolition of protections.

Safe harbour protections reintroduced as conditional privileges rather than automatic rights. Platforms can remain protected from lawsuits for what third parties publish within their walls, but only provided they adhere to a framework of responsible conduct.

A recognition of human unmanageability. Environments that are too big and busy to be moderated by human beings are subject to different conditions from those with human moderation.

Greater responsibility for platforms to demonstrate that they are taking reasonable steps to prevent harm and discrimination at policy level. "Humanly-unmanageable" platforms may, for example, be required to make their algorithms available to independent assessors as a condition of retaining their protections. They very determinedly don't want to face this, because their algorithms are currently both harmful and discriminatory.

Whilst they couldn't be directly forced to submit their algorithms, platforms with "humanly-unmanageable" traffic volume could be refused the privilege of safe harbour protection if they don't. Far from being bad for the public as tech giants claim, this would actually help protect ordinary people from unjust suspensions, shadowbans and the like, while holding the "sheltered" class of influencers who cause the real damage to the same standards as everyone else.

A boosting clause. If a platform artificially prioritises the visibility of organic content (i.e. content which is not identified as an advert), it's treated as the publisher of that content and receives no safe harbour protection from lawsuits. This would not trouble traditional forums or chronologically-prioritised / open source environments such as the Fediverse, since it can easily be demonstrated that they don't artificially boost hate speech, libel or infringements of intellectual property.
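For anyone wondering what the technical distinction actually looks like, here's a tiny, hypothetical Python sketch. The posts and "predicted engagement" scores are invented - this is not any real platform's ranking code - but it shows the difference between a feed that simply sorts by time and one in which the platform decides what gets seen.

```python
from datetime import datetime

# Illustrative organic posts; the fields and scores are hypothetical
posts = [
    {"id": 1, "posted": datetime(2021, 10, 21, 12, 0), "predicted_engagement": 0.2},
    {"id": 2, "posted": datetime(2021, 10, 21, 9, 0),  "predicted_engagement": 0.9},  # divisive but "engaging"
    {"id": 3, "posted": datetime(2021, 10, 21, 10, 0), "predicted_engagement": 0.4},
]

# Chronological feed: no editorial judgement, newest first
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Boosted feed: the platform re-ranks the same organic content by predicted engagement
boosted = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print("Chronological:", [p["id"] for p in chronological])  # [1, 3, 2]
print("Boosted:      ", [p["id"] for p in boosted])        # [2, 3, 1]
```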

A censorship clause. A platform is not protected from discrimination lawsuits where it deletes or artificially restricts content outside of legal necessity, state order, security reasons, or its current Terms of Service. In other words, to qualify for safe harbour protection, the platform needs a valid reason for deleting or restricting content. If it can't provide a documented reason, it's considered to be exercising editorial control and is treated in the same way as a publisher. Platforms using automated moderation processes could retain their protection provided their algorithms are independently approved as fair and non-discriminatory.

Visibility thresholds, above which content must be subject to internal human scrutiny, and may even lose its safe harbour protections. A case in which an offending piece of content received fifty million views on a platform could be treated separately from a case in which a piece of content received just a few thousand views. In fact, for the sake of preventing both social/reputational devastation and damage to intellectual property value, it would be reasonable to exempt content with millions of views from safe harbour protection altogether. It would be interesting, with a visibility threshold of, say, a million views, to see how many platforms would "suddenly notice" a piece of nicked or legally dicey content when it reached 999,000.

Maintaining protections for lower view counts would be a vital step in helping to preserve vibrant discourse on small forums and in small online communities, whilst holding to account the rich manipulators that cause the damage. This would also discourage massive platforms from artificially boosting content.
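As a rough illustration of how tiered thresholds might operate, here's a hedged Python sketch. The view counts, tier boundaries and wording are entirely hypothetical; the point is simply that obligations escalate with reach, rather than applying identically to a post seen by forty people and a post seen by forty million.

```python
def safe_harbour_tier(view_count, human_review_at=100_000, protection_ends_at=1_000_000):
    """Hypothetical tiering: a platform's obligations scale with a post's visibility."""
    if view_count >= protection_ends_at:
        return "no safe harbour: platform treated as the publisher of this content"
    if view_count >= human_review_at:
        return "protection conditional on documented internal human review"
    return "fully protected: ordinary low-visibility speech"

for views in (4_000, 250_000, 50_000_000):
    print(f"{views:>12,} views -> {safe_harbour_tier(views)}")
```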

Measures to limit the unrestrained virality that is so damaging when a piece of content is harmful. This would be a high-end framework applied only to the arterial networks that have real power to control viral spread.

Tightening of copyright law to correct platforms' policy-level failures rather than targeting individual instances of theft or abuse. For example…

Greater onus placed on platforms to deter the public from posting third-party content without the copyright owner's consent. Most of them don't do this at all at present. In fact most do the opposite, and actually encourage unauthorised re-posting. Legal protections could be limited or stripped for platforms that refuse to explicitly deter copyright violation.

Notably, this does not involve any kind of filtering. People are still free to upload and speak unimpaired. However, they do so properly informed about the legal implications of posting third-party content without consent, and are reminded that they face suspension for copyright violation. This would wipe out the majority of copyright violation overnight.

Whilst this is an obvious, common sense solution, the tech giants desperately don't want to discourage copyright violation. This is a measure of how dependent their business model is on content they were never entitled to appropriate.

Search engine protections limited only to their actual search features. In other words, if search engines choose to serve as distribution platforms directly providing image download facilities and the like, the distribution features are exempt from safe harbour protection. So in essence, a search engine could not be sued for thumb-linking to a defamatory image, but it could be sued for directly distributing the defamatory image. Search engines might also share liability if they boost offending content in a way that can be considered an editorial decision, or if they opaquely manipulate their results outside of a publicly documented priority system.

Search engine caches and other cached, publicly available content of tenuous necessity, not protected by safe harbours. The answer to the question "But then how do we defend against copyright lawsuits on cached content?" is: delete the cache. A unique project such as the Internet Archive has incredible historical value and should be considered separately from caches - especially since its premise is not mere duplication and it's not replaceable with any substitute. However, its legal protections should still be subject to conditions.

The law has to balance the public interest with rightsholders' rights to be informed and to exercise copyright control without undue inconvenience. And there are cases in which the Internet Archive has retained content removed from the source site due to legal notices. A difficult area that needs sensitive thought, but the law must lead on consent and put human rights above the internet's rights.

The fact remains that the majority of people are unaware that snapshots from their cyber history have been preserved and can still be accessed on the Wayback Machine. They can't opt out of something they don't even know exists. That has to be put right, and this is where the power of Big Tech as a collective could be used to inform. The tech collective is quick enough to club together and protest when its own carte blanche is under threat. Not so quick to do so in the interests of other people's rights.

Retrospective revenue rights for copyright holders whose work has been monetised without consent or agreement. As a creator, if your work has been exploited for revenue by a UGC platform, you are morally entitled to financial redress. That moral entitlement should be written into law. It's hard to believe it hasn't been already. A "right to retrospective revenue" law would still protect platforms from disproportionate demands from copyright holders. But it would also ensure platforms could not simply keep all the money they obtained via their use of the intellectual property.

Attribution prompts made mandatory on upload and pasting functions. Platforms that do not prompt users to attribute media and substantial text content to a source are not protected against copyright lawsuits. Attribution can be bypassed, but must be requested. Where the attribution can contain a link, the prompt should request it.

When entered, the link must remain unaltered and free from unnecessary limitations. i.e. the platform must not append it with a "nofollow" attribute which limits the source's general online visibility, or a "noreferrer" attribute which hampers the source's ability to discover where their content has been re-posted. The platform must not pass the link through a "dark domain", creating similar limitations for the source. Providing a simple, unmanipulated hyperlink is the very least large platforms can do for the people whose media content they use to drive profit.

But many of them are not just taking (and profiting from) other people's work without asking - they're actively dumping on those people via sly technical schemes. Schemes which prevent the rightsholders from gaining their rightful online status and/or controlling their distribution. It's an underground of anti-competitiveness against small creators, and it truly stinks.
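For readers unfamiliar with the mechanics being described, here's a hypothetical sketch of what the difference looks like, with a crude Python check. The URLs are made up; "nofollow" and "noreferrer" are real values of the HTML rel attribute, and the redirect host stands in for the "dark domain" trick.

```python
# A clean attribution link points straight at the source and passes on referrer information
clean_link = '<a href="https://originalcreator.example/photo">originalcreator.example</a>'

# A manipulated link strips that benefit and routes the click through an intermediary domain
manipulated_link = ('<a href="https://t.redirector.example/abc123" '
                    'rel="nofollow noreferrer">originalcreator.example</a>')

def link_is_unmanipulated(tag, source_domain):
    """Crude check: the href targets the source directly, with no nofollow/noreferrer."""
    tag = tag.lower()
    href = tag.split('href="', 1)[-1].split('"', 1)[0]
    return source_domain in href and "nofollow" not in tag and "noreferrer" not in tag

print(link_is_unmanipulated(clean_link, "originalcreator.example"))        # True
print(link_is_unmanipulated(manipulated_link, "originalcreator.example"))  # False
```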

IMPORTANT NOTE: Even if safe harbour protections were completely removed, user generated content platforms would still not automatically become liable in defamation or copyright cases. If it could not be proved that they knew the content was there, or that they could reasonably have known there was a problem with it, they could still be cleared of liability. We should remember that safe harbours are an immunity to legal action, not a sole means of defence against it. And frivolous legal action costs money, let's not forget. Removing the safe harbours doesn't suddenly make the legal system into a free coconut shy.

TAKE CARE OF THE BIG PICTURE, AND THE SMALL PICTURE TAKES CARE OF ITSELF

The above measures work to improve the big picture, without causing harm in the small picture. They would restrain Big Tech without taking the axe to content, and importantly, no responsible platform would be any more vulnerable to lawsuits than it is now. At the moment, the tech giants are running round like headless chickens - boosting this, deleting that… Like…

"OMG someone mentioned the word "vaccine" without bursting into a spontaneous round of applause? They must be struck from the very bowels of cyberspace!"

Then putting the deleted content back up again because the deleted party was in a position to expose more undemocratic Big Tech censorship to the baying mass public…

It's stupid. They can't grasp that if they'd simply left content alone to do its own thing, no one would have been reaching for the gavel in the first place. We need to protect speech and content, but stop protecting the manipulation and abuse of that speech and content, specifically by Big Tech, in the name of greed. New laws can do that. And in time, new laws will do that.

DEFEND YOURSELF - NOT BIG TECH

We don't need to panic about legal reforms in the digital world. We need to help shape them by spreading sensible suggestions and forwarding them to political figures. Owners of small platforms should stress the critical issue of visibility and the need for visibility thresholds in any legal reforms. This makes a lot more sense than doing what the Big Tech lobby advises and simply demanding the same free-for-all we have now.

We know it's the huge platforms and brands that cause the damage. They haven't used safe harbours as a fallback protection. They've used them as a business model. That's why we will be seeing changes in the law.

The public tolerance for Big Tech as an unelected, global government has all but worn through. Big Tech is NOT the government and it needs to be restrained with laws. In the fullness of time, no one will stop those laws from being made, so it's time for us to lobby for our own interests - not the interests of psychopaths who regard us as 'dumb fucks'. Smaller online communities need to stop standing up for the digital bulldozer, and start standing up for themselves. The new laws are coming. Never mind the rights of Big Brother. Big Brother will survive. Just make sure the laws protect YOU.