"If Twitter or Facebook's publishing is deemed to be libellous, defamatory or discriminatory, they can be sued. That's the basic, theoretical goal."
One of the hot debating topics of early 2021 centred on the rights and immunities afforded to social media platforms. In particular, whether these vast monoliths of info-dissemination have mutated into publishers, and thus whether they should be subject to the stricter rules that apply to publishing.
So what defines a publisher? What defines a platform? And what difference does it make? Let's find out…
Some people say that as soon as a platform begins to discriminate between what can and can't be posted, it becomes a publisher. But by that definition there wouldn't be any platforms on the WWW at all. Unless you go onto the darknet, more or less every centralised website facilitating user-generated content has some kind of moderation policy.
So what about bias? Is that what makes a platform into a publisher? No. We wouldn't describe a traditional message board as a publisher. It's a platform. But it probably has a long list of rules that allow it to manipulate the discussion to its own advantage - deleting posts that are liable to hamper the forum's commercial goals, for example, or banning members who demotivate activity.
These policies can be very specific, relating, say, to the censorship of posts about rival boards/sites, or the silencing of whole groups of people. Some forums in the manosphere even ban all women from posting. And ironically, those boards are among the more vocal critics of social media's ability to moderate. Apparently a forum banning all women is fine, but social media banning one conspiracy theorist who poses a literal danger to public safety is some kind of human rights violation. In the end, bias is bias, minimisation of harmful speech is minimisation of harmful speech, and neither turns a user-generated content outlet into a publisher.
THE DIFFERENCE BETWEEN PLATFORM AND PUBLISHER
So what does? In order to define a publisher, we have to move away from generalised rules for posting, and into the area of per-contribution assessment. Not what kind of thing is allowed, but what specific thing is allowed. Here's the difference between a platform and a publisher…
With a platform, where there are five individual posts about the same thing, all of them will be published. One or more may be removed or hidden later if they breach a rule.
With a publisher, where there are five individual posts about the same thing, only the one(s) meeting a specific standard of quality will be published - probably just one.
A publisher pre-screens and pre-approves content based on quality. A platform doesn't have any qualitative bar above spam. Platforms have become better over time at after-screening content with analysis metrics, which helps them bury spam and boost the things in which people are showing an interest. But the relatively dumb machines deciding what's popular and what isn't are not going to recognise complex issues such as potential libel or defamation. And even with after-screening, it's still not really about quality. It's about engagement, and that's a very different thing. On a socially-driven site, it's much less about what was said, and much more about who said it.
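To make the distinction concrete, here's a deliberately simplified sketch of the two models in Python. Everything in it is invented for illustration - the field names, the scoring weights, the editorial-review flag - and it reproduces no real platform's logic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followers: int
    likes: int = 0
    replies: int = 0
    passes_editorial_review: bool = False  # hypothetical human pre-screening

def publisher_accepts(post: Post) -> bool:
    """A publisher pre-screens: nothing goes live without editorial approval."""
    return post.passes_editorial_review

def platform_rank(post: Post) -> float:
    """A platform publishes first and ranks afterwards. Note what's absent:
    no check for accuracy, libel or defamation - only engagement and reach."""
    engagement = post.likes + 2 * post.replies   # invented weights
    audience = post.author_followers ** 0.5      # the 'who said it' bonus
    return engagement + audience

# Two posts saying the same thing, with identical engagement:
posts = [
    Post(author_followers=50, likes=300, replies=40),
    Post(author_followers=2_000_000, likes=300, replies=40),
]
# The bigger account still ranks higher - who said it, not what was said.
print(sorted((platform_rank(p) for p in posts), reverse=True))
```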
SOCIAL MEDIA AND PUBLISHING
Many people say that the reason they want social media to be reclassified as a publisher is so that it can be held legally accountable for libellous messaging and defamation hosted on its servers. And so that it can be made liable in other areas of law, such as discrimination. The endgame these people are after, ironically enough, is free speech.
So-called safe-harbour laws, such as Section 230 in the US and international equivalents like Europe's e-Commerce Directive 2000, effectively immunise social media from accountability for what third parties post. There is a proviso: once notified of an illegality, the platform must act to remove it. But until then, if a third party posts user-generated content, only that individual third party is liable.
The people who want social media reclassified as a publisher believe that reclassification will strip the likes of Twitter and Facebook of Section 230 protection and its equivalents, making these vast social platforms responsible for the fair dissemination of information. If Twitter or Facebook's publishing is deemed to be libellous, defamatory or discriminatory, they can be sued. That's the basic, theoretical goal.
But in practice, declaring Twitter and Facebook to be publishers would not achieve that goal. Firstly, publishers don't have to be unbiased. They can be as biased as they like as long as they stay within the law. Some supporters of the reclassification idea go further and argue that major social platforms should be deemed public utilities and regulated for political bias by the government. But this brings in other issues, such as which exact government does the regulating. Social platforms could re-centralise anywhere in the world, so trying to bind them to one specific set of standards on political bias would be futile.
SHOULD SOCIAL MEDIA BE CONSIDERED A PUBLISHER?
And it's all pretty academic anyway, because no social media platform I'm aware of can realistically be seen as a publisher at present. Some are moving firmly in that direction, but they're not there yet, by a long way.
So what would have to change in order for, say, Twitter to become a publisher? Essentially, the main area of content consumption would have to become dominated by pre-screened material. For example, instead of the homepage centring on a timeline of random users' Tweets, it becomes a “professional news feed”, compiled manually from media group submissions. If the main content is being pre-screened, it's publishing.
But even then, if the public were allowed to tweet replies, those replies would still be protected by Section 230 and similar provisions. So under current law social media would still not become liable for unscreened third-party libel, defamation or discrimination.
The “safe harbour” laws are based on the origin of the content - not the status of the website. So a site doesn't have to be officially classed as a “platform” to be covered. The only condition is that the service is open for spontaneous communication from random third parties, and that the offending content was posted by one such third party - without any pre-screening.
So if Twitter had a professionally curated news feed as its main focus, but public replies were still permitted, there would in fact be two separate sets of conditions in force. The pre-screened, published matter would not be covered by a safe harbour, so if it contained defamatory comments, Twitter would be liable. But the public responses coming in without any pre-screening would still be covered by safe harbours. It wouldn't make any difference at all whether Twitter was recognised as a platform or a publisher.
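As a thought experiment, that two-regime scenario can be reduced to a few lines of decision logic. The sketch below models the article's argument only - it's not legal advice, and the field names and the simplified notification rule are assumptions for illustration, not the text of any statute.

```python
from dataclasses import dataclass

@dataclass
class Item:
    posted_by_third_party: bool         # spontaneous submission from a random user?
    pre_screened: bool                  # approved by the site before going live?
    notified_and_ignored: bool = False  # site told of an illegality but didn't act

def site_is_exposed(item: Item) -> bool:
    """Safe harbour attaches to the origin of the content,
    not to whether the site is labelled 'platform' or 'publisher'."""
    if item.pre_screened or not item.posted_by_third_party:
        # Published matter: no safe harbour - the site answers for it directly.
        return True
    # Unscreened third-party content: protected until a notice is ignored.
    return item.notified_and_ignored

# The curated news feed and an unscreened public reply, on the same site:
print(site_is_exposed(Item(posted_by_third_party=False, pre_screened=True)))   # True
print(site_is_exposed(Item(posted_by_third_party=True, pre_screened=False)))   # False
```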
SO SCRAP SECTION 230 THEN?
I would support the scrapping of Section 230 as it currently exists, as well as the DMCA safe harbour. I don't believe blanket immunity from legal action is healthy, and it's clearly been abused by Big Tech. But removing the safe harbours would not throw the law into a void. If cast aside, the safe harbours would give way to other laws specifying exactly who is responsible for what, and when. And sites allowing user-generated content would still retain selective protection for cases where they could not realistically have known in advance that libellous, defamatory or discriminatory content would be posted.
The “new” laws would almost certainly work in a similar way to the concept of receiving stolen goods. That's how the current national laws beneath the wider safe harbours generally work. If it can be proven that a site knew it was harbouring legally suspect content and did not act, it's liable - whether or not it's been formally notified. But if it's deemed that the site couldn't have been expected to know, it's in the clear. These laws already exist. It's just that the safe harbours sit above them, offering extra protection for the time being.
Discretionary law, to me, is a better concept than blanket immunity, but it wouldn't greatly change the picture on social media, because social media is already aggressively moderating content it knows to be factually inaccurate. And it certainly wouldn't have prevented the suspensions of Donald Trump. If anything, it would have hastened them.