Big Tech's Secret Fear: The Disappearing Web

Wednesday, 10 March 2021
Bob Leggitt
"Why would both Facebook and Twitter ignore 99.99% of successful startup ideas, but suddenly jump aboard with newsletters? On closer inspection, the answer is obvious..."
[Header image: padlocks. Photo by zhang xiaoyu on Unsplash (image modified).]

The World Wide Web is slowly disappearing. No, really. It is! I mean, obviously, it's all still around somewhere. It's just that with every week that goes by, a little more of it closes off unconditional access. And the conditions are getting more demanding as the hourglass runs down.

There can be no doubt that we're now moving towards Web 3.0, in which quality content is no longer freely distributed to one and all, but directly exchanged with you, personally, for a form of collateral. Could be money. Could be saleable data. But it's no longer enough for many sites to show us a page in the hope we might click an ad. As browsers continue to incorporate native ad-blocking, more of us than ever before are unable to even see the ads. So the publishers are saying:

“NO! Hopeful ad display no longer covers costs. We want your saleable data, your resources, and/or your money - up front. Or we don't show you the page.”

The ad-blockers that inadvertently helped to erect these walls of conditionality have now realised what a huge problem they've caused, and are backtracking. Instead of blocking all ads, many now load “ethical adverts” by default, and only block ads they see as overly aggressive. But it's too little, way too late. Publishers have already seen a far better economy in turning each page load into a straight transaction. They don't care what the ad-blockers do or don't magnanimously give them permission to serve.

Indeed, a lot of new content isn't even being served on a domain at all. It's going directly to readers' email inboxes, in the form of subscription-dependent “newsletters”. Twitter has recently acknowledged this blossoming trend by acquiring the Revue newsletter service and taking the first steps to integrate it with the main social platform. Facebook, meanwhile, is reportedly building its own newsletter delivery mechanism.

"Big Tech's problem is that publishers are discovering they can make more money in a year from just 100 subscribers than they can make from a million free access visits to an ad-monetised page. That epiphany is steadily closing down the open Web."

SOCIAL MEDIA'S CONCESSION OF FEAR

At a glance, this looks curious. I mean, of all the successful startup ideas that have come along over the past decade, why would both Facebook and Twitter ignore 99.99% of them, but suddenly jump aboard with newsletters? On closer inspection, the answer is obvious. Other publishing projects have fed Facebook and Twitter. Newsletters threatened to steadily starve them out. Not only starve them of up-to-date content, but also starve them of critical analytics information.

Yep, social media is really scared of the newsletter trend (and paywalled content in general). Why? Because it puts the plunderable-by-proxy material upon which the social platforms have parasitically built their empire beyond their reach. Subscription newsletters don't appear on Google, so random people can't just do an image search, plunder the cherries, and throw them all over Facebook and Twitter for Zuckerberg and Dorsey to monetise with ads.

But it's worse than that. Both Twitter and Facebook want to be the news. So newsletters have touched an incredibly sensitive nerve. Newsletters are news delivery that completely excludes Twitter and Facebook. Nemesis!

The social platforms don't really want to mess with newsletters. They're getting involved out of fear. Fear of what will happen to a business model that leeches off freely-accessible and plunderable content, if and when gated content becomes the default. Fear of not being able to analyse a burgeoning trend. When something is not on the accessible Web, Facebook and Twitter can't just sit there monitoring its progress and strategically plotting against it. They're in the dark. And a lack of information is what Big Tech fears most.

So this trend towards distribution systems that lock out Big Tech is a major worry for the Web's biggest powers. It's not a complete lockout just yet. Most emails are currently delivered by major, tracked webmail services, which means the likes of Google, Microsoft and Yahoo can analyse trends in newsletters - only partially in a native sense, but enough to get an idea of what's going on.

Facebook and Twitter, however, don't offer major webmail services, and thus they have no native control over information-gathering in that field. And there's nothing to stop newsletter services from building all-in-one solutions that don't require external email delivery at all. Solutions that completely bypass Big Tech. Then the almighty powers of Ye Interwebz are really left without a clue what's going on. At scale, it's a monstrous threat to their powerbase.

The creeping lockout would have a separate impact on other tech institutions. It's not only a question of what happens to social media. What happens, for example, to search engines?

GOOGLE'S CONCESSION OF FEAR

Google has already been forced to concede ground to the tide of so-called “gated content” - content whose main body is locked behind a paywall. Up until 2017, Google had no truck with this type of content at all, and would not rank it in search, however many backlinks it generated. But in October that year, Google unveiled a new protocol for sites offering gated content - “Flexible Sampling”, along with structured-data markup for paywalled pages. Such material does now appear in Google's results - to the point of annoyance in some types of search.

I believe that people are entitled to attach a monetary value to their time and expertise. But I don't believe they're entitled to free promotion when they do that - and that's what Google's system now does. It promotes commercial content for free. Google is aware of that, and we know it doesn't like the idea, because it spent nearly two decades opposing it.

The concession of 2017 came about for two main reasons:

One, because Google was faced with being cut off and snubbed by an increasing proportion of the Web. It recognised by 2017 that there would one day come a time when non-marketing content from major publishers would be walled by default. And it asked itself what role a search engine whose results are heavily weighted towards major publishers would play in that environment. Google could either start serving gated content in its results through gritted teeth, or watch those results grow ever more restricted as more and more major publishers opted to gate.

And two, because in making a protocol that potentially DOES rank gated content, Google is able to bait the providers of that gated content into revealing (to Google at least) what's behind the paywall. That achieves the all-important end of keeping Google in the loop. Being completely blocked from analysing an increasing volume of high value content was simply not an option for Google. The massive data company had to gain access to that data.
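To make that concrete, here's a minimal sketch of the kind of paywall markup Google's documentation describes - illustrative only, with a made-up headline and CSS class, not any real publisher's code. The full article body is served to the crawler, while JSON-LD structured data tells Google which part of the page is gated for human visitors:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example gated article",
      "isAccessibleForFree": false,
      "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": false,
        "cssSelector": ".paywalled-body"
      }
    }
    </script>

The crawler reads everything and can rank it; the human hits the wall. Which is exactly the “keeping Google in the loop” arrangement described above.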

This has not, however, solved Google's problem in the longer term. People will still get sick of a search engine serving them results they have to pay to access. You can subscribe to a few things, but not everything. And that means many results will become effectively useless to most Google surfers as the weight of paid-access content increases.

The change of policy does, however, allow Google to blame the lack of top-quality free results on the publishers, rather than take the blame for its own outdated ranking system. A ranking system that allows rich, commercial publishers to buy their way to the top of the organic results, squeezing out the smaller, 'enthusiast' publishers who are far less likely to gate. A ranking system that will fail altogether when the moneyed sector of the open Web closes enough doors.

CURRENT PREVALENCE OF CONDITIONAL ACCESS

We're further down the road towards a closed Web than it might at first glance appear.

Well over half of the hyperlinks doing the rounds on the highly visible face of Twitter now lead to pages that are inaccessible unless the visitor meets one or more conditions. Those conditions can be anything from enabling JavaScript and/or cookies, through divulging an IP address or giving up an email address, to paying a subscription fee.
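To illustrate the mildest of those conditions, here's a minimal sketch of a JavaScript-gated page - my own illustration, with a hypothetical “/article-body” endpoint, not any particular site's code. A visitor with scripts disabled gets nothing but the notice:

    <noscript><p>Please enable JavaScript to view this page.</p></noscript>
    <div id="content" hidden></div>
    <script>
      // The article body only arrives if scripts run - which also
      // confirms to the server that this visitor can be tracked
      // client-side.
      fetch("/article-body")  // hypothetical endpoint
        .then(response => response.text())
        .then(html => {
          const target = document.getElementById("content");
          target.innerHTML = html;
          target.hidden = false;
        });
    </script>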

Most people online are already meeting a number of these conditions. So they haven't noticed the dramatic increase in page-load conditionality that's come about since the early 2010s. It's only when you block sites from running scripts, setting cookies and accessing your IP address that you see how the landscape has changed.

The sheer volume of domains that have stopped giving truly unconditional access, and begun building walls of conditionality, is astounding. And those walls are never coming down. They're only going to get higher.

If you do already meet the most common conditions, you may not know that on 16th December 2020 Twitter itself added “JavaScript-enabled only” conditionality to its usage requirements. Before this, in some countries at least, it was still possible to use Twitter without JavaScript. A year ago it was possible in any country.

The change guarantees that Twitter can aggressively track all users who access the platform without a proxy. Twitter has also quietly scrapped its recognition of the “Do Not Track” signal, and made a range of formerly freely-accessible pages into 'login-only' areas.

Friend lists, Follower lists, the Media Tweets page… All of these were previously accessible without an account. Now you need to sign up to see them. And that means you can't read any of them without being recognised, tracked, added to a rolling data log, and packaged up for sale. Headline: Twitter is no longer an open platform. It demands data rather than money, but it's still making an up-front demand for collateral in exchange for access.
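As for the scrapped “Do Not Track” recognition - DNT was only ever an advisory browser signal. A page can read it in one line, and honouring it is entirely voluntary. A generic sketch, not Twitter's actual code:

    // navigator.doNotTrack is "1" when the visitor has switched DNT on.
    const dnt = navigator.doNotTrack || window.doNotTrack;
    if (dnt === "1") {
      // a DNT-respecting site would skip its analytics calls here
    } else {
      // otherwise, trackers load as normal
    }

Scrapping recognition of the signal simply means deleting that branch and tracking everyone alike.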

Other social platforms won't allow any sign-ups or page access at all without phone data, and the number of such platforms continues to rise. So whilst social media constantly extols openness (and yes, Jack, we still remember you pretending to want to make Twitter more open a little over a year ago - only to close more of it off in 2020 than ever before), its actions speak louder than its words.

Social media knows that gating is the future and is taking steps to be part of the closed Web. But it simultaneously needs to persuade publishing to remain openly accessible. If everyone shuts up shop, the idea of Facebook and Twitter endlessly helping themselves to the contents of Google Images by proxy falls apart. If everyone shuts up shop, there is no Google Images.

BREAKUP OF THE WEB

Some commentators have already predicted that Web 3.0, whatever it may turn out to be, will take us back towards the fragmented Internet of the pre-social media age. And if the Web does fragment into smaller, exclusive pockets of highly targeted interest, the outlook for Big Tech is dire.

Big Tech has made a career out of inserting itself into open content distribution streams on various pretexts, and then (ab)using its huge scale and power to become the actual source of the content rather than just the means of discovery.

A breakup of the Web into exclusive enclaves, to which Big Tech has little or no access, would be the one power-destroying eventuality that Google, Facebook and Twitter could do very little about. That's why they must, at all costs, now insert themselves into the new, dark distribution streams, and somehow attempt to push publishing back into the open. Push it into big spaces. Stop it retracting into exclusive, private clubs. They're worried. Expect to see some pretty drastic pushes and shoves before too long.