The Search Engine is Dying and There is No Cure

Tuesday, 18 October 2022
Bob Leggitt

Google has locked itself into a range of self-threatening behaviours, which correlate closely with behaviours exhibited by other huge brands whose fortunes took a nosedive.

Search logo next to closing down sale notice

When Google recently determined that approaching half of Generation Z think websearch sucks and use social discovery instead, it reached for the panic button.

Naturally, the primary solution it came up with was "more surveillance". No surprise there. But as I predicted in mid-2015, socially-driven discovery is now a dire threat to Google's future. And worse, the momentum appears unstoppable. If the generational apathy towards classic websearch continues, we'd expect to see Google drifting into a vault of irrelevance by the time Gen Alpha substantially comes of age.

"I would be very surprised indeed if, ten years down the line, there wasn’t a much greater union between social media and serious web search." - Bob Leggitt, June 2015.

So, is it specifically Google, the brand, that faces this apathy, or is it websearch as a broad concept? Well, Google has become qualitatively uncompetitive, delivering embarrassingly lazy and laughably biased results. It maintains its market share only through...

  • The brute force of default preferencing.
  • A surveillance-based MO which compensates for lack of inherent quality with interest-targeting.
  • Seriously anti-competitive misuse of the Chrome browser monopoly.

Whilst I absolutely do not recommend using a profile-gated search engine, this Twitter thread of Neeva's gives a fantastic insight into Google's dark patterns - not only the aggressive and extremely sly retention tactics Google has implemented, but also the real-world effectiveness of those deeply anti-competitive tactics. You really can batter the competition with a simple popup. But of course, this only works when users are switching to a recognised search engine natively in the browser. It can't prevent people ditching websearch altogether.

Neeva sits among a crowd of rival websearch engines vying to fill Google's shoes. But to say it's fallen short of hopes would be an understatement. Even after major marketing investment, and the dropping of a paid subscription-only model in favour of a freemium profile-gate, it's amassed just 600K users. When Neeva launched last year, its bosses envisaged 200 million users on full subscription as a realistic possibility. I think we can pretty safely say they no longer anticipate that kind of success.

And more established rivals are ailing too. DuckDuckGo, whose traffic had been rising by about 50% per year, is looking like it could flatline in 2022. At this point in the year, 2022's stats are up by less than 1% on the same day in 2021. That's a big change.

DDG did run into two bouts of bad publicity this year - one for censoring its results, and the other for preferencing Microsoft trackers in its browser. But month-to-month analysis of the stats shows that the brakes had kicked in well before the first of those two PR crises. So the problem, whilst possibly worsened by media criticism, has its roots elsewhere.

And whilst Brave Search has also taken some of DuckDuckGo's share, Brave itself is now suffering as a brand, with the browser's user figure having plunged 11% in consistent monthly drops since June. This looks tied to the general crash in enthusiasm for Web3 as NFTs lose the zeitgeist and governments size up to regulate crypto. It could also be influenced by the spread of news that Brave's blocking mechanism will collapse when Google drives out Manifest V2. But it does show that the alternative tech market is heavily gimmick-driven and does not fare well when the chosen gimmicks tank. And from the fact that Brave still leads with its plummeting browser, we can deduce that the search engine has not independently bucked the trend.

The websearch market is now saturated with hopefuls trying to wrap up the same crap in different paper, and the reality is that the whole concept is losing traction. The reasons why are pretty simple. Apart from the pointlessness of running searches which repeatedly surface choreographed lists of silos and megasites that no one needs a search engine to find, there's a perceptible coldness to bot-curated, Teletext-style staticity on a Web which has essentially become a television.

And there's now a compound problem. The more market share Google loses to the social arena, the more it needs to compensate for lost revenue. That means more self-preferencing, more surveillance, and more desperate ad-stuffing - making a comparatively bad experience even worse.

It's hard to see even a hint of an escape hatch. Google has locked itself into a range of self-threatening behaviours, which correlate closely with behaviours exhibited by other huge brands whose fortunes took a nosedive. The most obvious of these is 'referrer rot'.

The secret is out of the bag. Google's core system of prioritising content - broadly unchanged since the 1990s - is now completely useless. If Google doesn't persistently present results based on site reputation and user-surveillance, the quality is truly embarrassing.

REFERRER ROT

Referrer rot is a syndrome in which a middleman becomes reliant on such a small number of independently accessible destinations that the client eventually ceases to see any need for the middleman, and goes directly to the destinations.

In the context of websearch, persistent surfacing of the same platforms and megasites, over and over again, eventually persuades users that they don't need the search engine. If you run ten websearches, and after nine of those searches you end up visiting Wikipedia, you're going to begin wondering whether you really need the middleman, be it Google, DuckDuckGo, or whoever else.

Only recently have search results become quite as predictable as they currently are. So older generations still maintain their historical vision of websearch, with at least a fighting chance of discovering new destinations.

But much younger people have only known the era in which every Google search of a certain type presents the same group of platforms and sites - only a handful of which offer any value. So the psychology for them is different. They don't expect the unexpected. They're seeing Google purely as a middleman for Wikipedia, YouTube and whichever content-marketing drones paid most attention to Danny Sullivan this quarter. For them, what is the point of Google?

The problem is, this is not something Google is able to turn round. Even though referring all searches to the same set of sites is destroying the search engine's raison d'ĂȘtre, it's necessary for a range of critical reasons. Reasons such as public tolerance, propaganda-control, protection against humiliating misinformation, etc. The secret is out of the bag. Google's core system of prioritising content - broadly unchanged since the 1990s - is now completely useless. If Google doesn't persistently present results based on site reputation and user-surveillance, the quality is truly embarrassing.

If something is recommended before you even have to search for it, there's no need to type words into boxes at all.

BACKFIRING WITH A BANG

The twisting of search engine optimisation (SEO) into a highly commercial consumer-race has been Google's hope of sculpting its own answer to social influencer recommendation. Unfortunately for Google, the plan has spectacularly backfired.

SEO began life in the 1990s as a set of common-sense measures for making a website more desirable to search engines. Matt Cutts progressively evolved it into a Google-serving coaching regime, and today it's just a boot camp in which Danny Sullivan unironically tells capitalist drones how to game the Google results. It's totally inverted the function of search engines. Instead of users finding the world's brightest enthusiasts, the world's dullest salespeople find them.

It's hard to work out how Google failed to realise that even after the mother of all bootcamps, capitalist drones would still produce destructively dull content. But it seems they did.

Unlike older generations, who have been carefully and painstakingly subjected to Google's brainwashing, kids just see straight through all the bullshit and identify Google Search as precisely what it's become: an ad board. That they would find TikTok a better place to discover trends, hangouts and retailers is no surprise at all. True, TikTok is just as much a middleman as Google. But a) it doesn't make the fact anywhere near as obvious, b) it focuses on the exact commodity Google now glaringly lacks: entertainment, and c) it has a human face.

Websearch analysts are totally misunderstanding the TikTok trend. They're scratching their heads because TikTok's search facilities don't rival Google's in surveys. But the point is, it's no longer about search. Social discovery is a much broader process. If something is recommended before you even have to search for it, there's no need to type words into boxes at all. Voice assistants paved the way for the zero-typing age, but even they don't compete well with emotionally-rich social environments.

Zero-typing discovery is what TikTok aims to accomplish. Other social platforms have the same goal but have not been as clever with their "addiction model". Websearch has a massive problem here, because people expect to have to type in words. If they get results that don't match the words they type, they'll call foul. Social media doesn't have that strict Q&A format, and it defers a lot of trust onto people, groups or advertisers. So the scope for recommendation is much more open. Plus, social platforms major on emotional connection, which carries advertising more robustly than some detached and faceless listicle.

SOCIAL DISCOVERY FOR GROWN-UPS

As someone who has long been migrating to social discovery for research, I feel I'm now at the tail end of my relationship with websearch. Nearly all of the online research for a recent 5,400-worder I published about Fender Jaguar guitars was actually accomplished via the decentralised Twitter front-end Nitter.

Only by singling out Twitter users with expert or first-hand subject knowledge - among them actual musicians whose activity mapped out the guitar's history - was I able to get the depth of detail I was looking for. To clarify, I wasn't interviewing anyone or even sending them one-off queries. All of the info was already published. I was just using specific social profiles as search 'containers' to restrict the results to a trusted and subject-relevant conversational exchange.

I find it fascinating that so many important and well-recognised experts provide live and ongoing updates on social media, and yet the Google (and general websearch) ecosystem keeps you totally severed from them, to the point that it's as if they live on a different planet. This is choreographed and totally deliberate.

The fiercely anti-competitive Wikipedia notably doesn't directly cite luminaries on Twitter, because that would mean persistent linking to Twitter, which would mean a public majority realising they could get reliable information straight from source - with an interactive potential that the Google ecosystem can't provide. So Wikipedia instead cites secondary sources who get their information from the horse's mouth on social media. It's not as authoritative, but it puts a wall between the public and the original sources of information. It ensures that people have to keep going back to the Google ecosystem for their knowledge fix, and allows Big Tech to control the narrative.

Nitter is currently the number one most-used option in my search bookmarks. Number two is Wikiless - a proxied version of Wikipedia which does not directly expose you to a member of the Silicon Valley cartel, and which fully loads its pages even with ALL third-party content blocked. That neither of my top two search bookmarks is actually a recognised search engine speaks volumes about the state of websearch in 2022.

As you scale up your use of social discovery, you find source sites which have not been updated for many years and which still have plain HTTP (unencrypted) connections. These can harbour extremely valuable info. I've mentioned before that, without any good reason beyond preserving its data monopoly, Google now deliberately keeps HTTP-only sites out of its results. So there's a whole realm of genuinely enlightening, first-hand experience that Google's control-freakery alone has erased from websearch's accessible archive. There are now many things you can only find via social media.

Indeed, with Twitter/Nitter queries you can even specifically request unencrypted articles by adding url:http after your search term. Very useful in some genres. You're completely eliminating the professional 'SEO and digital marketing' circus, whose content will inevitably be HTTPS encrypted.
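
For instance, a query along these lines - the search phrase is purely illustrative - confines the results to Tweets linking unencrypted pages...

"fender jaguar" url:http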

SEARCHING WITH TWITTER OR NITTER

Whether you're best off with Twitter or with Nitter depends on your requirements. Nitter is better shielded from aggressive surveillance. Twitter offers better control over the results, with options for chronological display and the potential for a generally deeper dig.

Whichever you choose, put your keywords and phrases in quotes, and use filters to cut out the spam. Here are some of my standard filters for Nitter searches. You just type them into the search box one space after your query. Most can also be used on Twitter...

until:YYYY-MM-DD only displays Tweets posted BEFORE the specified date.

since:YYYY-MM-DD only displays Tweets posted AFTER the specified date, and can be used in conjunction with until:YYYY-MM-DD to set a precise date range. This 100% accurate date filtering is something no classic websearch engine is capable of matching.

from:@username will search only within a particular person's profile. You can search multiple people's profiles by using the OR operator. For example: "big tech" from:@username1 OR from:@username2 OR from:@username3.

-url:https eliminates Tweets containing encrypted links.

-url:http eliminates Tweets containing unencrypted links, and can be used in conjunction with -url:https to eliminate all links and keep the focus on conversational Tweets and native Twitter threads. Since spammers normally have a goal of pushing people off site to their own domains, filtering out the links zaps a huge volume of spam. However, many spammers are now wise to link-filtering and will spam their promotions directly into Tweets or Twitter threads.

-filter:retweets eliminates Retweets and ensures you don't see duplicates of the same Tweet (only necessary on Nitter - Twitter does this automatically in search).

-keyword, -#hashtag, -@username or -"your phrase" (i.e. simply placing a minus sign before a word, hashtag, user or phrase you don't want to see) eliminates all Tweets containing the specified component. For example, the following is one query I'm currently using to search the privacy keyword. Without the filters and using the #privacy hashtag, the search results will be overwhelmed with spam. With filters, the non-hashtagged query looks like this...

privacy lang:en -url:http -url:https -crypto -consulting -team -blockchain -BTC -bitcoin -coin -hacker -hacking -blockchains -filter:retweets

Bang that straight into a Nitter search box and see what real people are saying. It looks at first glance as if I'm searching for cryptocurrency info. But closer inspection shows that all the crypto terms are prefixed with a minus sign, meaning I'm filtering those words OUT of the search. I'm also filtering all links, which removes a vast amount of spam, at the expense of missing the odd useful article. I can usually recover the missing articles through a combo of Fediverse hashtag searches and Twitter feeds/lists confined to specific people.
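
If you'd rather bookmark a search like that than keep retyping it, a few lines of code will assemble the URL for you. This is only a minimal sketch: it assumes a Nitter instance at nitter.net and the common /search?f=tweets&q= parameter layout, both of which can differ between instances...

import urllib.parse

# The filtered query from above, kept verbatim.
QUERY = ("privacy lang:en -url:http -url:https -crypto -consulting -team "
         "-blockchain -BTC -bitcoin -coin -hacker -hacking -blockchains "
         "-filter:retweets")

# nitter.net is just one instance - swap in whichever instance you trust.
INSTANCE = "https://nitter.net"

# f=tweets asks for Tweet results rather than profiles; percent-encoding
# keeps spaces, quotes and any # characters safe in the final URL.
url = INSTANCE + "/search?f=tweets&q=" + urllib.parse.quote(QUERY)
print(url)

Drop the printed URL into your bookmarks and the whole filtered search becomes a one-click job.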

You can also try the same search and filters with #privacy as a hashtag rather than privacy as a keyword. The hashtag version is a lot calmer, but usually a lot more spammy. There's a fine balance between making the feed readable and losing important information, and it's amazing how many people who use the #privacy hashtag are actually intent on grabbing data. But the isolated gems in these searches are worth the trawl, and they'd never be found via a classic websearch engine. A range of searches based on these principles allows a perspective outside the usual alt-tech "pRiVaCy ToOlS" chant. I should also add that privacy is an incredibly tough search. It gets a lot easier with tags and keywords that aren't major cash-cows. #privacy is a serious cash-cow, as you'll see if you search it regularly.
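
For reference, the hashtag version is just the same query with the keyword swapped for the tag...

#privacy lang:en -url:http -url:https -crypto -consulting -team -blockchain -BTC -bitcoin -coin -hacker -hacking -blockchains -filter:retweets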

The more you experiment with different filters and concepts, the more you realise how restrictive and manipulative websearch engines are in comparison.

And this is only the first step in shifting to social discovery. It's only a very short time before, through these searches, you start to identify good sources who will actually deliver information. You can then combine those good sources into a bookmarked feed à la...

from:@username1 OR from:@username2 OR from:@username3 OR from:@username4 OR from:@username5

The result is equivalent to following those accounts - except you don't have to log in to see the feed (so better privacy), and you don't get all the other "Liked by someone that the bloke you just unfollowed once replied to" crap that Twitter chucks onto the main follow timeline.
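
If you keep your list of trusted handles somewhere else, a couple of lines of code will glue the feed query together for you. A minimal sketch, using hypothetical usernames and the same from:/OR syntax described above...

# Hypothetical handles - substitute the accounts you actually trust.
sources = ["username1", "username2", "username3", "username4", "username5"]

# Chain the profiles together with OR, exactly as in the query above.
feed_query = " OR ".join("from:@" + name for name in sources)
print(feed_query)
# -> from:@username1 OR from:@username2 OR from:@username3 ...

Feed the result into the same URL-building routine shown earlier, and the whole follow-free timeline becomes a single bookmark.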

SEO - today just a boot camp in which Danny Sullivan unironically tells capitalist drones how to game the Google results - has inverted the function of search engines. Instead of users finding the world's brightest enthusiasts, the world's dullest salespeople find THEM.

FUTURE OF DISCOVERY

There is no perfect way of discovering information online. Beating a path to the truth will always be a fight, and even if only for a sense of how distorted Web-based discovery can be, it's good to maintain access to as much printed matter as is feasible. But we in the older generations should not forget that once upon a time we all lived perfectly happily without any digital search opportunities at all. Then dBASE came along, and suddenly we could type words into computers and watch matching database records slowly creep up a self-scrolling screen.

We owned and controlled the database, and no outside forces could ever interfere with it, change the ordering of the results or insert sly promotions. In some ways, it's all been downhill from there. Maybe the next big thing after social discovery will be local, personally-maintained databases. Maybe it won't. But what we can be sure of is that websearch as we know it cannot survive a fully-formed Metaverse. Mind you, I'm not sure I can either, so let's not go down that road just yet.