Optimising Your Online Privacy With Self-Minimisation of Data

Popzazzle | Sunday, 8 August 2021

"Corporate spying, at the level it's now reached, is creepy, stalkerish, manipulative, predatory, warped, perverted, and abusive of human rights. Even if it carries no demonstrable collateral harm, you don't need to feel it's something you should willingly and happily accept."

Data is Money

The easy way to write a post about online privacy would be to list a range of so-called “privacy respecting” alternatives to Big Tech. But it's become increasingly obvious that at least some of these alternatives are a far cry from what they claim to be, and are actually part of the very system they profess to oppose.

At best, simply trusting services because their marketing says “we're all about your privacy”, when some of the worst privacy policies in the world open with “Your privacy is important to us”, is a wildly superficial and somewhat naïve approach.

The true key to optimising online privacy lies in disrupting the core tenets of tracking. Tenets as simple as product allegiance, for example. By sticking with one brand, one browser, one login, we make ourselves frightfully easy to monitor. Whilst, say, a VPN is touted as a route to better privacy, it allows a single provider to log the entirety of a user's online activity. And there's nothing other than that provider's word to say that the available information will not be packaged and sold to the Great Inscrutables.


In 2006, at the conclusion of a two-year civil racketeering case, Judge Gladys Kessler found that the entire tobacco industry had conspiratorially lied to the public about the safety of their products, continuously, for fifty years. Countless people had died painful deaths as a result. None of the companies cared. There were no exceptions. They ALL lied. None showed remorse or even an intention to stop lying in future. ALL of them appealed the Judge's requirement that they must inform the public of tobacco's dangers. This is corporate mentality.

In the light of this mentality, the idea that we could trust a tech outfit's marketing pitches on privacy, when they have an equal commercial incentive both to collect/exploit our data, and to tell us that they don't, would border on insanity.

And even without lying, tech brands can capitalise upon loopholes in data protection law to straightforwardly not tell us about some (or all) of the underground data trading regimes they're involved in.

So we have to find a different method of protecting ourselves from out-of-control surveillance than merely trusting companies who...

  • Are heavily incentivised to lie and are part of an industry renowned for its dishonest, unlawful and morally-bankrupt behaviour.
  • Contradict their own marketing pitches in their privacy policies and/or terms of service.
  • Shroud the detail of their processes in secrecy.
  • Often fail to adequately explain how their services make money.
  • Are not even required to tell us they sell reversibly “anonymised data”.


What this post will propose is the practice of self-minimising data.

Data minimisation is usually considered to be the responsibility of those who collect data. A responsibility to collect only the data they absolutely need to deliver their service. Handlers of personal data have a legal requirement to do this.

The two main problems are:

1) Data handlers can loophole this requirement by reversibly “anonymising” data. In other words, part-deleting bits of information so that an individual record cannot be considered to identify the person it relates to, but if sold, can be de-anonymised by the recipient. “Anonymised” data is exempt from data protection law, so companies can, and do, collect and sell it in almost unimaginable detail and volume.
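The loophole in 1) can be made concrete with a minimal sketch of a so-called linkage attack: an "anonymised" dataset is joined with a separately obtained, identified dataset on the quasi-identifiers the two share. All names, postcodes and records below are invented for the illustration.

```python
# Sketch of a linkage attack: re-identifying reversibly "anonymised"
# records by joining them with an identified dataset on shared
# quasi-identifiers. All records here are invented examples.

anonymised_purchases = [
    # names removed, but postcode and birth year retained
    {"postcode": "SW1A 1AA", "birth_year": 1980, "purchase": "health product"},
    {"postcode": "M1 2AB", "birth_year": 1992, "purchase": "baby formula"},
]

marketing_list = [
    # an identified dataset bought from another source
    {"name": "A. Example", "postcode": "SW1A 1AA", "birth_year": 1980},
]

def deanonymise(anon_rows, identified_rows):
    """Join the two datasets on the quasi-identifiers they share."""
    matches = []
    for a in anon_rows:
        for p in identified_rows:
            if (a["postcode"], a["birth_year"]) == (p["postcode"], p["birth_year"]):
                matches.append({"name": p["name"], "purchase": a["purchase"]})
    return matches
```

One unique match on the quasi-identifiers is all it takes: the "anonymous" purchase record is now tied to a name, and the exemption from data protection law evaporates in practice, if not on paper.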

2) So many companies break data protection law that the regulators don't even have a starting point for enforcement. The regulators' recourse is to megafine the likes of Amazon, Google and Facebook, but any deeper in the pyramid than this, violation is running virtually unchecked. The list of UK companies ever fined under GDPR is just five entries long. That's less than two fines per year. The notion that there is any control whatsoever over data mismanagement is pure fantasy.

So what data self-minimisation does is transfer the responsibility of minimising data from the collector to the contributor, i.e. from the tech companies, to you. Where data minimisation's core principle is: don't collect data you don't need to collect, self-minimisation's core principle is: don't provide data you don't need to provide.

We'll look at the main danger zones for tracking, and explore the ways we can isolate our activity from those “data hotspots”. This will not only minimise the amount of data that tech companies are able to collect - but also, in particular, minimise the quality of the data. High quality data is 100% confirmed as relating to a real, rigidly identified individual. Low quality data either comes with a high degree of uncertainty, or can't be linked with any verified real-life identity at all.


As a final note before we start, this is not a guide for people who are intending to break laws and evade capture. It's for law-abiding people who feel discomfort with the utterly warped level of spying that tech companies now engage in, to the point that it's affecting their state of mind and even potentially their mental health. It's not about thwarting the best efforts of Interpol. It's about obfuscating your online activity continuum to disrupt the flow of data to the central points of harm - the data brokers.

Data brokers surreptitiously collect information about us, and make it available for sale to more or less anyone who wants to buy it. Most of their acquisition methods sit outside of the consent loop, but regulators are routinely passive toward them, doing the bare minimum to restrain them despite their categorical, systemic and persistent breaches of data protection laws.

As data purchase points, data brokers have a real influence on the resources that will or won't be made available to us. They can prevent us from getting jobs or accommodation, negatively and unfairly impact our insurance premiums, etc. The less data brokers know about us, the better. This guide seeks to ensure that your Internet use stays clear of them, and that you generally achieve greater freedom from harmful, hardcore profiling.

But you don't need any collateral threat to object to being spied upon. Corporate spying, at the level it's now reached, is creepy, stalkerish, manipulative, predatory, warped, perverted, and abusive of human rights. Even if it carries no demonstrable collateral harm, you don't need to feel it's something you should willingly and happily accept.


Recognising where the surveillance hazards are is the starting point for optimising online privacy. Here are the primary danger zones...

LOGINS. Logins are the number one means by which trackers ensure we can be identified per visit, accurately monitored/profiled over a long period, and exploited as a product. Coupled with a reliable predictor of true identity - such as financial transaction - logged in data is the highest quality data companies can get. Also, logins nearly always require JavaScript to be enabled, which allows the site to run all sorts of behaviour-monitoring programs from within the browser.

Note also that operating systems themselves have now become associated with logins, as have some browsers. If it's at all avoidable, do not log your entire device, or your browser, into an online account.

Sometimes it's genuinely necessary for us to have a login. For example, if the information we'll be remotely accessing is private, security-walled or paywalled, or if we're publishing in a manner that must specifically be attributed to us. But at other times it's entirely unnecessary for us to have a login. For example, if we just want to read free, publicly-posted information.

For optimum privacy, you should try to do as much as you possibly can without logging in. As a general rule, if you're only viewing free, published content, you do not need to be logged in, and you shouldn't be logged in. Searching Twitter? Log out. Otherwise your search will be recorded on your profile.

As a policy, decide what you intend to do online, and select the method with the minimum tracking potential. If you're only surfing the Web for research, do it in incognito mode - ideally on a browser you never use for logins, and which has all cookies blocked. No cookies, no sneaky logins. That's a reliable fact. And don't hold back on the number of different browsers you use. Tech relies on your loyalty to comprehensively spy on and exploit you. Don't give tech any loyalty. In commerce, loyalty is almost never in the consumer's interests.

I routinely use Chrome, Firefox, various versions of Chromium running separately with their own data folders, Pale Moon, MyPal, SeaMonkey, Slimjet, K-Meleon, Tor, and then some other browsers such as RetroZilla for old systems. I even use Internet Explorer to load HTML documents on a couple of PCs that never access the Internet.

Some of these browsers are confined to one brand login. Chrome, for example, only accesses logged in Google services. Nothing else, and I don't do any web searches in that browser. If I'm logging into Microsoft I'll use a different browser. If I'm just researching I'll use Pale Moon with cookies and JavaScript blocked.

If you intend to use Google Chrome you should know that by default it will run forced updates via GoogleUpdate.exe, and scan your drive with a program called software_reporter_tool.exe, then send the results to Google. It could even attempt to make changes to your software environment. Plus it will surreptitiously log the browser into your Google account if you don't disable the sync. To manage the privacy harms and spyware/malware threats of Google Chrome you'll need to know how to disable the additional programs it runs and/or firewall its additional outgoing connections. It also requires rigorous revision of its default settings - look out for FLoC in the "Privacy Sandbox", etc. Most people who care about privacy will not use Google Chrome at all. But if you're only using it for logged in Google services, Google would know what you were doing even if you used another browser - which may then be reporting to other companies, widening the data-spread further.

As a proxying system, Tor is different from other browsers. It prevents your destination site from discovering your IP address, and from being able to fingerprint your device. However, Tor is a network of servers run by volunteers, who can be literally anyone. Major tech companies, your next door neighbour, criminals, the Police... Anyone. And the volunteers can discover your IP address and fingerprint.

Among the current volunteers are people or groups who identify themselves as: “PieceOfShitSrv”, “lifeisabitch”, “Bastard”, and “DieYouRebelScum1”. Do these sound like names you'd give access to your digital fingerprint by choice?

This illustrates the problem with using privacy tools. You still, ultimately, have to trust someone. And as you increase the level of obfuscation, the people you're trusting get more anonymous and less accountable. The design of Tor is meant to ensure that the people who get your IP address will not also see your destination site. But this assumes that large parts of the Tor server network have not been monopolised by the same person, organisation or group. This can happen, and has happened.

So my advice re Tor would be: ONLY use it to browse sites. NEVER for logins (the destination site recognises you by login anyway, so it's pointless), NEVER to transact, and NEVER to access your private information. If you want to log into a site, go there directly - not via Tor. And if you are using Tor just to browse, use it with purpose. Like when you specifically don't want the site you're visiting to match your IP address with a known profile. And even then, be aware that your destination site could itself be involved in running Tor servers.

If we're not using a proxy (such as Tor), trackers can still recognise us without a login, and many sites with login facilities will still record our visits in detail even if we don't have an account. They'll tie the usage log to our IP address. But if we're not logged in, even sites that can associate our IP address with a known login ID can't automatically add the activity to our profile, and here's why...

Consider a family. The husband has a Google account, but the wife doesn't. They both use the same desktop computer at various times, which will give a familiar fingerprint and IP address to Google. When the husband uses Google services, he's logged in. But when the wife uses them, she's not. And the wife uses a different browser.

If Google were to log all of the activity for that computer and IP address to the husband's profile, there would be two glaring problems. One, the activity-log built by Google would be useless in ad-targeting terms, because it would contain activity from two completely separate people, with separate interests, and separate demographics. But far worse, if the husband requested his usage data - as he's entitled to do under the law - he would see all of his wife's activity, and that would be a data protection violation.

So without a login, identifying a specific person remains both unreliable and legally dicey. Even more unreliable in workplaces, where perhaps fifty or more employees use one computer at various times. This is why tech companies desperately want us to create an account and log in.
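The husband-and-wife ambiguity above boils down to one attribution rule: only events that carry a login can safely be written to an identified profile; everything else on a shared IP address and fingerprint has to stay unattributed. A toy sketch, with invented events:

```python
# Toy attribution rule: only events carrying a login can safely be added
# to an identified profile. Everything else on the shared IP/fingerprint
# stays unattributed. The events below are invented for the example.

events = [
    {"ip": "203.0.113.7", "browser": "Chrome", "login": "husband@example.com"},
    {"ip": "203.0.113.7", "browser": "Firefox", "login": None},  # the wife, logged out
]

def attributable(events):
    """Return only the events that can be tied to a known profile."""
    return [e for e in events if e["login"] is not None]
```

Same computer, same IP address, but only the logged-in event survives the filter. That unattributable remainder is exactly the uncertainty the rest of this guide tries to maximise.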

We can exploit the uncertainty that exists outside of a login to minimise what can be recorded in our identified profiles. Or not, in some cases...

Some sites will insist that you do log in to read free, published content. This is “hard gating”, and it's a measure that unnecessarily destroys your freedom to read free content in privacy. MeWe and Tinder are classic examples of a hard gating setup. You can't view any user content at all unless you create an account, log in, and give the sites the wherewithal to record your every move to a known profile.

For viewing only, the login is not necessary from the user side, because the content is not deemed to have monetary value and does not require a security wall. There's nothing for the login to protect. So the login is only there by the provider's choice - almost inevitably because the provider wants to reliably identify and behaviour-monitor individual users, and exploit them as products.

Other sites employ a coerced login strategy. They will not strictly require you to log in to access free, public content, but will make it so difficult for you to navigate your content consumption while logged out, that logging in is the only practical solution. Obvious examples include Pinterest, LinkedIn, Instagram and Facebook.

One of the giveaway signs of a coerced login strategy is that the search facilities are walled behind the login. The biggest Friendica instance also gates its search, so the policy is not confined to centralised platforms. Some Mastodon and Diaspora* instances are hard-gated, like MeWe and Tinder. Diaspora* in general is partially gated and can be considered a walled garden.

Think carefully about why sites or platforms are requiring you to log in, and if there's no reason beyond “policy”, ask yourself whether you should be using those resources at all. A login without any explanation other than “policy” will nearly always be a surveillance trap. And it's telling that so many supposedly privacy-focused sites implement unnecessary logins. It's a glaring contradiction that should raise an alarm.

Any web services that demand a phone number, when the number's only documented use is for the sign-up or login itself, should be dismissed without any further consideration. A good example of this would be a service such as Vivaldi Webmail, which uses the facade of “privacy” to drive what's clearly a data-mining business, and which expects people to believe, quote: “We don't read your email or monetise your account in any way” - despite the privacy policy literally saying: “Email messages are scanned”. A perfect illustration of the marketing blurb contradicting the legal copy, and not in any way unusual.

Requiring any unnecessary or irrelevant data is against Article 5(1)(c) of the GDPR, but more than that, it's a clear indication of the company's motives. Genuine privacy-focused companies do not insist on collecting irrelevant and unnecessary data, and then handing it straight to data-mining third parties, as Vivaldi does with the phone numbers it collects.

Unnecessary demands for phone numbers have become another deeply normalised anti-privacy practice. Aside from collecting excess data, such demands also discriminate against people who are making a stand against surveillance by refusing to own a bug. And they discriminate against some of the world's poorest people, who access the Internet via a public computer, but do not own a mobile phone. Should those people not have equal connectivity and an equal voice online?

Demanding mobile phone numbers for services that don't require them is unethical, and we should not hold back in telling tech companies that. Especially those in the plastic privacy genre.

FINANCIAL TRANSACTION. This is another top crisis point for privacy. When you transact financially online...

1. You're forced to use your true identity, which means trackers categorically know who they're dealing with. This is why trackers swarm around financial transaction like flies around poop.

2. The nucleus of a data broker's dossier is financial, so the likelihood of you being picked up by data brokers when you financially transact is high. But if your digital footprint does not have a financial nucleus, any blocks of activity fed to data brokers are likely to be orphaned. You can exploit the data brokers' one-track minds, by isolating most of your online activity from money.

3. Financial transaction often goes hand in hand with login. And where that is the case, you're rubber stamping the identity that goes with your behaviour profile.

Ideally, don't spend any money online at all. Support your local shops, in cash, and/or use something like Gumtree or Picclick to find secondhand goods by local region, then collect in person. Saves on shipping costs too, and you can see exactly what you're buying before you buy it.

But if you do need to transact financially online (and I get that some people do), one good self-minimisation technique is to set aside a device just for that purpose, and then segregate the rest of your Internet use to a separate device, on which you use a different name. Due to the principle we saw with the husband and wife family situation, a separate device, with a separate name, and no financial activity, will almost inevitably be considered a different person by trackers - even if it has the same IP address. You must, however, keep that second device strictly ringfenced. One login to PayPal will blow the whole thing.

JAVASCRIPT. Most of the web's worst tracking tools run on JavaScript. That's the reason many sites now force the use of JavaScript by re-engineering their pages not to load without it. But on the information highway (as opposed to Web 2.0), pages that do load without JavaScript are still in the majority. So it's good practice, if you're only surfing or researching, to block JavaScript by default, and then toggle it on by discretion if there's no other option.

JavaScript is also a security threat. For visiting unknown sites, disabling JavaScript makes you much, much safer. I can't overstress how important it is to block JavaScript if you're seeking to avoid hardcore creepware and browse more safely. JavaScript is used heavily in session-recording - the really detailed, voyeuristic behavioural surveillance. It's also used to forensically identify you. For example, used in an attempt to recognise you by your typing speed and traits. Simply, JavaScript is the most pervasive aggressive spying facilitator currently in use.
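To illustrate the typing-traits point, even a crude statistic over inter-keystroke intervals separates two typists. The timings below are invented for the example; real session-recording scripts capture far richer event streams than this.

```python
# Why typing rhythm is identifying: even a two-number "signature" over
# inter-keystroke intervals (in milliseconds) separates two typists.
# The timings are invented; real scripts collect much richer data.
from statistics import mean, stdev

def rhythm_signature(intervals_ms):
    """Mean interval and its spread: a crude behavioural fingerprint."""
    return (round(mean(intervals_ms)), round(stdev(intervals_ms)))

fast_even_typist = [95, 110, 102, 98, 105]
slow_bursty_typist = [210, 340, 180, 400, 260]
```

With JavaScript blocked, the page never receives the keystroke timings in the first place, so there is nothing for a signature like this to be computed from.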

The K-Meleon browser has a native JavaScript toggle button, allowing you to switch JavaScript on or off on demand. With other browsers I would recommend using the extension NoScript.

By default, NoScript blocks JavaScript, and then allows you to lift the block per individual domain or even subdomain. If a page doesn't load properly you just hit the NoScript icon at the top right of the browser and it will show you the domains that want to serve scripts. You don't have to allow them all. You can choose which one(s) you want to allow. Start by resetting the main domain (which should be at the top of the list) from “Default” to “Trusted”. That's often enough to load the page.

If not, you'll need to “Trust” one or more of the other domains that appear in the list. It may take a few moments to get the setting right, but once you make your “Trusted” settings the site will always work in future - unless you choose to revoke the “Trust”. For sites you're not intending to visit again, you can “Trust” the domain(s) temporarily. The “Trusted” status will then end when you close the browser.

Depending on how, and into which browser you install it, NoScript may whitelist popular domains, including those from Google, Microsoft, YouTube, PayPal, Yahoo etc. This may have been a condition of getting NoScript into the Chrome Store, but it's a huge compromise, because it allows a vast range of tracking scripts to run. So I would suggest checking, immediately after installation, to see if there's a whitelist. To do this, go to NoScript's Options and check the Per-site Permissions tab. If there's nothing there, there's no whitelist. If you do see any whitelisted domains, simply reset their status from “Trusted” to “Default”. Then you're ready to make your own exceptions.

You could also use uBlock Origin as a JavaScript blocker and permission toggle. You then also get broader tracker-blocking built in. However, with uBlock it's not as easy to be selective with which scripting domains you allow per site. Both NoScript and uBlock categorically declare that they collect no data of any kind. So you could use them both. NoScript as a specialist JavaScript blocker, and uBlock purely for blocking trackers.

Be aware that if you're using Tor, you should NOT rely on the bundled NoScript, because in Tor it's set up not to work on HTTPS sites. You can mess about trying to get it to work if you wish, but don't be surprised if it reverts straight back to not working. Clearly, the providers of Tor want to pretend you're effectively managing JavaScript, when on nearly all of the sites you visit you're literally blocking nothing at all - because Tor's other default extension (HTTPS Everywhere) diverts your HTTP visits to their HTTPS connections. On Tor, switch NoScript off and install uBlock Origin. I won't say anything about HTTPS Everywhere except that I have it disabled.

In blocking JavaScript, you'll inevitably encounter annoyances as various pages refuse to load, but you will make yourself vastly harder to behaviour-monitor, harder to forensically identify. Indeed, one of the most basic privacy setups you can run is a non-telemetrised, non-home-phoning browser like Ungoogled Chromium, with no extensions, cookies blocked, no logins, and JavaScript natively disabled. On a separate, Linux computer that doesn't use your real identity or financially transact, this will present surprisingly good resistance to the grand tracking machine. But check the Linux distro to make sure it doesn't have telemetry of its own. Some do.

SMARTPHONES AND SMART DEVICES. In privacy terms, using any piece of tech gear with the word “smart” in its title is like jumping off a cliff. “Smart” devices are bugs. They're designed to spy on people and that's what they do. They give tech companies opportunities to remotely activate microphones and/or cameras - usually on an opt-out basis - and that's deliberate.

This is way beyond data mining. Let's get the language straight. These devices are (often secretly) recording and collecting private conversations and visual interactions. Even if the device owner has consented to being recorded, any third parties involved have not, and are almost inevitably unaware they're being recorded. So this is non-consensual, voyeuristic content. Not “data”.

Stalking people and secretly recording them is perverted. That's the word society always used before surveillance tech came along. Why should we change it now?

Realistically, why would anyone ever hide a camera and a microphone on a television? These products are not being designed by normal people. It's weird, debauched voyeurism, which shows us just how warped and morally-bankrupt the technology industry has become. This technology should not, in my view, be supported at the point of purchase. But as these concepts are normalised, most people may in time feel they have little choice but to buy the products.

The only practical advice I can give is: either research in depth a smart device's privacy threats before buying it, or just don't buy it. Tape over any unused cameras, and think carefully about how you can stop a microphone from being remotely activated. If you're not confident you can stop it, and you're not able to physically disconnect the microphone, I can only recommend not using the device. With something like a TV, I would suggest completely denying it access to the Internet.

SEARCH ENGINES. Search engines have direct access to some of the most sensitive data we ever provide. Never log into them, ever, and as a precaution, ideally only use them in incognito mode, with cookies and JavaScript disabled. If you're going to search for anything that could link you to stigma, such as mental health problems or private physical conditions, use Tor and hide your IP address. Not all search engines will work via Tor, so you'll be restricted in your options.

Try to follow the principle of anti-loyalty with search engines. Don't just pick one. Switch frequently. Use different search engines, from different browsers, from different computers/devices if possible, and sometimes via Tor. This will make it much harder for one provider to assemble a joined up dossier of your search history.
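The anti-loyalty principle can even be automated. A trivial sketch, with placeholder engine names (substitute whichever engines you actually use):

```python
# Anti-loyalty as code: never hand one search provider your whole history.
# The engine names are hypothetical placeholders.
import random

engines = ["engine-a.example", "engine-b.example", "engine-c.example"]

def pick_engine(previous=None):
    """Pick an engine at random, avoiding the one used last time."""
    return random.choice([e for e in engines if e != previous])
```

The point is not the code itself but the habit it encodes: no single provider should ever hold a contiguous run of your searches.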

IRL CONNECTIONS. In Real Life, or IRL connections, are connections that link you with identified parties from your real world social life. These are the Holy Grail for social media platforms, and they allow sites like Facebook to identify millions of people by association.

The problem is that most of us have that one friend or colleague who has no concept of online privacy and will upload their contact list to Facebook, find us without us even realising they've signed up, and then “helpfully” tag us into a group photo. Facebook now has a full ID on us, coupled with face-recognition. Our privacy is officially shot, and the whole thing took two minutes.

This kind of recognition system is hard to avoid on social platforms, due to the collection of contact list information, and other features such as intelligent photo-tagging, which use people's desperation to connect as a means to extract extremely reliable third-party identifications.

The violation in these systems comes in the fact that the contact data collected by the platform relates to third parties who have often not consented for that data to be held. Only the first party gives consent. But their consent is invalid, because it's not their personal data that's being collected. Most social platforms collect this third party data without the owners' consent - including some self-styled “privacy respecting” alternatives. For example, from the MeWe Terms...

"If you give MeWe permission to upload your address book in order to serve your needs with invitations, MeWe may store a copy of the phone numbers, emails, and names in your address book on MeWe servers."

One data self-minimisation policy that helps guard against this is email compartmentalisation. Using different email addresses, from different email providers, for different things. It's the same principle as with the browsers and search engines. Use as many as you feel comfortable with. Keep it all fragmented and difficult to piece together.

It's bad practice to join a social media platform with an email address you use to contact IRL connections. If your connections join platforms and upload their contact lists, and you're already a member using the email address that appears on those contact lists, the platforms get an immediate flag on your association. Many social platforms will then do their best to extract more information about the third parties whose contact data they're collecting - photo-tagging systems being the most obvious example.

All of this takes your privacy out of your hands and places it in the hands of third parties you may barely know. That's what it's designed to do. But compartmentalising your email addresses does a lot to break the chain. If an acquaintance can't find you, they're much less likely to tag you into a photo.

If you use different email providers for every major service you join, you're also increasing your security. That's because if someone hacks your email, they're then likely able to hack every service you use it to log into. If you've only used that provider for one registration, you limit the damage. Ideally, the email address(es) you're using to actually talk to people should not be the email address(es) you use to sign up to services.
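The compartmentalisation habit described above amounts to keeping a map of one address, from one provider, per service, and refusing to reuse an address for a new sign-up. A minimal sketch, with hypothetical placeholder addresses:

```python
# Minimal "compartment map": one address, from one provider, per service,
# plus a check that flags reuse before a new sign-up. All addresses are
# hypothetical placeholders.

compartments = {
    "social-platform": "reader01@provider-a.example",
    "shopping": "cart99@provider-b.example",
    "irl-contacts": "me@provider-c.example",  # for people only - never sign-ups
}

def check_signup(service, address):
    """Refuse an address that's already tied to another compartment."""
    used_by = [s for s, a in compartments.items() if a == address and s != service]
    if used_by:
        return f"REUSED: {address} is already tied to {used_by[0]}"
    return f"OK: {address} is unique to {service}"
```

Whether you keep the map in a password manager, a text file or your head, the rule is the same: a reused address is a join key for trackers and a single point of failure for attackers.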

Avoiding IRL connection processes is also a lot easier if you ensure your second device - the one that doesn't financially transact or use your real name - never communicates with your real life contacts. Outside of the workplace (and even in it if it's avoidable), no one with a serious commitment to privacy should be using Facebook or LinkedIn, ever. But don't assume that if you're not using Facebook or LinkedIn you're not being identified by IRL association. You have to do more than just avoid control-freak social media sites with anti-security insistences on the use of real names.

COOKIES. Some people think that if they block third party cookies, cookies can't be used to track them. This is not true. Short of a login, first-party cookies are the most reliable way to identify a return visitor to any site. For optimum data self-minimisation, block all cookies. The browser will probably say “not recommended” but that's for people who can't be bothered to set exceptions for sites that actually need cookies.

With all cookies blocked, you can then set exceptions for all the sites you need to log into. In Chrome-type browsers, just go to chrome://settings/cookies, and where it says “Sites that can always use cookies”, hit the Add button. Then just enter the domain you want to exempt in this format...

[*.]domain.com

The brackets, asterisk and dot before the domain just tell the browser to exempt any subdomains as well as the main domain. For example, if you need to accept cookies for mail.google.com and drive.google.com, simply entering google.com in your cookie exceptions won't achieve that. But [*.]google.com will, because it exempts every subdomain for google.com.
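The matching behaviour just described can be sketched as follows. This is purely illustrative (the real matching happens inside the browser), but it shows why the bare domain misses subdomains while the bracketed pattern catches them:

```python
# How a "[*.]domain" cookie exception matches hosts, per the description
# above. Illustrative only - the browser performs the real matching.

def make_exception(domain):
    """Build a Chrome-style exception pattern covering all subdomains."""
    return f"[*.]{domain}"

def pattern_matches(pattern, host):
    if pattern.startswith("[*.]"):
        base = pattern[4:]
        # the leading dot stops "evilgoogle.com" matching "[*.]google.com"
        return host == base or host.endswith("." + base)
    return host == pattern
```

So `[*.]google.com` covers google.com, mail.google.com and drive.google.com alike, while a bare `google.com` entry covers only the exact host.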

Entering your cookie exceptions for whole domains allows you to quickly facilitate your logins, per case, without accepting cookies from services and sites you don't log into. The average person really shouldn't have to set many cookie exceptions. It's a relatively small amount of effort for a very pronounced level of additional privacy.

You will encounter some sites that won't load at all unless you have cookies enabled. You can either award these sites an exception on contact, or see if you can find the same content elsewhere. I usually do the latter, and since most of the sites that require cookies to load pages are secondhand content silos, it's not difficult to find the original work. Finding the original work, rather than patronising a cookie-demanding theft-by-proxy site like Pinterest, also makes you more ethical.

TELEMETRY AND "CORPORATE PARENTING". Telemetry - the automated gathering of product (as opposed to site) usage data - is a rapidly growing concern. It's still in its infancy, but it's already becoming normalised, with supposed “privacy-focused” or “free” products now implementing it, as earlier adopters heavily scale up the volume and frequency of their data guzzle.

We know that tech companies don't stop scaling up these initiatives until the consequences start to hit them in the pocket, and on that basis it probably won't be long before telemetry becomes just another excuse for perpetual surveillance. Telemetry is a particular danger because it can be implemented in a way that bypasses data protection law. In the absence of lawsuits and wide-scale consumer rejection, telemetry looks like yet another privacy violation set to spiral out of control.

Telemetry is most associated with browsers and operating systems. Check the deep settings to see if you can switch the telemetry off. If it's impossible to switch off, and there's an alternative product that either allows telemetry to be disabled or doesn't have telemetry at all, seriously consider using the alternative.
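As one concrete example - assuming a Firefox-family browser - telemetry can be switched off via about:config or a user.js file in the profile folder. The preference names below are taken from Firefox; they may differ between versions and forks, so verify them in your own browser's settings:

```javascript
// user.js - Firefox-family telemetry preferences (names may vary by version/fork)
user_pref("toolkit.telemetry.enabled", false);
user_pref("toolkit.telemetry.unified", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("datareporting.policy.dataSubmissionEnabled", false);
```

A user.js file applies these on every startup, so an update can't quietly re-enable them in the settings UI.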

To combat the problem of desktop programs "phoning home" with telemetry data and other material they've mooched off your drive, use one of the desktop computer's greatest, if often underused, weapons: the firewall. Firewalls are often set to allow outgoing network connections by default. Change that default and block all outgoing connections. You'll then have to individually set exceptions for everything you want to use the Internet. Depending on the firewall, that could take an hour or two. But once it's done, you're secure in the knowledge that when you install a new piece of desktop software, it will not be able to communicate with its manufacturer. Browsers are a necessary exception, because by the nature of their job they have to access the Internet. But don't give other programs a free hotline to Big Brother.
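The default-deny policy described above boils down to one rule: every outbound connection is blocked unless the program has an explicit exception. A minimal Python sketch of that logic - the program names and the allowlist are invented for illustration, and a real firewall matches on executable paths, ports and addresses rather than bare names:

```python
# Default-deny outbound policy: everything is blocked unless explicitly allowed.
ALLOWED_OUTBOUND = {"firefox", "thunderbird"}  # illustrative exception list

def outbound_permitted(program: str) -> bool:
    """Return True only for programs granted an explicit exception."""
    return program in ALLOWED_OUTBOUND

# The browser gets through, because accessing the Internet is its job...
assert outbound_permitted("firefox")
# ...but a newly installed program can't phone home until you allow it.
assert not outbound_permitted("shiny-new-editor")
```

The point of the inversion is that forgetting to configure a new program fails safe: silence, not surveillance.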

“Corporate parenting” is what happens when a software or hardware manufacturer builds their product in such a way that the purchaser does not get executive control of it. In other words, you don't own your computer. It's really owned by Microsoft, Apple, Google, or whoever else made the software.

The corporation then “parents” the user, implementing processes they consider to be “for the best”. That's best for themselves. Not best for you, obviously. They may change the functionality of the product, add or delete software (sometimes even deleting software you paid for), automatically run a baffling array of programs without consent, scan your drive(s), send collected information back to the “parent”, etc.

Corporate parenting is not only totalitarian - it also necessarily entails spying, and is a privacy violation. The ways to prevent it include deleting Windows 10 and installing a free-software GNU/Linux distribution. Or isolating a workspace computer from the Internet entirely, and keeping a separate machine just for the Web. Or using older proprietary operating systems that allow communication with the software vendor to be completely disabled or blocked.

For a personal machine, I wouldn't use a later Microsoft system than Windows 7. When 7 loses compatibility with the Internet I plan to go online solely with Linux. I do use Linux now, and it's fine for getting around the Web. But for creative tasks it's hard to break away from over thirty years of experience with Microsoft OSes.

And if you do go for a Linux package, pay close attention to its overall privacy regime. Some of them have poor privacy control by default - and the available browsers can be a particular weakness. Don't automatically think: "Because it's Linux, it must be great for privacy".


Those are the main danger zones for online spying, and some suggested remedies. But there's one burning question...

Should we be using browser extensions? It's an important question, because browser extensions can work similarly to VPNs, watching everything we do, and potentially mining the data for sale.

I would definitely advise using as few browser extensions as possible. If you don't use any at all, that's one fewer party being granted access to your activity.

And I would recommend stepping around the “Eth Tech” extensions - options like DuckDuckGo Privacy Essentials and EFF Privacy Badger, for example. These are nothing like as effective as the open-source uBlock Origin, and they come from money-motivated organisations that have opposed adequate copyright protections for content creators (demonstrating that they put self-interest above ethics). They're really just allies of Big Tech, performatively pretending to oppose it.

But some privacy extensions may soon, in any case, be cut off by major browsers' deprecation of Manifest v2, as plans to outlaw ad-blockers take shape. Google is implementing the new Manifest v3, which drops the blocking webRequest API that uBlock Origin relies on, and Mozilla has committed to adopting v3 too - although Mozilla says it will keep the blocking API available.

Indeed, v3 is already live in Chrome. But currently the browsers still support v2, so the ad- and tracker-blockers continue to run for now. When v2 is withdrawn - and we don't yet have a date for that - the most powerful extension-based blockers will crash out of the game. Brave browser is a partial exception, because its built-in Shields blocker is implemented natively rather than as an extension - but any additional blocking extensions you run on top of it will face the same fate.

It's worth getting used to other means of avoiding trackers now, rather than waiting until v2 is withdrawn and suddenly being hammered with aggressive scripts. Learning to use the web with JavaScript disabled - and to avoid JS-dependent sites - is a good longer-term solution, although the JS-free web will inevitably continue to shrink over time.
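It's worth remembering what disabling JavaScript actually does: the page's HTML still arrives, but the scripts embedded in it are simply never executed. A small Python sketch using the standard library's HTMLParser on an invented page, keeping only what a no-JS browser would render - the page content and the tracker function names are made up for illustration:

```python
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Collect text content while ignoring everything inside <script> tags -
    a rough model of what a browser shows with JavaScript disabled."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.text.append(data.strip())

# An invented page: the article text is plain HTML; the tracking is a script.
page = ("<html><body><p>The actual article text.</p>"
        "<script>spyOnMouse();reportScroll();</script></body></html>")

parser = ScriptStripper()
parser.feed(page)
print(parser.text)  # → ['The actual article text.']
```

The content survives; the surveillance payload never runs. That's the whole trade-off of JS-free browsing in one line of output.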


Compartmentalising your web activity combats joined-up tracking without going to extremes. Using numerous different browsers, numerous different email addresses and services, and separate devices for transacting, non-transacting, and offline use if you can, is a great starting point. Keep JavaScript and cookies disabled for random browsing, and you'll find the browsing experience faster, as well as more secure and private.

Avoid logging in as much as possible, and don't forget that every link or button click is information. The fact that you're not filling in a form doesn't mean the site is not gathering data about you. Data mining today records everything you touch, including the scroll bar and your mouse. With new micro-profiling threats arriving all the time - one of the latest being WebAssembly - it's more vital than ever that we at least make our identities plausibly deniable by declining cookies, isolating browsers and staying logged out.

Above all, trust nothing and no one, and greet every online proposition with a reality check. People don't invest $millions building and running services that have zero ROI. If they're running a costly service and you can't see the ROI, it's about 99 to 1 that they're collecting your data, and selling it.
Bob Leggitt
Post author Bob Leggitt is a print-published writer and photographer, digital content innovator, multi-instrumentalist and twice Guitarist of the Year finalist, image manipulation expert, web page designer/programmer, virtual musical instrument builder, "Twitter detective", and author of successful blogs such as Planet Botch, Twirpz and Tape Tardis. | [Twitter] | [Contact Details]