If you thought that privacy, decentralisation or free speech was going to drive the next big social media sensation, you were wrong. Anti-piracy on the other hand... Well...
It’s telling that progress across the different disciplines of technology has been so unequal. Perhaps most telling of all is the gaping chasm between the tech giants’ ability to keep track of a person, and their ability to keep track of stolen content.
Although big tech treats these two realms of progress as though they’re completely different disciplines, they actually use exactly the same principles. So the tools to eliminate online piracy already exist. They’re just rarely used. And the reason they’re so rarely used? Big tech is a pirate. Big tech loves piracy, and it defends its right to host pirated content by persistently derailing proposed anti-piracy laws.
The amount of money the tech giants have spent on lobbying and campaigning to block anti-piracy laws (which they always somehow manage to re-frame as “censorship bills”) is astronomical. So this is not a case of those tech giants simply not caring enough about piracy to stamp it out. It’s a case of those tech giants being ruthlessly committed to facilitating piracy.
Evidently, without that elusive change in the law, an internet which is anti-piracy by design is not going to be built from the top down. But could it be built from the bottom up?
ANTI-PIRACY BY DESIGN
There are two fundamental implementations of anti-piracy by design. They are…
- Ownership verification.
- Managed distribution.
Both of these systems are flawed, and without a web-wide protocol for their use, they’re limited in the protections they can offer. But used in conjunction with each other, over a big enough proportion of the web, they would force the tech giants who currently assist piracy to turn against it. Let’s see how all this works…
OWNERSHIP VERIFICATION
Ownership verification is simple in principle. When a new piece of content is uploaded to a web service, the uploader is given an opportunity to verify themselves as the copyright holder. The service then scans future content uploads, and when the scanner finds a match with the ownership-verified content, the re-poster is prohibited from posting that content as their own.
The social platform Waterfall is already using a system similar to this. In their implementation, the re-post is permitted, but the copyright holder automatically gets credit and an all-important visibility boost. Other implementations could instead prohibit the re-post and allow the re-poster to request permission from the copyright holder – queuing the post until permission is granted. Which option is best would depend on the platform and circumstance.
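To make the mechanics concrete, here’s a minimal sketch of what that upload-time check might look like. It assumes a hypothetical perceptualHash helper (one that survives resizing and re-encoding) and a simple in-memory registry – it is not Waterfall’s, or any other platform’s, actual implementation:

```typescript
// Sketch only. perceptualHash() and the registry shape are assumptions,
// not any real platform's API.
type OwnershipRecord = { ownerId: string; verifiedAt: Date };

const registry = new Map<string, OwnershipRecord>(); // content hash -> verified owner

// Hypothetical helper: a hash that survives resizing, cropping and re-encoding.
declare function perceptualHash(media: Uint8Array): Promise<string>;

async function handleUpload(media: Uint8Array, uploaderId: string, claimsOwnership: boolean) {
  const hash = await perceptualHash(media);
  const existing = registry.get(hash);

  if (existing && existing.ownerId !== uploaderId) {
    // Match with ownership-verified content: block the re-post outright,
    // queue it pending the owner's permission, or allow it with automatic credit.
    return { status: "queued-for-permission", owner: existing.ownerId };
  }

  if (claimsOwnership && !existing) {
    // The first verified upload becomes the reference copy for future scans.
    registry.set(hash, { ownerId: uploaderId, verifiedAt: new Date() });
  }
  return { status: "published" };
}
```

Whether a match results in a block, a permission queue or an automatic credit is a policy decision layered on top of the same scan.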
But there’s an obvious problem: coverage. As soon as you move off the platform that uses the system, or source content from elsewhere, the protection ceases to exist. This could theoretically be solved in one of two ways. Either a platform using ownership verification and scanning could grow to a size where it becomes a dominant market force, or the ownership verification system could launch as an independent protocol that can be used by a whole range of platforms.
The prospect of a small platform with ownership verification rising to become a dominant market force may seem remote, and it’s probably not going to happen if the platform is niched or themed. But with a non-specialised, general use social platform it’s more than feasible, and here’s why…
The system would appeal to one exceptionally important and influential user type. A user type that drives and indeed defines the growth of social media: the celebrity.
Celebrities don’t just face issues with unauthorised content redistribution. They also have a monumental problem with identity theft and misuse. Well-implemented ownership verification and copyright protection scanning would dramatically help reduce the problem of impersonators, because virtually all impersonators require media that’s owned by other people in order to perpetrate their schemes.
Influencers experience the same problem, and would benefit in the same way. And it’s not just impersonators (often scammers) they have to contend with. It’s parody trolls, and even the well-meaning – unofficial “fan pages” who just get it horrifically wrong and re-use the celeb or influencer’s content in an undesirable or damaging way.
A lot of big businesses are desperate for copyright protection tools too. It’s a need that runs right across the top tier of public visibility.
All meaningful growth on social media is driven by people with fans. And celebrities and influencers have armies of fans. Literally millions per person in some cases. So a system that gives those celebs and influencers an easier life is not just going to attract them. It’s also going to migrate their vast audiences.
Okay, but what if those celebs actually want people sharing their content? Not a problem at all. That’s what reblog/retweet-style tools are for. As long as there’s a legitimate way to share, which benefits the celebrity, then both the celeb and the interacting audience are kept happy. Copyright holders are not infuriated by sharing. They’re infuriated by being cut out of the distribution loop, and having their content repurposed so it either benefits someone else, or just fails to benefit them. These are exactly the things that ownership scanning prevents. It does NOT prevent legitimate sharing.
The impact that something like this could have for influential social users is sufficiently profound to fuel a rise to market dominance for the right platform. And that platform probably wouldn’t even need to achieve market dominance before the existing powers realised that they too needed to better protect their most important users. The likes of Facebook and Twitter would obviously not wait until they’d been usurped before taking action.
Ownership verification and protection scanning would, however, take off a lot quicker if it were introduced as a universally accessible protocol, rather than being confined to individual platforms. With lots of smaller platforms using a common ownership protocol, general user expectations would evolve more quickly, and that would put a lot more pressure on the established tech giants to follow suit.
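As a rough illustration of what a “common ownership protocol” could mean in practice, here’s a hypothetical lookup a participating platform might perform before publishing an upload. The endpoint, fields and policy values are invented purely for illustration – no such registry exists today:

```typescript
// Hypothetical shared-protocol lookup. Endpoint, fields and policies are illustrative only.
interface OwnershipClaim {
  contentHash: string;                        // perceptual hash of the media
  ownerHandle: string;                        // verified owner on the shared registry
  policy: "block" | "credit" | "ask-permission";
}

async function lookupOwnership(contentHash: string): Promise<OwnershipClaim | null> {
  const res = await fetch(`https://ownership-registry.example/v1/claims/${contentHash}`);
  if (res.status === 404) return null;        // no verified owner on record
  if (!res.ok) throw new Error(`Registry lookup failed: ${res.status}`);
  return (await res.json()) as OwnershipClaim;
}
```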
As with any system, there would be a danger of misuse. Without any preventative measures, it’s likely that some people would attempt to claim ownership of content they don’t own – especially if the system’s coverage only constitutes a tiny percentage of the overall web. For the system to have any chance of working, all platforms involved would need to take a zero-tolerance approach with abusers. A false claim of ownership results in a ban. Always. Then you’ve got the problem of banned users re-registering and starting again – but no one said this was a watertight plan.
On top of that, you’ve got the issue of false flags, and the inevitable range of technical workarounds that content thieves would employ to “unduplicate” their duplicates. Image distortions and manipulations are already part of content thieves’ toolkit, and if the web upped the stakes, the thieves inevitably would too. But we should also remember that the vast bulk of unauthorised redistribution on social media is not perpetrated by career pirates – it’s perpetrated by ordinary people who think it’s okay because the various platforms’ algorithms tell them it’s okay. Once the various platforms’ algorithms begin to tell the general public that unauthorised redistribution is not okay, the majority of it is going to stop.
MANAGED DISTRIBUTION
Managed distribution tends to work not by identifying the content’s owner, but by identifying its consumer. There are two current buzz theories on managed distribution, and both of them ideally require the consumer to log in. The login requirement is another thing that would help smaller social media platforms to grow.
With the first theory, the consumer’s login identity is embedded into the media they view or download. Name and email address, for example. This in itself is going to deter most people from re-posting that content – because they’d also be posting information which they're unlikely to want publicised. It may also deter them from saving or downloading, which once again wouldn’t necessarily be a bad thing for the hosting platform or site. If consumers have to go back each time they want to view the content, that means more page visits.
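A rough sketch of that first theory in the browser, assuming the platform already knows who is logged in: the canvas simply stamps the viewer’s details onto the photo before it is shown. The viewer object and container ID are placeholders, not any real platform’s code.

```typescript
// Sketch: stamp the logged-in viewer's identity onto the image at view time.
// The viewer object and the "photo-container" element are placeholder assumptions.
async function renderWithViewerId(imageUrl: string, viewer: { name: string; email: string }) {
  const img = new Image();
  img.crossOrigin = "anonymous";
  img.src = imageUrl;
  await img.decode();

  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;

  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  // Visible watermark; a real system might also embed the ID invisibly in the pixels.
  ctx.font = "16px sans-serif";
  ctx.fillStyle = "rgba(255, 255, 255, 0.6)";
  ctx.fillText(`Viewed by ${viewer.name} <${viewer.email}>`, 12, img.height - 12);

  document.getElementById("photo-container")?.appendChild(canvas);
}
```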
However, seeing their personal info embedded into every piece of media they view or download could also deter consumers from using the platform altogether. It hints at a poor privacy regime, and as a privacy-obsessed consumer I’d find it pretty oppressive. What if you view historical material and your ID appears on a photo of Hitler? The scheme sounds great in principle, but once you explore the minutiae it can cause a lot of serious discomfort, and drive away a lot of users.
You also have to budget for the fact that some people on the internet are really not paying attention to what they're doing. They might be drunk, or impaired in other ways. If you're embedding their private info into media content, and they're re-posting it without noticing, it could in some instances put them in danger. The platform may have to take responsibility for that.
Then there’s the potential for confusion among the wider public over whether the person whose ID is embedded into the content is actually the consumer or the owner. The whole thing is pretty cavalier, and there’s an obvious workaround on sites where consumers can use fake names and throwaway email addresses. Nevertheless, on sites where consumers are actually buyers (and hence more rigorously identified), a lot of copyright holders are asking for an embedded user ID option. In case you’re wondering, the sites are currently blanking their requests, which in my view is wise.
The second popular theory on copyright protection through managed distribution is much more viable. It’s similar in concept to the systems that companies like Adobe and Microsoft have employed to stop software piracy. SaaS, or Software as a Service, places the software, or at least some critical components of it, at the provider’s side of an internet connection. So unless the consumer connects with the provider as an identified user, they can’t use the software at all.
Getting this idea to work for content distribution is more difficult. It’s easy enough to serve content via a unique system so that the source file can’t be saved or reproduced without the involvement of the distributing platform. But people can normally still capture and save the visual media that appears on their screens, and particularly when people have large screens, that’s going to render the protection circumventable.
However, just paying cursory attention to the concept has a huge impact on reducing unauthorised redistribution. Instagram, for example, took a SaaS-inspired stance on serving photographs. Instead of publishing the images as standard files, which could be downloaded by users, indexed by search engines and snippeted to other social sites, it chose to publish them via a native JavaScript routine. Essentially, it elected to deliver the images in a format that consumers could not access without Instagram itself. An almost identical concept to SaaS, but with image delivery.
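In spirit, the approach looks something like the sketch below: the page fetches the image bytes itself and paints them onto a canvas, so there’s no plain file URL in the markup for crawlers or snippet previews to lift. This is not Instagram’s actual code – the endpoint and session token are assumptions.

```typescript
// Sketch of script-based image delivery: no <img src="..."> for scrapers to pick up.
// The /media endpoint and the session token are illustrative assumptions.
async function paintProtectedImage(mediaId: string, mount: HTMLElement) {
  const res = await fetch(`/media/${mediaId}`, {
    headers: { Authorization: "Bearer <session-token>" }, // only identified viewers get the bytes
  });
  const bitmap = await createImageBitmap(await res.blob());

  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0);

  mount.appendChild(canvas); // the photo renders, but no standard image file is exposed
}
```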
Yes, at this basic level it’s flawed, because as I mentioned, people can still capture their screens, then save the photo as a file, then upload that file somewhere else. But even taking that obvious workaround into account, the reduction in piracy has been tremendous. Just the fact that Instagram photos don’t automatically appear on Google Images or snippet to Twitter, has cut out much of the “negative osmosis”. As all photographers and image creators know – when the image is the entire post, snippeting is not snippeting. It’s simply republishing. Instagram’s business model was founded on a recognition of this. From the start, there was a very firm:
“NO! We’re a photo site! If you ‘snippet’ our photos, you take our WHOLE posts, and our traffic! We’re going to make it impossible for you to snippet.”
And OMG did that policy work! It made Flickr look like absolute idiots, giving their posts away to Twitter and the like…
“So er... Now you've seen the post shared to Twitter iz u not gonna click through to Flickr?”
“Why on Earth would we? Twitter shows us the photo, duh.”
And merely making it more complicated to save an Instagram image has created a situation where far fewer people can be bothered. If it’s easier just to save the page and revisit than it is to save the image as a file, most people will save the page. That’s unquestionably been another of the key components in Instagram’s growth. There’s an incentive for consumers to go back to Instagram, and that, in turn, motivates the content publishers – because they get more attention.
So could this SaaS-like concept get more sophisticated for content hosts? A slightly more sophisticated trick is to do exactly what Instagram does and serve the image as a script, but then use HTML to overlay a separate blank image on top, in the form of a standard JPEG, with a CSS opacity of zero. Unlike with Instagram’s system, the right click download function still works, so users think they’re downloading and saving the image. Only once they go to view the image they saved do they realise they’ve downloaded a useless white rectangle…
I’ve replicated this above using two JPEG images rather than a JavaScript file and a JPEG, but the effect here on the page is the same. Try to right click and save the photo, and then look at what you’ve actually saved. Why is it ideally better to use an Instagram-style script to deliver the main image? Because if you don’t, scrapers and search engines can still pick up the file. And scrapers and search engines are far and away the biggest redistributors of content. Of course, visitors can still use screen capture and save the image that way, but if they think they’ve already saved the image, and don’t realise until later that they haven’t, you’re throwing an extra layer of hassle into their path.
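If you want to try the overlay yourself, here’s a bare-bones sketch of it. The blank decoy file and container element are placeholders, and as noted, the underlying photo would ideally arrive via a script rather than as a plain JPEG:

```typescript
// Sketch of the decoy overlay: an invisible blank JPEG sits on top of the real photo,
// so right-click "Save image as..." grabs the white rectangle instead.
// The blank.jpg path and the container element are placeholders.
function addDecoyOverlay(container: HTMLElement) {
  container.style.position = "relative";

  const decoy = document.createElement("img");
  decoy.src = "/static/blank.jpg";   // plain white JPEG, same aspect ratio as the photo
  decoy.style.position = "absolute";
  decoy.style.inset = "0";
  decoy.style.width = "100%";
  decoy.style.height = "100%";
  decoy.style.opacity = "0";         // invisible to the eye, but it's what a right-click hits
  container.appendChild(decoy);
}
```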
You can take the SaaS-inspired idea even further by offering a prescribed download function which saves the media as a hyperlink rather than a file. It can be stored with other image or video saves, and even show as a thumbnail in the folder. But when it’s clicked, instead of opening a file, it opens a web page. The media itself displays as normal – in an image viewer or video player. With some thought, this process could be made so similar to that of opening any regular saved image file, that the user would not even notice they didn’t really have the image on their computer or device. Only if they tried to redistribute it would they run into a problem.
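One way that prescribed download could be sketched in the browser: the “download” button hands the user a tiny shortcut file that points back at the platform’s viewer page rather than at the media file itself. The .url format shown is the Windows Internet Shortcut format; the URL and filename are placeholders, and other operating systems would need their own equivalents.

```typescript
// Sketch: the "download" button saves a shortcut back to the hosting page,
// not the media file itself. The URL and filename are placeholders.
function downloadAsLink(viewerPageUrl: string, title: string) {
  const shortcut = `[InternetShortcut]\r\nURL=${viewerPageUrl}\r\n`;
  const blob = new Blob([shortcut], { type: "text/plain" });

  const a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = `${title}.url`;       // double-clicking it opens the platform's viewer page
  a.click();
  URL.revokeObjectURL(a.href);
}
```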
Once again, anyone with a little tech savvy can bypass the protection, but if the delivery system makes the pseudo save-and-reopen process as seamless as ordinary saving and reopening, whilst making ordinary saving much more difficult and inconvenient, the huge majority of the public will quickly submit to the pseudo process.
WILL IT HAPPEN?
One of the biggest causes for optimism is that if copyright protection is seen to be working, without destroying the experience for the average consumer, it is singularly the most powerful gimmick a small social media platform can adopt. Forget all your decentralisation, privacy, “free speech” and NSFW-friendly fanfaring. That attracts the little people. Real protection of intellectual property will attract the big people. Potentially the people with millions of followers.
The platforms can’t trade on that gimmick alone. They have to get everything else right. But Instagram has already shown that giving Google and Twitter the middle finger is far from damaging. We just need someone with a bigger middle finger than Instagram.