Data is the de facto currency of our age. And the data we gather about our lives, health, travel, energy use, commodity consumption, and so on, could be used to plan more sustainable communities, cure diseases, reduce carbon emissions, and more.
But it can also be an asset for private companies to turn into cash. Hardly a week goes by without news of an organisation sharing customer data, or mining it to uncover aspects of user behaviour that seem intrusive or exploitative.
For example, Facebook was in the news recently for allegedly sharing data about emotionally vulnerable teenagers with its advertising partners. With nearly two billion users, Facebook is a personal data superpower, whose ‘citizens’ divulge intimate details about their lives and beliefs.
Most consumers are happy to let companies take data from them in this way and get next to nothing in return – beyond reams of targeted advertising that many of us simply want to switch off, even as it follows us around the internet insisting that we pay attention to it.
Data sharing of this type creates a feedback loop of endless aggregate advantage to the provider and their partners, not to the customer.
So while it’s often claimed that we’re living in a flat, networked society in which we all own and control the means of production (a digital restatement of socialist principles), we may in fact be living in its opposite: a form of extreme, data-based capitalism in which owning and manipulating data about the population is the new land grab, the new gold rush, and the new election influencer.
When Microsoft closed its deal to buy LinkedIn for $26 billion last December, most people thought that it had acquired a social networking platform. But the cash also bought it the personal data of over 440 million people – their contacts, networks, and career histories, and potentially the intellectual property of any blog they write. Since the acquisition, the platform has become noisier and stuffed with so-called ‘advertorial’ content.
Deals like this are among the thousands of reasons why there is a growing movement for consumers to take back control of their data from the large organisations that believe it belongs to them.
The Web’s prime mover Sir Tim Berners-Lee is one of the many thought leaders who think that consumers should be able to license personal data on their own terms – in effect, to turn off the flow and take control of the faucet. When I interviewed him in 2014, he said, “People say, ‘privacy is dead, get over it’. I don’t agree. The idea that privacy is dead is hopeless and sad. We have to build systems that allow for privacy.”
He’s not alone in this view. Anonymous, the hacker collective, says it wants to build a social platform that will be wholly transparent, free of advertising, and will give users complete power over their own data and over how – and where – it is used.
At the same time, more and more companies – even while paying lip service to government surveillance programmes – are pushing people towards encrypted email, messaging, and phones.
This quest for greater privacy will soon have a regulatory angle. In May 2018, Europe’s General Data Protection Regulation (GDPR) will come into force, regardless of Brexit. Not only does it mandate fines of up to four per cent of global annual turnover for breaches of data protection and security, it also puts the idea of user consent front and centre of its provisions.
GDPR makes organisations accountable for their actions, and requires that personal data be collected for “specified, explicit and legitimate purposes”, normally with the “consent of the data subject”. Among the alternative lawful bases it allows are processing that is necessary to “protect the vital interests of the data subject” or “for the performance of a task carried out in the public interest”.
That said, the onus in 2018 will be on citizens actually reading providers’ Ts & Cs.
But the message is clear: in Europe at least, the days of “take, take, take” are coming to an end. And the trend among many technology users is inescapable: while we all share our lives on social media, some of us are going back under cover, back into the silo.
So why is this happening? It’s partly because of the snoopers, and the flawed, binary thinking behind state surveillance – that context-free data can reveal the truth about complex human beings. But it’s mainly because of a certain type of company. Let’s call them ‘vempires’.
These are the companies that see each consumer as a data blood bag. In 2015, for example, the music platform Spotify announced that it reserved the right to take personal files from users’ smartphones: their photographs, videos, and private data. Extraordinary.
But we continue to use Spotify, Pandora, and similar services, in our millions because we like free stuff.
The fact is that few of us actively (or knowingly) invest our personal data in improving society or humanity’s collective future, but that’s mainly because we lack a platform for doing so. In the meantime, we’re happy to simply give it away in return for noise.
What we need is a platform that empowers us to invest our data only in initiatives that we agree with, and which blocks its use anywhere else – even if someone shares personal data without our consent. In short, we need a means of automating ‘I agree’ or ‘I don’t agree’ and wrapping our own terms and conditions around personal data – like a Creative Commons licence that refers to the individual, not just a media file.
A possible solution to this is the personal API, a personal application programming interface, behind which users can place their data, on their terms.
This would give them the right to say to the next vempire to arrive at their window, “All this anonymised data about me, my health, my fitness, my contacts, my life, I license to you – on these terms and conditions. You may use it for this purpose, but not for that purpose, in this industry, but not in that industry.”
Or to say, “This personally identifiable data about me, you may use to do this, but not that. And in return, I want x, y, and z. I want you to donate to this charity. I want this product for free, and I want micropayments for a, b, and c.” And those terms would be embedded in the data itself, and using the data would constitute a legally binding agreement. A form of rights management for citizens’ private data.
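To make the idea concrete, one way to picture such a licence is as a machine-readable policy that travels with the data itself, so that every request for access is checked against the owner’s terms. The sketch below is a minimal illustration only – all class names, fields, and terms are hypothetical, not a reference to any real platform or standard:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the owner's terms travel with the data,
# and every request is checked against them before release.

@dataclass
class DataLicence:
    owner: str
    allowed_purposes: set = field(default_factory=set)    # e.g. {"medical-research"}
    blocked_industries: set = field(default_factory=set)  # e.g. {"advertising"}
    compensation: dict = field(default_factory=dict)      # e.g. {"charity-donation": 1.00}

    def permits(self, purpose: str, industry: str) -> bool:
        """'I agree' / 'I don't agree', automated."""
        return (purpose in self.allowed_purposes
                and industry not in self.blocked_industries)

@dataclass
class LicensedRecord:
    licence: DataLicence
    payload: dict  # the personal data itself

    def request(self, purpose: str, industry: str) -> dict:
        """Release the data only if the owner's terms allow this use."""
        if not self.licence.permits(purpose, industry):
            raise PermissionError(f"Use refused by {self.licence.owner}'s terms")
        return self.payload

record = LicensedRecord(
    licence=DataLicence(owner="alice",
                        allowed_purposes={"medical-research"},
                        blocked_industries={"advertising"}),
    payload={"steps_per_day": 9500},
)

record.request("medical-research", "health")  # granted
# record.request("profiling", "advertising")  # raises PermissionError
```

In a real system the terms would need to be cryptographically bound to the data and legally enforceable, as the text suggests; the sketch only shows the shape of the automated consent check.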
In this way, too, organisations’ Corporate Social Responsibility (CSR) statements and investments would become testable commodities that are actively supported by consenting customers – consent that could be withdrawn with a click by removing their data from company servers.
So who would lose from making data available in this way, and its terms this accountable, this transparent, and this auditable? I would argue no one. Who would benefit? I would argue everyone.
And it would have another repercussion: it would automatically make organisations into responsible data guardians, and push them towards decisions, and towards research, that are actively supported by consumers and citizens. Crowdsourced ethical behaviour, based on citizens’ consent and active choices.
What say you?