The last thing privacy needs is new laws

World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, reaffirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist’s error, calling for unworkable new privacy mechanisms where none are in fact warranted.

The Telegraph reports Berners-Lee as saying “Some people say privacy is dead – get over it. I don’t agree with that. The idea that privacy is dead is hopeless and sad.” He highlighted that people’s participation in potentially beneficial programs like e-health is hampered by a lack of trust, and by a sense that spying online is constant.

Of course he’s right about that. Yet he seems to underestimate the data privacy protections we already have. Instead he envisions “a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me,” he said, according to The Telegraph.

It’s a classic case of being careful what you ask for, in case you get it. What would control over “all data about you” look like? Most of the data about us these days – most of the personal data, aka Personally Identifiable Information (PII) – is collected or created behind our backs, by increasingly sophisticated algorithms. People certainly don’t know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don’t see how ownership would fix the privacy problem.

What could “ownership” of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia, and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.

The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data’s creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA?

What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, it should be restrained in what it does with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what’s going on with their PII, and so on.

Berners-Lee alluded to the privacy threats of Big Data, and he’s absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It’s not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don’t care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook over automatic tag suggestions, in which biometric facial recognition algorithms identified people in photos without consent.
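To make the point concrete, here is a deliberately toy sketch in Python of how a “risk” attribute can be synthesised from incidental behavioural data. The attribute names, weights and threshold are all invented for illustration, not drawn from any real system; the point is only that the output is health-related PII the person never volunteered.

```python
# Toy illustration: an algorithm "collects" health PII indirectly.
# All attributes and weights below are invented for this sketch.

def infer_diabetes_risk(profile: dict) -> float:
    """Return a crude 0..1 risk score derived from incidental data."""
    score = 0.0
    score += 0.3 if profile.get("age", 0) > 50 else 0.0
    score += 0.3 if profile.get("weekly_sugary_purchases", 0) > 10 else 0.0
    score += 0.2 if profile.get("daily_steps", 10_000) < 3_000 else 0.0
    score += 0.2 if profile.get("family_history_flagged", False) else 0.0
    return min(score, 1.0)

# The shopper never filled out a health questionnaire, yet the score below
# is personal health information about them, created algorithmically and
# therefore subject to the same collection principles as data asked for
# directly.
shopper = {"age": 57, "weekly_sugary_purchases": 14, "daily_steps": 2500}
print(f"Inferred risk score: {infer_diabetes_risk(shopper):.2f}")
```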

Technologists often underestimate the power of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. That is surely what regular people expect.