Or Reorientating how engineers think about privacy.
From my chapter Blending the practices of Privacy and Information Security to navigate Contemporary Data Protection Challenges in “Trans-Atlantic Data Privacy Relations as a Challenge for Democracy”, Kloza & Svantesson (editors), in press.
One of the leading efforts to inculcate privacy into engineering practice has been the “Privacy by Design” movement. Commonly abbreviated "PbD", it is a set of guidelines developed in the 1990s by the then privacy commissioner of Ontario, Ann Cavoukian. The movement seeks to embed privacy “into the design specifications of technologies, business practices, and physical infrastructures”. PbD is basically the same good idea as building in security, or building in quality, because retrofitting these things too late in the design lifecycle leads to higher costs* and compromised, sub-optimal outcomes.
Privacy by Design attempts to orientate technologists to privacy with a set of simple maxims:
- 1. Proactive not Reactive; Preventative not Remedial
- 2. Privacy as the Default Setting
- 3. Privacy Embedded into Design
- 4. Full Functionality – Positive-Sum, not Zero-Sum
- 5. End-to-End Security – Full Lifecycle Protection
- 6. Visibility and Transparency – Keep it Open
- 7. Respect for User Privacy – Keep it User-Centric.
PbD is a well-meaning effort, and yet its language comes from a culture quite different from engineering. PbD’s maxims rework classic privacy principles without providing much that’s tangible to working systems designers.
The most problematic aspect of Privacy by Design is its idealism. Politically, PbD is partly a response to the cynicism of national security zealots and the like who tend to see privacy as quaint or threatening. Infamously, NSA security consultant Ed Giorgio was quoted in “The New Yorker” of 21 January 2008 as saying “privacy and security are a zero-sum game”. Of course most privacy advocates (including me) find that proposition truly chilling. And yet PbD’s response is frankly just too cute with its slogan that privacy is a “positive sum game”.
The truth is privacy is full of contradictions and competing interests, and we ought not sugar coat it. For starters, the Collection Limitation principle – which I take to be the cornerstone of privacy – can contradict the security or legal instinct to always retain as much data as possible, in case it proves useful one day. Disclosure Limitation can conflict with usability, because Personal Information may become siloed for privacy’s sake and less freely available to other applications. And above all, Use Limitation can restrict the revenue opportunities that digital entrepreneurs might otherwise see in all the raw material they are privileged to have gathered.
Now, by highlighting these tensions, I do not for a moment suggest that arbitrary interests should override privacy. But I do say it is naive to flatly assert that privacy can be maximised along with any other system objective. It is better that IT designers be made aware of the many trade-offs that privacy can entail, and that they be equipped to deal with real world compromises implied by privacy just as they do with other design requirements. For this is what engineering is all about: resolving conflicting requirements in real world systems.
So a more sophisticated approach than “Privacy by Design” is privacy engineering, in which privacy takes its place within information systems design alongside all the other practical considerations that IT professionals weigh up every day, including usability, security, efficiency, profitability, and cost.
See also my "Getting Started Guide: Privacy Engineering" from Constellation Research.
* Not unrelatedly, I wonder if we should re-examine the claim that retrofitting privacy, security and/or quality after a system has been designed and realised leads to greater cost! Cold hard experience might suggest otherwise. Clearly, a great many organisations persist with bolting on these sorts of features late in the day -- or else advocates wouldn't have to keep telling them not to. And the Minimum Viable Product movement is almost a license to defer quality and other non-essential considerations. All businesses are cost conscious, right? So averaged across a great many projects over the long term, could it be that businesses have in fact settled on the most cost effective timing of security engineering, and it's not as politically correct as we'd like?!
Last month, over September 26-27, I attended a US government workshop on The Use of Blockchain in Healthcare and Research, organised by the Department of Health & Human Services Office of the National Coordinator (ONC) and hosted at NIST headquarters in Gaithersburg, Maryland. The workshop showcased a number of winning entries from ONC's Blockchain Challenge, and brought together a number of experts and practitioners from NIST and the Department of Homeland Security.
I presented an invited paper "Blockchain's Challenges in Real Life" (PDF) alongside other new research by Mance Harmon from Ping Identity, and Drummond Reed from Respect Network. All the workshop presentations, the Blockchain Challenge winners' papers and a number of the unsuccessful submissions are available on the ONC website. You will find contributions from major computer companies and consultancies, leading medical schools and universities, and a number of unaffiliated researchers.
I also sat on a panel session about identity innovation, joining entrepreneurs from Digital Bazaar, Factom, Respect Network, and XCELERATE, all of which are conducting R&D projects funded by the DHS Science and Technology division.
Around the same time as the workshop, I happened to finalise two new Constellation Research papers, on security and R&D practices for blockchain technologies. And that was timely, because I am afraid that once again, I have immersed myself in some of the most current blockchain thinking, only to find that key pieces of the puzzle are still missing.
Disclosure: I traveled to the Blockchain in Healthcare workshop as a guest of ONC, which paid for my transport and accommodation.
Three observations from the Workshop
There were two things I just did not get as I read the winning Blockchain Challenge papers and listened to the presentations. And I observe that there is one crucial element that most of the proposals are missing.
Firstly, one of the most common themes across all of the papers was interoperability. A great challenge in e-health is indeed interoperability. Disparate health systems speak different languages, using different codes for the same medical procedures. Adoption of new standard terminologies and messaging standards, like HL-7 and ICD, is infamously slow, often taking a decade or longer. Large clinical systems are notoriously complex to implement, so along the way they invariably undergo major customisation, which makes each installation peculiar to its setting, and resistant to interfacing with other systems.
In the USA, Health Information Exchanges (HIEs) have been a common response to these problems, the idea being that an intermediary switching system can broker understanding between local e-health programs. But as anyone in the industry knows, HIEs have been easier said than done, to say the least.
According to many of the ONC challenge papers, blockchain is supposed to bring a breakthrough, yet no one has explained how a ledger will make the semantics of all these e-health silos suddenly compatible. Blockchain is a very specific protocol that addresses the order of entries in a distributed ledger, to prevent Double Spend without an administrator. Nothing about blockchain's fundamentals relates to the contents of messages, healthcare semantics, medical codes and so on. It just doesn't "do" interoperability! The complexity in healthcare is intrinsic to the subject matter; it cannot be willed away with any new storage technology.
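The point can be made concrete with a toy sketch (hypothetical code, not Bitcoin's actual data structures): a hash-chained ledger protects the order and integrity of its entries, but the payloads are opaque bytes to it. Two entries written in incompatible clinical dialects are both recorded without complaint; nothing in the chain reconciles their semantics.

```python
import hashlib
import json

def add_block(chain, payload):
    """Append an entry whose order and integrity are protected by hashing.
    The payload itself is completely opaque to the ledger."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "payload": payload}
    # Hash covers index, prev_hash and payload; the "hash" field is added after.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return block

chain = []
# Two "health" records in incompatible local dialects -- the ledger
# happily records both; it cannot reconcile their meanings.
add_block(chain, '{"proc_code": "HL7:XYZ", "note": "..."}')
add_block(chain, '{"icd10": "E11.9"}')

# What the chain *does* guarantee: each entry commits to its predecessor.
for i in range(1, len(chain)):
    assert chain[i]["prev_hash"] == chain[i - 1]["hash"]
```

The ledger verifies ordering and tamper-evidence, and nothing more; the interoperability problem sits entirely in the payloads it never inspects.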
The second thing I just didn't get about the workshop was the idea that blockchain will fix healthcare information silos. Several speakers stressed the problem that data is fragmented, concentrated in local repositories, and hard to find when needed. All true, but I don't see what blockchain can do about this. A consensus was reached at the workshop that personal information and Protected Health Information (PHI) should not be stored on the blockchain in any significant amounts (not just because of its sensitivity but also the sheer volume of electronic health records and images in particular). So if we're agreed that the blockchain could only hold pointers to health data, what difference can it make to the current complex of record systems?
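It is worth spelling out what "pointers on the blockchain" amounts to in practice (a minimal sketch with hypothetical names and structures): the ledger holds only record identifiers and digests, so it can attest that an off-chain record is unaltered, while storage, retrieval, interpretation and integration all remain with the existing silos.

```python
import hashlib

# Off-chain repository: the existing record system still holds the PHI.
phi_store = {}
ledger = []  # "on-chain": only identifiers and digests, never the PHI itself

def register_record(record_id, record_bytes):
    """Keep the record in the silo; anchor only its digest on the ledger."""
    phi_store[record_id] = record_bytes
    digest = hashlib.sha256(record_bytes).hexdigest()
    ledger.append({"record_id": record_id, "sha256": digest})
    return digest

def verify_record(record_id):
    """The ledger can attest a record is unaltered; fetching and
    interpreting it is still the silo's job."""
    digest = hashlib.sha256(phi_store[record_id]).hexdigest()
    return any(e["record_id"] == record_id and e["sha256"] == digest
               for e in ledger)

register_record("mrn-001", b"<large imaging study, kept off-chain>")
assert verify_record("mrn-001")
```

Useful for integrity, certainly, but the fragmentation problem is untouched: the data is exactly as siloed as before.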
And my third problem at the workshop was the stark omission of key management. This is the central administrative challenge in any security system: getting the right cryptographic keys and credentials into the right hands, so all parties can be sure who they are dealing with. The genius of the original Bitcoin blockchain is that it allows people to exchange guaranteed value without needing to know anything about each other; it dispenses with key management altogether, and may be unique in the history of security for doing so (see also Blockchain has no meaning). But when we do need to know who's who in a health system – to be certain that various users really are authorised medicos, researchers, insurers or patients – then key management must return to the mix. And then things get complicated, much more complicated than the utopian setting of Bitcoin.
Moreover, healthcare is hierarchical. Inherent to the system are management structures, authorizations, credentialing bodies, quality assurance and audits – all the things that blockchain's creator Satoshi Nakamoto expressly tried to get rid of. As I explained in my workshop speech, if a blockchain deployment still has to involve third parties, then the benefits of the algorithm are lost. So said Nakamoto him/herself!
In my view, most blockchain for healthcare projects will discover, sooner or later, that once the necessary key management arrangements are taken care of, their choice of distributed ledger technology becomes inconsequential.
New Constellation Research on Blockchain Technologies
Security for blockchains and Distributed Ledger Technologies (DLTs) has evolved quickly. As soon as interest in blockchain grew past crypto-currency into mainstream business applications, it became apparent that the core ledger would need to be augmented with permissions for access control, and encryption for confidentiality. But what few people appreciate is that these measures conflict with the rationale of the original blockchain algorithm, which was expressly meant to dispel administration layers. The first of my new papers looks at these tensions, examines what they mean for public and private blockchain systems, and paints a picture of third generation DLTs.
The uncomfortable marriage of ad hoc security and the early blockchain is indicative of a broader problem I've written about many times: too much blockchain "innovation" is proceeding with insufficient rigor. Which brings us to the second of my new papers. In the rush to apply blockchain to broader payments and real world assets, few entrepreneurs have been clear and precise about the problems they think they’re solving. If the R&D is not properly grounded, then the resulting solutions will be weak and will ultimately fail in the market. It must be appreciated that the original blockchain was only a prototype. Great care needs to be taken to learn from it and more rigorously adapt fast-evolving DLTs to enterprise needs.
Constellation ShortList™ for Distributed Ledger Technologies Labs
Finally, Constellation Research has launched a new product, the Constellation ShortList™. These are punchy lists by our analysts of leading technologies in dozens of different categories, which will each be refreshed on a short cycle. The objective is to help buyers of technology when choosing offerings in new areas.
My Constellation ShortList™ for blockchain-related solution providers is now available here.
For the past few years, a crucial case has been playing out in Australia's legal system over the treatment of metadata in privacy law. The next stanza is due to be written soon in the Federal Court.
It all began when a journalist with a keen interest in surveillance, Ben Grubb, wanted to understand the breadth and depth of metadata, and so requested that mobile network operator Telstra provide him a copy of his call records. Grubb thought to exercise his rights to access Personal Information under the Privacy Act. Telstra held back a lot of Grubb's call data, arguing that metadata is not Personal Information and is not subject to the access principle. Grubb appealed to the Australian Privacy Commissioner, who ruled that metadata is identifiable and hence represents Personal Information. Telstra took their case to the Administrative Appeals Tribunal, which found in favor of Telstra, with a surprising interpretation of "Personal Information". And the Commissioner then appealed to the next legal authority up the line.
At yesterday's launch of Privacy Awareness Week in Sydney, the Privacy Commissioner Timothy Pilgrim informed us that the full bench of the Federal Court is due to consider the case in August. This could be significant for data privacy law worldwide, for it all goes to the reach of these sorts of regulations.
I always thought the nuance in Personal Information was in the question of "identifiability" -- which could be contested case by case -- and those good old ambiguous legal modifiers like 'reasonably' or 'readily'. So it was a great surprise that the Administrative Appeals Tribunal, in overruling the Privacy Commissioner in Ben Grubb v Telstra, was exercised instead by the meaning of the word "about".
Recall that the Privacy Act (as amended in 2012) defines Personal Information as:
- "Information or an opinion about an identified individual, or an individual who is reasonably identifiable: (a) whether the information or opinion is true or not; and (b) whether the information or opinion is recorded in a material form or not."
The original question at the heart of Grubb v Telstra was whether mobile phone call metadata falls under this definition. Commissioner Pilgrim showed that call metadata is identifiable to the caller (especially identifiable by the phone company itself, which keeps extensive records linking metadata to customer records) and therefore counts as Personal Information.
When it reviewed the case, the tribunal agreed with Pilgrim that the metadata was identifiable, but in a surprise twist, found that the metadata is not actually about Ben Grubb but instead is about the services provided to him.
- Once his call or message was transmitted from the first cell that received it from his mobile device, the [metadata] that was generated was directed to delivering the call or message to its intended recipient. That data is no longer about Mr Grubb or the fact that he made a call or sent a message or about the number or address to which he sent it. It is not about the content of the call or the message ... It is information about the service it provides to Mr Grubb but not about him. See AATA 991 (18 December 2015) paragraph 112.
To me it's passing strange that information about calls made by a person is not also regarded as being about that person. Can information not be about more than one thing, namely about a customer's services and the customer?
Think about what metadata can be used for, and how broadly-framed privacy laws are meant to stem abuse. If Ben Grubb was found, for example, to have repeatedly called the same Indian takeaway shop, would we not infer something about him and his taste for Indian food? Even if he called the takeaway shop just once, we might still conclude something about him, even if the sample size is small. We might deduce he doesn't like Indian (remember that in Australian law, Personal Information doesn't necessarily have to be correct).
By the AAT's logic, a doctor's appointment book would not represent any Personal Information about her patients but only information about the services she has delivered to them. But in fact the appointment list of an oncologist, for instance, would tell us a lot about people's cancer.
Given the many ways that metadata can invade our privacy (not to mention that people may be killed based on metadata) it's important that the definition of Personal Information be broad, and that it has a low threshold. Any amount of metadata tells us something about the person.
I appreciate that the 'spirit of the law' is not always what matters, but let's compare the definition of Personal Information in Australia with corresponding concepts elsewhere (see more detail beneath). In the USA, Personally Identifiable Information is any data that may "distinguish" an individual; in the UK, Personal Data is anything that "relates" to an individual; in Germany, it is anything "concerning" someone. Clearly the intent is consistent worldwide. If data can be linked to a person, then it comes under data privacy law.
Which is how it should be. Technology neutral privacy law is framed broadly in the interests of consumer protection. I hope the Federal Court in drilling into the definition of Personal Information upholds what the Privacy Act is for.
Personal Information definitions around the world.
Personal Information, Personal Data and Personally Identifiable Information are variously and more or less equivalently defined as follows (references are hyperlinked in the names of each country):
- data which relate to a living individual who can be identified
- any information concerning the personal or material circumstances of an identified or identifiable individual
- information about an identifiable individual
- information which can be used to distinguish or trace an individual's identity ...
- information or an opinion ... about an identified individual, or an individual who is reasonably identifiable.
I was talking with government identity strategists earlier this week. We were circling (yet again) definitions of identity and attributes, and revisiting the reasonable idea that digital identities are "unique in a context". Regular readers will know I'm very interested in context. But in the same session we were discussing the public's understandable anxiety about national ID schemes. And I had a little epiphany that the word "unique" and the very idea of it may be unhelpful. I wonder if we could avoid using the word "uniqueness" wherever we can.
The link from uniqueness to troublesome national identity is not just perception; there is a real tendency for identity and access management (IDAM) systems to over-identify, with an obvious privacy penalty. Security professionals feel instinctively that the more they know about people, the more secure we all will be.
Whenever we think uniqueness is important, I wonder if there are really other more precise objectives that apply? Is "singularity" a better word for the property we're looking for? Or the mouthful "non-ambiguity"? In different use cases, what we really need to know can vary:
- Is the person (or entity) accessing the service the same as last time?
- Is the person exercising a credential cleared to use it? (Delegation of digital identity actually makes "uniqueness" moot.)
- Does the Relying Party (RP) know the user "well enough" for the RP's purposes? That doesn't always mean uniquely.
I observe that when IDAM schemes come loaded with references to uniqueness, it tends to bias the way RPs do their identification and risk management designs. There is an expectation that uniqueness is important no matter what. Yet it is emerging that much fraud (most fraud?) exploits weaknesses at transaction time, not enrollment time: even if you are identified uniquely, you can still get defrauded by an attacker who takes over or bypasses your authenticator. So uniqueness in and of itself doesn't always help.
If people do want to use the word "unique" then they should have the discipline to always qualify it, as mentioned, as "unique in a context". But I have to say that "unique in a context" is not "unique".
Finally it's worth remembering that the word has long been degraded by the biometrics industry with their habit of calling most any biological trait "unique". There's a sad lack of precision here. No biometric as measured is ever unique! Every mode, even iris, has a non-zero False Match Rate.
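The arithmetic is worth spelling out. With a per-comparison False Match Rate f, the chance of at least one false match across N independent comparisons is 1 − (1 − f)^N, which climbs towards certainty as galleries grow. The figures below are illustrative assumptions, not any vendor's claimed rates:

```python
def p_false_match(fmr, n):
    """Probability of at least one false match in n independent comparisons."""
    return 1 - (1 - fmr) ** n

fmr = 1e-6  # an optimistic, assumed single-comparison False Match Rate
for n in (1_000, 100_000, 10_000_000):
    # With n = 10 million, the probability is effectively 1.
    print(f"N = {n:>10,}: P(at least one false match) = {p_false_match(fmr, n):.4f}")
```

Even a one-in-a-million per-comparison rate becomes a near-certain false match once a national-scale database is searched, which is precisely why "unique" is the wrong word.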
What's in a word? A lot! I'd like to see more rigorous use of the word "unique". At least let's be aware of what it means subliminally to the people we're talking with - be they technical or otherwise. With the word bandied around so much, engineers can tend to think uniqueness is always a designed objective, and laypeople can presume that every authentication scheme is out to fingerprint them. Literally.
World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist's error, calling for unworkable new privacy mechanisms where none in fact are warranted.
The Telegraph reports Berners-Lee as saying "Some people say privacy is dead – get over it. I don't agree with that. The idea that privacy is dead is hopeless and sad." He highlighted that peoples' participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant.
Of course he's right about that. Yet he seems to underestimate the data privacy protections we already have. Instead, according to The Telegraph, he envisions "a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me".
It's a classic case of being careful what you ask for, in case you get it. What would control over "all data about you" look like? Most of the data about us these days - most of the personal data, aka Personally Identifiable Information (PII) - is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don't know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don't see that ownership could fix a privacy problem.
What could "ownership" of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.
The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data's creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA?
What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what's going on with their PII, and so on.
Berners-Lee alluded to the privacy threats of Big Data, and he's absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It's not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don't care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent.
Technologists often under-estimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect.
For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement, negotiated between the US Department of Commerce and the European Commission, let companies self-declare broad compliance with European data protection rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable that the arrangement would eventually be annulled, as it was last October.
With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".
The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.
The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling is complying with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.
But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.
Basic data privacy laws the world over require organisations to exercise restraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.
Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic.
For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted they had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what they did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wire tap law, but its application to the Internet is complex. And so the legal action has taken years and years, and it's still not resolved.
I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong.
Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps. Over the years, many of Facebook's business practices, for instance, have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long running dispute over biometric photo tagging. When Facebook generates tag suggestions, what they're doing is running facial recognition algorithms over photos in their vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people, without consent (and without restraint as to what might be done next with that new PII), seems to be unlawful under the Collection Limitation and Usage Limitation principles.
In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".
The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy.
The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.
The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.
My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see "Privacy Compliance Problems for Facebook", IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.
Here’s a recap of our analysis.
One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.
New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:
- Import contacts from your account and store them on Facebook's servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.
This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.
A key point under international privacy regulations is that this importing of contacts represents an indirect collection of the PII of others (people who happen to be in a member's email address book), without their knowledge, let alone their authorisation.
By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.
It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details from address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people whose details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.
But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.
A big part of my research agenda in the Digital Safety theme at Constellation is privacy. And what a vexed topic it is! It's hard even to know how to talk about privacy. For many years, folks have covered privacy in more or less academic terms, drawing on sociology, politics and pop psychology, joining privacy to human rights, and crafting various new legal models.
Meanwhile the data breaches get worse, and most businesses have just bumped along.
When you think about it, it’s obvious really: there’s no such thing as perfect privacy. The real question is not about ‘fundamental human rights’ versus business, but rather, how can we optimise a swarm of competing interests around the value of information?
Privacy is emerging as one of the most critical and strategic of our information assets. If we treat privacy as an asset, instead of a burden, businesses can start to cut through this tough topic.
But here’s an urgent issue. A recent regulatory development means privacy may just stop a lot of business getting done. It's the European Court of Justice decision to shut down the US-EU Safe Harbor arrangement.
The privacy Safe Harbor was a work-around negotiated by the Federal Trade Commission, allowing companies to send personal data from Europe into the US.
But the Safe Harbor is no more. It's been ruled unlawful. So it’s a big, big problem for European operations, many multinationals, and especially US cloud service providers.
At Constellation we've researched cloud geography and previously identified competitive opportunities for service providers to differentiate and compete on privacy. But now this is an urgent issue.
It's time American businesses stopped getting caught out by global privacy rulings. There shouldn't be too many surprises here, if you understand what data protection means internationally. Even the infamous "Right To Be Forgotten" ruling on Google’s search engine – which strikes so many technologists as counterintuitive – was a rational and even predictable outcome of decades-old data privacy law.
The leading edge of privacy is all about Big Data. And we ain't seen nothin' yet!
Look at artificial intelligence, Watson Health, intelligent personal assistants, hackable cars, and the Internet of Everything where everything is instrumented, and you see information assets multiplying exponentially. Privacy is actually just one part of this. It’s another dimension of information, one that can add value, but not in a neat linear way. The interplay of privacy, utility, usability, efficiency, efficacy, security, scalability and so on is incredibly complex.
The broader issue is Digital Safety: safety for your customers, and safety for your business.
A new effort dubbed Project Enigma "guarantees" us privacy, by way of a certain technology. Never mind that Enigma's "magic" (their words) comes from the blockchain and that it's riddled with assumptions; the very idea of technology-based perfection in privacy is profoundly misguided.
Enigma is not alone; the vast majority of 'Privacy Enhancing Technologies' (PETs) are in fact secrecy or anonymity solutions. Anonymity is a blunt and fragile tool for privacy: if the encryption, for instance, is broken, you still need the rule of law to stem abuse. Why do people still conflate privacy and anonymity? Plainly, privacy is the protection you need when your affairs are not secret.
In any event, few people need or want to live underground. We actually want merchants and institutions and employers and doctors to know us in reasonable detail, but we insist they exercise restraint in what they do with that knowledge.
Consider a utopian architecture where things could be made totally secret between you and a correspondent. How would you choose to share something with more than one party, like a health record, or a party invitation? How would you delegate someone to share something with others on your behalf? How would you withdraw permissions? How would it work in a heterogeneous IT environment? And above all, how would you control all the personal information created about you behind your back, unseen, beyond your reach?
Privacy is about restraint. It's less about what we do with someone’s personal information than what we don’t do. So it’s more political than technological. Privacy can only really be managed through rules. Of course rules and enforcement are imperfect, but let’s not be utopian about privacy. Just as there is no such thing as absolute security, there is no perfect privacy either.
Posted in Privacy
For 35 years now, a body of data protection jurisprudence has been built on top of the original OECD Privacy Principles. The most elaborate and energetically enforced privacy regulations are in Europe (although well over 100 countries have privacy laws at last count). By and large, the European privacy regime is welcomed by the roughly 700 million citizens whose interests it protects.
Over the years, this legal machinery has produced results that occasionally surprise the rest of the world. Among these was the "Right To Be Forgotten", a ruling of the European Court of Justice (ECJ) which requires web search operators in some cases to block material that is inaccurate, irrelevant or excessive. And this week, the ECJ determined that the U.S. "Safe Harbor" arrangement (a set of pragmatic work-arounds that have permitted the import of personal information from Europe by American companies) is invalid.
These strike me as entirely logical outcomes of established technology-neutral privacy law. The Right To Be Forgotten simply treats search results as synthetic personal information, collected algorithmically, and applies regular privacy principles: if a business collects personal information, then lawful limits apply no matter how it's collected. And the self-regulated Safe Harbor was found to not provide the strength of safeguards that Europeans have come to expect. Its inadequacies are old news; action by the court has been a long time coming.
In parallel with steadily developing privacy law, an online business ecosystem has evolved, centred on the U.S. and based on the limitless resource that is information. Fabulous products, services and unprecedented economic success have flowed. But the digital rush (like gold and oil rushes before it) has brought calamity. A shaken American populace, subject to daily breaches, spying and exploitation, is left wondering who and what will ever keep them safe in cyberspace.
So it's honestly a mystery to me why every European privacy advance is met with such reflexive condemnation in America.
The OECD Privacy Principles safeguard individuals by controlling the flow of information about them. In the decades since the principles were framed, digital technologies and business models have radically expanded how information is created and how it moves. Personal information is now produced as if by magic (by wizards who make billions by their tricks). But the basic privacy principles are steadfastly the same, and are manifestly more important than ever. You know, that's what good laws are like.
A huge proportion of the American public would cheer for better data protection. We all know they deserve it. If American institutions had a better track record of respecting and protecting the data commons, then they'd be entitled to bluster about European privacy. But as things stand in Silicon Valley and Washington, moral outrage should be directed at the businesses and governments who sit on their hands over data breaches and surveillance, instead of those who do something about it.