I’ve been a critic of Blockchain. Frankly I’ve never seen such a massed rush of blood to the head for a new technology. Breathless books are being churned out about “trust infrastructure” and an “Internet of Value”. They say Blockchain will keep politicians and business people honest, and enable “billions of excluded people to enter the global economy”.
Most pundits overlook the simple fact that Blockchain only does one thing: it lets you move Bitcoin (a digital bearer token) from one account to another without an umpire. And it doesn’t even do that very well, for the Proof of Work algorithm is stupendously inefficient. Blockchain can't magically make merchants keep up their side of a bargain. Surprise! You can still get ripped off paying with Bitcoin. Blockchain simply doesn’t do what the futurists think it does. In their hot flushes, they tend to be caught in a limbo between the real possibilities of distributed consensus today and a future that no one is seeing clearly.
But Blockchain does solve what was thought to be an impossible problem, and in the right hands, that insight can convert to real innovation. I’m happy to see some safe pairs of hands now emerging in the Blockchain storm.
One example is an investment being made by Ping Identity in Swirlds and its new “hashgraph” distributed consensus platform. Hashgraph has been designed from the ground up to deliver many of Blockchain’s vital properties (consensus on the order of events, and redundancy) in a far more efficient and robust manner.
And what is Ping doing with this platform? Well, they’re not rushing out with vague promises to manufacture "trust"; instead they’re taking baby steps on real problems in identity management. For starters, they’re applying the new hashgraph platform to Distributed Session Management (DSM). This is the challenge of verifiably shutting down all of a user’s multiple log-on sessions around the web when they take a break, suffer a hack, or lose their job. It's one of the great headaches of enterprise identity administration and is exploited in a great many cyberattacks.
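To make the problem concrete, here is a minimal sketch in Python (with invented names; this is not Ping's or Swirlds' actual design) of why ordered consensus helps with session revocation: if every relying party applies the same totally ordered event log, then once a logout event is ordered, every party agrees the session is dead.

```python
# Toy sketch of distributed session revocation. Every relying party applies
# the same totally ordered event log; once a "logout" event is ordered,
# all parties agree the session is terminated.

class RelyingParty:
    def __init__(self, name):
        self.name = name
        self.active = set()          # session ids this party considers live

    def apply(self, event):
        kind, session_id = event
        if kind == "login":
            self.active.add(session_id)
        elif kind == "logout":
            self.active.discard(session_id)

# A consensus platform (hashgraph, Raft, ...) would guarantee every party
# sees the same events in the same order; here we simply share one list.
log = [("login", "alice-1"), ("login", "alice-2"), ("logout", "alice-1")]

parties = [RelyingParty("app-a"), RelyingParty("app-b")]
for event in log:
    for p in parties:
        p.apply(event)

# Every party converges on the same view: alice-1 is gone everywhere.
assert all(p.active == {"alice-2"} for p in parties)
```

The hard part, of course, is the guarantee that all parties really do see the same ordered log; that is precisely what the consensus layer provides.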
Ping’s identity architects have carefully set out the problem they’re trying to solve, why it’s hard, and how existing approaches don’t deliver the desired security properties for session management. They then evaluated a number of consensus approaches (not just Blockchain but also Paxos and Raft) and discussed their limitations. The Ping team landed on hashgraph, which appears to meet their needs and looks like it can deliver a range of advanced features as well.
In my view, Ping Identity’s work is the very model of mature security design. It’s an example of the care and attention to detail that other innovators should follow.
Swirlds founder Dr Leemon Baird will present hashgraph in more detail at the Cloud Identity Summit in New Orleans tomorrow (June 7th).
In "We are hopelessly hooked" (New York Review of Books, February 25), political historian Jacob Weisberg canvasses the social impact of digital technology. He describes mobile and social media as “self-depleting and antisocial” but I would prefer different-social not merely for the vernacular but because the new media's sadder side is a lot like what's gone before.
In reviewing four recent contributions to the field - from Sherry Turkle, Joseph Reagle and Nir Eyal - Weisberg dwells in various ways on the twee dichotomy of experience online and off. For many of us, the distinction between digital and "IRL" (the sardonic abbreviation of "in real life") is becoming entirely arbitrary, which I like to show through an anecdote.
I was a mid-career technology researcher and management consultant when I joined Twitter in 2009. It quickly supplanted all my traditional newsfeeds and bulletin boards, by connecting me to individuals I came to trust to pass on what really mattered. More slowly, I curated my circles, built up a following, and came to enjoy the sort of recognition that would ordinarily require regular face-to-face contact, which was never affordable from far-flung Australia. By 2013 I had made it as a member of the “identerati” – a loose international community of digital identity specialists. Thus, on my first trip to the US in many years, I scored a cherished invitation to a private pre-conference party with 50 or so of these leaders.
On the night, as I made my way through unfamiliar San Francisco streets, I had butterflies. I had met just one of my virtual colleagues face-to-face. How would I be received “IRL”? The answer turned out to be: effortlessly. Not one person asked the obvious question – Steve, tell us about yourself! – for everyone knew me already. And this surprising ease wasn’t just about skipping formalities; I found we had genuine intimacy from years of sharing and caring, all on Twitter.
Weisberg quotes Joseph Reagle in "Reading the Comments..." looking for “intimate serendipity” in successful online communities. It seems both authors are overlooking how serendipity catalyses all human relationships. It’s always something random that turns acquaintances into friends. And happy accidents may be more frequent online, not in spite of all the noise but because of it. We all live for chance happenings, and the much-derided Fear Of Missing Out is not specific to kids nor the Internet. Down the generations, FOMO has always kept teenagers up past their bedtime; but it’s also why we grown-ups outstay our welcome at dinner parties and hang out at dreary corporate banquets.
Weisberg considers Twitter’s decay into anarchy and despair to be inevitable, and he may be right, but is it simply for want of supervision? We know sudden social decay all too well; just think of the terribly real-life “Lord of the Flies”.
Sound moral bearings are set by good parents, good teachers, and – if we’re lucky – good peers. At this point in history, parents and teachers are famously less adept than their charges in the new social medium, but this will change. Digital decency will be better impressed on kids when all their important role models are online.
It takes a village to raise a child. The main problem today is that virtual villages are still at version 1.0.
We all know that digital transformation is imminent, but getting there is far from easy. The digital journey is fraught with challenges, not least of which is customer access. "Online" is not what it used to be; the online world by many measures is bigger than the “real world”, and it’s certainly not just a special corner of a network we occasionally log into. Many customers spend a substantial part of their lives online. The very word "online" is losing its meaning, with offline becoming a very unusual state. So enterprises are finding they need to totally rethink customer identity, bringing together the perspectives of the CTO (for risk management and engineering) and the CMO (for the voice of the customer).
Consider this. The customer experience of online identity was set in concrete in the 1960s when information technology meant mainframes and computers only sat in “laboratories”. That was when we had the first network logon. The username and password was designed by sys admins for sys admins.
Passwords were never meant to be easy. Ease of use was irrelevant to system administrators; everything about their job was hard, and if they had to manage dozens of account identifiers, so be it. The security of a password depends on it being hard to remember and therefore, in a sense, hard to use. The efficacy of a password is in fact inversely proportional to its ease of use! Isn't that a unique property in all consumer technology?
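The inverse relationship can be made concrete with a little arithmetic. A rough measure of password strength is entropy: length times log2 of the alphabet size. A hard-to-remember random string scores far higher than a memorable dictionary word:

```python
import math

# Rough entropy (bits) of a password drawn uniformly at random:
# entropy = length * log2(alphabet size).

def entropy_bits(length, alphabet_size):
    return length * math.log2(alphabet_size)

# 12 random printable-ASCII characters: hard to remember, strong.
random_pw = entropy_bits(12, 94)          # ~78.7 bits

# One word picked from a 50,000-word dictionary: easy to remember, weak.
dictionary_word = math.log2(50_000)       # ~15.6 bits

# The memorable choice is worth less than a quarter of the random one.
assert random_pw > 4 * dictionary_word
```

The harder a password is to remember (and hence to use), the more entropy it tends to carry, which is exactly the perverse trade-off described above.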
The tragedy is that the same access paradigm has been inherited from the mainframe era and passed right on through the Age of the PC in the 1980s, to the Internet in the 2000s. Before we knew it, we all turned into heavy duty “computer” users. The Personal Computer was always regarded as a miniaturized mainframe, with a graphical user interface layered over one or more arcane operating systems from which consumers never really escaped.
But now all devices are computers. Famously, a phone today is more powerful than all of NASA’s 1969 moon landing IT put together. And the user experience of “computing” has finally changed, and radically so. Few people ever touch an operating system anymore. The whole UX is at the app level. What people know now is all tiles and icons, spoken commands, and gestures. Swipe, drag, tap, flick.
Identity management is probably the last facet of IT to be dragged out of the mainframe era. It's all thanks to mobility. We don’t "log on" anymore; we unlock our device. Occasionally we might be asked to confirm who we are before we do something risky, like look up a health record or make a larger payment. The engineer might call it “trust elevation” or some such, but to the user it feels like a reassuring double check.
We might even stop talking about “Two Factor Authentication” now the mobile is so ubiquitous. The phone is your second factor now, a constant part of your life, hardly ever out of sight, and instantly noticed if lost or stolen. And under the covers, mobile devices can make use of many other signals – history, location, activity, behaviour – to effect continuous or ambient authentication, and look out for misuse.
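As a hypothetical sketch (the signals and weights below are invented, not any vendor's actual algorithm), ambient authentication can be thought of as combining weak signals into a risk score, and demanding step-up authentication only when the score crosses a threshold:

```python
# Hypothetical ambient-authentication sketch: combine weak device signals
# into one risk score, and only ask for step-up authentication (a PIN, a
# fingerprint) when the score crosses a threshold. Weights are invented.

SIGNAL_WEIGHTS = {
    "unfamiliar_location": 0.4,
    "new_network": 0.2,
    "unusual_hours": 0.2,
    "atypical_typing_rhythm": 0.3,
}

def risk_score(signals):
    return sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS)

def requires_step_up(signals, threshold=0.5):
    return risk_score(signals) >= threshold

# A single weak signal passes silently; a combination triggers a check.
assert not requires_step_up(["new_network"])                      # 0.2
assert requires_step_up(["unfamiliar_location", "unusual_hours"]) # 0.6
```

The user only ever sees the occasional double check; the continuous scoring runs under the covers, which is the point of the paragraph above.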
So the user experience of identity per se is melting away. We simply click on an app within an activated device and things happen. The authentication UX has been dictated for decades by technologists, but now, for the first time, the CTO and the CMO are on the same page when it comes to customer identity.
To explore these crucial trends, Ping Identity is hosting a webinar on June 2, Consumerization Killed the Identity Paradigm. To learn more about customer identity and how to implement it successfully in your enterprise, please join me and Ping Identity’s CTO Patrick Harding and CMO Brian Bell.
For the past few years, a crucial case has been playing out in Australia's legal system over the treatment of metadata in privacy law. The next stanza is due to be written soon in the Federal Court.
It all began when a journalist with a keen interest in surveillance, Ben Grubb, wanted to understand the breadth and depth of metadata, and so requested that mobile network operator Telstra provide him a copy of his call records. Grubb thought to exercise his rights to access Personal Information under the Privacy Act. Telstra held back a lot of Grubb's call data, arguing that metadata is not Personal Information and is not subject to the access principle. Grubb appealed to the Australian Privacy Commissioner, who ruled that metadata is identifiable and hence represents Personal Information. Telstra took their case to the Administrative Appeals Tribunal, which found in favor of Telstra, with a surprising interpretation of "Personal Information". And the Commissioner then appealed to the next legal authority up the line.
At yesterday's launch of Privacy Awareness Week in Sydney, the Privacy Commissioner Timothy Pilgrim informed us that the full bench of the Federal Court is due to consider the case in August. This could be significant for data privacy law worldwide, for it all goes to the reach of these sorts of regulations.
I always thought the nuance in Personal Information was in the question of "identifiability" -- which could be contested case by case -- and those good old ambiguous legal modifiers like 'reasonably' or 'readily'. So it was a great surprise that the Administrative Appeals Tribunal, in overruling the Privacy Commissioner in Ben Grubb v Telstra, was exercised instead by the meaning of the word "about".
Recall that the Privacy Act (as amended in 2012) defines Personal Information as:
- "Information or an opinion about an identified individual, or an individual who is reasonably identifiable: (a) whether the information or opinion is true or not; and (b) whether the information or opinion is recorded in a material form or not."
The original question at the heart of Grubb vs Telstra was whether mobile phone call metadata falls under this definition. Commissioner Pilgrim showed that call metadata is identifiable to the caller (especially identifiable by the phone company itself that keeps extensive records linking metadata to customer records) and therefore counts as Personal Information.
When it reviewed the case, the tribunal agreed with Pilgrim that the metadata was identifiable, but in a surprise twist, found that the metadata is not actually about Ben Grubb but instead is about the services provided to him.
- Once his call or message was transmitted from the first cell that received it from his mobile device, the [metadata] that was generated was directed to delivering the call or message to its intended recipient. That data is no longer about Mr Grubb or the fact that he made a call or sent a message or about the number or address to which he sent it. It is not about the content of the call or the message ... It is information about the service it provides to Mr Grubb but not about him. See AATA 991 (18 December 2015) paragraph 112.
To me it's passing strange that information about calls made by a person is not also regarded as being about that person. Can information not be about more than one thing, namely about a customer's services and the customer?
Think about what metadata can be used for, and how broadly-framed privacy laws are meant to stem abuse. If Ben Grubb was found, for example, to have repeatedly called the same Indian takeaway shop, would we not infer something about him and his taste for Indian food? Even if he called the takeaway shop just once and never again, we might still conclude something about him, small as the sample is: perhaps that he tried Indian food and didn't care for it (remember that in Australian law, Personal Information doesn't necessarily have to be correct).
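The inference problem is easy to demonstrate. Even a toy call log (the numbers and timestamps below are made up) supports conclusions about the caller, not merely about the routing of the calls:

```python
from collections import Counter

# Toy demonstration that bare call metadata (numbers and timestamps, no
# content) supports inferences about a person. All records are invented.

call_log = [
    ("+61-2-5550-0101", "2015-06-01 19:05"),   # a takeaway shop
    ("+61-2-5550-0101", "2015-06-08 19:12"),
    ("+61-2-5550-0101", "2015-06-15 18:58"),
    ("+61-2-5550-0234", "2015-06-03 09:30"),   # a medical clinic
]

calls_per_number = Counter(number for number, _ in call_log)

# A repeated weekly dinnertime call to the same business is "about" the
# caller's habits, not merely about the delivery of the calls.
assert calls_per_number["+61-2-5550-0101"] == 3
```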
By the AAT's logic, a doctor's appointment book would not represent any Personal Information about her patients but only information about the services she has delivered to them. But in fact the appointment list of an oncologist, for instance, would tell us a lot about her patients' cancers.
Given the many ways that metadata can invade our privacy (not to mention that people may be killed based on metadata) it's important that the definition of Personal Information be broad, and that it has a low threshold. Any amount of metadata tells us something about the person.
I appreciate that the 'spirit of the law' is not always what matters, but let's compare the definition of Personal Information in Australia with corresponding concepts elsewhere (see more detail beneath). In the USA, Personally Identifiable Information is any data that may "distinguish" an individual; in the UK, Personal Data is anything that "relates" to an individual; in Germany, it is anything "concerning" someone. Clearly the intent is consistent worldwide. If data can be linked to a person, then it comes under data privacy law.
Which is how it should be. Technology neutral privacy law is framed broadly in the interests of consumer protection. I hope the Federal Court, in drilling into the definition of Personal Information, upholds what the Privacy Act is for.
Personal Information definitions around the world.
Personal Information, Personal Data and Personally Identifiable Information are variously and more or less equivalently defined as follows:
- UK: data which relate to a living individual who can be identified
- Germany: any information concerning the personal or material circumstances of an identified or identifiable individual
- Canada: information about an identifiable individual
- USA: information which can be used to distinguish or trace an individual's identity ...
- Australia: information or an opinion ... about an identified individual, or an individual who is reasonably identifiable.
Almost everything you read about the blockchain is wrong. No new technology since the Internet itself has excited so many pundits, but blockchain just doesn’t do what most people seem to think it does. We’re all used to hype, and we can forgive genuine enthusiasm for shiny new technologies, but many of the claims being made for blockchain are just beyond the pale. It's not going to stamp out corruption in Africa; it's not going to crowdsource policing of the financial system; it's not going to give firefighters unlimited communication channels. So just what is it about blockchain?
The blockchain only does one thing (and it doesn’t even do that very well). It provides a way to verify the order in which entries are made to a ledger, without any centralized authority. In so doing, blockchain solves what security experts thought was an unsolvable problem – preventing the double spend of electronic cash without a central monetary authority. It’s an extraordinary solution, and it comes at an extraordinary price. A large proportion of the entire world’s computing resource has been put to work contributing to the consensus algorithm that continuously watches the state of the ledger. And it has to be so, in order to ward off brute force criminal attack.
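The core mechanism can be sketched in a few lines. This toy hash chain is a simplification (a real blockchain adds proof-of-work so that rewriting history is prohibitively costly, not merely detectable), but it shows how chaining fixes the order of ledger entries:

```python
import hashlib
import json

# Minimal sketch of the one thing a blockchain does: fix the order of
# ledger entries so any attempt to rewrite history is detectable.

def block_hash(prev_hash, entry):
    payload = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64
    for entry in entries:
        prev = block_hash(prev, entry)
        chain.append({"entry": entry, "hash": prev})
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["entry"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A pays B 1 coin", "B pays C 1 coin"])
assert verify(chain)

chain[0]["entry"] = "A pays B 100 coins"   # tamper with history...
assert not verify(chain)                   # ...and the chain no longer checks out
```

In Bitcoin, the "price" paragraph above refers to proof-of-work: the consensus algorithm makes recomputing the chain's hashes, faster than the honest network, economically infeasible.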
How did an extravagant and very technical solution to a very specific problem capture the imagination of so many? Perhaps it’s been so long since the early noughties’ tech wreck that we’ve lost our herd immunity to the viral idea that technology can beget trust. Perhaps, as Arthur C. Clarke said, any sufficiently advanced technology looks like magic. Perhaps because the crypto currency Bitcoin really does have characteristics that could disrupt banking (and all the world hates the banks) blockchain by extension is taken to be universally disruptive. Or perhaps blockchain has simply (but simplistically) legitimized the utopian dream of decentralized computing.
Blockchain is antiauthoritarian and ruthlessly “trust-free”. The blockchain algorithm is rooted in politics; it was expressly designed to work without needing to trust any entity or coalition. Anyone at all can join the blockchain community and be part of the revolution.
The point of the blockchain is to track every single Bitcoin movement, detecting and rejecting double spends. Yet the blockchain APIs also allow other auxiliary data to be written into Bitcoin transactions, and thus tracked. So the suggested applications for blockchain extend far beyond payments, to the management of almost any asset imaginable, from land titles and intellectual property, to precious stones and medical records.
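Here is a sketch of what "putting an asset on the blockchain" usually means in practice: not storing the asset itself, but storing a small fingerprint of it in a transaction's auxiliary data field (the deed text below is invented for illustration):

```python
import hashlib

# Sketch of how non-payment assets get "on" a blockchain: you don't store
# the asset, you store a fingerprint of it inside a transaction's small
# auxiliary data field.

def asset_fingerprint(document: bytes) -> bytes:
    return hashlib.sha256(document).digest()   # 32 bytes, small enough to embed

deed = b"Lot 7, Plan 1234: transferred from Alice to Bob"
payload = asset_fingerprint(deed)

assert len(payload) == 32
# Note what is NOT in the ledger: the deed itself, and any proof that the
# signing key really belongs to the current owner of Lot 7.
```

The fingerprint proves the document existed unaltered at a point in the ledger's order; everything else (who owns what, who holds which key) must come from outside the chain.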
From a design perspective, the most troubling aspect of most non-payments proposals for the blockchain is the failure to explain why it’s better than a regular database. Blockchain does offer enormous redundancy and tamper resistance, thanks to a copy of the ledger staying up-to-date on thousands of computers all around the world, but why is that so much better than a digitally signed database with a good backup?
Remember what blockchain was specifically designed to do: resolve the order of entries in the ledger, in a peer-to-peer mode, without an administrator. When it comes to all-round security, blockchain falls short. It’s neither necessary nor sufficient for any enterprise security application I’ve yet seen. For instance, there is no native encryption for confidentiality; neither is there any access control for reading transactions, or writing new ones. The security qualities of confidentiality, authentication and, above all, authorization, all need to be layered on top of the basic architecture. ‘So what’ you might think; aren’t all security systems layered? Well yes, but the important missing layers undo some of the core assumptions blockchain is founded on, and that’s bad for the security architecture. In particular, as mentioned, blockchain needs massive scale, but access control, “permissioned” chains, and the hybrid private chains and side chains (put forward to meld the freedom of blockchain to the structures of business) all compromise the system’s integrity and fraud resistance.
And then there’s the slippery notion of trust. By “trust”, cryptographers mean so-called “out of band” or manual mechanisms, over and above the pure math and software, that deliver a security promise. Blockchain needs none of that ... so long as you confine yourself to Bitcoin. Many carefree commentators like to say blockchain and Bitcoin are separable, yet the connection runs deeper than they know. Bitcoins are the only things that are actually “on” the blockchain. When people refer to putting land titles or diamonds “on the blockchain”, they’re using a shorthand that belies blockchain’s limitations. To represent any physical thing in the ledger requires a schema – a formal agreement as to which symbols in the data structure correspond to what property in the real world – and a binding of the owner of that property to the special private key (known in the trade as a Bitcoin wallet) used to sign each ledger entry. Who does that binding? How exactly do diamond traders, land dealers, doctors and lawyers get their blockchain keys in the first place, and how does the world know who’s who? These questions bring us back to the sorts of hierarchical authorities that blockchain was supposed to get rid of.
There is no utopia in blockchain. The truth is that when we fold real world management, permissions, authorities and trust, back on top of the blockchain, we undo the decentralization at the heart of the design. If we can’t get away from administrators then the idealistic peer-to-peer consensus algorithm of blockchain is academic, and simply too much to bear.
I’ve been studying blockchain for two years now. My latest in-depth report was recently published by Constellation Research.
I was talking with government identity strategists earlier this week. We were circling (yet again) definitions of identity and attributes, and revisiting the reasonable idea that digital identities are "unique in a context". Regular readers will know I'm very interested in context. But in the same session we were discussing the public's understandable anxiety about national ID schemes. And I had a little epiphany: the word "unique", and the very idea of it, may be unhelpful. I wonder if we should avoid the word wherever we can.
The link from uniqueness to troublesome national identity is not just perception; there is a real tendency for identity and access management (IDAM) systems to over-identify, with an obvious privacy penalty. Security professionals feel instinctively that the more they know about people, the more secure we all will be.
Whenever we think uniqueness is important, I wonder if there are really other more precise objectives that apply? Is "singularity" a better word for the property we're looking for? Or the mouthful "non-ambiguity"? In different use cases, what we really need to know can vary:
- Is the person (or entity) accessing the service the same as last time?
- Is the person exercising a credential entitled to use it? (Delegation of digital identity actually makes "uniqueness" moot.)
- Does the Relying Party (RP) know the user "well enough" for the RP's purposes? That doesn't always mean uniquely.
I observe that when IDAM schemes come loaded with references to uniqueness, it tends to bias the way RPs do their identification and risk management designs. There is an expectation that uniqueness is important no matter what. Yet it is emerging that much fraud (most fraud?) exploits weaknesses at transaction time, not enrollment time: even if you are identified uniquely, you can still get defrauded by an attacker who takes over or bypasses your authenticator. So uniqueness in and of itself doesn't always help.
If people do want to use the word "unique" then they should have the discipline to always qualify it, as mentioned, as "unique in a context". But I have to say that "unique in a context" is not "unique".
Finally it's worth remembering that the word has long been degraded by the biometrics industry with its habit of calling almost any biological trait "unique". There's a sad lack of precision here. No biometric as measured is ever unique! Every mode, even iris, has a non-zero False Match Rate.
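A quick calculation shows why a non-zero False Match Rate rules out uniqueness at scale. The probability of at least one false match when comparing one probe against N strangers is 1 - (1 - FMR)^N:

```python
# Why a non-zero False Match Rate (FMR) means "not unique" in practice:
# probability of at least one false match against N strangers.

def p_false_match(fmr, population):
    return 1 - (1 - fmr) ** population

# An FMR of 1 in a million sounds like "uniqueness", but at the scale of
# a large city the chance of a false match is already better than even.
assert p_false_match(1e-6, 1_000_000) > 0.6   # roughly 63%
```

So at national ID scale, any biometric mode is guaranteed to throw up false matches; "unique" is simply the wrong word.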
What's in a word? A lot! I'd like to see more rigorous use of the word "unique". At least let's be aware of what it means subliminally to the people we're talking with - be they technical or otherwise. With the word bandied around so much, engineers can tend to think uniqueness is always a designed objective, and laypeople can presume that every authentication scheme is out to fingerprint them. Literally.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. For a decade now, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is doing - and not doing - about Card Not Present fraud. Here is our summary for the financial year 2015 stats.
Card Not Present (CNP) fraud has grown over 25 percent year-on-year from FY2014, and now represents 84 percent of all fraud on Australian cards.
APCA evidently has an uneasy relationship with the industry's technological responses to CNP fraud, like the controversial 3D Secure, and tokenization. Neither gets a mention in the latest payment fraud media release. Instead APCA puts the stress on shopper behaviour, describing the continued worsening in fraud as "a timely reminder to Australians to remain vigilant when shopping online". Sadly, this ignores the fact that card data used for organised criminal CNP fraud comes from mass breaches of databases, not from websites. There is nothing that shoppers can do when using their cards online to stop them being stolen, because cards are much more likely to be stolen from backend systems over which the shoppers have no control.
You can be as careful as you like online - you can even avoid Internet shopping entirely - and still have your card data stolen from a regular store and used in CNP attacks online.
- "Financial institutions and law enforcement have been working together to target skimming at ATMs and in taxis and this, together with the industry’s progressive roll-out of chip-reading at ATMs, is starting to reflect in the fraud data".
That's true. Fraud by skimming and carding was halved by the smartcard rollout, and has remained low and steady in absolute terms for three years. But APCA errs when it goes on:
- "Cardholders can help these efforts by always protecting their PINs and treating their cards like cash".
Safeguarding your physical card and PIN does nothing to prevent the mass breaches of card data held in backend databases.
A proper fix to the replay attack is easily within reach: re-use the same cryptography that solved skimming and carding, and restore a seamless payment experience for cardholders. Apple for one has grasped the nettle, extending its Secure Element-based Apple Pay method (established now for card-present NFC payments) to Card Not Present transactions, in the app.
See also my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).
The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry is still yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.
This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.
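To illustrate why chip-style cryptography defeats replay, here is a simplified sketch (real EMV uses its own cryptogram formats and key management; the key and transaction fields below are invented): each payment is authorised with a MAC over the transaction details and a counter, computed with a key that never leaves the chip or the phone's Secure Element.

```python
import hashlib
import hmac

# Simplified sketch of an EMV-style dynamic cryptogram. The key lives only
# inside the chip / Secure Element; the MAC covers the transaction details
# and a counter, so no two payments produce the same authorisation value.
# Key and field names are invented for illustration.

CARD_KEY = b"secret-key-inside-secure-element"

def cryptogram(amount_cents, merchant_id, counter):
    msg = f"{amount_cents}|{merchant_id}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()

c1 = cryptogram(4999, "merchant-42", counter=7)
c2 = cryptogram(4999, "merchant-42", counter=8)

# Unlike a static card number, an intercepted cryptogram is useless for the
# next transaction: the counter has moved on, so the expected MAC changes.
assert c1 != c2
```

Contrast this with today's CNP model, where the same static card number and CVV authorise every transaction, so a single breach enables unlimited replay.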
With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones, perhaps at last we will see CNP payments modernise for web and mobile shopping.
World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist's error, calling for unworkable new privacy mechanisms where none in fact are warranted.
The Telegraph reports Berners-Lee as saying "Some people say privacy is dead – get over it. I don't agree with that. The idea that privacy is dead is hopeless and sad." He highlighted that peoples' participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant.
Of course he's right about that. Yet he seems to underestimate the data privacy protections we already have. Instead, according to The Telegraph, he envisions "a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me".
It's a classic case of being careful what you ask for, in case you get it. What would control over "all data about you" look like? Most of the data about us these days - most of the personal data, aka Personally Identifiable Information (PII) - is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don't know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don't see that ownership could fix a privacy problem.
What could "ownership" of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.
The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data's creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA?
What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what's going on with their PII, and so on.
Berners-Lee alluded to the privacy threats of Big Data, and he's absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It's not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don't care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent.
Technologists often under-estimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect.
For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement was negotiated by the US Department of Commerce so that companies could self-certify broad compliance with European data protection rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable that the arrangement would eventually be annulled, as it was last October.
With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".
The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.
The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling complies with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.
But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.
Basic data privacy laws the world over require organisations to exercise restraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.
Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic.
For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted it had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what it did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wiretap law, but its application to the Internet is complex. And so the legal action has dragged on for years, and it's still not resolved.
I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong.
Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps. Over the years, many of Facebook's business practices, for instance, have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long-running dispute over biometric photo tagging. When Facebook generates tag suggestions, what it's doing is running facial recognition algorithms over photos in its vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people, without consent (and without restraint as to what might be done next with that new PII), seems to be unlawful under the Collection Limitation and Usage Limitation principles.
In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".
The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy.
The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.
The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.
My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see "Privacy Compliance Problems for Facebook", IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.
Here’s a recap of our analysis.
One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.
New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:
- Import contacts from your account and store them on Facebook's servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.
This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.
A key point under international privacy regulations is that this importing of contacts represents an indirect collection of PII of others (people who happen to be in a member's email address book), without their knowledge, let alone authorisation.
By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.
It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details from address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people whose details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.
But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.