Lockstep


Mobility and the experience of identity

We all know that digital transformation is imminent, but getting there is far from easy. The digital journey is fraught with challenges, not least of which is customer access. "Online" is not what it used to be; by many measures the online world is bigger than the "real world", and it's certainly not just a special corner of a network we occasionally log into. Many customers spend a substantial part of their lives online. The very word "online" is losing its meaning, with offline becoming a very unusual state. So enterprises are finding they need to totally rethink customer identity, bringing together the perspectives of the CTO, for risk management and engineering, and the CMO, for the voice of the customer.

Consider this. The customer experience of online identity was set in concrete in the 1960s, when information technology meant mainframes and computers only sat in "laboratories". That was when we had the first network logon. The username and password were designed by sys admins, for sys admins.

Passwords were never meant to be easy. Ease of use was irrelevant to system administrators; everything about their job was hard, and if they had to manage dozens of account identifiers, so be it. The security of a password depends on it being hard to remember and therefore, in a sense, hard to use. The efficacy of a password is in fact inversely proportional to its ease of use! Isn't that a unique property among consumer technologies?

The tragedy is that the same access paradigm was inherited from the mainframe era and passed right on through the age of the PC in the 1980s, to the Internet in the 2000s. Before we knew it, we had all turned into heavy duty "computer" users. The Personal Computer was always regarded as a miniaturized mainframe, with a graphical user interface layered over one or more arcane operating systems from which consumers never really escaped.

But now all devices are computers. Famously, a phone today is more powerful than all of NASA's 1969 moon landing IT put together. And the user experience of "computing" has finally changed, and radically so. Few people ever touch an operating system anymore. The whole UX is at the app level. What people know now is all tiles and icons, spoken commands, and gestures. Swipe, drag, tap, flick.

Identity management is probably the last facet of IT to be dragged out of the mainframe era. It's all thanks to mobility. We don't "log on" anymore; we unlock our device. Occasionally we might be asked to confirm who we are before we do something risky, like look up a health record or make a larger payment. The engineer might call it "trust elevation" or some such, but to the user it feels like a reassuring double check.

We might even stop talking about "Two Factor Authentication" now that the mobile is so ubiquitous. The phone is your second factor now, a constant part of your life, hardly ever out of sight, and instantly noticed if lost or stolen. And under the covers, mobile devices can make use of many other signals – history, location, activity, behaviour – to effect continuous or ambient authentication, and to look out for misuse.
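To make the idea concrete, here is a naive sketch (with invented signals and weights, not any vendor's actual algorithm) of how ambient signals might be folded into a running confidence score, with "trust elevation" triggered only when the score is too low for a risky action:

    # Hypothetical signals and weights, for illustration only.
    def confidence(signals):
        weights = {"device_unlocked_recently": 0.4,
                   "usual_location": 0.3,
                   "typical_usage_pattern": 0.3}
        return sum(weights[name] for name, present in signals.items() if present)

    signals = {"device_unlocked_recently": True,
               "usual_location": True,
               "typical_usage_pattern": False}

    # A risky action, such as a large payment, demands a higher score.
    if confidence(signals) < 0.7:
        print("step up: ask the user to confirm with a PIN or biometric")
    else:
        print("proceed silently")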

So the user experience of identity per se is melting away. We simply click on an app within an activated device and things happen. The authentication UX has been dictated for decades by technologists, but now, for the first time, the CTO and the CMO are on the same page when it comes to customer identity.

To explore these crucial trends, Ping Identity is hosting a webinar on June 2, Consumerization Killed the Identity Paradigm. To learn more about customer identity and how to implement it successfully in your enterprise, please join me and Ping Identity’s CTO Patrick Harding and CMO Brian Bell.

Posted in Internet, Identity, Constellation Research

Personal Information: What's it all 'about'?

For the past few years, a crucial case has been playing out in Australia's legal system over the treatment of metadata in privacy law. The next stanza is due to be written soon in the Federal Court.

It all began when a journalist with a keen interest in surveillance, Ben Grubb, wanted to understand the breadth and depth of metadata, and so requested that mobile network operator Telstra provide him a copy of his call records. Grubb thought to exercise his rights to access Personal Information under the Privacy Act. Telstra held back a lot of Grubb's call data, arguing that metadata is not Personal Information and is not subject to the access principle. Grubb appealed to the Australian Privacy Commissioner, who ruled that metadata is identifiable and hence represents Personal Information. Telstra took their case to the Administrative Appeals Tribunal, which found in favor of Telstra, with a surprising interpretation of "Personal Information". And the Commissioner then appealed to the next legal authority up the line.

At yesterday's launch of Privacy Awareness Week in Sydney, the Privacy Commissioner Timothy Pilgrim informed us that the full bench of the Federal Court is due to consider the case in August. This could be significant for data privacy law worldwide, for it all goes to the reach of these sorts of regulations.

I always thought the nuance in Personal Information was in the question of "identifiability" -- which could be contested case by case -- and those good old ambiguous legal modifiers like 'reasonably' or 'readily'. So it was a great surprise that the Administrative Appeals Tribunal, in overruling the Privacy Commissioner in Ben Grubb v Telstra, was exercised instead by the meaning of the word "about".

Recall that the Privacy Act (as amended in 2012) defines Personal Information as:

    • "Information or an opinion about an identified individual, or an individual who is reasonably identifiable: (a) whether the information or opinion is true or not; and (b) whether the information or opinion is recorded in a material form or not."

The original question at the heart of Grubb v Telstra was whether mobile phone call metadata falls under this definition. Commissioner Pilgrim found that call metadata is identifiable to the caller (especially by the phone company itself, which keeps extensive records linking metadata to customer records) and therefore counts as Personal Information.

When it reviewed the case, the tribunal agreed with Pilgrim that the metadata was identifiable, but in a surprise twist, found that the metadata is not actually about Ben Grubb but instead is about the services provided to him.

    • Once his call or message was transmitted from the first cell that received it from his mobile device, the [metadata] that was generated was directed to delivering the call or message to its intended recipient. That data is no longer about Mr Grubb or the fact that he made a call or sent a message or about the number or address to which he sent it. It is not about the content of the call or the message ... It is information about the service it provides to Mr Grubb but not about him. See AATA 991 (18 December 2015) paragraph 112.

To me it's passing strange that information about calls made by a person is not also regarded as being about that person. Can information not be about more than one thing, namely about a customer's services and the customer?

Think about what metadata can be used for, and how broadly-framed privacy laws are meant to stem abuse. If Ben Grubb was found, for example, to have repeatedly called the same Indian takeaway shop, would we not infer something about him and his taste for Indian food? Even if he called the takeaway shop just once and never again, we might still conclude something about him, small as the sample is; we might deduce he doesn't like Indian food (remember that in Australian law, Personal Information doesn't necessarily have to be correct).

By the AAT's logic, a doctor's appointment book would not represent any Personal Information about her patients, but only information about the services she has delivered to them. Yet the appointment list of an oncologist, for instance, would tell us a lot about those people's cancer.

Given the many ways that metadata can invade our privacy (not to mention that people may be killed based on metadata) it's important that the definition of Personal Information be broad, and that it has a low threshold. Any amount of metadata tells us something about the person.

I appreciate that the 'spirit of the law' is not always what matters, but let's compare the definition of Personal Information in Australia with corresponding concepts elsewhere (see more detail below). In the USA, Personally Identifiable Information is any data that may "distinguish" an individual; in the UK, Personal Data is anything that "relates" to an individual; in Germany, it is anything "concerning" someone. Clearly the intent is consistent worldwide: if data can be linked to a person, then it comes under data privacy law.

Which is how it should be. Technology neutral privacy law is framed broadly in the interests of consumer protection. I hope that the Federal Court, in drilling into the definition of Personal Information, upholds what the Privacy Act is for.

Personal Information definitions around the world.

Personal Information, Personal Data and Personally Identifiable Information are variously and more or less equivalently defined as follows (references are hyperlinked in the names of each country):

United Kingdom

    • data which relate to a living individual who can be identified

Germany

    • any information concerning the personal or material circumstances of an identified or identifiable individual

Canada

    • information about an identifiable individual

United States

    • information which can be used to distinguish or trace an individual's identity ...

Australia

    • information or an opinion ... about an identified individual, or an individual who is reasonably identifiable.

Posted in Privacy

What is it about blockchain?

Almost everything you read about the blockchain is wrong. No new technology since the Internet itself has excited so many pundits, but blockchain just doesn’t do what most people seem to think it does. We’re all used to hype, and we can forgive genuine enthusiasm for shiny new technologies, but many of the claims being made for blockchain are just beyond the pale. It's not going to stamp out corruption in Africa; it's not going to crowdsource policing of the financial system; it's not going to give firefighters unlimited communication channels. So just what is it about blockchain?

The blockchain only does one thing (and it doesn’t even do that very well). It provides a way to verify the order in which entries are made to a ledger, without any centralized authority. In so doing, blockchain solves what security experts thought was an unsolvable problem – preventing the double spend of electronic cash without a central monetary authority. It’s an extraordinary solution, and it comes at an extraordinary price. A large proportion of the entire world’s computing resource has been put to work contributing to the consensus algorithm that continuously watches the state of the ledger. And it has to be so, in order to ward off brute force criminal attack.
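For readers who want to see the mechanism in miniature, here is a toy sketch (in Python, with a trivially low difficulty; not real Bitcoin code) of how chaining hashes and proof of work fix the order of ledger entries:

    import hashlib, json

    def mine_block(prev_hash, entries, difficulty=4):
        # Toy proof of work: find a nonce so the block hash starts with
        # 'difficulty' zero hex digits. Real Bitcoin difficulty is vastly higher.
        nonce = 0
        while True:
            header = json.dumps({"prev": prev_hash, "entries": entries, "nonce": nonce})
            digest = hashlib.sha256(header.encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return {"prev": prev_hash, "entries": entries, "nonce": nonce, "hash": digest}
            nonce += 1

    # Each block commits to the previous block's hash, so the chain fixes the
    # order of entries; rewriting history means redoing all the work that follows.
    genesis = mine_block("0" * 64, ["Alice pays Bob 1 coin"])
    block2 = mine_block(genesis["hash"], ["Bob pays Carol 1 coin"])

The point of the enormous real-world network is that no attacker can redo that work faster than the honest majority keeps adding to it.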

How did an extravagant and very technical solution to a very specific problem capture the imagination of so many? Perhaps it’s been so long since the early noughties’ tech wreck that we’ve lost our herd immunity to the viral idea that technology can beget trust. Perhaps, as Arthur C. Clarke said, any sufficiently advanced technology looks like magic. Perhaps because the crypto currency Bitcoin really does have characteristics that could disrupt banking (and all the world hates the banks) blockchain by extension is taken to be universally disruptive. Or perhaps blockchain has simply (but simplistically) legitimized the utopian dream of decentralized computing.

Blockchain is anti-authoritarian and ruthlessly "trust-free". The blockchain algorithm is rooted in politics; it was expressly designed to work without needing to trust any entity or coalition. Anyone at all can join the blockchain community and be part of the revolution.

The point of the blockchain is to track every single Bitcoin movement, detecting and rejecting double spends. Yet the blockchain APIs also allow other auxiliary data to be written into Bitcoin transactions, and thus tracked. So the suggested applications for blockchain extend far beyond payments, to the management of almost any asset imaginable, from land titles and intellectual property, to precious stones and medical records.

From a design perspective, the most troubling aspect of most non-payments proposals for the blockchain is the failure to explain why it’s better than a regular database. Blockchain does offer enormous redundancy and tamper resistance, thanks to a copy of the ledger staying up-to-date on thousands of computers all around the world, but why is that so much better than a digitally signed database with a good backup?

Remember what blockchain was specifically designed to do: resolve the order of entries in the ledger, in a peer-to-peer mode, without an administrator. When it comes to all-round security, blockchain falls short. It’s neither necessary nor sufficient for any enterprise security application I’ve yet seen. For instance, there is no native encryption for confidentiality; neither is there any access control for reading transactions, or writing new ones. The security qualities of confidentiality, authentication and, above all, authorization, all need to be layered on top of the basic architecture. ‘So what’ you might think; aren’t all security systems layered? Well yes, but the important missing layers undo some of the core assumptions blockchain is founded on, and that’s bad for the security architecture. In particular, as mentioned, blockchain needs massive scale, but access control, “permissioned” chains, and the hybrid private chains and side chains (put forward to meld the freedom of blockchain to the structures of business) all compromise the system’s integrity and fraud resistance.

And then there’s the slippery notion of trust. By “trust”, cryptographers mean so-called “out of band” or manual mechanisms, over and above the pure math and software, that deliver a security promise. Blockchain needs none of that ... so long as you confine yourself to Bitcoin. Many carefree commentators like to say blockchain and Bitcoin are separable, yet the connection runs deeper than they know. Bitcoins are the only things that are actually “on” the blockchain. When people refer to putting land titles or diamonds “on the blockchain”, they’re using a short hand that belies blockchain’s limitations. To represent any physical thing in the ledger requires a schema – a formal agreement as to which symbols in the data structure correspond to what property in the real world – and a binding of the owner of that property to the special private key (known in the trade as a Bitcoin wallet) used to sign each ledger entry. Who does that binding? How exactly do diamond traders, land dealers, doctors and lawyers get their blockchain keys in the first place, and how does the world know who’s who? These questions bring us back to the sorts of hierarchical authorities that blockchain was supposed to get rid of.
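To illustrate the point, here is a toy sketch (using the Python cryptography package and an invented land-title schema, not any real registry's format): everything that makes the record meaningful sits outside the chain.

    import json, hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    owner_key = ec.generate_private_key(ec.SECP256K1())   # the "wallet" key

    title_record = {                  # hypothetical schema, agreed off-chain
        "asset": "land title",
        "parcel_id": "LOT-42",
    }
    record_bytes = json.dumps(title_record, sort_keys=True).encode()
    digest = hashlib.sha256(record_bytes).hexdigest()      # what a ledger entry might carry
    signature = owner_key.sign(record_bytes, ec.ECDSA(hashes.SHA256()))

    # Only the digest and signature would be embedded in a ledger entry. Nothing
    # in the chain says who holds owner_key, or that parcel LOT-42 even exists;
    # those bindings have to come from a registrar or some other authority.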

There is no utopia in blockchain. The truth is that when we fold real world management, permissions, authorities and trust back on top of the blockchain, we undo the decentralization at the heart of the design. If we can't get away from administrators, then the idealistic peer-to-peer consensus algorithm of blockchain is academic, and its cost simply too much to bear.

I’ve been studying blockchain for two years now. My latest in-depth report was recently published by Constellation Research.

Posted in Security, Internet, Innovation, Identity, Blockchain

Uniquely difficult

I was talking with government identity strategists earlier this week. We were circling (yet again) definitions of identity and attributes, and revisiting the reasonable idea that digital identities are "unique in a context". Regular readers will know I'm very interested in context. But in the same session we were discussing the public's understandable anxiety about national ID schemes. And I had a little epiphany: the word "unique", and the very idea of it, may be unhelpful. I wonder if we should avoid the word "uniqueness" wherever we can.

The link from uniqueness to troublesome national identity is not just perception; there is a real tendency for identity and access management (IDAM) systems to over-identify, with an obvious privacy penalty. Security professionals feel instinctively that the more they know about people, the more secure we all will be.

Whenever we think uniqueness is important, I wonder if there are really other more precise objectives that apply? Is "singularity" a better word for the property we're looking for? Or the mouthful "non-ambiguity"? In different use cases, what we really need to know can vary:

  • Is the person (or entity) accessing a service the same as last time?
  • Is the person exercising a credential entitled to use it? (Delegation of digital identity actually makes "uniqueness" moot.)
  • Does the Relying Party (RP) know the user "well enough" for the RP's purposes? That doesn't always mean uniquely.

I observe that when IDAM schemes come loaded with references to uniqueness, it tends to bias the way RPs do their identification and risk management designs. There is an expectation that uniqueness is important no matter what. Yet it is emerging that much fraud (most fraud?) exploits weaknesses at transaction time, not enrollment time: even if you are identified uniquely, you can still be defrauded by an attacker who takes over or bypasses your authenticator. So uniqueness in and of itself doesn't always help.

If people do want to use the word "unique" then they should have the discipline to always qualify it, as mentioned, as "unique in a context". But I have to say that "unique in a context" is not "unique".

Finally it's worth remembering that the word has long been degraded by the biometrics industry, with its habit of calling almost any biological trait "unique". There's a sad lack of precision here. No biometric as measured is ever unique! Every mode, even iris, has a non-zero False Match Rate.
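A back-of-the-envelope illustration (with assumed numbers, not any vendor's specification): even a False Match Rate of one in a million stops looking anything like "unique" once you run one-to-many searches over a large population.

    fmr = 1e-6                    # assumed per-comparison False Match Rate
    population = 10_000_000       # size of a hypothetical one-to-many search
    p_false_match = 1 - (1 - fmr) ** population
    print(p_false_match)          # ≈ 0.99995: a false match somewhere is near certain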

What's in a word? A lot! I'd like to see more rigorous use of the word "unique". At least let's be aware of what it means subliminally to the people we're talking with - be they technical or otherwise. With the word bandied around so much, engineers can come to think uniqueness is always a design objective, and laypeople can presume that every authentication scheme is out to fingerprint them. Literally.

Posted in Government, Identity, Privacy, Security

Card Not Present Fraud up another 25% YOY

The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months, covering the preceding 12-month period. For a decade now, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is doing - and not doing - about Card Not Present fraud. Here is our summary of the financial year 2015 stats.

[Figure: Card Not Present fraud trends on Australian cards, to FY 2015]

Card Not Present (CNP) fraud has grown over 25 percent year-on-year from FY2014, and now represents 84 percent of all fraud on Australian cards.

APCA evidently has an uneasy relationship with the industry's technological responses to CNP fraud, like the controversial 3D Secure, and tokenization. Neither gets a mention in the latest payment fraud media release. Instead APCA puts the stress on shopper behaviour, describing the continued worsening of fraud as "a timely reminder to Australians to remain vigilant when shopping online". Sadly, this ignores the fact that the card data used for organised criminal CNP fraud comes from mass breaches of databases, not from websites. There is nothing shoppers can do when using their cards online to stop the details being stolen, because the data is far more likely to be taken from backend systems over which shoppers have no control.

You can be as careful as you like online - you can even avoid Internet shopping entirely - and still have your card data stolen from a regular store and used in CNP attacks online.

APCA says:

    • "Financial institutions and law enforcement have been working together to target skimming at ATMs and in taxis and this, together with the industry’s progressive roll-out of chip-reading at ATMs, is starting to reflect in the fraud data".

That's true. Fraud by skimming and carding was halved by the smartcard rollout, and has remained low and steady in absolute terms for three years. But APCA errs when it goes on:

    • "Cardholders can help these efforts by always protecting their PINs and treating their cards like cash".

Safeguarding your physical card and PIN does nothing to prevent the mass breaches of card data held in backend databases.

A proper fix for replay attack is easily within reach: re-use the same cryptography that solved skimming and carding, and restore a seamless payment experience for card holders. Apple for one has grasped the nettle, and is using its Secure Element-based Apple Pay method (established now for card present NFC payments) for Card Not Present transactions, in-app.
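To sketch the idea (a toy illustration only, not the actual EMV or Apple Pay protocol): instead of presenting a static card number that can be replayed if breached, the device's secure element produces a one-off cryptogram for each transaction, using a key that never leaves the chip.

    import hmac, hashlib, os

    card_key = os.urandom(32)     # in reality, held only inside the secure element

    def cryptogram(amount_cents, merchant_id, counter):
        # A fresh code bound to this amount, merchant and transaction counter.
        msg = f"{amount_cents}|{merchant_id}|{counter}".encode()
        return hmac.new(card_key, msg, hashlib.sha256).hexdigest()

    print(cryptogram(4250, "MERCHANT-123", counter=17))

A breached database of such cryptograms is worthless to a fraudster, because each code only verifies for the one transaction it was generated for.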

See also my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).

Abstract

The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere, underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it's astonishing that the industry has yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.

This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.

With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones, perhaps at last we will see CNP payments modernise for web and mobile shopping.

Posted in Smartcards, Security, Payments, Innovation, Fraud

The last thing privacy needs is new laws

World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist's error, calling for unworkable new privacy mechanisms where none in fact are warranted.

The Telegraph reports Berners-Lee as saying "Some people say privacy is dead – get over it. I don't agree with that. The idea that privacy is dead is hopeless and sad." He highlighted that peoples' participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant.

Of course he's right about that. Yet he seems to underestimate the data privacy protections we already have. Instead, according to The Telegraph, he envisions "a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me".

It's a classic case of being careful what you ask for, in case you get it. What would control over "all data about you" look like? Most of the data about us these days - most of the personal data, aka Personally Identifiable Information (PII) - is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don't know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don't see that ownership could fix a privacy problem.

What could "ownership" of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.

The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data's creator to the individual to whom it applies? What if data applies to more than one person, as with household electricity records, or, more seriously, DNA?

What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what's going on with their PII, and so on.

Berners-Lee alluded to the privacy threats of Big Data, and he's absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It's not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don't care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent.

Technologists often under-estimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect.

Posted in Privacy, e-health, Big Data

The Privacy Shield - another blunt weapon

For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement, negotiated by the US Department of Commerce, let companies self-declare broad compliance with European privacy rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. "Safe Harbor" was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable that the arrangement would eventually be annulled, as it was last October.

With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".

The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.

The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling is complying with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.

But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.

Basic data privacy laws the world over require organisations to exercise constraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.

Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic.

For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted they had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what they did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wire tap law, but its application to the Internet is complex. And so the legal action has taken years and years, and it's still not resolved.

I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong.

Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps. Over the years, many of Facebook's business practices for instance have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long running dispute over biometric photo tagging. When Facebook generates tag suggestions, what they're doing is running facial recognition algorithms over photos in their vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people, without consent (and without restraint as to what might be done next with that new PII), seems to be unlawful under the Collection Limitation and Usage Limitation principles.

In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".

The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy.

The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.

Posted in Social Networking, Social Media, Privacy, Biometrics, Big Data

Finding friends gets a little harder

The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.

My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see "Privacy Compliance Problems for Facebook", IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.

Here’s a recap of our analysis.

One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.

New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:

    • Import contacts from your account and store them on Facebook's servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.

This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.

A key point under international privacy regulations is that this importing of contacts represents an indirect collection of the PII of others (people who happen to be in a member’s email address book), without their knowledge, let alone their authorisation.

By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.

It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details in address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people who’s details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.

But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.

Posted in Social Networking, Privacy

Weak links in the Blockchain

One of the silliest things I've read yet about blockchain came out in Business Insider Australia last week. They said that the blockchain “in effect” lets the crowd police the monetary system.

In the rush to make bigger and grander claims for the disruptive potential of blockchain, too many commentators are neglecting the foundations. If they think blockchain is important, then it’s all the more important they understand what it does well, and what it just doesn’t do at all.

Blockchain has one very clever, very innovative trick: it polices the order of special events (namely Bitcoin spends) without needing a central authority. The main “security” that blockchain provides is not tamper resistance or inviolability per se -- you can get that any number of ways using standard cryptography -- but rather the process by which a big network of nodes reaches agreement on the state of a distributed ledger, especially the order of updates to the ledger.

To say blockchain is “more secure” is a non sequitur. Security claims need context.

  • If what matters is agreeing ‘democratically’ on the order of events in a decentralised public ledger, without any central authority, then blockchain makes sense.
  • But if you don't care about the order of events, then blockchain is probably irrelevant or, at best, heavily over-engineered.
  • And if you do care about the order of events (like stock transactions) but you have some central authority in your system (like a stock exchange), then blockchain is not only over-engineered, but its much-admired maths is compromised by efforts to scale it down, into private chains and the like, for the power of the original blockchain consensus algorithm lies in its vast network, and the Bitcoin rewards for the miners that power it.

A great thing about blockchain is the innovation it has inspired. But let’s remember that the blockchain (the one underpinning Bitcoin) has been around for just seven years, and its spinoffs are barely out of the lab. Analysts and journalists are bound to be burnt if they over-reach at this early stage.

The initiatives to build smaller, private or special purpose distributed ledgers, to get away from Bitcoin and payments, detract from the original innovation, in two important ways. Firstly, even if they replace the Bitcoin incentive for running the network (i.e. mining or “proof of work”) with some other economic model (like “proof of stake”), they compromise the tamper resistance of blockchain by shrinking the pool. And secondly, as soon as you fold some command and control back into the original utopia, blockchain’s raison d'etre is no longer clear, and its construction looks over-engineered.

Business journalists are supposed to be sceptical about technology, but many have apparently taken leave of their critical faculties, even talking up blockchain as a "trust machine". You don’t need to be a cryptographer to understand the essence of blockchain, you just have to be cautious with magic words like “open” and “decentralised”, and the old saw "trust". What do they really mean? Blockchain does things that not all applications really need, and it doesn't do what many apps do need, like access control and confidentiality.

Didn't we learn from PKI that technology doesn't confer trust? It's been claimed that putting land titles on the blockchain will prevent government corruption. To which I say, please heed Bruce Schneier, who said only amateurs hack computers; professional criminals hack people.

Posted in Security, Payments, Innovation, Blockchain, Trust

A brush with fame (not)

Wired thinks it has unmasked Bitcoin inventor Satoshi Nakamoto as Australian security personality Craig Wright. Plenty of others beg to differ.

Curiously, I had an ugly argument with Wright and a handful of Bitcoin enthusiasts on Twitter in May 2015.

It started after I asked a simple question about why some people had started advocating blockchain for identity. I didn't get a straight answer, but instead copped a fair bit of abuse. Wright's Twitter account has since been deleted, so it's hard to reconstruct the thread (I'd love it if someone out there knows how to extract a more complete Twitter archive; I don't suppose anyone Storified the thread?).

Reproduced below is one side of the spat. I only have my own archived tweets from the time in question but you should get the gist. Wright could never stick to the point - what does blockchain have to offer identity management? Instead he took all inquiries as an attack. He's passionate about Bitcoin changing the world, and if I recall correctly, boasted of his own enormous wealth from Bitcoin mining (he's no crypto-anarchist, as is clear from his exorbitant LinkedIn profile, one of the longest you'll ever see). Wright's arguments were all deflections; he even dredged up a PKI project from 17 years ago on which we worked together, where evidently he and I had some difference of opinion, something I honestly can't remember.

10/05/2015 3:32 Blockchain-for-identity proponents: Please set out the problem to be solved, analyse it, state your proposal, and argue its benefits.
11/05/2015 22:52 .@caelyxsec: "Bitcoin is just soft certs" @matthewsinclair < Classic!
11/05/2015 22:56 .@matthewsinclair @caelyxsec "Passport", "no central authority", "no walled gardens". Same old utopian slogans. Plus blockmagic.
11/05/2015 22:57 What does a Onelogin actually mean? It's a nickname. Who vouches for it? @matthewsinclair @caelyxsec
11/05/2015 23:09 .@matthewsinclair: @caelyxsec "what does having my Twitter & GitHub usernames signed into the blockchain actually mean?"; Not much.
15/05/2015 8:20 Seems to be a first-come-first-served nickname and self-certified details saved to the #blockchain. @paulmadsen @iglazer @TechPolicy
15/05/2015 8:24 .@Chris_Skinner "Repeat after me: Bitcoin Bad, Blockchain Good"; But good for what? Time stamped archive.
15/05/2015 9:27 .@craigvallis @paulmadsen @iglazer Very little! I don't see identity specialists advocating #blockchain for pressing identity problems
15/05/2015 10:28 RT @craigvallis: @Steve_Lockstep @paulmadsen @iglazer Heard the same from BitCoin specialists, without the coin blockchain is just a database
15/05/2015 10:31 .@craigvallis Clever contribution of #blockchain is to solve the double spend problem. But not a problem in identity @paulmadsen @iglazer
15/05/2015 21:26 .@Chris_Skinner Sure, I get Bitcoin for some payments, but I don't get #blockchain for anything else.
15/05/2015 22:15 .@Chris_Skinner Nope. Blockchain special properties relate to stopping double spend. I don't see the advantages for eg contract exchange
15/05/2015 22:21 1/2 - Thesis: #blockchain is a bit magical, so some guess it must have potential beyond payments - like identity. We need rigor here
15/05/2015 22:23 2/2 - I liken this to the way some are enamored with Quantum Mechanics to explain eg consciousness. Even magic has limits.
15/05/2015 23:16 Turns out BTC is hard to sustain even for payments. But for non-payments, is there any business model at all? https://t.co/69eHD9ssFi
15/05/2015 23:36 .@Dr_Craig_Wright Actually I always proposed community based PKI http://t.co/DagiIx74la (2003) http://t.co/o6aYQWvqMA (2008). Going strong
15/05/2015 23:40 .@Dr_Craig_Wright There's not much to attack. I still can't find a rigorous explanation of blockchain for identity.
16/05/2015 1:01 .@Dr_Craig_Wright So most people are just guessing that blockchain has potential for identity.
16/05/2015 1:09 .@Dr_Craig_Wright But maybe you can point me to one those many sources to explain the potential of blockchain or whatever for identity?
16/05/2015 1:23 .@BitcoinBelle Please explain what blockchain does that a digital signature chained to eg a bank does not? @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:27 @Dr_Craig_Wright @BitcoinBelle @Chris_Skinner Explanations please, not abuse.
16/05/2015 1:29 .@BitcoinBelle I get BTC for the unbanked. I do. But I don't get contracts or patents in that setting. @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:32 @BitcoinBelle Can you follow a thread? Or a line of logic?
16/05/2015 1:34 .@BitcoinBelle So once again, explain please how a timestamp plus tamper resistance is special? @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:42 1/4: @benmcginnes Proof of what? Someone unilaterally asserted something about themselves? @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:43 2/4: "Proof" to what standard? That word implies accreditation somewhere. @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:44 3/4: Who relies on the proof? ie what's the detailed use case? @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright
16/05/2015 1:47 4/4: Why/how does interfacing to blockchain give better proof than a PK cert? @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright
16/05/2015 2:40 .@benmcginnes Math proof in identity is the easy bit. Proof of attributes and rel'ships matters more. @Chris_Skinner @Dr_Craig_Wright
16/05/2015 2:43 .@benmcginnes Oh please. That's why I'm asking people to compare 2 types: blockchain and PK certs. @Chris_Skinner @Dr_Craig_Wright
16/05/2015 2:46 .@Dr_Craig_Wright I mean accred in the broadest sense: a disinterested endorsement. Self asserted means 0 @benmcginnes @Chris_Skinner
16/05/2015 3:18 .@Dr_Craig_Wright Something I said in a PKI advisory 17 years seems to still be eating you Craig. What is it? @benmcginnes
16/05/2015 5:12 .@BitcoinBelle But. Why. Bother? What's better about blockchain, compared with putting your hysterics on Twitter? @el33th4xor
16/05/2015 5:16 So I asked for an explanation of #blockchain for identity. And all I get is hippy nonsense - it's not central, not fiat, not govt.
16/05/2015 8:35 @futureidentity It's certainly the case with Bitcoin that it's more about the people than the technology.
16/05/2015 10:26 @jonmatonis @futureidentity Thanks but sorry, what do you mean by user defined privacy?
16/05/2015 10:27 @jonmatonis @futureidentity Please explain deniability of ownership.
16/05/2015 11:06 .@jonmatonis Thanks. How is that realized with blockchain where all transactions are available for all to see? @futureidentity
16/05/2015 12:10 .@benmcginnes I don't need visuals. I need blockchain-for-identity pundits to set out the problem it solves. @jonmatonis @futureidentity
16/05/2015 19:52 Twitter: Where you can be sure to find all the answers to questions you never asked.
16/05/2015 19:57 .@adam3us But why #blockchain? It was designed to stop double spend. Cheaper ways to hold immutable attributes @jonmatonis @futureidentity
16/05/2015 20:04 RT @adam3us: .@Steve_Lockstep @jonmatonis @futureidentity Well indeed identity does not belong on chain. Payment protocol is offchain
16/05/2015 20:09 .@cdelargy Which id mgt action corresponds to spending? Is it each presentation of "I am Steve"? @adam3us @jonmatonis @futureidentity
16/05/2015 20:18 .@jonmatonis Which is to say identity is not the new form of currency? .@futureidentity
16/05/2015 20:21 .@adam3us Auxillary info meaning the attributes and most importantly who vouches for them? @cdelargy @jonmatonis @futureidentity
16/05/2015 22:00 RT @adam3us: .@Steve_Lockstep @cdelargy @jonmatonis @futureidentity Yes Blockchain hasn't bandwidth for finance app msgs with identity
16/05/2015 22:26 .@Beautyon_ Not at all. I've articulated how I see the main id problem to solve: http://t.co/LPXBHieawT I ask others do the same
16/05/2015 22:31 .@Beautyon_ I'm not anti Bitcoin. I'm pro rigor. Almost nobody weighing in articulates the IDAM problem blockchain supposedly fixes
16/05/2015 22:33 .@Beautyon_ I think I agree. Names per se are not as important as the more general "Here's an attribute about me you can rely on"
16/05/2015 22:36 .@Beautyon_ So I say we need IDAM system to imbue attributes with pedigree and present them so RPs r assured of pedigree and user control
16/05/2015 22:38 .@Beautyon_ If blockchain is involved in every attribute presentation, is bandwidth ok? And isn't the 10 minute reconciliation too long?
16/05/2015 22:40 .@Beautyon_ No, I frame identity as "what do I need to know about you to be able to deal with you?" in a context.
16/05/2015 22:47 .@Beautyon_ In the lingo of IDAM, the holder of the asset you want to access is the Relying Party. They rely on your credential or key.
16/05/2015 23:03 @Beautyon_ No I don't use GPG. Maybe I might still understand if someone offers an explanation.
16/05/2015 23:08 .@Beautyon_ Why the elitism? Why can't blockchain enthusiasts explain themselves to the unwashed? You're like Freemasons
16/05/2015 23:17 .@Beautyon_ 20 years in PKI. I think I got the basics. And an allergy to people who can't explain their craft in natural language.
17/05/2015 3:42 .@WulfKhan IDAM is complicated. Many facets. Many problems. Which are addressed by blockchain? I am not on about BTC. @Beautyon_
17/05/2015 4:22 .@Beautyon_ I advise organisations on non trivial authentication and privacy problems. DIY secrecy is not important in my world.
17/05/2015 4:35 User pseudonymity is a crude fragile measure. Privacy != secrecy. It's about what others do with info about you. https://t.co/VpiKWHTLBH

For what it's worth, in my wildest dreams I can't imagine the confusing, self-important Craig Wright being Nakamoto.

Posted in Blockchain, Security