I was talking with government identity strategists earlier this week. We were circling (yet again) definitions of identity and attributes, and revisiting the reasonable idea that digital identities are "unique in a context". Regular readers will know I'm very interested in context. But in the same session we were discussing the public's understandable anxiety about national ID schemes. And I had a little epiphany that the word "unique" and the very idea of it may be unhelpful. I wonder if we could avoid using the word "uniqueness" wherever we can.
The link from uniqueness to troublesome national identity is not just perception; there is a real tendency for identity and access management (IDAM) systems to over-identify, with an obvious privacy penalty. Security professionals feel instinctively that the more they know about people, the more secure we all will be.
Whenever we think uniqueness is important, I wonder whether other, more precise objectives really apply. Is "singularity" a better word for the property we're looking for? Or the mouthful "non-ambiguity"? In different use cases, what we really need to know can vary:
- Is the person (or entity) accessing a service the same as last time?
- Is the person exercising a credential cleared to use it? (Delegation of digital identity actually makes "uniqueness" moot.)
- Does the Relying Party (RP) know the user "well enough" for the RP's purposes? That doesn't always mean uniquely.
I observe that when IDAM schemes come loaded with references to uniqueness, it tends to bias the way RPs do their identification and risk management designs. There is an expectation that uniqueness is important no matter what. Yet it is emerging that much fraud (most fraud?) exploits weaknesses at transaction time, not enrollment time: even if you are identified uniquely, you can still get defrauded by an attacker who takes over or bypasses your authenticator. So uniqueness in and of itself doesn't always help.
If people do want to use the word "unique" then they should have the discipline to always qualify it, as mentioned, as "unique in a context". But I have to say that "unique in a context" is not "unique".
Finally it's worth remembering that the word has long been degraded by the biometrics industry, with its habit of calling almost any biological trait "unique". There's a sad lack of precision here. No biometric as measured is ever unique! Every mode, even iris, has a non-zero False Match Rate.
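To make the point concrete, here's a back-of-the-envelope sketch of what a non-zero False Match Rate means at population scale. The FMR figure and gallery size below are purely illustrative assumptions, not vendor claims, and the independence assumption is a simplification:

```python
# Illustrative only: the FMR and gallery size are assumed figures.
# With any non-zero False Match Rate, one-to-many identification against
# a large gallery makes false matches all but certain.

def prob_at_least_one_false_match(fmr: float, gallery_size: int) -> float:
    """Probability that a single probe falsely matches at least one record,
    assuming independent comparisons at the given False Match Rate."""
    return 1.0 - (1.0 - fmr) ** gallery_size

# Suppose a system with an FMR of one in a million, searched against a
# national-scale gallery of 10 million records.
p = prob_at_least_one_false_match(1e-6, 10_000_000)
print(f"P(at least one false match) = {p:.4f}")  # effectively certain
```

Even a one-in-a-million FMR, impressive as it sounds, virtually guarantees false matches when the system runs one-to-many searches at national scale.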
What's in a word? A lot! I'd like to see more rigorous use of the word "unique". At least let's be aware of what it means subliminally to the people we're talking with - be they technical or otherwise. With the word bandied around so much, engineers can tend to think uniqueness is always a designed objective, and laypeople can presume that every authentication scheme is out to fingerprint them. Literally.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12m period. For a decade now, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is doing - and not doing - about Card Not Present fraud. Here is our summary for the financial year 2015 stats.
Card Not Present (CNP) fraud has grown over 25 percent year-on-year from FY2014, and now represents 84 percent of all fraud on Australian cards.
APCA evidently has an uneasy relationship with any of the industry's technological responses to CNP fraud, like the controversial 3D Secure, and tokenization. Neither gets a mention in the latest payment fraud media release. Instead APCA puts the stress on shopper behaviour, describing the continued worsening in fraud as "a timely reminder to Australians to remain vigilant when shopping online". Sadly, this ignores the fact that card data used for organised criminal CNP fraud comes from mass breaches of databases, not from websites. There is nothing shoppers can do when using their cards online to stop the data being stolen, because it is far more likely to be taken from backend systems over which shoppers have no control.
You can be as careful as you like online - you can even avoid Internet shopping entirely - and still have your card data stolen from a regular store and used in CNP attacks online.
- "Financial institutions and law enforcement have been working together to target skimming at ATMs and in taxis and this, together with the industry’s progressive roll-out of chip-reading at ATMs, is starting to reflect in the fraud data".
That's true. Fraud by skimming and carding was halved by the smartcard rollout, and has remained low and steady in absolute terms for three years. But APCA errs when it goes on:
- "Cardholders can help these efforts by always protecting their PINs and treating their cards like cash".
Safeguarding your physical card and PIN does nothing to prevent the mass breaches of card data held in backend databases.
A proper fix for replay attack is easily within reach: re-use the same cryptography that solved skimming and carding, and restore a seamless payment experience for card holders. Apple for one has grasped the nettle, and is using its Secure Element-based Apple Pay method (established now for card-present NFC payments) for Card Not Present transactions, in-app.
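The essence of the fix can be sketched in a few lines. What follows is a loose illustration of the idea behind chip-style dynamic authorisation codes, not EMV's actual ARQC derivation (which uses its own session-key scheme); the key handling, message format, and function names are all my own assumptions:

```python
import hashlib
import hmac
import os

# Sketch only: NOT the real EMV cryptogram algorithm. The idea is that
# instead of a static card number, which can be replayed if stolen, each
# payment carries a one-time code computed by a secret key held in the
# chip (or Secure Element) over the details of that specific transaction.

card_key = os.urandom(16)  # provisioned into the chip at personalisation

def make_cryptogram(key: bytes, amount_cents: int,
                    merchant_id: str, counter: int) -> str:
    """One-time authorisation code bound to one specific transaction."""
    msg = f"{amount_cents}|{merchant_id}|{counter}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]

# Two payments at the same merchant for the same amount yield different
# codes, so a code intercepted from transaction 1 is useless for
# transaction 2: replay simply fails at the issuer.
c1 = make_cryptogram(card_key, 4999, "MERCHANT-42", counter=1)
c2 = make_cryptogram(card_key, 4999, "MERCHANT-42", counter=2)
assert c1 != c2
```

A breached database of such one-time codes is worthless to a fraudster, which is precisely why chip cards halved skimming and carding, and why the same approach should be extended to Card Not Present payments.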
See also my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).
The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere, underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry has yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.
This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.
With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones, perhaps at last we will see CNP payments modernise for web and mobile shopping.
World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist's error, calling for unworkable new privacy mechanisms where none in fact are warranted.
The Telegraph reports Berners-Lee as saying "Some people say privacy is dead – get over it. I don't agree with that. The idea that privacy is dead is hopeless and sad." He highlighted that people's participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant.
Of course he's right about that. Yet he seems to underestimate the data privacy protections we already have. Instead he envisions "a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me" he said according to The Telegraph.
It's a classic case of being careful what you ask for, in case you get it. What would control over "all data about you" look like? Most of the data about us these days - most of the personal data, aka Personally Identifiable Information (PII) - is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don't know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don't see that ownership could fix a privacy problem.
What could "ownership" of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.
The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data's creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA?
What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what's going on with their PII, and so on.
Berners-Lee alluded to the privacy threats of Big Data, and he's absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It's not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don't care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent.
Technologists often under-estimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect.
For many years, American businesses have enjoyed a bit of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement was negotiated by the US Department of Commerce so that companies could self-declare broad compliance with data security rules. Normally organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years had been questioned by the more activist regulators in Europe. And so it seemed inevitable that the arrangement would eventually be annulled, as it was last October.
With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".
The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.
The Privacy Shield and the earlier Safe Harbor arrangement are really only about satisfying one corner of European data protection laws, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries actually have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling complies with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.
But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.
Basic data privacy laws the world over require organisations to exercise constraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.
Over and above Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic.
For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers for their geolocation databases. Google continuously collects WiFi IDs and coordinates via its StreetView cars. The privacy problem here was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted they had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what they did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wire tap law, but its application to the Internet is complex. And so the legal action has taken years and years, and it's still not resolved.
I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong.
Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps. Over the years, many of Facebook's business practices for instance have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long running dispute over biometric photo tagging. When Facebook generates tag suggestions, what they're doing is running facial recognition algorithms over photos in their vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people, without consent (and without restraint as to what might be done next with that new PII), seems to be unlawful under the Collection Limitation and Usage Limitation principles.
In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".
The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy.
The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.
The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.
My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see "Privacy Compliance Problems for Facebook", IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.
Here’s a recap of our analysis.
One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.
New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:
- Import contacts from your account and store them on Facebook's servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.
This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.
A key point under international privacy regulations is that this importing of contacts represents an indirect collection of PII of others (people who happen to be in a member’s email address book), without their knowledge, let alone authorisation.
By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.
It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details in address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people whose details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.
But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.
One of the silliest things I've read yet about blockchain came out in Business Insider Australia last week. They said that the blockchain “in effect” lets the crowd police the monetary system.
In the rush to make bigger and grander claims for the disruptive potential of blockchain, too many commentators are neglecting the foundations. If they think blockchain is important, then it’s all the more important they understand what it does well, and what it just doesn’t do at all.
Blockchain has one very clever, very innovative trick: it polices the order of special events (namely Bitcoin spends) without needing a central authority. The main “security” that blockchain provides is not tamper resistance or inviolability per se -- you can get that any number of ways using standard cryptography -- but rather the process for a big network of nodes to reach agreement on the state of a distributed ledger, especially the order of updates to the ledger.
To say blockchain is “more secure” is a non sequitur. Security claims need context.
- If what matters is agreeing ‘democratically’ on the order of events in a decentralised public ledger, without any central authority, then blockchain makes sense.
- But if you don't care about the order of events, then blockchain is probably irrelevant or, at best, heavily over-engineered.
- And if you do care about the order of events (like stock transactions) but you have some central authority in your system (like a stock exchange), then blockchain is not only over-engineered, but its much-admired maths is compromised by efforts to scale it down, into private chains and the like, for the power of the original blockchain consensus algorithm lies in its vast network, and the Bitcoin rewards for the miners that power it.
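To see why tamper evidence on its own doesn't need a blockchain, here is a minimal hash chain sketched in Python. This is my own toy illustration (the function names and record format are assumptions), showing that retrospective edits are detectable with plain cryptography, no mining and no network required; what blockchain uniquely adds is decentralised agreement on the order of new entries:

```python
import hashlib
import json

# A minimal hash chain: each entry commits to the previous entry's hash,
# so any retrospective edit is detected by re-walking the chain. This
# gives tamper EVIDENCE with standard cryptography alone.

def append(chain, record):
    """Add a record whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Re-walk the chain, recomputing every hash and link."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": entry["prev"], "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"pay": 10, "to": "alice"})
append(ledger, {"pay": 5, "to": "bob"})
assert verify(ledger)

ledger[0]["record"]["pay"] = 1000   # retrospective tampering...
assert not verify(ledger)           # ...is detected by re-verification
```

Twenty lines of standard hashing gets you an append-only, tamper-evident log. What it does not get you is a way for mutually distrusting parties to agree on which entry comes next, and that ordering problem is the one thing blockchain's consensus machinery actually solves.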
A great thing about blockchain is the innovation it has inspired. But let’s remember that the blockchain (the one underpinning Bitcoin) has been around for just seven years, and its spinoffs are barely out of the lab. Analysts and journalists are bound to be burnt if they over-reach at this early stage.
The initiatives to build smaller, private or special purpose distributed ledgers, to get away from Bitcoin and payments, detract from the original innovation, in two important ways. Firstly, even if they replace the Bitcoin incentive for running the network (i.e. mining or “proof of work”) with some other economic model (like “proof of stake”), they compromise the tamper resistance of blockchain by shrinking the pool. And secondly, as soon as you fold some command and control back into the original utopia, blockchain’s raison d'etre is no longer clear, and its construction looks over-engineered.
Business journalists are supposed to be sceptical about technology, but many have apparently taken leave of their critical faculties, even talking up blockchain as a "trust machine". You don’t need to be a cryptographer to understand the essence of blockchain, you just have to be cautious with magic words like “open” and “decentralised”, and the old saw "trust". What do they really mean? Blockchain does things that not all applications really need, and it doesn't do what many apps do need, like access control and confidentiality.
Didn't we learn from PKI that technology doesn't confer trust? It's been claimed that putting land titles on the blockchain will prevent government corruption. To which I say, please heed Bruce Schneier, who said only amateurs hack computers; professional criminals hack people.
Curiously, I had an ugly argument with Craig Wright and a handful of Bitcoin enthusiasts on Twitter in May 2015.
It started after I asked a simple question about why some people had started advocating blockchain for identity. I didn't get a straight answer, but instead copped a fair bit of abuse. Wright's Twitter account has since been deleted, so it's hard to reconstruct the thread (I'd love it if someone out there knows how to extract a more complete Twitter archive; I don't suppose anyone Storified the thread?).
Reproduced below is one side of the spat. I only have my own archived tweets from the time in question but you should get the gist. Wright could never stick to the point - what does blockchain have to offer identity management? Instead he took all inquiries as an attack. He's passionate about Bitcoin changing the world, and if I recall correctly, boasted of his own enormous wealth from Bitcoin mining (he's no crypto-anarchist, as is clear from his exorbitant LinkedIn profile, one of the longest you'll ever see). Wright's arguments were all deflections; he even dredged up a PKI project from 17 years ago on which we worked together, where evidently he and I had some difference of opinion, something I honestly can't remember.
|10/05/2015 3:32||Blockchain-for-identity proponents: Please set out the problem to be solved, analyse it, state your proposal, and argue its benefits.|
|11/05/2015 22:52||.@caelyxsec: "Bitcoin is just soft certs" @matthewsinclair < Classic!|
|11/05/2015 22:56||.@matthewsinclair @caelyxsec "Passport", "no central authority", "no walled gardens". Same old utopian slogans. Plus blockmagic.|
|11/05/2015 22:57||What does a Onelogin actually mean? It's a nickname. Who vouches for it? @matthewsinclair @caelyxsec|
|11/05/2015 23:09||.@matthewsinclair: @caelyxsec "what does having my Twitter & GitHub usernames signed into the blockchain actually mean?"; Not much.|
|15/05/2015 8:20||Seems to be a first-come-first-served nickname and self-certified details saved to the #blockchain. @paulmadsen @iglazer @TechPolicy|
|15/05/2015 8:24||.@Chris_Skinner "Repeat after me: Bitcoin Bad, Blockchain Good"; But good for what? Time stamped archive.|
|15/05/2015 9:27||.@craigvallis @paulmadsen @iglazer Very little! I don't see identity specialists advocating #blockchain for pressing identity problems|
|15/05/2015 10:28||RT @craigvallis: @Steve_Lockstep @paulmadsen @iglazer Heard the same from BitCoin specialists, without the coin blockchain is just a database|
|15/05/2015 10:31||.@craigvallis Clever contribution of #blockchain is to solve the double spend problem. But not a problem in identity @paulmadsen @iglazer|
|15/05/2015 21:26||.@Chris_Skinner Sure, I get Bitcoin for some payments, but I don't get #blockchain for anything else.|
|15/05/2015 22:15||.@Chris_Skinner Nope. Blockchain special properties relate to stopping double spend. I don't see the advantages for eg contract exchange|
|15/05/2015 22:21||1/2 - Thesis: #blockchain is a bit magical, so some guess it must have potential beyond payments - like identity. We need rigor here|
|15/05/2015 22:23||2/2 - I liken this to the way some are enamored with Quantum Mechanics to explain eg consciousness. Even magic has limits.|
|15/05/2015 23:16||Turns out BTC is hard to sustain even for payments. But for non-payments, is there any business model at all? https://t.co/69eHD9ssFi|
|15/05/2015 23:36||.@Dr_Craig_Wright Actually I always proposed community based PKI http://t.co/DagiIx74la (2003) http://t.co/o6aYQWvqMA (2008). Going strong|
|15/05/2015 23:40||.@Dr_Craig_Wright There's not much to attack. I still can't find a rigorous explanation of blockchain for identity.|
|16/05/2015 1:01||.@Dr_Craig_Wright So most people are just guessing that blockchain has potential for identity.|
|16/05/2015 1:09||.@Dr_Craig_Wright But maybe you can point me to one those many sources to explain the potential of blockchain or whatever for identity?|
|16/05/2015 1:23||.@BitcoinBelle Please explain what blockchain does that a digital signature chained to eg a bank does not? @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:27||@Dr_Craig_Wright @BitcoinBelle @Chris_Skinner Explanations please, not abuse.|
|16/05/2015 1:29||.@BitcoinBelle I get BTC for the unbanked. I do. But I don't get contracts or patents in that setting. @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:32||@BitcoinBelle Can you follow a thread? Or a line of logic?|
|16/05/2015 1:34||.@BitcoinBelle So once again, explain please how a timestamp plus tamper resistance is special? @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:42||1/4: @benmcginnes Proof of what? Someone unilaterally asserted something about themselves? @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:43||2/4: "Proof" to what standard? That word implies accreditation somewhere. @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:44||3/4: Who relies on the proof? ie what's the detailed use case? @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 1:47||4/4: Why/how does interfacing to blockchain give better proof than a PK cert? @benmcginnes @BitcoinBelle @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 2:40||.@benmcginnes Math proof in identity is the easy bit. Proof of attributes and rel'ships matters more. @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 2:43||.@benmcginnes Oh please. That's why I'm asking people to compare 2 types: blockchain and PK certs. @Chris_Skinner @Dr_Craig_Wright|
|16/05/2015 2:46||.@Dr_Craig_Wright I mean accred in the broadest sense: a disinterested endorsement. Self asserted means 0 @benmcginnes @Chris_Skinner|
|16/05/2015 3:18||.@Dr_Craig_Wright Something I said in a PKI advisory 17 years seems to still be eating you Craig. What is it? @benmcginnes|
|16/05/2015 5:12||.@BitcoinBelle But. Why. Bother? What's better about blockchain, compared with putting your hysterics on Twitter? @el33th4xor|
|16/05/2015 5:16||So I asked for an explanation of #blockchain for identity. And all I get is hippy nonsense - it's not central, not fiat, not govt.|
|16/05/2015 8:35||@futureidentity It's certainly the case with Bitcoin that it's more about the people than the technology.|
|16/05/2015 10:26||@jonmatonis @futureidentity Thanks but sorry, what do you mean by user defined privacy?|
|16/05/2015 10:27||@jonmatonis @futureidentity Please explain deniability of ownership.|
|16/05/2015 11:06||.@jonmatonis Thanks. How is that realized with blockchain where all transactions are available for all to see? @futureidentity|
|16/05/2015 12:10||.@benmcginnes I don't need visuals. I need blockchain-for-identity pundits to set out the problem it solves. @jonmatonis @futureidentity|
|16/05/2015 19:52||Twitter: Where you can be sure to find all the answers to questions you never asked.|
|16/05/2015 19:57||.@adam3us But why #blockchain? It was designed to stop double spend. Cheaper ways to hold immutable attributes @jonmatonis @futureidentity|
|16/05/2015 20:04||RT @adam3us: .@Steve_Lockstep @jonmatonis @futureidentity Well indeed identity does not belong on chain. Payment protocol is offchain|
|16/05/2015 20:09||.@cdelargy Which id mgt action corresponds to spending? Is it each presentation of "I am Steve"? @adam3us @jonmatonis @futureidentity|
|16/05/2015 20:18||.@jonmatonis Which is to say identity is not the new form of currency? .@futureidentity|
|16/05/2015 20:21||.@adam3us Auxiliary info meaning the attributes and most importantly who vouches for them? @cdelargy @jonmatonis @futureidentity|
|16/05/2015 22:00||RT @adam3us: .@Steve_Lockstep @cdelargy @jonmatonis @futureidentity Yes Blockchain hasn't bandwidth for finance app msgs with identity|
|16/05/2015 22:26||.@Beautyon_ Not at all. I've articulated how I see the main id problem to solve: http://t.co/LPXBHieawT I ask others do the same|
|16/05/2015 22:31||.@Beautyon_ I'm not anti Bitcoin. I'm pro rigor. Almost nobody weighing in articulates the IDAM problem blockchain supposedly fixes|
|16/05/2015 22:33||.@Beautyon_ I think I agree. Names per se are not as important as the more general "Here's an attribute about me you can rely on"|
|16/05/2015 22:36||.@Beautyon_ So I say we need IDAM system to imbue attributes with pedigree and present them so RPs r assured of pedigree and user control|
|16/05/2015 22:38||.@Beautyon_ If blockchain is involved in every attribute presentation, is bandwidth ok? And isn't the 10 minute reconciliation too long?|
|16/05/2015 22:40||.@Beautyon_ No, I frame identity as "what do I need to know about you to be able to deal with you?" in a context.|
|16/05/2015 22:47||.@Beautyon_ In the lingo of IDAM, the holder of the asset you want to access is the Relying Party. They rely on your credential or key.|
|16/05/2015 23:03||@Beautyon_ No I don't use GPG. Maybe I might still understand if someone offers an explanation.|
|16/05/2015 23:08||.@Beautyon_ Why the elitism? Why can't blockchain enthusiasts explain themselves to the unwashed? You're like Freemasons|
|16/05/2015 23:17||.@Beautyon_ 20 years in PKI. I think I got the basics. And an allergy to people who can't explain their craft in natural language.|
|17/05/2015 3:42||.@WulfKhan IDAM is complicated. Many facets. Many problems. Which are addressed by blockchain? I am not on about BTC. @Beautyon_|
|17/05/2015 4:22||.@Beautyon_ I advise organisations on non trivial authentication and privacy problems. DIY secrecy is not important in my world.|
|17/05/2015 4:35||User pseudonymity is a crude fragile measure. Privacy != secrecy. It's about what others do with info about you. https://t.co/VpiKWHTLBH|
For what it's worth, in my wildest dreams I can't imagine the confusing, self-important Craig Wright being Nakamoto.
An unpublished letter to the editor of The Economist.
November 1, 2015
Just as generalists mesmerized by quantum physics are prone to misapply it to broader but unrelated problems, some are making exorbitant claims for the potential of blockchain to change the world ("The trust machine", The Economist, October 31st). Yes, blockchain is extraordinarily clever, but it was designed specifically to stop electronic cash from being double spent, without needing central oversight. As a general ledger, blockchain is unwieldy and expensive.
Trust online is all about provenance. How can I be sure a stranger’s claimed attributes, credentials and possessions are genuine? Proving a credit card number, employment status, or ownership of a block of land in a ‘democratic’ peer-to-peer mesh strikes some as utopian, but really it’s oxymoronic. The blockchain is an indelible record of claims, which still need to be vouched for before they are carved forever into mathematical stone.
Principal Analyst - Identity & Privacy, Constellation Research.
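The letter's distinction between an indelible record and a vouched-for claim can be illustrated in a few lines. This is a toy hash chain I've sketched for the purpose, not any real blockchain implementation: it shows that the structure is tamper-evident, yet happily accepts any claim, true or false, that is fed into it.

```python
import hashlib
import json

def add_block(chain, claim):
    """Append a claim to a toy hash chain.

    Integrity of the record is guaranteed by the hash links;
    the truth of the claim is not -- the chain records whatever
    it is given, with no vouching.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"claim": claim, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"claim": claim, "prev": prev_hash, "hash": digest})
    return chain

def verify(chain):
    """Check that no recorded claim has been altered after the fact."""
    prev = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"claim": block["claim"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "Alice owns lot 7")      # true or false? the chain cannot tell
add_block(chain, "Bob is a licensed MD")  # accepted just the same
assert verify(chain)                      # tamper-evidence, not truth
```

Once carved in, a claim cannot be quietly altered (any edit breaks `verify`), but nothing about the structure establishes who Alice is or whether she ever owned lot 7. That provenance still has to come from somewhere outside the chain.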
This morning Microsoft's CEO Satya Nadella gave a global speech about enterprise security. He announced a new Cyber Defense Operations Center, a should-not-be-new Microsoft Enterprise Cybersecurity Group, and a not-at-all-new-sounding Enterprise Mobility Suite (EMS). The webcast can be replayed here, but don't expect to be blown away. It's all just table stakes for a global cloud provider.
Security is being standardised all over the place now. Ordinary people are getting savvier about security best practice; they know, for example, that biometric templates need to be handled carefully in client devices, and that secure storage is critical for assets like identities and Bitcoin. "Secure Element" is almost a lay-person's term now (Apple tried to give the iPhone security chip the fancy name "Enclave" but now seems to regard it as so standard that it doesn't need branding).
All this awareness is great, but it's fast becoming hygiene. Like airplane safety. It's a bit strange for corporations to seek to compete on security, or to have the CEO announce what are really textbook security services. At the end of the speech, I couldn't tell if anything sets Microsoft apart from its arch competitors Google or Amazon.
Most of today's CISOs operate at a higher, more strategic level than malware screening, anti-virus and encryption. Nadella's subject matter was really deep in the plumbing. Not that there's anything wrong with that. But it just didn't seem to me like the subject matter for a CEO's global webcast.
The Microsoft "operational security posture" is very orthodox, resting on "Platform, Intelligence and Partners". I didn't see anything new here, just a big strong cloud provider doing exactly what they should: leveraging the hell out of a massive operation, with massive resources, and massive influence.
A big part of my research agenda in the Digital Safety theme at Constellation is privacy. And what a vexed topic it is! It's hard to even know how to talk about privacy. For many years, folks have covered privacy in more or less academic terms, drawing on sociology, politics and pop psychology, joining privacy to human rights, and crafting various new legal models.
Meanwhile the data breaches get worse, and most businesses have just bumped along.
When you think about it, it’s obvious really: there’s no such thing as perfect privacy. The real question is not about ‘fundamental human rights’ versus business, but rather, how can we optimise a swarm of competing interests around the value of information?
Privacy is emerging as one of the most critical and strategic of our information assets. If we treat privacy as an asset, instead of a burden, businesses can start to cut through this tough topic.
But here’s an urgent issue. A recent regulatory development means privacy may just stop a lot of business getting done. It's the European Court of Justice decision to shut down the US-EU Safe Harbor arrangement.
The privacy Safe Harbor was a work-around negotiated by the Federal Trade Commission, allowing companies to send personal data from Europe into the US.
But the Safe Harbor is no more. It's been ruled unlawful. So it’s a big, big problem for European operations, many multinationals, and especially US cloud service providers.
At Constellation we've researched cloud geography and previously identified competitive opportunities for service providers to differentiate and compete on privacy. But now this is an urgent issue.
It's time American businesses stopped getting caught out by global privacy rulings. There shouldn't be too many surprises here, if you understand what data protection means internationally. Even the infamous "Right To Be Forgotten" ruling on Google’s search engine – which strikes so many technologists as counter intuitive – was a rational and even predictable outcome of decades old data privacy law.
The leading edge of privacy is all about Big Data. And we ain't seen nothin' yet!
Look at artificial intelligence, Watson Health, intelligent personal assistants, hackable cars, and the Internet of Everything where everything is instrumented, and you see information assets multiplying exponentially. Privacy is actually just one part of this. It’s another dimension of information, one that can add value, but not in a neat linear way. The interplay of privacy, utility, usability, efficiency, efficacy, security, scalability and so on is incredibly complex.
The broader issue is Digital Safety: safety for your customers, and safety for your business.