In cyber security, user awareness, education and training have long passed their use-by date. We have technological problems that need technological fixes, yet governments and businesses remain averse to investing in real security. Instead, the long-standing management fad is to 'audit ourselves' out of trouble, and to over-play user awareness as a security measure when the systems we make users operate are inherently insecure.
It’s a massive systemic failure in the security profession.
We see a policy and compliance fixation everywhere. The dominant philosophy in security is obsessed with process. The international information security standard ISO 27001 is a management system standard; it has almost nothing prescriptive to say about security technology. Instead the focus is on documentation and audit. Box ticking. It's intellectually a carbon copy of the ISO 9001 quality management standard, and we all know the limitations of that.
Or do we? Remember that those who don't learn the lessons of history are condemned to repeat them. I urge all infosec practitioners to read this decade-old article: Is ISO 9000 really a standard? -- it should ring some bells.
Education, policy and process are almost totally useless in fighting ID theft. Consider this: those CD-ROMs with 25,000,000 financial records, lost in the mail by British civil servants in 2007, were valued at 1.5 billion pounds at the going rate on the stolen-identity black market. With stolen data so immensely valuable, how is security policy ever going to stop insiders cashing in on such treasure?
In another case, after data was lost by the Australian Tax Office, there was earnest criticism that the data should have been encrypted. But so what if it had been? What common encryption method would not be cracked by organised crime if there were millions and millions of dollars to be gained?
The best example of process- and policy-dominated security is probably the Payment Card Industry Data Security Standard, PCI-DSS. The effectiveness of PCI-DSS and its onerous compliance regime was considered by a US Homeland Security Congressional Committee in March 2009. In hearings, the National Retail Federation submitted that “PCI has been plagued by poor execution ... The PCI guidelines are onerous, confusing, and are constantly changing”. They noted the irony that “the credit card companies’ rules require merchants to store credit card data that many retailers do not want to keep” (emphasis in original). The committee chair remarked that “The essential flaw with the PCI Standard is that it allows companies to check boxes, but not necessarily be secure. Compliance does not equal security. We have to get beyond check box security.”
To really stop ID theft, we need proper technological preventative measures, not more policies and feel-good audits.
The near-exclusive emphasis on user education and awareness is a subtle form of blame shifting. It is simply beyond the capacity of regular users to tell pharming sites from real sites, or even to spot every phishing e-mail. And what of the feasibility of training people to "shop safely" online? It's a flimsy proposition, considering that the biggest cases of credit card theft have occurred at the backend databases of department store chains and payments processors. Most stolen card details in circulation probably originate from regular in-store Card Present transactions, not from Internet sites. The lesson is that even if you never shop online, you can have your card details stolen and abused behind your back. All the breathless advice about looking out for the padlock is moot.
In other walks of life we don’t put all the onus on user education. Think about car safety. Yes good driving practices are important, but the major focus is on legislated standards for automotive technology, and enforceable road rules. In contrast, Internet security is dominated by a wild west, everyone-for-themselves mentality, leading to a confusing patchwork of security gizmos, proprietary standards and no common benchmarks.
Astronomy is the archetypal pure science. It can seem frankly pointless, especially when projects like the Hubble Space Telescope cost a billion dollars or more. The point of astronomy is a vexed question, and one that politicians often have to engage with.
Some pundits justify astronomy on the grounds of the spinoffs, like better radio antenna technology. Some say our destiny is to emigrate from the planet, so we'd better start somewhere. Others simply assert unapologetically that peering into the heavens is what homo sapiens does, at any cost.
Here's a different rationale. To my mind (as an ex-astronomer), the deepest practical value of astronomy is that it enables us to perform experiments that are simply too grand ever to be done on earth. The limits of earth-bound experiments of course change over time, but in each era, astronomy has furnished answers to questions that elude terrestrial investigation.
For example, astronomers were the first to:

― measure the speed of light, by timing the eclipses of Jupiter's moons
― discover helium, detected in the solar spectrum before the element was ever isolated on Earth
― test General Relativity, by observing starlight bending around the Sun during an eclipse

And there must be other examples.
I mention General Relativity, which was another of those apparently academic pursuits, until it came to the rescue a few years ago in the most practical way. Soon after the Global Positioning System (GPS) came online, its results started drifting. Engineers quickly realised that the high precision clocks in orbit were getting out of sync with those on the ground. And the explanation turned out to be gravity. According to General Relativity, a clock runs more slowly the deeper it sits in a gravitational field, and because gravity is slightly weaker in orbit than on the Earth's surface, the GPS clocks were running faster than expected. The effect was just a few parts in ten billion, but enough to cause the positioning results to drift, and, what's more, to get worse over time. By reprogramming the GPS controllers to account for gravitational time dilation the problem was solved and the system has been stable ever since. So if it wasn't for Einstein, your sat nav wouldn't work, and you'd be lost. Or rather, more lost than you are now.
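The size of the relativistic effect is easy to estimate from first principles. Here's a back-of-the-envelope sketch in Python (the orbital radius and physical constants are standard textbook values, not figures from the GPS programme itself):

```python
# Back-of-the-envelope estimate of relativistic clock drift for GPS satellites.
GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
r_earth = 6.371e6     # mean Earth radius, m
r_gps = 2.6571e7      # GPS orbital radius (~20,200 km altitude), m

# General relativity: a clock higher in the gravitational potential runs faster.
grav = GM * (1 / r_earth - 1 / r_gps) / c**2

# Special relativity: the satellite's orbital speed makes its clock run slower.
v = (GM / r_gps) ** 0.5           # circular orbital speed, ~3.9 km/s
vel = -v**2 / (2 * c**2)

net = grav + vel                  # fractional rate difference, dimensionless
per_day_us = net * 86400 * 1e6    # accumulated offset per day, in microseconds

print(f"gravitational: {grav:+.2e}")   # ~ +5.3e-10
print(f"velocity:      {vel:+.2e}")    # ~ -8.3e-11
print(f"net drift:     {per_day_us:.1f} microseconds/day")  # ~ +38
```

Since light travels about 300 metres per microsecond, even a microsecond-scale clock error translates into ranging errors of hundreds of metres, which is why the correction is indispensable.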
So astronomy is supremely practical. And here's another thing. Astronomy occasionally provides the most profound and compelling truths about reality. My favourite example is a classic goose-bump moment in the history of science: Galileo's discovery of sunspots and his appreciation of what they meant. Until then, everyone thought the Sun was a perfect, unchanging disc of light. After projecting the Sun through his newly invented telescope onto a white sheet, Galileo soon noticed blemishes, which rather put paid to the perfection. But much more importantly, Galileo saw that the spots were moving. They drifted across the face of the disc over a matter of days, disappeared off one edge, and later returned at the other edge, where they had begun. In today's idiom, he might have said "O.M.F.G!" The Sun turns out to be a turning ball! It was a critical moment of unification in human culture, contributing more evidence to the realisation that all the things and all the stuff in the universe are fundamentally the same. The Copernican revolution was much more than star gazing: it reset humankind's understanding of the mystical, reinforcing that everything is ordered, and ordinary. Corporeal, and explicable to human minds.
It's not the only time a radical, instant, disruptive re-framing of our place in the world has been delivered by astronomy. It wasn't so long ago -- in the early 20th century -- that Edwin Hubble and his peers established that the 'nebulae' were actually galaxies just like the Milky Way, and that the universe therefore had to be vastly bigger than previously thought possible.
This kind of revelation about our place in the scheme of things only comes from astronomy.
Most people who favour the current Australian flag hold that the small Union Jack up in the corner is an objective nod to our history. The tacit assumption is that the Southern Cross is the major motif of the flag, and that the Union Jack was added. Yet this is a mythic misunderstanding of how our flag was put together, and is itself emblematic of a continuing servility to Britain.
The construction of the Australian flag was precisely the other way round! Technically, our flag is what vexillologists (flag specialists) call a "defaced blue ensign" of the British Navy. There are several blue ensign designs, and all of them start with the Union Jack, before being “defaced”. This is why the proud flags of New Zealand, Fiji, Tuvalu and all six Australian states are basically the same.
So our proud Southern Cross is literally a second class embellishment on someone else’s flag. Where’s the national pride in that?
It's not just the image of the Union Jack on our flag that offends my republican sensibilities. Australia as a young nation was subordinated in the very way its flag was constructed.
The search for an up-to-date flag continues ...
Privacy is a notoriously slippery topic. Even the word "privacy" has eluded universally accepted definition. Yet information privacy (aka data protection) law is really pretty straightforward, even if the implications of these laws are counter-intuitive for some. A degree of ignorance of privacy law has led to some infamous missteps. Here I'm going to review data privacy law, and look at how some of the big Internet brands continue to misunderstand privacy technicalities, at their peril.
There can be endless arguments about the meaning of privacy. Not only is it intensely personal, it also ranges across philosophy, human rights, civil liberties and politics.
Sometimes people try to analyse privacy rights through the legal frameworks of copyright or even data ownership, but these are not fruitful approaches. Copyright of course is a thorny issue; intellectual property rights are controversial in cyberspace, and they seem only to complicate privacy. As for "ownership", philosophers are still working out what it can even mean for data.
Australia's Privacy Act, like most such information privacy and data protection law worldwide, neatly side steps the moral and philosophical minefields.
Paradoxically, the words "private" and "public" don't even figure in the Privacy Act. Instead the focus is on Personal Information -- namely any information or opinion about an individual where their identity is "apparent or can reasonably be ascertained" -- and how it is handled. Note that the definition captures a lot more than personal details expressly provided by forms and questionnaires; it includes any data at all associated with an individual.
Consultants often advise that privacy and security are different things. And so they are, but even more importantly, privacy is only partially related to confidentiality and secrecy. Privacy is really all about control. Paradoxically perhaps, anonymity is not necessary for privacy; nor does having details about oneself in the public domain mean that the data escapes all privacy regulation. Information privacy, simply stated, is a state where organisations respect the knowledge they have about you, and are restrained in what they do with it.
All information privacy or data protection law (in jurisdictions that have it) centres on the following principles, amongst others:
― The Collection Principle means a business generally cannot gather (or acquire or even generate) Personal Information if it is not required for a defined business function, and without the individual's consent.
― The Use & Disclosure Principles mean that information gathered (or created) for one purpose cannot be used for unrelated secondary purposes without consent, nor can it be disclosed to unrelated parties.
― The Access & Correction Principles mean that an individual usually has the right to be given access to all Personal Information held by a business about them, and to have any errors fixed.
Some of the implications may be surprising, especially for technologists.
Privacy law is blind to how information is collected. It doesn't matter how Personal Information comes to be in your business; even if Personal Information is generated internally from audit logs or evaluative processes, once you have it, you are deemed to have made a collection according to privacy law. Moreover, even if Personal Information is collected from the public domain, it is still subject to privacy law.
[Update Feb 2013: A couple of recent cases have also highlighted the difference between anonymity/secrecy and privacy. In many places, especially Europe, privacy is much more about granting people control over how their Personal Information is used than it is about keeping all information secret. Therefore when anonymity is occasionally lost, individuals still have rights and legal recourse should their information be abused. The best example is that European regulators found Facebook's facial recognition processes to breach the Collection Limitation principle and had Facebook shut them down. The lesson is: big data processes and biometrics may give technologists fabulous powers to re-identify anonymous or 'public' data, but those powers cannot be used willy-nilly. Another potential test case is the 'DNA hacking' reported in early 2013, where bioinformaticians cleverly used genealogical data from public websites to re-identify anonymous DNA donors. And then we have Google Glass, which will inevitably generate boundless identification of people and objects captured on video in your daily walk through life. "Boundless", that is, if Google disregards the Collection principle. See also my recent post "The beginning of privacy".]
An important recent case is Google's collection of wifi data from open home networks by StreetView cars. Some argue it's careless of people not to encrypt their wireless setups, but the fact is that data gathered by sniffing networks is subject to the Privacy Act if it relates to individuals who can be identified (and with Google's vast linked databases, working out identities is assumed to be within its powers). A person has not agreed to the exploitation of their information merely because they might be lax about their security.
Some say privacy law hasn't kept up with technology. For the most part, established principles-based information privacy law does work well in cyberspace, for it is fundamentally all about the rights of individuals to have some control over who knows what about them. Information privacy principles are a powerful and straightforward way to analyse personal rights even in dynamic and complicated settings like online social networking. So conventional information privacy law is being used in Germany and elsewhere to curtail the more excessive practices of Google (collection of personally identifiable wifi transmissions) and of Facebook (generation of biometric templates from photo tagging and re-use of those templates to identify people in images data).
Yet networking technology does challenge privacy principles. We all know why Facebook, Twitter, Google and LinkedIn offer such fantastic services for free: it's because they're generating vast commercial value from the network information and Big Data they're amassing. Information privacy law requires that individuals be informed as to why Personal Information is collected about them and how it's going to be used. But if sophisticated data analytics and ever increasing networks of information lead to discoveries that aren't apparent until critical mass is reached, then it's actually impossible to inform members up front about the precise collection purpose. Instead, businesses should share more of the spoils of social networking with their customers, who typically gladly opt in if properly rewarded for participating in what is still a great big experiment.
This fundamental clash with the Collection Principle is the only case I know of where technology really has outstripped privacy law.
If you work in e-commerce and cyber security policy, law, regulations or strategy, you've almost certainly been taught the difference between "authentication" and "authorisation". One describes 'who you are' and the other what you're allowed to do. The dichotomy is at the heart of most network access control, and it informs almost all contemporary thinking about digital identity. And it's misguided.
I believe the sterile language of authentication and authorisation, especially the orthodox primacy of the former over the latter, has distorted the study of digital identity. By making authentication come first, the language cements the tacit assumption that we each have just one main identity, and it surfaces that core identity in all routine transactions. This is not a good starting point if we seek the right balance of security and privacy online.
Kim Cameron tried to shift this dichotomy with his "Laws of Identity", but sadly this particular subtlety never quite caught on. Cameron said that digital identity is "a set of claims made by one digital subject about itself or another digital subject". This means that a digital identity is really all about the attributes, breaking the nexus between authentication and authorisation. Cameron recognised explicitly that this new view "does not jive with some widely held beliefs – for example, that within a given context, identities have to be unique". And that belief is indeed widespread: it's at the heart of the "nymwars" dispute that erupted over Google's and Facebook's Real Names policies. Unfortunately, for all the forcefulness of the "Laws", opinions about the number of identities we 'really' have remain polarised.
People have long been confused about 'real' versus digital identity. A dogmatic obsession with 'real' identity is what shoved PKI off the rails in the mid 1990s. There are purists who say PKI can only be concerned with identity, but we really need to move away from an absolutist view of authentication.
In the vast majority of routine transactions, parties are only interested in authorisation and not identity. Consider: pharmacists dispensing prescriptions don't "know" (let alone trust) doctors. Investors don't "know" a company's auditors. Airline passengers don't "know" the pilots nor the airframe safety inspectors. Bank customers don't "know" their tellers. Employees don't "know" who signs their pay cheques. The parties to these transactions may be mutual strangers and yet they obviously know enough about one another to be able to transact usefully. Each party has a dependable identity in a particular context. In context, they are not total strangers. We can conclude that identity-in-context is precisely the same thing as authorisation.
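The pharmacist example above can be sketched in Cameron's claims-based terms. This is a minimal illustration only; the class design, issuer names and attribute strings are all invented for the sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A single assertion made by one digital subject about another."""
    issuer: str      # who makes the claim, e.g. a medical registration board
    subject: str     # who the claim is about
    attribute: str   # e.g. "registered_doctor"
    value: object    # e.g. True, or a provider number

@dataclass
class DigitalIdentity:
    """Per Cameron: 'a set of claims made by one digital subject about
    itself or another digital subject'. There is no single 'core'
    identifier here: the identity *is* the set of claims."""
    claims: frozenset

# The same person holds several context-specific identities:
as_doctor = DigitalIdentity(frozenset({
    Claim("Medical Board", "subject-1", "registered_doctor", True),
}))
as_customer = DigitalIdentity(frozenset({
    Claim("Acme Bank", "subject-1", "account_number", "12-3456"),
}))

# Authorisation in context inspects only the relevant claims;
# the pharmacist never needs to "know" the doctor personally.
def may_prescribe(identity):
    return any(c.attribute == "registered_doctor" and c.value
               for c in identity.claims)

print(may_prescribe(as_doctor))    # True
print(may_prescribe(as_customer))  # False
```

The point of the sketch is that neither check above ever consults a global, unique identity; each context interrogates only the claims it cares about.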
The idea that authentication and authorisation are different things is an artefact which, it seems to me, arose when 1970s era computer scientists started thinking about resource access control. The distinction does not usually arise in regular real world business, where all that matters in routine transactions is the credentials of the sender, in context.
Internet commerce is a collision of worlds: IT and business. And far too many of the default assumptions, language and sheer imaginings of technologists (like "non repudiation") have infiltrated our e-business paradigm. It's ironic because we're told incessantly that e-business and identity management are "not technology issues" and yet the received wisdom of digital identity has come from computer scientists!
In IT, "attributes" and authorisation are always secondary to identification and authentication. Yet the real world is subtly different. Yes, I identify myself with a primary authenticator like a driver's licence when I open a new bank account or join a video store. But I never use that breeder ID again, for the bank and video store each provide me with new credentials; that is, new identities in their respective contexts.
Surely the authentication-authorisation split is unhelpful to the twin causes of Internet security and privacy. It exposes to theft more breeder identity information than is generally necessary, and it enables otherwise disparate businesses to be joined up. The sooner we cement a new simplifying assumption the better: in most routine transactions, authorisation, and not identity, is all that matters.
Better clarity follows about what the real problem is with digital identity. For the most part, our important business attributes (including the ones most prone to ID theft, like account numbers, social security numbers and government identifiers) are grounded in conventional real world rules. They are issued by bricks-and-mortar institutions, and used online. The main problem is not with existing identity issuance processes; it's with the way perfectly good identities, once issued, are so vulnerable online. We usually present our IDs as simple alphanumeric data, passed around through the matrix without any check on their pedigree. So the real challenge is to preserve the integrity, authenticity and pedigree of the different identities we already have when we exercise them online. This is actually a straightforward technical issue, with readily available solutions using ordinary asymmetric cryptography. It is not necessary to engineer a whole new identity paradigm, changing the time-honoured conventions by which meaningful context-specific identities are issued. We simply need to take the recognised identities we already have and convey them in a smarter way online.
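Here is a sketch of how an already-issued identity attribute might be conveyed with its pedigree intact, using textbook RSA with toy parameters (deliberately insecure and for illustration only; a real deployment would use a vetted cryptographic library, and the attribute string and issuer name are invented):

```python
import hashlib

# Toy RSA key pair with tiny textbook primes -- NOT secure, illustration only.
p, q = 61, 53
n = p * q                            # public modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ mod inverse)

def digest(msg: bytes) -> int:
    # Hash the attribute, truncated so the toy modulus can handle it.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # The issuing institution signs with its private key.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Any relying party checks the signature with the public key.
    return pow(sig, e, n) == digest(msg)

# An issued, context-specific identity attribute, conveyed with pedigree:
attribute = b"account_number=12-3456,issuer=Acme Bank"
sig = sign(attribute)
print(verify(attribute, sig))   # True
```

Any change to the attribute or to the signature makes verification fail (up to the negligible collision odds of the toy modulus), so a relying party can check the pedigree of an ID without the holder presenting any broader "real" identity.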
PKI has a reputation for terrible complexity, but it is actually simpler than many mature domestic technologies.
It's interesting to ponder why PKI got to be (or look) so complicated. There have been at least two reasons. First, the term is frequently taken to mean the original overblown "Big PKI" general purpose identification schemes, with their generic and context-free passport-grade ID checks, horrid user agreements and fine print. Yet there are alternative ways to deploy public key technology in closed authentication schemes, and indeed that is where it thrives; see http://lockstep.com.au/library/pki. Second, there is all the gory technical detail foisted on lay people in the infamous "PKI 101" sessions. Typical explanations start with a tutorial on asymmetric cryptography even before they tell you what PKI is for.
I've long wondered what it is about PKI that leads its advocates to train people into the ground. Forty-odd years ago when introducing the newfangled mag stripe banking card, I bet the sales reps didn't feel the need to explain electromagnetism and ferric oxide chemistry.
This line of thought leads to fresh models for 'domesticating' PKI by embedding it in plastic cards. By re-framing PKI keys and certificates as being means to an end and not ends in themselves, we can also:
― identify dramatically improved supply chains to deliver PKI's benefits
― re-cast the traditionally difficult business model for CAs, and
― demystify how PKI can support a plurality of IDs and apps.
Consider the layered complexity of the conventional plastic card, and the way the layers correspond to steps in the supply chain. At its most basic level, the card rests on solid state physics and Maxwell's Equations for electromagnetism. These govern the properties of ferric oxide crystals, which are manufactured as powders and coated onto bulk tape by chemical companies like BASF and 3M. The tape is applied to blank cards, which are distributed to different schemes for personalisation. Usually the cards are pre-printed in bulk with artwork and physical security features specific to the scheme. In general, personalisation in each scheme starts with user registration. Data is written to the mag stripe according to one of a handful of coding standards which differ a little between banks, airlines and other niches. The card is printed or embossed, and distributed.
The variety of distinct schemes using magnetic stripe cards is almost limitless: bank cards, credit cards, government entitlements, health insurance, clubs, loyalty cards, gift cards, driver licences, employee ID, universities, professional associations etc etc. They all use the same ferromagnetic components delivered through a global supply chain, which at the lower layers is very specialised and delivered by only a handful of companies.
And needless to say, hardly anyone needs to know Maxwell's Equations to make sense of a mag stripe card.
The smartcard supply chain is very similar. The only technical difference is the core technology used to encode the user information. The theoretical foundations are cryptography instead of electromagnetism, and instead of bulk ferric oxide powders and tapes, specialist semiconductor companies fabricate the ICs and preload them with firmware. From that point on, the smartcard and mag stripe card supply chains overlap. In fact the end user in most cases can't tell the difference between the different generations of card technologies.
Smartcards (and their kin: SIMs, USB keys and smartphones) are the natural medium for deploying PKI technology.
Re-framing PKI deployment like this ...
― decouples PK technology from the application and scheme layers, and tames the technical complexity; it shows where to draw the line in "PKI 101" training
― provides a model for transitioning from conventional "id" technology to PKI with minimum disruption of current business processes and supplier arrangements
― shows that it's perfectly natural for PKI to be implemented in closed communities of interest (schemes) and takes us away from the unhelpful orthodox Big PKI model
― suggests new "wholesale" business models for CAs; historically CAs found it difficult to sell certificates direct, but a clearly superior model is to provide certificates into the initialisation step
― demonstrates how easy to use PKI should be; that is, exactly as easy to use as the mag stripe card.
I once discussed this sort of bulk supply chain model at a conference in Tokyo. Someone in the audience asked me how many CAs I thought were needed worldwide. I said maybe three or four, and was greeted with incredulous laughter. But seriously, if certificates are reduced to digitally signed objects that bind a parcel of cardholder information to a key associated with a chip, why shouldn't certificates be manufactured by a fully automatic CA service on an outsourced managed service basis? It's no different from security printing, another specialised industry with the utmost "trust" requirements but none of the weird mystique that has bedevilled PKI.
Update September 2012
The recent discovery that junk DNA is not actually junk rather reinforces my long standing thesis, espoused below, that we don't know enough about how genes work to be able to validate genetic engineering artifacts by testing alone. I point out that computer programs are only validated by a mixture of testing, code inspection and theory, all of which is based on knowing how the code works at the instruction level. But we don't have a terribly complete picture of how genes interact. We always knew they were massively parallel, and now it turns out that junk DNA has some sort of role in gene expression across the whole of the genome, raising the combinatorial complexity enormously. This tells me that we have little idea how modifications at one point in the genome can impact the functioning at any number of other points (but it hints at an explanation as to why human beings are so much more complex than nematodes despite having only a relatively small number of additional raw genetic instructions).
And now there is news that a cow in New Zealand, genetically engineered in respect of one allergenic protein, was born with no tail. It's too early to blame the GM for this oddity, but equally, the junk DNA finding surely undermines the confidence any genetic engineer can have that their changes have not had unexpected, and genuinely unpredictable, side effects.
Original post, 15 Jan 2011
As a software engineer years ago I developed a deep unease about genetic engineering and genetically modified organisms (GM). The software experience suggests to me that GM products cannot be verifiable given the state of our knowledge about how genes work. I’d like to share my thoughts.
Genetic engineering proponents seem to believe the entire proof of a GM pudding is in the eating. That is, if trials show that GM food is not toxic, then it must be safe, and there isn't anything else to worry about. The lesson I want others to draw from the still-new discipline of software engineering is that there is more to verifying the correctness of complex programs than testing the end product.
Recently I’ve come across an Australian government-sponsored FAQ Arguments for and against gene technology (May 2010) that supposedly provides a balanced view of both sides of the GM debate. Yet it sweeps important questions under the rug. [At one point the paper invites readers to think about whether agriculture is natural. It’s a highly loaded question, grounded in the soothing proposition that GM is simply an extension of the age-old artificial selection that gave us wheat, Merinos and all those different potatoes. The question glosses over the fact that when genes recombine under normal sexual reproduction, cellular mechanisms constrain where each gene can end up, and most mutations are still-born. GM is not constrained; it jumps levels. It is quite unlike any breeding that has gone before.]
Genes are very frequently compared with computer software, for good reason. I urge that the comparison be examined more closely, so that lessons can be drawn from the long standing “Software Crisis”.
Each gene codes for a specific protein. That much we know. Less clear is how relatively few genes -- 20,000 for a nematode; 25,000 for a human being -- can specify an entire complex organism. Science is a long way from properly understanding how genes specify bodies, but it is clear that each genome is an immensely intricate ensemble of interconnected biochemical short stories. We know that genes interact with each other, turning each other on and off, and more subtly influencing how each is expressed. In software parlance, genetic codes are executed in a massively parallel manner. This combinatorial complexity is probably why I can share fully half of my genes with a turnip, and have an “executable file” in DNA that is only 20% longer than that of a worm, and yet I can be so incredibly different from those organisms.
If genomes are like programs then let’s remember they have been written achingly slowly over eons, to suit the circumstances of a species. Genomes are revised in a real world laboratory over billions of iterations and test cases, to a level of confidence that software engineers can’t even dream of. Brassica napus.exe (i.e. canola) is at v1000000000.1. Tinkering with isolated parts of this machinery, as if it were merely some sort of wiki with articles open to anyone to edit, could have consequences we are utterly unable to predict.
In software engineering, it is received wisdom that most bugs result from imprudent changes made to existing programs. Furthermore, editing one part of a program can have unpredictable and unbounded impacts on any other part of the code. Above all else, all but the very simplest software in practice is untestable. So mission critical software (like the implantable defibrillator code I used to work on) is always verified by a combination of methods, including unit testing, system testing, design review and painstaking code inspection. Because most problems come from human error, software excellence demands formal design and development processes, and high level programming languages, to preclude subtle errors that no amount of testing could ever hope to find.
How many of these software quality mechanisms are available to genetic engineers? Code inspection is moot when we don’t even know how genes normally interact with one another; how can we possibly tell by inspection if an artificial gene will interfere with the “legacy” code?
What about the engineering process? It seems to me that GM is akin to assembly programming circa the 1960s. The state of the art in genetic engineering is nowhere near even Fortran, let alone modern object-oriented languages.
Can today’s genetic engineers demonstrate a rigorous verification regime, given the reality that complex software programs are inherently untestable?
We should pay much closer attention to the genes-as-software analogy. Some fear GM products because they are unnatural; others because they are dominated by big business and a mad rush to market. I simply say let’s slow down until we’re sure we know what we're doing.
What do you call it when a metaphor or analogy outgrows the word it is based on, thus co-opting that word to mean something quite new? Metaphors are meant to clarify complex concepts by letting people think of them in simpler terms. But if the detailed meaning is actually different, then the metaphor becomes misleading and dangerous.
I'm thinking of the idea of the electronic passport. Ever since the early days of Big PKI, there has been the beguiling idea of an electronic passport that will let the holder into all manner of online services and enable total strangers to "trust" one another online. Microsoft, of course, later named its digital identity service "Passport", and the word is still commonplace in discussions of all manner of authentication solutions.
The idea is that the passport allows you to go wherever you like.
Yet there is no such thing.
A real world passport doesn't let you into any old country. It's not always sufficient; you often need a visa. You can't stay as long as you like in a foreign place. Some countries won't let you in at all if you carry the passport of an unfriendly nation. You need to complete a landing card and customs declarations specific to your particular journey. And finally, when you've got to the end of the arrivals queue, you are still at the mercy of an immigration officer who has the discretion to turn you away. As with all business, there is so much more going on here than personal identity.
So in the sense of the meaning important to the electronic passport metaphor, the "real" passport doesn't actually exist!
The simplistic notion of the electronic passport is deeply unhelpful. The dream and promise of general purpose digital certificates is what derailed PKI: such certificates are unwieldy, involve unprecedented mechanisms for conferring open-ended "trust", and are rarely useful on their own (ironically, that's also a property of real passports). Think of the time and money wasted chasing the electronic passport when all along PKI technology was better suited to closed transactions. What matters in most transactions is not personal identity but rather credentials specific to the business context. There never has been a single general purpose identity credential.
And now with "open" federated identity frameworks, we're sleep-walking into the same intractable problems, all because people have been seduced by a metaphor based on something that doesn't exist.
The well-initiated understand that the Laws of Identity, OIX, NSTIC and the like involve a plurality of identities, and multiple attributes tuned to different contexts. Yet NSTIC in particular is still confused by many with a single new ID, a misunderstanding aided and abetted by NSTIC's promoters using terms like "interoperable" without care, and by casually 'imagining' that a student will one day log in to their bank using their student card.
Words are powerful and they're also malleable. Some might say I'm being too pedantic sticking to the traditional reality of the "passport". But no. It would be OK in my opinion for "passport" to morph into something more powerful and universal -- except that it can't. The real point in all of this is that multiple identities are an inevitable consequence of how identities evolve to suit distinct business contexts, and so the very idea of a digital passport is a bit delusional.
What most people seem to be missing in the NSTIC discussion is the sheer novelty of a weird and wonderful transaction matrix where Identity Providers and Attribute Providers get joined to Service Providers and Customers. [See NIST's new NSTIC website (which should be superseded by a forthcoming program office site), White House cybersecurity czar Howard Schmidt's blog, and Identity Woman's nicely reasoned account. There are quite a few useful tweets tagged #NSTIC.]
They are going to need a raft of brand new legal agreements to cover off liability for damages arising from misidentification when an Identity Provider has nothing to do with the Service. Experience in Australian identity initiatives like the Trust Centre and VANguard shows that such agreements are challenging to draft and extraordinarily difficult for lawyers to accept. Risk management is intractable unless conditions are imposed on what credentials can be used for, and then the negotiation of the fine print becomes critical. New laws are almost certainly needed to limit liability in NSTIC. The possibility of legislation has been touched on, but it needs elevating to the very top of the to-do list.
Put simply: in so much business today, the Service Provider is also the Identity Provider. If you change that arrangement by adding a third party IdP, the contractual consequences are enormous. Even if the scheme sponsors can draft new legal agreements for service providers, customers and identity providers, the lawyers for the banks, telcos, governments and so on will say "Wow! We've never seen a contract like this before." What then? Those are words you never want to hear from a commercial lawyer.
The NSTIC Program Office would do well to appreciate that open federated identity for serious applications like banking, healthcare and professional services is still an unproven idea. In fact it's a truly radical idea. The proliferation of weed-like social IDs is held up as a model for future higher risk transactions, but those IDs tell us very little about serious e-business. Today's social logons are unverified nicknames, used by websites that don't care who you are.
It's not the sort of thing that governments normally jump into with such haste.
Why should digital identity be so tricky? The past decade is littered with earnest initiatives that failed to meet expectations (like the Australian Trust Centre) or consortia that over promised and under delivered (such as Liberty Alliance). Over time I’ve been a part of three promising federated identity initiatives, all of which failed to launch. For the past decade we’ve had countless deconstructions of “trust” and dissertations on “identity” but none of this work has led to the sort of breakthrough that’s clearly needed.
Now we have Kantara in Liberty's place, and the Open Identity Exchange (OIX), which is said to reflect an "ecosystem" of identity providers and consumers. The National Strategy for Trusted Identities in Cyberspace (NSTIC) has co-opted the OIX architecture wholesale.
In spite of its conspicuous failures, and the revolving door of security industry consortia, Federated Identity has become an orthodoxy. NSTIC takes “federation” as a given.
All federated identity models start with the intuitively appealing premise that if an individual has already been identified by one service provider, then that identification should be made available to other services, to save time, streamline registration processes, reduce costs, and open up new business channels. It's a potent mix of supposed benefits, and yet strangely unachievable. True, we can now enjoy the convenience of logging onto multiple blogs and social sites with an OpenID, or an unverified Twitter account. But higher risk services like banking, e-health and government welfare stand apart, still maintaining their own identifiers and sovereign registration processes.
“Open” is one of those feel-good words that are self-evidently desirable. Like “interoperable” and “ecosystem”, the term is bandied about without much examination. What exactly does “open” identity mean?
There is a strong implication in "open identity" that identities issued by different organisations can be (nay, should be) treated equally. But when I look at any of the 'serious' identities used when transacting with business and with government, there is almost always an obvious preferred issuer for each of them. Banks issue credit cards; health agencies issue health identifiers; governments issue driver licences, SSNs, tax file numbers and passports; employers issue employee IDs; registration bodies issue professionals' credentials.
So these types of identities are not actually “open” on the issuer side.
Now, if there is usually a one-to-one relationship between a type of identity and the natural issuer of that identity (or in other words, if there is usually just one preferred issuer for each given identity), then a great deal of the open identity framework seems to be over-engineered.
Making things too complicated
This is just one example of the wide ranging abstractions that characterise orthodox identity thinking, the aim of which is to create "trust frameworks" sufficient to enable business to be conducted amongst strangers. To this end, federated identity proponents implore banks and government agencies to re-invent themselves as "Identity Providers" in accordance with the weird and wonderful Laws of Identity. This is an unfamiliar role for many institutions which have evolved over many decades to manage their members and business in tight silos.
The Laws of Identity and the new frameworks are chock-full of novel generalisations. They deconstruct identities, attributes and services, and imagine that when two parties meet for the first time with a desire to transact, they start from scratch to negotiate a set of attributes that confer mutual trust. Yet in practice, it is rare for parties in business to start from such a low base. Instead, merchants assume that shoppers come with credit cards, patients assume that doctors come with medical qualifications, and banks assume that customers have accounts. If you don't have the right credential for the transaction at hand, then that's just bad luck. You simply can’t do business, and you may have to go back, out of band, and get yourself appropriately credentialed.
Perhaps the most distracting generalisation in the new identity ecosystem is that Service Providers, Identity Providers and Attribute Providers are all different entities. In reality, these roles are usually fulfilled simultaneously and invisibly by banks, governments, social networks and so on, each serving the needs of distinct albeit overlapping groups of users.
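A minimal sketch can make this concrete. The class, customer data and methods below are invented for illustration only; the point is that a single institution such as a bank plays all three "ecosystem" roles at once, inside one silo, with no third party involved:

```python
# Sketch only: one institution playing all three "ecosystem" roles at once.
# The class, methods and data are hypothetical illustrations.

class Bank:
    def __init__(self):
        # The bank's own registration process populates this silo.
        self._customers = {"alice": {"password": "s3cret", "credit_limit": 5000}}

    # Identity Provider role: the bank authenticates its own customers.
    def authenticate(self, user: str, password: str) -> bool:
        record = self._customers.get(user)
        return record is not None and record["password"] == password

    # Attribute Provider role: the bank vouches for customer attributes.
    def credit_limit(self, user: str) -> int:
        return self._customers[user]["credit_limit"]

    # Service Provider role: the bank delivers the actual service,
    # relying on the two roles above without any external party.
    def transfer(self, user: str, password: str, amount: int) -> str:
        if not self.authenticate(user, password):
            raise PermissionError("unknown customer or bad password")
        if amount > self.credit_limit(user):
            raise ValueError("amount exceeds credit limit")
        return f"transferred {amount} for {user}"
```

Splitting these three interlocking methods across three unrelated legal entities is exactly the novel arrangement the federated frameworks propose, and it is where the liability questions come from.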
All federated identity projects I have worked on were undone by the legal complexity and loss of control when customer relationship silos are broken down. It seems obvious with 20/20 hindsight, yet federation projects can battle on for years before they hit the wall.
If we are to avoid wasting more time and energy, we urgently need a new set of simplifying assumptions instead of complicating generalisations. Fresh thinking about digital identity will not only demystify the grand plans for federated identity; it will also help with more immediate challenges like electronic verification (EV) of identity and bank account portability.
A great deal of effort has been wasted on federated models and open identity frameworks, catering for a utopia where parties have no prior business arrangements. We don't do routine transactions in the real world without context, and I can't see the point of designing radical new frameworks with untold liability implications to enable business to be done 'freestyle' online.
The urgent problems of identity theft and cyber fraud can be dealt with directly, by addressing the reliability of digital identity data. We don't need to change or extend the meaning of existing identities, nor the ways in which service providers deal directly with their clients. The generalisations in the open identity frameworks may be intellectually fascinating but they mostly only complicate matters.
Effective action in cyber security demands simplification, and not academic abstraction.