On July 23, BlackBerry hosted its second annual Security Summit, once again in New York City. As with last year’s event, this was a relatively intimate gathering of analysts and IT journalists, brought together for the lowdown on BlackBerry’s security and privacy vision.
By his own account, CEO John Chen has met plenty of scepticism over his diverse and, some say, chaotic product and services portfolio. And yet it’s beginning to make sense. There is a strong credible thread running through Chen’s initiatives. It all has to do with the Internet of Things.
Disclosure: I traveled to the BlackBerry Security Summit as a guest of BlackBerry, which covered my transport and accommodation.
The Growth Continues
In 2014, John Chen opened the show with the announcement he was buying the German voice encryption firm Secusmart. That acquisition appears to have gone well for all concerned; they say nobody has left the new organisation in the 12 months since. News of BlackBerry’s latest purchase - of crisis communications platform AtHoc - broke a few days before this year’s Summit, and it was only the most recent addition to the family. In the past 12 months, BlackBerry has been busy spending $150M on inorganic growth, picking up:
Chen has also overseen an additional $100M expenditure in the same timeframe on organic security expansion (over and above baseline product development). Amongst other things BlackBerry has:
The Growth Explained - Secure Mobile Communications
Executives from different business units and different technology horizontals all organised their presentations around what is now a comprehensive security product and services matrix. It looks like this (before adding AtHoc):
BlackBerry is striving to lead in Secure Mobile Communications. In that context, the highlights of the Security Summit for me were as follows.
The Internet of Things
BlackBerry’s special play is in the Internet of Things. It’s the consistent theme that runs through all their security investments, because as COO Marty Beard says, IoT involves a lot more than machine-to-machine communications. It’s more about how to extract meaningful data from unbelievable numbers of devices, with security and privacy. That is, IoT for BlackBerry is really a security-as-a-service play.
Chief Security Officer David Kleidermacher repeatedly stressed the looming challenge of “how to patch and upgrade devices at scale”.
- MyPOV: Functional upgrades for smart devices will of course be part and parcel of IoT, but at the same time, we need to work much harder to significantly reduce the need for reactive security patches. I foresee an angry consumer revolt if things that never were computers start to behave and fail like computers. A radically higher standard of quality and reliability is required. Just look at the Jeep Uconnect debacle, where it appears Chrysler eventually thought better of foisting a patch on car owners and instead opted for a much more expensive vehicle recall. It was BlackBerry’s commitment to ultra high reliability software that really caught my attention at the 2014 Security Summit, and it convinces me they grasp what’s going to be required to make ubiquitous computing properly seamless.
Refreshingly, COO Beard preferred to talk about economic value of the IoT, rather than the bazillions of devices we are all getting a little jaded about. He said the IoT would bring about $4 trillion of required technology within a decade, and that the global economic impact could be $11 trillion.
BlackBerry’s real time operating system QNX is in 50 million cars today.
AtHoc is a secure crisis communications service, with its roots in the first responder environment. It’s used by three million U.S. government workers today, and the company is now pushing into healthcare.
Founder and CEO Guy Miasnik explained that emergency communications involves more than just outbound alerts to people dealing with disasters. Critical to crisis management is the secure inbound collection of information from remote users. And AtHoc is not just about data transmission (as important as that is); it also works at the application layer, enabling sophisticated workflow management. This allows procedures to be defined for certain events, for example, guiding sets of users and devices through expected responses and escalating issues if things don't get done as expected.
We heard more about BlackBerry’s collaboration with Oxford University on the Centre for High Assurance Computing Excellence, first announced in April at the RSA Conference. CHACE is concerned with a range of fundamental topics, including formal methods for verifying program correctness (an objective that resonates with BlackBerry’s secure operating system division QNX) and new security certification methodologies, with technical approaches based on the Common Criteria of ISO 15408 but with more agile administration to reduce that standard’s overhead and infamous rigidity.
CSO Kleidermacher announced that CHACE will work with the Diabetes Technology Society on a new healthcare security standards initiative. The need for improved medical device security was brought home vividly by an enthralling live demonstration of hacking a hospital drug infusion pump. These vulnerabilities have been exposed before at hacker conferences but BlackBerry’s demo was especially clear and informative, and crafted for a non-technical executive audience.
- MyPOV: The message needs to be broadcast loud and clear: there are life-critical machines in widespread use, built on commercial computing platforms, without any careful thought for security. It’s a shameful and intolerable situation.
I was impressed by BlackBerry’s privacy line. It's broader and more sophisticated than that of most security companies, going way beyond the obvious matters of encryption and VPNs. In particular, the firm champions identity plurality. For instance, WorkLife by BlackBerry, powered by Movirtu technology, realizes multiple identities on a single phone. BlackBerry is promoting this capability especially in the health sector, where there is rarely a clean separation of work and life for professionals. Chen said he wants to “separate work and private life”.
The health sector in general is one of the company’s two biggest business development priorities (the other being automotive). In addition to sophisticated telephony like virtual SIMs, they plan to extend AtHoc into healthcare messaging, and have tasked the CHACE think-tank with medical device security. These actions complement BlackBerry’s fine words about privacy.
So BlackBerry’s acquisition plan has gelled. It now has perhaps the best secure real time OS for smart devices, a hardened device-independent Mobile Device Management backbone, new data-centric privacy and rights management technology, remote certificate management, and multi-layered emergency communications services that can be diffused into mission-critical rules-based e-health settings and, eventually, automated M2M messaging. It’s a powerful portfolio that makes strong sense in the Internet of Things.
BlackBerry says IoT is 'much more than device-to-device'. It’s more important to be able to manage secure data emitted by ubiquitous devices in enormous volumes, and to service those things – and their users – seamlessly. For BlackBerry, the Internet of Things is really all about the service.
I have just updated my periodic series of research reports on the FIDO Alliance. The fourth report, "FIDO Alliance Update: On Track to a Standard" is available at Constellation Research (for free for a time).
The Identity Management industry leader publishes its protocol specifications at v1.0, launches a certification program, and attracts support in Microsoft Windows 10.
The FIDO Alliance is the fastest-growing Identity Management (IdM) consortium we have seen. Comprising technology vendors, solutions providers, consumer device companies, and e-commerce services, the FIDO Alliance is working on protocols and standards to strongly authenticate users and personal devices online. With a fresh focus and discipline in this traditionally complicated field, FIDO envisages simply “doing for authentication what Ethernet did for networking”.
Launched in early 2013, the FIDO Alliance has now grown to over 180 members. Included are technology heavyweights like Google, Lenovo and Microsoft; almost every SIM and smartcard supplier; payments giants Discover, MasterCard, PayPal and Visa; several banks; and e-commerce players like Alibaba and Netflix.
FIDO is radically different from any IdM consortium to date. We all know how important it is to fix passwords: They’re hard to use, inherently insecure, and lie at the heart of most breaches. The Federated Identity movement seeks to reduce the number of passwords by sharing credentials, but this invariably confounds the relationships we have with services and complicates liability when more parties rely on fewer identities.
In contrast, FIDO’s mission is refreshingly clear: Take the smartphones and devices most of us are intimately connected to, and use the built-in cryptography to authenticate users to services. A registered FIDO-compliant device, when activated by its user, can send verified details about the device and the user to service providers, via standardized protocols. FIDO leverages the ubiquity of sophisticated handsets and the tidal wave of smart things. The Alliance focuses on device level protocols without venturing to change the way user accounts are managed or shared.
The centerpieces of FIDO’s technical work are two protocols, called UAF and U2F, for exchanging verified authentication signals between devices and services. Several commercial applications have already been released under the UAF and U2F specifications, including fingerprint-based payments apps from Alibaba and PayPal, and Google’s Security Key from Yubico. After a rigorous review process, both protocols are published now at version 1.0, and the FIDO Certified Testing program was launched in April 2015. And Microsoft announced that FIDO support would be built into Windows 10.
With its focus, pragmatism and membership breadth, FIDO is today’s go-to authentication standards effort. In this report, I look at what the FIDO Alliance has to offer vendors and end user communities, and its critical success factors.
Few technologies are so fundamental and yet so derided at the same time as public key infrastructure. PKI is widely thought of as obsolete or generically intrusive, yet it is ubiquitous in SIM cards, SSL, chip and PIN cards, and cable TV. Technically, public key infrastructure is a generic term for a management system for keys and certificates; there have always been endless ways to build PKIs (note the plural) for different communities, technologies, industries and outcomes. And yet “PKI” has all too often come to mean just one way of doing identity management. In fact, PKI doesn’t necessarily have anything to do with identity at all.
This blog is an edited version of a feature I once wrote for SC Magazine. It is timely in the present day to re-visit the principles that make for good PKI implementations and contextualise them in one of the most contemporary instances of PKI: the FIDO Alliance protocols for secure attribute management. In my view, FIDO realises PKI ‘as nature intended’.
In their earliest conceptions in the early-to-mid 1990s, digital certificates were proposed to authenticate nondescript transactions between parties who had never met. Certificates were construed as the sole means for people to authenticate one another. Most traditional PKI was formulated with no other context; the digital certificate was envisaged to be your all-purpose digital identity.
Orthodox PKI has come in for spirited criticism. From the early noughties, many commentators pointed to a stark paradox: online transaction volumes and values were increasing rapidly, in almost all cases without the help of overt PKI. Once thought to be essential, with its promise of "non-repudiation", PKI seemed anything but, even for significant financial transactions.
There were many practical problems in “big” centralised PKI models. The traditional proof of identity for general purpose certificates was intrusive; the legal agreements were complex and novel; and private key management was difficult for lay people. So the one-size-fits-all electronic passport failed to take off. But PKI's critics sometimes throw the baby out with the bathwater.
In the absence of any specific context for its application, “big” PKI emphasized proof of personal identity. Early certificate registration schemes co-opted identification benchmarks like that of the passport. Yet hardly any regular business transactions require parties to personally identify one another to passport standards.
“Electronic business cards”
Instead in business we deal with others routinely on the basis of their affiliations, agency relationships, professional credentials and so on. The requirement for orthodox PKI users to submit to strenuous personal identity checks over and above their established business credentials was a major obstacle in the adoption of digital certificates.
It turns out that the 'killer applications' for PKI overwhelmingly involve transactions with narrow contexts, predicated on specific credentials. The parties might not know each other personally, but invariably they recognize and anticipate each other's qualifications, as befitting their business relationship.
Successful PKI came to be characterized by closed communities of interest, prior out-of-band registration of members, and in many cases, special-purpose application software featuring additional layers of context, security and access controls.
So digital certificates are much more useful when implemented as application-specific 'electronic business cards,' than as one-size-fits-all electronic passports. And, by taking account of the special conditions that apply to different e-business processes, we have the opportunity to greatly simplify the registration processes, user experience and liability arrangements that go with PKI.
The real benefits of digital signatures
There is a range of potential advantages in using PKI, including its cryptographic strength and resistance to identity theft (when implemented with private keys in hardware). Many of its benefits are shared with other technologies, but at least two are unique to PKI.
First, digital signatures provide robust evidence of the origin and integrity of electronic transactions, persistent over time and over 'distance' (that is, the separation of sender and receiver). This greatly simplifies audit logging, evidence collection and dispute resolution, and cuts the future cost of investigation and fraud. If a digitally signed document is archived and checked at a later date, the quality of the signature remains undiminished over many years, even if the public key certificate has long since expired. And if a digitally signed message is passed from one relying party to another and on to many more, passing through all manner of intermediate systems, everyone still receives an identical, verifiable signature code authenticating the original message.
Electronic evidence of the origin and integrity of a message can, of course, be provided by means other than a digital signature. For example, the authenticity of typical e-business transactions can usually be demonstrated after the fact via audit logs, which indicate how a given message was created and how it moved from one machine to another. However, the quality of audit logs is highly variable and it is costly to produce legally robust evidence from them. Audit logs are not always properly archived from every machine, they do not always directly evince data integrity, and they are not always readily available months or years after the event. They are rarely secure in themselves, and they usually need specialists to interpret and verify them. Digital signatures on the other hand make it vastly simpler to rewind transactions when required.
Secondly, digital signatures and certificates are machine readable, allowing the credentials or affiliations of the sender to be bound to the message and verified automatically on receipt, enabling totally paperless transacting. This is an important but often overlooked benefit of digital signatures. When processing a digital certificate chain, relying party software can automatically tell that (see the sketch after this list):
- the message has not been altered since it was originally created
- the sender was authorized to launch the transaction, by virtue of credentials or other properties endorsed by a recognized Certificate Authority
- the sender's credentials were valid at the time they sent the message; and
- the authority which signed the certificate was fit to do so.
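To make those checks concrete, here is a minimal sketch of the kind of automated verification a relying party performs, written in Python with the widely used cryptography library. It assumes ECDSA keys and a single issuing CA whose certificate is already configured as a trust anchor, and it omits revocation checking and multi-level chains; it is an illustration, not production validation code.

```python
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def message_checks_out(message: bytes, signature: bytes,
                       sender_cert: x509.Certificate,
                       trusted_ca_cert: x509.Certificate,
                       signing_time: datetime.datetime) -> bool:
    """Automated relying-party checks on a signed message and its certificate chain."""
    try:
        # the message has not been altered since it was created
        sender_cert.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))

        # the sender's credentials (carried in the certificate) were valid at signing time
        if not (sender_cert.not_valid_before <= signing_time <= sender_cert.not_valid_after):
            return False

        # the certificate -- and hence the sender's authority to transact -- was endorsed
        # by the recognised Certificate Authority we already trust
        trusted_ca_cert.public_key().verify(
            sender_cert.signature,
            sender_cert.tbs_certificate_bytes,
            ec.ECDSA(sender_cert.signature_hash_algorithm),
        )
        return True
    except InvalidSignature:
        return False
```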
One reason we can forget about the importance of machine readability is that person-to-person email has come to be seen as the archetypal PKI application, thanks to email being the classic example used to illustrate PKI in action. There is an implicit suggestion in most PKI marketing and training that, in regular use, we should manually click on a digital signature icon, examine the certificate, check which CA issued it, read the policy qualifier, and so on. Yet the overwhelming experience of PKI in practice is that it suits special purpose and highly automated applications, where the usual receiver of signed transactions is in fact a computer.
Characterising good applications
Reviewing the basic benefits of digital signatures allows us to characterize the types of e-business applications that merit investment in PKI.
Applications for which digital signatures are a good fit tend to have reasonably high transaction volumes, fully automatic or straight-through processing, and multiple recipients or multiple intermediaries between sender and receiver. In addition, there may be significant risk of dispute or legal ramifications, necessitating high quality evidence to be retained over long periods of time. These include:
- Tax returns
- Customs reporting
- E-health care
- Financial trading
- Electronic conveyancing
- Superannuation administration
- Patent applications.
This view of the technology helps to explain why many first-generation applications of PKI were problematic. Retail internet banking is a well-known example of e-business which flourished without the need for digital certificates. A few banks did try to implement certificates, but generally found them difficult to use. Most later reverted to more conventional access control and backend security mechanisms. Yet with hindsight, retail funds transfer transactions did not have an urgent need for PKI, since they could make use of existing backend payment systems. Funds transfer is characterized by tightly closed arrangements, a single relying party, built-in limits on the size of each transaction, and near real-time settlement. A threat and risk assessment would show that access to internet banking can rest on simple password authentication, in exactly the same way as antecedent phone banking schemes.
Trading complexity for specificity
As discussed, orthodox PKI was formulated with the tacit assumption that there is no specific context for the transaction, so the digital certificate is the sole means for authenticating the sender. Consequently, the traditional schemes emphasized high standards of personal identity, exhaustive contracts and unusual legal devices like Relying Party Agreements. They also often resorted to arbitrary 'reliance limits,' which have little meaning for most of the applications listed above. Notoriously, traditional PKI requires users to read and understand certification practice statements (CPS).
All that overhead stemmed from not knowing what the general-purpose digital certificate was going to be used for. On the other hand, if particular digital certificates are constrained to defined applications, then the complexity surrounding their specific usage can be radically reduced.
The role of PKI in all contemporary 'killer applications' is fundamentally to help automate the online processing of electronic transactions between parties with well-defined credentials. This is in stark contrast to the way PKI has historically been portrayed, where strangers Alice and Bob use their digital certificates to authenticate context-free general messages, often presumed to be sent by email. In reality, serious business messages are never sent stranger-to-stranger with no context or cues as to the parties' legitimacy.
Using generic email is like sending a fax on plain paper. Instead, business messaging is usually highly structured. Parties have an expectation that only certain types of transactions are going to occur between them and they equip themselves accordingly (for instance, a health insurance office is not set up to handle tax returns). The sender is authorized to act in defined types of transactions by virtue of professional credentials, a relevant license, an affiliation with some authority, endorsement by their employer, and so on. And the receiver recognizes the source of those credentials. The sender and receiver typically use prescribed forms and/or special purpose application software with associated user agreements and license conditions, adding context and additional layers of security around the transaction.
PKI got smart
When PKI is used to help automate the online processing of transactions between parties in the context of an existing business relationship, we should expect the legal arrangements between the parties to still apply. For business applications where digital certificates are used to identify users in specific contexts, the question of legal liability should be vastly simpler than it is in the general purpose PKI scenario where the issuer does not know what the certificates might be used for.
The new vision for PKI means the technology and processes should be no more of a burden on the user than a bank card. Rather than imagine that all public key certificates are like general purpose electronic passports, we can deploy multiple, special purpose certificates, and treat them more like electronic business cards. A public key certificate issued on behalf of a community of business users and constrained to that community can thereby stand for any type of professional credential or affiliation.
We can now automate and embed the complex cryptography deeply into smart devices -- smartcards, smart phones, USB keys and so on -- so that all terms and conditions for use are application focused. As far as users are concerned, a smartcard can be deployed in exactly the same way as any magnetic stripe card, without any need to refer to - or be limited by - the complex technology contained within (see also Simpler PKI is on the cards). Any application-specific smartcard can be issued under rules and controls that are fit for their purpose, as determined by the community of users or an appropriate recognized authority. There is no need for any user to read a CPS. Communities can determine their own evidence-of-identity requirements for issuing cards, instead of externally imposed personal identity checks. Deregulating membership rules dramatically cuts the overheads traditionally associated with certificate registration.
Finally, if we constrain the use of certificates to particular applications then we can factor the intended usage into PKI accreditation processes. Accreditation could then allow for particular PKI scheme rules to govern liability. By 'black-boxing' each community's rules and arrangements, and empowering the community to implement processes that are fit for its purpose, the legal aspects of accreditation can be simplified, reducing one of the more significant cost components of the whole PKI exercise (having said that, it never ceases to amaze how many contemporary healthcare PKIs still cling onto face-to-face passport grade ID proofing as if that's the only way to do digital certificates).
The preceding piece is a lightly edited version of the article “Rethinking PKI” that first appeared in Secure Computing Magazine in 2003. Now, over a decade later, we’re seeing the same principles realised by the FIDO Alliance.
The FIDO protocols U2F and UAF enable specific attributes of a user and their smart devices to be transmitted to a server. Inherent to the FIDO methods are digital certificates that confer attributes and not identity, relatively large numbers of private keys stored locally in the users’ devices (and without the users needing to be aware of them as such) and digital signatures automatically applied to protocol messages to bind the relevant attributes to the authentication exchanges.
Surely, this is how PKI should have been deployed all along.
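To make that pattern tangible, here is a conceptual sketch (in Python, using the cryptography library) of the FIDO-style arrangement: one locally held key pair per relying party, with each authentication bound to a server-issued challenge by a digital signature. This illustrates the principle only; the names are made up and it is not the actual UAF or U2F wire protocol.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

class FidoStyleAuthenticator:
    """Illustrative only: one private key per relying party, kept on the device."""

    def __init__(self):
        self._keys = {}

    def register(self, relying_party: str):
        # a fresh key pair per service; only the public key ever leaves the device
        private_key = ec.generate_private_key(ec.SECP256R1())
        self._keys[relying_party] = private_key
        return private_key.public_key()

    def authenticate(self, relying_party: str, challenge: bytes) -> bytes:
        # on a real authenticator, a local user gesture (PIN, fingerprint) gates this step
        return self._keys[relying_party].sign(challenge, ec.ECDSA(hashes.SHA256()))

# relying-party view of registration and login
device = FidoStyleAuthenticator()
stored_public_key = device.register("https://bank.example")   # kept by the server

challenge = os.urandom(32)                                      # fresh per login attempt
assertion = device.authenticate("https://bank.example", challenge)

try:
    stored_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
    print("authentication signal verified")
except InvalidSignature:
    print("assertion rejected")
```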
The problem of identity takeover
The root cause of much identity theft and fraud today is the sad fact that customer reference numbers, personal identifiers and attributes generally are so easy to copy and replay without permission and without detection. Simple numerical attributes like bank account numbers and health IDs can be stolen from many different sources, and replayed with impunity in bogus transactions.
Our personal data nowadays is leaking more or less constantly, through breached databases, websites, online forms, call centres and so on, to such an extent that customer reference numbers on their own are no longer reliable. Privacy consequentially suffers because customers are required to assert their identity through circumstantial evidence, like name and address, birth date, mother’s maiden name and other pseudo secrets. All this data in turn is liable to be stolen and used against us, leading to spiraling identity fraud.
To restore the reliability of personal attribute data, we need to know their pedigree. We need to know that a presented data item is genuine, that it originated from a trusted authority, it’s been stored safely by its owner, and it’s been presented with the owner’s consent. If confidence in single attributes can be restored then we can step back from all the auxiliary proof-of-identity needed for routine transactions, and thus curb identity theft.
A practical response to ID theft
Several recent breaches of government registers leave citizens vulnerable to ID theft. In Korea, the national identity card system was attacked, and it seems that all Korean citizens' IDs will have to be re-issued. In the US, Social Security Numbers are often stolen and used in fraudulent identifications; recently, SSNs of 800,000 Post Office employees appear to have been stolen along with other personal records.
Update 14 June 2015: Last week brought news of a far worse breach of US SSNs (not to mention deep personal records) of four million federal US government employees, when the Office of Personnel Management was hacked.
We could protect people against having their stolen identifiers used behind their backs. It shouldn't actually be necessary to re-issue every Korean's ID, nor should we need to worry that US SSNs aren't usually replaceable. And great improvements may be made to the reliability of identification data presented online without dramatically changing Relying Parties' back-end processes. If, for instance, a service provider has always used SSN as part of its identification regime, it could continue to do so, if only the actual Social Security Numbers being received were reliable!
The trick is to be able to tell "original" ID numbers from "copies". But what does "original" even mean in the digital world? A more precise term for what we really want is pedigree. What we need is to be able to present attribute data in such a way that the receiver may be sure of their pedigree; that is, know that the attributes were originally issued by an authoritative body, that the data has been kept safe, and that each presentation of the attribute has occurred under the owner's control.
These objectives can be met with the help of smart cryptographic technologies which today are built into most smart phones and smartcards, and which are finally being properly exploited by initiatives like the FIDO Alliance.
"Notarising" attributes in chip devices
There are ways of issuing attributes to a smart chip device that prevent them from being stolen, copied and claimed by anyone else. One way to do so is to encapsulate and notarise attributes in a unique digital certificate issued to a chip. Today, a great many personal devices routinely embody cryptographically suitable chips for this purpose, including smart phones, SIM cards, "Secure Elements", smartcards and many wearable computers.
Consider an individual named Smith to whom Organisation A has issued a unique attribute N (which could be as simple as a customer reference number). If N is saved in ordinary computer memory or something like a magnetic stripe card, then it has no pedigree. Once the number N is presented by the cardholder in a transaction, it has the same properties as any other number. To better safeguard N in a chip device, it can be sealed into a digital certificate, as follows:
1. generate a fresh private-public key pair inside Smith’s chip
2. export the public key
3. create a digital certificate around the public key, with an attribute corresponding to N
4. have the certificate signed by (or on behalf of) organisation A.
The result of coordinating these processes and technologies is a logical triangle that inextricably binds cardholder Smith to their attribute N and to a specific personally controlled device. The certificate signed by organisation A attests to both Smith’s entitlement to N and Smith's control of a particular key unique to the device. Keys generated inside the chip are retained internally, never divulged to outsiders. It is not possible to copy the private key to another device, so the logical triangle cannot be reproduced or counterfeited.
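As an illustration of steps 1-4, here is a sketch in Python using the cryptography library. In a real deployment the key pair is generated inside Smith's chip and the private key never leaves it; here everything is simulated in software, and the attribute value and names are invented for the example.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. generate a fresh private-public key pair (on a real device, inside Smith's chip)
device_private_key = ec.generate_private_key(ec.SECP256R1())

# 2. export only the public key
device_public_key = device_private_key.public_key()

# 3. build a certificate around the public key, carrying the attribute N
#    (here N rides in the subject; a real scheme might use a dedicated extension)
subject = x509.Name([
    x509.NameAttribute(NameOID.COMMON_NAME, u"Smith"),
    x509.NameAttribute(NameOID.SERIAL_NUMBER, u"N-12345678"),   # the attribute N (made-up value)
])
issuer = x509.Name([x509.NameAttribute(NameOID.ORGANIZATION_NAME, u"Organisation A")])

# Organisation A's signing key (in reality held by A or its certificate authority)
org_a_key = ec.generate_private_key(ec.SECP256R1())

# 4. have the certificate signed by (or on behalf of) Organisation A
certificate = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(device_public_key)
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3 * 365))
    .sign(org_a_key, hashes.SHA256())
)
```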
Note that this technique lies at the core of the EMV "Chip-and-PIN" system where the smart payment card digitally signs cardholder and transaction data, rendering it immune to replay, before sending it to the merchant terminal. See also my 2012 paper Calling for a uniform approach to card fraud, offline and on. Now we should generalise notarised personal data and digitally signed transactions beyond Card-Present payments into as much online business as possible.
Restoring privacy and consumer control
When Smith wants to present their attribute N in an electronic transaction, instead of simply copying N out of memory (at which point it would lose its pedigree), Smith’s transaction software digitally signs the transaction using the certificate containing N. With standard security software, any third party can then verify that the transaction originated from a genuine chip holding the unique key certified by A as containing the attribute N.
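Continuing the sketch from the previous section (it assumes the device_private_key, certificate and org_a_key objects created there, and again is an illustration rather than a production protocol), Smith's software signs the transaction with the chip-held key, and the relying party verifies both the certificate's pedigree back to Organisation A and the transaction signature itself:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Smith's side: sign the transaction with the private key held in the chip
# (device_private_key, certificate and org_a_key come from the issuance sketch above)
transaction = b'{"payee": "City Pharmacy", "reference": "prescription 4711"}'
signature = device_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Relying party side: check the pedigree of N, then the transaction itself
def verify_presentation(cert, org_a_public_key, message, sig):
    # the certificate really was issued by Organisation A
    org_a_public_key.verify(cert.signature, cert.tbs_certificate_bytes,
                            ec.ECDSA(cert.signature_hash_algorithm))
    # the transaction was signed by the key certified in that certificate
    cert.public_key().verify(sig, message, ec.ECDSA(hashes.SHA256()))
    # hand back the notarised attribute(s) bound into the certificate subject
    return cert.subject.rfc4514_string()

try:
    attributes = verify_presentation(certificate, org_a_key.public_key(), transaction, signature)
    print("transaction verified; pedigreed attributes:", attributes)
except InvalidSignature:
    print("presentation rejected")
```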
Note that N doesn't have to be a customer number or numeric identifier; it could be any personal data, such as a biometric template, or a package of medical information like an allergy alert, or an interesting isolated (and anonymous) property of the user such as their age.
The capability to manage multiple key pairs and certificates, and to sign transactions with a nominated private key, is increasingly built into smart devices today. By narrowing down what you need to know about someone to a precise attribute or personal data item, we will reduce identity theft and fraud while radically improving privacy. This sort of privacy enhancing technology is the key to a safe Internet of Things, and fortunately it is now widely available.
Addressing ID theft
Perhaps the best thing governments could do immediately is to adopt smartcards and equivalent smart phone apps for holding and presenting such attributes as official ID numbers. The US government has actually come close to such a plan many times; chip-based Social Security Cards and Medicare Cards have been proposed before, without realising their full potential. These devices would best be used as above to hold a citizen's identifiers and present them cryptographically, without vulnerability to ID theft and takeover. We wouldn't have to re-issue compromised SSNs; we would instead switch from manual presentation of these numbers to automatic online presentation, with a chip card or smart phone app conveying the data through digital signatures.
Days 3 and 4 at CIS Monterey.
Andre Durand's Keynote
The main sessions at the Cloud Identity Summit (namely days three and four overall) kicked off with keynotes from Ping Identity chief Andre Durand, New Zealand technology commentator Ben Kepes, and Ping Technical Director Mark Diodati. I'd like to concentrate on Andre's speech for it was truly fresh.
Andre has an infectious enthusiasm for identity, and is a magnificent host to boot. As I recall, his CIS keynote last year in Napa was pretty much a dedication to the industry he loves. Not that there's anything wrong with that. But this year he went a whole lot further, with a rich, deep dive into some things we take for granted: identity tokens and the multitude of security domains that bound our daily lives.
It's famously been said that "identity is the new perimeter" and Andre says that view informs all they do at Ping. It's easy I think to read that slogan to mean security priorities (and careers) are moving from firewalls to IDAM, but the meaning runs deeper. Identity is meaningless without context, and each context has an edge that defines it. Identity is largely about boundaries, and closure.
- MyPOV and as an aside: The move to "open" identities which has powered IDAM for over a decade is subject to natural limits that arise precisely because identities are perimeters. All identities are closed in some way. My identity as an employee means nothing beyond the business activities of my employer; my identity as an American Express Cardholder has no currency at stores that don't accept Amex; my identity as a Qantas OneWorld frequent flyer gets me nowhere at United Airlines (nor very far at American, much to my surprise). We discovered years ago that PKI works well in closed communities like government, pharmaceutical supply chains and the GSM network, but that general purpose identity certificates are hopeless. So we would do well to appreciate that "open" cross-domain identity management is actually a special case and that closed IDAM systems are the general case.
Andre reviewed the amazing zoo of hardware tokens we use from day to day. He gave scores of examples, including driver licenses of course but license plates too; house key, car key, garage opener, office key; the insignias of soldiers and law enforcement officers; airline tickets, luggage tags and boarding passes; the stamps on the arms of nightclub patrons and the increasingly sophisticated bracelets of theme park customers; and tattoos. Especially vivid was Andre's account of how his little girl on arriving at CIS during the set-up was not much concerned with all the potential playthings but was utterly rapt to get her ID badge, for it made her "official".
Tokens indeed have always had talismanic powers.
Then we were given a fly-on-the-wall slide show of how Andre typically starts his day. By 7:30am he has accessed half a dozen token-controlled physical security zones, from his home and garage, through the road system, the car park, the office building, the elevator, the company offices and his own corner office. And he hasn't even logged into cyberspace yet! He left unsaid whether or not all these domains might be "federated".
- MyPOV: Isn't it curious that we never seem to beg for 'Single Sign On' of our physical keys and spaces? I suspect we know instinctively that one-key-fits-all would be ridiculously expensive to retrofit and would require fantastical cooperation between physical property controllers. We only try to federate virtual domains because the most common "keys" - passwords - suck, and because we tend to underestimate the cost of cooperation amongst digital RPs.
Tokens are, as Andre reminded us, on hand when you need them, easy to use, easy to revoke, and hard to steal (at least without being noticed). And they're non-promiscuous in respect of the personal information they disclose about their bearers. It's a wondrous set of properties, which we should perhaps be more conscious of in our work. And tokens can be used off-line.
- MyPOV: The point about tokens working offline is paramount. It's a largely forgotten value. Andre's compelling take on tokens makes for a welcome contrast to the rarely questioned predominance of the cloud. Managing and resolving identity in the cloud complicates architectures, concentrates more of our personal data, and puts privacy at risk (for it's harder to unweave all the traditionally independent tracks of our lives).
In closing, Andre asked a rhetorical question which was probably forming in most attendees' minds: What is the ultimate token? His answer had a nice twist. I thought he'd say it's the mobile device. With so much value now remote, multi-factor cloud access control is crucial; the smart phone is the cloud control du jour and could easily become the paragon of tokens. But no, Andre considers that a group of IDAM standards could be the future "universal token" insofar as they beget interoperability and portability.
He said of the whole IDAM industry "together we are networking identity". That's a lovely sentiment and I would never wish to spoil Andre Durand's distinctive inclusion, but on that point technically he's wrong, for really we are networking attributes! More on that below and in my previous #CISmcc diary notes.
The identity family tree
My own CISmcc talk came at the end of Day 4. I think it was well received; the tweet stream was certainly keen and picked up the points I most wanted to make. Attendance was great, for which I should probably thank Andre Durand, because he staged the Closing Beach Party straight afterwards.
I'll post an annotated copy of my slides shortly. In brief I presented my research on the evolution of digital identity. There are plenty of examples of how identity technologies and identification processes have improved over time, with steadily stronger processes, regulations and authenticators. It's fascinating too how industries adopt authentication features from one another. Internet banking for example took the one-time password fob from late 90's technology companies, and the Australian PKI de facto proof-of-identity rules were inspired by the standard "100 point check" mandated for account origination.
Clearly identity techniques shift continuously. What I want to do is systematise these shifts under a single unifying "phylogeny"; that is, a rigorously worked-out family tree. I once used the metaphor of a family tree in a training course to help people organise their thinking about authentication, but the inter-relationships between techniques were guesswork on my part. Now I'm curious whether there is a real family tree that can explain the profusion of identities we have been working so long on simplifying, often to little avail.
True Darwinian evolution requires there to be replicators that correspond to the heritable traits. Evolution results when the proportions of those replicators in the "gene pool" drift over generations as survival pressures in the environment filter beneficial traits. The definition of Digital Identity as a set of claims or attributes provides a starting point for a Darwinian treatment. I observe that identity attributes are like "Memes" - the inherited units of culture first proposed by biologist Richard Dawkins. In my research I am trying to define sets of available "characters" corresponding to technological, business and regulatory features of our diverse identities, and I'm experimenting with phylogenetic modelling programs to see what patterns emerge in sets of character traits shared by those identities.
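For what it's worth, the mechanics of such an experiment are simple to sketch. The character matrix and clustering below are purely illustrative (they are not the actual research data) and use Python's numpy and scipy merely to show the style of analysis: encode each identity as a vector of binary characters, then let a hierarchical clustering algorithm propose a crude family tree.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# identities described by binary "characters" (all values purely illustrative)
identities = ["employee badge", "credit card", "frequent flyer card", "student ID", "driver licence"]
characters = ["issued after ID check", "carries a photo", "usable outside issuer's domain",
              "carries entitlements", "regulated by government"]
matrix = np.array([
    [1, 1, 0, 1, 0],   # employee badge
    [1, 0, 1, 0, 1],   # credit card
    [0, 0, 1, 1, 0],   # frequent flyer card
    [1, 1, 0, 1, 0],   # student ID
    [1, 1, 1, 0, 1],   # driver licence
])

# pairwise distance over shared characters, then hierarchical clustering as a crude "family tree"
distances = pdist(matrix, metric="hamming")
tree = linkage(distances, method="average")
print(tree)                                        # each row records a merge of two clusters
print(fcluster(tree, t=2, criterion="maxclust"))   # cut the tree into two "families"
```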
So what? A rigorous scientific model for identity evolution would have many benefits. First and foremost it would have explanatory power. I do not believe that as an industry we have a satisfactory explanation for the failure of such apparently good ideas as Information Cards. Nor for promising federation projects like the Australian banking sector's "Trust Centre" and "MAMBO" lifetime portable account number. I reckon we have been "over federating" identity; my hunch is that identities have evolved to fit particular niches in the business ecosystem to such an extent that taking a student ID for instance and using it to log on to a bank is like dropping a saltwater fish into a freshwater tank. A stronger understanding of how attributes are organically interrelated would help us better plan federated identity, and to even do "memetic engineering" of the attributes we really want to re-use between applications and contexts.
If a phylogenetic tree can be revealed, it would confirm the 'secret lives' of attributes and thereby lend more legitimacy to the Attributes Push (which coincidentally some of us first spotted at a previous CIS, in 2013). It would also provide evidence that identification risks in local environments are why identities have come to be the way they are. In turn, we could pay more respect to authentication's idiosyncrasies, instead of trying to pigeonhole them into four rigid Levels of Assurance. At Sunday's NSTIC session, CTO Paul Grassi floated the idea of getting rid of LOAs. That would be a bold move of course; it could be helped along by a fresh focus on attributes. And of course we kept hearing throughout CIS Monterey about the FIDO Alliance with its devotion to authentication through verified device attributes, and its strategy to stay away from the abstract business of identities.
Reflections on CIS 2014
I spoke with many people at CIS about what makes this event so different. There's the wonderful family program of course, and the atmosphere that creates. And there's the paradoxical collegiality. Ping has always done a marvelous job of collaborating in various standards groups, and likewise with its conference, Ping's people work hard to create a professional, non-competitive environment. There are a few notable absentees of course but all the exhibitors and speakers I spoke to - including Ping's direct competitors - endorsed CIS as a safe and important place to participate in the identity community, and to do business.
But as a researcher and analyst, the Cloud Identity Summit is where I think you can see the future. People report hearing about things for the first time at a CIS, only to find those things coming true a year or two later. It's because there are so many influencers here.
Last year one example was the Attributes Push. This year, the emphasis on Attributes has become entirely mainstream. For example, the NSTIC pilot partner ID.me (a start-up business focused on improving veterans' access to online discounts through improved verification of entitlements) talks proudly of its ability to convey attributes and reduce the exposure of identity. And Paul Grassi proposes much more focus on Attributes from 2015.
Another example is the "Authorization Agent" (AZA) proposed for SSO in mobile platforms, which was brand new when Paul Madsen presented it at CIS Napa in 2013. Twelve months on, AZA has broadened into the Native Apps (NAPPS) OpenID Working Group.
Then there are the things that are nearly completely normalised. Take mobile devices. They figured in just about every CISmcc presentation, but were rarely called out. Mobile is simply the way things are now.
The mobile form factor is now taken for granted. And the cryptographic capabilities now standard in most handsets (and increasingly embedded in smart things and wearables) are getting a whole lot of express attention. Hardware crypto was a major theme at CIS. I've already made much of Andre Durand's keynote on tokens, but it was the same throughout the event.
- There was a session on hybrid Physical and Logical Access Control Systems (PACS-LACS) featuring the US Government's PIV-I smartcard standard and the major ongoing R&D on that platform sponsored by DHS.
- Companies like SecureKey are devoted to hardware-based keys, increasingly embedded in "street IDs" like driver licenses, and are working with numerous players deep in the SIM and smartcard supply chains.
- The FIDO Alliance is fundamentally about hardware based identity security measures, leveraging embedded key pairs to attest to the pedigree of authenticator models and the attributes that they transmit on behalf of their verified users. FIDO promises to open up the latent authentication power of many 100s of millions of devices already featuring Secure Elements of one kind or another. FIDO realises PKI the way nature intended all along.
- The good old concept of "What You See Is What You Sign" (WYSIWYS) is making a comeback, with mobile platform players appreciating that users of smartphones need reliable cues in the UX as to the integrity of transaction data served up in their rich operating systems. Clearly some exciting R&D lies ahead.
- In a world of formal standards, we should also acknowledge the informal standards around us - the benchmarks and conventions that represent the 'real way' to do things. Hardware based security is taken increasingly for granted. The FIDO protocols are based on key pairs that people just seem to assume (correctly) will be generated in the compliant devices during registration. And Apple with Touch ID has helped to 'train' end users that biometric templates must never leave the safety of a controlled hardware end point. FIDO of course makes that a hard standard.
In my view, the Cloud Identity Summit is the only not-to-be missed event on the IDAM calendar. So long may it continue. And if CIS is where you go to see the future, what's next?
- Judging by CISmcc, I reckon we're going to see entire sessions next year devoted to Continuous Authentication, in which signals are collected from wearables and the Internet of Things at large, to gain insights into the state of the user at every important juncture.
- With the disciplined separation of abstract identities from concrete attributes, we're going to need a Digital Identity Stack for reference. FIDO's pyramid is on the right track, but it needs some work. I'm not sure the pyramid is the right visualisation; for one thing it evokes Maslow's Hierarchy of Needs, in which the pinnacle corresponds to luxuries, not essentials!
- Momentum will grow around Relationships. Kantara's new Identity Relationship Management (IRM) WG was talked about in the CISmcc corridors. I am not sure we're all using the word in the same way, but it's a great trend, for Digital Identity is only really a means to an end, and it's the relationships that identities support which make them important.
So there's much to look forward to!
See you again next year (I hope) in Monterey!
Second Day Reflections from CIS Monterey.
Follow along on Twitter at #CISmcc (for the Monterey Conference Centre).
The Attributes push
At CIS 2013 in Napa a year ago, several of us sensed a critical shift in focus amongst the identerati - from identity to attributes. OIX launched the Attributes Exchange Network (AXN) architecture, important commentators like Andrew Nash were saying, 'hey, attributes are more interesting than identity', and my own #CISnapa talk went so far as to argue we should forget about identity altogether. There was a change in the air, but still, it was all pretty theoretical.
Twelve months on, and the Attributes push has become entirely practical. If there was a Word Cloud for the NSTIC session, my hunch is that "attributes" would dominate over "identity". Several live NSTIC pilots are all about the Attributes.
ID.me is a new company started by US military veterans, with the aim of improving access for the veterans community to discounted goods and services and other entitlements. Founders Matt Thompson and Blake Hall are not identerati -- they're entirely focused on improving online access for their constituents to a big and growing range of retailers and services, and they offer a choice of credentials for proving veterans' bona fides. It's central to the ID.me model that users reveal as little as possible about their personal identities, while having their veterans' status and entitlements established securely and privately.
Another NSTIC pilot Relying Party is the financial service sector infrastructure provider Broadridge. Adrian Chernoff, VP for Digital Strategy, gave a compelling account of the need to change business models to take maximum advantage of digital identity. Broadridge recently announced a JV with Pitney Bowes called Inlet, which will enable the secure sharing of discrete and validated attributes - like name, address and social security number - in an NSTIC compliant architecture.
Yesterday I said in my #CISmcc diary that I hoped to change my mind about something here, and half way through Day 2, I was delighted it was already happening. I've got a new attitude about NSTIC.
Over the past six months, I had come to fear NSTIC had lost its way. It's hard to judge totally accurately when lurking on the webcast from Sydney (at 4:00am) but the last plenary seemed pedestrian to me. And I'm afraid to say that some NSTIC committees have got a little testy. But today's NSTIC session here was a turning point. Not only are there a number of truly exciting pilots showing real progress, but Jeremy Grant has credible plans for improving accountability and momentum, and the new technology lead Paul Grassi is thinking outside the box and speaking out of school. The whole program seems fresh all over again.
In a packed presentation, Grassi impressed me enormously on a number of points:
- Firstly, he advocates a pragmatic NSTIC-focused extension of the old US government Authentication Guide NIST SP 800-63. Rather than a formal revision, a companion document might be most realistic. Along the way, Grassi really nailed an issue which we identity professionals need to talk about more: language. He said that there are words in 800-63 that are "never used anywhere else in systems development". No wonder, as he says, it's still "hard to implement identity"!
- Incidentally I chatted some more with Andrew Hughes about language; he is passionate about terms, and highlights that our term "Relying Party" is an especially terrible distraction for Service Providers whose reason-for-being has nothing to do with "relying" on anyone!
- Secondly, Paul Grassi wants to "get very aggressive on attributes", including emphasis on practical measurement (since that's really what NIST is all about). I don't think I need to say anything more about that than Bravo!
- And thirdly, Grassi asked "What if we got rid of LOAs?!". This kind of iconoclastic thinking is overdue, and was floated as part of a broad push to revamp the way government's orthodox thinking on Identity Assurance is translated to the business world. Grassi and Grant don't say LOAs can or should be abandoned by government, but they do see that shoving the rounded business concepts of identity into government's square hole has not done anyone much credit.
Just one small part of NSTIC annoyed me today: the persistent idea that federation hubs are inherently simpler than one-to-one authentication. They showed the classic sort of 'before and after' shots, where it seems self-evident that a hub (here the Federal Cloud Credential Exchange FCCX) reduces complexity. The reality is that multilateral brokered arrangements between RPs and IdPs are far more complex than simple bilateral direct contracts. And moreover, the new forms of agreements are novel and untested in real world business. The time, cost and unpredictability of working out these new arrangements is not properly accounted for and has often been fatal to identity federations.
The dog barks and this time the caravan turns around
One of the top talking points at #CISmcc has of course been FIDO. The FIDO Alliance goes from strength to strength; we heard they have over 130 members now (remember it started with four or five less than 18 months ago). On Saturday afternoon there was a packed-out FIDO showcase with six vendors showing real FIDO-ready products. And today there was a three hour deep dive into the two flagship FIDO protocols UAF (which enables better sharing of strong authentication signals such that passwords may be eliminated) and U2F (which standardises and strengthens Two Factor Authentication).
FIDO's marketing messages are improving all the time, thanks to a special focus on strategic marketing which was given its own working group. In particular, the Alliance is steadily clarifying the distinction between identity and authentication, and sticking adamantly to the latter. In other words, FIDO is really all about the attributes. FIDO leaves identity as a problem to be addressed further up the stack, and dedicates itself to strengthening the authentication signal sent from end-point devices to servers.
The protocol tutorials were excellent, going into detail about how "Attestation Certificates" are used to convey the qualities and attributes of authentication hardware (such as device model, biometric modality, security certifications, elapsed time since last user verification etc) thus enabling nice fine-grained policy enforcement on the RP side. To my mind, UAF and U2F show how nature intended PKI to have been used all along!
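As a toy illustration of that relying-party-side policy enforcement, the sketch below checks a few attested device attributes against an RP's rules. The attribute names and policy values are invented for the example and are not drawn from the FIDO metadata specifications.

```python
# attributes conveyed by the authenticator's attestation (values invented for illustration)
attested = {
    "device_model": "ExampleKey 2",
    "user_verification": "fingerprint",
    "certifications": {"FIPS 140-2 L2"},
    "seconds_since_user_verification": 45,
}

# the relying party's own fine-grained policy
policy = {
    "allowed_verification": {"fingerprint", "pin"},
    "required_certifications": {"FIPS 140-2 L2"},
    "max_seconds_since_user_verification": 300,
}

def authenticator_acceptable(attrs: dict, rules: dict) -> bool:
    """Per-transaction policy decision based on attested authenticator attributes."""
    if attrs["user_verification"] not in rules["allowed_verification"]:
        return False
    if not rules["required_certifications"] <= attrs["certifications"]:
        return False
    if attrs["seconds_since_user_verification"] > rules["max_seconds_since_user_verification"]:
        return False
    return True

print(authenticator_acceptable(attested, policy))   # True for this example
```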
Some confusion remains as to why FIDO has two protocols. I heard some quiet calls for UAF and U2F to converge, yet that would seem to put the elegance of U2F at risk. And it's noteworthy that U2F is being taken beyond the original one time password 2FA, with at least one biometric vendor at the showcase claiming to use it instead of the heavier UAF.
Surprising use cases
Finally, today brought more fresh use cases from cohorts of users that we socially privileged identity engineers, for the most part, rarely think about. Another NSTIC pilot partner is AARP, a membership organization providing "information, advocacy and service" to older people, retirees and other special needs groups. AARP's Jim Barnett gave a compelling presentation on the need to extend from the classic "free" business models of Internet services to new, economically sustainable approaches that properly protect personal information. Barnett stressed that "free" has been great and 'we wouldn't be where we are today without it' but it's just not going to work for health records, for example. And identity is central to that.
There's so much more I could report if I had time. But I need to get some sleep before another packed day. All this changing my mind is exhausting.
Cheers again from Monterey.
With apologies to Friedrich Nietzsche: the hero of many a crypto folk tale, Bob, is dead, and we have killed him.
We now know that in PKI, Alice's Relying Party is almost always a machine and not a human being. The idea that two strangers would use PKI to work out whether or not to trust one another was deeply distracting and led to the complexity that in the main stymied early PKI.
All of which might be academic except the utopian idea persists that identity frameworks can and should underpin stranger-to-stranger e-business. With NSTIC for instance I fear we are sleep walking into a repeat of Big PKI, when we could be solving a simpler problem: the robust and bilateral presentation of digital identity data in established contexts, without changing the existing relationships that cover almost all serious transactional business.
The following is an extract from a past paper of mine, "Public Key Superstructure" which was presented to the NIST IDTrust Workshop in 2008. There I examine the shortfalls and pitfalls of using signed email as a digital signature archetype.
E-mail not a killer application for PKI
A total lack of real applications would explain why e-mail became by default the most talked about PKI application. Many PKI vendors to this day continue to illustrate their services and train their users with imaginary scenarios where our heroes Alice and Bob breathlessly exchange signed e-mails. Like the passport metaphor, e-mail seems easily understood, but it manifestly has not turned out to be a ‘killer application’, and worse still, has contributed to a host of misunderstandings.
The story usually goes that Alice has received a secure e-mail from stranger Bob and wishes to work out if he is trustworthy. She double clicks on his digital signature and certificate in order to identify his CA. And now the fun begins. If Alice is not immediately trusting of the CA (presumably by reputation) then she is expected to download the CP and CPS, read them, and satisfy herself that the registration processes and security standards are adequate for her needs.
Does this sort of rigmarole have any parallel in the real world? A simple e-mail with no other context is closely equivalent to a letter or fax sent on plain white paper. Under what circumstances should we take seriously a message sent on plain paper from a stranger, even if we could track down their name?
In truth, the vast majority of serious communications occurs not between strangers but in a rich existing context, where the receiver has already been qualified in some way by the sender as likely being the right party to contact. In e-business, routine transactions are not usually conducted by e-mail but instead use special purpose software or dedicated websites with purpose built content. Thus we see most of the digital signature action in cases such as e-prescriptions, customs broking, trade documentation, company returns, patent filing and electronic conveyancing.
Several important simplifying assumptions flow from the fact that most e-business has a rich context, and these should be heeded when planning PKI:
Emphasise straight-through processing
In spite of the common worked example of Alice and Bob exchanging e-mails, the receiver of most routine transactions – such as payment instructions, tax returns, medical records, import/export declarations, or votes – is not a human but instead is a machine. The notion that a person will examine digital certificates and chase down the CA and its practices is simply false in the vast majority of cases. One of PKI’s great strengths is the way it aids straight-through processing, so it has been a great pity that vendors, through their training and marketing materials, have stressed manual over automatic processing.
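To illustrate the point, here is a minimal sketch of straight-through processing on the receiving side, assuming the Python "cryptography" package and ECDSA signatures; the function name and the flow are illustrative, not any particular system's design.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def process_incoming(payload: bytes, signature: bytes, sender_public_key) -> bool:
    """Verify a signed transaction entirely in software and pass it downstream."""
    try:
        sender_public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False   # rejected automatically; no human ever inspects a certificate
    # ... hand the verified payload to downstream business logic ...
    return True
```

The point of the sketch is that acceptance or rejection follows rules fixed at design time, not a judgment call by a person reading certificate policies.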
Play down Relying Party Agreements
The sender and receiver of digitally signed transactions are hardly ever unrelated. This is in stark contrast to orthodox legal analyses of PKI, which foundered on the supposed lack of contractual privity between Relying Party and CA. For example, the Australian Government’s extensive investigation into legal liability for digital certificates still could not, after 76 pages, reach a firm conclusion about whether a “CA may owe a duty of care to a [Relying Party] who is not known to the CA” [http://www.egov.vic.gov.au/pdfs/publication_utz1508.pdf]. The fact is, this sort of scenario is entirely academic and should never have been given the attention that it was. The idea of a “Relying Party Agreement” to join the RP and the CA in contract is moot in all “closed” e-business settings where PKI is thriving. It is this lesson that needs to be generalised by PKI regulators, not the hypothetical model of “open” PKI where all parties are strangers.
Play down certificate path discovery
The fact that in real life, parties are transacting in the context of some explicit scheme, means that the receiver’s software can predict the type of certificate that will most often be used by senders. For instance, when doctors are using e-prescribing software, there is not going to be a wide choice of certificate options; indeed, the appropriate scheme root keys and certificates for authenticating a whole class of doctors will likely be installed at both the sending and receiving ends, at the same time that the software is. When a doctor writes a prescription, their private key can be programmatically selected by their client and invoked to create a digital signature, according to business rules enshrined in the software design. And when such a transaction is received, the software of the pharmacist (or insurance company, government agency etc.) will similarly ‘know’ by design which classes of certificates are expected to verify the digital signature. All this logic in most transaction systems can be settled at design time, which can greatly simplify the task of certificate path discovery, or eliminate it altogether. In most systems it is straightforward for the sender’s software to attach the whole certificate chain to the digital signature, safe in the knowledge that the receiver’s software will be configured with the necessary trust anchors (i.e. Root CA certificates) with which to parse the chain.
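As a minimal sketch of that receiving end, assuming the Python "cryptography" package, RSA certificates and a scheme root certificate installed at design time (the file name is hypothetical):

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def load_cert(path):
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

# Trust anchor installed alongside the application software, at design time.
scheme_root = load_cert("scheme_root_ca.pem")

def verify_chain(leaf, intermediates, root=scheme_root):
    """Walk leaf -> intermediates -> pinned root, checking each certificate's
    signature against its issuer's public key; raises InvalidSignature on failure."""
    chain = [leaf] + list(intermediates) + [root]
    for cert, issuer in zip(chain, chain[1:]):
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),                  # assumes RSA certificates
            cert.signature_hash_algorithm,
        )
    return True
```

A production verifier would of course also check validity periods, key usage and revocation, but the path itself is fixed by the scheme design rather than discovered at run time.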
How do we make best sense of the bewildering array of authenticators on the market? Most people are familiar with single factor versus two factor, but this simple dichotomy doesn’t help match technologies to applications. The reality is more complex. A family tree like the one sketched here may help navigate the complexity.
Different distinctions define various branch points. The first split is between what I call Transient authentication (i.e. access control) which tells if a user is allowed to get at a resource or not, and Persistent authentication, which lets a user leave a lasting mark (i.e. signature) on what they do, such as binding electronic transactions.
Working our way up the Transient branch, we see that most access controls are based either on shared secrets or biometrics. Dynamic shared secrets change with every session, either in a series of one time passwords or via challenge-response.
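As a simple illustration of the dynamic, challenge-response branch, a shared secret can be exercised with HMAC in a few lines of Python; this is a generic sketch, not any particular vendor's protocol.

```python
import hashlib
import hmac
import os

shared_secret = os.urandom(32)   # provisioned to both parties out of band

# The verifier issues a fresh challenge each session, so responses cannot be replayed.
challenge = os.urandom(16)

# The claimant proves knowledge of the secret without revealing it.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# The verifier recomputes the expected value and compares in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```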
On the biometric branch, we should distinguish those traits that can be left behind inadvertently in the environment and are more readily stolen. The safer biometrics are “clean” and leave no residue. Note that while the voice might be recorded without the speaker’s knowledge, I don't see it as a residual biometric in practice because voice recognition solutions usually use dynamic phrases that resist replay.
For persistent authentication, the only practical option today is PKI and digital signatures, technology which is available in an increasingly wide range of forms. Embedded certificates are commonplace in smartcards, cell phones, and other devices.
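For what it's worth, the same family tree can be written down as a simple nested structure; the example leaves are my own illustrations of each branch.

```python
# A rough encoding of the authentication family tree (examples are illustrative).
authentication_taxonomy = {
    "Transient (access control)": {
        "Shared secrets": {
            "Static": ["passwords", "PINs"],
            "Dynamic": ["one time passwords", "challenge-response"],
        },
        "Biometrics": {
            "Residual (can be left behind)": ["latent fingerprints"],
            "Clean (no residue)": ["voice with dynamic phrases"],
        },
    },
    "Persistent (signatures)": {
        "PKI / digital signatures": [
            "embedded certificates in smartcards, SIMs, USB keys, smartphones",
        ],
    },
}
```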
The foliage in the family tree indicates which technologies I believe will continue to thrive, and which seem more likely to be dead ends.
I'd appreciate feedback. Is this useful? Does anyone know of other taxonomies?
PKI has a reputation for terrible complexity, but it is actually simpler than many mature domestic technologies.
It's interesting to ponder why PKI got to be (or look) so complicated. There are at least two reasons. First, the word is frequently taken to mean the original overblown "Big PKI" general purpose identification schemes, with their generic and context-free passport grade ID checks, horrid user agreements and fine print. Yet there are alternative ways to deploy public key technology in closed authentication schemes, and indeed that is where it thrives; see http://lockstep.com.au/library/pki. Second, there is all that gory technical detail foisted on lay people in the infamous "PKI 101" sessions. Typical explanations start with a tutorial on asymmetric cryptography even before they tell you what PKI is for.
I've long wondered what it is about PKI that leads its advocates to train people into the ground. Forty-odd years ago when introducing the newfangled mag stripe banking card, I bet the sales reps didn't feel the need to explain electromagnetism and ferric oxide chemistry.
This line of thought leads to fresh models for 'domesticating' PKI by embedding it in plastic cards. By re-framing PKI keys and certificates as being means to an end and not ends in themselves, we can also:
- identify dramatically improved supply chains to deliver PKI's benefits
- re-cast the traditionally difficult business model for CAs, and
- demystify how PKI can support a plurality of IDs and apps.
Consider the layered complexity of the conventional plastic card, and the way the layers correspond to steps in the supply chain. At its most basic level, the card is based on solid state physics and Maxwell's Equations for electromagnetism. These govern the properties of ferric oxide crystals, which are manufactured as powders and coated onto bulk tape by chemical companies like BASF and 3M. The tape is applied to blank cards, which are distributed to different schemes for personalisation. Usually the cards are pre-printed in bulk with artwork and physical security features specific to the scheme. In general, personalisation in each scheme starts with user registration. Data is written to the mag stripe according to one of a handful of coding standards which differ a little between banks, airlines and other niches. The card is printed or embossed, and distributed.
The variety of distinct schemes using magnetic stripe cards is almost limitless: bank cards, credit cards, government entitlements, health insurance, clubs, loyalty cards, gift cards, driver licences, employee ID, universities, professional associations etc etc. They all use the same ferromagnetic components delivered through a global supply chain, which at the lower layers is very specialised and delivered by only a handful of companies.
And needless to say, hardly anyone needs to know Maxwell's Equations to make sense of a mag stripe card.
The smartcard supply chain is very similar. The only technical difference is the core technology used to encode the user information. The theoretical foundations are cryptography instead of electromagnetism, and instead of bulk ferric oxide powders and tapes, specialist semiconductor companies fabricate the ICs and preload them with firmware. From that point on, the smartcard and mag stripe card supply chains overlap. In fact the end user in most cases can't tell the difference between the different generations of card technologies.
Smartcards (and their kin: SIMs, USB keys and smartphones) are the natural medium for deploying PKI technology.
Re-framing PKI deployment like this ...
- decouples PK technology from the application and scheme layers, and tames the technical complexity; it shows where to draw the line in "PKI 101" training
- provides a model for transitioning from conventional "id" technology to PKI with minimum disruption of current business processes and supplier arrangements
- shows that it's perfectly natural for PKI to be implemented in closed communities of interest (schemes) and takes us away from the unhelpful orthodox Big PKI model
- suggests new "wholesale" business models for CAs; historically CAs found it difficult to sell certificates directly, but a clearly superior model is to supply certificates into the card initialisation step
- demonstrates how easy to use PKI should be; that is, exactly as easy to use as the mag stripe card.
I once discussed this sort of bulk supply chain model at a conference in Tokyo. Someone in the audience asked me how many CAs I thought were needed worldwide. I said maybe three or four, and was greeted with incredulous laughter. But seriously, if certificates are reduced to digitally signed objects that bind a parcel of cardholder information to a key associated with a chip, why shouldn't certificates be manufactured by a fully automatic CA service on an outsourced managed service basis? It's no different from security printing, another specialised industry with the utmost "trust" requirements but none of the weird mystique that has bedevilled PKI.
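To show just how mechanical such a certificate factory could be, here is a sketch of the issuance step using the Python "cryptography" package: a signed object binding a parcel of cardholder information to a key associated with a chip. The names, the three-year validity period and the software-held CA key (in practice it would live in an HSM) are all purely illustrative.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())     # scheme CA signing key (sketch only)
chip_key = ec.generate_private_key(ec.SECP256R1())   # stands in for the card's own key pair

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Cardholder 12345")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Scheme CA")]))
    .public_key(chip_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3 * 365))
    .sign(ca_key, hashes.SHA256())
)
print(cert.subject)   # the whole step runs with no human in the loop
```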
What do you call it when a metaphor or analogy outgrows the word it is based on, thus co-opting that word to mean something quite new? Metaphors are meant to clarify complex concepts by letting people think of them in simpler terms. But if the detailed meaning is actually different, then the metaphor becomes misleading and dangerous.
I'm thinking of the idea of the electronic passport. Ever since the early days of Big PKI, there's been the beguiling idea of an electronic passport that will let the holder into all manner of online services and enable total strangers to "trust" one another online. Microsoft of course later named its digital identity service "Passport", and the word is still commonplace in discussions of all manner of authentication solutions.
The idea is that the passport allows you to go wherever you like.
Yet there is no such thing.
A real world passport doesn't let you into any old country. It's not always sufficient; you often need a visa. You can't stay as long as you like in a foreign place. Some countries won't let you in at all if you carry the passport of an unfriendly nation. You need to complete a landing card and customs declarations specific to your particular journey. And finally, when you've got to the end of the arrivals queue, you are still at the mercy of an immigration officer who has the discretion to turn you away. As with all business, there is so much more going on here than personal identity.
So in the sense of the meaning important to the electronic passport metaphor, the "real" passport doesn't actually exist!
The simplistic notion of electronic passport is really deeply unhelpful. The dream and promise of general purpose digital certificates is what derailed PKI, for they're unwieldy, involve unprecedented mechanisms for conferring open-ended "trust", and are rarely useful on their own (ironically that's also a property of real passports). Think of the time and money wasted chasing the electronic passport when all along PKI technology was better suited to closed transactions. What matters in most transactions is not personal identity but rather, credentials specific to the business context. There never has been a single general purpose identity credential.
And now with "open" federated identity frameworks, we're sleep-walking into the same intractable problems, all because people have been seduced by a metaphor based on something that doesn't exist.
The well-initiated understand that the Laws of Identity, OIX, NSTIC and the like involve a plurality of identities, and multiple attributes tuned to different contexts. Yet NSTIC in particular is still confused by many with a single new ID, a misunderstanding aided and abetted by NSTIC's promoters using terms like "interoperable" without care, and by casually 'imagining' that a student will in future log in to their bank using their student card.
Words are powerful, and they're also malleable. Some might say I'm being too pedantic in sticking to the traditional reality of the "passport". But no. It would be OK, in my opinion, for "passport" to morph into something more powerful and universal, except that it can't. The real point in all of this is that multiple identities are an inevitable consequence of how identities evolve to suit distinct business contexts, and so the very idea of a digital passport is a bit delusional.