Lockstep


PKI as nature intended

Few technologies are so fundamental and yet so derided at the same time as public key infrastructure. PKI is widely thought of as obsolete or generically intrusive, yet it is ubiquitous in SIM cards, SSL, chip and PIN cards, and cable TV. Technically, public key infrastructure is a generic term for a management system for keys and certificates; there have always been endless ways to build PKIs (note the plural) for different communities, technologies, industries and outcomes. And yet “PKI” has all too often come to mean just one way of doing identity management. In fact, PKI doesn’t necessarily have anything to do with identity at all.

This blog is an edited version of a feature I once wrote for SC Magazine. It is timely to revisit the principles that make for good PKI implementations, and to contextualise them in one of the most contemporary instances of PKI: the FIDO Alliance protocols for secure attribute management. In my view, FIDO realises PKI ‘as nature intended’.

“Re-thinking PKI”

In their earliest conceptions in the early-to-mid 1990s, digital certificates were proposed to authenticate nondescript transactions between parties who had never met. Certificates were construed as the sole means for people to authenticate one another. Most traditional PKI was formulated with no other context; the digital certificate was envisaged to be your all-purpose digital identity.

Orthodox PKI has come in for spirited criticism. From the early noughties, many commentators pointed to a stark paradox: online transaction volumes and values were increasing rapidly, in almost all cases without the help of overt PKI. Once thought to be essential, with its promise of "non-repudiation", PKI seemed anything but, even for significant financial transactions.

There were many practical problems in “big” centralised PKI models. The traditional proof of identity for general purpose certificates was intrusive; the legal agreements were complex and novel; and private key management was difficult for lay people. So the one-size-fits-all electronic passport failed to take off. But PKI's critics sometimes throw the baby out with the bathwater.

In the absence of any specific context for its application, “big” PKI emphasized proof of personal identity. Early certificate registration schemes co-opted identification benchmarks like that of the passport. Yet hardly any regular business transactions require parties to personally identify one another to passport standards.

”Electronic business cards”

Instead in business we deal with others routinely on the basis of their affiliations, agency relationships, professional credentials and so on. The requirement for orthodox PKI users to submit to strenuous personal identity checks over and above their established business credentials was a major obstacle in the adoption of digital certificates.

It turns out that the 'killer applications' for PKI overwhelmingly involve transactions with narrow contexts, predicated on specific credentials. The parties might not know each other personally, but invariably they recognize and anticipate each other's qualifications, as befitting their business relationship.

Successful PKI came to be characterized by closed communities of interest, prior out-of-band registration of members, and in many cases, special-purpose application software featuring additional layers of context, security and access controls.

So digital certificates are much more useful when implemented as application-specific 'electronic business cards,' than as one-size-fits-all electronic passports. And, by taking account of the special conditions that apply to different e-business processes, we have the opportunity to greatly simplify the registration processes, user experience and liability arrangements that go with PKI.

The real benefits of digital signatures

There is a range of potential advantages in using PKI, including its cryptographic strength and resistance to identity theft (when implemented with private keys in hardware). Many of its benefits are shared with other technologies, but at least two are unique to PKI.

First, digital signatures provide robust evidence of the origin and integrity of electronic transactions, persistent over time and over 'distance’ (that is, the separation of sender and receiver). This greatly simplifies audit logging, evidence collection and dispute resolution, and cuts the future cost of investigation and fraud. If a digitally signed document is archived and checked at a later date, the quality of the signature remains undiminished over many years, even if the public key certificate has long since expired. And if a digitally signed message is passed from one relying party to another and on to many more, passing through all manner of intermediate systems, everyone still receives an identical, verifiable signature code authenticating the original message.

Electronic evidence of the origin and integrity of a message can, of course, be provided by means other than a digital signature. For example, the authenticity of typical e-business transactions can usually be demonstrated after the fact via audit logs, which indicate how a given message was created and how it moved from one machine to another. However, the quality of audit logs is highly variable and it is costly to produce legally robust evidence from them. Audit logs are not always properly archived from every machine, they do not always directly evince data integrity, and they are not always readily available months or years after the event. They are rarely secure in themselves, and they usually need specialists to interpret and verify them. Digital signatures on the other hand make it vastly simpler to rewind transactions when required.

Secondly, digital signatures and certificates are machine readable, allowing the credentials or affiliations of the sender to be bound to the message and verified automatically on receipt, enabling totally paperless transacting. This is an important but often overlooked benefit of digital signatures. When processing a digital certificate chain, relying party software can automatically tell that:

    • the message has not been altered since it was originally created
    • the sender was authorized to launch the transaction, by virtue of credentials or other properties endorsed by a recognized Certificate Authority
    • the sender's credentials were valid at the time they sent the message; and
    • the authority which signed the certificate was fit to do so.
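
These checks are exactly the kind of thing relying-party software automates. Below is a minimal sketch of them in Python, assuming RSA keys and a recent version of the third-party cryptography package (a production relying party would also check revocation status and walk the full chain to a trust anchor; the helper name and its arguments are illustrative only):

```python
from datetime import datetime, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_signed_message(message: bytes, signature: bytes,
                          sender_cert_pem: bytes, ca_cert_pem: bytes) -> None:
    sender_cert = x509.load_pem_x509_certificate(sender_cert_pem)
    ca_cert = x509.load_pem_x509_certificate(ca_cert_pem)

    # 1. The message has not been altered since it was signed.
    sender_cert.public_key().verify(
        signature, message, padding.PKCS1v15(), hashes.SHA256())

    # 2. The sender's certificate was issued (signed) by the recognised CA.
    ca_cert.public_key().verify(
        sender_cert.signature, sender_cert.tbs_certificate_bytes,
        padding.PKCS1v15(), sender_cert.signature_hash_algorithm)

    # 3. The sender's certificate is within its validity period.
    now = datetime.now(timezone.utc)
    if not (sender_cert.not_valid_before_utc <= now <= sender_cert.not_valid_after_utc):
        raise ValueError("certificate outside its validity period")
```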

One reason we forget the importance of machine readability is that person-to-person email has come to be treated as the archetypal PKI application, thanks to its role as the classic worked example of PKI in action. There is an implicit suggestion in most PKI marketing and training that, in regular use, we should manually click on a digital signature icon, examine the certificate, check which CA issued it, read the policy qualifier, and so on. Yet the overwhelming experience of PKI in practice is that it suits special purpose and highly automated applications, where the usual receiver of signed transactions is in fact a computer.

Characterising good applications

Reviewing the basic benefits of digital signatures allows us to characterize the types of e-business applications that merit investment in PKI.

Applications for which digital signatures are a good fit tend to have reasonably high transaction volumes, fully automatic or straight-through processing, and multiple recipients or multiple intermediaries between sender and receiver. In addition, there may be significant risk of dispute or legal ramifications, necessitating high quality evidence to be retained over long periods of time. These include:

    • Tax returns
    • Customs reporting
    • E-health care
    • Financial trading
    • Insurance
    • Electronic conveyancing
    • Superannuation administration
    • Patent applications.

This view of the technology helps to explain why many first-generation applications of PKI were problematic. Retail internet banking is a well-known example of e-business which flourished without the need for digital certificates. A few banks did try to implement certificates, but generally found them difficult to use. Most later reverted to more conventional access control and backend security mechanisms. Yet with hindsight, retail funds transfer transactions did not have an urgent need for PKI, since they could make use of existing backend payment systems. Funds transfer is characterized by tightly closed arrangements, a single relying party, built-in limits on the size of each transaction, and near real-time settlement. A threat and risk assessment would show that access to internet banking can rest on simple password authentication, in exactly the same way as antecedent phone banking schemes.

Trading complexity for specificity

As discussed, orthodox PKI was formulated with the tacit assumption that there is no specific context for the transaction, so the digital certificate is the sole means for authenticating the sender. Consequently, the traditional schemes emphasized high standards of personal identity, exhaustive contracts and unusual legal devices like Relying Party Agreements. They also often resorted to arbitrary 'reliance limits,' which have little meaning for most of the applications listed above. Notoriously, traditional PKI requires users to read and understand certification practice statements (CPS).

All that overhead stemmed from not knowing what the general-purpose digital certificate was going to be used for. On the other hand, if particular digital certificates are constrained to defined applications, then the complexity surrounding their specific usage can be radically reduced.

The role of PKI in all contemporary 'killer applications' is fundamentally to help automate the online processing of electronic transactions between parties with well-defined credentials. This is in stark contrast to the way PKI has historically been portrayed, where strangers Alice and Bob use their digital certificates to authenticate context-free general messages, often presumed to be sent by email. In reality, serious business messages are never sent stranger-to-stranger with no context or cues as to the parties' legitimacy.

Using generic email is like sending a fax on plain paper. Instead, business messaging is usually highly structured. Parties have an expectation that only certain types of transactions are going to occur between them and they equip themselves accordingly (for instance, a health insurance office is not set up to handle tax returns). The sender is authorized to act in defined types of transactions by virtue of professional credentials, a relevant license, an affiliation with some authority, endorsement by their employer, and so on. And the receiver recognizes the source of those credentials. The sender and receiver typically use prescribed forms and/or special purpose application software with associated user agreements and license conditions, adding context and additional layers of security around the transaction.

PKI got smart

When PKI is used to help automate the online processing of transactions between parties in the context of an existing business relationship, we should expect the legal arrangements between the parties to still apply. For business applications where digital certificates are used to identify users in specific contexts, the question of legal liability should be vastly simpler than it is in the general purpose PKI scenario where the issuer does not know what the certificates might be used for.

The new vision for PKI means the technology and processes should be no more of a burden on the user than a bank card. Rather than imagine that all public key certificates are like general purpose electronic passports, we can deploy multiple, special purpose certificates, and treat them more like electronic business cards. A public key certificate issued on behalf of a community of business users and constrained to that community can thereby stand for any type of professional credential or affiliation.

We can now automate and embed the complex cryptography deeply into smart devices -- smartcards, smart phones, USB keys and so on -- so that all terms and conditions for use are application focused. As far as users are concerned, a smartcard can be deployed in exactly the same way as any magnetic stripe card, without any need to refer to - or be limited by - the complex technology contained within (see also Simpler PKI is on the cards). Any application-specific smartcard can be issued under rules and controls that are fit for their purpose, as determined by the community of users or an appropriate recognized authority. There is no need for any user to read a CPS. Communities can determine their own evidence-of-identity requirements for issuing cards, instead of externally imposed personal identity checks. Deregulating membership rules dramatically cuts the overheads traditionally associated with certificate registration.

Finally, if we constrain the use of certificates to particular applications then we can factor the intended usage into PKI accreditation processes. Accreditation could then allow for particular PKI scheme rules to govern liability. By 'black-boxing' each community's rules and arrangements, and empowering the community to implement processes that are fit for its purpose, the legal aspects of accreditation can be simplified, reducing one of the more significant cost components of the whole PKI exercise (having said that, it never ceases to amaze me how many contemporary healthcare PKIs still cling to face-to-face passport grade ID proofing as if that's the only way to do digital certificates).

Fast forward

The preceding piece is a lightly edited version of the article ”Rethinking PKI” that first appeared in Secure Computing Magazine in 2003. Now, over a decade later, we’re seeing the same principles realised by the FIDO Alliance.

The FIDO protocols U2F and UAF enable specific attributes of a user and their smart devices to be transmitted to a server. Inherent to the FIDO methods are digital certificates that confer attributes and not identity, relatively large numbers of private keys stored locally in the users’ devices (and without the users needing to be aware of them as such) and digital signatures automatically applied to protocol messages to bind the relevant attributes to the authentication exchanges.
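
By way of a highly simplified, FIDO-inspired sketch (illustrative only, not the actual UAF or U2F wire formats): at registration the device mints a fresh key pair for that one service and discloses only the public key; at each login it signs a server-issued challenge with the locally held private key, so no password or shared secret ever crosses the wire.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# --- Registration (on the user's device) ---
# A fresh key pair is generated for this one relying party; the private key
# stays inside the device (ideally in a secure element) and is never exported.
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()   # sent to the server once

# --- Authentication ---
challenge = os.urandom(32)                                          # from the server
assertion = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))   # on the device

# The server verifies the assertion with the public key stored at registration;
# verify() raises InvalidSignature if the assertion does not check out.
registered_public_key.verify(assertion, challenge, ec.ECDSA(hashes.SHA256()))
print("authenticated")
```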

Surely, this is how PKI should have been deployed all along.

Posted in Security, PKI, Internet, Identity

Safeguarding the pedigree of identifiers

The problem of identity takeover

The root cause of much identity theft and fraud today is the sad fact that customer reference numbers and personal identifiers are so easy to copy. Simple numerical data like bank account numbers and health IDs can be stolen from many different sources, and replayed in bogus transactions.

Our personal data nowadays is leaking more or less constantly, through breached databases, websites, online forms, call centres and so on, to such an extent that customer reference numbers on their own are no longer reliable. Privacy consequently suffers because customers are required to assert their identity through circumstantial evidence, like name and address, birth date, mother’s maiden name and other pseudo secrets. All this data in turn is liable to be stolen and used against us, leading to spiralling identity fraud.

To restore the reliability of personal identifiers, we need to know their pedigree. We need to know that a presented number is genuine, that it originated from a trusted authority, it’s been stored safely by its owner, and it’s been presented with the owner’s consent.

A practical response to ID theft

Several recent breaches of government registers leave citizens vulnerable to ID theft. In Korea, the national identity card system was attacked, and it seems that all Korean citizens' IDs will have to be re-issued. In the US, Social Security Numbers are often stolen and used in fraudulent identifications; recently, SSNs of 800,000 Post Office employees appear to have been stolen along with other personal records.

We could protect people against having their stolen identifiers used behind their backs. It shouldn't be necessary to re-issue every Korean's ID. And changes could be made to improve the reliability of identification data without dramatically changing the backend processes. That is, if a Relying Party has always used the SSN, for instance, as part of its identification regime, it could continue to do so, if only the actual Social Security Numbers being received were reliable!

The trick is to be able to tell "original" ID numbers from "copies". But what does "original" even mean in the digital world? A more precise term for what we really want is pedigree. What we need is to be able to present numerical data in such a way that the receiver may be sure of its pedigree; that is, know that the data were originally issued by an authoritative body, that they have been kept safe, and that each presentation of the data has occurred under the owner's control.

These objectives can be met with the help of smart cryptographic technologies which today are built into most smart phones and smartcards, and which are finally being properly exploited by initiatives like the FIDO Alliance.

"Notarising" personal data in chip devices

There are ways of issuing personal data to a smart chip device that prevent those data from being stolen, copied and claimed by anyone else. One way to do so is to encapsulate and notarise personal data in a unique digital certificate issued to a chip. Today, a great many personal devices routinely embody cryptographically suitable chips for this purpose, including smart phones, SIM cards, “Secure Elements”, smartcards and many wearable computers.

Consider an individual named Smith to whom Organisation A has issued a unique customer reference number N. If N is saved in ordinary computer memory or something like a magnetic stripe card, then it has no pedigree. Once the number N is presented by the cardholder in a transaction, it looks like any other number. To better safeguard N in a chip device, it can be sealed into a digital certificate, as follows:

1. generate a fresh private-public key pair inside Smith’s chip
2. export the public key
3. create a digital certificate around the public key, with an attribute corresponding to N
4. have the certificate signed by (or on behalf of) organisation A.
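
A sketch of these four steps using the Python cryptography package follows. The key would really be generated inside the chip rather than in software, and carrying N in the subject serialNumber field, the organisation name and the customer number are all illustrative assumptions; a real scheme would define its own certificate profile.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Step 1: generate a fresh key pair (in practice, inside Smith's chip, so the
# private key never leaves the device; done in software here for illustration).
device_private_key = ec.generate_private_key(ec.SECP256R1())

# Step 2: export only the public key.
device_public_key = device_private_key.public_key()

# Step 3: build a certificate around the public key with an attribute for N.
N = "CUST-000123"  # hypothetical customer reference number
subject = x509.Name([x509.NameAttribute(NameOID.SERIAL_NUMBER, N)])
issuer = x509.Name([x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Organisation A")])

# Organisation A's signing key (hypothetical; in reality held by A's CA).
org_a_key = ec.generate_private_key(ec.SECP256R1())

# Step 4: the certificate is signed by (or on behalf of) Organisation A.
certificate = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(device_public_key)
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=5 * 365))
    .sign(org_a_key, hashes.SHA256())
)
```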

[Pedigree diagram]

The result of coordinating these processes and technologies is a logical triangle that inextricably binds cardholder Smith to their reference number N and to a specific personally controlled device. The certificate signed by organisation A attests to Smith’s ownership of both N and a particular key unique to the device. Keys generated inside the chip are retained internally, never divulged to outsiders. It is impossible to copy the private key to another device, so the triangle cannot be cloned, reproduced or counterfeited.

Note that this technique lies at the core of the EMV "Chip-and-PIN" system where the smart payment card digitally signs cardholder and transaction data, rendering it immune to replay, before sending it to the merchant terminal. See also my 2012 paper Calling for a uniform approach to card fraud, offline and on. Now we should generalise notarised personal data and digitally signed transactions beyond Card-Present payments into as much online business as possible.

Restoring privacy and consumer control

When Smith wants to present their personal number in an electronic transaction, instead of simply copying N out of memory (at which point it would lose its pedigree), Smith’s transaction software digitally signs the transaction using the certificate containing N. With standard security software, any third party can then verify that the transaction originated from a genuine chip holding the unique key certified by A as matching the number N.
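
Continuing the sketch above (reusing the hypothetical device_private_key, certificate and org_a_key), presentation and third-party verification might look roughly like this:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Smith presents N by signing the transaction with the chip-held private key,
# rather than copying N out of memory.
transaction = b'{"payee": "Example Pharmacy", "customer_ref": "CUST-000123", "amount": 42}'
tx_signature = device_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# Any third party holding Organisation A's public key can then check:
# (a) the certificate genuinely came from A and binds N to this device's key...
org_a_key.public_key().verify(
    certificate.signature,
    certificate.tbs_certificate_bytes,
    ec.ECDSA(certificate.signature_hash_algorithm),
)
# (b) ...and the transaction was signed by that key, so N retains its pedigree.
certificate.public_key().verify(tx_signature, transaction, ec.ECDSA(hashes.SHA256()))
```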

Note that N doesn’t have to be a customer number or numeric identifier; it could be any personal data, such as a biometric template or a package of medical information like an allergy alert.

The capability to manage multiple key pairs and certificates, and to sign transactions with a nominated private key, is increasingly built into smart devices today. By narrowing down what you need to know about someone to a precise customer reference number or similar personal data item, we will reduce identity theft and fraud while radically improving privacy. This sort of privacy enhancing technology is the key to a safe Internet of Things, and fortunately it is now widely available.

Addressing ID theft

Perhaps the best thing governments could do immediately is to adopt smartcards and equivalent smart phone apps for holding and presenting ID numbers. The US government has actually come close to such a plan many times. Chip-based Social Security Cards and Medicare Cards have been proposed before, without realising their full potential. These devices would best be used as described above to hold a citizen's identifiers and present them cryptographically, without vulnerability to ID theft and takeover. We wouldn't have to re-issue compromised SSNs; we would instead switch from manual presentation of these numbers to automatic online presentation, with a chip card or smart phone app conveying the data through digital signatures.

Posted in Smartcards, Security, PKI, Payments, Identity, Fraud, Biometrics

Postcard from Monterey 3 #CISmcc

Days 3 and 4 at CIS Monterey.

Andre Durand's Keynote

The main sessions at the Cloud Identity Summit (namely days three and four overall) kicked off with keynotes from Ping Identity chief Andre Durand, New Zealand technology commentator Ben Kepes, and Ping Technical Director Mark Diodati. I'd like to concentrate on Andre's speech for it was truly fresh.

Andre has an infectious enthusiasm for identity, and is a magnificent host to boot. As I recall, his CIS keynote last year in Napa was essentially a dedication to the industry he loves. Not that there's anything wrong with that. But this year he went a whole lot further, with a rich deep dive into some things we take for granted: identity tokens and the multitude of security domains that bound our daily lives.

It's famously been said that "identity is the new perimeter" and Andre says that view informs all they do at Ping. It's easy I think to read that slogan to mean security priorities (and careers) are moving from firewalls to IDAM, but the meaning runs deeper. Identity is meaningless without context, and each context has an edge that defines it. Identity is largely about boundaries, and closure.

  • MyPOV and as an aside: The move to "open" identities which has powered IDAM for over a decade is subject to natural limits that arise precisely because identities are perimeters. All identities are closed in some way. My identity as an employee means nothing beyond the business activities of my employer; my identity as an American Express Cardholder has no currency at stores that don't accept Amex; my identity as a Qantas OneWorld frequent flyer gets me nowhere at United Airlines (nor very far at American, much to my surprise). We discovered years ago that PKI works well in closed communities like government, pharmaceutical supply chains and the GSM network, but that general purpose identity certificates are hopeless. So we would do well to appreciate that "open" cross-domain identity management is actually a special case and that closed IDAM systems are the general case.

Andre reviewed the amazing zoo of hardware tokens we use from day to day. He gave scores of examples, including driver licenses of course but license plates too; house key, car key, garage opener, office key; the insignias of soldiers and law enforcement officers; airline tickets, luggage tags and boarding passes; the stamps on the arms of nightclub patrons and the increasingly sophisticated bracelets of theme park customers; and tattoos. Especially vivid was Andre's account of how his little girl on arriving at CIS during the set-up was not much concerned with all the potential playthings but was utterly rapt to get her ID badge, for it made her "official".

[Photo: Andre Durand's keynote - tokens make us official]

Tokens indeed have always had talismanic powers.

Then we were given a fly-on-the-wall slide show of how Andre typically starts his day. By 7:30am he has accessed half a dozen token-controlled physical security zones, from his home and garage, through the road system, the car park, the office building, the elevator, the company offices and his own corner office. And he hasn't even logged into cyberspace yet! He left unsaid whether or not all these domains might be "federated".

  • MyPOV: Isn't it curious that we never seem to beg for 'Single Sign On' of our physical keys and spaces? I suspect we know instinctively that one-key-fits-all would be ridiculously expensive to retrofit and would require fantastical cooperation between physical property controllers. We only try to federate virtual domains because the most common "keys" - passwords - suck, and because we tend to underestimate the cost of cooperation amongst digital RPs.
[Photo: Andre Durand's keynote - properties of tokens]

Tokens are, as Andre reminded us, on hand when you need them, easy to use, easy to revoke, and hard to steal (at least without being noticed). And they're non-promiscuous in respect of the personal information they disclose about their bearers. It's a wondrous set of properties, which we should perhaps be more conscious of in our work. And tokens can be used off-line.

  • MyPOV: The point about tokens working offline is paramount. It's a largely forgotten value. Andre's compelling take on tokens makes for a welcome contrast to the rarely questioned predominance of the cloud. Managing and resolving identity in the cloud complicates architectures, concentrates more of our personal data, and puts privacy at risk (for it's harder to unweave all the traditionally independent tracks of our lives).

In closing, Andre asked a rhetorical question which was probably forming in most attendees' minds: What is the ultimate token? His answer had a nice twist. I thought he'd say it's the mobile device. With so much value now remote, multi-factor cloud access control is crucial; the smart phone is the cloud control du jour and could easily become the paragon of tokens. But no, Andre considers that a group of IDAM standards could be the future "universal token" insofar as they beget interoperability and portability.

He said of the whole IDAM industry "together we are networking identity". That's a lovely sentiment and I would never wish to spoil Andre Durand's distinctive inclusiveness, but on that point technically he's wrong, for really we are networking attributes! More on that below and in my previous #CISmcc diary notes.

The identity family tree

My own CISmcc talk came at the end of Day 4. I think it was well received; the tweet stream was certainly keen and picked up the points I most wanted to make. Attendance was great, for which I should probably thank Andre Durand, because he staged the Closing Beach Party straight afterwards.

I'll post an annotated copy of my slides shortly. In brief I presented my research on the evolution of digital identity. There are plenty of examples of how identity technologies and identification processes have improved over time, with steadily stronger processes, regulations and authenticators. It's fascinating too how industries adopt authentication features from one another. Internet banking for example took the one-time password fob from late 90's technology companies, and the Australian PKI de facto proof-of-identity rules were inspired by the standard "100 point check" mandated for account origination.

[Slide: authentication family tree - bank identity evolves]

Clearly identity techniques shift continuously. What I want to do is systematise these shifts under a single unifying "phylogeny"; that is, a rigorously worked-out family tree. I once used the metaphor of a family tree in a training course to help people organise their thinking about authentication, but the inter-relationships between techniques were guesswork on my part. Now I'm curious whether there is a real family tree that can explain the profusion of identities we have been working so long on simplifying, often to little avail.

[Slide: authentication family tree (v1.0)]

True Darwinian evolution requires there to be replicators that correspond to the heritable traits. Evolution results when the proportions of those replicators in the "gene pool" drift over generations as survival pressures in the environment filter beneficial traits. The definition of Digital Identity as a set of claims or attributes provides a starting point for a Darwinian treatment. I observe that identity attributes are like "Memes" - the inherited units of culture first proposed by biologist Richard Dawkins. In my research I am trying to define sets of available "characters" corresponding to technological, business and regulatory features of our diverse identities, and I'm experimenting with phylogenetic modelling programs to see what patterns emerge in sets of character traits shared by those identities.
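
As a toy illustration of the approach (the identities, traits and their assignments below are entirely made up), an ordinary hierarchical clustering over a binary character matrix already yields a crude "family tree":

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Toy character matrix: rows are identities, columns are binary traits.
traits = ["hardware key", "issued by government", "face-to-face proofing",
          "single relying party", "carries credential/attribute"]
identities = {
    "passport":       [0, 1, 1, 0, 1],
    "bank card":      [1, 0, 1, 1, 0],
    "SIM card":       [1, 0, 0, 1, 0],
    "employee badge": [1, 0, 1, 1, 1],
    "student ID":     [0, 0, 1, 1, 1],
}
matrix = np.array(list(identities.values()))

# Hamming distance counts differing characters; average linkage builds the tree.
distances = pdist(matrix, metric="hamming")
tree = linkage(distances, method="average")
dendrogram(tree, labels=list(identities.keys()), no_plot=True)  # plot with matplotlib if desired
```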

[Slide: authentication family tree - memome characters]

So what? A rigorous scientific model for identity evolution would have many benefits. First and foremost it would have explanatory power. I do not believe that as an industry we have a satisfactory explanation for the failure of such apparently good ideas as Information Cards. Nor for promising federation projects like the Australian banking sector's "Trust Centre" and "MAMBO" lifetime portable account number. I reckon we have been "over federating" identity; my hunch is that identities have evolved to fit particular niches in the business ecosystem to such an extent that taking a student ID for instance and using it to log on to a bank is like dropping a saltwater fish into a freshwater tank. A stronger understanding of how attributes are organically interrelated would help us better plan federated identity, and to even do "memetic engineering" of the attributes we really want to re-use between applications and contexts.

If a phylogenetic tree can be revealed, it would confirm the 'secret lives' of attributes and thereby lend more legitimacy to the Attributes Push (which coincidentally some of us first spotted at a previous CIS, in 2013). It would also provide evidence that identification risks in local environments are why identities have come to be the way they are. In turn, we could pay more respect to authentication's idiosyncrasies, instead of trying to pigeonhole them into four rigid Levels of Assurance. At Sunday's NSTIC session, CTO Paul Grassi floated the idea of getting rid of LOAs. That would be a bold move of course; it could be helped along by a fresh focus on attributes. And of course we kept hearing throughout CIS Monterey about the FIDO Alliance with its devotion to authentication through verified device attributes, and its strategy to stay away from the abstract business of identities.

[Slide: authentication family tree - FIDO]

Reflections on CIS 2014

I spoke with many people at CIS about what makes this event so different. There's the wonderful family program of course, and the atmosphere that creates. And there's the paradoxical collegiality. Ping has always done a marvelous job of collaborating in various standards groups, and likewise with its conference, Ping's people work hard to create a professional, non-competitive environment. There are a few notable absentees of course but all the exhibitors and speakers I spoke to - including Ping's direct competitors - endorsed CIS as a safe and important place to participate in the identity community, and to do business.

But as a researcher and analyst, the Cloud Identity Summit is where I think you can see the future. People report hearing about things for the first time at a CIS, only to find those things coming true a year or two later. It's because there are so many influencers here.

Last year one example was the Attributes Push. This year, the emphasis on Attributes has become entirely mainstream. For example, the NSTIC pilot partner ID.me (a start-up business focused on improving veterans' access to online discounts through improved verification of entitlements) talks proudly of their ability to convey attributes and reduce the exposure of identity. And Paul Grassi proposes much more focus on Attributes from 2015.

Another example is the "Authorization Agent" (AZA) proposed for SSO in mobile platforms, which was brand new when Paul Madsen presented it at CIS Napa in 2013. Twelve months on, AZA has broadened into the Native Apps (NAPPS) OpenID Working Group.

Then there are the things that are nearly completely normalised. Take mobile devices. They figured in just about every CISmcc presentation, but were rarely called out. Mobile is simply the way things are now.

Hardware stores

So while the mobile form factor is taken for granted, the cryptographic goodies now standard in most handsets, and increasingly embedded in smart things and wearables, got a whole lot of express attention at CISmcc. I've already made much of Andre Durand's keynote on tokens. It was the same throughout the event.

    • There was a session on hybrid Physical and Logical Access Control Systems (PACS-LACS) featuring the US Government's PIV-I smartcard standard and the major ongoing R&D on that platform sponsored by DHS.
    • Companies like SecureKey are devoted to hardware-based keys, increasingly embedded in "street IDs" like driver licenses, and are working with numerous players deep in the SIM and smartcard supply chains.
    • The FIDO Alliance is fundamentally about hardware based identity security measures, leveraging embedded key pairs to attest to the pedigree of authenticator models and the attributes that they transmit on behalf of their verified users. FIDO promises to open up the latent authentication power of many 100s of millions of devices already featuring Secure Elements of one kind or another. FIDO realises PKI the way nature intended all along.
    • The good old concept of "What You See Is What You Sign" (WYSIWYS) is making a comeback, with mobile platform players appreciating that users of smartphones need reliable cues in the UX as to the integrity of transaction data served up in their rich operating systems. Clearly some exciting R&D lies ahead.
    • In a world of formal standards, we should also acknowledge the informal standards around us - the benchmarks and conventions that represent the 'real way' to do things. Hardware based security is increasingly taken for granted. The FIDO protocols are based on key pairs that people just seem to assume (correctly) will be generated in the compliant devices during registration. And Apple with its Touch ID has helped to 'train' end users that biometric templates must never leave the safety of a controlled hardware end point. FIDO of course makes that a hard standard.

What's next?

In my view, the Cloud Identity Summit is the only not-to-be missed event on the IDAM calendar. So long may it continue. And if CIS is where you go to see the future, what's next?

    • Judging by CISmcc, I reckon we're going to see entire sessions next year devoted to Continuous Authentication, in which signals are collected from wearables and the Internet of Things at large, to gain insights into the state of the user at every important juncture.
    • With the disciplined separation of abstract identities from concrete attributes, we're going to need a Digital Identity Stack for reference. FIDO's pyramid is on the right track, but it needs some work. I'm not sure the pyramid is the right visualisation; for one thing it evokes Maslow's Hierarchy of Needs in which the pinnacle corresponds to luxuries not essentials!
    • Momentum will grow around Relationships. Kantara's new Identity Relationship Management (IRM) WG was talked about in the CISmcc corridors. I am not sure we're all using the word in the same way, but it's a great trend, for Digital Identity is only really a means to an end, and it's the relationships they support that make identities important.

So there's much to look forward to!

See you again next year (I hope) in Monterey!

Posted in Smartcards, PKI, Language, Identity, Federated Identity, Cloud

Postcard from Monterey 2 #CISmcc

Second Day Reflections from CIS Monterey.

Follow along on Twitter at #CISmcc (for the Monterey Conference Centre).

The Attributes push

At CIS 2013 in Napa a year ago, several of us sensed a critical shift in focus amongst the identerati - from identity to attributes. OIX launched the Attributes Exchange Network (AXN) architecture, important commentators like Andrew Nash were saying, 'hey, attributes are more interesting than identity', and my own #CISnapa talk went so far as to argue we should forget about identity altogether. There was a change in the air, but still, it was all pretty theoretical.

Twelve months on, and the Attributes push has become entirely practical. If there was a Word Cloud for the NSTIC session, my hunch is that "attributes" would dominate over "identity". Several live NSTIC pilots are all about the Attributes.

ID.me is a new company started by US military veterans, with the aim of improving access for the veterans community to discounted goods and services and other entitlements. Founders Matt Thompson and Blake Hall are not identerati -- they're entirely focused on improving online access for their constituents to a big and growing range of retailers and services, and offer a choice of credentials for proving veterans' bona fides. It's central to the ID.me model that users reveal as little as possible about their personal identities, while having their veterans' status and entitlements established securely and privately.

Another NSTIC pilot Relying Party is the financial service sector infrastructure provider Broadridge. Adrian Chernoff, VP for Digital Strategy, gave a compelling account of the need to change business models to take maximum advantage of digital identity. Broadridge recently announced a JV with Pitney Bowes called Inlet, which will enable the secure sharing of discrete and validated attributes - like name, address and social security number - in an NSTIC compliant architecture.

Mind Altering

Yesterday I said in my #CISmcc diary that I hoped to change my mind about something here, and half way through Day 2, I was delighted it was already happening. I've got a new attitude about NSTIC.

Over the past six months, I had come to fear NSTIC had lost its way. It's hard to judge totally accurately when lurking on the webcast from Sydney (at 4:00am) but the last plenary seemed pedestrian to me. And I'm afraid to say that some NSTIC committees have got a little testy. But today's NSTIC session here was a turning point. Not only are there a number of truly exciting pilots showing real progress, but Jeremy Grant has credible plans for improving accountability and momentum, and the new technology lead Paul Grassi is thinking outside the box and speaking out of school. The whole program seems fresh all over again.

In a packed presentation, Grassi impressed me enormously on a number of points:

  • Firstly, he advocates a pragmatic NSTIC-focused extension of the old US government Authentication Guide NIST SP 800-63. Rather than a formal revision, a companion document might be most realistic. Along the way, Grassi really nailed an issue which we identity professionals need to talk about more: language. He said that there are words in 800-63 that are "never used anywhere else in systems development". No wonder, as he says, it's still "hard to implement identity"!
  • Incidentally I chatted some more with Andrew Hughes about language; he is passionate about terms, and highlights that our term "Relying Party" is an especially terrible distraction for Service Providers whose reason-for-being has nothing to do with "relying" on anyone!
  • Secondly, Paul Grassi wants to "get very aggressive on attributes", including emphasis on practical measurement (since that's really what NIST is all about). I don't think I need to say anything more about that than Bravo!
  • And thirdly, Grassi asked "What if we got rid of LOAs?!". This kind of iconoclastic thinking is overdue, and was floated as part of a broad push to revamp the way government's orthodox thinking on Identity Assurance is translated to the business world. Grassi and Grant don't say LOAs can or should be abandoned by government, but they do see that shoving the rounded business concepts of identity into government's square hole has not done anyone much credit.

Just one small part of NSTIC annoyed me today: the persistent idea that federation hubs are inherently simpler than one-to-one authentication. They showed the following classic sort of 'before and after' shots, where it seems self-evident that a hub (here the Federal Cloud Credential Exchange FCCX) reduces complexity. The reality is that multilateral brokered arrangements between RPs and IdPs are far more complex than simple bilateral direct contracts. And moreover, the new forms of agreements are novel and untested in real world business. The time and cost and unpredictability of working out these new arrangements is not properly accounted for and has often been fatal to identity federations.

[Slides: 'before' and 'after' federation hub diagrams]


The dog barks and this time the caravan turns around

One of the top talking points at #CISmcc has of course been FIDO. The FIDO Alliance goes from strength to strength; we heard they have over 130 members now (remember it started with four or five less than 18 months ago). On Saturday afternoon there was a packed-out FIDO show case with six vendors showing real FIDO-ready products. And today there was a three hour deep dive into the two flagship FIDO protocols UAF (which enables better sharing of strong authentication signals such that passwords may be eliminated) and U2F (which standardises and strengthens Two Factor Authentication).

FIDO's marketing messages are improving all the time, thanks to a special focus on strategic marketing which was given its own working group. In particular, the Alliance is steadily clarifying the distinction between identity and authentication, and sticking adamantly to the latter. In other words, FIDO is really all about the attributes. FIDO leaves identity as a problem to be addressed further up the stack, and dedicates itself to strengthening the authentication signal sent from end-point devices to servers.

The protocol tutorials were excellent, going into detail about how "Attestation Certificates" are used to convey the qualities and attributes of authentication hardware (such as device model, biometric modality, security certifications, elapsed time since last user verification etc) thus enabling nice fine-grained policy enforcement on the RP side. To my mind, UAF and U2F show how nature intended PKI to have been used all along!
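
For illustration only (the field names below are hypothetical, not actual FIDO metadata identifiers), the kind of fine-grained relying-party policy this enables could be as simple as:

```python
# Hypothetical attested attributes extracted from an authenticator's
# attestation certificate and metadata; field names are illustrative only.
attested = {
    "device_model": "AcmeKey 3",
    "biometric_modality": "fingerprint",
    "security_certification": "EAL4+",
    "seconds_since_user_verification": 45,
}

# Relying-party policy, enforced per transaction on the server side.
policy = {
    "allowed_modalities": {"fingerprint", "iris"},
    "accepted_certifications": {"EAL4+", "EAL5+"},
    "max_seconds_since_verification": 60,
}

def meets_policy(attrs: dict, policy: dict) -> bool:
    return (attrs["biometric_modality"] in policy["allowed_modalities"]
            and attrs["security_certification"] in policy["accepted_certifications"]
            and attrs["seconds_since_user_verification"]
                <= policy["max_seconds_since_verification"])

print(meets_policy(attested, policy))  # True for this example
```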

Some confusion remains as to why FIDO has two protocols. I heard some quiet calls for UAF and U2F to converge, yet that would seem to put the elegance of U2F at risk. And it's noteworthy that U2F is being taken beyond the original one time password 2FA, with at least one biometric vendor at the showcase claiming to use it instead of the heavier UAF.

Surprising use cases

Finally, today brought more fresh use cases from cohorts of users we socially privileged identity engineers for the most part rarely think about. Another NSTIC pilot partner is AARP, a membership organization providing "information, advocacy and service" to older people, retirees and other special needs groups. AARP's Jim Barnett gave a compelling presentation on the need to extend from the classic "free" business models of Internet services, to new economically sustainable approaches that properly protect personal information. Barnett stressed that "free" has been great and 'we wouldn't be where we are today without it' but it's just not going to work for health records for example. And identity is central to that.

There's so much more I could report if I had time. But I need to get some sleep before another packed day. All this changing my mind is exhausting.

Cheers again from Monterey.

Posted in Security, Privacy, PKI, Language, Identity, Federated Identity, e-health

Bob is dead

With apologies to Friedrich Nietzsche. The hero of many a crypto folk tale, Bob, is dead, and we have killed him.

We now know that in PKI, Alice's Relying Party is almost always a machine and not a human being. The idea that two strangers would use PKI to work out whether or not to trust one another was deeply distracting and led to the complexity that in the main stymied early PKI.

All of which might be academic except the utopian idea persists that identity frameworks can and should underpin stranger-to-stranger e-business. With NSTIC for instance I fear we are sleep walking into a repeat of Big PKI, when we could be solving a simpler problem: the robust and bilateral presentation of digital identity data in established contexts, without changing the existing relationships that cover almost all serious transactional business.

The following is an extract from a past paper of mine, "Public Key Superstructure", which was presented to the NIST IDTrust Workshop in 2008. There I examine the shortfalls and pitfalls of using signed email as a digital signature archetype.

E-mail not a killer application for PKI

A total lack of real applications would explain why e-mail became by default the most talked about PKI application. Many PKI vendors to this day continue to illustrate their services and train their users with imaginary scenarios where our heroes Alice and Bob breathlessly exchange signed e-mails. Like the passport metaphor, e-mail seems easily understood, but it manifestly has not turned out to be a ‘killer application’, and worse still, has contributed to a host of misunderstandings.

The story usually goes that Alice has received a secure e-mail from stranger Bob and wishes to work out if he is trustworthy. She double clicks on his digital signature and certificate in order to identify his CA. And now the fun begins. If Alice is not immediately trusting of the CA (presumably by reputation) then she is expected to download the CP and CPS, read them, and satisfy herself that the registration processes and security standards are adequate for her needs.

Does this sort of rigmarole have any parallel in the real world? A simple e-mail with no other context is closely equivalent to a letter or fax sent on plain white paper. Under what circumstances should we take seriously a message sent on plain paper from a stranger, even if we could track down their name?

In truth, the vast majority of serious communications occurs not between strangers but in a rich existing context, where the receiver has already been qualified in some way by the sender as likely being the right party to contact. In e-business, routine transactions are not usually conducted by e-mail but instead use special purpose software or dedicated websites with purpose built content. Thus we see most of the digital signature action in cases such as e-prescriptions, customs broking, trade documentation, company returns, patent filing and electronic conveyancing.

Several important simplifying assumptions flow from the fact that most e-business has a rich context, and these should be heeded when planning PKI:

Emphasise straight-through processing

In spite of the common worked example of Alice and Bob exchanging e-mails, the receiver of most routine transactions – such as payment instructions, tax returns, medical records, import/export declarations, or votes – is not a human but instead is a machine. The notion that a person will examine digital certificates and chase down the CA and its practices is simply false in the vast majority of cases. One of PKI’s great strengths is the way it aids straight-through processing, so it has been a great pity that vendors, through their training and marketing materials, have stressed manual over automatic processing.

Play down Relying Party Agreements

The sender and receiver of digitally signed transactions are hardly ever unrelated. This is in stark contrast to orthodox legal analyses of PKI which foundered on the supposed lack of contractual privity between Relying Party and CA. For example, the Australian Government’s extensive investigation into legal liability in digital certificates, after 76 pages, still could not reach a firm conclusion about whether a “CA may owe a duty of care to a [Relying Party] who is not known to the CA” [http://www.egov.vic.gov.au/pdfs/publication_utz1508.pdf]. The fact is, this sort of scenario is entirely academic and should never have been given the level of attention that it was. The idea of a “Relying Party Agreement” to join the RP and the CA in contract is moot in all “closed” e-business settings where PKI is thriving. It is this lesson that needs to be generalised by PKI regulators, not the hypothetical model of “open” PKI where all parties are strangers.

Play down certificate path discovery

The fact that in real life, parties are transacting in the context of some explicit scheme, means that the receiver’s software can predict the type of certificate that will most often be used by senders. For instance, when doctors are using e-prescribing software, there is not going to be a wide choice of certificate options; indeed, the appropriate scheme root keys and certificates for authenticating a whole class of doctors will likely be installed at both the sending and receiving ends, at the same time that the software is. When a doctor writes a prescription, their private key can be programmatically selected by their client and invoked to create a digital signature, according to business rules enshrined in the software design. And when such a transaction is received, the software of the pharmacist (or insurance company, government agency etc.) will similarly ‘know’ by design which classes of certificates are expected to verify the digital signature. All this logic in most transaction systems can be settled at design time, which can greatly simplify the task of certificate path discovery, or eliminate it altogether. In most systems it is straightforward for the sender’s software to attach the whole certificate chain to the digital signature, safe in the knowledge that the receiver’s software will be configured with the necessary trust anchors (i.e. Root CA certificates) with which to parse the chain.

Posted in PKI, Identity, Federated Identity

An authentication family tree

How do we make best sense of the bewildering array of authenticators on the market? Most people are familiar with single factor versus two factor, but this simple dichotomy doesn’t help match technologies to applications. The reality is more complex. A family tree like the one sketched here may help navigate the complexity.

Different distinctions define various branch points. The first split is between what I call Transient authentication (i.e. access control) which tells if a user is allowed to get at a resource or not, and Persistent authentication, which lets a user leave a lasting mark (i.e. signature) on what they do, such as binding electronic transactions.

Working our way up the Transient branch, we see that most access controls are based either on shared secrets or biometrics. Dynamic shared secrets change with every session, either in a series of one time passwords or via challenge-response.
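
Both flavours boil down to an HMAC over a shared secret. Here is a minimal sketch (an RFC 4226-style one-time password and a generic challenge-response, illustrative rather than any particular product's scheme):

```python
import hashlib
import hmac
import os
import struct

secret = os.urandom(20)  # shared between token and server at enrolment

def one_time_password(secret: bytes, counter: int) -> str:
    """HOTP-style OTP (RFC 4226 dynamic truncation) over a moving counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

def challenge_response(secret: bytes, challenge: bytes) -> str:
    """Response changes every session because the server picks a fresh challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

print(one_time_password(secret, counter=1))        # six-digit code; changes with the counter
print(challenge_response(secret, os.urandom(16)))  # differs every session
```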

On the biometric branch, we should distinguish those traits that can be left behind inadvertently in the environment and are more readily stolen. The safer biometrics are “clean” and leave no residue. Note that while the voice might be recorded without the speaker’s knowledge, I don't see it as a residual biometric in practice because voice recognition solutions usually use dynamic phrases that resist replay.

For persistent authentication, the only practical option today is PKI and digital signatures, technology which is available in an increasingly wide range of forms. Embedded certificates are commonplace in smartcards, cell phones, and other devices.

The foliage in the family tree indicates which technologies I believe will continue to thrive, and which seem more likely to be dead-ends.

I'd appreciate feedback. Is this useful? Does anyone know of other taxonomies?

Posted in Security, PKI, Biometrics

Simpler PKI is on the cards

PKI has a reputation for terrible complexity, but it is actually simpler than many mature domestic technologies.


It's interesting to ponder why PKI got to be (or look) so complicated. There have been at least two reasons. First, the word is frequently taken to mean the original overblown "Big PKI" general purpose identification schemes, with their generic and context-free passport grade ID checks, horrid user agreements and fine print. Yet there are alternative ways to deploy public key technology in closed authentication schemes, and indeed that is where it thrives; see http://lockstep.com.au/library/pki. Second, there is all that gory technical detail foisted on lay people in the infamous "PKI 101" sessions. Typical explanations start with a tutorial on asymmetric cryptography even before they tell you what PKI is for.

I've long wondered what it is about PKI that leads its advocates to train people into the ground. Forty-odd years ago when introducing the newfangled mag stripe banking card, I bet the sales reps didn't feel the need to explain electromagnetism and ferric oxide chemistry.

This line of thought leads to fresh models for 'domesticating' PKI by embedding it in plastic cards. By re-framing PKI keys and certificates as being means to an end and not ends in themselves, we can also:

    • identify dramatically improved supply chains to deliver PKI's benefits
    • re-cast the traditionally difficult business model for CAs, and
    • demystify how PKI can support a plurality of IDs and apps.


Consider the layered complexity of the conventional plastic card, and the way the layers correspond to steps in the supply chain. At its most basic level, the card is based on solid state physics and Maxwell's Equations for electromagnetism. These govern the properties of ferric oxide crystals, which are manufactured as powders and coated onto bulk tape by chemical companies like BASF and 3M. The tape is applied to blank cards, which are distributed to different schemes for personalisation. Usually the cards are pre-printed in bulk with artwork and physical security features specific to the scheme. In general, personalisation in each scheme starts with user registration. Data is written to the mag stripe according to one of a handful of coding standards which differ a little between banks, airlines and other niches. The card is printed or embossed, and distributed.

[Diagram: mag stripe card technology stack]

The variety of distinct schemes using magnetic stripe cards is almost limitless: bank cards, credit cards, government entitlements, health insurance, clubs, loyalty cards, gift cards, driver licences, employee ID, universities, professional associations etc etc. They all use the same ferromagnetic components delivered through a global supply chain, which at the lower layers is very specialised and delivered by only a handful of companies.

And needless to say, hardly anyone needs to know Maxwell's Equations to make sense of a mag stripe card.

The smartcard supply chain is very similar. The only technical difference is the core technology used to encode the user information. The theoretical foundations are cryptography instead of electromagnetism, and instead of bulk ferric oxide powders and tapes, specialist semiconductor companies fabricate the ICs and preload them with firmware. From that point on, the smartcard and mag stripe card supply chains overlap. In fact the end user in most cases can't tell the difference between the different generations of card technologies.

[Diagram: smartcard / PKI technology stack]

Smartcards (and their kin: SIMs, USB keys and smartphones) are the natural medium for deploying PKI technology.

Re-framing PKI deployment like this ...

    • decouples PK technology from the application and scheme layers, and tames the technical complexity; it shows where to draw the line in "PKI 101" training
    • provides a model for transitioning from conventional "id" technology to PKI with minimum disruption of current business processes and supplier arrangements
    • shows that it's perfectly natural for PKI to be implemented in closed communities of interest (schemes) and takes us away from the unhelpful orthodox Big PKI model
    • suggests new "wholesale" business models for CAs; historically CAs found it difficult to sell certificates direct, but a clearly superior model is to provide certificates into the initialisation step
    • demonstrates how easy to use PKI should be; that is, exactly as easy to use as the mag stripe card.


I once discussed this sort of bulk supply chain model at a conference in Tokyo. Someone in the audience asked me how many CAs I thought were needed worldwide. I said maybe three or four, and was greeted with incredulous laughter. But seriously, if certificates are reduced to digitally signed objects that bind a parcel of cardholder information to a key associated with a chip, why shouldn't certificates be manufactured by a fully automatic CA service on an outsourced managed service basis? It's no different from security printing, another specialised industry with the utmost "trust" requirements but none of the weird mystique that has bedevilled PKI.

Posted in PKI, Identity

No such thing as a passport

What do you call it when a metaphor or analogy outgrows the word it is based on, thus co-opting that word to mean something quite new? Metaphors are meant to clarify complex concepts by letting people think of them in simpler terms. But if the detailed meaning is actually different, then the metaphor becomes misleading and dangerous.

I'm thinking of the idea of the electronic passport. Ever since the early days of Big PKI, there's been the beguiling idea of an electronic passport that will let the holder into all manner of online services and enable total strangers to "trust" one another online. Later Microsoft of course even named their digital identity service "Passport", and the word is still commonplace in discussing all manner of authentication solutions.

The idea is that the passport allows you to go wherever you like.

Yet there is no such thing.

A real world passport doesn't let you into any old country. It's not always sufficient; you often need a visa. You can't stay as long as you like in a foreign place. Some countries won't let you in at all if you carry the passport of an unfriendly nation. You need to complete a landing card and customs declarations specific to your particular journey. And finally, when you've got to the end of the arrivals queue, you are still at the mercy of an immigration officer who has the discretion to turn you away. As with all business, there is so much more going on here than personal identity.

So in the sense of the meaning important to the electronic passport metaphor, the "real" passport doesn't actually exist!

The simplistic notion of electronic passport is really deeply unhelpful. The dream and promise of general purpose digital certificates is what derailed PKI, for they're unwieldy, involve unprecedented mechanisms for conferring open-ended "trust", and are rarely useful on their own (ironically that's also a property of real passports). Think of the time and money wasted chasing the electronic passport when all along PKI technology was better suited to closed transactions. What matters in most transactions is not personal identity but rather, credentials specific to the business context. There never has been a single general purpose identity credential.

And now with "open" federated identity frameworks, we're sleep-walking into the same intractable problems, all because people have been seduced by a metaphor based on something that doesn't exist.

The well initiated understand that the Laws of Identity, OIX, NSTIC and the like involve a plurality of identities, and multiple attributes tuned to different contexts. Yet NSTIC in particular is still confused by many with a single new ID, a misunderstanding aided and abetted by NSTIC's promoters using terms like "interoperable" without care, and by casually 'imagining' that a student in future will log in to their bank using their student card.

Words are powerful and they're also malleable. Some might say I'm being too pedantic sticking to the traditional reality of the "passport". But no. It would be OK in my opinion for "passport" to morph into something more powerful and universal -- except that it can't. The real point in all of this is that multiple identities are an inevitable consequence of how identities evolve to suit distinct business contexts, and so the very idea of a digital passport is a bit delusional.

Posted in Security, PKI, Language, Internet, Identity, Culture