Breaking down digital identity

Identity online is a vexed problem. The majority of Internet fraud today can be traced to weaknesses in the way we authenticate people electronically. Internet identity is terribly awkward too. Unfortunately, we still use password techniques dating back to 1960s mainframes, which were designed for technicians, by technicians.

Our identity management problems also stem from over-reach. For one thing, the information era heralded new ways to reach and connect with people, with almost no friction. We may have taken too literally the old saw “information wants to be free.” Further, traditional ways of telling who people are, through documents and “old boys networks,” create barriers, which are anathema to new-school Internet thinkers.

For the past 10-to-15 years, a heady mix of ambitions has informed identity management theory and practice: improve usability, improve security and improve “trust.” Without ever pausing to unravel the rainbow, the identity and access management industry has created grandiose visions of global “trust frameworks” to underpin a utopia of seamless stranger-to-stranger business and life online.

Well-resourced industry consortia and private-public partnerships have come and gone over the past decade or more. Numerous “trust” start-up businesses have launched and failed. Countless new identity gadgets, cryptographic algorithms and payment schemes have been tried.

And yet the identity problem is still with us. Why is identity online so strangely resistant to these well-meaning efforts to fix it? In particular, why is federated identity so dramatically easier said than done?

Identification is a part of risk management. In business, service providers use identity to manage the risk that they might be dealing with the wrong person. Different transactions carry different risks, and identification standards are varied accordingly. Conversely, if a provider cannot be sure enough who someone is, they now have the tools to withhold or limit their services. For example, when an Internet customer signs in from an unusual location, payment processors can put a cap on the dollar amounts they will authorize.
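
To make the risk-calibration point concrete, here is a rough sketch in Python of the kind of tiered rule a payment processor might apply. The signals, thresholds and field names are invented for illustration; they are not any real processor's policy.

    # Hypothetical illustration of risk-tiered authorization. The scoring,
    # thresholds and field names are invented, not any real processor's rules.

    def risk_score(txn: dict) -> int:
        """Crude risk score: the more unfamiliar the signals, the higher the risk."""
        score = 0
        if txn.get("location") != txn.get("usual_location"):
            score += 2                       # signing in from an unusual place
        if not txn.get("device_recognised", False):
            score += 2                       # unknown device
        if txn.get("hour", 12) < 6:
            score += 1                       # odd hour of the day
        return score

    def authorization_cap(txn: dict) -> float:
        """Map risk to a dollar cap rather than a flat yes/no decision."""
        score = risk_score(txn)
        if score == 0:
            return 10_000.0                  # familiar context: normal limit
        elif score <= 2:
            return 1_000.0                   # some doubt: reduced limit
        return 100.0                         # barely identified: small amounts only

    txn = {"location": "Lagos", "usual_location": "Sydney",
           "device_recognised": False, "hour": 3, "amount": 750.0}
    cap = authorization_cap(txn)
    print("approve" if txn["amount"] <= cap else f"decline: cap is {cap}")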

Across our social and business walks of life, we have distinct ways of knowing people, which yield a rich array of identities by which we know and show who we are to others. These identities have evolved over time to suit different purposes. Different relationships rest on different particulars, and so identities naturally become specific, not general.

The human experience of identity is one of ambiguity and contradictions. Each of us simultaneously holds a weird and wonderful ensemble of personal, family, professional and social identities. Each is different, sometimes radically so. Some of us lead quite secret lives, and I’m not thinking of anything salacious, but maybe just the role-playing games that provide important escapes from the humdrum.

Most of us know how it feels when identities collide. There’s no better example than what I call the High School Reunion Effect: that strange dislocation you feel when you see acquaintances for the first time in decades. You’ve all moved on, you’ve adopted new personae in new contexts – not the least of which is the one defined by a spouse and your own new family. Yet you find yourself re-winding past identities, relating to your past contemporaries as you all once were, because it was those school relationships, now fossilised, that defined you.

Frankly, we’ve made a mess of the pivotal analogue-to-digital conversion of identity. In real life we know identity is malleable and relative, yet online we’ve rendered it crystalline and fragile.

We’ve come close to the necessary conceptual clarity. Some 10 years ago a network of “identerati” led by Kim Cameron of Microsoft composed the “Laws of Identity,” which contained a powerful formulation of the problem to be addressed. The Laws defined Digital Identity as “a set of claims made [about] a digital subject.”

Your Digital Identity is a proxy for a relationship, pointing to a suite of particulars that matter about you in a certain context. When you apply for a bank account, when you subsequently log on to Internet banking, when you log on to your work extranet, or to Amazon or PayPal or Twitter, or if you want to access your electronic health record, the relevant personal details are different each time.

The flip side of identity management is privacy. If authentication concerns what a Relying Party needs to know about you, then privacy is all about what they don’t need to know. Privacy amounts to information minimization; security professionals know this all too well as the “Need to Know” principle.

All attempts at grand global identities to date have failed. The Big Certification Authorities of the 1990s reckoned a single, all-purpose digital certificate would meet the needs of all business, but they were wrong. Ever more sophisticated efforts since then have also failed, such as the Information Card Foundation, the Liberty Alliance and the Australian banking sector's Trust Centre.

Significantly, federation for non-trivial identities only works within regulatory monocultures – for example the US Federal Bridge CA, or the Scandinavian BankID network – where special legislation authorises banks and governments to identify customers by the one credential. The current National Strategy for Trusted Identities in Cyberspace has pondered legislation to manage liability but has balked. The regulatory elephant remains in the room.

As an aside, obviously social identities like Facebook and Twitter handles federate very nicely, but these are issued by organisations that don't really know who we are, and they're used by web sites that don't really care who we are; social identity federation is a poor model for serious identity management.

A promising identity development today is the Open Identity Foundation’s Attribute Exchange Network, a new architecture seeking to organise how identity claims may be traded. The Attribute Exchange Network resonates with a growing realization that, in the words of Andrew Nash, a past identity lead at Google and at PayPal, “attributes are at least as interesting as identities – if not more so.”

If we drop down a level and deal with concrete attribute data instead of abstract identities, we will start to make progress on the practical challenges in authentication: better resistance to fraud and account takeover, easier account origination and better privacy.

My vision is that by 2019 we will have a fresh marketplace of Attribute Providers. The notion of “Identity Provider” should die off, for identity is always in the eye of the Relying Party. What we need online is an array of respected authorities and agents that can vouch for our particulars. Banks can provide reliable electronic proof of our payment card numbers; government agencies can attest to our age and biographical details; and a range of private businesses can stand behind attributes like customer IDs, membership numbers and our retail reputations.

In five years' time I expect we will adopt a much more precise language to describe how to deal with people online, and it will reflect more faithfully how we’ve transacted throughout history. As the old Italian proverb goes: It is nice to “trust” but it’s better not to.

This article first appeared as "Abandoning identity in favor of attributes" in Secure ID News, 2 December, 2014.

Posted in Trust, Social Networking, Security, Identity, Federated Identity

You can de-identify but you can't hide

Acknowledgement: Daniel Barth-Jones kindly engaged with me after this blog was initially published, and pointed out several significant factual errors, for which I am grateful.

In 2014, the New York City Taxi and Limousine Commission (TLC) released a large "anonymised" dataset containing 173 million taxi rides taken in 2013. Soon after, software developer Vijay Pandurangan managed to reverse the hashed taxi registration numbers. Subsequently, privacy researcher Anthony Tockar combined the data with public photos of celebrities getting in or out of cabs to recreate their trips. See Anna Johnston's analysis here.

This re-identification demonstration has been used by some to bolster a general claim that anonymity online is increasingly impossible.

On the other hand, medical research advocates like Columbia University epidemiologist Daniel Barth-Jones argue that the practice of de-identification can be robust and should not be dismissed as impractical on the basis of demonstrations such as this. The identifiability of celebrities in these sorts of datasets is a statistical anomaly, reasons Barth-Jones, and should not be used to frighten regular people out of participating in medical research on anonymised data. He wrote in a blog that:

    • "However, it would hopefully be clear that examining a miniscule proportion of cases from a population of 173 million rides couldn’t possibly form any meaningful basis of evidence for broad assertions about the risks that taxi-riders might face from such a data release (at least with the taxi medallion/license data removed as will now be the practice for FOIL request data)."

As a health researcher, Barth-Jones is understandably worried that re-identification of small proportions of special cases is being used to exaggerate the risks to ordinary people. He says that the HIPAA de-identification protocols, if properly applied, leave no significant risk of re-id. But even if that's the case, HIPAA processes are not applied to data across the board. The TLC data was described as "de-identified", and the fact that any people at all (even stand-out celebrities) could be re-identified from it does create a broad basis for concern - "de-identified" is not what it seems. Barth-Jones stresses that in the TLC case, the de-identification was fatally flawed [technically: it's no use hashing data like registration numbers with limited value ranges, because the hashed values can be reversed by brute force] but my point is this: who among us can tell the difference between poorly de-identified and "properly" de-identified?
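
To see why that kind of hashing is so weak, here is a rough sketch in Python; the licence-number format is invented, but the principle is the one at work in the TLC case: when the space of possible values is small, an attacker can hash every candidate and match the results against the "anonymised" column in well under a second.

    # Sketch only: why hashing identifiers drawn from a small value space is
    # reversible by exhaustion. The licence-number format here is invented.
    import hashlib
    import string
    from itertools import product

    def md5_hex(s: str) -> str:
        return hashlib.md5(s.encode()).hexdigest()

    # Pretend the "de-identified" dataset replaced each licence number with its MD5 hash.
    leaked_hashes = {md5_hex("5X41"), md5_hex("7B22")}

    # Enumerate the entire (small) space of candidate numbers and hash each one.
    digits, letters = string.digits, string.ascii_uppercase
    lookup = {}
    for d1, l, d2, d3 in product(digits, letters, digits, digits):
        candidate = f"{d1}{l}{d2}{d3}"           # e.g. "5X41"
        lookup[md5_hex(candidate)] = candidate   # ~26,000 hashes: a fraction of a second

    recovered = [lookup[h] for h in leaked_hashes if h in lookup]
    print(recovered)   # the "anonymised" values, reversed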

And how long can "properly de-identified" last? What does it mean to say casually that only a "minuscule proportion" of data can be re-identified? In this case, the re-identification of celebrities was helped by the fact lots of photos of them are readily available on social media, yet there are so many photos in the public domain now, regular people are going to get easier to be identified.

But my purpose here is not to play what-if games, and I know Daniel advocates statistically rigorous measures of identifiability. We agree on that -- in fact, over the years, we have agreed on most things. The point I am trying to make in this blog post is that, just as nobody should exaggerate the risk of re-identification, nor should anyone play it down. Claims of de-identification are made almost daily for really crucial datasets, like compulsorily retained metadata, public health data, biometric templates, social media activity used for advertising, and web searches. Some of these claims are made with statistical rigor, using formal standards like the HIPAA protocols; but other times the claim is casual, made with no qualification, with the aim of comforting end users.

"De-identified" is a helluva promise to make, with far-reaching ramifications. Daniel says de-identification researchers use the term with caution, knowing there are technical qualifications around the finite probability of individuals remaining identifiable. But my position is that the fine print doesn't translate to the general public who only hear that a database is "anonymous". So I am afraid the term "de-identified" is meaningless outside academia, and in casual use is misleading.

Barth-Jones objects to the conclusion that "it's virtually impossible to anonymise large data sets" but in an absolute sense, that claim is surely true. If any proportion of people in a dataset may be identified, then that data set is plainly not "anonymous". Moreover, as statistics and mathematical techniques (like facial recognition) improve, and as more ancillary datasets (like social media photos) become accessible, the proportion of individuals who may be re-identified will keep going up.

[Readers who wish to pursue these matters further should look at the recent Harvard Law School online symposium on "Re-identification Demonstrations", hosted by Michelle Meyer, in which Daniel Barth-Jones and I participated, among many others.]

Both sides of this vexed debate need more nuance. Privacy advocates have no wish to quell medical research per se, nor do they call for absolute privacy guarantees, but we do seek full disclosure of the risks, so that the cost-benefit equation is understood by all. One of the obvious lessons in all this is that "anonymous" or "de-identified" on their own are poor descriptions. We need tools that meaningfully describe the probability of re-identification. If statisticians and medical researchers take "de-identified" to mean "there is an acceptably small probability, namely X percent, of identification" then let's have that fine print. Absent the detail, lay people can be forgiven for thinking re-identification isn't going to happen. Period.
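
By way of illustration only, here is a sketch (with made-up records) of one crude measure that could sit behind such fine print: the proportion of records that are unique on a handful of quasi-identifiers, and are therefore candidates for being singled out.

    # Sketch with invented data: uniqueness on quasi-identifiers is only a rough
    # proxy for re-identification risk, but it is at least a number that can be
    # reported instead of a bare claim of "de-identified".
    from collections import Counter

    records = [
        {"postcode": "2000", "birth_year": 1978, "sex": "F"},
        {"postcode": "2000", "birth_year": 1978, "sex": "F"},
        {"postcode": "2010", "birth_year": 1952, "sex": "M"},
        {"postcode": "2095", "birth_year": 1990, "sex": "F"},
    ]

    quasi_identifiers = ("postcode", "birth_year", "sex")
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    groups = Counter(key(r) for r in records)

    unique = sum(1 for r in records if groups[key(r)] == 1)
    print(f"{unique} of {len(records)} records ({unique / len(records):.0%}) are unique "
          f"on {quasi_identifiers}")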

And we need policy and regulatory mechanisms to curb inappropriate re-identification. Anonymity is a brittle, essentially temporary, and inadequate privacy tool.

I argue that the act of re-identification ought to be treated as an act of Algorithmic Collection of PII, and regulated as just another type of collection, albeit an indirect one. If a statistical process results in a person's name being added to a hitherto anonymous record in a database, it is as if the data custodian went to a third party and asked them "do you know the name of the person this record is about?". The fact that the data custodian was clever enough to avoid having to ask anyone about the identity of people in the re-identified dataset does not alter the privacy responsibilities arising. If the effect of an action is to convert anonymous data into personally identifiable information (PII), then that action collects PII. And in most places around the world, any collection of PII automatically falls under privacy regulations.

It looks like we will never guarantee anonymity, but the good news is that for privacy, we don't actually need to. Privacy is the protection you need when your affairs are not anonymous, for privacy is a regulated state where organisations that have knowledge about you are restrained in what they do with it. Equally, the ability to de-anonymise should be restricted in accordance with orthodox privacy regulations. If a party chooses to re-identify people in an ostensibly de-identified dataset, without a good reason and without consent, then that party may be in breach of data privacy laws, just as they would be if they collected the same PII by conventional means like questionnaires or surveillance.

Surely we can all agree that re-identification demonstrations serve to shine a light on the comforting claims made by governments, for instance, that certain citizen datasets can be anonymised. In Australia, the government is now implementing telecommunications metadata retention laws in the interests of national security; the metadata, we are told, is de-identified and "secure". In the UK, the National Health Service plans to make de-identified patient data available to researchers. Whatever the merits of data mining in diverse fields like law enforcement and medical research, my point is that any government's claims of anonymisation must be treated critically (if not skeptically), and subjected to strenuous and ongoing privacy impact assessment.

Privacy, like security, can never be perfect. Privacy advocates must avoid giving the impression that they seek unrealistic guarantees of anonymity. There must be more to privacy than identity obscuration (to use a more technically correct term than "de-identification"). Medical research should proceed on the basis of reasonable risks being taken in return for beneficial outcomes, with strong sanctions against abuses including unwarranted re-identification. And then there wouldn't need to be a moral panic over re-identification if and when it does occur, because anonymity, while highly desirable, is not essential for privacy in any case.

Posted in Social Media, Privacy, Identity, e-health, Big Data

Identity Management Moves from Who to What

The State Of Identity Management in 2015

Constellation Research recently launched the "State of Enterprise Technology" series of research reports. These assess the current enterprise innovations which Constellation considers most crucial to digital transformation, and provide snapshots of the future usage and evolution of these technologies.

My second contribution to the state-of-the-state series is "Identity Management Moves from Who to What". Here's an excerpt from the report:

Introduction

In spite of all the fuss, personal identity is not usually important in routine business. Most transactions are authorized according to someone’s credentials, membership, role or other properties, rather than their personal details. Organizations actually deal with many people in a largely impersonal way. People don’t often care who someone really is before conducting business with them. So in digital Identity Management (IdM), one should care less about who a party is than what they are, with respect to attributes that matter in the context we’re in. This shift in focus is coming to dominate the identity landscape, for it simplifies a traditionally multi-disciplined problem set. Historically, the identity management community has made too much of identity!
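
To illustrate the shift from who to what, here is a toy sketch in Python. The attribute names and the policy are invented, but the point stands: the access decision consults properties of the claimant, not their personal identity.

    # Illustrative only: authorization driven by attributes rather than identity.
    # The attribute names and policy below are invented for the example.

    def authorize(presented: dict, required: dict) -> bool:
        """Grant access if every required attribute has an acceptable value."""
        return all(presented.get(name) in acceptable
                   for name, acceptable in required.items())

    # A pharmacist ordering a restricted medicine: the supplier cares about the
    # registration and the workplace, not the pharmacist's life story.
    claims = {"profession": "pharmacist", "registration_current": True,
              "employer_type": "hospital"}
    policy = {"profession": {"pharmacist"}, "registration_current": {True},
              "employer_type": {"hospital", "community_pharmacy"}}

    print(authorize(claims, policy))   # True - and no name or date of birth was needed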

Six Digital Identity Trends for 2015

[Figure: State of Identity Management summary]

1. Mobile becomes the center of gravity for identity. The mobile device brings convergence for a decade of progress in IdM. For two-factor authentication, the cell phone is its own second factor, protected against unauthorized use by PIN or biometric. Hardly anyone ever goes anywhere without their mobile - service providers can increasingly count on that without disenfranchising many customers. Best of all, the mobile device itself joins authentication to the app, intimately and seamlessly, in the transaction context of the moment. And today’s phones have powerful embedded cryptographic processors and key stores for accurate mutual authentication, and mobile digital wallets, as Apple’s Tim Cook highlighted at the recent White House Cyber Security Summit.

2. Hardware is the key – and holds the keys – to identity. Despite the lure of the cloud, hardware has re-emerged as pivotal in IdM. All really serious security and authentication takes place in secure dedicated hardware, such as SIM cards, ATMs, EMV cards, and the Trusted Execution Environments in new mobile devices. Today’s leading authentication initiatives, like the FIDO Alliance, are intimately connected to standard cryptographic modules now embedded in most mobile devices. Hardware-based identity management has arrived just in the nick of time, on the eve of the Internet of Things.

3. The “Attributes Push” will shift how we think about identity. In the words of Andrew Nash, CEO of Confyrm Inc. (and previously the identity leader at PayPal and Google), “Attributes are at least as interesting as identities, if not more so.” Attributes are to identity as genes are to organisms – they are really what matters about you when you’re trying to access a service. By fractionating identity into attributes and focusing on what we really need to reveal about users, we can enhance privacy while automating more and more of our everyday transactions.

The Attributes Push may recast social logon. Until now, Facebook and Google have been widely tipped to become “Identity Providers”, but even these giants have found federated identity easier said than done. A dark horse in the identity stakes – LinkedIn – may take the lead with its superior holdings in verified business attributes.

4. The identity agenda is narrowing. For 20 years, brands and organizations have obsessed about who someone is online. And even before we’ve solved the basics, we over-reached. We've seen entrepreneurs trying to monetize identity, and identity engineers trying to convince conservative institutions like banks that “Identity Provider” is a compelling new role in the digital ecosystem. Now at last, the IdM industry agenda is narrowing toward more achievable and more important goals - precise authentication instead of general identification.

[Figure: Digital identity stack]

5. A digital identity stack is emerging. The FIDO Alliance and others face a challenge in shifting and improving the words people use in this space. Words, of course, matter, as do visualizations. IdM has suffered for too long under loose and misleading metaphors. One of the most powerful abstractions in IT was the OSI networking stack. A comparable sort of stack may be emerging in IdM.

6. Continuity will shape the identity experience. Continuity will make or break the user experience as the lines blur between real world and virtual, and between the Internet of Computers and the Internet of Things. But at the same time, we need to preserve clear boundaries between our digital personae, or else privacy catastrophes await. “Continuous” (also referred to as “Ambient”) Authentication is a hot new research area, striving to provide more useful and flexible signals about the instantaneous state of a user at any time. There is an explosion in devices now that can be tapped for Continuous Authentication signals, and by the same token, rich new apps in health, lifestyle and social domains, running on those very devices, that need seamless identity management.

A snapshot of my report "Identity Moves from Who to What" is available for download at Constellation Research. It expands on the points above, and sets out recommendations for enterprises to adopt the latest identity management thinking.

Posted in Trust, Social Networking, Security, Privacy, Identity, Federated Identity, Constellation Research, Biometrics, Big Data

The latest FIDO Alliance research

I have just updated my periodic series of research reports on the FIDO Alliance. The fourth report, "FIDO Alliance Update: On Track to a Standard", will be available at Constellation Research shortly.

The Identity Management industry leader publishes its protocol specifications at v1.0, launches a certification program, and attracts support in Microsoft Windows 10.

Executive Summary

The FIDO Alliance is the fastest-growing Identity Management (IdM) consortium we have seen. Comprising technology vendors, solutions providers, consumer device companies, and e-commerce services, the FIDO Alliance is working on protocols and standards to strongly authenticate users and personal devices online. With a fresh focus and discipline in this traditionally complicated field, FIDO envisages simply “doing for authentication what Ethernet did for networking”.

Launched in early 2013, the FIDO Alliance has now grown to over 180 members. Included are technology heavyweights like Google, Lenovo and Microsoft; almost every SIM and smartcard supplier; payments giants Discover, MasterCard, PayPal and Visa; several banks; and e-commerce players like Alibaba and Netflix.

FIDO is radically different from any IdM consortium to date. We all know how important it is to fix passwords: They’re hard to use, inherently insecure, and lie at the heart of most breaches. The Federated Identity movement seeks to reduce the number of passwords by sharing credentials, but this invariably confounds the relationships we have with services and complicates liability when more parties rely on fewer identities.

In contrast, FIDO’s mission is refreshingly clear: Take the smartphones and devices most of us are intimately connected to, and use the built-in cryptography to authenticate users to services. A registered FIDO-compliant device, when activated by its user, can send verified details about the device and the user to service providers, via standardized protocols. FIDO leverages the ubiquity of sophisticated handsets and the tidal wave of smart things. The Alliance focuses on device level protocols without venturing to change the way user accounts are managed or shared.
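
The underlying pattern can be sketched as a challenge-response against a device-bound key. The following Python sketch (using the cryptography package) illustrates the principle only; it is not the actual UAF or U2F message format.

    # Sketch of the device-bound key pattern that FIDO-style protocols build on.
    # Not the real UAF/U2F wire formats - just the underlying cryptographic idea.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the device generates a key pair; the private key never leaves
    # the device, and only the public key is sent to the service.
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = device_private_key.public_key()

    # Authentication: the service issues a random challenge ...
    challenge = os.urandom(32)

    # ... the device signs it locally once the user activates it (PIN or biometric) ...
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ... and the service verifies the signature against the registered public key.
    try:
        registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("device authenticated - no password ever crossed the network")
    except InvalidSignature:
        print("authentication failed")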

The centerpieces of FIDO’s technical work are two protocols, called UAF and U2F, for exchanging verified authentication signals between devices and services. Several commercial applications have already been released under the UAF and U2F specifications, including fingerprint-based payments apps from Alibaba and PayPal, and Google’s Security Key from Yubico. After a rigorous review process, both protocols are published now at version 1.0, and the FIDO Certified Testing program was launched in April 2015. And Microsoft announced that FIDO support would be built into Windows 10.

With its focus, pragmatism and membership breadth, FIDO is today’s go-to authentication standards effort. In this report, I look at what the FIDO Alliance has to offer vendors and end user communities, and its critical success factors.

Posted in Smartcards, Security, PKI, Identity, Federated Identity, Constellation Research, Biometrics

Consumerization of Authentication

For the second year running, the FIDO Alliance hosted a consumer authentication showcase at CES, the gigantic Consumer Electronics Show in Las Vegas, this year featuring four FIDO Alliance members.

This is a watershed in Internet security and privacy - never before has authentication been a headline consumer issue.

Sure we've all talked about the password problem for ten years or more, but now FIDO Alliance members are doing something about it, with easy-to-use solutions designed specifically for mass adoption.

The FIDO Alliance is designing the authentication plumbing for everything online. They are creating new standards and technical protocols allowing secure personal devices (phones, personal smart keys, wearables, and soon a range of regular appliances) to securely transmit authentication data to cloud services and other devices, in some cases eliminating passwords altogether.

See also my ongoing FIDO Alliance research at Constellation.

Posted in Privacy, Identity, Constellation Research, Security

We cannot pigeon-hole risk

In electronic business, Relying Parties (RPs) need to understand their risks of dealing with the wrong person (say a fraudulent customer or a disgruntled ex-employee), determine what they really need to know about those people in order to help manage risk, and then in many cases, design a registration process for bringing those people into the business fold. With federated identity, the aim is to offload the registration and other overheads onto an Identity Provider (IdP). But evaluating IdPs and forging identity management arrangements has proven to be enormously complex, and the federated identity movement has been looking for ways to streamline and standardize the process.

One approach is to categorise different classes of IdP, matched to different transaction types. "Levels of Assurance" (LOAs) have been loosely standardised by many governments and in some federated identity frameworks, like the Kantara Initiative. The US Electronic Authentication Guideline, NIST SP 800-63, is one of the preeminent de facto standards, adopted by the National Strategy for Trusted Identities in Cyberspace (NSTIC). But over the years, adoption of SP 800-63 in business has been disappointing, and now NIST has announced a review.

One of my problems with LOAs is simply stated: I don't believe it's possible to pigeon-hole risk.

With risk management, the devil is in the detail. Risk Management standards like ISO 31000 require organisations to start by analysing the threats that are peculiar to their environment. It's folly to take short cuts here, and it's also well recognised that you cannot "outsource" liability.

To my mind, the LOA philosophy goes against risk management fundamentals. Coming up with an LOA rating is an intermediate step that takes an RP's risk analysis and squeezes it into a bin (losing lots of information as a result); the bin is then used to shortlist candidate IdPs, before detailed due diligence in which all those risk details need to be put back on the table.

I think we all know by now of cases where RPs have looked at candidate IdPs at a given LOA, been less than satisfied with the available offerings, and have felt the need for an intermediate level, something like "LOA two and a half" (this problem was mentioned at CIS 2014 more than once, and I have seen it first hand in the UK IDAP).

Clearly what's going on here is that an RP's idea of "LOA 2" differs from a given IdP's idea of the same LOA 2. This is because everyone's risk appetite and threat profile is different. Moreover, the detailed prescription of "LOA 2" must differ from one identity provider to the next. When an RP thinks they need "LOA 2.5", what they're really asking for is a customised identification. If an off-the-shelf "LOA 2" isn't what it seems, then there can't be any hope for an agreed intermediate LOA 2.5. Even if an IdP and an RP agree in one instance, soon enough we will get a fresh call for "LOA 2.75 please".

We cannot pigeon-hole risk. Attaching chunky, one-dimensional Levels of Assurance is misleading. There is no getting away from the need to do detailed analysis of the threats, and therefore of the authentication measures required.

Posted in Security, Identity, Federated Identity

Making cyber safe like cars

This is an updated version of arguments made in Lockstep's submission to the 2009 Cyber Crime Inquiry by the Australian federal government.

In stark contrast to other fields, cyber safety policy is almost exclusively preoccupied with user education. It's really an obsession. Governments and industry groups churn out volumes of well-meaning and technically reasonable security advice, but for the average user, this material is overwhelming. There is a subtle implication that security is for experts, and that the Internet isn't safe unless you go to extremes. Moreover, even if consumers do their very best online, their personal details can still be taken over in massive criminal raids on databases that hardly anyone even knows exist.

Too much onus is put on regular users protecting themselves online, and this blinds us to potential answers to cybercrime. In other walks of life, we accept a balanced approach to safety, and governments are less reluctant to impose standards than they are on the Internet. Road safety for instance rests evenly on enforceable road rules, car technology innovation, certified automotive products, mandatory quality standards, traffic management systems, and driver training and licensing. Education alone would be nearly worthless.

Around cybercrime we have a bizarre allergy to technology. We often hear that 'preventing data breaches is not a technology issue', which may be politically correct but is faintly ridiculous. Nobody would ever say that preventing car crashes is 'not a technology issue'.

Credit card fraud and ID theft in general are in dire need of concerted technological responses. Consider that our Card Not Present (CNP) payments processing arrangements were developed many years ago for mail orders and telephone orders. It was perfectly natural to co-opt the same processes when the Internet arose, since it seemed simply to be just another communications medium. But the Internet turned out to be more than an extra channel: it connects everyone to everything, around the clock.

The Internet has given criminals x-ray vision into peoples' banking details, and perfect digital disguises with which to defraud online merchants. There are opportunities for crime now that are both quantitatively and qualitatively radically different from what went before. In particular, because identity data is available by the terabyte and digital systems cannot tell copies from originals, identity takeover is child's play.

You don't even need to have ever shopped online to run foul of CNP fraud. Most stolen credit card numbers are obtained en masse by criminals breaking into obscure backend databases. These attacks go on behind the scenes, out of sight of even the most careful online customers.

So the standard cyber security advice misses the point. Consumers are told earnestly to look out for the "HTTPS" padlock that purportedly marks a site as secure, to have a firewall, to keep their PCs "patched" and their anti-virus up to date, to only shop online at reputable merchants, and to avoid suspicious looking sites (as if cyber criminals aren't sufficiently organised to replicate legitimate sites in their entirety). But none of this advice touches on the problem of coordinated massive heists of identity data.

Merchants are on the hook for unwieldy and increasingly futile security overheads. When a business wishes to accept credit card payments, it's straightforward in the real world to install a piece of bank-approved terminal equipment. But to process credit cards online, shopkeepers have to sign up to onerous PCI-DSS requirements that in effect require even small business owners to become IT security specialists. But to what end? No audit regime will ever stop organised crime. To stem identity theft, we need to make stolen IDs less valuable.

All this points to urgent public policy matters for governments and banks. It is not enough to put the onus on individuals to guard against ad hoc attacks on their credit cards. Systemic changes and technological innovation are needed to render stolen personal data useless to thieves. It's not that the whole payments processing system is broken; rather, it is vulnerable at just one point where stolen digital identities can be abused.

Digital identities are the keys to our personal kingdoms. As such they really need to be treated as seriously as car keys, which have become very high tech indeed. Modern car keys cannot be duplicated at a suburban locksmith. It's possible you've come across office and filing cabinet keys that carry government security certifications. And we never use the same keys for our homes and offices; we wouldn't even consider it (which points to the basic weirdness in Single Sign On and identity federation).

In stark contrast to car keys, almost no attention is paid to the pedigree of digital identities. Technology neutrality has bred a bewildering array of ad hoc authentication methods, including SMS messages, one time password generators, password calculators, grid cards and picture passwords; at the same time we've done nothing at all to inhibit the re-use of stolen IDs.

It's high time government and industry got working together on a uniform and universal set of smart identity tools to properly protect consumers online.

Stay tuned for more of my thoughts on identity safety, inspired by recent news that health identifiers may be back on the table in the gigantic U.S. e-health system. The security and privacy issues are large but the cyber safety technology is at hand!

Posted in Fraud, Identity, Internet, Payments, Privacy, Security

PKI as nature intended

Few technologies are so fundamental and yet so derided at the same time as public key infrastructure. PKI is widely thought of as obsolete or generically intrusive, yet it is ubiquitous in SIM cards, SSL, chip and PIN cards, and cable TV. Technically, public key infrastructure is a generic term for a management system for keys and certificates; there have always been endless ways to build PKIs (note the plural) for different communities, technologies, industries and outcomes. And yet “PKI” has all too often come to mean just one way of doing identity management. In fact, PKI doesn’t necessarily have anything to do with identity at all.

This blog is an edited version of a feature I once wrote for SC Magazine. It is timely in the present day to re-visit the principles that make for good PKI implementations and contextualise them in one of the most contemporary instances of PKI: the FIDO Alliance protocols for secure attribute management. In my view, FIDO realises PKI ‘as nature intended’.

“Re-thinking PKI”

In their earliest conceptions in the early-to-mid 1990s, digital certificates were proposed to authenticate nondescript transactions between parties who had never met. Certificates were construed as the sole means for people to authenticate one another. Most traditional PKI was formulated with no other context; the digital certificate was envisaged to be your all-purpose digital identity.

Orthodox PKI has come in for spirited criticism. From the early noughties, many commentators pointed to a stark paradox: online transaction volumes and values were increasing rapidly, in almost all cases without the help of overt PKI. Once thought to be essential, with its promise of "non-repudiation", PKI seemed anything but, even for significant financial transactions.

There were many practical problems in “big” centralised PKI models. The traditional proof of identity for general purpose certificates was intrusive; the legal agreements were complex and novel; and private key management was difficult for lay people. So the one-size-fits-all electronic passport failed to take off. But PKI's critics sometimes throw the baby out with the bathwater.

In the absence of any specific context for its application, “big” PKI emphasized proof of personal identity. Early certificate registration schemes co-opted identification benchmarks like that of the passport. Yet hardly any regular business transactions require parties to personally identify one another to passport standards.

“Electronic business cards”

Instead in business we deal with others routinely on the basis of their affiliations, agency relationships, professional credentials and so on. The requirement for orthodox PKI users to submit to strenuous personal identity checks over and above their established business credentials was a major obstacle in the adoption of digital certificates.

It turns out that the 'killer applications' for PKI overwhelmingly involve transactions with narrow contexts, predicated on specific credentials. The parties might not know each other personally, but invariably they recognize and anticipate each other's qualifications, as befitting their business relationship.

Successful PKI came to be characterized by closed communities of interest, prior out-of-band registration of members, and in many cases, special-purpose application software featuring additional layers of context, security and access controls.

So digital certificates are much more useful when implemented as application-specific 'electronic business cards,' than as one-size-fits-all electronic passports. And, by taking account of the special conditions that apply to different e-business processes, we have the opportunity to greatly simplify the registration processes, user experience and liability arrangements that go with PKI.

The real benefits of digital signatures

There is a range of potential advantages in using PKI, including its cryptographic strength and resistance to identity theft (when implemented with private keys in hardware). Many of its benefits are shared with other technologies, but at least two are unique to PKI.

First, digital signatures provide robust evidence of the origin and integrity of electronic transactions, persistent over time and over 'distance’ (that is, the separation of sender and receiver). This greatly simplifies audit logging, evidence collection and dispute resolution, and cuts the future cost of investigation and fraud. If a digitally signed document is archived and checked at a later date, the quality of the signature remains undiminished over many years, even if the public key certificate has long since expired. And if a digitally signed message is passed from one relying party to another and on to many more, passing through all manner of intermediate systems, everyone still receives an identical, verifiable signature code authenticating the original message.

Electronic evidence of the origin and integrity of a message can, of course, be provided by means other than a digital signature. For example, the authenticity of typical e-business transactions can usually be demonstrated after the fact via audit logs, which indicate how a given message was created and how it moved from one machine to another. However, the quality of audit logs is highly variable and it is costly to produce legally robust evidence from them. Audit logs are not always properly archived from every machine, they do not always directly evince data integrity, and they are not always readily available months or years after the event. They are rarely secure in themselves, and they usually need specialists to interpret and verify them. Digital signatures on the other hand make it vastly simpler to rewind transactions when required.

Secondly, digital signatures and certificates are machine readable, allowing the credentials or affiliations of the sender to be bound to the message and verified automatically on receipt, enabling totally paperless transacting. This is an important but often overlooked benefit of digital signatures. When processing a digital certificate chain, relying party software can automatically tell that:

    • the message has not been altered since it was originally created
    • the sender was authorized to launch the transaction, by virtue of credentials or other properties endorsed by a recognized Certificate Authority
    • the sender's credentials were valid at the time they sent the message; and
    • the authority which signed the certificate was fit to do so.

One reason we can forget about the importance of machine readability is that we have probably come to expect person-to-person email to be the archetypal PKI application, thanks to email being the classic example to illustrate PKI in action. There is an implicit suggestion in most PKI marketing and training that, in regular use, we should manually click on a digital signature icon, examine the certificate, check which CA issued it, read the policy qualifier, and so on. Yet the overwhelming experience of PKI in practice is that it suits special purpose and highly automated applications, where the usual receiver of signed transactions is in fact a computer.

Characterising good applications

Reviewing the basic benefits of digital signatures allows us to characterize the types of e-business applications that merit investment in PKI.

Applications for which digital signatures are a good fit tend to have reasonably high transaction volumes, fully automatic or straight-through processing, and multiple recipients or multiple intermediaries between sender and receiver. In addition, there may be significant risk of dispute or legal ramifications, necessitating high quality evidence to be retained over long periods of time. These include:

    • Tax returns
    • Customs reporting
    • E-health care
    • Financial trading
    • Insurance
    • Electronic conveyancing
    • Superannuation administration
    • Patent applications.

This view of the technology helps to explain why many first-generation applications of PKI were problematic. Retail internet banking is a well-known example of e-business which flourished without the need for digital certificates. A few banks did try to implement certificates, but generally found them difficult to use. Most later reverted to more conventional access control and backend security mechanisms. Yet with hindsight, retail funds transfer transactions did not have an urgent need for PKI, since they could make use of existing backend payment systems. Funds transfer is characterized by tightly closed arrangements, a single relying party, built-in limits on the size of each transaction, and near real-time settlement. A threat and risk assessment would show that access to internet banking can rest on simple password authentication, in exactly the same way as antecedent phone banking schemes.

Trading complexity for specificity

As discussed, orthodox PKI was formulated with the tacit assumption that there is no specific context for the transaction, so the digital certificate is the sole means for authenticating the sender. Consequently, the traditional schemes emphasized high standards of personal identity, exhaustive contracts and unusual legal devices like Relying Party Agreements. They also often resorted to arbitrary 'reliance limits,' which have little meaning for most of the applications listed above. Notoriously, traditional PKI requires users to read and understand certification practice statements (CPS).

All that overhead stemmed from not knowing what the general-purpose digital certificate was going to be used for. On the other hand, if particular digital certificates are constrained to defined applications, then the complexity surrounding their specific usage can be radically reduced.

The role of PKI in all contemporary 'killer applications' is fundamentally to help automate the online processing of electronic transactions between parties with well-defined credentials. This is in stark contrast to the way PKI has historically been portrayed, where strangers Alice and Bob use their digital certificates to authenticate context-free general messages, often presumed to be sent by email. In reality, serious business messages are never sent stranger-to-stranger with no context or cues as to the parties' legitimacy.

Using generic email is like sending a fax on plain paper. Instead, business messaging is usually highly structured. Parties have an expectation that only certain types of transactions are going to occur between them and they equip themselves accordingly (for instance, a health insurance office is not set up to handle tax returns). The sender is authorized to act in defined types of transactions by virtue of professional credentials, a relevant license, an affiliation with some authority, endorsement by their employer, and so on. And the receiver recognizes the source of those credentials. The sender and receiver typically use prescribed forms and/or special purpose application software with associated user agreements and license conditions, adding context and additional layers of security around the transaction.

PKI got smart

When PKI is used to help automate the online processing of transactions between parties in the context of an existing business relationship, we should expect the legal arrangements between the parties to still apply. For business applications where digital certificates are used to identify users in specific contexts, the question of legal liability should be vastly simpler than it is in the general purpose PKI scenario where the issuer does not know what the certificates might be used for.

The new vision for PKI means the technology and processes should be no more of a burden on the user than a bank card. Rather than imagine that all public key certificates are like general purpose electronic passports, we can deploy multiple, special purpose certificates, and treat them more like electronic business cards. A public key certificate issued on behalf of a community of business users and constrained to that community can thereby stand for any type of professional credential or affiliation.

We can now automate and embed the complex cryptography deeply into smart devices -- smartcards, smart phones, USB keys and so on -- so that all terms and conditions for use are application focused. As far as users are concerned, a smartcard can be deployed in exactly the same way as any magnetic stripe card, without any need to refer to - or be limited by - the complex technology contained within (see also Simpler PKI is on the cards). Any application-specific smartcard can be issued under rules and controls that are fit for their purpose, as determined by the community of users or an appropriate recognized authority. There is no need for any user to read a CPS. Communities can determine their own evidence-of-identity requirements for issuing cards, instead of externally imposed personal identity checks. Deregulating membership rules dramatically cuts the overheads traditionally associated with certificate registration.

Finally, if we constrain the use of certificates to particular applications then we can factor the intended usage into PKI accreditation processes. Accreditation could then allow for particular PKI scheme rules to govern liability. By 'black-boxing' each community's rules and arrangements, and empowering the community to implement processes that are fit for its purpose, the legal aspects of accreditation can be simplified, reducing one of the more significant cost components of the whole PKI exercise (having said that, it never ceases to amaze me how many contemporary healthcare PKIs still cling to face-to-face, passport-grade ID proofing, as if that's the only way to do digital certificates).

Fast forward

The preceding piece is a lightly edited version of the article ”Rethinking PKI” that first appeared in Secure Computing Magazine in 2003. Now, over a decade later, we’re seeing the same principles realised by the FIDO Alliance.

The FIDO protocols U2F and UAF enable specific attributes of a user and their smart devices to be transmitted to a server. Inherent to the FIDO methods are digital certificates that confer attributes and not identity, relatively large numbers of private keys stored locally in the users’ devices (and without the users needing to be aware of them as such) and digital signatures automatically applied to protocol messages to bind the relevant attributes to the authentication exchanges.

Surely, this is how PKI should have been deployed all along.

Posted in Security, PKI, Internet, Identity

FIDO Alliance - Update

You can be forgiven if the FIDO Alliance is not on your radar screen. It was launched barely 18 months ago, to help solve the "password crisis" online, but it's already proven to be one of the most influential security bodies yet.

The typical Internet user has dozens of accounts and passwords. Not only are they a pain in the arse, poor password practices are increasingly implicated in fraud and terrible misadventures like the recent "iCloud Hack" which exposed celebrities' personal details.

With so many of our assets, our business and our daily lives happening in cyberspace, we desperately need better ways to prove who we are online – and even more importantly, prove what we are entitled to do there.

The FIDO Alliance is a new consortium of identity management vendors, product companies and service providers working on strong authentication standards. FIDO’s vision is to tap the powers of smart devices – smart phones today and wearables tomorrow – to log users on to online services more securely and more conveniently.

FIDO was founded by Lenovo, PayPal, and security technology companies AGNITiO, Nok Nok Labs and Validity Sensors, and launched in February 2013. Since then the Alliance has grown to over 130 members. Two new authentication standards have been published for peer review, half a dozen companies showcased FIDO-Ready solutions at the 2014 Consumer Electronic Show (CES) in Las Vegas, and PayPal has released its ground-breaking pay-by-fingerprint app for the Samsung Galaxy S5.

The FIDO Alliance includes technology heavyweights like Google, Lenovo, Microsoft and Samsung; payments giants Discover, MasterCard, PayPal and Visa; financial services companies such as Aetna, Bank of America and Goldman Sachs; and e-commerce players like Netflix and Salesforce.com. There are also a couple of dozen biometrics vendors, many leading Identity and Access Management (IDAM) solutions and services, and almost every cell phone SIM and smartcard supplier in the world.

I have been watching FIDO since its inception and reporting on it for Constellation Research. The third update in my series of research reports on FIDO is now available and can be downloaded here. The report looks in depth at what the Alliance has to offer vendors and end user communities, its critical success factors, and how and why this body is poised to shake up authentication like never before.

Posted in Constellation Research, Identity, Security, Smartcards

Safeguarding the pedigree of personal attributes

The problem of identity takeover

The root cause of much identity theft and fraud today is the sad fact that customer reference numbers, personal identifiers and attributes generally are so easy to copy and replay without permission and without detection. Simple numerical attributes like bank account numbers and health IDs can be stolen from many different sources, and replayed with impunity in bogus transactions.

Our personal data nowadays is leaking more or less constantly, through breached databases, websites, online forms, call centres and so on, to such an extent that customer reference numbers on their own are no longer reliable. Privacy consequently suffers, because customers are required to assert their identity through circumstantial evidence, like name and address, birth date, mother’s maiden name and other pseudo secrets. All this data in turn is liable to be stolen and used against us, leading to spiraling identity fraud.

To restore the reliability of personal attribute data, we need to know its pedigree. We need to know that a presented data item is genuine, that it originated from a trusted authority, that it’s been stored safely by its owner, and that it’s been presented with the owner’s consent. If confidence in single attributes can be restored, then we can step back from all the auxiliary proof-of-identity needed for routine transactions, and thus curb identity theft.

A practical response to ID theft

Several recent breaches of government registers leave citizens vulnerable to ID theft. In Korea, the national identity card system was attacked, and it seems that all Korean citizens' IDs will have to be re-issued. In the US, Social Security Numbers are often stolen and used in fraudulent identifications; recently, SSNs of 800,000 Post Office employees appear to have been stolen along with other personal records.

Update 14 June 2015: Now last week we got news of a hugely worse breach of US SSNs (not to mention deep personal records) of four million federal US government employees, when the Office of Personnel Management was hacked.

We could protect people against having their stolen identifiers used behind their backs. It shouldn't actually be necessary to re-issue every Korean's ID, nor to worry that US SSNs aren't usually replaceable. And great improvements may be made to the reliability of identification data presented online without dramatically changing Relying Parties' back-end processes. If for instance a service provider has always used SSN as part of its identification regime, they could continue to do so, if only the actual Social Security Numbers being received were reliable!

The trick is to be able to tell "original" ID numbers from "copies". But what does "original" even mean in the digital world? A more precise term for what we really want is pedigree. What we need is to be able to present attribute data in such a way that the receiver may be sure of their pedigree; that is, know that the attributes were originally issued by an authoritative body, that the data has been kept safe, and that each presentation of the attribute has occurred under the owner's control.

These objectives can be met with the help of smart cryptographic technologies which today are built into most smart phones and smartcards, and which are finally being properly exploited by initiatives like the FIDO Alliance.

"Notarising" attributes in chip devices

There are ways of issuing attributes to a smart chip device that prevent them from being stolen, copied and claimed by anyone else. One way to do so is to encapsulate and notarise attributes in a unique digital certificate issued to a chip. Today, a great many personal devices routinely embody cryptographically suitable chips for this purpose, including smart phones, SIM cards, "Secure Elements", smartcards and many wearable computers.

Consider an individual named Smith to whom Organisation A has issued a unique attribute N (which could be as simple as a customer reference number). If N is saved in ordinary computer memory or something like a magnetic stripe card, then it has no pedigree. Once the number N is presented by the cardholder in a transaction, it has the same properties as any other number. To better safeguard N in a chip device, it can be sealed into a digital certificate, as follows:

1. generate a fresh private-public key pair inside Smith’s chip
2. export the public key
3. create a digital certificate around the public key, with an attribute corresponding to N
4. have the certificate signed by (or on behalf of) organisation A.
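
Here is a minimal sketch of those four steps using the Python cryptography package. The organisation name, the extension OID and the value of N are invented; in practice the key pair would be generated and retained inside Smith's chip, and the signing would be performed by (or on behalf of) Organisation A's certificate authority.

    # Sketch of steps 1-4: a key pair "in the chip", and a certificate signed by
    # Organisation A binding the public key to attribute N. Names, the OID and
    # the value of N are invented for illustration.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID, ObjectIdentifier
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Step 1: fresh key pair (in practice generated and kept inside Smith's chip).
    chip_private_key = ec.generate_private_key(ec.SECP256R1())

    # Organisation A's signing key (in practice held by A's certificate authority).
    org_a_key = ec.generate_private_key(ec.SECP256R1())
    org_a_name = x509.Name([x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Organisation A")])

    # Step 3: build a certificate around the exported public key (step 2), carrying
    # attribute N in a private extension under an invented OID.
    attribute_n = b"customer-ref:12345678"
    certificate = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Smith device key")]))
        .issuer_name(org_a_name)
        .public_key(chip_private_key.public_key())      # step 2: export the public key
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3 * 365))
        .add_extension(x509.UnrecognizedExtension(ObjectIdentifier("1.3.6.1.4.1.99999.1"),
                                                  attribute_n), critical=False)
        .sign(org_a_key, hashes.SHA256())                # step 4: signed by Organisation A
    )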

[Figure: Pedigree diagram]

The result of coordinating these processes and technologies is a logical triangle that inextricably binds cardholder Smith to their attribute N and to a specific personally controlled device. The certificate signed by organisation A attests to both Smith’s entitlement to N and Smith's control of a particular key unique to the device. Keys generated inside the chip are retained internally, never divulged to outsiders. It is not possible to copy the private key to another device, so the logical triangle cannot be reproduced or counterfeited.

Note that this technique lies at the core of the EMV "Chip-and-PIN" system where the smart payment card digitally signs cardholder and transaction data, rendering it immune to replay, before sending it to the merchant terminal. See also my 2012 paper Calling for a uniform approach to card fraud, offline and on. Now we should generalise notarised personal data and digitally signed transactions beyond Card-Present payments into as much online business as possible.

Restoring privacy and consumer control

When Smith wants to present their attribute N in an electronic transaction, instead of simply copying N out of memory (at which point it would lose its pedigree), Smith’s transaction software digitally signs the transaction using the certificate containing N. With standard security software, any third party can then verify that the transaction originated from a genuine chip holding the unique key certified by A as containing the attribute N.
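
Continuing the sketch above (and reusing the chip key, the certificate and Organisation A's key from it), presentation and verification might look like the following; again, the names and data are purely illustrative.

    # Sketch of presentation and verification, continuing the issuance example above
    # (reuses chip_private_key, certificate and org_a_key from that sketch).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import ObjectIdentifier

    transaction = b'{"order": "widgets", "qty": 3}'

    # Smith's software signs the transaction inside the chip; N itself is never copied out.
    txn_signature = chip_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # Any third party holding the transaction, the signature and the certificate can check:
    try:
        # 1. the transaction was signed by the key certified in the certificate ...
        certificate.public_key().verify(txn_signature, transaction, ec.ECDSA(hashes.SHA256()))
        # 2. ... and the certificate binding that key to N was signed by Organisation A.
        org_a_key.public_key().verify(certificate.signature,
                                      certificate.tbs_certificate_bytes,
                                      ec.ECDSA(certificate.signature_hash_algorithm))
        n = certificate.extensions.get_extension_for_oid(
            ObjectIdentifier("1.3.6.1.4.1.99999.1")).value.value
        print("attribute presented with pedigree:", n)
    except InvalidSignature:
        print("presentation rejected")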

Note that N doesn't have to be a customer number or numeric identifier; it could be any personal data, such as a biometric template, or a package of medical information like an allergy alert, or an interesting isolated (and anonymous) property of the user such as their age.

The capability to manage multiple key pairs and certificates, and to sign transactions with a nominated private key, is increasingly built into smart devices today. By narrowing down what you need to know about someone to a precise attribute or personal data item, we will reduce identity theft and fraud while radically improving privacy. This sort of privacy enhancing technology is the key to a safe Internet of Things, and fortunately now is widely available.

Addressing ID theft

Perhaps the best thing governments could do immediately is to adopt smartcards and equivalent smart phone apps for holding and presenting such attributes as official ID numbers. The US government has actually come close to such a plan many times; chip-based Social Security Cards and Medicare Cards have been proposed before, without realising their full potential. These devices would best be used as above to hold a citizen's identifiers and present them cryptographically, without vulnerability to ID theft and takeover. We wouldn't have to re-issue compromised SSNs; we would instead switch from manual presentation of these numbers to automatic online presentation, with a chip card or smart phone app conveying the data through digital signatures.

Posted in Smartcards, Security, PKI, Payments, Identity, Fraud, Biometrics