Let's embrace Identity Plurality

In information security we've been saddled for years with the tacit assumption that deep down we each have one "true" identity, and that the best way to resolve rights and responsibilities is to render that identity as unique. This "singular identity" paradigm has had a profound and unhelpful influence on security and its sub-disciplines like authentication, PKI, biometrics and federated identity management.

Federated Identity is basically a sort of mash-up of the things that are known about us in different contexts. When describing federated identity, its proponents often point out how driver licences are presented to bootstrap a new relationship. But it is a category error to treat this case as an example of Federated ID, because while a licence might prove your identity when joining a video store, it does not persist in that relationship. Instead the individual is given a new identity: that of a video store member.

A less trivial example is your identity as an employee. When you sign on, HR might sight your driver licence to make sure they get your legal name correct. But thereafter you carry a company ID badge - your identity in that context. You do not present your driver licence to get in the door at work.

Federated Identity posits, often implicitly, that we only really need one identity. The "Identity 2.0" movement properly stresses the multiplicity of our relationships, but it usually seeks to hang all relationships off one ID. The beguiling yet utopian OSCON 2005 presentation by Dick Hardt shows vividly how many ways there are to be known (although Hardt went a step too far when he tried to create a single, albeit fuzzy, uber-identity transcending all contexts).

I favour an alternative view: that each of us actually exercises a portfolio of separate identities, and that we switch between them in different contexts. This is not an academic distinction; it makes a real difference to where you draw the line on how much you need to know to establish a unique identity.

I am an authorised signatory to my company's corporate bank account. I happen to hold my personal bank account at the same institution, and thus I have two different key cards from the same bank. Technically, when I bank on behalf of my company, I exercise a different identity than when I bank for myself, even if I am in the same branch or at the same ATM. There is no "federation" between my corporate and personal identities; it is not even sensible to think in terms of my personal identity "plus" my corporate attributes when I am conducting business banking. After all, so much corporate law concerns separating the identity of a company's people from the company itself. And I think this is more than a technicality too because I truly feel like a different person when I'm conducting Lockstep banking compared to personal banking. I think it's because I am two different people.

Kim Cameron's seminal Laws of Identity deliberately promoted the plurality of identity. Cameron included a fresh definition of digital identity as "a set of claims made by one digital subject about itself or another digital subject". He knew that this relativist definition might be unfamiliar, admitting that it "does not jive with some widely held beliefs - for example that within a given context, identities have to be unique".

That "widely held belief" seems to be a special product of the computer age. Before the advent of "Identity Management", we lived happily in a world of plural identities. Each of us could be by turns a citizen, an employee, a chartered professional, a customer, a bank account holder, a credit cardholder, a patient, a club member, another club official, and so on. It was seemingly only after we started getting computer accounts that it occurred to people to think in terms of one "primary" identity threading a number of secondary roles. Conventional Access Control insists on a singular authentication of who I am, followed by multiple authorisations of what I am entitled to do. This principle was laid down by computer scientists in the 1970s.

The idea that we need to establish a true identity before granting access is unhelpful to many modern online services. Consider the importance of confidentiality in "apomediation" (where people seek medical information from non-technical but "expert" fellow patients) and in online psychological counselling. Few will enrol in these important new patient-managed healthcare services if they must identify themselves before being granted an alias. Instead, participants in medical social networking will feel strongly that their avatars' identities are, in and of themselves, real. Likewise, in virtual worlds and online role-playing games, it's conventional wisdom that participants can adopt personae distinctly different from their workaday identities.

Despite the efforts of Kim Cameron and others, and despite the all-too-familiar experience of exercising a range of IDs, the singular identity paradigm has proved hard to shake. In defiance of the plurality that features in the Laws of Identity, most federated identity formulations actually reuse identities across totally unrelated contexts, in order to conveniently hang multiple roles off the one identity.

The old paradigm also explains the surprisingly easy acceptance of biometrics. The very idea of biometric authentication plays straight into the world view that each user has one "true" identity. Yet these technologies are deeply problematic; in practice their accuracy is disappointing; worse, in the event a biometric is ever stolen, it's impossible with any of today's solutions to cancel and re-issue the identity. Biometrics' overwhelming intuitive appeal must be based on an idea that what matters in all transactions is the biological person. But it's not. In most real world transactions, the role is all that matters. Only rarely (such as when investigating fraud) do we go to the forensic extreme of knowing the person.

There are grave risks if we insist on the individual being bodily involved in routine transactions. It would intrinsically link everything we do, inherently and irreversibly violating the most fundamental privacy principle: don't collect personal information when it's not required.

Why are so many people willing to embrace biometrics in spite of their risks and imperfections? It may be because we've been inadvertently seduced by the idea of a single identity.

Posted in Identity, Federated Identity, Culture

For all the talk of ecosystems ...

Yet another breathless report crossed my desk via Twitter this morning, in which the rise of mobile payments is predicted to lead to cards and cash "disappearing", in this case by 2020. Notably, this hyperventilation comes not from a tech vendor but from a "research" company.

So I started to wonder why the success of mobile payments (or any other disruptive technology) is so often framed in terms of winner-take-all. Surely we can imagine new payments modalities being super successful without having to see plastic cards and cash disappear? It might just be that press releases and Twitter tend towards polar language. More likely, and not unrelatedly, it's because a lot of people really think this way.

It's especially ironic given how the term "ecosystem" tops most Buzzword Bingo cards these days. If commentators were to actually think ecologically for a minute they'd realise that the extinction of a Family or Order at the hands of another is very rare indeed.

Posted in Payments, Language, Culture

Understanding biometrics and their necessary fallibility

Most lay people get their understanding of biometrics from watching science fiction movies, where people stare at a camera and money comes out. And unfortunately, some biometrics vendors even use sci-fi films in their sales presentations as if they're case studies. In reality, biometrics just don't work as portrayed.

Here we'll spend just five or ten minutes looking a bit more deeply, to help set realistic expectations of this technology.

In practice, the most important thing about biometrics is their fallibility. Because of the vagaries of human traits and the way they vary from day to day, biometrics have to cope with the same person appearing a little different each time they front up. Inevitably this means that occasionally a biometric system will confuse one person with another. So what? Well, there are two major foibles of all biometrics that go unmentioned by most vendors:

1. There is an inherent trade-off in all biometrics, between their ability to discriminate between different people (specificity) and their ability to properly recognise all users (sensitivity). You can't have it both ways; a system that is very specific will be more inclined to reject a legitimate user, and conversely, a system that never fails to recognise you will also tend to occasionally confuse you with someone else. Yet biometrics vendors often quote their best case False Reject and False Accept figures side by side, as if they're achievable simultaneously (the simple simulation after this list illustrates why they're not).

2. The only way to improve sensitivity and specificity at the same time is to tighten the enrolment and scanning conditions and/or the mathematical models that underpin the algorithms. In other words, to make the systems choosier. This is why really serious biometrics like face recognition for passports and driver licences require stringent lighting conditions and image quality, and why we should be wary of biometrics in mobile devices where there is almost no control over lighting and sound.
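
To make the trade-off concrete, here is a minimal sketch in Python. All the numbers are assumptions chosen for illustration, not data from any real biometric: genuine and impostor comparison scores are modelled as two overlapping bell curves, and a single decision threshold is swept across them.

```python
import random

random.seed(42)

# Hypothetical match-score distributions: genuine comparisons score higher
# on average than impostor comparisons, but the two populations overlap.
genuine  = [random.gauss(0.70, 0.10) for _ in range(100_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(100_000)]

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)    # legitimate users rejected
    far = sum(s >= threshold for s in impostor) / len(impostor) # strangers accepted
    return far, frr

for t in (0.50, 0.55, 0.60, 0.65, 0.70):
    far, frr = rates(t)
    print(f"threshold={t:.2f}  FAR={far:7.3%}  FRR={frr:7.3%}")
```

Raising the threshold makes the system more specific (FAR falls) and less sensitive (FRR climbs); the best-case figures for the two rates occur at opposite ends of the sweep, never at the same setting.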

Uncertainty accumulates

The least technical criticism of biometrics concerns the fallibility of all measurement methods. Cameras, sensors and microphones – like human eyes and ears – are imperfect, and the ability of a biometric authentication system to distinguish between subtly different people is limited by the precision of the input devices.

Even if the underlying biological traits of interest are truly unique, it does not follow that our machinery will be able to measure them faithfully. Take the iris. This biometric is often promoted with the impressive claim that the probability of two individuals' iris patterns matching is one in ten to the power of 78. These are literally astronomical odds; ten to the 78 is of the same order as the number of atoms in the observable universe. Yet does this figure necessarily tell us how accurate the end-to-end biometric system really is? Consider the fact that there are on the order of a hundred billion stars in the Milky Way. If two people look up in the night sky and each pick a star at random, is the probability of a match one in a hundred billion? Of course not, because of the limits of our measurement apparatus, in this case the naked eye. Interference too affects the precision of any measurement; the odds of two people in a big city picking the same star might be no better than one in a hundred.
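
The point can be made with a toy simulation. The numbers below are assumptions for illustration, not real iris statistics: even when the underlying trait space is astronomically large, the rate of apparent matches is governed by how coarsely the instrument resolves it.

```python
import random

random.seed(1)

TRAIT_SPACE = 10**10  # distinct underlying patterns (the "stars")
RESOLUTION  = 100     # distinct readings the instrument can actually produce

def measure(trait):
    # The instrument collapses the vast trait space into a few coarse buckets.
    return trait % RESOLUTION

trials, collisions = 100_000, 0
for _ in range(trials):
    a = random.randrange(TRAIT_SPACE)
    b = random.randrange(TRAIT_SPACE)
    if a != b and measure(a) == measure(b):
        collisions += 1

print(f"different traits, indistinguishable readings: {collisions / trials:.2%}")
```

Despite one-in-ten-billion odds on the underlying traits, roughly one pair in a hundred produces the same reading, because the instrument, not the trait, sets the effective odds.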

The Sensitivity-Specificity tradeoff: False Positives and False Negatives

Biometric authentication entails a long chain of processing steps, all of which are imperfect. Each step introduces a small degree of uncertainty, as shown in the schematic below. Uncertainty is inescapable even before the first processing step, because the body part being measured can never appear exactly the same. The angle and pressure of a finger on a scanner, the distance of a face from a camera, the tone and volume of the voice, the background noise and lighting, and the cleanliness of a lens all change from day to day. A biometric system cannot afford to be too sensitive to subtle variations, or else it can fail to recognise its target; a biometric must tolerate variation in the input, and inevitably this means the system can sometimes confuse its target with someone else.

[Figure: schematic showing uncertainty accumulating through each step of the biometric processing chain]


Therefore all biometric systems inevitably commit two types of error:

1. A “False Negative” is when the system fails to recognise someone who is legitimately enrolled. False Negatives arise if the system cannot cope with subtle changes to the person’s features, the way they present themselves to the scanner, slight variations between scanners at different sites, and so on.

2. A “False Positive” is when the system confuses a stranger with someone else who is already enrolled. This may result from the system being rather too tolerant of variability from one day to another, or from site to site.

False Positives and False Negatives are inescapably linked. If we wish to make a given biometric system more specific – so that it is less likely to confuse strangers with enrolled users – then it will inevitably become less sensitive, tending to wrongly reject legitimate enrolled users more often.

The following schematics illustrate how a highly specific biometric system tends to commit more False Negatives, while a highly sensitive system exhibits relatively more False Positives.

[Figure: schematics contrasting a highly specific system (relatively more False Negatives) with a highly sensitive system (relatively more False Positives)]


A design decision has to be made when implementing biometrics as to which type of error is less problematic. Where stopping impersonation is paramount, such as in a data centre or missile silo, a biometric system would be biased towards false negatives. Where user convenience is rated highly and where the consequences of fraud are not irreversible, as with Automatic Teller Machines, a biometric might be biased more towards false positives. For border control applications, the sensitivity-specificity trade-off is a very difficult problem, with significant downsides associated with both types of error – either immigration security breaches, or long queues of restless passengers.

Any biometric system, in principle at least, can be tuned towards higher sensitivity or higher specificity, depending on the overall desired balance of security versus convenience. The performance at different thresholds is conventionally shown by a "Detection Error Tradeoff" (DET) curve.
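
By way of illustration, a DET curve can be traced by sweeping the threshold in the same toy score model used earlier (again, the distributions are assumptions, not measurements of any product):

```python
import random

random.seed(42)
genuine  = [random.gauss(0.70, 0.10) for _ in range(100_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(100_000)]

# Each threshold yields one (FAR, FRR) operating point; the set of points
# traced out as the threshold sweeps is the DET curve.
det_curve = []
for i in range(30, 91, 5):
    t = i / 100
    far = sum(s >= t for s in impostor) / len(impostor)
    frr = sum(s < t for s in genuine) / len(genuine)
    det_curve.append((t, far, frr))

for t, far, frr in det_curve:
    print(f"threshold={t:.2f}  FAR={far:8.4%}  FRR={frr:8.4%}")
```

A shipped product is typically locked to one point on this curve; the curve itself shows every balance of security and convenience the technology could offer.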

Biometrics vendors tend to keep their DET curves confidential, and usually release commercial solutions where the ratio of False Accept Rate (FAR) to False Reject Rate (FRR) is fixed. The following DET curves are over ten years old but they remain some of the few examples that are publicly available, and they usefully compare several biometric technologies side by side.

[Figure: DET curves comparing several biometric technologies side by side]


Ref: "Biometric Product Testing Final Report" Issue 1.0, 2001 by the UK Government Communications Electronics Security Group (CESG).

Vendors occasionally specify the "Equal Error Rate" for their solutions. It's important to understand what this spec is for. No real world biometric that I'm aware of is deployed with FAR and FRR tuned to be the same. Instead, the EER should be used as a benchmark for broadly comparing different technologies.

EER provides another useful ready reckoner. If a vendor specifies, for example, FAR = 0.0001% and FRR = 0.01%, and yet you find that the EER is, say, 1% (that is, greater than both the quoted FAR and FRR), then you know that the vendor is quoting best case figures that cannot be realised simultaneously. Just look at the DET curves above. When the False Accept Rate is 0.1% (i.e. false positives of 1 in 1,000), the False Reject Rate ranges from at least 5% to as much as 30%. And we can see that an FAR of 0.0001% is really extreme; for most biometrics, such specificity leads to False Rejects of one in two or worse, rendering the solution unusable.
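
That ready reckoner is easy to automate. A small sketch, using the hypothetical vendor figures quoted above:

```python
# Hypothetical vendor claims (from the example above, not a real datasheet).
quoted_far = 0.000001  # 0.0001%, best-case False Accept Rate
quoted_frr = 0.0001    # 0.01%, best-case False Reject Rate
quoted_eer = 0.01      # 1%, Equal Error Rate

# At the EER operating point, FAR = FRR = EER. Because the DET curve is
# monotonic (FAR falls only as FRR rises), no single threshold can put both
# rates below the EER at once.
if quoted_eer > quoted_far and quoted_eer > quoted_frr:
    print("Inconsistent: the quoted FAR and FRR must come from "
          "two different tunings of the system.")
else:
    print("The quoted figures could plausibly share one operating point.")
```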

Failure To Enrol

Over and above the issues of False Positives and False Negatives is the unfortunate fact that not everyone will be able to enrol in a given biometric authentication system. At its extremes, this reality is obvious: individuals with missing fingers, or a severe speech impediment for example, may never be able to use certain biometrics.

However, failure to enrol has a deeper significance for more normal users. To minimise False Positives and False Negatives at the same time (as illustrated in the next figure), a biometric method generally must tighten requirements on the quality of its input data. A fingerprint scanner for instance will perform better on high definition images, where more fingerprint features can be reliably extracted. If a fingerprint detector sets a relatively stringent cut-off for the quality of the image, then it may not be possible to enrol people who happen to have inherently faint fingerprints, such as the elderly, or those with particular skin conditions.

[Figure: DET curve illustrating the Equal Error Rate]
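
The effect of a quality cut-off can be sketched in the same toy style as before. The quality distribution and thresholds here are assumptions for illustration, not measurements of any scanner:

```python
import random

random.seed(7)

# Hypothetical image-quality scores across a population; elderly users and
# certain skin conditions would skew toward the faint (low-quality) end.
population = [random.gauss(0.6, 0.15) for _ in range(100_000)]

for cutoff in (0.3, 0.4, 0.5):
    fte = sum(q < cutoff for q in population) / len(population)
    print(f"quality cut-off={cutoff:.1f}  failure-to-enrol rate={fte:5.1%}")
```

The stricter the quality gate, the better the matching accuracy for those enrolled, and the larger the minority who cannot be enrolled at all.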


More subtle still is the effect of modelling assumptions within biometric algorithms. In order to make sense of biological traits, the algorithm has to have certain expectations built into it as to how the features of interest generally appear and how those features vary across the population; after all, it is the quantifiable variation in features which allows for different individuals to be told apart. Therefore, face and voice recognition algorithms in particular might be optimised for the statistical characteristics of certain racial groups or nationalities, making it difficult for people from other groups to be enrolled.

The impossibility of enrolling 100% of the population into any biometric security system has important implications for public policy. Clearly there can be at least the perception of discrimination against certain minority groups, if factors like age, foreign accent, ethnicity, disabilities, and/or medical conditions impede the effectiveness of a biometric system. And careful consideration must be given to what fall-back security provisions will be offered to those who cannot be enrolled. If there is a presumption that a biometric somehow provides superior security, then special measures may be necessary to provide equivalent security for the un-enrolled minority.

Posted in Biometrics