Lockstep


Taking stock of the IdM scene

An awkward fracas has broken out in the identity standards community over the process that led to the drafting and approval of OAuth 2.0. I've participated in many standards committees myself, and I agree they're difficult environments, populated by self-selected and often implacably paternalistic technical specialists, with processes much complicated by corporate interests.

The arguments over committee machinations and the protocol itself concern arcane stuff that goes on under the hood of identity management. So the fuss doesn't really matter much at all ... except that, as I've said before, a bigger and starker problem meanwhile goes unremarked; namely, there is something amiss with the very idea of Federated Identity!

To recap ...

Framing identity management

While I wish to avoid semantic debates, it's worth reminding ourselves that digital identity is a metaphor. Modern identity management theory holds that digital identity is all about assertions or claims.

My digital identity as an employee is manifest as a set of claims (employee number, job role etc) that substantiate my relationship with my employer. Different sets of claims (e.g. account numbers, credit card numbers) evince my relationships with various banks, creating distinct Digital Identities meaningful in different contexts. Each of our metaphorical Digital Identities is a proxy for a different relationship we have with some service provider in a certain context.
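
To make the metaphor concrete, here is a minimal sketch (in Python, with invented issuers and claim values) of the same person carrying different claim sets in different contexts:

# A minimal sketch, with invented names and values, of one person represented
# by different claim sets in different contexts. Each dictionary is a distinct
# "Digital Identity": a proxy for a relationship with one service provider.

employee_identity = {
    "issuer": "Acme Pty Ltd",         # the employer vouching for these claims
    "claims": {
        "employee_number": "E-10432",
        "job_role": "Payroll Officer",
    },
}

banking_identity = {
    "issuer": "Example Bank",         # a bank vouching for a different relationship
    "claims": {
        "account_number": "123-456-789",
        "card_number_last4": "4321",
    },
}

# The two identities describe the same human being, but neither issuer's
# claims are meaningful - or warranted - in the other's context.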

Understanding Digital Identities in terms of relationships ought to illuminate the subtle complexity in identity federation. Re-using identities sounds simple, but if we re-frame the objective in terms of re-using relationships, the challenges become more self-evident.

Unpacking the identity management problem

What problems are we trying to solve with a new, overarching approach to identity management? I contend there are at least three major, separable problems, which the ambitious visions of federated identity tend to bundle together:

1. "Identity theft": IDs in simple alphanumeric form (and personally identifying data in general) are vulnerable to theft and replay, because, unaided, computers cannot readily tell the difference between original data and copies.

2. Ease of Use: Authentication technologies have multiplied beyond reason and have become hard to use en masse or even individually. This is the "token necklace" problem.

3. Burden of Registration: There has been an explosion in the number of separate registrations required by online services. Many social media logons are trivial, as there is no significant asset at risk in the event of misidentification. A separate issue under this heading is the mounting demand for purely online registration (at virtual banks, for example) without in-person identity proofing.

And while it's not a problem per se, a fourth driver in many federated identity schemes is the perceived opportunity for established service providers to leverage their special knowledge of their customers into general purpose identities those customers can use elsewhere.

Beware extrapolating from social login

The low risk, low friction social media environment first spawned OpenID, and then led to social login via Facebook, Twitter and many more networks. Social login is the best and perhaps the only really good example of federated identity. And it has inspired key elements of NSTIC; as White House cybersecurity adviser Howard Schmidt blogged at the launch of NSTIC: "imagine that a student could get a digital credential from her cell phone provider and another one from her university and use either of them to log-in to her bank, her e-mail, her social networking site, and so on".

It's an alluring user experience but the social login model is divorced from real world liability arrangements that underpin the very much more serious business of telcos, universities and banks. These sorts of identities are issued by IdPs that don't know who you are, and they're used by RPs that don't care who you are.

There is a significant gap in most federated identity work to date between the easy intuition of being able to "share identities" and the reality of taking one party's knowledge of a user in a certain context and parlaying it into other contexts and applications over which the "identity provider" has no control.

Beware of designed ecosystems

The so-called identity ecosystem of NSTIC, Kantara, the Laws of Identity and so on has not evolved naturally but rather has been designed. Artificial ecosystems tend not to be as robust as ones that evolve naturally. This is not merely an academic distinction:

A. The model embodies a number of tacit assumptions and biases that should be made explicit. For instance, there is a widespread assumption that identity silos are inherently undesirable and should be done away with. Another assumption is that the abstractly different roles of IdPs and RPs really ought to be separated in practice, with banks, telcos, universities, governments and so on all sharing one another's identities.

B. Liability arrangements and risk management mechanisms have evolved naturally over many years in real world business ecosystems, where different sets of rules have come to suit different niches. The total cost (including potential legislation) of forcing changes to existing liability arrangements to suit the federated identity world view turns out to be high, and unbounded unless secondary use of identities is carefully constrained (which erodes the federation proposition).

C. The identity ecosystem is relatively new; nobody yet knows if it is sustainable. The troubled history of conventional PKI, and the difficulty in getting many federated identity schemes off the ground, suggests that pure play IdPs are going to be hard to sustain.

What do I think should be done?

1. Carefully review the various failings in this space. There is still no satisfactory unifying explanation for the surprising failures of so many worthwhile projects and products. I would like to see fresh research into the reasons any or all of the following initiatives foundered: Sxip, Microsoft's Cardspace, the Internet Industry Association (IIA) authentication hub proposal, and the Australian banking sector's Trust Centre and MAMBO projects.

2. Make incremental progress on pressing issues. For the most part, businesses and governments do a reasonably good job of identifying people in the real world and establishing their bona fides. The most acute identity problem is a technological one: digital identity data is too vulnerable to replay. Moreover, this technological problem is relatively easy to fix, by automatically digitally signing routine transactions using tamper-resistant keys held in easy-to-use secure portable media (chip cards, smart phones and the like).
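
By way of illustration only, here is a minimal sketch of that fix using the open source Python cryptography package. The transaction format is a hypothetical stand-in, and in a real deployment the private key would be created and held inside tamper resistant hardware rather than in software as it is here:

# Sketch of signing a routine transaction so that replayed or synthesised
# data can be detected. Uses the 'cryptography' package; in practice the
# private key would live in a chip card or phone secure element.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair created once at registration; the private key never leaves the device.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# A routine transaction, serialised canonically (illustrative fields only).
transaction = b'{"card": "1234-5678-9012-3456", "amount": "99.95", "nonce": "a1b2c3"}'

# The device signs automatically; the cardholder just transacts as usual.
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The merchant or issuer verifies the signature against the registered public key.
try:
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
    print("Transaction is genuine and unmodified")
except InvalidSignature:
    print("Replayed or tampered transaction - reject")

Because each transaction carries a fresh signature made with a key that never leaves the device, stolen identity data on its own is worthless for replay.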

Posted in Identity, Federated Identity

The double standard in biometrics analysis

The reverse engineering of biometric iris templates reported at Black Hat this month has attracted deserved attention. Iris now joins face and fingerprint as modalities that have been reverse engineered; that is, it has proved possible to synthesise an image which, when processed by the algorithm in question, produces a match against a target template.
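
To see what reverse engineering means in principle, think of it as a guided search against a black-box matcher: perturb a candidate image, keep any change that raises the match score, and stop when the score crosses the acceptance threshold. The toy hill-climbing sketch below illustrates the idea only; the matcher, threshold and "image" size are hypothetical stand-ins, and the actual research used far more sophisticated search against real commercial algorithms:

import random

# Toy hill-climbing sketch of template reverse engineering. The matcher,
# threshold and image size are all hypothetical stand-ins.

IMAGE_SIZE = 64     # a tiny "image" represented as a flat vector
THRESHOLD = 0.98    # hypothetical acceptance threshold

def match_score(candidate, template):
    # Stand-in matcher: similarity between a candidate and the enrolled template.
    diff = sum(abs(c - t) for c, t in zip(candidate, template)) / len(template)
    return 1.0 - diff

def synthesise_match(template, iterations=50_000):
    candidate = [random.random() for _ in range(IMAGE_SIZE)]   # random starting image
    best = match_score(candidate, template)
    for _ in range(iterations):
        i = random.randrange(IMAGE_SIZE)
        old = candidate[i]
        candidate[i] = random.random()          # perturb one element
        score = match_score(candidate, template)
        if score > best:
            best = score                        # keep the improvement
        else:
            candidate[i] = old                  # revert the change
        if best >= THRESHOLD:
            break                               # synthetic image now "matches"
    return candidate, best

if __name__ == "__main__":
    enrolled_template = [random.random() for _ in range(IMAGE_SIZE)]  # the compromised template
    fake_image, score = synthesise_match(enrolled_template)
    print(f"best match score reached: {score:.3f}")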

The biometrics industry reacts to these sorts of results in a way unbefitting serious security practitioners.

Take for instance Securlinx CEO Barry Hodge's comment on the iris attack: "All of these articles obsessing over how to spoof a biometric are intellectually interesting but in the practical application irrelevant".

But nobody should belittle the significance of these sorts of results - especially when no practical biometric can be revoked and reissued after compromise.

Mr Hodge, security is an intellectually challenging field. Let's compare the biometrics industry's complacency with the way serious security professionals responded to the problems discovered in the SHA-1 hash algorithm.

Ideal hash algorithms are supposed to produce digest values that vary in a truly random way under any variation to the input data. Any ability to predict how a digest varies could conceivably lead to a number of attack scenarios, including ones where digitally signed data might be tampered with without invalidating the signature. The only way to attack an ideal hash algorithm is by brute force: if an attacker wishes to synthesise a piece of data that produces a given target hash value (strictly speaking, a "preimage") they have to work their way through all possible permutations. For a 160 bit hash value, this brute force task takes on the order of 2 to the power of 159 trials, which would be beyond the power of all the world's computers running for millions of years.
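
To get a feel for the scale, here is a small illustrative sketch using Python's standard hashlib. It brute forces a match on just the first 20 bits of a SHA-1 digest, because matching all 160 bits is utterly infeasible:

import hashlib
import itertools

# The digest of an ideal hash function varies unpredictably with any change
# to the input ...
print(hashlib.sha1(b"pay $100 to Alice").hexdigest())
print(hashlib.sha1(b"pay $101 to Alice").hexdigest())

# ... so the only generic way to hit a given target digest is brute force.
# Matching just the first 20 bits (5 hex digits) takes around a million
# guesses on average; matching all 160 bits would take on the order of
# 2**159 guesses.
target_prefix = hashlib.sha1(b"pay $100 to Alice").hexdigest()[:5]

for counter in itertools.count():
    if hashlib.sha1(str(counter).encode()).hexdigest()[:5] == target_prefix:
        print(f"20-bit partial preimage found after {counter + 1:,} guesses")
        break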

In 2005, Chinese academic cryptologists discovered a weakness in the SHA-1 algorithm that, under some circumstances, reduces the number of trials needed to find collisions - pairs of different inputs that produce the same digest - by brute force. The researchers did not reduce the workload by very much, and they did not demonstrate any actual attack. No one seriously feared that this work would produce a practical exploit, and in the following eight years there has still been no report of an attack on SHA-1.

However, cryptographers, security strategists and policy makers worldwide were shaken by the SHA-1 research. They were deeply worried, intellectually, that a digest algorithm could have a structural weakness that compromises its randomness. It meant that the cryptographic community did not understand SHA-1 as well as it thought. And the policy response was swift: the US government promoted migration to the stronger SHA-2 family, which is now being promulgated globally, and sponsored a competition for a new digest algorithm, which ultimately yielded SHA-3.

This is good security practice at work. Academics continuously work away at stressing existing techniques and uncovering weaknesses. All verified academic weaknesses are taken seriously, and where critical security infrastructure is involved - even when no practical attack has yet been seen - the solutions are reviewed and upgraded, so that we stay ahead of our adversaries.

In stark contrast, biometrics advocates seem to fall back on a variation of the Bart Simpson Defence, namely, "I didn't do it; nobody saw me do it; you can't prove a thing".

Over the past few years I've watched biometrics proponents claim, firstly, that templates cannot be reverse engineered. They then qualified their position, saying that certain types of biometrics are "practically impossible" to reverse. And now Mr Hodge is saying it doesn't really matter if they are reversed.

There is no disaster recovery plan in biometrics; they cannot be cancelled and reissued, so of course their advocates cling to the idea they cannot be compromised. And with that attitude they further distinguish themselves in infosec, for no one else ever acts as though their technology is perfect.

Posted in Security, Biometrics

Card fraud in Australia even worse than feared

Seasoned security analysts know the card fraud trends, but the latest stats in Australia are surprisingly bad.

The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months, covering the preceding 12-month period. Lockstep monitors these figures, crunches them and plots the trend data.

Here's the latest picture of Australian payment card fraud growth over the past six calendar years CY2006-11.

[Figure: Australian CNP fraud trends to CY2011]

For the first time in many years, card fraud has grown in all categories at once. The ratio of Card Not Present (CNP) fraud to all card fraud remained steady at just under three quarters. Any up-turn in skimming and counterfeiting is surprising given the strong penetration of chip-and-PIN cards in Australia, although most ATMs here still read the magnetic stripe and so remain vulnerable to carding. Still, CNP fraud remains the preferred MO of organised crime, and its cost grew by 61% from 2010 to 2011.
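
For what it's worth, the kind of crunching described above takes only a few lines of Python. The file name and column layout below are hypothetical stand-ins for the half-yearly APCA figures:

import csv

# Sketch of crunching the APCA card fraud figures. The CSV name and columns
# are hypothetical: year, cnp_fraud, skimming_counterfeit, other_fraud.

def load_fraud_figures(path="apca_card_fraud.csv"):
    with open(path, newline="") as f:
        return [
            {
                "year": int(row["year"]),
                "cnp": float(row["cnp_fraud"]),
                "skimming": float(row["skimming_counterfeit"]),
                "other": float(row["other_fraud"]),
            }
            for row in csv.DictReader(f)
        ]

def summarise(rows):
    prev_total = None
    for row in rows:
        total = row["cnp"] + row["skimming"] + row["other"]
        cnp_share = row["cnp"] / total
        growth = (total / prev_total - 1) if prev_total else None
        prev_total = total
        yield row["year"], cnp_share, growth

if __name__ == "__main__":
    for year, cnp_share, growth in summarise(load_fraud_figures()):
        growth_str = f"{growth:+.0%}" if growth is not None else "n/a"
        print(f"CY{year}: CNP share {cnp_share:.0%}, total fraud growth {growth_str}")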

"Innovation" is a topical notion in Australian payments systems circles, but for the most part innovation is confined to back end systemic improvements to interbank settlements. Regulators take a light touch on the user side. The market is fostering innovative payments applications in mobile devices, but so far, security still proves to be too hard. APCA's only position on security is to wait and see what happens when 3D Secure comes to Australia. Given that nothing has stood in its way, and CNP fraud is doubling every two years, the very absence of 3D Secure here should be worrying to the regulators.

For more information about Lockstep Technologies' R&D in CNP payments security, see our recent blogs Killing two birds with one chip and CNP fraud is just online carding.

Posted in Security, Payments, Fraud