
Surfacing identity

The metaphor of a spectrum is often used in today's identity discourse to describe a scale of knowingness. The degree to which someone is known is shown to range from zero (anonymity) up to some maximum ("verified identity"), passing through pseudonymity and self-asserted identity along the way. It's a useful way of characterising some desirable features of identity management, but it's something of an oversimplification, and it contradicts modern risk management. While it's great to legitimise the plurality of identities (by illustrating how we can maintain several identities at different points on a spectrum), the metaphor is problematic. Spectra are linear, with just one independent variable, whereas risk management is multi-dimensional. The metaphor implies that identities can be ordered from weak to strong. They can't.

A digital identity is a set of assertions [Ref: Laws of Identity] that are meaningful in some context. When an Identity Provider (IdP) identifies me in their context, what they're doing is testing and vouching for a closed set of n assertions: {A1, A2, ..., An}. When a Relying Party (RP) wants to use my identity, they need to be satisfied about a number of assertions relevant to their business; let's say there are m of them: {A'1, A'2, ..., A'm}.

Federation requires, at the very least, that (1) the RP's m assertions are a subset of the IdP's n assertions, and (2) the IdP has tested each assertion to the right level of confidence for the RP's purposes. When designing a federation, the sets of assertions for all anticipated RPs need to be defined in advance, together with the required confidence levels. Closing the problem space and quantifying all its dimensions is a huge challenge.
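
To make the coverage condition concrete, here's a minimal sketch in Python (the data structures and names are my own illustration, not any federation standard): an identity is modelled as a mapping from assertion names to the confidence with which the IdP vouches for them, and an RP is covered only when both conditions hold.

    # Illustrative only: an IdP's assertions modelled as a dict mapping
    # assertion names to the confidence (0.0 to 1.0) with which the IdP
    # has tested and can vouch for each one.
    def covers(idp_assertions, rp_requirements):
        """Condition 1: every assertion the RP needs is one the IdP tests.
        Condition 2: the IdP's confidence meets the RP's required minimum."""
        return all(
            name in idp_assertions and idp_assertions[name] >= minimum
            for name, minimum in rp_requirements.items()
        )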

When we look at identification risk management in a more multi-dimensional way, each identity looks less like a point on a one-dimensional line and more like a surface in a multi-dimensional space. For example, let's imagine that a general purpose IdP ascertains and vouches for six assertions: given name, home address, date of birth, gender, educational qualifications and residency. The IdP gauges the accuracy with which it can make each assertion as follows:

[Figure: the IdP's identity surface]

A1  Given name      90%
A2  Address         90%
A3  DOB             90%
A4  Gender          35%
A5  Qualifications  25%
A6  Residency       25%


For this Identity Provider to be useful to any given Relying Party, the assertions need to be of interest to the RP, and they have to be asserted with a minimum accuracy. Consider RP1, a bank, which needs to be sure of a customer's name, address and date of birth to at least 80% confidence under applicable KYC rules, and doesn't need to know anything else. We can plot RP1's identity expectation and compare it with the IdP's assertions. All well and good in this case, for the IdP covers the RP:

[Figure: identity surface vs RP1]


Now consider RP2, an adult social networking service. All it wants to know is that its anonymous customers are at least 18 years of age. Its requirement for assertion A3 (date of birth) is 90% confidence, and it doesn't care about anything else. So again, the IdP meets the needs of this RP (assuming that the identity management technology allows for selective disclosure of just the relevant assertion and hides all the others):

[Figure: identity surface vs RP2]


Finally, let's look at a hospital employing a casual doctor. Credentialing rules and malpractice risk mean that the hospital is more interested in the individual's qualifications and residency (which must be known with 90% confidence) than in their name and address (50%). And now we see that RP3's requirements are not covered by this particular IdP:

[Figure: identity surface vs RP3]
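
Putting the three worked examples together, a rough sketch using the covers() function from earlier (the figures are the ones quoted above; the variable names are mine):

    # The general purpose IdP's six assertions, from the table above.
    idp = {"given_name": 0.90, "address": 0.90, "dob": 0.90,
           "gender": 0.35, "qualifications": 0.25, "residency": 0.25}

    rp1_bank     = {"given_name": 0.80, "address": 0.80, "dob": 0.80}
    rp2_adult    = {"dob": 0.90}
    rp3_hospital = {"qualifications": 0.90, "residency": 0.90,
                    "given_name": 0.50, "address": 0.50}

    print(covers(idp, rp1_bank))      # True  -- the bank is covered
    print(covers(idp, rp2_adult))     # True  -- with selective disclosure of DOB only
    print(covers(idp, rp3_hospital))  # False -- qualifications and residency fall short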


Returning to the idea of a spectrum, there is no sliding scale from anonymity up to "full" identity. Neither can trust in an identity be pinpointed somewhere between LOA 1 and LOA 4. In general, the more serious an identity gets, the more complex and multivariate is the set of assertions that it covers. I'm afraid the pseudonymous social logon experience at LOA 1 doesn't pave the way to more serious multifaceted identity federation "at the other end" of a spectrum. It's not like simply turning up the heat to step up from cold to hot.

Posted in Identity, Federated Identity, Trust

Memetic engineering our identities

This blog post builds a little further on my ecological ideas about the state of digital identity, first presented at the AusCERT 2011 conference. I have submitted a fresh RSAC 2013 speaker proposal where I hope to show a much more fully developed memetic model.

The past twenty years have seen a great variety of identity methods and devices emerge in the digital marketplace. In parallel, Internet business in many sectors has developed under the existing metasystems of laws, sectoral regulations, commercial contracts, industry codes, and traditional risk management arrangements.

[Figure: the variety of identity methods]


As with Darwin's finches, the very variety of identity methods suggests an ecological explanation. It seems most likely that different methods have evolved in response to different environmental pressures.

The orthodox view today is that we are given a plurality of identities by the many organisations we do business with. Our bank account is thought to be a discrete identity, as is our employment, our studentship, our membership of a professional body, and our belonging to a social network. Identity federation seeks to take an identity out of its original context and present it in another, so that we can strike up new relationships without having to repeat the enrolment processes. But in practice, established identities are brittle; they don't bend easily to new uses unanticipated by their original issuers. Even superficially similar identities are not readily relied upon, because of the contractual fine print. Famously in Australia, one cannot open a fresh bank account on the basis of having an existing account at another bank, even though their identification protocols are essentially identical under the law. Similarly, government agencies have historically struggled to cross-recognise each other's security clearances.

I have come to the conclusion that we have abstracted "identity" at too high a level. We need to drop down a level or two and make smarter use of how identities are constructed. It shouldn't be hard to do; we have a lot of the conceptual apparatus already. In particular, one of the better definitions of digital identity holds that it is a set of assertions or claims [Ref: The Laws of Identity]. Instead of federating rolled-up high level identities, we would have an easier time federating selected assertions.

Now, generalising beyond the claims and assertions, consider that each digital identity is built from a broad ensemble of discrete technological and procedural traits, spanning such matters as security techniques, registration processes, activation processes, identity proofing requirements (which are regulated in some industries like banking and the healthcare professions), user interface, algorithms, key lengths, liability arrangements, and so on. These traits together with the overt identity assertions -- like date of birth, home address and social security number -- can be seen as memes: heritable units of business and technological "culture".

[Figure: IEEE Part B diagram (2.0)]


The ecological frame leads us to ask: where did these traits come from? What forces acted upon the constituent identity memes to create the forms we see today? Well, we can see that different selection pressures operate in different business environments, and that memes evolve over time in response. Examples of selection pressures include fraud, privacy (with distinct pressures to both strengthen and weaken privacy playing out before our eyes), convenience, accessibility, regulations (like Basel II, banking KYC rules, medical credentialing rules, and HSPD-12), professional standards, and new business models like branch-less banking and the associated Electronic Verification of Identity. Each of these factors shifts over time, usually moving in and out of equilibrium with other forces, and the memes shift too. Successful memes -- where success means that some characteristic like key length or number of authentication factors has proven effective in reducing risk -- are passed on to successive generations of identity solutions. The result is that at any time, the ensemble of traits that makes up an "identity" in a certain context represents the most efficient way to manage misidentification risks.

The "memome" of any given rolled-up identity -- like a banking relationship for instance -- is built from all sorts of ways doing things, as illustrated. We can have different ways of registering new banking customers, checking their bona fides, storing their IDs, and activating their authenticators. Over time, these component memes develop in different ways, usually gradually, as the business environment changes, but sometimes in sudden step-changes when the environment is occasionally disrupted by e.g. a presidential security directive, or a w business model like branch-less banking. And as with real genomes, identity memes interact, changing how they are expressed, even switching each other on and off.

As they say, things are the way they are because they got that way.

I reckon that before we try to make identities work in contexts they were not originally intended for, we need first to understand the evolutionary back story of each identity meme, and the forces that shaped it to fit certain niches in business ecosystems. Then we may be able to do memetic engineering quite literally, adapting a core set of relationships to new business settings.

The next step is to rigorously document some of the back stories, and to see if the "phylomemetics" really hangs together.

Posted in Science, Identity, Federated Identity

Is quantum computing all it's cracked up to be?

Quantum computing continues to make strides. Researchers have now made a chip to execute Shor's quantum factorisation algorithm. Until now, quantum computers were built from bench-loads of apparatus and had yet to be fabricated in solid state. So this is pretty cool, taking QC from science into engineering.

The promise of quantum computing is that it will eventually render today's core cryptography obsolete, by making it possible to factorise large numbers very quickly. For now, the RSA algorithm is effectively unbreakable because its keys are the products of prime numbers hundreds of digits long. The product of two primes can be computed in a split second; but to find the factors by brute force -- and thus crack the code -- takes billions of computer-years.
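
A toy illustration of the asymmetry, using primes far smaller than the hundreds-of-digits primes RSA actually uses:

    import time

    p, q = 1000003, 1000033   # small primes; real RSA primes are vastly larger
    n = p * q                 # multiplication is instantaneous at any size

    # Brute-force factoring by trial division: already sluggish at seven
    # digits, and utterly hopeless at the sizes RSA actually uses.
    start = time.time()
    factor = next(d for d in range(2, n) if n % d == 0)
    print(factor, "found in", time.time() - start, "seconds")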

I'm curious about one thing. Current prototype quantum computers are built with just a few qubits because of the 'coherence' problem (so they can only factorise little numbers like 15 = 3 x 5). The machinery has to hold all the qubits in a state of quantum uncertainty for long enough to complete the computation. The more qubits there are, the harder it is to maintain coherence. The task ahead is to scale up past the proof-of-concept stage to manage a few thousand qubits, and thus be able to crack 2048-bit RSA keys, for instance.

Evidently it's hard to build, say, a 1,000-qubit quantum computer right now. So my question is: what is the relationship between the difficulty of maintaining coherence and the number of qubits involved? Is it exponentially difficult?

Because if it is, then the way to stay ahead of quantum computing attacks might be simply to go out to RSA keys tens of thousands of digits long.
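
Back-of-envelope only, and resting on the commonly cited estimate that Shor's algorithm needs on the order of 2n logical qubits to factor an n-bit modulus (actual resource counts vary by construction): the attacker's qubit target grows only linearly with key size, while the defender's cost of using a bigger key grows only polynomially, so if coherence gets exponentially harder per qubit, key growth wins the race.

    # Rough sketch: ~2n logical qubits to factor an n-bit RSA modulus
    # (a common textbook estimate; exact counts vary by construction).
    for key_bits in (2048, 16384, 131072):   # 131072 bits is ~39,000 decimal digits
        print(f"{key_bits}-bit key -> ~{2 * key_bits} logical qubits")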

Posted in Security, Science

A response to M2SYS on reverse engineering

M2SYS posted on their blog a critique of the recent reverse engineering of iris templates. In my view, they misunderstand or misrepresent the significance of this sort of research. Their arguments merit rebuttal but the M2SYS blog is not accepting comments, and they seem reluctant to engage on these important issues on Twitter.

Here below is what I tried to post in response.

See also my post about the double standard in how biometrics proponents treat adverse research in comparison with serious cryptographers.

"You're right that reporting of the Black Hat results should not overstate the problem. By the same token, advocates for biometrics should be careful with their balance too. For example, is it fair to say as you do that biometrics are 'nearly impossible' to reverse engineer? And should Securlinx's Barry Hodge play down the reverse engineering as only 'intellectually interesting'?

"The point is not that iris scanning will suddenly be defeated left and right -- you're right the practical risk of spoofing is not widespread nor immediate. But this work and the publicity it attracts serves a useful purpose if it fosters more critical thinking. Most lay people out there get their understanding of biometrics from science fiction movies. Without needing to turn people into engineers, they ought to have a better handle on the technology and realities such as the false positive (security) / false negative (usability) tradeoff, and spoofing.

"My observation is that biometrics advocates have transitioned from more or less denying the possibility of reverse engineering, to now maintaining that it really doesn't matter. But until the industry comes up with a revokable biometric, I think it is only prudent to treat seriously even remote prospects of spoofing."

Posted in Biometrics