Lockstep

Identity is dead! Long live identity!

While the post-mortems of Cardspace and OpenID continue, surely the elephant in the room is the whole federated identity project. Empirically, federated identity has proven to be easier said than done. In Australia alone, at least four well-funded projects have foundered. Internationally there’s been a revolving door of industry groups and standards development, all well intended, but none of them yet cutting through. Like Simplified (née Single) Sign On, federated identity chronically over-promises and under-delivers.

Aren't the woes of Cardspace and OpenID intimately connected to the federated identity paradigm? And don't they bode ill for the National Strategy for Trusted Identities in Cyberspace? We need to make the connections if the grand plans for identity are to succeed.

I call for a more critical appraisal of federated identity. We’ve been mesmerised en masse by an easy intuition that if I am known by a certain identity in one circle, then I should be recognisable by more or less the same identity in other circles. Like many intuitions, it’s simply wrong.

False intuitions

In brief, this is how I see the state of play as it now stands:

OpenID provides an unverified nickname to log on to websites that don’t care who you are. The same trick is achieved by easier-to-use Twitter IDs or Facebook Connect, so these are proving more popular for blogs and the like. OpenID would be a mere curiosity except that it’s become the poster child of OIX and NSTIC. The White House extrapolates from the OpenID model to imagine that once you have an identity from a phone company or university, you should be able to use it to log on to your bank.

The weird and wonderful Laws of Identity speak of deep truths about digital identity, such as context, and they forcefully make the case for each of us exercising a plurality of identities, never just one. The Laws expose the abstract roles of Identity Provider and Relying Party in what regular organisations like banks and governments do for their customers. Yet few if any of these institutions have been convinced by the Laws to openly embrace these roles, mainly because nobody has yet worked out a palatable way of allocating liability in multilateral brokered identity arrangements without rewriting the contracts that currently govern how we buy, bank and access government services.

Cardspace is by turns a wondrous graphical user interface and an implementation of the Identity Metasystem.

The Identity Metasystem is a utopian vision aiming high to enable stranger-to-stranger e-business. Ironically it’s a lot like the Big PKI of old in that it seeks to establish “trust” online. It inserts new players into what were previously tightly managed bilateral transactions, and changes the roles and risk profiles of conservative businesses like banks. In short, the Identity Metasystem is a radical change to how parties transact.

And finally all these new players and sub-plots are supposed to be parts of an “Identity Ecosystem”, and not merely isolated products & services in the next generation of a growing information security marketplace. The trouble here is that real ecosystems evolve rather than being architected. Artificial ecosystems like tropical aquariums and botanical gardens need constant care, attention and intervention to save them from collapse. Time will tell how the identity ecosystem fares if it's ever left to its own devices.

I have analysed different parts of the struggle for identity in greater detail elsewhere in my blog. To summarise:

  • The evidence plainly shows that federation is harder than it looks; the reason is probably sheer legal novelty.
  • The major problem in cyberspace is prosaic and does not merit re-imagining how we conduct business; it is simply that the perfectly good identities we already have lose their pedigree when we take them casually from the real world to the digital.
  • And we probably need a fresh frame for understanding how identities evolve in extant natural social ecosystems, so that we do a better job telling which identities are amenable to federation across contexts and which are best left alone in their current ecological niches.

And so in my view, the federated identity effort turns what really are straightforward technological problems -- the password plague and identity theft -- into intractable business and legal problems.

As the security marketplace absorbs the lessons of Cardspace and OpenID, fresh life will surely be breathed into digital identity.

Posted in Federated Identity, Culture, Identity

Programming is like playwriting

The software-as-a-profession debate continues largely untouched by each generation's innovations in production methods. Most of us in the 90s thought that formal methods, reuse and enforced modularity would introduce to software some of the hallmarks of real engineering: predictability, repeatability, measurability and quality. Yet despite Object Oriented methods and sophisticated CASE tools, many of the human traits of software-as-a-craft remain with us.

The "software crisis" – the systemic inability to estimate software projects accurately, to deliver what's promised, and to meet quality expectations – is over 40 years old. Its causes are multiple and subtle, and despite ample innovation in languages, tools and methodologies, debates continue over what ails the software industry. The latest skirmish is a provocative suggestion from Forrester analyst Mark Gualtieri that we shift the emphasis from independent Quality Assurance back onto developers’ own responsibilities, or abandon QA altogether. There has been a strong reaction! The general devotion to QA I think is aided and abetted by today’s widespread fashion for corporate governance.

But we should listen to radical ideas like Gualtieri’s, rather than maintain a slavish devotion to orthodoxies. We should recognise that software engineering is inherently different from conventional engineering, because software itself is a different kind of material. Its properties make it less amenable to regular governance.

Simply, software does not obey the laws of physics. Building skyscrapers, tunnels, dams and bridges is relatively predictable. You start with site surveys and foundations, erect a sturdy framework and all sorts of temporary formers, flesh out the structure, fill in the services like power and plumbing, do the fit-out, and finally take away all the scaffolding.

Specifications for real engineering projects don’t change much, even over several years for really big initiatives. And the engineering tools don't change at all.

Software is utterly unlike this. You can start writing software anywhere you like, and before the spec is signed off. There aren’t any raw materials to specify and buy, and no quantity surveyors to deal with. Metaphorically speaking, the plumbing can go in before the framework. Hell, you don't even need a framework! Nothing physical holds a software system up. Flimsy software, as a material, is indistinguishable from the best code. No laws of physics dictate that you start at the bottom and work your way slowly upwards, using a symbiosis of material properties and gravity to keep your construction stable and well-behaved as it grows. The process of real world engineering is thick with natural constraints that ensure predictability (just imagine how wobbly a house would be if you could lay the bricks from the top down) whereas software development processes are almost totally arbitrary, except for the odd stricture imposed by high level languages.

Real world systems are naturally compartmentalised. If a bearing fails in an air-conditioning plant in the basement, it’s not going to affect the integrity of any of the floor plates. On the other hand, nothing physically decouples lines of code; a bug in one part of a program can impinge on almost any other part (which incidentally renders traditional failure modes and effects analysis impossible). We only modularise software by artificial means, like banning goto statements and self-modifying code.
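
To make that concrete, here is a minimal C sketch -- the module names are invented for illustration -- of how a defect in one "module" can silently damage another with which it shares no logic at all, only an address space:

    #include <stdio.h>
    #include <string.h>

    /* Two "modules" that share nothing logically, yet nothing physical
     * separates them: their state simply sits side by side in memory. */
    static char ac_plant_log[8];        /* the "basement" module's state   */
    static int  floor_plate_load = 100; /* the "structural" module's state */

    /* A bug in the basement module: it writes past the end of its buffer. */
    static void log_bearing_status(const char *msg)
    {
        strcpy(ac_plant_log, msg);      /* no bounds check -- classic defect */
    }

    int main(void)
    {
        printf("floor plate load before: %d\n", floor_plate_load);
        /* Twenty-one bytes into an eight-byte buffer: depending on how the
         * compiler laid out the globals, the overflow may quietly trample
         * floor_plate_load. The basement just damaged a floor plate. */
        log_bearing_status("bearing seven failed");
        printf("floor plate load after:  %d\n", floor_plate_load);
        return 0;
    }

No law of physics prevents this; only discipline and tooling do.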

Coding is fast and furious. In a single day, a programmer can create a system arguably more complex than an airport -- a project that takes more than 10,000 person-years to build. And software development is tremendous creative fun. Let's be honest: it's why the majority of programmers chose their craft in the first place.

Ironically the rapidity of programming contributes significantly to software project overruns. We only use software in information systems because it's faster to make and easier to modify than wired logic. So the temptation is irresistible to keep specs fluid and to accommodate new requirements at any time. Famously, the differences between prototype, beta and production product are marginal and arbitrary. Management and marketing take advantage of this fact, and unfortunately software engineers themselves yield too readily to the attraction of the last minute tweak.

I suggest programming is more like playwriting than engineering, and many programmers (especially the really good ones!) are just as manageable as poets.

In both software and playwriting, structure is almost entirely arbitrary. Because neither obeys the laws of physics, the structure of software and plays comes from the act of composition. A good software engineer will know their composition from end to end. But another programmer can always come along and edit the work, inserting their own code as they see fit. It is received wisdom in programming that most bugs arise from imprudent changes made to old code.

Messing with a carefully written piece of software is fraught with danger, just as it is with a finished play. I could take Hamlet for instance, and hack it as easily as I might hack an old program -- add a character or two, or a whole new scene -- but the entire internal logic of the play would almost certainly be wrecked. It would be “buggy”.

I was a software development manager for some years in the cardiac pacemaker industry. We developed the world’s first software controlled automatic implantable defibrillator. It had several tens of thousands of lines of C, developed at a rate of about one tested line of code per person per day. At the time, it was measured to be among the most reliable real-time software ever written.

I believe the outstanding quality resulted from a handful of special grassroots techniques:

  • We had independent software test teams that developed their own test cases and tools.
  • We did obsessive source code inspections on all units before integration. And in the end, before we shipped, we did an end-to-end walkthrough of the frozen software. It took six of us two months straight. So we had several people who knew the entire object intimately.
  • We did early informal design reviews. As team leader, I favoured having my developers do a whiteboard presentation to the team of their early design ideas, no more than 48 hours after being given responsibility for a module. This helped prevent designers latching onto misconceptions at the formative stages.
  • We took our time. I was concerned that the CASE tools we introduced in the mid 90s might make code rather too easy to trot out, so at the same time I set a new rule that developers had to turn their workstations off for a whole day once a week, and work with pen and paper.
  • My internal coding standard included a requirement that when starting a new module, developers write their comments before they write their code, and their comments had to describe ‘why’ not ‘what’. Code is all syntax; the meaning and intent of any software can only be found in the natural language comments (see the sketch after this list).
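
Here is a minimal sketch of the comment-first rule in practice. It is illustrative only -- the constants, names and clinical rationale are invented, not taken from the actual product:

    #include <stdio.h>

    /* WHY: After a paced beat the sensing amplifier saturates, so any
     * "activity" seen during this window is an artefact of our own pulse,
     * not the heart. Sensing too early would inhibit pacing on noise,
     * which is the dangerous failure mode. (Comment written before the
     * code, per the comment-first rule; the value here is made up.) */
    #define BLANKING_PERIOD_MS 250

    /* WHY: Re-arm sensing only once the blanking period has elapsed,
     * rather than on the next timer tick, because tick jitter could
     * intermittently shorten the window. */
    static int sensing_enabled(unsigned int ms_since_pace)
    {
        return ms_since_pace >= BLANKING_PERIOD_MS;
    }

    int main(void)
    {
        printf("sense at 100 ms: %d, at 300 ms: %d\n",
               sensing_enabled(100), sensing_enabled(300));
        return 0;
    }

The ‘what’ is plain from the code itself; the comments carry the ‘why’ that no syntax can express.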

Code is so very unlike the stuff of other professions -- soil and gravel, metals and alloys, nuts and bolts, electronics, even human flesh and blood -- that the metaphor of engineering in the phrase “software engineering” may be dangerously misplaced. By co-opting the term we might have started out on the wrong foot, underestimating the fundamental challenge of forging a software profession. It won't be until software engineering develops the normative tools and standards, culture and patience of a true profession that the software crisis will turn around. And then corporate governance will have something to govern in software development.

Posted in Software engineering, Language, Culture

Identities are brittle but crystal clear

This blog was updated and re-posted on 12 June 2012.

I have been blogging and commenting left and right that there is an alternative theory behind the woes of Cardspace and OpenID. Yes, vendor psychology, standardisation and commercial politics have frustrated progress on the "Identity Metasystem", but a less fashionable explanation is that it's just not as great an idea as it first appears. The Identity Metasystem is way over-engineered. It tries to solve stranger-to-stranger "trust" (as did Big Fat PKI in the 1990s) and seeks to allow parties to confirm one another's unanticipated identity assertions.

These are almost academic problems. By far the most economically important transactions on the Internet occur between parties that already have their local "metasystem" in place. Payments, e-health, share trading, e-government etc. all take place within overarching risk management and legal arrangements involving specific registration protocols, formal credentials, terms & conditions, liability allocation etc. The analysis and design of business transaction systems anticipates the risks and responds with identification protocols and rules for participating. Parties in these different transaction contexts know precisely where they sit. They know their roles & responsibilities before they transact, even before they've installed whatever extra software and authentication devices are required according to the local risk analyses.

The "price" we pay for this level of crystalline certainty is that our different identities are brittle. They are highly context dependent, which is exactly what the Laws of Identity teach us.

On the other hand, the utopian Identity Metasystem tries to teach us to bend those identities, hopeful that a smaller number of them might be re-used across contexts -- as if this would have only a minor impact on all those local risk management arrangements, and so reduce the total cost of ownership of IDs. Sorry, it just doesn't.

Posted in Security, Internet, Identity

Simplifying assumptions for digital identity

I have criticised federated identity for being over-engineered, and described the Laws of Identity as "weird and wonderful" for being overly abstract. It's not that the Laws are wrong; they speak of deep truths to do with identity. I prize them as a seminal contribution to the field. But I believe that in practice, they haven't done as much as we had hoped to help resolve the urgent issues of identity security.

The Laws of Identity call implicitly for conventional e-business actors like banks and government agencies to take on broader generalised roles as open Identity Providers, and to insert themselves into otherwise bilateral transactions between Customers and Service Providers. It's a big call! The Identity Metasystem architecture fundamentally changes business relationships and risk management mechanisms. Most of these arrangements are expressed in legal contracts; some are actually legislated, as in banks' Know Your Customer rules. These contracts and laws are not easily varied. So full blown Federated Identity fundamentally changes the business world.

So I believe that instead of complicating generalisations about identity provision and the like, we need a fresh set of simplifying assumptions. Federation has a lot to offer in the new pure play digital activities like blogging and social networking, but in hybrid and high risk services like banking and healthcare, it's much tougher. A new set of assumptions might help us tell the difference, avoid expensive project failures, and create a stronger, more graduated bridge from today's bricks-and-mortar conventions to the world of the Laws of Identity.

Here's a hopeful start.

Assumption: There aren't many strangers in real life business

The idea of 'stranger-to-stranger' transactions is implicit in open identity theory. Yet most e-business automates routine transactions between parties that have already signed up to an over-arching set of arrangements, like a credit card agreement or a supplier contract. The first and foremost aim of most digital identities should be to faithfully represent existing real world credentials, allowing them to be exercised online without changing their meaning or their terms & conditions.
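
One way to picture this faithful-representation goal is a credential that merely proxies an existing relationship, rather than asserting "identity" in the abstract. A hedged sketch in C follows; every field name here is invented for illustration:

    /* Hypothetical shape of a digital credential that proxies an existing
     * real world arrangement. Going digital changes the encoding, not the
     * meaning: the governing Ts&Cs travel with the credential. */
    struct credential {
        const char    *issuer;        /* the party that already knows you, e.g. a bank  */
        const char    *scheme;        /* which agreement governs it, e.g. a card scheme */
        const char    *member_ref;    /* your account or member number in that scheme   */
        unsigned char  signature[64]; /* issuer's signature binding the fields above    */
    };

Nothing in such a structure claims to say who you "really" are; it says only what the issuer is already prepared to stand behind.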

Assumption: Relying Party and "Identity Provider" are often the same

The central generalisation in the Identity Metasystem, and its progeny like the Open Identity Exchange (OIX) Framework and the National Strategy for Trusted Identities in Cyberspace, is that Identity Providers are separate from Service Providers. This may be perfectly true in the abstract, but it plays into the flawed intuition that the identity I have with one bank for instance should be readily recognisable by another.

When you take an identity outside its original context and try to make sense of it in other contexts, you break its original Ts&Cs. Worse, you undercut any risk analysis that was done on the issuance process. If a bank doesn't know how its customers are going to use their IDs, how can it manage its risks?

In reality, when the Relying Party is the Identity Provider, they retain closed-loop control over identification risk management and transaction risk management. This is the natural state of affairs in business and it does not yield easily to earnest efforts to 'break down the silos'. In many cases it will streamline digital identity (and minimise total cost of ownership) if we simply let certain Relying Parties continue to act as siloed Identity Providers.

Assumption: There are no surprise credentials

One of the leading new identity technologies, U-Prove, has the objective of proving "unanticipated properties of protected identity assertions". That is, two strangers can use this solution to work out what they need to know about each other in real time, before they transact. That's obviously very powerful but, less obviously perhaps, it's not what we really need right now.

Unanticipated identity assertions are quite academic. The vast majority of assertions in mainstream business are in fact anticipated, and are completely worked out in advance of designing and implementing the transaction system. When you go shopping for instance, the merchant anticipates you will present a credit card number, so much so that they invest thousands of dollars in card processing infrastructure. When you log onto the corporate network, the relevant identity assertion is anticipated to be your employee number. When a doctor signs a prescription, the relevant assertion is their medical provider number, and pharmacists anticipate that number (after all, they can't read the typical doctor's signature!). Just think for a moment of the huge cost and tiny benefit of reengineering the doctor-pharmacist arrangement so that some alternative unanticipated assertion could be presented in place of a medical provider number to authorise a prescription.
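
The pharmacist's side of that arrangement is worth sketching, because it shows how little machinery an anticipated assertion needs. This is a toy in C -- the number format and registry contents are invented, and real provider-number schemes differ:

    #include <stdio.h>
    #include <string.h>

    /* The relying party (a pharmacy) anticipates exactly one credential
     * type -- a medical provider number -- so validation is a lookup
     * against a registry it already trusts, not a real-time negotiation
     * over unanticipated assertions. */
    static const char *registry[] = {   /* stand-in for the real registry */
        "MP-104233", "MP-558801", "MP-772190"
    };

    static int provider_is_registered(const char *provider_no)
    {
        size_t i;
        for (i = 0; i < sizeof registry / sizeof registry[0]; i++)
            if (strcmp(registry[i], provider_no) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        const char *number_on_script = "MP-558801";
        printf("prescription %s\n",
               provider_is_registered(number_on_script) ? "accepted" : "rejected");
        return 0;
    }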

In almost all cases, the transaction context pre-defines what identity will be relevant, and we arrange ahead of time for the parties to be equipped with the right credentials. There may be interesting use cases where strangers can use U-Prove to strike up new relationships in cyberspace, but I simply argue that for most routine economically important e-business today, the practical identity needs are more prosaic and more simply solved.

Posted in Security, Identity

An improved frame for understanding Digital Identity

I’ve always been uneasy about the term “ecosystem” being co-opted when what is really meant is “marketplace”. There’s nothing wrong with marketplaces! They are where we technologists study problems and customer needs, launch our solutions, and jockey for share. The suddenly-orthodox “Identity Ecosystem” as expressed in the NSTIC is an elaborate IT architecture that defines specific roles for users, identity providers, relying parties and other players. And the proposed system is not yet complete; many commentators have anticipated that NSTIC may necessitate new legislation to allocate liability.

So it’s really not an "ecosystem”. True ecosystems evolve; they are not designed. If NSTIC isn’t even complete, then badging it an “ecosystem” seems more marketing than ecology; it’s an effort to raise NSTIC as a policy initiative above the hurly burly of competitive IT.

My unease about “ecosystem” got me thinking about ecology, and whether genuine ecological thinking could be useful to analyse the digital identity environment. I believe there is potential here for a new and more powerful way to frame the identity problem.

We are surrounded by mature business ecosystems, in which different sectors and communities-of-interest, large and small, have established their own specific arrangements for managing risk. A special part of risk management is the way in which customers, users, members or business partners are identified. There is almost always a formal protocol by which an individual joins one of these communities-of-interest: when they join a company, earn a professional qualification, or open a credit account. Some of these registration protocols are set freely by employers, merchants, associations and the like; others have a legislated element, as in regulated industries like aviation, healthcare and finance; and in some cases registration is almost trivial, as when you sign up to a blog site. The conventions, rules, professional charters, contracts, laws and regulations that govern how people do business in different contexts are types of memes. They have evolved in different contexts to minimise risk, and have literally been passed on from one generation to another.

As business environments change, risk management rules change in response. And so registration processes are subject to natural selection. An ecological treatment of identity recognises that “selection pressures” act on those rules. For instance, to deal with increasing money laundering and terrorist financing, many prudential regulators have tightened the requirements for account opening. To deal with ID theft and account takeover, banks have augmented their account numbers with Two Factor Authentication. The US government’s PIV-I rules for employees and contractors were a response to Homeland Security Presidential Directive HSPD-12. Cell phone operators and airlines likewise now require extra proof of ID. Medical malpractice in various places has led hospitals to tighten their background checks on new staff.

It's natural and valuable, up to a point, to describe "identities" being provided to people acting in these different contexts. This abstraction is central to the Laws of Identity. Unfortunately the word identity is suggestive of a sort of magic property that can be taken out of one context and used in another. So despite the careful framing of the Laws of Identity, it seems that people still carry around a somewhat utopian idea of digital identity, and a tacit belief in the possibility of a universal digital passport (or at least a greatly reduced number of personal IDs). I have argued elsewhere that such a passport is actually implausible. If I am right about that, then what is sorely needed next is a better frame for understanding digital identity: a perspective that helps people steer clear of the dangerous temptation of a single passport, and understand that a plurality of identities is the natural state of being.

All modern identity thinking recognises that digital identities are context dependent. Yet it seems that federated identity projects have repeatedly underestimated the strength of that dependence. The federated identity movement is based on an optimism that we can change context for the IDs we have now, and still preserve some recognisable and reusable core identity; or alternatively, create a new, smaller set of IDs that will be useful for transacting with a superset of services. Such “interoperability” has only been demonstrated to date in near-trivial use cases like logging onto blog sites with unverified OpenIDs, or Facebook or Twitter handles. More sophisticated re-use of identities across contexts -- such as the Australian banking sector’s ill-fated Trust Centre project -- has foundered, even when there is pre-existing common ground in identification protocols.

The greatest challenge in federated identity is getting service and identity providers, accustomed to operating in their own silos, to accept risks arising from identification and/or authentication performed on or by their members in other silos. This is where the term “identity” can be distracting. It is important to remember that the “identities” issued by banks, government agencies, universities, cell phone companies, merchants, social networks and blog sites are really proxies for the arrangements to which members have signed up.

This is why identity is so very context dependent, and so why some identities are so tricky to federate.

If we think ecologically, then a better word for the “context” that an identity operates in may be niche. This term properly evokes the tight evolved fit between an identity and the setting in which it is meaningful. In most cases, if we want to understand the natural history of identities, we can look to the existing business ecosystem from where they came. The environmental conditions that shaped the particular identities issued by banks, credit card companies, employers, governments, professional bodies etc. are not fundamentally changed by the Internet. As such, we should expect that when these identities transition from real world to digital, their properties -- especially their “interoperability” and liability arrangements -- cannot change a great deal. It is only the pure cyber identities like blogger names, OSN handles and gaming avatars that are highly malleable, because their environmental niches are not so specific.

As an aside, noting how they spread far and wide, and too quickly for us to predict the impacts, maybe it's accurate to liken OpenID and Facebook Connect to weeds!

A lot more work needs to be done on this ecological frame, but I’m thinking we need a new name, distinct from "federated" identity. It seems to me that most business, whether it be online or off, turns on Evolved Identities. If we appreciate the natural history of identities as having evolved, then we should have more success with digital identity. It will become clearer which identities can federate easily, and which cannot, because of their roots in real world business ecosystems.

Taking a digital identity (like a cell phone account) out of its natural niche and hoping it will interoperate in another niche (like banking) can be compared to taking a tropical salt water fish and dropping it into a fresh water tank. If NSTIC is an ecosystem, it is artificial. As such it may be as fragile as an exotic botanic garden or tropical aquarium. I fear that full blown federated identity systems are going to need constant care and intervention to save them from collapse.

Posted in Science, Language, Identity, Federated Identity, Culture, Security

Total cost of ownership of Multiple Identities

Federated Identity and, before it, Single Sign On (SSO) are responses to the password plague. The cost of managing multiple passwords and multiple identities rises as they proliferate. The total cost of identity management encompasses the time and personal burden of keeping track of them all, the resources wasted on password resets and similar administrative overheads, and the extra effort needed to enrol afresh for each new identity.

I don't know if there are formal studies of Total Cost of Ownership (TCO) in Identity Management, but people figure intuitively that it goes like this:

[Figure: TCO assumed to rise steadily with the number of identities]

It seems reasonable. The more IDs, the more hassle and effort, and the higher the cost.

Yet can we assume that the reverse applies? If we reduce the number of IDs, will the TCO fall? It depends how far down we want to go. The implicit assumption in SSO is that the total cost of having just one identity will be minimal. But SSO turned out to be easier said than done; the initials have come to mean "Simplified" Sign On. So what's going on here?

My experience of federated identity is that there are major legal complexities and costs that are often unanticipated when framing these initiatives. In particular, when a service provider like a bank or government agency wants to authenticate its customers through a third party identity, there are significant new overheads. The service provider's risk assessment needs to be reviewed, and quite often, negotiations will be entered into with the new identity providers (or applicable authentication brokers). By the same token (pun intended) if an identity issuer like a bank is going to allow their customers to re-use those identities for additional services, then there will need to be new contracts and Ts&Cs drawn up.

The more powerful the federated identities, the more complex the new arrangements will be.

Therefore the TCO of a set of identities will at some point start to increase as the size of the set drops. Nobody as yet has got close to a single identity. Big PKI failed in that ambition. Even in the reasonably closed banking environment, attempts to federate a single identity, like the Australian Trust Centre and "MAMBO" initiatives, failed to get off the ground, partly because of unresolved contractual complexities. These snags point to higher costs, not lower.

So it must be the case that the relationship between TCO and the number of discrete identities anyone has is bowl-shaped, as below. The cost of a single identity is incalculable, and might literally be infinite (i.e. the single ID is unattainable), since the fine print associated with any 'master ID' will mean that it cannot in fact meet the needs of all Relying Parties.

[Figure: bowl-shaped TCO curve against the number of identities]

An interesting research question is: what is the 'ideal' number of identities where TCO is at a minimum? An empirical ballpark estimate is 10-15, based simply on the number of identities most of us currently carry in our purses and wallets. We can think about this magic number ecologically. If the current business ecosystem has settled on a dozen or so discrete identities (bank accounts, credit cards, a driver licence, a public health insurance card, a private health insurance card, an employee ID, a passport), then the cost of consolidating them is probably higher than the cost of leaving them alone; otherwise we would have seen natural federation already.
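
To see why a bowl shape and a minimum around a dozen are at least plausible, here is a toy cost model in C. It is purely illustrative -- the functional form and coefficients are my own assumptions, not fitted to any data: a per-identity overhead that grows linearly with the number of IDs, plus a consolidation cost that diverges as the count approaches one.

    #include <stdio.h>

    /* Toy TCO model (illustrative only):
     *   overhead * n     -- passwords, resets and enrolments grow with each ID
     *   legal / (n - 1)  -- contractual complexity of making fewer IDs serve
     *                       more contexts; diverges as n -> 1, echoing the
     *                       "single ID is unattainable" argument above.     */
    static double tco(int n, double overhead, double legal)
    {
        return overhead * n + legal / (n - 1);
    }

    int main(void)
    {
        int n, best_n = 2;
        double best = tco(2, 1.0, 120.0);
        for (n = 2; n <= 30; n++) {
            double c = tco(n, 1.0, 120.0);
            printf("n=%2d  TCO=%6.1f\n", n, c);
            if (c < best) { best = c; best_n = n; }
        }
        printf("minimum at n=%d identities\n", best_n);
        return 0;
    }

With these (arbitrary) coefficients the minimum falls at n = 12, squarely in the wallet-count ballpark above; the point is not the number but the shape, which any model with a linear overhead term and a diverging consolidation term will reproduce.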

Posted in Security, Identity, Federated Identity