With the term "ecosystem" being bandied about so much, I started thinking ecologically last year. A two-part article on my new Ecological Theory of Identity is being published in SC Magazine Australia.
Here's a little extract of the next installment:
If we think ecologically, we can better explain the surprising power of context in identity management. It is ironic that the Laws of Identity emphasise the importance of context, and yet federated identity programs repeatedly underestimate how strongly IDs resist changing context.
The tight fit that evolves between each given identity and the setting in which it is intended to be used is best described as an ecological niche. As with real life ecology, characteristics that bestow fitness in one niche can work against the organism -- or digital identity -- in another.
Identity "silos" are much derided but we can see now they are a natural consequence of how all business rules are matched to particular contexts. The environmental conditions that shaped the particular identities issued by banks, credit card companies, employers, governments and professional bodies are not fundamentally changed by the Internet. As such, we should expect that when these identities transition from real world to digital, their properties -- especially their "interoperability" and liability arrangements -- cannot readily adapt.
So, taking a mature digital identity (like a university student ID) out of its natural niche and hoping it will interoperate in another context (like banking) is a lot like taking a salt water fish and dropping it into a fresh water tank.
On the other hand, the ecological frame neatly explains why the purely virtual identities like blogger names, OSN handles and gaming avatars are so highly interoperable: it's because their environmental niches are not so specific. Thinking about how quickly and widely social identities like Facebook Connect have spread, in a very real sense we can describe them as weeds!
My longer article on a new ecological theory of digital identity is available here.
A colleague drew my attention to what he called "yet another management standard". Which got me thinking about where our preoccupation with standards might be heading and where it might end.
Most modern risk management standards allow for exception management. If a company has a formal procedure in place -- for example a Disaster Recovery Plan -- but something out of the ordinary comes up, then the latest standards provide management with flexibility to vary their response to suit their particular circumstances; in other words, management can generally waive regular procedures and "accept the risk". The company can remain in compliance with management systems and standards if it documents these exceptions carefully.
So ... what if a company says "the hell with this latest management standard, we don't want to have anything to do with it". If the standard allows for exceptions, then the company may still be in compliance with the standard by not being in compliance with it.
How about that: a standard you cannot help but comply with!
And then we wouldn't need auditors. We might even start to make some real progress.
Here's a less facetious analysis of the perils of over-standardisation: http://lockstep.com.au/blog/2010/12/21/no-algorithm-for-management.
Posted in Management theory
After the scandal broke of how the iPhone app "Path" was accessing users' address books and transmitting them back to base, many in the developer community said they thought this was pretty common. The good folks over at Veracode decided to check, so they built another app that simply scans all code on your device for signs that the address book is being accessed. Believe it or not, the Apple operating system has a standard call, available to every app, called "ABAddressBookCopyArrayOfAllPeople".
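The scanning idea is simple enough to sketch. The following is a toy illustration (an assumed approach, not Veracode's actual implementation): flag any binary whose symbols reference the address book API, much as a static scanner might.

```python
# Toy sketch of a static scan for address book access (hypothetical,
# not Veracode's code): search an app binary for the iOS symbol name.

SUSPECT_SYMBOLS = [b"ABAddressBookCopyArrayOfAllPeople"]

def references_address_book(binary: bytes) -> bool:
    """Return True if the binary appears to reference the address book API."""
    return any(sym in binary for sym in SUSPECT_SYMBOLS)

# Fake app binary with the symbol embedded in its string table.
fake_app = b"\x00\x01_ABAddressBookCopyArrayOfAllPeople\x00_main\x00"
print(references_address_book(fake_app))   # True
```

A real scanner would parse the Mach-O symbol table rather than grep raw bytes, but the principle is the same: the call site is there in plain sight for anyone who looks.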
Talking to the Veracode Research team about this iOS address book madness, the consensus was that none of this should come to a surprise to anyone who's been following mobile development or security research for mobile platforms (emphasis added).
This is terrific work.
Despite the Veracode team's reaction, I'm sure most of the public - even the technologically informed public - would indeed be very surprised to know any old app can freely access their contact lists. If developers are not surprised, perhaps they look at privacy differently?
What probably will surprise many technologists is that under black letter privacy law in Australia, Europe and elsewhere, it would be an offence for the company deploying the app to access contact information on a phone without a good reason and/or user consent (let alone to do it without any notice at all as was the case with Path). As Kriegsman writes in the Veracode article, it's hard to imagine why many of these apps have any cause to call ABAddressBookCopyArrayOfAllPeople.
Developers sometimes seem to think that if information is accessible to them, then it's fair game for re-use or innocent "research". The classic example was the collection of wifi transmissions by Google Street View cars. Many said at the time that if data is in the "public domain" then it's free to be collected and used. And they were very surprised indeed to learn that their presumption is simply wrong at law. Many privacy laws are blind to where Personally Identifiable Information is collected from. If information is identifiable, and if you have no business collecting it, then you're not allowed to. It's black and white.
A modest little quote from a biometrics expert caught my eye this week. Neil Fisher, VP of Global Security Solutions at Unisys was cited describing the False Acceptance Rate of iris scanning as "in the region of 0.1%". See Believing in biometrics, at "Airport Technology", http://www.airport-technology.com/features/featurebelieving-in-biometrics.
This figure is, to put it mildly, rather worse than what iris scanning proponents have led us to believe over the years.
It is widely reported that the probability of two randomly selected irises matching is one in 10 to the power of 78. This is indeed a staggering denominator, far greater than the number of stars in all the galaxies in all the universe. [Yet that number is near meaningless if the iris scanning equipment isn't perfect. Consider that there are 100 billion stars in the Milky Way, but that figure doesn't predict the odds of two people picking out the same star with the naked eye, which is one in a few hundred or worse depending on the lighting conditions.]
Yet the recognised inventor of iris recognition, John Daugman of Cambridge University, never claimed his method was as good as all that. In 2000, Daugman published a technical paper on iris detection decision thresholds. Based on data from an ophthalmology research database, his calculations implied a False Match rate as low as one in 10 to the power of 14.
In 2005 Daugman experimentally verified his very low error rate claim using data on over 600,000 individuals sampled in the United Arab Emirates' immigration security system. He reported that "False Match rate is less than 1 in 200 billion", or roughly one in 10 to the power of 11. But it should have been clear to all that the result represents a very best case, for border security biometrics systems impose tight control over image quality and lighting conditions for both enrolment and subsequent capture events; without such control, measurement fidelity suffers.
And indeed, independent government testing of iris biometrics, while impressive, shows error rates millions of times worse than Daugman's estimates. For example, the UK Government in 2001 found a False Match rate of 0.0001%, or one in a million.
And now we have a leading biometrics implementer saying that in practice, the iris False Match Rate is typically 0.1%, or a pretty ordinary one in 1,000. If that's the real-life benchmark, then the folkloric figure of one in 10 to the power of 78 represents an exaggeration of one thousand, trillion, trillion, trillion, trillion, trillion, trillion times.
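The size of that exaggeration is easy to check with a little integer arithmetic, using only the one-in-N rates quoted above:

```python
# False Match Rates quoted in the text, as one-in-N denominators.
folkloric = 10**78   # "one in 10 to the power of 78" random-match claim
daugman   = 10**14   # Daugman's 2000 theoretical estimate
uk_test   = 10**6    # UK 2001 test: 0.0001% = one in a million
unisys    = 10**3    # Unisys: "in the region of 0.1%" = one in 1,000

# How far the folkloric figure overstates the practical benchmark.
exaggeration = folkloric // unisys

# "one thousand, trillion, trillion, trillion, trillion, trillion, trillion"
# = 10^3 * (10^12)^6 = 10^75
assert exaggeration == 10**3 * (10**12)**6
print(f"exaggeration factor: 10^{len(str(exaggeration)) - 1}")  # 10^75
```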
Biometric decision landscapes; John Daugman, 2000; http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-482.pdf
Results from 200 billion iris cross-comparisons; John Daugman, University of Cambridge Computer Laboratory, 2005; http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-635.pdf
Biometric Product Testing Final Report, Issue 1.0; Mansfield et al, Centre for Mathematics and Scientific Computing, National Physical Laboratory, for the UK Government Communications Electronics Security Group (CESG) Biometric Test Programme, 2001; http://www.cesg.gov.uk/publications/Documents/biometrictestreportpt1.pdf
Posted in Biometrics
Imagine this. Two grain growers are neighbours. One farms wheat and the other corn. Both have invested a lot of money in their silos and grain handling equipment, all of which continues to be a significant cost in their operations. The corn farmer is an innovator and comes up with a bright idea. She approaches her neighbour and gives him the following proposition: since their infrastructure is such an overhead, why not, in the name of efficiency, join up and share their silos?
What farmer wouldn't reject this idea out of hand? If a grain grower needs more capacity, in theory they could re-engineer the entire storage and handling system to use someone else's silo, strike up new support arrangements with their equipment providers, and seek insurance to cover new risks of mixing up their grains. But it would be simpler, cheaper and quicker to just build themselves another silo!
"Break down the silos" is one of the catch cries of modern management practice, and it's a special rallying call in the Federated Identity movement. Nobody denies that myriad passwords and security devices have become a huge headache, but attempts to solve what is really a technology and human factors challenge, by sharing identities and identity provisioning all too often come unstuck.
It's not for nothing that we call identity domains "silos". Grain silos are architecturally elegant, strong and safe; they are critical infrastructure for farmers.
Of all the metaphors in identity management, "silo" is actually one of the good ones. And you have to wonder when and why it became a dirty word in our industry. Identity silos are actually carefully constructed risk management arrangements and in IDAM, risk is the name of the game. As such, silos are not to be trifled with!
In modern information security we implore businesses to understand the risks of their particular business contexts, and to enact security mechanisms that are attuned to their environment. There is no one-size-fits-all risk management arrangement. And infosec professionals frown upon one company uplifting another's security system without first analysing their own situation and fine-tuning the controls.
The inherent differences between business settings are the clear reason why authentication rules have evolved into different silos.
And yet the dominant idea in contemporary identity management remains federation: the unreal optimism that one identity can efficiently work across multiple unrelated contexts.
It seems to me like a law of nature - perhaps something like a Conservation of Risk Management Energy - that the effort and cost required to devise one identity that interoperates across N contexts cannot be less than the total overhead of maintaining N separate identities.
It's truer today than ever before: you cannot cut corners in risk management.