
The identerati fiddle while Rome burns

I love a bit of philosophy as much as the next engineer does, and I mean no offence to a bunch of generally lovely, well-meaning folks. But it's high time the identerati took a long hard look at themselves, and stopped fiddling while Rome burns.

For two days a frantic thread has been running on the Identity Commons list community@lists.idcommons.net. It's a lot of the usual stuff, turning technological problems into philosophical and sociological conundrums.

At the very same time, another little article has come out about NSTIC and containing this gem: "The White House Cyber Security Adviser Howard Schmidt and Commerce Secretary Gary Locke have recently announced a proposal for mandatory virtual ID cards for Internet users ...".

I am no fan of NSTIC, but it's well intentioned, and it's plainly neither mandatory nor singular. NSTIC's advocates have gone over this again and again. So for pity's sake, how can journalists and commentators still get NSTIC so terribly wrong?

Well I'll tell ya: it's because after a decade or more of oh-so-earnest work on IdM, collectively we still don't know what we're talking about! Just look at some of this stuff! So if the identerati are confused, it's not surprising that commentators and politicians are tapping around in the dark, coming up with crap like "Mandatory virtual ID cards".

Meanwhile, Facebook and Google are getting on with it ... and we should shudder to think what a de facto privately controlled universal ID will mean. The one thing worse than the government tattooing ID numbers on all our foreheads is Mark Zuckerberg and Eric Schmidt doing it, without our consent, for commercial gain, and with the majority of netizens lulled into thinking it's actually kinda cool, man.

We urgently need some simplifying assumptions, some practical technological advances to protect digital identity data, and a little real progress on Identity Management!

Posted in Identity

Biometrics and false advertising

Use of the word “unique” in biometrics constitutes false advertising.

There is little scientific basis for regarding any of the common biometrics as inherently "unique". The iris is a notable exception, where the process of embryonic development of eye tissue is known to create random features. But there's little or no literature to suggest that finger vein patterns or gait or voice traits should be highly distinctive and randomly distributed in ways that create what security people call "entropy". In fact, one of the gold standards in biometrics - fingerprinting - has been shown to rest more on centuries-old folklore than on science (see the work of Simon Cole).

But more to the point, even if a trait is highly distinctive, the vagaries of real-world measurement apparatus and conditions mean that every system commits false positives. Body parts age, sensors get grimy, lighting conditions change, and biometric systems must tolerate such variability. In turn, they make odd mistakes. In fact, consumer biometrics are usually tuned to deliberately increase the False Accept Rate, so as not to inconvenience too many bona fide users with a high False Reject Rate.

So no biometric system ever behaves like the trait is unique! Every system has a finite False Accept Rate; FARs of one or two percent are not uncommon. If one in fifty people are confused with someone else on a measured trait, how is that trait “unique”?
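
To put rough numbers on that, here is a back-of-the-envelope sketch (the FAR values and population sizes are purely illustrative, not drawn from any particular product): even a modest per-comparison False Accept Rate compounds rapidly once a system compares one person against many.

    # Back-of-the-envelope illustration with made-up FAR values.
    # With a per-comparison False Accept Rate p, the chance of at least one
    # false accept over n independent comparisons is 1 - (1 - p)^n.

    def chance_of_false_accept(far: float, comparisons: int) -> float:
        """Probability of at least one false accept across independent comparisons."""
        return 1.0 - (1.0 - far) ** comparisons

    for far in (0.02, 0.01, 0.001):            # 2%, 1% and 0.1% per comparison
        for n in (50, 1_000, 100_000):         # watch-list or database sizes
            p = chance_of_false_accept(far, n)
            print(f"FAR={far:.3%}  comparisons={n:>7,}  P(at least one false accept)={p:.1%}")

At a 2% FAR, a single comparison against just fifty enrolled people is more likely than not to throw up a false match; nothing about that behaviour deserves the word "unique".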

The word "unique" should be banned in conenction with biometrics. It's not accurate, and it's used to create over-statements in biometric product marketing.

This is not mere nitpicking. The biometrics industry gets away with terrible hyperbole, aided and abetted by loose talk, lulling users into a false sense of security. Managers and strategists need to understand at every turn that there is no such thing as perfect security. Biometric systems fail. But when lay people hear “unique” they think that’s the end of the story. They’re not encouraged to look at the error rate specs and think deeply about what they really mean.

Exaggeration in use of the word "unique" is just the tip of the iceberg. Biometrics vendors are full of it:

Economical with the truth

    • Major palm vein vendors claim spectacular error rates of FAR = 0.00008% and FRR = 0.01%. Their brochures show these specs side-by-side, without any mention of the fact that these are best case figures, and utterly impossible to achieve together. I've been asking one vendor for their Detection Error Tradeoff (DET) curves for years but I'm told they're commercial in confidence. The vendor won't even cough up the Equal Error Rate. And why? Because the tradeoff is shocking.
    • The International Biometric Group in 2006 published the only palm vein DET curve I have managed to find, in its Comparative Biometric Testing Round 6 ("CBT 6"). Curiously, this report is hard to find nowadays, but I have a copy if anyone wants to see it. The DET curves give the lie to the best case vendor specs: when the palm vein system is tuned to its highest security setting, with a best possible False Match Rate of 0.0007%, the False Non-Match Rate deteriorates to 12%, or worse than one in ten. The toy sweep below illustrates why the two rates cannot both be at their best at once. [Ref: CBT6 Executive Summary, p6]
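
To see why quoting best-case FAR and FRR side by side is meaningless, consider a toy threshold sweep (the score distributions below are entirely synthetic, invented for illustration, and do not model any vendor's product): genuine and impostor match scores overlap, so raising the decision threshold to suppress false accepts necessarily rejects more genuine users, and vice versa.

    # Toy DET-style threshold sweep over synthetic match scores (illustration only).
    # Because the genuine and impostor score distributions overlap, no single
    # threshold can deliver both the best-case FAR and the best-case FRR.
    import random

    random.seed(1)
    genuine   = [random.gauss(0.70, 0.10) for _ in range(100_000)]    # scores for true users
    impostors = [random.gauss(0.40, 0.10) for _ in range(100_000)]    # scores for everyone else

    for threshold in (0.45, 0.55, 0.65, 0.75):
        far = sum(s >= threshold for s in impostors) / len(impostors)   # false accepts
        frr = sum(s <  threshold for s in genuine)   / len(genuine)     # false rejects
        print(f"threshold={threshold:.2f}  FAR={far:.3%}  FRR={frr:.3%}")

A brochure that quotes the FAR from the high-threshold end of such a sweep next to the FRR from the low-threshold end is describing two different machines. Only a full DET curve, or at the very least an Equal Error Rate, tells you what a system does at any single operating point.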

Clueless about privacy

    • You'd think that biometric vendors would brush up on privacy. One of them attempted recently to calm fears over facial recognition by asserting that "a face is not, nor has it ever been, considered private". This red herring belies a terrible misunderstanding of information privacy. Once OSNs render faces personally identifiable, attaching names to the terabytes of hitherto anonymous snapshots in their stores, that data automatically becomes subject to privacy law in many jurisdictions. It's a scandal of the highest order: albums innocently uploaded into the cloud over many years, now suddenly rendered identifiable, and trawled for commercially valuable intelligence, without consent, and without any explanation in the operators' Privacy Policies.

Ignoring published research

    • And you'd think that for such a research-intensive field (where many products are barely out of the lab) vendors would be up to date. Yet one of them has repeatedly claimed that biometric templates "are nearly impossible to be reverse engineered". This is either a lie or willful ignorance. The academic literature has many examples of facial and fingerprint templates being reverse engineered by successive approximation methods (sketched below) to create synthetic raw biometrics that generate matches with target templates. Tellingly, the untruth that templates can't be reversed has been recently repeated in connection with the possible theft of biometric data of all Israeli citizens. When passwords or keys or any normal security secrets are breached, the first thing we do is cancel them and re-issue the users with new ones, along with abject apologies for the inconvenience. But with biometrics, that's not an option. So no wonder vendors are so keen to stretch the truth about template security; to admit there is a risk of identity theft, without the ability to reinstate the biometrics of affected victims, would be catastrophic.
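
The flavour of those successive-approximation results can be conveyed with a toy sketch. Nothing below resembles a real biometric system: the "template" is just a vector of numbers, the "matcher" is plain cosine similarity, and the attack is ordinary hill climbing against the match score. The published attacks are far more sophisticated, but the principle is the same: any matcher that will score arbitrary inputs leaks information about the template it holds.

    # Toy hill-climbing sketch (not an attack on any real system).
    # The "attacker" never sees the secret template, only the match score,
    # yet random tweaks that are kept whenever the score improves converge
    # on an input the matcher accepts.
    import math
    import random

    random.seed(42)
    N = 16
    secret_template = [random.uniform(-1, 1) for _ in range(N)]   # hidden from the attacker

    def match_score(candidate):
        """Black-box matcher: cosine similarity between candidate and the secret template."""
        dot = sum(c * t for c, t in zip(candidate, secret_template))
        norm_c = math.sqrt(sum(c * c for c in candidate))
        norm_t = math.sqrt(sum(t * t for t in secret_template))
        return dot / (norm_c * norm_t) if norm_c and norm_t else 0.0

    candidate = [random.uniform(-1, 1) for _ in range(N)]
    for _ in range(5_000):
        trial = list(candidate)
        trial[random.randrange(N)] += random.uniform(-0.05, 0.05)   # small random tweak
        if match_score(trial) > match_score(candidate):              # keep improvements only
            candidate = trial

    print(f"final match score: {match_score(candidate):.4f}")        # climbs towards 1.0

The point is not that real templates fall to twenty lines of Python; it is that "irreversible" is a strong claim to make about any representation a matcher will happily score on an attacker's behalf, which is exactly what the literature bears out.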

With more critical thinking, managers and biometric buyers would start to ask the tough questions, such as: How are you testing this system? How do real-life error rates compare with bench testing (which the FBI warns is always optimistic)? And what is the disaster recovery plan in the event that a criminal steals a user's biometric?

Posted in Security, Language, Biometrics

Despite the IdM hype, privacy and security remain uneasy bedfellows

The information security sub-specialisation of Digital Identity has spurred prodigious activity in the past decade, from academics, policy makers and IT vendors. We've seen new "Laws of Identity", national identity strategies, numerous big industry consortia, many new technical standards for federating identities and exchanging interoperable "identity assertions", and a flood of new products. All the while, enhanced privacy is held to be axiomatic in the new identity frameworks.

Yet despite all this, technologists' views on privacy have been diverging, often dramatically. Data breaches by big information companies―whether accidental or slyly intended―seem to have only got worse. The responses of security professionals to cases like the collection of wifi data by Google Streetview cars have been muddle-headed, with many not seeing the problem at all. Social network operators like Facebook and Google have sought to re-cast societal norms, by banning nicknames and insisting that members use only their one "real" name. Facebook's Mark Zuckerberg argues that those who use more than one name lack integrity.

Distressingly, at every level, security and privacy remain very uneasy bedfellows.

Technocrats give lip service to privacy. They skate over privacy principles, often presuming to know what privacy laws say without actually reading them. In their deeds and in their crazy talk, the Zuckerbergs and Schmidts of the world reveal grave misunderstandings about the topic. Of course it passes understanding that anyone listens to these guys on privacy when their multi-billion dollar fortunes are made on the back of pirating Personal Information.

And yet even well meaning technologists also seem to be on a different wavelength from privacy strategists. For instance, the architects of OpenID and grand plans like NSTIC try to deal with privacy and yet the claimed privacy benefits are problematic when looked at closely. Orthodox federated identity brings a host of privacy challenges that have not yet been properly canvassed (possibly because US privacy perspectives are especially "high tech" whereas in other jurisdictions, information privacy focuses on controlling the flow of personally identifiable information, which is often a surprisingly low tech business). I see immense privacy challenges in federated identity formulations, including:


  • Many Identity Providers will be start-ups. Or they'll often be existing enterprises setting up new business units to strike out into brand new authentication markets. Either way, in a worryingly familiar replay of Big PKI in the 1990s, these players will be aggregating vast amounts of Personal Information, making them honey pots for organised crime, and lucrative corporate takeover targets.
  • Federated Identity transforms elegant time-honoured private bilateral transactions into complicated multi-lateral dealings, with excessive PI being collected where previously it was not needed.
  • The total amount of PI collected in the federated identity "metasystem" is larger than what is collected today. Not only will there be new registration databases at the new IdPs, but there will be many new multi-party audit trails tracking who we've been interacting with. It's always important in privacy to consider proportionality: Is all this extra Collection really worthwhile? Are there not other ways to protect privacy that avoid the inherent risks of amassing so much new Personal Information?
  • The new privacy constructs are highly technical and artificial. For instance, "Verified Anonymity" services and many new age verification bureaus would work by collecting loads of PI at registration time (including Social Security numbers) only to hide it from Relying Parties at transaction time.

A re-think of security and privacy is urgently needed. Let's recognise that digital identity is really a metaphor for the way we act in certain complex relationships. As such, "identity" is not an intrinsic characteristic at all but instead is an emergent property of the collection, use and disclosure of personal information in different contexts. It's not the sort of stuff that demands fancy new theories, just a recognition that we deal with individuals in constrained ways in the real world, and we should continue to do so online. If we could just demystify digital identity a little, we should find it easier to marry information privacy and security.

Posted in Privacy, Identity, Federated Identity

If it sounds too good to be true, it probably is

Imagine a new secretarial agency that provides you with a Personal Assistant. They're a really excellent PA. They look after your diary, place calls, make bookings, plan your travel, send messages for you, take dictation. Like all good PAs, they get to know you, so they'll even help decide where to have dinner.

And you'll never guess: there's no charge!

But ... at the end of each day, the PA reports back to their agency, and provides a full transcript of all you've said, everyone you've been in touch with, everything you've done. The agency won't say what they plan to do with all this data, how long they'll keep it, nor who they'll share it with.

If you're still interested in this deal, here's the PA's name: Siri.

Seriously now ... Siri may be a classic example of the unfair bargain at the core of free social media. Natural language processing is a fabulous idea of course, and will improve the usability of smart phones many times over. But Siri is only "free" because Apple are harvesting personal information with the intent to profit from it. A cynic could even call it a Trojan Horse.

There wouldn't be anything wrong with this bargain if Apple were up-front about it. In their Privacy Policy they should detail what Personal Information they are collecting out of all the voice data; they should explain why they collect it, what they plan to do with it, how long they will retain it, and how they might limit secondary usage. It's not good enough to vaguely reserve their rights to "use personal information to help us develop, deliver, and improve our products, services, content, and advertising".

Apple's Privacy Policy today (dated 21 June 2010 [*]) in fact makes no mention of voice data at all, nor the import of contacts and other PI from the iPhone to help train its artificial intelligence algorithms.

I myself will decline to use Siri while the language processing is done in the cloud, and while Apple does not constrain its use of my voice data. I'll wait for NLP to be done on the device with the data kept private. And I'd happily pay for that app.

Update 28 Nov 2011

Apple updated their Privacy Policy in October, but curiously, the document still makes no mention of Siri, nor voice data in general. By rights (literally in Europe) Apple's Privacy Policy should detail amongst other things why it retains identifiable voice data, and what future use it plans to make of the data.

Posted in Social Networking, Social Media, Privacy, Cloud

Fighting cyber crime like it really matters

It is no exaggeration to characterise the theft of personal information as an epidemic. Personal information in digital form is the lifeblood of banking and payments, government services, healthcare, a great deal of retail commerce, and entertainment. But personal records―especially digital identities―are stolen in the millions by organised criminals, to appropriate not just money but also the broader and fast growing intangible assets of “digital natives”. The Internet has given criminals x-ray vision into peoples’ details, and perfect digital disguises with which to defraud business and governments.

Credit card fraud over the Internet is the model cyber crime. Child's play to perpetrate, and fuelled by a thriving black market in stolen personal data, online card fraud represents 70% of all card fraud in Australia, continues to grow at 30-50% p.a., and cost over A$120 million in 2010 (see http://lockstep.com.au/blog/2011/09/27/au-cnp-fraud-cy2010). The importance of this crime goes beyond the gross losses, for some of the proceeds are going to fund terrorism, as acknowledged by the US Homeland Security Committee.

Yet there is a deeper lesson in online card fraud: it needs to be seen as a special case of digital identity theft. ID theft is perpetrated by sophisticated organised crime gangs, behind the backs of the best trained and best behaved users, aided and abetted by insiders corrupted by enormous rewards. No amount of well meaning security policy or user awareness can defeat the profit motives of today’s online fraudsters.

As the digital economy is to the wider economy, so cyber crime is to crime at large. And yet the e-business environment remains stuck in a Wild West stage of development: it’s everyone for themselves! There is no consistency in the gadgets foisted upon consumers to access online businesses and services; worse, most are flawed and readily subverted by hackers. We could build security deep into our transaction platforms to prevent identity theft, phishing, web site spoofing and spam―the requisite building blocks like digital signature toolkits and personal smart devices are now ubiquitous―but instead, almost all attention turns to user awareness. Yet education has reached its use-by date, rendered utterly obsolete by the industrialisation of cybercrime (see also http://lockstep.com.au/library/online_banking_review/obr-lockstep-200810-many-hand.pdf). Almost everyone now knows they need a firewall and anti-virus software, but these are misguided measures when most identities are stolen in other channels utterly beyond the users' control. The predominant technology-neutral policy position of government and the banking industry has not fostered market-driven innovation as hoped but instead has created a leadership vacuum, leaving consumers to fend for themselves.

To really curtail cyber crime we need the sort of concerted and balanced effort that typifies security in all other walks of life, like transportation, energy and finance. Car owners don't fit their own seat belts and airbags as after-market nice-to-haves; bank customers don’t need to install their own security screens; bank robbers are not kept at bay by security audits alone. The time has come, now that we’re constructing the digital economy, to embrace intelligent security technologies that can actually prevent identity theft and cyber crime.

Posted in Internet, Fraud