
Customer, How do I know thee?

One of the main contentions of the Identity Metasystem, NSTIC and like models is that banks, governments, telcos, universities and so on will be able to generalise their roles as Identity Providers, so that their customers can use their identities with other system participants. See for example "Envision it" No. 5 in the NSTIC strategy paper:

Ann learns that her recently issued bank card and her new university card are both Identity Ecosystem-approved credentials. She also discovers that her email provider and social networking site accept both of these credentials, while her health care provider and local utility companies accept the higher assurance bank card.

I agree it's useful to model banks and other institutions as issuing identities to their customers, but it's only a model. "Identity" is really a metaphor here; to be precise, digital identities are proxies for the relationships that certain organisations have with their members or customers. They cannot be taken out of their traditional contexts and bent without limit to suit new ones; eventually they break. The identities issued by banks are special purpose and cannot easily be opened up to new Relying Parties. Past attempts to open up banking identities and federate them into other domains -- like the Australian Trust Centre and the Internet Industry Association Two Factor Authentication hub -- could not convince banks that the risks were manageable while delivering a positive nett benefit.

There is a promise in many federated identity formulations -- like NSTIC -- that banks will be able to become IdPs for external Relying Parties, on the grounds that they already know their customers so well, and that the system will provide arrangements for others to rely on that knowledge. How would that work in detail?

A would-be IdP must work out what knowledge it has about its customers that it is prepared to warrant to outside RPs, for what purpose, and with what limitations. At present, a bank knows its customers with sufficient precision to suit its own purposes (and its banking regulators). But underwriting identity assertions for the benefit of outsiders brings new risks that the bank has never before had to contemplate.

If the bank wants to productise the identification of its customers, then it needs to analyse its liability in the event that transactions go wrong between its customers and those external RPs. This is a tough problem when the bank has no necessary connection with those RPs, nor any control over the transactions. Of course, the bank might seek to gain some control by qualifying just what its customers are allowed to do with their bank-issued identities. But then this starts to look like the fine print that helped to sink Big PKI over a decade ago.

I reckon that the cost of even analysing the risks, much less putting new contractual (or legislated) liability arrangements in place, will outweigh the cost of merely maintaining the diverse and separately evolved identities we have today. There is a middle road, where IdPs could qualify what their identities are good for (e.g. Bank A might support Health Care Providers P, Q, T and W, and no others), but this would significantly dilute and devalue the vision of NSTIC. It's not what the strategy promotes.
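To make that middle road concrete, here's a minimal sketch of what a qualified identity assertion might look like. It's purely illustrative -- the class and field names are my own invention, not anything specified by NSTIC or any bank -- but it shows how an IdP could declare up front which RPs and purposes its assertions are warranted for:

```python
# Hypothetical sketch only: a bank-issued assertion explicitly scoped
# to a named set of Relying Parties and a single purpose.
from dataclasses import dataclass, field

@dataclass
class ScopedAssertion:
    subject: str          # the customer being vouched for
    issuer: str           # e.g. "Bank A"
    attributes: dict      # only the facts the IdP is prepared to warrant
    audience: frozenset = field(default_factory=frozenset)  # permitted RPs
    purpose: str = ""     # the sole use the warranty covers

    def valid_for(self, relying_party: str, purpose: str) -> bool:
        # Outside the declared audience or purpose, the IdP warrants nothing.
        return relying_party in self.audience and purpose == self.purpose

# Bank A supports Health Care Providers P, Q, T and W, and no others.
assertion = ScopedAssertion(
    subject="ann",
    issuer="Bank A",
    attributes={"age_over_18": True},
    audience=frozenset({"P", "Q", "T", "W"}),
    purpose="health-care-enrolment",
)

assert assertion.valid_for("Q", "health-care-enrolment")
assert not assertion.valid_for("SomeOtherRP", "health-care-enrolment")
```

The liability problem is visible even in this toy: every entry in the audience list stands for a contract the bank would have to negotiate and maintain.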

Posted in Federated Identity, Identity

Facial recognition isn't creepy: it's dangerous

Why do people use soft, subjective words like "creepy" to criticise facial recognition in social networking sites? Eric Schmidt has said that facial recognition is 'Too creepy even for Google' but by not damning it more strongly, does he deliberately leave himself wiggle room?

We can and really should analyse facial recognition objectively.

First and foremost is a basic technicality: facial recognition converts vast drifts of hitherto anonymous image data into Personally Identifiable Information, and in so doing instantly creates obligations under black letter privacy law in Europe and elsewhere.

The collection of Personal Information needs to be properly disclosed. Facebook appears evasive in the way it describes (or fails to describe) its biometric templates. It proudly announces that members can remove tags, yet it actually retains the biometric templates until an extra step is taken to have them deleted too (see http://www.facebook.com/help/?faq=225110000848463 ***).

*** Update: Some time in early 2012 Facebook updated its Help pages to provide more information, and to improve the way they manage templates. The link above is now dead, but it's preserved for the record. Facebook now describes how templates are created, and they now delete templates when you disable photo tagging: https://www.facebook.com/help/tag-suggestions. This improved transparency is welcome, as is the automatic deletion of templates, without the extra, non-obvious step that was previously necessary.
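To see why removing a tag and deleting a template are different operations, consider this toy data model. I have no knowledge of Facebook's actual implementation; this is simply an assumed sketch of the separation described above, where the tag and the template are distinct records deleted by distinct steps:

```python
# Illustrative only -- an assumed data model, not Facebook's real one.
class PhotoService:
    def __init__(self):
        self.tags = {}        # (photo_id, user_id) -> tag record
        self.templates = {}   # user_id -> accumulated face data

    def add_tag(self, photo_id, user_id, face_data):
        self.tags[(photo_id, user_id)] = "tagged"
        # Tagging also trains the biometric template as a side effect.
        self.templates.setdefault(user_id, []).append(face_data)

    def remove_tag(self, photo_id, user_id):
        # The visible tag disappears...
        self.tags.pop((photo_id, user_id), None)
        # ...but note that the template is not touched here.

    def disable_tag_suggestions(self, user_id):
        # The separate, extra step that actually deletes the template.
        self.templates.pop(user_id, None)

svc = PhotoService()
svc.add_tag("p1", "ann", "face-vector")
svc.remove_tag("p1", "ann")
assert "ann" in svc.templates       # tag gone, template retained
svc.disable_tag_suggestions("ann")
assert "ann" not in svc.templates   # only now is the biometric data gone
```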

Facebook promotes as privacy enhancing the fact that only friends are allowed to suggest tags. Let's not be naive about this. Facebook are cleverly crowd-sourcing the training of their biometric algorithms, and they wouldn't want too many guesses from strangers polluting their data.

And then there is the purpose of the collection. Let's be plain about why Facebook and the other informopolies are so keen on facial recognition: it's to improve their ability to make connections. They will now be able to spot when two people are in the same place at the same time. And they will be able to tell what cars people like to drive, what movies they're watching, what devices they like to use -- without anyone needing to expressly "like" anything anymore. This is pure gold.
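To be clear about what "making connections" means in practice, here is a toy sketch of the kind of inference that becomes trivial once faces are resolved to identities. The data and names are invented; the point is only that co-presence falls out of recognised faces plus ordinary photo metadata in a few lines:

```python
# Invented data: (person, place, hour) triples as facial recognition
# might yield them from photos carrying location and time metadata.
from collections import defaultdict
from itertools import combinations

recognitions = [
    ("alice", "Bondi Beach", "2011-06-18T14"),
    ("bob",   "Bondi Beach", "2011-06-18T14"),
    ("carol", "Oxford St",   "2011-06-18T14"),
]

# Group the recognised people by place and time...
together = defaultdict(set)
for person, place, hour in recognitions:
    together[(place, hour)].add(person)

# ...and every pair in a group is a brand-new connection that no one
# ever declared, "liked" or consented to.
pairs = set()
for people in together.values():
    pairs.update(combinations(sorted(people), 2))

print(pairs)  # {('alice', 'bob')}
```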

By sticking to emotive or intuitive descriptions of biometric concerns, technologists are leaving biometric critics in a soft corner. This might be an innocent side effect of trying to use plain language, or it could be cleverly calculated to keep their options open. Either way, dumbing down the debate won't help in the long run.

Posted in Social Networking, Privacy, Language, Biometrics

Two faced

How should we approach the question of Facebook’s facial recognition? A good place to start is the reasonably cut-and-dried treatment of Personal Information in international data privacy law, as adopted in over 100 countries worldwide. Recall that Personal Information (or in US parlance, PII) is basically any information about an individual where their identity is apparent or can reasonably be worked out.

Photos of strangers are not Personal Information. But tagged photos in Facebook are. If someone renders a photo identifiable by tagging it, then Facebook, as holder of the photographic data, is suddenly in possession of PII, and ergo has collected it. On the face of general privacy law, the person who has been named in the photo has a right to be reasonably informed of the collection, especially when the collection is done indirectly, as is the case when a third party does the tagging. Indeed, Facebook alerts members the instant they've been tagged by another member, and that's a good thing. But there are subtleties aplenty. In particular, when names are generated automatically and added to the photo database, that's a form of collection even before the tag is disseminated.

There is a legal technicality that will hit Facebook in Australia, namely changes to our Privacy Act that treat biometric templates as Sensitive Information, a special class of PII that carries extra obligations. In particular, while indirect collection of regular PII is usually permitted if the collector makes reasonable efforts to inform the subject after the fact, with Sensitive Information consent is required prior to collection. This would seem to mean that photo tagging by third parties would not be permissible without prior consent, and algorithmic collection might not be practicable at all.

Living in Sydney, I’ve long pondered how many countless tourist snapshots must accidentally include me in the background. That’s of no concern to me - there must be billions of images filed away in photo albums worldwide, printed pictures of incidental strangers, remaining unknown and unknowable. But when such images are digital, and in Facebook’s databases where they are run automatically against face recognition templates, they are no longer anonymous but personally identifiable and immensely valuable.

Facebook says their aim is simply to suggest tags to the people whose images have been recognised, but surely they will go much further. Think about the connections Facebook can make with facial recognition. Once they recognise that two different people were in the same place at the same time, might they treat this new fact the same way they treat two people appearing in the address book of a third party? I don't want to be sent friend suggestions for strangers just because we were both spotted hanging around Bondi Beach. Or Oxford Street (if you know what I mean).

Facebook and the other informopolies have a stark track record of commercially exploiting any Personal Information they can get their hands on -- or, more to the point, extract from the environment. The temptation will inevitably arise to disclose the names of people who are matched via facial recognition to things of commercial interest. For example, Facebook will be able to compile lists of people who stay at certain hotels, check in with certain airlines, use certain brands of phone or computer, or read certain books. I don't trust them not to exploit this information, especially when their Privacy Policy is silent on the secondary use of photo templates.

Personal Information is gold in the digital economy. We need to grasp the extent to which Facebook and other social businesses manufacture PII. With facial recognition, they are refining vast lodes of hitherto anonymous images into commercially valuable PII.

Posted in Social Networking, Privacy, Biometrics