Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

Customer, How do I know thee?

One of the main contentions of the Identity Metasystem, NSTIC and like models is that banks, governments, telcos, universities and so on will be able to generalise their roles as Identity Provider, so that their customers can use their identities with other system participants. See for example "Envision it" No. 5 in the NSTIC strategy paper:

Ann learns that her recently issued bank card and her new university card are both Identity Ecosystem-approved credentials. She also discovers that her email provider and social networking site accept both of these credentials, while her health care provider and local utility companies accept the higher assurance bank card.

I agree it's useful to model banks and other institutions as issuing identities to their customers, but it's only a model. "Identity" is really a metaphor here; to be precise, digital identities are proxies for the relationships that certain organisations have with their members or customers. They cannot be taken out of their traditional contexts and bent without limit to suit other contexts without eventually breaking them. The identities issued by banks are special purpose and cannot be easily opened up to new Relying Parties. Past attempts to open up banking identities and federate them into other domains -- like the Australian Trust Centre and the Internet Industry Association Two Factor Authentication hub -- could not convince banks that the risks were manageable while delivering a positive net benefit.

There is a promise in many federated identity formulations -- like NSTIC -- that banks will be able to become IdPs for external Relying Parties, based on the fact that they already know their customers so well, and the system will provide arrangements for others to rely on that knowledge. How would that work in detail?

A would-be IdP must work out what knowledge it has about its customers that it is prepared to warrant to outside RPs, and for what purpose, and with what limitations. At present, a bank knows its customers with sufficient precision to suit its own purposes (and banking regulators). But underwriting identity assertions for the benefit of outsiders brings new risks that the bank has never before had to contemplate.

If the bank wants to productize the identification of its customers, then it needs to analyse its liability in the event that transactions go wrong between its customers and those external RPs. This is a tough problem when the bank has no necessary connection with those RPs, nor any control over the transactions. Of course, the bank might seek to gain some control, by qualifying just what it is that its customers are allowed to do with their bank-issued identities. But then this starts to look like the fine print that helped to sink Big PKI over a decade ago.

I reckon that the cost of even analysing the risks, much less of putting new contractual (or legislated) liability arrangements in place, will outweigh the cost of merely maintaining the diverse and separately evolved identities we have today. There is a middle road, where IdPs could qualify what their identities are good for (e.g. Bank A might support Health Care Providers P, Q, T and W and no others) but this would significantly dilute and devalue the vision of NSTIC. It's not what the strategy promotes.
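The "middle road" of qualified identities corresponds to a standard mechanism in federation protocols: the audience restriction found in SAML assertions and OpenID Connect tokens. A minimal sketch of the idea follows; all the IdP and RP names here are hypothetical, and this is an illustration of the concept rather than any actual bank's implementation.

```python
# Sketch: a bank IdP warrants its customer identities only to Relying Parties
# it has explicitly vetted, analogous to the audience ("aud") restriction in
# SAML and OpenID Connect. All names below are hypothetical.

ALLOWED_AUDIENCES = {
    "BankA": {"HealthProviderP", "HealthProviderQ",
              "HealthProviderT", "HealthProviderW"},
}

def issue_assertion(idp: str, subject: str, audience: str) -> dict:
    """Issue an identity assertion only for RPs the IdP is prepared to warrant."""
    if audience not in ALLOWED_AUDIENCES.get(idp, set()):
        raise ValueError(f"{idp} does not warrant identities for {audience}")
    return {"iss": idp, "sub": subject, "aud": audience}

def accept_assertion(assertion: dict, rp: str) -> bool:
    """The RP must check that it is the intended audience before relying."""
    return assertion["aud"] == rp

token = issue_assertion("BankA", "ann", "HealthProviderP")
assert accept_assertion(token, "HealthProviderP")
# An unvetted RP is never issued an assertion at all:
try:
    issue_assertion("BankA", "ann", "SocialSiteX")
except ValueError:
    pass  # BankA declines to underwrite this use
```

The point of the sketch is the whitelist itself: every entry in it represents a bilateral risk and liability analysis the bank would have to perform, which is exactly the cost the paragraph above argues against.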

Posted in Federated Identity, Identity

Facial recognition isn't creepy: it's dangerous

Why do people use soft, subjective words like "creepy" to criticise facial recognition in social networking sites? Eric Schmidt has said that facial recognition is 'Too creepy even for Google' but by not damning it more strongly, does he deliberately leave himself wiggle room?

We can and really should analyse facial recognition objectively.

First and foremost is a basic technicality: facial recognition converts vast drifts of hitherto anonymous image data into Personally Identifiable Information, and in so doing instantly creates obligations under black letter privacy law in Europe and elsewhere.

The collection of Personal Information needs to be properly disclosed. Facebook appears evasive in the way it describes (or not) biometric templates. It proudly announces that members can remove tags yet it actually retains the biometric templates until an extra step is taken to have them deleted too (see http://www.facebook.com/help/?faq=225110000848463 ***).

*** Update: Some time in early 2012 Facebook updated its Help pages to provide more information, and to improve the way they manage templates. The link above is now dead, but it's preserved for the record. Facebook now describes how templates are created, and they now delete templates when you disable photo tagging: https://www.facebook.com/help/tag-suggestions. This improved transparency is welcome, as is the automatic deletion of templates, without the extra un-obvious step that was previously necessary.

Facebook promotes as privacy enhancing the fact that only friends are allowed to suggest tags. Let's not be naive about this. Facebook are cleverly crowd-sourcing the training of their biometric algorithms, and they wouldn't want too many guesses from strangers polluting their data.

And then there is the purpose of the collection. Let's be plain about why Facebook and the other informopolies are so keen on facial recognition: it's to improve their ability to make connections. They will now be able to spot when two people are in the same place at the same time. And they will be able to tell what cars people like to drive, what movies they're watching, what devices they like to use -- without anyone needing to expressly "like" anything anymore. This is pure gold.

By maintaining emotive or intuitive descriptions of biometric concerns, technologists are leaving biometric critics in a soft corner. This might be an innocent side effect of trying to use plain language, or it could be cleverly calculated in order to keep their options open. But either way, dumbing down the debate won't help in the long run.

Posted in Social Networking, Privacy, Language, Biometrics

Two faced

How should we approach the question of Facebook’s facial recognition? A good place to start is the reasonably cut-and-dried treatment of Personal Information in European-style Information Privacy Law, as adopted at least in part in other places like Australia. Recall that Personal Information (PI) is basically any information about an individual where their identity is apparent.

Photos of strangers are not PI. Tagged photos in Facebook are PI. If someone renders a photo identifiable by tagging it, then Facebook as holder of the photographic data is suddenly in possession of PI -- and ergo has collected it. Generally, the named person in the photo has a legal right to be informed of the collection, especially when the collection is done indirectly, as is the case when a third party does the tagging.

Now, Facebook alerts members the instant they've been tagged, and that's a good thing.

But there are subtleties aplenty. When someone is informed by Facebook that they have been tagged, they can ask for the tag to be removed. On its face that's fine, but Facebook is classically slippery on the details. They are not at all clear about this in their Privacy Policy, but after making inquiries I found out that there is a deeper level of "summary information" about names and photos. This information is, in other words, the biometric templates. It seems clear that asking that a tag be removed does not cause the template to be deleted, because removing the “summary information” requires a separate request (see http://www.facebook.com/help/?faq=225110000848463). So, Facebook generally retains templates after the visible tags are removed. And nowhere do they disclose what they might do with them.
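The crux of the argument is that visible tags and biometric templates are separate pieces of data with separate lifecycles. A toy model makes the distinction concrete; the class and field names here are my own illustration, not Facebook's actual schema.

```python
class PhotoStore:
    """Toy model: visible tags and biometric "summary information" are stored
    separately, so deleting one does not delete the other.
    (Illustrative only; not Facebook's actual data model.)"""

    def __init__(self):
        self.tags = {}       # photo_id -> set of tagged names (user-visible)
        self.templates = {}  # name -> biometric template (not user-visible)

    def tag(self, photo_id, name):
        self.tags.setdefault(photo_id, set()).add(name)
        # Tagging creates (or refines) a template as a side effect.
        self.templates.setdefault(name, f"template-for-{name}")

    def remove_tag(self, photo_id, name):
        # Removing the visible tag deliberately leaves the template intact.
        self.tags.get(photo_id, set()).discard(name)

    def delete_template(self, name):
        # The separate, extra request the member must make.
        self.templates.pop(name, None)

store = PhotoStore()
store.tag("photo1", "alice")
store.remove_tag("photo1", "alice")
assert "alice" not in store.tags["photo1"]  # the tag is gone...
assert "alice" in store.templates           # ...but the template remains
store.delete_template("alice")              # requires a distinct action
```

Under this model, "remove the tag" and "delete the template" are simply different operations on different stores, which is why the former gives no assurance about the latter.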

There is an impending legal technicality that will hit Facebook hard in Australia, namely changes recommended by the Australian Law Reform Commission that will treat biometric templates as Sensitive Information, a special class of PI that carries extra obligations. In particular, while indirect collection of regular PI is usually permitted if the collector makes reasonable efforts to inform the subject after the fact, with Sensitive Information, consent is required prior to collection. This would seem to mean that photo tagging by third parties would not be permissible without prior consent.

Living in Sydney, I’ve long amused myself daydreaming about how many countless tourist snapshots must accidentally include me in the background. It’s of no concern. There must be billions of images filed away in photo albums worldwide, pictures of incidental strangers remaining unknown and unknowable forever. But when such images sit in Facebook’s databases, Facebook runs them against its templates, populating the images with covert naming labels.

Facebook says their aim is simply to suggest tags to the people whose images have been recognised, but I have to think they will go much further than this. In their unfair bargain for PI, no service is ever offered "for free".

Surely Facebook will use the facial recognition to make connections. I am not being paranoid about this -- making connections is their lifeblood. Once they recognise that two different people were in the same place at the same time, they will treat this exactly the same as when two people are in the same address book of a third party: they will make introductions. Now, I don't want to be sent friend suggestions for strangers just because we were both spotted hanging around Bondi Beach. Or Oxford Street (if you know what I mean).

Facebook and the other informopolies have a stark track record of commercially exploiting Personal Information. The temptation will inevitably arise to disclose names of people matched via facial recognition to things of business interest. For example, Facebook will be able to compile lists of people that stay with certain hotels, check in with certain airlines, use certain brands of phone or computer, or read certain books. I don't trust them to not exploit this information, especially when their Privacy Policy is totally silent on secondary use of photo templates. The policy doesn’t even mention templates, just the visible tags.

Personal Information is gold in the digital economy. We need to grasp the extent to which Facebook manufacture Personal Information. With facial recognition, they are converting vast drifts of anonymous images into commercially valuable and geo-locatable PI. They are literally printing money.

Posted in Privacy, Biometrics, Social Networking