Two faced

How should we approach the question of Facebook’s facial recognition? A good place to start is the reasonably cut-and-dried treatment of Personal Information in international data privacy law, as adopted in over 100 countries worldwide. Recall that Personal Information (or in US parlance, PII) is basically any information about an individual where their identity is apparent or can reasonably be worked out.

Photos of strangers are not Personal Information. But tagged photos in Facebook are. If someone renders a photo identifiable by tagging it, then Facebook, as holder of the photographic data, is suddenly in possession of PII, and ergo has collected it. On the face of general privacy law, the person named in the photo has a right to be reasonably informed of the collection, especially when the collection is done indirectly, as is the case when a third party does the tagging. Indeed, Facebook alerts members the instant they've been tagged by another member, and that's a good thing. But there are subtleties aplenty. In particular, when names are generated automatically and added to the photo database, that's a form of collection even before the tag is disseminated.

There is a legal technicality that will hit Facebook in Australia, namely changes to our Privacy Act that treat biometric templates as Sensitive Information, a special class of PII that carries extra obligations. In particular, while indirect collection of regular PII is usually permitted if the collector makes reasonable efforts to inform the subject after the fact, with Sensitive Information, consent is required prior to collection. This would seem to mean that photo tagging by third parties would not be permissible without prior consent, and algorithmic collection might not be practicable at all.

Living in Sydney, I’ve long pondered how many tourist snapshots must accidentally include me in the background. That’s of no concern to me – there must be billions of images filed away in photo albums worldwide, printed pictures of incidental strangers, remaining unknown and unknowable. But when such images are digital, and sit in Facebook’s databases where they are run automatically against face recognition templates, they are no longer anonymous but personally identifiable, and immensely valuable.

Facebook says their aim is simply to suggest tags to the people whose images have been recognised, but surely they will go much further than this. Think about the connections Facebook can make through facial recognition. Once they recognise that two different people were in the same place at the same time, might they treat this new fact in the same way as when two people appear in the address book of a third party? I don’t want to be sent friend suggestions for strangers just because we were both spotted hanging around Bondi Beach. Or Oxford Street (if you know what I mean).

Facebook and the other informopolies have a stark track record of commercially exploiting any Personal Information they can get their hands on – or, more to the point, extract from the environment. The temptation will inevitably arise to disclose the names of people who are matched via facial recognition to things of commercial interest. For example, Facebook will be able to compile lists of people who stay at certain hotels, check in with certain airlines, use certain brands of phone or computer, or read certain books. I don’t trust them not to exploit this information, especially when their Privacy Policy is silent on secondary use of photo templates.

Personal Information is gold in the digital economy. We need to grasp the extent to which Facebook and other social businesses manufacture PII. With facial recognition, they are refining vast lodes of hitherto anonymous images into commercially valuable PII.