Smile! You’re on Candid Apple

Apple is reported to have acquired Polar Rose, whose technology allows photos to be tagged with names through automated facial recognition.

The iPhone FAQ site says:
Interesting uses for the technology include automatically tagging people in photos and recognizing FaceTime callers from contact information. As the photographs taken on the iPhone improve, various image analysis algorithms could also be used to automatically classify and organize photos by type or subject.
Apple’s iPhoto currently recognizes faces in pictures for tagging purposes. It’s possible Apple is looking to improve and expand this functionality. Polar Rose removed its free tagging services for Facebook and Flickr earlier this month, citing interest from larger companies in licensing their technology.

The privacy implications are many and varied. Fundamentally, such technology will see hitherto anonymous image data converted into personal information by informopolies like Google, Facebook and Apple, which hold vast personal photo archives.

Facial recognition systems obviously need to be trained. Members will upload photos, name the people in the photos, and then have the algorithm run over other images in the database. So it seems that Apple (in this case) will have lists of the all-important bindings between biometric templates and names. What stops them from running the algorithm, and creating fresh bindings, over any other images they happen to hold? Apple has already shown a propensity to hang on to rich geolocation data generated by the iPhone, and a reluctance to specify what they intend to do with that data.
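To make the mechanics concrete, here is a minimal sketch in Python of what that enrolment step amounts to. It is purely illustrative: extract_template() is a hypothetical stand-in for whatever face-embedding algorithm a real service would use, and nothing here reflects how Apple actually builds or stores these bindings.

```python
import numpy as np

def extract_template(image_path: str) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding algorithm."""
    raise NotImplementedError("placeholder: not a real biometric algorithm")

# The all-important bindings between biometric templates and names,
# built entirely from member-supplied tags.
bindings: list[tuple[np.ndarray, str]] = []

def enrol(image_path: str, name: str) -> None:
    """A member uploads a photo and names the person in it."""
    template = extract_template(image_path)
    bindings.append((template, name))

# e.g. Alice tags her holiday snap, and the service now holds a binding
# between my name and my biometric template:
# enrol("manly ferry 11dec2010.jpg", "Steve Wilson")
```

The point of the sketch is simply that once the tags are in, the service holds name-to-template bindings it can reuse however it likes.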

If facial recognition worked well, the shady possibilities would be immediately obvious. Imagine I've been snapped in a total stranger's photo (say, by some tourist on the Manly ferry) and they've uploaded the image to a photo host of some sort. What if the host, or some third party data miner, runs the matching algorithm over the stranger's photo and recognises me in it? If they're quick, a cunning business might SMS me a free ice cream offer, seeing that I'm heading towards the Corso. Or they might work out that I'm a visitor, because the day before I was snapped in Auckland, and they can start to fill in my travel profile.

This is probably sci-fi for now, because, in fact, facial recognition doesn't work at all well when image capture conditions aren't tightly controlled. But this is no cause for complacency, for the very inaccuracy of the biometric method might make the privacy implications even worse.

To analyse this, as with any privacy assessment, we should start with information flows. Consider what's going on when a photo is uploaded to this kind of system. Say my friend Alice discloses to Apple that "manly ferry 11dec2010.jpg" is an image of Steve Wilson. Apple has then collected Personal Information about me, and they've done so indirectly, which under Australia's privacy regime is something they're supposed to inform me of as soon as practicable.

Then Apple reduces the image to a biometric template, like "Steve Wilson sample 001.bio". The Australian Law Reform Commission has recommended that biometric data be treated as Sensitive Information, and that its collection be subject to express consent. That is, a company would not be allowed to collect facial recognition templates without getting permission first.

Setting aside that issue for a moment, consider what happens later, when someone runs the algorithm against a bunch of other images and it generates some matches: e.g. "uluru 30jan2008.jpg"-is-an-image-of-Steve-Wilson. It doesn't actually matter whether the match is true or false: either way it's a brand new piece of Personal Information about me, created and collected by Apple without my knowledge.
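Again purely by way of illustration, here is how that matching step might look, reusing the hypothetical extract_template() from the earlier sketch and an arbitrary similarity threshold. Every hit the loop produces, accurate or not, is a new name-to-image assertion, minted without the subject knowing.

```python
import numpy as np

def extract_template(image_path: str) -> np.ndarray:
    """Hypothetical stand-in for a real face-embedding algorithm (as in the earlier sketch)."""
    raise NotImplementedError("placeholder: not a real biometric algorithm")

def is_match(known: np.ndarray, candidate: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when cosine similarity exceeds an arbitrary threshold."""
    similarity = float(np.dot(known, candidate) /
                       (np.linalg.norm(known) * np.linalg.norm(candidate)))
    return similarity > threshold

def tag_archive(bindings: list[tuple[np.ndarray, str]],
                archive: list[str]) -> list[tuple[str, str]]:
    """Run the stored name-template bindings over other images in the archive.

    Every hit, true or false, is a brand new assertion that <image> is a
    picture of <name>, created without the subject's knowledge.
    """
    new_assertions = []
    for image_path in archive:
        candidate = extract_template(image_path)
        for template, name in bindings:
            if is_match(template, candidate):
                new_assertions.append((image_path, name))
    return new_assertions

# e.g. the result might include ("uluru 30jan2008.jpg", "Steve Wilson"),
# whether or not that photo is really of me.
```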

I reckon both false matches and true matches satisfy the definition of Personal Information in the Australian Privacy Act, which includes “an opinion … whether true or not”.

Remember: The failures of biometrics often cause greater privacy problems than do their successes.