Who owns a facial match?

I use the word “own” here figuratively, in the sense of taking responsibility for the consequences of biometric face matching.

Kashmir Hill’s forceful new book “Your Face Belongs to Us” sets out the challenges posed by facial recognition technology for privacy, personal safety, law and order, even national security. She expertly stakes out territory for which there is no clear best response.

There have been isolated regulatory initiatives to rein in facial recognition such as Illinois’s Biometric Information Privacy Act 2008.

At the same time, biometric authentication has become normalised through smartphones. Here the technology is well managed and, in my view, benign, thanks to strict standards set by handset manufacturers and the FIDO Alliance which confine biometric storage and matching to the secure elements inside personal devices.
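To make the contrast concrete, here is a minimal Python sketch of the on-device pattern those standards enforce. The names and structure are mine, purely illustrative, and not any vendor’s or FIDO’s actual API: the enrolled template and the matching step stay inside the secure element, and only a pass/fail outcome (in FIDO, a signature over a server challenge) ever leaves the device.

```python
from dataclasses import dataclass


@dataclass
class SecureElement:
    """Stand-in for a phone's secure enclave: the enrolled face template
    and the matching step live here and are never exported."""
    _enrolled_template: bytes  # never leaves the device

    def match(self, probe_template: bytes) -> bool:
        # A real matcher compares feature vectors against a tuned threshold;
        # byte equality is just a placeholder for that comparison.
        return probe_template == self._enrolled_template


def authenticate_locally(se: SecureElement, live_capture_template: bytes) -> bool:
    """Only this yes/no outcome crosses the device boundary."""
    return se.match(live_capture_template)


# Illustrative use: the raw biometric never appears outside the device.
device = SecureElement(_enrolled_template=b"enrolled-face-features")
print(authenticate_locally(device, b"enrolled-face-features"))  # True
```

The design point is that no third party ever receives, stores or matches the biometric itself; it is used once, locally, to unlock a conventional credential.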

Facial recognition for authentication is a hot topic in Australia, where the federal government’s IDMatch facility includes a Face Verification Service. And the recently released draft Digital ID Bill incorporates various biometric safeguards, in anticipation of facially verified identification.

Writing new laws to regulate new technologies is a familiar battleground. The same sorts of public debates seem to come and go over the years, only to reappear. When a given technology has legitimate uses, the arguments are especially fraught.

I am not a lawyer, but I have been close to law reform since the late 1990s, when electronic signature regulations emerged, and then through several waves of privacy reviews. I have learned the practical strength of technology neutrality and now laud the superpower of principles-based data protection. This regulatory philosophy leads to controls that are relatively resilient in the face of technological progress.

Given how difficult it is to write new laws, it’s important to make use of existing jurisprudence wherever we can. I call myself a regulatory conservationist – if there is such a thing.

In this regard, existing privacy law may contain a way to restrain facial recognition, at least in the predominant technology-neutral jurisdictions.

Australia’s data protection authority, the Office of the Australian Information Commissioner (OAIC), has led the way in extending the principle of collection limitation from direct to indirect collection. Looking at the way data analytics can generate personal information, the OAIC has issued this guidance:

“The concept of ‘collects’ applies broadly, and includes gathering, acquiring or obtaining personal information from any source and by any means. This includes collection by ‘creation’ which may occur when information is created with reference to, or generated from, other information” (italics added by me).

So this suggests to me that the generation of a facial match constitutes collection of personal information and, as such, needs to comply with data protection laws.

Consider what happens when a facial recognition service such as Clearview AI is used.

These services hold databases of reference images and names (exactly how these databases are put together is another story). A query is sent to the service with an image of the target person, known in biometrics as a probe, which is run against the reference set. If a match is found, a response is returned that effectively attaches a name or other identification to the probe image.
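To make the flow explicit, here is a schematic Python sketch of such a query. The database, the probe and the similarity function are placeholders of my own, not any vendor’s actual API; the point is the shape of the output, a record that binds the probe image to a name.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class MatchRecord:
    """The output of a successful query: an association between the
    probe image and a name. This record did not exist before the match."""
    probe_image: bytes
    matched_name: str
    score: float


def query_face_service(
    reference_db: dict[str, bytes],               # name -> reference image
    probe_image: bytes,                           # the image sent with the query
    similarity: Callable[[bytes, bytes], float],  # placeholder face matcher
    threshold: float = 0.9,
) -> Optional[MatchRecord]:
    """Run the probe against the reference set; if a reference image is
    similar enough, return a record naming the person in the probe."""
    best_name, best_score = None, 0.0
    for name, ref_image in reference_db.items():
        score = similarity(probe_image, ref_image)
        if score > best_score:
            best_name, best_score = name, score
    if best_name is not None and best_score >= threshold:
        # The newly generated personal data: image plus name.
        return MatchRecord(probe_image, best_name, best_score)
    return None
```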

So here’s the thing. A record that associates an image of a natural person with their name is new personal data.

That record did not exist before the biometric match was made. And under technology-neutral laws, as we have seen, such a record can be deemed to have been collected by both the facial recognition service and the party making the query. How the record is then used is another matter. Even if the record is not used, the Collection Limitation principle may apply (and in any case, the facial recognition services probably retain and continue to use the new match as part of their ongoing training efforts).

[Note also that it does not matter whether the reference material is sourced from the “public domain” (by trawling websites and social media, as Clearview AI has done). The publicness of the reference material has no bearing on the case I am making here. I know there are separate arguments that repurposing “public” images (for training biometric algorithms, for instance) represents another sort of privacy breach.]

This sort of remote identification is therefore subject to existing data privacy principles and laws where applicable. Such laws might mandate that the naming of a facial image be done only with the consent of the named individual. And the fact that this sort of creation and collection is on offer would also have to be declared and explained in a privacy notice.

Lockstep’s Data Verification Platform is a scheme to rationalise and organise data flows between data originators such as government and the risk owners who rely on accurate data to guide decisions. Join us in conversation.

If you’d like to follow the development of the Data Verification Platform model, please subscribe for email updates.