The fundamental privacy challenges in biometrics

The EPIC privacy tweet chat of October 16 took up “the Privacy Perils of Biometric Security”. Consumers and privacy advocates are often wary of this technology, sometimes fearing a hidden agenda. To be fair, function creep and unauthorised sharing of biometric data are issues anticipated by standard data protection regulations, and they can be well managed by judicious design in line with privacy law.

However, there is a host of deeper privacy problems in biometrics that are not often aired.

  • The privacy policies of social media companies rarely devote reasonable attention to biometric technologies like facial recognition or natural language processing. Only recently has Facebook openly described its photo tagging templates. Apple, on the other hand, remains completely silent about Siri in its Privacy Policy, despite the fact that when Siri takes dictated emails and text messages, Apple is collecting and retaining, without limit, personal telecommunications that are strictly out of bounds even to the carriers! Some speculate that biometric voice recognition is a natural next step for Siri, but it’s not a step that can be taken without giving notice today that personally identifiable voice data may in future be used for that purpose.
  • Personal Information (in Australia) is defined in the law as “information or an opinion … whether true or not … about an individual whose identity is apparent …” [emphasis added]. This definition is interesting in the context of biometrics. Because biometrics are fuzzy, we can regard a biometric identification as a sort of opinion. Technically, a biometric match is declared when the probability of a scanned trait corresponding to an enrolled template exceeds some preset threshold, like 95%. When a false match results, mistaking, say, “Alice” for “Bob”, it seems to me that the biometric system has created Personal Information about both Alice and Bob. There will be raw data, templates, audit files and metadata in the system pertaining to both individuals, some of it right and some of it wrong, but all of which needs to be accounted for under data protection and information privacy law.
  • In privacy, proportionality is important. The foremost privacy principle is Collection Limitation: organisations must not collect more personal information than they reasonably need to carry out their business. Biometric security is increasingly appearing in mundane applications with almost trivial security requirements, such as school canteens. Under privacy law, biometric implementations in these sorts of environments may be hard to justify.
  • Even in national security deployments, biometrics lead to over-collection, exceeding what may be reasonable. Very little attention is given in policy debates to exception management, such as the cases of people who cannot enroll. The inevitable failure of some individuals to enroll in a biometric can have obvious causes (like missing digits or corneal disease) and not so obvious ones. The only way to improve false positive and false negative performance for a biometric at the same time is to tighten the mathematical modelling underpinning the algorithm (see also “Failure to enroll”). This can constrain the acceptable range of the trait being measured, leading to outliers being rejected altogether. So for example, accurate fingerprint scanners need to capture a sharp image, making enrollment sometimes difficult for the elderly or manual workers. It’s not uncommon for a biometric modality to have a Fail-to-Enroll rate of 1%. Now, what is to be done with those unfortunates who cannot use the biometric? In the case of border control, additional identifying information must be collected. Biometric security sets what the public are told is a ‘gold standard’ for national security, so there is a risk that individuals who through no fault of their own are ‘incompatible’ with the technology will form a de facto underclass. Imagine the additional opprobrium that would go with being in a particular ethnic or religious minority group and having the bad luck to fail biometric enrollment. The extra interview questions that go with sorting out these outliers at border control points are a collection necessitated not by any business need but by the pitfalls of the technology.
  • And finally, there is something of a cultural gap between privacy and technology that causes blind spots amongst biometrics developers. Too many times, biometrics advocates misapprehend what information privacy is all about. It’s been said more than once that “faces are not private” and that there is “no expectation of privacy” with regards to one’s face in public. Even if these claims were true, such judgement calls are moot, for information privacy laws are concerned with any data about identifiable individuals. So when facial recognition technology takes anonymous imagery from CCTV or photo albums and attaches names to it, Personal Information is being collected, and the law applies. It is this type of crucial technicality that Facebook has smacked into headlong in Germany.
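The match-threshold mechanics behind the false-match and failure-to-enroll points above can be sketched with a toy simulation. This is illustrative only: the score distributions, their parameters and the thresholds are hypothetical, not drawn from any real biometric system.

```python
import random

random.seed(42)

def simulate(threshold, n=10_000):
    """Toy biometric matcher with hypothetical, overlapping score
    distributions for genuine and impostor comparisons."""
    genuine = [random.gauss(0.80, 0.10) for _ in range(n)]   # same person
    impostor = [random.gauss(0.50, 0.10) for _ in range(n)]  # different people
    # A match is declared when the similarity score exceeds the threshold.
    frr = sum(s <= threshold for s in genuine) / n   # false rejects: genuine user turned away
    far = sum(s > threshold for s in impostor) / n   # false accepts: "Alice" matched as "Bob"
    return far, frr

for t in (0.60, 0.70, 0.80):
    far, frr = simulate(t)
    print(f"threshold={t:.2f}  FAR={far:.3f}  FRR={frr:.3f}")
```

Raising the threshold drives false accepts down but false rejects up; improving both at once requires a tighter underlying model, and a tighter model narrows the acceptable range of the measured trait, which is exactly where failure-to-enroll comes from.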