Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

The strengths and weaknesses of Data Privacy in the Age of Big Data

This is the abstract of a current privacy conference proposal.

Synopsis

Many Big Data and online businesses proceed on a naive assumption that data in the "public domain" is up for grabs; technocrats are often surprised that conventional data protection laws can be interpreted to cover the extraction of PII from raw data. On the other hand, orthodox privacy frameworks don't cater for the way PII can be created in future from raw data collected today. This presentation will bridge the conceptual gap between data analytics and privacy, and offer new dynamic consent models to civilize the trade in PII for goods and services.

Abstract

It’s often said that technology has outpaced privacy law, yet by and large that's just not the case. Technology has certainly outpaced decency, with Big Data and biometrics in particular becoming increasingly invasive. However, OECD data privacy principles set out over thirty years ago still serve us well. Outside the US, rights-based privacy law has proven effective against today's technocrats' most worrying business practices, based as they are on taking liberties with any data that comes their way. To borrow from Niels Bohr, technologists who are not surprised by data privacy have probably not understood it.

The cornerstone of data privacy in most places is the Collection Limitation principle, which holds that organizations should not collect Personally Identifiable Information beyond their express needs. It is the conceptual cousin of security's core Need-to-Know Principle, and the best starting point for Privacy-by-Design. The Collection Limitation principle is technology neutral and thus blind to the manner of collection. Whether PII is collected directly by questionnaire or indirectly via biometric facial recognition or data mining, data privacy laws apply.

It’s not for nothing we refer to "data mining". But few of these unlicensed gold diggers seem to understand that the synthesis of fresh PII from raw data (including the identification of anonymous records like photos) is merely another form of collection. The real challenge in Big Data is that we don’t know what we’ll find as we refine the techniques. With the best will in the world, it is hard to disclose in a conventional Privacy Policy what PII might be collected through synthesis down the track. The age of Big Data demands a new privacy compact between organisations and individuals. High-minded organisations will promise to keep people abreast of new collections, and will offer ways to opt in, and out and in again, as the PII-for-service bargain continues to evolve.
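
To make the synthesis point concrete, here is a toy sketch of the classic linkage attack, in which a nominally anonymous dataset is joined to a public one on quasi-identifiers. All names, fields and values are invented for illustration; the point is that the join manufactures fresh PII, which under the Collection Limitation principle amounts to a new act of collection.

    # Toy sketch: "anonymous" records become PII by linkage.
    # All names, fields and values here are invented for illustration.

    anonymous_records = [
        {"dob": "1961-07-28", "postcode": "2041", "sex": "F", "diagnosis": "..."},
    ]

    public_register = [
        {"name": "Alice Example", "dob": "1961-07-28", "postcode": "2041", "sex": "F"},
    ]

    def link(records, register):
        """Join two datasets on quasi-identifiers (DOB, postcode, sex).
        Every successful join synthesises fresh PII about a now
        identifiable individual -- in effect, a new collection."""
        quasi = lambda r: (r["dob"], r["postcode"], r["sex"])
        names = {quasi(p): p["name"] for p in register}
        return [dict(r, name=names[quasi(r)]) for r in records if quasi(r) in names]

    print(link(anonymous_records, public_register))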

Posted in Social Networking, Privacy, Biometrics, Big Data

FIDO Alliance goes from strength to strength

With a bunch of exciting new members joining up on the eve of the RSA Conference, the FIDO Alliance is going from strength to strength. And they've just published the first public review drafts of their core "universal authentication" protocols.

An update to my Constellation Research report on FIDO is now available. Here's a preview.

The Go-To standards alliance in protocols for modern identity management

The FIDO Alliance – for Fast IDentity Online – is a fresh, fast growing consortium of security vendors and end users working out a new suite of protocols and standards to connect authentication endpoints to services. With an unusual degree of clarity in this field, FIDO envisages simply "doing for authentication what Ethernet did for networking".

Launched in early 2013, the FIDO Alliance has already grown to nearly 100 members, amongst which are heavyweights like Google, Lenovo, MasterCard, Microsoft and PayPal as well as a couple of dozen biometrics vendors, many of the leading Identity and Access Management solutions and service providers and several global players in the smartcard supply chain.

FIDO is different. The typical hackneyed elevator pitch in Identity and Access Management promises to "fix the password crisis" – usually by changing the way business is done. Most IDAM initiatives unwittingly convert clear-cut technology problems into open-ended business transformation problems. In contrast, FIDO's mission is refreshingly clear cut: it seeks to make strong authentication interoperable between devices and servers. When users have activated FIDO-compliant endpoints, reliable fine-grained information about their client environment becomes readily discoverable by servers, which can then make access control decisions, each according to its own security policy.

With its focus, pragmatism and critical mass, FIDO is justifiably today's go-to authentication standards effort.

In February 2014, the FIDO Alliance announced the release of its first two protocol drafts, and a clutch of new members including powerful players in financial services, the cloud and e-commerce. Constellation notes in particular the addition to the board of security leader RSA and another major payments network, Discover. And FIDO continues to strengthen its vital “Relying Party” (service provider) representation with the appearance of Aetna, Goldman Sachs, Netflix and Salesforce.com.

It's time we fixed the Authentication plumbing

In my view, the best thing about FIDO is that it is not about federated identity but instead it operates one layer down in what we call the digital identity stack. This might seem to run against the IDAM tide, but it's refreshing, and it may help the FIDO Alliance sidestep the quagmire of identity policy mapping and legal complexities. FIDO is not really about the vexed general issue of "identity" at all! Instead, it's about low level authentication protocols; that is, the plumbing.

The FIDO Alliance sets out its mission as follows:

  • Change the nature of online authentication by:
    • Developing technical specifications that define an open, scalable, interoperable set of mechanisms that reduce the reliance on passwords to authenticate users.
    • Operating industry programs to help ensure successful worldwide adoption of the Specifications.
    • Submitting mature technical Specification(s) to recognized standards development organization(s) for formal standardization.

The engineering problem underlying Federated Identity is actually pretty simple: if we want to have a choice of high-grade physical, multi-factor "keys" used to access remote services, how do we convey reliable cues to those services about the type of key being used and the individual who's said to be using it? If we can solve that problem, then service providers and Relying Parties can sort out for themselves precisely what they need to know about the users, sufficient to identify and authenticate them.
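
To illustrate the plumbing metaphor, here is a minimal sketch of a FIDO-style challenge-response flow. It is emphatically not the FIDO wire protocol: the JSON assertion format and the "authenticator" label are invented for illustration, and it assumes the pyca/cryptography package is available. What it shows is the essential shape of the problem just described: a device-held key signs a fresh server challenge together with a claim about the kind of authenticator in use, and the server verifies before applying its own policy.

    # Conceptual sketch only -- NOT the actual FIDO protocol or message format.
    import os, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Enrolment: the key pair is generated on the device; the server
    # registers only the public key.
    device_key = Ed25519PrivateKey.generate()
    server_registry = {"alice": device_key.public_key()}

    # Authentication: the server issues a fresh random challenge...
    challenge = os.urandom(32)

    # ...and the device signs it along with metadata about the endpoint.
    assertion = json.dumps({
        "user": "alice",
        "authenticator": "fingerprint-match-on-device",  # illustrative label
        "challenge": challenge.hex(),
    }).encode()
    signature = device_key.sign(assertion)

    # The server verifies, then decides per its own security policy.
    try:
        server_registry["alice"].verify(signature, assertion)
        print("assertion verified; apply local access policy")
    except InvalidSignature:
        print("reject")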

All of this leaves the 'I' in the acronym "FIDO" a little incongruous. It's such a cute name (alluding of course to the Internet dog) that it's unlikely to change. Instead, I've heard it suggested that the acronym might go the way of "KFC", eventually no longer spelled out but simply a word in its own right.

FIDO Alliance Board Members

  • Blackberry
  • CrucialTec (manufactures innovative user input devices for mobiles)
  • Discover Card
  • Google
  • Lenovo
  • MasterCard
  • Microsoft
  • Nok Nok Labs (a specialist authentication server software company)
  • NXP Semiconductors (a global supplier of card chips, SIMs and Secure Elements)
  • Oberthur Technologies (a multinational smartcard and mobility solutions provider)
  • PayPal
  • RSA
  • Synaptics (fingerprint biometrics)
  • Yubico (the developer of the YubiKey PKI-enabled 2FA token).

FIDO Alliance Board Sponsor Level Members

  • Aetna
  • ARM
  • AGNITiO
  • Dell
  • Discretix
  • Entersekt
  • EyeLock Inc.
  • Fingerprint Cards AB
  • FingerQ
  • Goldman Sachs
  • IdentityX
  • IDEX ASA
  • Infineon
  • Kili
  • Netflix
  • Next Biometrics Group
  • Oesterreichische Staatsdruckerei GmbH
  • Ping Identity
  • SafeNet
  • Salesforce
  • SecureKey
  • Sonavation
  • STMicroelectronics
  • Wave Systems

Stay tuned for the updated Constellation Research report.

Posted in Smartcards, Security, Identity, Federated Identity, Constellation Research, Biometrics

My analysis of the FIDO Alliance

I've written a new Constellation Research "Quark" Report on the FIDO Alliance ("Fast Identity Online"), a fresh, fast growing consortium working out protocols and standards to connect authentication endpoints to services.

With a degree of clarity that is uncommon in Identity and Access Management (IDAM), FIDO envisages simply "doing for authentication what Ethernet did for networking".

Not quite one year old, the FIDO Alliance has already grown to nearly 70 members, amongst which are heavyweights like Google, Lenovo, MasterCard, Microsoft and PayPal as well as a dozen biometrics vendors and several global players in the smartcard supply chain.

STOP PRESS! Discover Card joined a few days ago at board level.

FIDO is different. The typical hackneyed IDAM elevator pitch promises to "fix the password crisis" but usually with unintended impacts on how business is done. Most IDAM initiatives unwittingly convert clear-cut technology problems into open-ended business transformation problems.

In welcome contrast, FIDO’s mission is clear cut: it seeks to make strong authentication interoperable between devices and servers. When users have activated FIDO-compliant endpoints, reliable fine-grained information about the state of authentication becomes readily discoverable by any server, which can then make access control decisions according to its own security policy.

FIDO is not about federation; it's not even about "identity"!

With its focus, pragmatism and critical mass, FIDO is justifiably today’s go-to authentication industry standards effort.

For more detail, please have a look at The FIDO Alliance at the Constellation Research website.

Posted in Smartcards, Security, Identity, Biometrics

Opinion: Why NFC trumps biometrics

This is a copy of an op-ed I wrote in IT News on 20 September.

It’s been suggested that with Apple’s introduction of biometric technology, the “i” in iPhone now stands for “identity”. Maybe “i” is for “ironic” because there is another long-awaited feature that would have had much more impact on the device’s identity credentials.

The fingerprint scanner has appeared in the new iPhone 5s, as predicted, and ahead of Near Field Communication (NFC) capability. In my view, NFC is much more important for identity. NFC is usually thought of as a smartcard emulator, allowing mobile devices to appear to merchant terminals as payment instruments, but the technology has another, lesser known mode: reader emulation.

NFC devices can be programmed to interface with any contactless card: smart driver licenses, health cards, employee ID and so on. The power to identify and authenticate to business and enterprise apps using real world credentials would be huge for identity management, but it seems we have to wait.
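
As a taste of what reader emulation makes possible, here is a minimal sketch using the open-source nfcpy library, assuming a USB contactless reader attached to the host. Real credentials such as smart driver licences would of course require application-specific command exchanges well beyond this.

    # Minimal sketch of reading a contactless card with nfcpy.
    # Assumes a USB NFC reader; a real credential would need
    # application-specific APDU exchanges beyond reading the UID.
    import nfc

    def on_connect(tag):
        print("card detected, UID:", tag.identifier.hex())
        return True  # hold the connection until the card is removed

    with nfc.ContactlessFrontend('usb') as clf:
        clf.connect(rdwr={'on-connect': on_connect})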

Meanwhile, what does the world’s instantly most famous fingerprint reader mean for privacy and security? As is the case with all things biometric, the answers are not immediately apparent.

Biometric authentication might appear to go with mobiles like strawberries and cream. Smartphones are an increasingly central fixture in daily life and yet something like 40% of users fail to protect this precious asset with a PIN. So automatic secure logon is an attractive idea.

There are plenty of options for biometrics in smartphones, thanks to the built in camera and other sensors. Android devices have had face unlock for a long time now, and iris authentication is also available. Start-up EyeVerify scans the vein pattern in the whites of the eye; gait recognition has been mooted; and voice recognition would seem an obvious alternative.

With its US$356M acquisition of AuthenTec in 2012, Apple made a conspicuous commitment to a biometric technology that was always going to involve significant new hardware in the handset. The iPhone 5s incorporates a capacitive fingerprint detector in a subtly modified Home button. Ostensibly the button operates as it always has, but it automatically scans the user’s finger in the time it takes to press and release. Self-enrolment is said to be quite painstaking, with the pad of the finger being comprehensively scanned and memorised. This allows the relatively small scanner to still do its job no matter what fraction of the fingertip happens to be presented. Up to five alternate fingers can be enrolled, which allows for a fall-back if the regular digit is damaged, and for additional users such as family members to be registered.

This much we know. What’s less clear is the security performance of the iPhone 5s.

Remember that all biometrics commit two types of error: False Rejects, where an enrolled user is mistakenly blocked, and False Accepts, where someone else is confused for the legitimate user. Both types of error are inevitable, because biometrics must be designed to tolerate a little variability. Each time a body part is presented, it will look a little different; fingers get dirty or scarred or old; sensors get scratched; angle and pressure vary. But in allowing for change, the biometric is liable to occasionally think similar people are the same.

The propensity to make either False Accept or False Reject errors must be traded off in every biometric application, to deliver reasonable security and convenience. Data centre biometrics, for instance, are skewed towards security, and as a result can be quite tricky and time consuming to use. With consumer electronics, the biometric trade-off goes very much the other way. Consumers only ever directly experience one type of error – False Rejects – and they can be very frustrating. Most users don't in fact ever lose their phone, so False Accepts are irrelevant to them.
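
The trade-off can be seen in a toy simulation. The score distributions below are made up purely for illustration; real vendors rarely publish theirs, which is part of the problem.

    # Toy sketch of the False Accept / False Reject trade-off.
    # Score distributions are invented for illustration only.
    import random
    random.seed(1)

    genuine  = [random.gauss(0.80, 0.10) for _ in range(10000)]  # right finger
    impostor = [random.gauss(0.40, 0.12) for _ in range(10000)]  # wrong finger

    def rates(threshold):
        frr = sum(s <  threshold for s in genuine)  / len(genuine)   # user locked out
        far = sum(s >= threshold for s in impostor) / len(impostor)  # stranger let in
        return far, frr

    for t in (0.50, 0.60, 0.70):
        far, frr = rates(t)
        print(f"threshold {t:.2f}: FAR {far:.4f}  FRR {frr:.4f}")
    # Tuning the threshold down shrinks FRR (frustration) at the cost
    # of FAR (security) -- the bias consumer devices inevitably choose.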

Thus the iPhone 5s finger reader will be heavily biased towards convenience, but at what cost? Frustratingly, it is almost impossible to tell. Independent biometrics researchers like Jim Wayman have long warned that lab testing is a very poor predictor of biometric performance in the field. The FBI advises that field performance is always significantly worse than reported by vendors, especially in the face of determined attack.

All we have to go on is anecdotes. We’re assured that the Authentec technology has “liveness detection” to protect against fake fingers but it’s a hollow promise. There are no performance standards or test protocols for verifying the claim of liveness detection.

The other critical promise made by Apple is that the fingerprint templates, stored securely within the handset, will never be made accessible to third party applications or to the cloud. This is a significant privacy measure, and is to be applauded. It's vital that Apple stick to this policy.

But here’s the rub for identity: if the biometric matching is confined to the phone, then it’s nothing more than a high tech replacement for the PIN, with indeterminate effectiveness. Certainly smartphones have great potential for identity management, but the advantages are to be gained from digital wallets and NFC, not from biometrics.

Some have quipped that the “S” in iPhone 5S stands for “security” but to me it’s more like “speculation”.

Posted in Biometrics, Security

How Bart Simpson might defend TouchID

A week and a bit after Apple released the iPhone 5S with its much vaunted "TouchID" biometric, the fingerprint detector has been subverted by the Chaos Computer Club (CCC). So what are we to make of this?

Security is about economics. The CCC attack is not a trivial exercise. It entailed a high resolution photograph, high res printing, and a fair bit of faffing about with glue and plastics. Plus of course the attacker needs to have taken possession of the victim's phone, because one good thing about Apple's biometric implementation is that the match is done on the device. So one question is: does the effort required to beat the system outweigh the gains to be made by a successful attacker? For a smartphone with a smart user (who takes care not to load up their device with real valuables) the answer is probably no.

But security is also about transparency and verification, and TouchID is the latest example of the biometrics industry falling short of security norms. Apple has released its new "security" feature with no security specs. No stated figures on False Accept Rate, False Reject Rate or Failure to Enroll Rate, and no independent test results. All we have are anecdotes that the False Reject Rate is very, very low (in keeping with legendary Apple human factors engineering), and odd claims that a dead finger won't activate the AuthenTec technology. It's held out to be a security measure, but the manufacturer feels no need to predict how well the device will withstand criminal attack.

There is no shortage of people lining up to say the CCC attack is not a practical threat. Which only raises the question: just how "secure" do we want biometrics to be? Crucially, that's actually impossible to answer, because there are still no agreed real-life test protocols for any biometric, and no liveness detection standards. Vendors can make any marketing claim they like for a biometric solution without being held to account. Contrast this Wild West situation with the rigor applied to any other branch of security, like cryptographic algorithms, key lengths, Trusted Platform Modules, smartcards and Secure Elements.

You can imagine Bart Simpson defending the iPhone 5S fingerprint scanner:

"It won't be spoofed!
I never said it couldn't be spoofed!
It doesn't really matter if it is spoofed!!!"

Demonstrations of biometric failings need to be taken more seriously - not because they surprise hardened security professionals (they don't) but because the demos lay bare the laziness of many biometrics vendors and advocates, and their willful disregard for security professionalism. People really need to be encouraged to think more critically about biometrics. For one thing, they need to understand subtleties like the difference between the One-to-One verification of the iPhone 5S and the One-to-Many identification of fanciful fingerprint payment propositions like PayTango.
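
A back-of-envelope calculation shows why that distinction matters so much. The per-comparison False Accept Rate below is hypothetical, and independence between comparisons is assumed; the compounding effect is the point.

    # Back-of-envelope sketch: One-to-Many matching compounds false accepts.
    # The per-comparison FAR is hypothetical; comparisons assumed independent.
    FAR = 0.0001  # illustrative one-to-one False Accept Rate

    def p_false_accept(n_enrolled):
        """Chance that at least one of n enrolled templates falsely
        matches a presented stranger."""
        return 1 - (1 - FAR) ** n_enrolled

    for n in (1, 1000, 100000):
        print(f"{n:>6} templates: P(false accept) = {p_false_accept(n):.4f}")
    # 1 template  -> 0.0001 (a phone verifying its owner)
    # 100,000     -> ~1.0   (a payment scheme picking a finger out of a crowd)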

The truth is that consumer biometrics are all about convenience, not security. And that would be ok, if only manufacturers were honest about it.

Posted in Security, Biometrics

Utopia

"Here's your cool new identifier! It's so easy to use. No passwords to forget, no cards to leave behind. And it's so high tech, you're going to be able to use it for everything eventually: payments, banking, e-health, the Interwebs, unlocking your house or office, starting your car!

"Oh, one thing though, there are some little clues about your identifier around the place. Some clues are on the ATM, some clues are in Facebook and others in Siri. There may be a few in your trash. But it's nothing to worry about. It's hard for hackers to decipher the clues. Really quite hard.

"What's that you say? What if some hacker does figure out the puzzle? Gosh, um, we're not exactly sure, but we got some guys doing their PhDs on that issue. Sorry? Will we give you a new identifier in the meantime? Well, no actually, we can't do that right now. Ok, no other questions? Cool!

"Good luck!"

Posted in Biometrics

The beginning of privacy!

The headlines proclaim that the newfound ability to re-identify anonymous DNA donors means The End Of Privacy!

No it doesn't, it only means the end of anonymity.

Anonymity is not the same thing as privacy. Anonymity keeps people from knowing what you're doing, and it's a vitally important quality in many settings. But in general we usually want people (at least some people) to know what we're up to, so long as they respect that knowledge. That's what privacy is all about. Anonymity is a terribly blunt instrument for protecting privacy, and it's also fragile. If anonymity is all you have, then you're in deep trouble when someone manages to defeat it.

New information technologies have clearly made anonymity more difficult, yet it does not follow that we must lose our privacy. Instead, these developments bring into stark relief the need for stronger regulatory controls that compel restraint in the way third parties deal with Personal Information that comes into their possession.

A great example is Facebook's use of facial recognition. When Facebook members innocently tag one another in photos, Facebook creates biometric templates with which it then automatically processes all photo data (previously anonymous), looking for matches. This is how they can create tag suggestions, but Facebook is notoriously silent on what other applications it has for facial recognition. Now and then we get a hint, with, for example, news of the Facedeals start-up last year. Facedeals accesses Facebook's templates (under conditions that remain unclear) and uses them to spot customers as they enter a store to automatically check them in. It's classic social technology: kinda sexy, kinda creepy, but clearly in breach of Collection, Use and Disclosure privacy principles.

And indeed, European regulators have found that Facebook's facial recognition program is unlawful. The chief problem is that Facebook never properly disclosed to members what goes on when they tag one another, and they never sought consent to create biometric templates with which to subsequently identify people throughout their vast image stockpiles. Facebook has been forced to shut down their facial recognition operations in Europe, and they've destroyed their historical biometric data.

So privacy regulators in many parts of the world have real teeth. They have ruled that re-identification of anonymous data by facial recognition is unlawful, and they have managed to stop a very big and powerful company from doing it.

This is how we should look at the implications of the DNA 'hacking'. Indeed, Melissa Gymrek from the Whitehead Institute said in an interview: "I think we really need to learn to deal with the fact that we cannot ever make data sets truly anonymous, and that I think the key will be in regulating how we are allowed to use this genetic data to prevent it from being used maliciously."

Perhaps this episode will bring even more attention to the problem in the USA, and further embolden regulators to enact broader privacy protections there. Perhaps the very extremeness of the DNA hacking does not spell the end of privacy so much as its beginning.

Posted in Social Media, Science, Privacy, Biometrics, Big Data

Technological imperialism

Abstract

Biometrics seems to be going gangbusters in the developing world. I fear we're seeing a new wave of technological imperialism. In this post I will examine whether the biometrics field is mature enough for the lofty social goal of empowering the world's poor and disadvantaged with "identity".

The independent Center for Global Development has released a report "Identification for Development: The Biometrics Revolution" which looks at 160 different identity programs using biometric technologies. By and large, it's a study of the vital social benefits to poor and disadvantaged peoples when they gain an official identity and are able to participate more fully in their countries and their markets.

The CGD report covers some of the kinks in how biometrics work in the real world, such as the fact that a minority of people are unable to enroll and must then be treated carefully and fairly. But I feel the report takes biometric technology for granted. In contrast, independent experts have shown there is insufficient science for biometric performance to be predicted in the field. I conclude that biometrics are not ready to support such major public policy initiatives as ID systems.

The state of the science of biometrics

I recently came across a weighty assessment of the science of biometrics presented by one of the gurus, Jim Wayman, and his colleagues to the NIST IBPC 2010 biometric testing conference. The paper entitled "Fundamental issues in biometric performance testing: A modern statistical and philosophical framework for uncertainty assessment" should be required reading for all biometrics planners and pundits.

Here are some important extracts:

[Technology] testing on artificial or simulated databases tells us only about the performance of a software package on that data. There is nothing in a technology test that can validate the simulated data as a proxy for the “real world”, beyond a comparison to the real world data actually available. In other words, technology testing on simulated data cannot logically serve as a proxy for software performance over large, unseen, operational datasets. [p15, emphasis added].

In a scenario test, [False Non Match Rate and False Match Rate] are given as rates averaged over total transactions. The transactions often involve multiple data samples taken of multiple persons at multiple times. So influence quantities extend to sampling conditions, persons sampled and time of sampling. These quantities are not repeatable across tests in the same lab or across labs, so measurands will be neither repeatable nor reproducible. We lack metrics for assessing the expected variability of these quantities between tests and models for converting that variability to uncertainty in measurands. [p17].

To explain, a biometric "technology test" is when a software package is exercised on a standardised data set, usually in a bake-off such as NIST's own biometric performance tests over the years. And a "scenario test" is when the biometric system is tested in the lab using actual test subjects. The meaning of the two dense sentences emphasised in the extracts is: technology test results from one data set do not predict performance on any other data set or scenario, and biometrics practitioners still have no way to predict the accuracy of their solutions in the real world.
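
One way to appreciate the uncertainty argument is the statisticians' "rule of three": if a matcher makes zero errors in N independent lab trials, the 95% upper confidence bound on its true error rate is still roughly 3/N. The sketch below applies that standard result, and if anything it understates the problem, since it assumes the lab data represents the field, which is exactly what Wayman et al dispute.

    # The "rule of three": zero errors observed in N independent trials
    # still leaves a 95% upper confidence bound of about 3/N on the
    # true error rate.
    def rule_of_three_upper_bound(n_trials):
        return 3.0 / n_trials

    for n in (300, 10_000, 1_000_000):
        print(f"{n:>9} clean trials: true error rate may still be "
              f"up to {rule_of_three_upper_bound(n):.6f}")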

The authors go on:

[To] report false match and false non-match performance metrics for [iris and face recognition] without reporting on the percentage of data subjects wearing contact lenses, the period of time between collection of the compared image sets, the commercial systems used in the collection process, pupil dilation, and lighting direction is to report "nothing at all". [pp17-18].

And they conclude, amongst other things:

[False positive and false negative] measurements have historically proved to be neither reproducible nor repeatable except in very limited cases of repeated execution of the same software package against a static database on the same equipment. Accordingly, "technology" test metrics have not aligned well with "scenario" test metrics, which have in turn failed to adequately predict field performance. [p22].

The limitations of biometric testing have repeatedly been stressed by no less an authority than the US FBI. In its State-of-the-Art Biometric Excellence Roadmap (SABER) Report, the FBI cautions that:

For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application. [p4.10]

The SABER report also highlighted a widespread weakness in biometric testing, namely that accuracy measurements usually only look at accidental errors:

The intentional spoofing or manipulation of biometrics invalidates the “zero effort imposter” assumption commonly used in performance evaluations. When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different. [p1.4]

A few years ago, the Future of Identity in the Information Society Consortium ("FIDIS", a research network funded by the European Community’s Sixth Framework Program) wrote a major report on forensics and identity systems. FIDIS looked at the spoofability of many biometric modalities in great detail (pp 28-69). These experts concluded:

Concluding, it is evident that the current state of the art of biometric devices leaves much to be desired. A major deficit in the security that the devices offer is the absence of effective liveness detection. At this time, the devices tested require human supervision to be sure that no fake biometric is used to pass the system. This, however, negates some of the benefits these technologies potentially offer, such as high-throughput automated access control and remote authentication. [p69]

Biometrics in public policy

To me, this is an appalling and astounding state of affairs. The prevailing public understanding of how these technologies work is utopian, based probably on nothing more than science fiction movies and the myth of biometric uniqueness. In stark contrast, scientists warn there is no telling how biometrics will work in the field, and the FBI warns that bench testing doesn't predict resistance to attack. It's very much like the manufacturer of a safe confessing to a bank manager that they don't know how it will stand up in an actual burglary.

This situation has bedeviled enterprise and financial services security for years. Without anyone admitting it, it's possible that the slow uptake of biometrics in retail and banking (save for Japan and its odd hand vein ATMs) is a result of hard-headed security officers backing off when they look deep into the tech. But biometrics is going gangbusters in the developing world, with vendors thrilling to this much bigger and faster-moving market.

The stakes are so very high in national ID systems, especially in the developing world, where resistance to their introduction is relatively low, for various reasons. I'm afraid there is great potential for technological imperialism, given the historical opacity of this industry and its reluctance to engage with the issues.

If vendors are to show they are not taking unfair advantage of the developing world ID market, they need to answer some questions:


  • Firstly, how do they respond to Jim Wayman, the FIDIS Consortium and the FBI? Is it possible to predict how fingerprint readers, face recognition and iris scanners are going to operate, over years and years, in remote and rural areas?

  • In particular, how good is liveness detection? Can these solutions be trusted in unattended operation for such critical missions as e-voting?

  • What contingency plans are in place for biometric ID theft? Can the biometric be cancelled and reissued if compromised? Wouldn't it be catastrophic for the newly empowered identity holder to find themselves cut out of the system if their biometric can no longer be trusted?

Posted in Security, Identity, Culture, Biometrics

The fundamental privacy challenges in biometrics

The EPIC privacy tweet chat of October 16 included "the Privacy Perils of Biometric Security". Consumers and privacy advocates are often wary of this technology, sometimes fearing a hidden agenda. To be fair, function creep and unauthorised sharing of biometric data are issues that are anticipated by standard data protection regulations and can be well managed by judicious design in line with privacy law.

However, there is a host of deeper privacy problems in biometrics that are not often aired.


  • The privacy policies of social media companies rarely devote reasonable attention to biometric technologies like facial recognition or natural language processing. Only recently has Facebook openly described its photo tagging templates. Apple, on the other hand, continues to be completely silent about Siri in its Privacy Policy, despite the fact that when Siri takes dictated emails and text messages, Apple is collecting and retaining, without limit, personal telecommunications that are strictly out of bounds even to the carriers! Some speculate that biometric voice recognition is a natural next step for Siri, but it's not a step that can be taken without giving notice today that personally identifiable voice data may in future be used for that purpose.
  • Personal Information (in Australia) is defined in the law as "information or an opinion ... whether true or not about an individual whose identity is apparent ..." [emphasis added]. This definition is interesting in the context of biometrics. Because biometrics are fuzzy, we can regard a biometric identification as a sort of opinion. Technically, a biometric match is declared when the probability of a scanned trait corresponding to an enrolled template exceeds some preset threshold, like 95%. When a false match results, mistaking say "Alice" for "Bob", it seems to me that the biometric system has created Personal Information about both Alice and Bob. There will be raw data, templates, audit files and metadata in the system pertaining to both individuals, some of it right and some of it wrong, but all of which needs to be accounted for under data protection and information privacy law.
  • In privacy, proportionality is important. The foremost privacy principle is Collection Limitation: organisations must not collect more personal information than they reasonably need to carry out their business. Biometric security is increasingly appearing in mundane applications with almost trivial security requirements, such as school canteens. Under privacy law, biometrics implementations in these sorts of environments may be hard to justify.
  • Even in national security deployments, biometrics lead to over-collection, exceeding what may be reasonable. Very little attention is given in policy debates to exception management, such as the cases of people who cannot enroll. The inevitable failure of some individuals to enroll in a biometric can have obvious causes (like missing digits or corneal disease) and not so obvious ones. The only way to improve false positive and false negative performance for a biometric at the same time is to tighten the mathematical modelling underpinning the algorithm (see also "Failure to enroll" at http://lockstep.com.au/blog/2012/05/06/biometrics-must-be-fallible). This can constrain the acceptable range of the trait being measured, leading to outliers being rejected altogether; a toy model of this trade-off appears after this list. So for example, accurate fingerprint scanners need to capture a sharp image, making enrollment sometimes difficult for the elderly or manual workers. It's not uncommon for a biometric modality to have a Fail-to-Enroll rate of 1%. Now, what is to be done with those unfortunates who cannot use the biometric? In the case of border control, additional identifying information must be collected. Biometric security sets what the public are told is a 'gold standard' for national security, so there is a risk that individuals who through no fault of their own are 'incompatible' with the technology will form a de facto underclass. Imagine the additional opprobrium that would go with being in a particular ethnic or religious minority group and having the bad luck to fail biometric enrollment. The extra interview questions that go with sorting out these outliers at border control points are a collection necessitated not by any business need but rather by the pitfalls of the technology.
  • And finally, there is something of a cultural gap between privacy and technology that causes blind spots amongst biometrics developers. Too many times, biometrics advocates misapprehend what information privacy is all about. It's been said more than once that "faces are not private" and that there is "no expectation of privacy" with regards to one's face in public. Even if they were true, these judgement calls are moot, for information privacy laws are concerned with any data about identifiable individuals. So when facial recognition technology takes anonymous imagery from CCTV or photo albums and attaches names to it, Personal Information is being collected, and the law applies. It is this type of crucial technicality that Facebook has smacked into headlong in Germany.
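
Here is the toy model of the Failure-to-Enroll trade-off flagged in the list above. The numbers are entirely invented; the shape of the trade-off is the point: raising the sample-quality bar at enrolment improves matching accuracy for those admitted, while rejecting more would-be users outright.

    # Toy model (invented numbers) of the Failure-to-Enroll trade-off:
    # a higher sample-quality bar at enrolment improves matching accuracy
    # for those admitted, but rejects more would-be users outright.
    import random
    random.seed(2)

    population = [random.gauss(0.7, 0.15) for _ in range(100000)]  # sample quality

    def enrolment_outcomes(quality_bar):
        enrolled = [q for q in population if q >= quality_bar]
        fte = 1 - len(enrolled) / len(population)        # Fail-to-Enroll rate
        avg_q = sum(enrolled) / len(enrolled)
        match_error = max(0.0, 0.05 * (1 - avg_q))       # pretend error model
        return fte, match_error

    for bar in (0.3, 0.5, 0.7):
        fte, err = enrolment_outcomes(bar)
        print(f"quality bar {bar:.1f}: FTE {fte:6.2%}, residual match error {err:.4f}")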

Posted in Privacy, Biometrics

A response to M2SYS on reverse engineering

M2SYS posted on their blog a critique of the recent reverse engineering of iris templates. In my view, they misunderstand or misrepresent the significance of this sort of research. Their arguments merit rebuttal but the M2SYS blog is not accepting comments, and they seem reluctant to engage on these important issues on Twitter.

Here below is what I tried to post in response.

See also my post about the double standard in how biometrics proponents treat adverse research in comparison with serious cryptographers.

"You're right that reporting of the Black Hat results should not overstate the problem. By the same token, advocates for biometrics should be careful with their balance too. For example, is it fair to say as you do that biometrics are 'nearly impossible' to reverse engineer? And should Securlinx's Barry Hodge play down the reverse engineering as only 'intellectually interesting'?

"The point is not that iris scanning will suddenly be defeated left and right -- you're right the practical risk of spoofing is not widespread nor immediate. But this work and the publicity it attracts serves a useful purpose if it fosters more critical thinking. Most lay people out there get their understanding of biometrics from science fiction movies. Without needing to turn people into engineers, they ought to have a better handle on the technology and realities such as the false positive (security) / false negative (usability) tradeoff, and spoofing.

"My observation is that biometrics advocates have transitioned from more or less denying the possibility of reverse engineering, to now maintaining that it really doesn't matter. But until the industry comes up with a revokable biometric, I think it is only prudent to treat seriously even remote prospects of spoofing."

Posted in Biometrics