The reverse engineering of biometric iris templates reported at Blackhat this month has attracted deserved attention. Iris now joins face and fingerprint as modalities that have been reverse engineered; that is, it has proved possible to synthesise an image that, when processed by the algorithm in question, produces a match against a target template.
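To make the idea concrete, here is a minimal sketch of one generic way such an attack can proceed: hill climbing against a black-box matcher that leaks a match score. The bit-vector "template", the Hamming-style score and the 0.95 threshold are all invented stand-ins for illustration; real reconstruction attacks are more sophisticated, but the principle of iteratively refining a candidate until the matcher accepts it is the same.

```python
# Toy hill-climbing attack on a black-box biometric matcher.
# The "template" is just a bit vector and the match score is plain
# Hamming similarity -- stand-ins for a real iris code and matcher.
import random

random.seed(1)
N = 256                   # bits in our toy template
THRESHOLD = 0.95          # similarity the matcher demands

target = [random.randint(0, 1) for _ in range(N)]   # the stolen template

def match_score(candidate):
    """Black-box matcher: fraction of bits agreeing with the target."""
    return sum(c == t for c, t in zip(candidate, target)) / N

candidate = [random.randint(0, 1) for _ in range(N)]
trials = 0
while match_score(candidate) < THRESHOLD:
    flipped = candidate.copy()
    flipped[random.randrange(N)] ^= 1   # try a single-bit change
    trials += 1
    # Keep any change that does not lower the reported score.
    if match_score(flipped) >= match_score(candidate):
        candidate = flipped

print(f"Forged a matching input after {trials} matcher queries")
```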
The biometrics industry reacts to these sorts of results in a way unbefitting serious security practitioners.
Take for instance Securlinx CEO Barry Hodge’s comment on the iris attack: “All of these articles obsessing over how to spoof a biometric are intellectually interesting but in the practical application irrelevant”.
But nobody should belittle the significance of these sorts of results – especially when no practical biometric can be revoked and reissued after compromise.
Mr Hodge, security is an intellectually challenging field. Let’s compare the biometrics industry’s complacency with the way serious security professionals responded to the problems discovered in the SHA-1 hash algorithm.
Ideal hash algorithms are supposed to produce digest values that are indistinguishable from random: any change to the input data, however small, should change the digest in a completely unpredictable way. Any ability to predict how a digest varies could conceivably enable a number of attacks, including tampering with digitally signed data without disturbing the signature. The only way to attack an ideal hash algorithm is by brute force. If an attacker wishes to synthesise a piece of data that produces a given target hash value (a "preimage") they have to work their way through the possible inputs one by one; for a 160 bit hash value this task takes on the order of 2 to the power of 159 trials, which would be beyond the power of all the world's computers running for millions of years. Finding a "collision" – that is, any two inputs that happen to produce the same digest – is easier thanks to the birthday paradox, but for an ideal 160 bit hash still takes around 2 to the power of 80 trials.
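For a feel of what these numbers mean, here is a small demonstration using the SHA-1 implementation in Python's standard library: first the avalanche property, then a brute-force preimage search against a digest truncated to 20 bits purely so that it finishes in seconds. Every extra bit doubles the expected work, which is why the full 160 bit search is hopeless.

```python
# Sketch of the two ideal-hash properties described above, using the
# SHA-1 implementation in Python's standard library.
import hashlib
import itertools

# 1. Avalanche: a one-character change produces an unrelated digest.
print(hashlib.sha1(b"transfer $100").hexdigest())
print(hashlib.sha1(b"transfer $900").hexdigest())

# 2. Brute-force preimage search on a truncated digest. On average it
# takes about 2**(bits - 1) trials; at the full 160 bits that is the
# 2-to-the-159 figure quoted above.
def find_preimage(target, bits):
    for n in itertools.count():
        digest = hashlib.sha1(str(n).encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == target:
            return str(n), n + 1

bits = 20
target = int.from_bytes(hashlib.sha1(b"secret").digest(), "big") >> (160 - bits)
preimage, trials = find_preimage(target, bits)
print(f"{bits}-bit preimage '{preimage}' found after {trials:,} trials")
```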
In 2005, Chinese academic cryptologists discovered a weakness in the SHA-1 algorithm that, under some circumstances, reduces the number of trials needed to discover collisions – from the theoretical 2 to the power of 80 down to around 2 to the power of 69. That still left collisions far out of practical reach, and the researchers did not demonstrate any actual attack. Nobody seriously feared that this work would produce a practical exploit, and in the following eight years there still has been no report of an attack on SHA-1.
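To see why collision search is so much cheaper than the preimage search sketched above, here is a toy birthday search, again on a truncated SHA-1 digest. Finding a collision on n bits takes roughly 2**(n/2) trials, not 2**(n-1), which is where the 2-to-the-80 figure for a 160 bit digest comes from.

```python
# Toy birthday-paradox collision search on a truncated SHA-1 digest.
# Inverting a 32-bit digest takes ~2**31 trials on average, but two
# colliding inputs turn up after only ~2**16 trials.
import hashlib
import itertools

def find_collision(bits):
    seen = {}   # truncated digest -> message that produced it
    for n in itertools.count():
        msg = str(n)
        value = int.from_bytes(
            hashlib.sha1(msg.encode()).digest(), "big") >> (160 - bits)
        if value in seen:
            return seen[value], msg, n + 1
        seen[value] = msg

a, b, trials = find_collision(32)
print(f"'{a}' and '{b}' collide on 32 bits after {trials:,} trials")
```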
However, cryptographers, security strategists and policy makers worldwide were shaken by the SHA-1 research. They were deeply worried, intellectually, that a digest algorithm could have a structural weakness that compromises its randomness. It meant the cryptographic community did not understand SHA-1 as well as they thought. And the policy response was swift. The US government moved to phase out SHA-1 in favour of the stronger SHA-2 family, which is now being promulgated globally, and sponsored an open competition for a brand new digest algorithm, which has yielded SHA-3.
This is good security practice at work. Academics continuously stress existing techniques and uncover weaknesses. Every verified weakness is taken seriously, and where critical security infrastructure is involved – even when no practical attack has yet been seen – security solutions are reviewed and upgraded to stay ahead of adversaries.
In stark contrast, biometrics advocates seem to fall back on a variation of the Bart Simpson Defence, namely, “I didn’t do it; nobody saw me do it; you can’t prove a thing”.
Over the past few years I've watched biometrics proponents shift their ground. First they claimed that templates cannot be reverse engineered at all. Then they qualified their position, conceding only that certain types of biometrics are "practically impossible" to reverse. And now Mr Hodge is saying it doesn't really matter if they are reversed.
There is no disaster recovery plan for biometrics: a compromised trait cannot be cancelled and reissued, so of course advocates cling to the idea that compromise is impossible. And with that attitude they further distinguish themselves in infosec, for no one else acts as though their technology is perfect.