The inventor of forensic DNA testing, Dr Alec Jeffreys, has cautioned that once millions of DNA samples are collected in population databases, false matches rise significantly.
DNA testing is not an infallible proof of identity. While Jeffreys' original technique compared scores of markers to create an individual "fingerprint", modern commercial DNA profiling compares a small number of genetic markers (often 5 or 10) to calculate a likelihood that the sample belongs to a given individual.
Jeffreys estimates the probability of two individuals' DNA profiles matching in the most commonly used tests at between one in a billion and one in a trillion, "which sounds very good indeed until you start thinking about large DNA databases." In a database of 2.5 million people, a one-in-a-billion probability becomes a one-in-400 chance of at least one match.[Ref: Jill Lawless, "DNA Fingerprint Privacy Concerns", CBS News.]
Dr Jeffreys is alluding to the Birthday Paradox, whereby the chance of some pair of people matching on a random trait rises dramatically and counter-intuitively as a group grows. At a gathering of just 25 people, the chances are better than 50:50 that two of them will share a birthday. The implication for forensic databases is that somewhere in the set there will very likely be pairs of different people whose biometric data fall within the tolerance of the matching algorithm. In other words, the matching software will confuse them. The designers of driver licence and immigration databases need to put protocols in place that double-check automatic matches, so as to avoid impugning innocent people. By and large, the protocols I have seen in practice work well, but these practicalities are glossed over by biometrics vendors who continue to over-hype their technologies.
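The arithmetic behind both figures is easy to check. Here is a minimal Python sketch (the function names are mine): the first function computes the classic birthday collision probability, and the second models searching a single profile against every record in a database, which is where the one-in-400 figure comes from.

```python
from math import prod

def birthday_collision(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    return 1.0 - prod((days - i) / days for i in range(n))

def false_match_in_search(db_size: int, p_match: float) -> float:
    """Probability of at least one false match when one profile is
    compared against every record in a database of db_size people."""
    return 1.0 - (1.0 - p_match) ** db_size

print(birthday_collision(25))                  # ~0.57, better than 50:50
print(false_match_in_search(2_500_000, 1e-9))  # ~0.0025, about 1 in 400
```

The all-pairs (birthday) effect inside the database is stronger still, since 2.5 million people form roughly 3 trillion pairs.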
In the context of population databases, we see once again why the adjective "unique" is a wrong and misleading way of describing biometrics. No biometric trait has a zero probability of a false match, so none of them can be described as "unique". And even the highly distinctive traits like DNA can lead to surprisingly frequent false detects in large databases.
So it bears repeating: biometrics don't work as well as science fiction movies suggest.
Posted in Biometrics
With apologies to Friedrich Nietzsche: Bob, the hero of many a crypto folk tale, is dead, and we have killed him.
We now know that in PKI, Alice's Relying Party is almost always a machine and not a human being. The idea that two strangers would use PKI to work out whether or not to trust one another was deeply distracting and led to the complexity that in the main stymied early PKI.
All of which might be academic except the utopian idea persists that identity frameworks can and should underpin stranger-to-stranger e-business. With NSTIC for instance I fear we are sleepwalking into a repeat of Big PKI, when we could be solving a simpler problem: the robust and bilateral presentation of digital identity data in established contexts, without changing the existing relationships that cover almost all serious transactional business.
The following is an extract from a past paper of mine, "Public Key Superstructure" which was presented to the NIST IDTrust Workshop in 2008. There I examine the shortfalls and pitfalls of using signed email as a digital signature archetype.
E-mail not a killer application for PKI
A total lack of real applications would explain why e-mail became by default the most talked about PKI application. Many PKI vendors to this day continue to illustrate their services and train their users with imaginary scenarios where our heroes Alice and Bob breathlessly exchange signed e-mails. Like the passport metaphor, e-mail seems easily understood, but it manifestly has not turned out to be a ‘killer application’, and worse still, has contributed to a host of misunderstandings.
The story usually goes that Alice has received a secure e-mail from stranger Bob and wishes to work out if he is trustworthy. She double clicks on his digital signature and certificate in order to identify his CA. And now the fun begins. If Alice is not immediately trusting of the CA (presumably by reputation) then she is expected to download the CP and CPS, read them, and satisfy herself that the registration processes and security standards are adequate for her needs.
Does this sort of rigmarole have any parallel in the real world? A simple e-mail with no other context is closely equivalent to a letter or fax sent on plain white paper. Under what circumstances should we take seriously a message sent on plain paper from a stranger, even if we could track down their name?
In truth, the vast majority of serious communications occurs not between strangers but in a rich existing context, where the receiver has already been qualified in some way by the sender as likely being the right party to contact. In e-business, routine transactions are not usually conducted by e-mail but instead use special purpose software or dedicated websites with purpose built content. Thus we see most of the digital signature action in cases such as e-prescriptions, customs broking, trade documentation, company returns, patent filing and electronic conveyancing.
Several important simplifying assumptions flow from the fact that most e-business has a rich context, and these should be heeded when planning PKI:
Emphasise straight-through processing
In spite of the common worked example of Alice and Bob exchanging e-mails, the receiver of most routine transactions – such as payment instructions, tax returns, medical records, import/export declarations, or votes – is not a human but instead is a machine. The notion that a person will examine digital certificates and chase down the CA and its practices is simply false in the vast majority of cases. One of PKI’s great strengths is the way it aids straight-through processing, so it has been a great pity that vendors, through their training and marketing materials, have stressed manual over automatic processing.
Play down Relying Party Agreements
The sender and receiver of digitally signed transactions are hardly ever unrelated. This is in stark contrast to orthodox legal analyses of PKI which foundered on the supposed lack of contractual privity between Relying Party and CA. For example the Australian Government’s extensive investigation into legal liability in digital certificates after 76 pages still could not reach a firm conclusion about whether a “CA may owe a duty of care to a [Relying Party] who is not known to the CA” [http://www.egov.vic.gov.au/pdfs/publication_utz1508.pdf]. The fact is, this sort of scenario is entirely academic and should never have been given the level of attention that it was. The idea of a “Relying Party Agreement” to join in contract the RP and the CA is moot in all “closed” e-business settings where PKI is thriving. It is this lesson that needs to be generalised by PKI regulators, not the hypothetical model of “open” PKI where all parties are strangers.
Play down certificate path discovery
The fact that in real life, parties are transacting in the context of some explicit scheme, means that the receiver’s software can predict the type of certificate that will most often be used by senders. For instance, when doctors are using e-prescribing software, there is not going to be a wide choice of certificate options; indeed, the appropriate scheme root keys and certificates for authenticating a whole class of doctors will likely be installed at both the sending and receiving ends, at the same time that the software is. When a doctor writes a prescription, their private key can be programmatically selected by their client and invoked to create a digital signature, according to business rules enshrined in the software design. And when such a transaction is received, the software of the pharmacist (or insurance company, government agency etc.) will similarly ‘know’ by design which classes of certificates are expected to verify the digital signature. All this logic in most transaction systems can be settled at design time, which can greatly simplify the task of certificate path discovery, or eliminate it altogether. In most systems it is straightforward for the sender’s software to attach the whole certificate chain to the digital signature, safe in the knowledge that the receiver’s software will be configured with the necessary trust anchors (i.e. Root CA certificates) with which to parse the chain.
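To make the design-time simplification concrete, here is a deliberately toy Python sketch (names like "Health Scheme Root CA" are invented for illustration). Because the scheme's trust anchors are installed with the software, "path discovery" collapses to walking the issuer links of the chain the sender attached and checking it terminates at a configured anchor. Real validation would of course also verify each certificate's cryptographic signature, validity period, and revocation status; none of that is modelled here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    """Toy stand-in for an X.509 certificate: names only, no crypto."""
    subject: str
    issuer: str

# Installed with the e-prescribing software at design time (assumption:
# a single scheme root covers the whole class of doctors).
TRUST_ANCHORS = {"Health Scheme Root CA"}

def chain_terminates_at_anchor(chain: list) -> bool:
    """Walk the issuer links of a sender-supplied chain (leaf first) and
    check it ends at a configured trust anchor. Structural check only."""
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            return False  # broken issuer link
    return chain[-1].issuer in TRUST_ANCHORS

chain = [
    Cert("Dr Alice", "Medical Board CA"),
    Cert("Medical Board CA", "Health Scheme Root CA"),
    Cert("Health Scheme Root CA", "Health Scheme Root CA"),  # self-signed root
]
print(chain_terminates_at_anchor(chain))  # True
```

The point of the sketch is that no searching or user interaction is needed: the receiver's acceptable roots are fixed when the system is built.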
How do we make best sense of the bewildering array of authenticators on the market? Most people are familiar with single factor versus two factor, but this simple dichotomy doesn’t help match technologies to applications. The reality is more complex. A family tree like the one sketched here may help navigate the complexity.
Different distinctions define the various branch points. The first split is between what I call Transient authentication (i.e. access control), which tells whether a user is allowed to get at a resource, and Persistent authentication, which lets a user leave a lasting mark (i.e. a signature) on what they do, such as binding electronic transactions.
Working our way up the Transient branch, we see that most access controls are based either on shared secrets or biometrics. Dynamic shared secrets change with every session, either in a series of one time passwords or via challenge-response.
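One-time passwords of this kind are typically derived from a shared secret and a moving factor; the HOTP algorithm of RFC 4226 is the standard example, sketched minimally below using only the Python standard library. Each new counter value yields a fresh code, so a captured code is useless for the next session.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA-1 over the counter, dynamically truncated
    to a short decimal code that changes with every counter increment."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector, secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Challenge-response schemes work similarly, except the moving factor is a random challenge from the verifier rather than a counter.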
On the biometric branch, we should distinguish those traits that can be left behind inadvertently in the environment and are more readily stolen. The safer biometrics are “clean” and leave no residue. Note that while the voice might be recorded without the speaker’s knowledge, I don't see it as a residual biometric in practice because voice recognition solutions usually use dynamic phrases that resist replay.
For persistent authentication, the only practical option today is PKI and digital signatures, technology which is available in an increasingly wide range of forms. Embedded certificates are commonplace in smartcards, cell phones, and other devices.
The foliage in the family tree indicates which technologies I believe will continue to thrive, and which seem more likely to be dead-ends.
I'd appreciate feedback. Is this useful? Does anyone know of other taxonomies?
Not before time, merchants are pushing back on the PCI-DSS regime, with a new lawsuit brought by a restaurant against the card companies. Infosec commentators like Ben Wright ask why all the onus should be on merchants when the payments industry could invest in better security technology.
Credit card numbers are a bit like nitroglycerin: handle them with great care or they'll blow up. The slightest slip-up, the smallest weakness in database security in the face of sophisticated Advanced Persistent Threats, and tens of millions of card numbers are lost to criminals. PCI-DSS compliance is fiercely expensive, but all it does is protect against accidents; it is powerless to stop determined attackers or corrupt insiders.
Is it fair to hold merchants responsible for the highly technical handling procedures of the PCI-DSS regime, when instead the card companies could stabilise their highly volatile card data?
The PCI regime is like an elaborate set of handling instructions for nitroglycerin. The approach is unsustainable. What we need is the equivalent of Alfred Nobel's invention of dynamite: something that stabilises the explosive so it's safe to handle.
The fundamental problem with payment card safety (as is the case with most digital identity security) is that numbers are replayable. It's child's play to take account data and replay it against unsuspecting merchants, either via cloned mag stripe cards or even easier, in online Card Not Present fraud.[See also updated CNP fraud trends for FY2011.]
Yet with chip technologies now widespread, and digital signature primitives ubiquitous in computing and Internet platforms, it is technologically straightforward to eliminate replay attacks. We could make cardholder details non-replayable online by digitally signing them, just as EMV Cards do when interacting with merchant terminals. Not only could we dramatically reduce the cost of stolen card details, we'd pull the rug out from under organised crime, and we'd boost privacy by cutting the vicious cycle of gathering more and more ancillary personal data for proving customer identity.
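The anti-replay idea can be sketched in a few lines of Python. This is a toy, not EMV: an HMAC under a card-resident key stands in for the chip's asymmetric digital signature (in a real scheme the card signs with a private key the merchant never sees), and the nonce plays the role of the transaction-unique data that makes each cryptogram single-use. All names here are invented for illustration.

```python
import hashlib
import hmac
import secrets

# Assumption for the sketch: a key provisioned inside the card's chip.
CARD_KEY = secrets.token_bytes(32)

def sign_payment(card_key: bytes, pan: str, amount: str, nonce: str) -> str:
    """Bind card number, amount and a one-time nonce into a single tag."""
    msg = f"{pan}|{amount}|{nonce}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()

seen_nonces = set()  # issuer-side record of nonces already honoured

def accept_payment(card_key, pan, amount, nonce, tag) -> bool:
    expected = sign_payment(card_key, pan, amount, nonce)
    if not hmac.compare_digest(expected, tag):
        return False          # forged or altered transaction details
    if nonce in seen_nonces:
        return False          # replayed transaction: reject
    seen_nonces.add(nonce)
    return True

nonce = secrets.token_hex(8)
tag = sign_payment(CARD_KEY, "4111111111111111", "42.00", nonce)
print(accept_payment(CARD_KEY, "4111111111111111", "42.00", nonce, tag))  # True
print(accept_payment(CARD_KEY, "4111111111111111", "42.00", nonce, tag))  # False
```

A stolen database of such tags is worthless to a criminal: each one was valid for exactly one transaction, and altering the amount or card number invalidates the tag.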
See also Lockstep Technologies' R&D in CNP fraud and digital wallets.