Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

PKI as nature intended

Few technologies are so fundamental and yet so derided at the same time as public key infrastructure. PKI is widely thought of as obsolete or generically intrusive, yet it is ubiquitous in SIM cards, SSL, chip and PIN cards, and cable TV. Technically, public key infrastructure is a generic term for a management system for keys and certificates; there have always been endless ways to build PKIs (note the plural) for different communities, technologies, industries and outcomes. And yet “PKI” has all too often come to mean just one way of doing identity management. In fact, PKI doesn’t necessarily have anything to do with identity at all.

This blog is an edited version of a feature I once wrote for SC Magazine. It is timely in the present day to re-visit the principles that make for good PKI implementations and contextualise them in one of the most contemporary instances of PKI: the FIDO Alliance protocols for secure attribute management. In my view, FIDO realises PKI ‘as nature intended’.

“Re-thinking PKI”

In their earliest conceptions in the early-to-mid 1990s, digital certificates were proposed to authenticate nondescript transactions between parties who had never met. Certificates were construed as the sole means for people to authenticate one another. Most traditional PKI was formulated with no other context; the digital certificate was envisaged to be your all-purpose digital identity.

Orthodox PKI has come in for spirited criticism. From the early noughties, many commentators pointed to a stark paradox: online transaction volumes and values were increasing rapidly, in almost all cases without the help of overt PKI. Once thought to be essential, with its promise of "non-repudiation", PKI seemed anything but, even for significant financial transactions.

There were many practical problems in “big” centralised PKI models. The traditional proof of identity for general purpose certificates was intrusive; the legal agreements were complex and novel; and private key management was difficult for lay people. So the one-size-fits-all electronic passport failed to take off. But PKI's critics sometimes throw the baby out with the bathwater.
In the absence of any specific context for its application, “big” PKI emphasized proof of personal identity. Early certificate registration schemes co-opted identification benchmarks like that of the passport. Yet hardly any regular business transactions require parties to personally identify one another to passport standards.

“Electronic business cards”

Instead in business we deal with others routinely on the basis of their affiliations, agency relationships, professional credentials and so on. The requirement for orthodox PKI users to submit to strenuous personal identity checks over and above their established business credentials was a major obstacle in the adoption of digital certificates.

It turns out that the 'killer applications' for PKI overwhelmingly involve transactions with narrow contexts, predicated on specific credentials. The parties might not know each other personally, but invariably they recognize and anticipate each other's qualifications, as befitting their business relationship.

Successful PKI came to be characterized by closed communities of interest, prior out-of-band registration of members, and in many cases, special-purpose application software featuring additional layers of context, security and access controls.

So digital certificates are much more useful when implemented as application-specific 'electronic business cards,' than as one-size-fits-all electronic passports. And, by taking account of the special conditions that apply to different e-business processes, we have the opportunity to greatly simplify the registration processes, user experience and liability arrangements that go with PKI.

The real benefits of digital signatures

There is a range of potential advantages in using PKI, including its cryptographic strength and resistance to identity theft (when implemented with private keys in hardware). Many of its benefits are shared with other technologies, but at least two are unique to PKI.

First, digital signatures provide robust evidence of the origin and integrity of electronic transactions, persistent over time and over 'distance’ (that is, the separation of sender and receiver). This greatly simplifies audit logging, evidence collection and dispute resolution, and cuts the future cost of investigation and fraud. If a digitally signed document is archived and checked at a later date, the quality of the signature remains undiminished over many years, even if the public key certificate has long since expired. And if a digitally signed message is passed from one relying party to another and on to many more, passing through all manner of intermediate systems, everyone still receives an identical, verifiable signature code authenticating the original message.

Electronic evidence of the origin and integrity of a message can, of course, be provided by means other than a digital signature. For example, the authenticity of typical e-business transactions can usually be demonstrated after the fact via audit logs, which indicate how a given message was created and how it moved from one machine to another. However, the quality of audit logs is highly variable and it is costly to produce legally robust evidence from them. Audit logs are not always properly archived from every machine, they do not always directly evince data integrity, and they are not always readily available months or years after the event. They are rarely secure in themselves, and they usually need specialists to interpret and verify them. Digital signatures on the other hand make it vastly simpler to rewind transactions when required.
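
To make the persistence point concrete, here is a sketch using a toy signature scheme (schoolbook RSA over tiny primes, purely illustrative and in no way secure): verification takes only the message, the signature code and the signer's public key, so every recipient down the chain, at any later date, performs the identical check.

```python
import hashlib

# Toy schoolbook RSA over tiny primes -- purely illustrative, NOT secure.
def make_keypair(p=61, q=53, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
    return (n, e), (n, d)               # (public key, private key)

def sign(priv, msg):
    n, d = priv
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, msg, sig):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

pub, priv = make_keypair()
msg = b"Pay AUD 100 to account 1234"
sig = sign(priv, msg)

# Verification is a pure function of (message, signature, public key): it
# gives the same answer after archiving, after forwarding through any number
# of intermediaries, and after the passage of years.
print(verify(pub, msg, sig))                            # True
print(verify(pub, b"Pay AUD 999 to account 1234", sig)) # tampering breaks it
```

The expired-certificate point above sits on top of this: so long as the public key and the time of signing are on record, the check itself never degrades.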

Secondly, digital signatures and certificates are machine readable, allowing the credentials or affiliations of the sender to be bound to the message and verified automatically on receipt, enabling totally paperless transacting. This is an important but often overlooked benefit of digital signatures. When processing a digital certificate chain, relying party software can automatically tell that:

    • the message has not been altered since it was originally created
    • the sender was authorized to launch the transaction, by virtue of credentials or other properties endorsed by a recognized Certificate Authority
    • the sender's credentials were valid at the time they sent the message; and
    • the authority which signed the certificate was fit to do so.
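
Those four checks are exactly what relying party software can run with no human in the loop. The sketch below illustrates the idea with a toy signature scheme (schoolbook RSA over tiny primes, not secure) and a plain dictionary standing in for an X.509 certificate; all names and values here are invented for illustration:

```python
import hashlib, json

# Toy schoolbook RSA over tiny primes -- purely illustrative, NOT secure.
def make_keypair(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)               # (public key, private key)

def sign(priv, msg):
    n, d = priv
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, msg, sig):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

def cert_body(cert):
    # Canonical bytes of everything in the certificate except its signature.
    return json.dumps({k: v for k, v in cert.items() if k != "sig"},
                      sort_keys=True).encode()

ca_pub, ca_priv = make_keypair(61, 53)       # recognised Certificate Authority
user_pub, user_priv = make_keypair(67, 71)   # the sender's key pair

# The CA endorses the sender's credential, public key and validity period.
cert = {"subject": "Dr Smith", "credential": "registered pharmacist",
        "pubkey": user_pub, "not_before": 2003, "not_after": 2005}
cert["sig"] = sign(ca_priv, cert_body(cert))

def relying_party_checks(message, msg_sig, cert, ca_pub, sent_year, needed):
    return (verify(tuple(cert["pubkey"]), message, msg_sig)   # message unaltered
            and verify(ca_pub, cert_body(cert), cert["sig"])  # CA really endorsed it
            and cert["not_before"] <= sent_year <= cert["not_after"]  # valid when sent
            and cert["credential"] == needed)   # authorised for this transaction type

msg = b"Prescription 42: dispense"
ok = relying_party_checks(msg, sign(user_priv, msg), cert, ca_pub,
                          sent_year=2004, needed="registered pharmacist")
print(ok)  # True
```

In a real deployment the certificate would be an X.509 chain and the checks would also cover revocation status, but the shape of the automated decision is the same.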

One reason we can forget about the importance of machine readability is that email is the classic example used to illustrate PKI in action, so we have come to expect person-to-person email to be the archetypal PKI application. There is an implicit suggestion in most PKI marketing and training that, in regular use, we should manually click on a digital signature icon, examine the certificate, check which CA issued it, read the policy qualifier, and so on. Yet the overwhelming experience of PKI in practice is that it suits special purpose and highly automated applications, where the usual receiver of signed transactions is in fact a computer.

Characterising good applications

Reviewing the basic benefits of digital signatures allows us to characterize the types of e-business applications that merit investment in PKI.

Applications for which digital signatures are a good fit tend to have reasonably high transaction volumes, fully automatic or straight-through processing, and multiple recipients or multiple intermediaries between sender and receiver. In addition, there may be significant risk of dispute or legal ramifications, necessitating high quality evidence to be retained over long periods of time. These include:

    • Tax returns
    • Customs reporting
    • E-health care
    • Financial trading
    • Insurance
    • Electronic conveyancing
    • Superannuation administration
    • Patent applications.

This view of the technology helps to explain why many first-generation applications of PKI were problematic. Retail internet banking is a well-known example of e-business which flourished without the need for digital certificates. A few banks did try to implement certificates, but generally found them difficult to use. Most later reverted to more conventional access control and backend security mechanisms.

Yet with hindsight, retail funds transfer transactions did not have an urgent need for PKI, since they could make use of existing backend payment systems. Funds transfer is characterized by tightly closed arrangements, a single relying party, built-in limits on the size of each transaction, and near real-time settlement. A threat and risk assessment would show that access to internet banking can rest on simple password authentication, in exactly the same way as antecedent phone banking schemes.

Trading complexity for specificity

As discussed, orthodox PKI was formulated with the tacit assumption that there is no specific context for the transaction, so the digital certificate is the sole means for authenticating the sender. Consequently, the traditional schemes emphasized high standards of personal identity, exhaustive contracts and unusual legal devices like Relying Party Agreements. They also often resorted to arbitrary 'reliance limits,' which have little meaning for most of the applications listed above. Notoriously, traditional PKI requires users to read and understand certification practice statements (CPS).

All that overhead stemmed from not knowing what the general-purpose digital certificate was going to be used for. On the other hand, if particular digital certificates are constrained to defined applications, then the complexity surrounding their specific usage can be radically reduced.

The role of PKI in all contemporary 'killer applications' is fundamentally to help automate the online processing of electronic transactions between parties with well-defined credentials. This is in stark contrast to the way PKI has historically been portrayed, where strangers Alice and Bob use their digital certificates to authenticate context-free general messages, often presumed to be sent by email. In reality, serious business messages are never sent stranger-to-stranger with no context or cues as to the parties' legitimacy.

Using generic email is like sending a fax on plain paper. Instead, business messaging is usually highly structured. Parties have an expectation that only certain types of transactions are going to occur between them and they equip themselves accordingly (for instance, a health insurance office is not set up to handle tax returns). The sender is authorized to act in defined types of transactions by virtue of professional credentials, a relevant license, an affiliation with some authority, endorsement by their employer, and so on. And the receiver recognizes the source of those credentials. The sender and receiver typically use prescribed forms and/or special purpose application software with associated user agreements and license conditions, adding context and additional layers of security around the transaction.

PKI got smart

When PKI is used to help automate the online processing of transactions between parties in the context of an existing business relationship, we should expect the legal arrangements between the parties to still apply. For business applications where digital certificates are used to identify users in specific contexts, the question of legal liability should be vastly simpler than it is in the general purpose PKI scenario where the issuer does not know what the certificates might be used for.
The new vision for PKI means the technology and processes should be no more of a burden on the user than a bank card. Rather than imagine that all public key certificates are like general purpose electronic passports, we can deploy multiple, special purpose certificates, and treat them more like electronic business cards. A public key certificate issued on behalf of a community of business users and constrained to that community can thereby stand for any type of professional credential or affiliation.

We can now automate and embed the complex cryptography deeply into smart devices -- smartcards, smart phones, USB keys and so on -- so that all terms and conditions for use are application focused. As far as users are concerned, a smartcard can be deployed in exactly the same way as any magnetic stripe card, without any need to refer to - or be limited by - the complex technology contained within (see also Simpler PKI is on the cards). Any application-specific smartcard can be issued under rules and controls that are fit for their purpose, as determined by the community of users or an appropriate recognized authority. There is no need for any user to read a CPS. Communities can determine their own evidence-of-identity requirements for issuing cards, instead of externally imposed personal identity checks. Deregulating membership rules dramatically cuts the overheads traditionally associated with certificate registration.

Finally, if we constrain the use of certificates to particular applications then we can factor the intended usage into PKI accreditation processes. Accreditation could then allow for particular PKI scheme rules to govern liability. By 'black-boxing' each community's rules and arrangements, and empowering the community to implement processes that are fit for its purpose, the legal aspects of accreditation can be simplified, reducing one of the more significant cost components of the whole PKI exercise (having said that, it never ceases to amaze how many contemporary healthcare PKIs still cling onto face-to-face passport grade ID proofing as if that's the only way to do digital certificates).

Fast forward

The preceding piece is a lightly edited version of the article “Re-thinking PKI” that first appeared in Secure Computing Magazine in 2003. Now, over a decade later, we’re seeing the same principles realised by the FIDO Alliance.

The FIDO protocols U2F and UAF enable specific attributes of a user and their smart devices to be transmitted to a server. Inherent to the FIDO methods are digital certificates that confer attributes and not identity, relatively large numbers of private keys stored locally in the users’ devices (and without the users needing to be aware of them as such) and digital signatures automatically applied to protocol messages to bind the relevant attributes to the authentication exchanges.
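
In outline, the exchange looks like the following sketch. To be clear, this is a simplification for illustration only, not the actual U2F or UAF wire protocol, and a toy signature scheme (schoolbook RSA over tiny primes) stands in for the real cryptography; the field names are invented.

```python
import hashlib, json, os

# Toy schoolbook RSA over tiny primes -- purely illustrative, NOT secure.
def make_keypair(p=61, q=53, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)               # (public key, private key)

def sign(priv, msg):
    n, d = priv
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(pub, msg, sig):
    n, e = pub
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

# Registration: the key pair is generated inside the user's device; only the
# public key and the attested attributes ever reach the server.
device_pub, device_priv = make_keypair()
server_registry = {"user42": {"pubkey": device_pub,
                              "attributes": {"authenticator": "fingerprint"}}}

# Authentication: the server issues a fresh challenge; the device signs it
# together with the attributes, binding them to this exchange. No password or
# identity claim is transmitted, and the private key never leaves the device.
challenge = os.urandom(16)
record = server_registry["user42"]
payload = challenge + json.dumps(record["attributes"], sort_keys=True).encode()
response = sign(device_priv, payload)

print(verify(record["pubkey"], payload, response))  # True
```

Note that nothing in the exchange asserts who the user is; the server learns only that the registered device, with its attested attributes, is present and active.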

Surely, this is how PKI should have been deployed all along.

Posted in Security, PKI, Internet, Identity

Dumbing down Snowden

Ed Snowden was interviewed today as part of the New Yorker festival. This TechCrunch report says Snowden "was asked a couple of variants on the question of what we can do to protect our privacy. His first answer called for a reform of government policies." He went on to add some remarks about Google, Facebook and encryption and that's what the report chose to focus on. The TechCrunch headline: "Snowden's Privacy Tips".

Mainstream and even technology media reportage does Snowden a terrible disservice, and takes the pressure off government policy reform.

I've listened to the New Yorker online interview. After being asked by a listener what they should do about privacy, Snowden gave a careful, nuanced, and comprehensive answer over five minutes. His very first line was that this is an incredibly complex topic, and he did well to stick to plain language throughout. He canvassed a great many issues including: the need for policy reform, the 'Nothing to Hide' argument, the inversion of civil rights when governments ask us to justify the right to be left alone, the collusion of companies and governments, the poor state of product security and usability, the chilling effect on industry of government intervention in security, metadata, and the radicalisation of computer scientists today being comparable with that of physicists in the Cold War.

Only after all that, and a follow up question about 'ordinary people', did Snowden say 'don't use Dropbox'.

Consistently, when Snowden is asked what to do about privacy, his answers are primarily about politics not technology. When pressed, he dispenses the odd advice about using Tor and disk encryption, but Snowden's chief concerns (as I have discussed in depth previously) are around accountability, government transparency, better cryptology research, better security product quality, and so on. He is no hacker.

I am simply dismayed how Snowden's sophisticated analyses are dumbed down to security tips. He has never been a "cyber Agony Aunt". The proper response to NSA overreach has to be agitation for regime change, not do-it-yourself cryptography. That is Snowden's message.

Posted in Social Media, Security, Privacy, Internet

Four Corners' 'Privacy Lost': A demonstration of the Collection Principle

Tonight, Australian Broadcasting Corporation’s Four Corners program aired a terrific special, "Privacy Lost" written and produced by Martin Smith from the US public broadcaster PBS’s Frontline program.

Here we have a compelling demonstration of the importance and primacy of Collection Limitation for protecting our privacy.

UPDATE: The program we saw in Australia turns out to be a condensed version of PBS's two part The United States of Secrets from May 2014.

About the program

Martin Smith summarises brilliantly what we know about the NSA’s secret surveillance programs, thanks to the revelations of Ed Snowden, the Guardian’s Glenn Greenwald and the Washington Post’s Barton Gellman; he holds many additional interviews with Julia Angwin (author of “Dragnet Nation”), Chris Hoofnagle (UC Berkeley), Steven Levy (Wired), Christopher Soghoian (ACLU) and Tim Wu (“The Master Switch”), to name a few. Even if you’re thoroughly familiar with the Snowden story, I highly recommend “Privacy Lost” or the original "United States of Secrets" (which unlike the Four Corners edition can be streamed online).

The program is a ripping re-telling of Snowden’s expose, against the backdrop of George W. Bush’s PATRIOT Act and the mounting suspicions through the noughties of NSA over-reach. There are freshly told accounts of the intrigues, of secret optic fibre splitters installed very early on in AT&T’s facilities, scandals over National Security Letters, and the very rare case of the web hosting company Calyx who challenged their constitutionality (and yet today, with the letter withdrawn, remains unable to tell us what the FBI was seeking). The real theme of Smith’s take on surveillance then emerges, when he looks at the rise of data-driven businesses -- first with search, then advertising, and most recently social networking -- and the “data wars” between Google, Facebook and Microsoft.

In my view, the interplay between government surveillance and digital businesses is the most important part of the Snowden epic, and it receives the proper emphasis here. The depth and breadth of surveillance conducted by the private sector, and the insights revealed about what people might be up to, create irresistible opportunities for the intelligence agencies. Hoofnagle tells us how the FBI loves Facebook. And we see the discovery of how the NSA exploits the tracking that’s done by the ad companies, most notably Google’s “PREF” cookie.

One of the peak moments in “Privacy Lost” comes when Gellman and his specialist colleague Ashkan Soltani present their evidence about the PREF cookie to Google – offering an opportunity for the company to comment before the story is to break in the Washington Post. The article ran on December 13, 2013; we're told that was when the true depth of the privacy problem was revealed.

My point of view

Smith takes as a given that excessive intrusion into private affairs is wrong, without getting into the technical aspects of privacy (such as frameworks for data protection, and various Privacy Principles). Neither does he unpack the actual privacy harms. And that’s fine -- a TV program is not the right place to canvass such technical arguments.

When Gellman and Soltani reveal that the NSA is using Google’s tracking cookie, the government gets joined irrefutably to the private sector in a mass surveillance apparatus. And yet I am not sure the harm is dramatically worse when the government knows what Facebook and Google already know.

Privacy harms are tricky to work out. Yet obviously no harm can come from abusing Personal Information if that information is not collected in the first place! I take away from “Privacy Lost” a clear impression of the risks created by the data wars. We are imperiled by the voracious appetite of digital businesses that hang on indefinitely to masses of data about us, while they figure out ever cleverer ways to make money out of it. This is why Collection Limitation is the first and foremost privacy protection. If a business or government doesn't have a sound and transparent reason for having Personal Information about us, then they should not have it. It’s as simple as that.

Martin Smith has highlighted the symbiosis between government and private sector surveillance. The data wars not only made dozens of billionaires but they did much of the heavy lifting for the NSA. And this situation is about to get radically more fraught. On the brink of the Internet of Things, we need to question if we want to keep drowning in data.

Posted in Social Networking, Social Media, Security, Privacy, Internet

Unintended consequences

The "Right to be Forgotten" debate reminds me once again of the cultural differences between technology and privacy.

On September 30, I was honoured to be part of a panel discussion hosted by the IEEE on RTBF; a recording can be viewed here. In a nutshell, the European Court of Justice has decided that European citizens have the right to ask search engine businesses to suppress links to personal information, under certain circumstances. I've analysed and defended the aims of the ECJ in another blog.

One of the IEEE talking points was why RTBF has attracted so much scorn. My answer was that some critics appear to expect perfection in the law; when they look at the RTBF decision, all they see is problems. Yet nobody thinks this or any law is perfect; the question is whether it helps improve the balance of rights in a complex and fast changing world.

It's a little odd that technologists in particular are so critical of imperfections in the law, when they know how flawed technology is. Indeed, the security profession is almost entirely concerned with patching problems, and reminding us there will never be perfect security.

Of course there will be unwanted side-effects of the new RTBF rules and we should trust that over time these will be reviewed and dealt with. I wish that privacy critics could be more humble about this unfolding environment. I note that when social conservatives complain about online pornography, or when police decry encryption as a tool of criminals, technologists typically play those problems down as the unintended consequences of new technologies, which on average overwhelmingly do good not evil.

And it's the same with the law. It really shouldn't be necessary to remind anyone that laws have unintended consequences, for they are the stuff of the entire genre of courtroom drama. So everyone take heart: the good guys nearly always win in the end.

Posted in Security, Privacy, Culture

Simply Secure is not simply private

Another week, another security collaboration launch!

"Simply Secure" calls itself “a small but growing organization [with] expertise in usability research, design, software development, and product management". Their mission has to do with improving the security functions that are built in so badly to most software today. Simply Secure is backed by Google and Dropbox, and supported by a diverse advisory board.

It's early days (actually early day, singular) so it might be churlish to point out that Simply Secure's strategic messaging is a little uneven ... except that the words being used to describe it shed light on the clarity of the thinking.

My first exposure to Simply Secure came last night, when I read an article in the Guardian by Cory Doctorow (who is one of their advisers). Doctorow places enormous emphasis on privacy; the word “privacy" outnumbers “security" 16 to three in the body of his column. Another admittedly shorter report about the launch by The Next Web doesn't mention privacy at all. And then there's the Simply Secure blog post, which cites privacy a great deal but every single time in conjunction with security, as in “security and privacy". That repeated phrasing conveys, to me at least, some discomfort. As I say, it's early days and the team is doubtless sorting out how to weigh and progress these closely related objectives.

But I hope they do it quickly. On the face of it, Simply Secure might only scratch the surface of privacy.

Doctorow's Guardian article is mostly concerned with encryption and the terrible implementations that have plagued us since the dawn of the Internet. It's definitely important that we improve here – and radically. If the Simply Secure initiative does nothing but make encryption easier to integrate into commodity software, that would be a great thing. I'm all for it. But it won't necessarily or even probably lead to better privacy, because privacy is about restraint not secrecy or anonymity.
As we go about our lives, we actually want to be known by others, but we want those who know us to be restrained in what they do with the knowledge they have about us. Privacy is the protection you need when your affairs are not secret.

I know Doctorow knows this – I've seen his terrific little speech on the steps of Comic-Con about PRISM. So I'm confused by his focus on cryptography.

How far does encryption get us? If we're using social networks, or if we're shopping and opting in to loyalty programs or selected targeted marketing, or if we're sharing our medical records with relatives, medicos, hospitals and researchers, then encryption becomes moot. We need mechanisms to restrain what the receivers of our personal information do with it. We all know the business model at work behind “free" online services; using encryption to protect privacy in social networking for instance would be like using an armoured van to deliver your valuables to Bernie Madoff.

Another limitation of user-centric or user-managed encryption has to do with Big Data. A great deal of personal information about us is created and collected unseen behind our backs, by sensors, and by analytics processes that manage to work out who we are by linking disparate data streams together. How could Simply Secure ameliorate those sorts of problems? If the Simply Secure vision includes encryption at rest as well as in transit, then how will the user control or even see all the secondary uses of their encrypted personal information?

There's a combativeness in Doctorow's explanation of Simply Secure and his tweets from yesterday on the topic. His aim is expressly to thwart the surveillance state, which in his view includes a symbiosis (if not conspiracy) between government and internet companies, where the former gets their dirty work done by the latter. I'm sure he and I both find that abhorrent in equal measure. But I argue the proper response to these egregious behaviours is political not technological (and political in the broad sense; I love that Snowden talks as much about accountability, legal processes, transparency and research as he does about encryption). If you think the government is exploiting the exploiters, then DIY encryption is a pretty narrow counter-measure. This is not the sort of society we want to live in, so let's work to change the establishment, rather than try to take it on in a crypto shoot-out.

Yes security technology is important but it's not nearly as important for privacy as the Rule of Law. Data privacy regimes instil restraint. The majority of businesses come to know that they are not at liberty to over-collect personal information, nor to re-use personal information unexpectedly and without consent. A minority of organisations flout data privacy principles, for example by slyly refining raw data into valuable personal knowledge, exploiting the trust citizens and users put in them. Some of these outfits flourish in the United States – the Canary Islands of privacy. Worldwide, the policing of privacy is patchy indeed, yet there have been spectacular legal victories in Europe and elsewhere against the excessive practices of really big companies like Facebook with their biometric data mining of photo albums, and Google's drift net-like harvesting of traffic from unencrypted Wi-Fi networks.

Pragmatically, I'm afraid encryption is such a fragile privacy measure. Once secrecy is penetrated, we need regulations to stem exploitation of our personal information.

By all means, let's improve cryptographic engineering and I wish the Simply Secure initiative all the best. So long as they don't call security privacy.

Posted in Security, Privacy, Language, Big Data

FIDO Alliance - Update

You can be forgiven if the FIDO Alliance is not on your radar screen. It was launched barely 18 months ago, to help solve the "password crisis" online, but it's already proven to be one of the most influential security bodies yet.

The typical Internet user has dozens of accounts and passwords. Not only are they a pain in the arse, poor password practices are increasingly implicated in fraud and terrible misadventures like the recent "iCloud Hack" which exposed celebrities' personal details.

With so many of our assets, our business and our daily lives happening in cyberspace, we desperately need better ways to prove who we are online – and even more importantly, prove what we are entitled to do there.

The FIDO Alliance is a new consortium of identity management vendors, product companies and service providers working on strong authentication standards. FIDO’s vision is to tap the powers of smart devices – smart phones today and wearables tomorrow – to log users on to online services more securely and more conveniently.

FIDO was founded by Lenovo, PayPal, and security technology companies AGNITiO, Nok Nok Labs and Validity Sensors, and launched in February 2013. Since then the Alliance has grown to over 130 members. Two new authentication standards have been published for peer review, half a dozen companies showcased FIDO-Ready solutions at the 2014 Consumer Electronics Show (CES) in Las Vegas, and PayPal has released its ground-breaking pay-by-fingerprint app for the Samsung Galaxy S5.

The FIDO Alliance includes technology heavyweights like Google, Lenovo, Microsoft and Samsung; payments giants Discover, MasterCard, PayPal and Visa; financial services companies such as Aetna, Bank of America and Goldman Sachs; and e-commerce players like Netflix and Salesforce.com. There are also a couple of dozen biometrics vendors, many leading Identity and Access Management (IDAM) solutions and services, and almost every cell phone SIM and smartcard supplier in the world.

I have been watching FIDO since its inception and reporting on it for Constellation Research. The third update in my series of research reports on FIDO is now available and can be downloaded here. The report looks in depth at what the Alliance has to offer vendors and end user communities, its critical success factors, and how and why this body is poised to shake up authentication like never before.

Posted in Constellation Research, Identity, Security, Smartcards

Safeguarding the pedigree of identifiers

The problem of identity takeover

The root cause of much identity theft and fraud today is the sad fact that customer reference numbers and personal identifiers are so easy to copy. Simple numerical data like bank account numbers and health IDs can be stolen from many different sources, and replayed in bogus transactions.

Our personal data nowadays is leaking more or less constantly, through breached databases, websites, online forms, call centres and so on, to such an extent that customer reference numbers on their own are no longer reliable. Privacy consequently suffers, because customers are required to assert their identity through circumstantial evidence, like name and address, birth date, mother’s maiden name and other pseudo secrets. All this data in turn is liable to be stolen and used against us, leading to spiralling identity fraud.

To restore the reliability of personal identifiers, we need to know their pedigree. We need to know that a presented number is genuine, that it originated from a trusted authority, that it has been stored safely by its owner, and that it is presented with the owner’s consent.

"Notarising" personal data in chip devices

There are ways of issuing personal data to a smart chip device that prevent those data from being stolen, copied and claimed by anyone else. One way to do so is to encapsulate and notarise personal data in a unique digital certificate issued to a chip. Today, a great many personal devices routinely embody cryptographically suitable chips for this purpose, including smart phones, SIM cards, “Secure Elements”, smartcards and many wearable computers.

Consider an individual named Smith to whom Organisation A has issued a unique customer reference number N. If N is saved in ordinary computer memory or something like a magnetic stripe card, then it has no pedigree. Once the number N is presented by the cardholder in a transaction, it looks like any other number. To better safeguard N in a chip device, it can be sealed into a digital certificate, as follows:

1. generate a fresh private-public key pair inside Smith’s chip
2. export the public key
3. create a digital certificate around the public key, with an attribute corresponding to N
4. have the certificate signed by (or on behalf of) organisation A.

[Diagram: the pedigree triangle binding cardholder Smith, reference number N and the personally controlled chip device]

The result of coordinating these processes and technologies is a logical triangle that inextricably binds cardholder Smith to their reference number N and to a specific personally controlled device. The certificate signed by organisation A attests to Smith’s ownership of both N and a particular key unique to the device. Keys generated inside the chip are retained internally, never divulged to outsiders. It is impossible to copy the private key to another device, so the triangle cannot be cloned, reproduced or counterfeited.

Note that this technique lies at the core of the EMV "Chip-and-PIN" system where the smart payment card digitally signs cardholder and transaction data, rendering it immune to replay, before sending it to the merchant terminal. See also my 2012 paper Calling for a uniform approach to card fraud, offline and on. Now we should generalise notarised personal data and digitally signed transactions beyond Card-Present payments into as much online business as possible.

Restoring privacy and consumer control

When Smith wants to present their personal number in an electronic transaction, instead of simply copying N out of memory (at which point it would lose its pedigree), Smith’s transaction software digitally signs the transaction using the certificate containing N. With standard security software, any third party can then verify that the transaction originated from a genuine chip holding the unique key certified by A as matching the number N.
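To make the mechanics concrete, here is a minimal end-to-end sketch of the triangle: certificate issuance (steps 1 to 4 above), transaction signing inside the device, and third-party verification. It uses a toy textbook-RSA scheme with small fixed primes purely for illustration, and the reference number is hypothetical; real deployments use certified chip cryptography, proper signature padding and standard X.509 tooling.

```python
import hashlib
import json

# Toy textbook RSA (small fixed primes, no padding): illustration only,
# never suitable for production use.
def toy_keypair(p, q, e=65537):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))   # (public key, private key)

def digest(msg: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(priv, msg: bytes) -> int:
    n, d = priv
    return pow(digest(msg, n), d, n)

def verify(pub, msg: bytes, sig: int) -> bool:
    n, e = pub
    return pow(sig, e, n) == digest(msg, n)

# Steps 1 & 2: the key pair is generated inside Smith's chip and only the
# public key is ever exported (simulated here in software).
device_pub, device_priv = toy_keypair(1000003, 1000033)

# Organisation A's own signing key.
orga_pub, orga_priv = toy_keypair(1000037, 1000039)

# Steps 3 & 4: seal the reference number N and the device public key into
# a certificate signed by organisation A. N is a hypothetical example.
N = "CUST-0012345"
cert_body = json.dumps({"ref": N, "device_pub": device_pub}).encode()
certificate = {"body": cert_body, "sig": sign(orga_priv, cert_body)}

# Later: Smith's device signs a transaction with the certified key.
txn = b"pay merchant X $100"
txn_sig = sign(device_priv, txn)

# Any third party holding organisation A's public key can check pedigree:
# the certificate is genuine, and the transaction was signed by the very
# key that A certified as matching N.
assert verify(orga_pub, certificate["body"], certificate["sig"])
claimed = json.loads(certificate["body"])
assert verify(tuple(claimed["device_pub"]), txn, txn_sig)
print("verified: transaction bound to reference number", claimed["ref"])
```

Note that the relying party needs only organisation A's public key; the device's private key never leaves the chip, so a valid transaction signature proves the message came from the one certified device.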

Note that N doesn’t have to be a customer number or numeric identifier; it could be any personal data, such as a biometric template or a package of medical information like an allergy alert.

The capability to manage multiple key pairs and certificates, and to sign transactions with a nominated private key, is increasingly built into smart devices today. By narrowing down what a relying party needs to know about someone to a precise customer reference number or similar personal data item, we can reduce identity theft and fraud while radically improving privacy. This sort of privacy enhancing technology is the key to a safe Internet of Things, and is fortunately now widely available.

Posted in Smartcards, Security, PKI, Payments, Identity, Fraud, Biometrics

Engaging engineers in privacy

Updated from original post January 2013.

I have come to believe that a systemic conceptual shortfall affects typical technologists’ thinking about privacy. It may be that engineers tend to take literally the well-meaning slogan that “privacy is not a technology issue”. And I say this in all seriousness. We are forever sugar-coating privacy, urging that “privacy is good for business”. It’s naive: sadly, there are plenty of cases where businesses do very well by ignoring privacy. In the mainstream, many organisations struggle to reconcile privacy with other competing demands, like security, usability, cost and time to market.

I believe the best thing we can do for privacy systemically is to treat it like another one of the many often conflicting requirements faced by designers and engineers, and improve the tools they have for striking the right balance. This is what engineers do.

Online, we’re talking about data privacy, or data protection, but systems designers bring to work a spectrum of personal outlooks about privacy in the human sphere. Yet what matters is the precise wording of data privacy law, like Australia’s Privacy Act. To illustrate the difference, here’s the sort of experience I’ve had time and time again.

During the course of conducting a PIA in 2011, I spent time with the development team working on a new government database. These were good, senior people, with a sophisticated understanding of information architecture, and they’d received in-house privacy training. But they harboured restrictive views about privacy. An important clue was the way they habitually referred to “private” information rather than Personal Information (or equivalently, Personally Identifiable Information, PII). After explaining that Personal Information is the operative term in Australian legislation, and reviewing its definition as essentially any information about an identifiable person, we found that the team had not appreciated the extent of the PII in their system. They had overlooked that most of their audit logs collect PII, albeit indirectly and automatically, and that information about clients in their register provided by third parties was also PII (despite it being intuitively ‘less private’ by virtue of originating from others).

I attributed these blind spots to the developers’ loose framing of “private” information. Online and in privacy law alike, things are very crisp. The definition of PII as any data relating to an individual whose identity is readily apparent sets a low bar, embracing a great many data classes and, by extension, informatics processes. It might be counter-intuitive that PII originating from so many places (even the public domain) falls under privacy regulations, yet the definition of PII is clear cut and readily factored into systems analysis. After getting that, the team engaged in the PIA with fresh energy, and we found and rectified several privacy risks that had gone unnoticed.

Here are some more of the recurring misconceptions I’ve noticed over the past decade:

  • “Personal” Information is sometimes taken to mean especially delicate information such as payment card details, rather than any information pertaining to an identifiable individual; see also this exchange with US data breach analyst Jake Kouns over the Epsilon incident in 2011 in which tens of millions of user addresses were taken from a bulk email house;
  • the act of collecting PII is sometimes regarded only in relation to direct collection from the individual concerned; technologists can overlook that PII provided by a third party to a data custodian is nevertheless being collected by the custodian; likewise they may not appreciate that generating PII internally, through event logging for instance, also represents collection.

These instances and others show that many ICT practitioners suffer important gaps in their understanding. Security professionals in particular may be forgiven for thinking that most legislated Privacy Principles are legal technicalities irrelevant to them, for generally only one of the principles in any given set is overtly about security. Yet every one of the privacy principles in any data protection regime is impacted by information technology and security practices; see Mapping Privacy requirements onto the IT function, Privacy Law & Policy Reporter, v10.1 & 10.2, 2003. I believe the gaps in the privacy knowledge of ICT practitioners are not random but systemic, probably because privacy training for non-privacy professionals is not properly integrated with their particular world views.

To properly deal with data privacy, ICT practitioners need to have privacy framed in a way that leads to objective design requirements. Luckily there already exist several unifying frameworks for systematising the work of development teams. One tool that resonates strongly with data privacy practice is the Threat & Risk Assessment (TRA).

A TRA is a standard tool for analysing information security requirements and is widely practised in the public and private sectors in Australia. There are a number of standards that guide the conduct of TRAs, such as ISO 31000. A TRA is used to systematically catalogue all foreseeable adverse events that threaten an organisation’s information assets, identify candidate security controls to mitigate those threats, and prioritise the deployment of controls to bring all risks down to an acceptable level. The TRA process delivers real-world management decisions, recognising that non-zero risks are ever present and that no organisation has an unlimited security budget.

The TRA exercise is readily extensible to help Privacy by Design. A TRA can expressly incorporate privacy as an aspect of information assets worth protecting, alongside the conventional security qualities of confidentiality, integrity and availability ("C.I.A.").
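As a sketch of how such a privacy-extended TRA register might be structured (the assets, threats, scores and threshold below are hypothetical illustrations, not drawn from any real assessment):

```python
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    quality_at_risk: str   # C.I.A. plus privacy qualities, e.g. "Collection"
    candidate_control: str

    @property
    def risk(self) -> int:
        # Simple likelihood-by-impact scoring, as in many TRA methods.
        return self.likelihood * self.impact

ACCEPTABLE_RISK = 6  # illustrative management-set threshold

register = [
    Threat("Audit logs capture PII without notice", 4, 3, "Collection",
           "Minimise logged fields; update the collection notice"),
    Threat("Database exfiltrated by an attacker", 2, 5, "Confidentiality",
           "Encrypt at rest; tighten access controls"),
    Threat("PII reused for marketing without consent", 3, 2, "Use",
           "Enforce purpose limitation in application logic"),
]

# Prioritise: treat every threat whose risk exceeds the acceptable level.
for t in sorted(register, key=lambda t: t.risk, reverse=True):
    action = "TREAT" if t.risk > ACCEPTABLE_RISK else "ACCEPT"
    print(f"{action}  risk={t.risk:2d}  [{t.quality_at_risk}] {t.description}")
```

The point of the structure is that privacy qualities sit in the same column as confidentiality, integrity and availability, so privacy threats are scored, compared and treated inside the one familiar process rather than in a parallel exercise.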

[Table: asset inventory extending “C.I.A.” with privacy qualities, from Lockstep’s AusCERT 2013 tutorial “Designing Privacy by Design”]

A crucial subtlety here is that privacy is not the same as confidentiality, yet the two are frequently conflated. A fuller understanding of privacy leads designers to consider the Collection, Use, Disclosure and Access & Correction principles, over and above confidentiality, when they analyse information assets. The table above illustrates how privacy-related factors can be accounted for alongside “C.I.A.”. In another blog post I discuss the selection of controls to mitigate privacy threats, within a unified TRA framework.

And in this post I look at how the definitional uncertainties in privacy and the unfolding identifiability of PII need not cause security professionals much anxiety, because they are trained to deal with uncertainties and likelihoods.

We continue to actively research the closer integration of security and privacy practices.

Posted in Security, Privacy

Hacking over breakfast

The other morning, out of the blue, a sort of mini DEF CON came to a business breakfast in Sydney, with a public demonstration of how to crack the Australian government's logons for businesses.

Hardware infosec specialists ICT Security convened a breakfast meeting ostensibly to tell people about Bitcoin. The only clue that there was a bigger agenda was buried in the low-key byline "How could Bitcoin technology compromise your password database security?". I confess I missed the sub-plot altogether.

After a wide-ranging introduction to all things Bitcoin - including the theory of money, random numbers, Block Chains, ASICs and libertarianism - an ICT Security architect stepped up to talk about AusKEY, the Australian Government B2G Single Sign On system. And what was the Bitcoin connection? Well it happens that the technology needed for Bitcoin mining - namely affordable, high-performance custom chips for number crunching - is exactly what's needed to mount brute-force attacks on hashed passwords. And so ICT Security went on to demonstrate that the typical AusKEY password can be easily cracked. Moreover, they also showed off security holes in the AusKEY Java code where 'master' key details can be found in the clear.
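The underlying arithmetic is easy to demonstrate. The toy sketch below (a hypothetical leaked hash, not a reproduction of ICT Security's technique) recovers a short unsalted SHA-256'd password by exhaustive search in pure Python; purpose-built mining hardware computes the same hash function many orders of magnitude faster, which is why password stores need slow, salted schemes such as bcrypt or scrypt.

```python
import hashlib
import itertools
import string

# Hypothetical hash as it might appear in a breached password database:
# an unsalted SHA-256 of a weak, short, lowercase password.
stolen_hash = hashlib.sha256(b"cab").hexdigest()

def crack(target_hex, max_len=4, alphabet=string.ascii_lowercase):
    """Brute-force every candidate password up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo).encode()
            if hashlib.sha256(candidate).hexdigest() == target_hex:
                return candidate.decode()
    return None

print(crack(stolen_hash))  # prints "cab"
```

Even this naive loop tests tens of thousands of candidates in moments; an ASIC farm testing billions per second makes the economics of attacking fast, unsalted hashes obvious.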

The company says it has brought these vulnerabilities to the government's attention.

They said that their technique could defeat passwords as long as 10 mixed characters, which is longer than most password-safety guidelines recommend.

It's not entirely clear what ICT Security was seeking to achieve by now demonstrating the attack in public.

White-hat exposés are a recurring feature of the security ecosystem, and a very problematic one. In Australia, such exercises are often met with criminal investigation. For example, in 2011 First State Super reported a young man to police after he sent them evidence of how the fund's client logons could be guessed. Early this year, Public Transport Victoria called in the law after a self-professed "security researcher" reported (at first privately) a simple hack that exposed travellers' confidential details. And merely being in possession of evidence of an alleged cyber break-in was enough to get journalist Ben Grubb arrested by Queensland Police in 2011. So alleged hacking can attract zealous policing, with a wide net cast.

Government security managers will likely be smarting about the adverse AusKEY publicity. Just three months ago the hacker and writer Nik Cubrilovic published a raft of weaknesses in "MyGov", a Single Sign On for individuals in Australia's social security system. In classic style, Cubrilovic first raised his findings privately with the Department of Human Services, but when he got no satisfaction, he went public. At this stage, I don't know if the government has taken the MyGov matter further.

For mine, the main lesson of the morning's demonstration is that single-factor government authentication is obsolete. It is not good enough for citizens to be brought into e-government systems on twenty-year-old password security. The world is moving on, and fast; see the advances being made by the FIDO Alliance to standardise Multi-Factor Authentication.

In fact the AusKEY system offers an optional hardware USB key, but it hasn't been popular. That must change. E-government is far too important for single-factor authentication. Which is probably the name of ICT Security's game.

Posted in Security

Getting the security privacy balance wrong

National security analyst Dr Anthony Bergin of the Australian Strategic Policy Institute wrote about the government’s data retention proposals in the Sydney Morning Herald of August 14. I am a privacy advocate who accepts that law enforcement needs new methods to deal with terrorism; I trust there is in fact a case for greater data retention in order to weed out terrorist preparations. But I reject Bergin’s patronising call that “privacy must take a back seat to security”. He speaks soothingly of balance yet rejects privacy out of hand. As such, his argument for balance is anything but balanced.

Suspicions are rightly raised by the murkiness of the Australian government’s half-baked data retention proposals and by our leaders’ excruciating inability to speak cogently even about the basics. They bandy about metaphors for metadata that are so bad they smack of misdirection. Telecommunications metadata is vastly more complex than addresses on envelopes; for one thing, because cell phones use dynamic IP addresses, working out who made a call requires far more data than ASIO and the AFP are letting on (more on this by Internet expert Geoff Huston here).

The way authorities jettison privacy so casually is of grave concern. Either they do not understand privacy, or they’re paying lip service to it. In truth, data privacy is simply about restraint. Organisations must explain what personal data they collect, why they collect it, who else gets to access the data, and what they do with it. These principles are not at all at odds with national security. If our leaders are genuine in working with the public on a proper balance of privacy and security, then long-standing privacy principles about proportionality, transparency and restraint provide the perfect framework in which to hold the debate. Ed Snowden himself knows this; people should look beyond the trite hero-or-pariah characterisations and listen to his balanced analysis of national security and civil rights.

Cryptographers have a saying: There is no security in obscurity. Nothing is gained by governments keeping the existence of surveillance programs secret or unexplained, but the essential trust of the public is lost when their privacy is treated with contempt.

Posted in Trust, Security, Privacy