Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

Getting the security privacy balance wrong

National security analyst Dr Anthony Bergin of the Australian Strategic Policy Institute wrote of the government’s data retention proposals in the Sydney Morning Herald of August 14. I am a privacy advocate who accepts that law enforcement needs new methods to deal with terrorism. I do trust there is a case for greater data retention in order to weed out terrorist preparations, but I reject Bergin’s patronising call that “Privacy must take a back seat to security”. He speaks soothingly of balance yet rejects privacy out of hand; his argument for balance is anything but balanced.

Suspicions are rightly raised by the murkiness of the Australian government’s half-baked data retention proposals and by our leaders’ excruciating inability to speak cogently even about the basics. They bandy about metaphors for metadata that are so bad, they smack of misdirection. Telecommunications metadata is vastly more complex than addresses on envelopes; for one thing, the dynamic IP addresses of cell phones mean that for police to tell who made a call requires far more data than ASIO and the AFP are letting on (more on this by Internet expert Geoff Huston here).

The way authorities jettison privacy so casually is of grave concern. Either they do not understand privacy, or they’re paying lip service to it. In truth, data privacy is simply about restraint. Organisations must explain what personal data they collect, why they collect it, who else gets to access the data, and what they do with it. These principles are not at all at odds with national security. If our leaders are genuine in working with the public on a proper balance of privacy and security, then long-standing privacy principles about proportionality, transparency and restraint provide the perfect framework in which to hold the debate. Ed Snowden himself knows this; people should look beyond the trite hero-or-pariah characterisations and listen to his balanced analysis of national security and civil rights.

Cryptographers have a saying: There is no security in obscurity. Nothing is gained by governments keeping the existence of surveillance programs secret or unexplained, but the essential trust of the public is lost when their privacy is treated with contempt.

Posted in Trust, Security, Privacy

Are we ready to properly debate surveillance and privacy?

The cover of Newsweek magazine on 27 July 1970 featured an innocent couple being menaced by cameras and microphones and new technologies like computer punch cards and paper tape. The headline hollered “IS PRIVACY DEAD?”.

The same question has been posed every few years ever since.

In 1999, Sun Microsystems boss Scott McNealy urged us to “get over” the idea we have “zero privacy”; in 2008, Ed Giorgio from the Office of the US Director of National Intelligence chillingly asserted that “privacy and security are a zero-sum game”; Facebook’s Mark Zuckerberg proclaimed in 2010 that privacy was no longer a “social norm”. And now the scandal around secret surveillance programs like PRISM and the Five Eyes’ related activities looks like another fatal blow to privacy. But the fact that cynics, security zealots and information magnates have been asking the same rhetorical question for over 40 years suggests that the answer is No!

PRISM, as revealed by whistle blower Ed Snowden, is a Top Secret electronic surveillance program of the US National Security Agency (NSA) to monitor communications traversing most of the big Internet properties including, allegedly, Apple, Facebook, Google, Microsoft, Skype, Yahoo and YouTube. Relatedly, intelligence agencies have evidently also been obtaining comprehensive call records from major telephone companies, eavesdropping on international optic fibre cables, and breaking into the cryptography many take for granted online.

In response, forces lined up at tweet speed on both sides of the stereotypical security-privacy divide. The “hawks” say privacy is a luxury in these times of terror, if you've done nothing wrong you have nothing to fear from surveillance, and in any case, much of the citizenry evidently abrogates privacy in the way they take to social networking. On the other side, libertarians claim this indiscriminate surveillance is the stuff of the Stasi, and by destroying civil liberties, we let the terrorists win.

Governments of course are caught in the middle. President Obama defended PRISM on the basis that we cannot have 100% security and 100% privacy. Yet frankly that’s an almost trivial proposition. It's motherhood. And it doesn’t help to inform any measured response to the law enforcement challenge, for we don’t have any tools that would let us design a computer system to an agreed specification in the form of, say, “98% Security + 93% Privacy”. It’s silly to use the language of “balance” when we cannot measure the competing interests objectively.

Politicians say we need a community debate over privacy and national security, and they’re right (if not fully conscientious in framing the debate themselves). Are we ready to engage with these issues in earnest? Will libertarians and hawks venture out of their respective corners in good faith, to explore this difficult space?

I suggest one of the difficulties is that all sides tend to confuse privacy for secrecy. They’re not the same thing.

Privacy is a state of affairs where those who hold Personally Identifiable Information (PII) about us are constrained in how they use it. In daily life, we have few absolute secrets, but plenty of personal details. Not many people wish to live their lives underground; on the contrary, we actually want to be well known by others, so long as they respect what they know about us. Secrecy is a sufficient but not necessary condition for privacy. Robust privacy regulations mandate strict limits on what PII is collected, how it is used and re-used, and how it is shared.

Therefore I am a privacy optimist. Yes, obviously too much PII has burst its banks in cyberspace, yet it is not necessarily the case that any “genie” is “out of the bottle”.
If PII falls into someone’s hands, privacy and data protection legislation around the world provides strong protection against re-use. For instance, in Australia Google was found to have breached the Privacy Act when its StreetView cars recorded unencrypted Wi-Fi transmissions; the company cooperated in deleting the data concerned. In Europe, Facebook’s generation of tag suggestions without consent by biometric processes was ruled unlawful; regulators there forced Facebook to cease facial recognition and delete all old templates.

We might have a better national security debate if we more carefully distinguished privacy and secrecy.

I see no reason why Big Data should not be a legitimate tool for law enforcement. I have myself seen powerful analytical tools used soon after a terrorist attack to search out patterns in call records in the vicinity to reveal suspects. Until now, there has not been the technological capacity to use these tools pro-actively. But with sufficient smarts, raw data and computing power, it is surely a reasonable proposition that – with proper and transparent safeguards in place – population-wide communications metadata can be screened to reveal organised crimes in the making.

A more sophisticated and transparent government position might ask the public to give up a little secrecy in the interests of national security. The debate should not be polarised around the falsehood that security and privacy are at odds. Instead we should be debating and negotiating appropriate controls around selected metadata to enable effective intelligence gathering while precluding unexpected re-use. If (and only if) credible and verifiable safeguards can be maintained to contain the use and re-use of personal communications data, then so can our privacy.

For me the awful thing about PRISM is not that metadata is being mined; it’s that we weren’t told about it. Good governments should bring the citizenry into their confidence.

Are we prepared to honestly debate some awkward questions?

  • Has the world really changed in the past 10 years such that surveillance is more necessary now? Should the traditional balances of societal security and individual liberties enshrined in our traditional legal structures be reviewed for a modern world?
  • Has the Internet really changed the risk landscape, or is it just another communications mechanism? Is the Internet properly accommodated by centuries-old constitutions?
  • How can we have confidence in government authorities to contain their use of communications metadata? Is it possible for trustworthy new safeguards to be designed?

Many years ago, cryptographers adopted a policy of transparency. They have forsaken secret encryption algorithms, so that the maths behind these mission critical mechanisms is exposed to peer review and ongoing scrutiny. Secret algorithms are fragile in the long term because it’s only a matter of time before someone exposes them and weakens their effectiveness. Security professionals have a saying: “There is no security in obscurity”.

For precisely the same reason, we must not have secret government monitoring programs either. If the case is made that surveillance is a necessary evil, then it would actually be in everyone’s interests for governments to run their programs out in the open.

Posted in Trust, Security, Privacy, Internet, Big Data

Attribute wallets

There's little debate now that attributes are at least as important as "identity" in making decisions about authorization online. This was a recurring theme at the recent Cloud Identity Summit and in subsequent discussions on Twitter, my blog site and Kuppinger Cole's. The attention to attributes might mean a return to basics, with a focus on what it is we really need to know about each other in business. It takes me back to the old APEC definition of authentication: the means by which the recipient of a transaction or message can make an assessment as to whether to accept or reject that transaction.

A few questions remain, like what is the best way for attributes to be made available? And where does all this leave the IdP? The default architecture in many people’s minds is that attributes should be served up online by Attribute Providers in response to Relying Parties needing to know things about Subjects instantaneously. The various real time negotiations are threaded together by one or more Identity Providers. Here I want to present an alternative but complementary vision, in which attributes are presented to Relying Parties out of digital wallets controlled by the Subjects concerned, with little or no involvement of Identity Providers as such.

Terminology: In this post and in most of my works I use the nouns attribute, claim and [identity] assertion interchangeably. What we're talking about are specific factoids about the first party to a transaction (the "Subject") that are interesting to the second party in the transaction (the "Relying Party" or Service Provider). In general, each attribute is vouched for by an authoritative third party referred to as an Attribute Provider. In some special cases, an RP can trust the Subject to assert certain things about themselves, but the more interesting general case is where the Relying Party needs external assurance that a given attribute is true for the Subject in question. I don't have much to say about self-asserted attributes.

The need to know

As much as we're all interested in identity and "trust" online, the currency of most transactions is context-specific attributes. The context of a transaction often determines (and determines completely) the attributes that determine whether a party is authorised. For example, if the Subject is a shopper, the RP a merchant and the transaction a credit card purchase, then the attributes of interest are the cardholder name, account number, billing address and maybe the card verification code. In the context of dispensing an electronic prescription, the only attribute might be the doctor's prescriber number (pharmacists of course don't care who the doctor 'really is'; notoriously they can't even read the doctor's handwriting). For authorising a purchase order on behalf of a company, the important attributes might be the employee position and staff ID. For opening a new bank account, Know-Your-Customer (KYC) rules in most jurisdictions will dictate that such attributes as legal name, address, date of birth and so on be presented in a prescribed form (typically by way of original government issued ID documents).
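To make the point concrete, the contexts above can be expressed as a simple lookup from transaction context to the attributes the Relying Party actually needs. This is a sketch only; the context and attribute names are mine, purely for illustration:

```python
# Context determines (and determines completely) the attributes of interest.
# All keys and attribute names below are illustrative, not from any standard.
REQUIRED_ATTRIBUTES = {
    "card_purchase": ["cardholder_name", "account_number", "billing_address", "cvc"],
    "e_prescription": ["prescriber_number"],          # the pharmacist needs nothing else
    "purchase_order": ["employee_position", "staff_id"],
    "account_opening": ["legal_name", "address", "date_of_birth"],  # per KYC rules
}

def attributes_needed(context):
    """Return the attributes an RP needs to authorise a party in this context."""
    return REQUIRED_ATTRIBUTES[context]

print(attributes_needed("e_prescription"))  # ['prescriber_number']
```

The point of the sketch is that the doctor's "identity" never appears: each context keys to its own small attribute set.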

For most of the common attributes of interest in routine business, there are natural recognised Attribute Authorities. Some are literally authoritative over particular attributes. Professional bodies for instance issue registration numbers to accountants, doctors, engineers and so on; employers assign staff IDs; banks issue credit card numbers. In other cases, there are de facto authorities; most famously, driver licenses are relied on almost universally as proof of age around the world.

Sometimes rules are laid down that designate certain organisations to act as Attribute Providers - without necessarily using that term. Consider how KYC rules in effect designate Attribute Authorities. In Australia, the Financial Transaction Reports Act 1988 (FTRA) has long established an identity verification procedure called the "100 point check". FTRA regulations prescribe a number of points to various identification documents, and in order to open a bank account here, you need to present a total of 100 points worth of documents. Notable documents include:

  • Birth certificate: 70 points
  • Current passport: 70 points
  • Australian driver licence [bearing a photo]: 40 points
  • Foreign driver licence [not necessarily bearing a photo]: 25 points
  • Credit card: 25 points.

So in effect, the financial regulators in Australia have designated driver license bureaus and credit card issuers to be Attribute Providers for names (again, without actually using the label "AP"). Under legislated KYC rules, a bank creating a new customer account can rely on assertions made by other banks or even foreign driver license authorities about the customer's name, without needing to have any relationship with the "APs". Crucially, the bank need not investigate for itself nor understand the detailed identification processes of the "APs" listed in the KYC rules. Of course we can presume that KYC legislators took advice on the details of how various identity documents are put together, and in the event that an error is found somewhere in the production of an identity feeder document then forensic investigation would follow, but the important point is that routinely, the inner workings of all the various APs are opaque to most relying parties. The bank as RP does not need to know how a license bureau does its job.
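The arithmetic of the 100 point check can be sketched in a few lines of Python. The document names and point values are as listed above; note that the real FTRA regulations also constrain how documents may be combined, which this toy version ignores:

```python
# Point values from the FTRA 100 point check as described above.
# This sketch ignores the regulations' rules about document categories
# and combinations; it only illustrates the points arithmetic.
POINTS = {
    "birth_certificate": 70,
    "current_passport": 70,
    "australian_driver_licence": 40,  # bearing a photo
    "foreign_driver_licence": 25,     # not necessarily bearing a photo
    "credit_card": 25,
}

def passes_100_point_check(documents):
    """Return True if the presented documents total at least 100 points."""
    total = sum(POINTS.get(doc, 0) for doc in documents)
    return total >= 100

# A passport plus a photo driver licence: 70 + 40 = 110 points.
print(passes_100_point_check(["current_passport", "australian_driver_licence"]))  # True
```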

And yet we do know that the recognised Attribute Providers continuously improve what they do. Consider driver licenses. In Australia up until the 1970s, driver licenses were issued on paper. Then plastic cards were introduced with photographs. Numerous anti-copying measures have been rolled out since then, such as holograms, guilloche patterns, and optically variable and micro printing. Now the first chipped driver licenses are being issued, in which cryptographic technology not only makes counterfeiting difficult but also enables digitally signed cardholder details to be transmitted electronically (the same trick utilised in EMV to stop skimming and carding). Less obvious to users, biometric facial recognition is also used now during issuance and renewal to detect fraudsters. So over time the attributes conveyed by driver licenses have not changed at all - name, address and date of birth have always meant the same thing - but the reliability of these attributes when presented via licenses is better than ever.

Imposters are better detected during the issuance process, the medium has become steadily more secure, and, more subtly, the binding between each licence and its legitimate holder is stronger.

We are accustomed in traditional business to dealing with others on the basis of their official credentials alone, without needing to know much about who they 'really are'. When deciding if we can accept someone in a particular transaction context, we rely on recognised providers of relevant attributes. Hotel security checks a driver license for a patron's age; householders check the official ID badges of repair people and meter readers; a pathologist checks the medical credentials of an ordering doctor; an architect only deals with licensed surveyors and structural engineers; shareholders only need to know that a company's auditors are properly certified accountants. In none of these routine cases is the personal identity of the first party of any real interest. What matters is the attributes that authorise them to deal in each context.

Digital wallets

Now, in the online environment, what is the best way to access attributes? My vision is of digital wallets. I advocate that users be equipped to hold close any number of recognised attributes in machine readable formats, so they can present selected attributes as the need arises, directly to Relying Parties. This sort of approach is enabled by the fact that the majority of economically important transaction settings draw on a relatively small number of attributes, and we can define a useful attribute superset in advance. As discussed previously such a superset could include:

  • {Given name, Residential address, Postal address, Date of Birth, "Over 18", Residential status, Professional qualification(s), Association Membership(s), Social security number, Student number, Employee Number, Bank account number(s), Credit card number(s), Customer Reference Number(s), Medicare Number, Health Insurance No., Health Identifier(s), OSN Membership(s)}

Many of these attributes have just one natural authoritative provider each; others could be provided by a number of alternative organisations that happen to verify them as a matter of course and could stand ready to vouch for them. The decision to accept any AP's word for a given attribute is ultimately up to the Relying Party; each RP has its own standards for the required bona fides of the attributes it cares about.

There are a few obvious candidates for digital attribute wallets:


  • A smart phone could come pre-loaded with attributes that have been verified as a matter of course by the telephone company, like the credit card number associated with the account, or proof of age. A digital wallet on the phone could later be topped up with additional attributes, over the air or via some other more secure over-the-counter protocol.

  • A smart driver license could hold digital certificates signed by the licensing bureau, asserting name, address, date of birth, and/or simpler de-identified statements like "the holder is over 18". Note that the assertions could be made separately or in useful combinations; for privacy, a proof of age certificate need not name the holder but simply specify that the assertion is carried on a particular type of chip, signed by the authoritative issuer.

  • When you receive a smart bank card, the issuer bank could load the chip with your name, address, date of birth, PANs and/or certified copies of identity documents presented to open the account. Such personal identity assertions could then be presented by the customer to other RPs like financial institutions or retailers to originate other accounts.

Do we need an "Identity Provider" to thread together these attributes? No. While it is important that RPs can trust that each attribute is in the right hands, the issuance process (including the provisioning of attribute carrying tokens like cards and mobile phones) is just one aspect of the Attribute Provider's job. If we can trust say a licensing bureau to verify the particulars of a license holder, then we can also trust them as part of that process to ensure that the license is in the hands of its rightful owner.
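To illustrate issuance and direct presentation, here is a minimal sketch in Python. HMAC stands in for the Attribute Provider's digital signature purely to keep the example self-contained; a real wallet would carry asymmetric signatures so Relying Parties need only the AP's public key. All names here are hypothetical:

```python
import hashlib
import hmac
import json

# HMAC is a stand-in for the AP's digital signature in this sketch only.
# In practice the AP would sign with a private key and the RP would verify
# with the AP's public key, so the RP never holds any secret.
AP_KEY = b"licensing-bureau-signing-key"  # hypothetical

def issue_attribute(name, value):
    """The Attribute Provider signs an attribute for the Subject's wallet."""
    payload = json.dumps({"attribute": name, "value": value}).encode()
    sig = hmac.new(AP_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_attribute(claim):
    """The Relying Party checks the signature on a claim presented directly."""
    expected = hmac.new(AP_KEY, claim["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

wallet = [issue_attribute("over_18", True)]  # provisioned at issuance time
print(verify_attribute(wallet[0]))           # True: accept the transaction
```

No Identity Provider appears in the flow: the Subject presents the signed attribute straight to the RP, which verifies it locally and instantly.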

In contrast with the real time 'negotiated' attributes exchange architectures, the digital wallet approach has the following advantages:


  • Decentralised architecture: lower cost and quicker to deploy; we can start local and scale up as Attribute Providers gain ground;

  • Fast: digitally signed attributes presented from smart devices direct to Relying Parties can be cryptographically verified instantaneously, for higher performance, especially in bandwidth limited environments.

  • Intrinsically private: Direct presentation of attributes minimises the exposure of personal information to third parties.

  • “Natural”: A digital wallet of attributes is congruent with the way we hold diverse pieces of personal documentation in regular wallets; unlike the big federation models, no new intermediaries are involved.

  • Legally simpler: It is a relatively simple matter for Attribute Authorities to warrant the accuracy of separate particulars like name, date of birth and account number, without making any other broad representations of who the Subject 'really is'. There is none of the legal fine print that bedevilled Big PKI Certification Authorities in the past and which proved fatal in federation programs like the Internet Industry Association 2FA pilot.

Notes

  • On a case by case basis, as dictated by their risk management strategies, RPs can revert to an online AP to check the up-to-the-minute validity of an attribute. In practice this is not necessary in many cases; many of the common attributes in business are static, and once issued (or vouched for by a reputable body) do not change. If attributes are conveyed by digital certificates, then their validity can be efficiently checked online by OCSP and near-line by CRL.
  • The patient smartcards already widespread in Europe are an ideal carrier for a plurality of human services identifiers (such as public health insurance numbers, health record identifiers, medical social networking handles, and research tracking numbers; see also a previous presentation on anonymity and pseudonymity in e-research).
  • As other conventional plastic cards are progressively upgraded to chip - such as the proposed US Medicare card modernization - we have a natural opportunity to load them with secure digital assertions too.
  • In the medium to long term, digitally signed attributes could be made to chain through communities of CAs to a small number of globally recognised Root Authorities. For a model, refer to s4.4 "How to convey fitness for purpose" of my Public Key Superstructure presentation to the 2008 NIST IDTrust workshop.

Posted in Smartcards, Security, Identity, Federated Identity, Trust

The devil is in the legals

Many of the identerati campaign on Twitter and on the blogosphere for a federated new order, where banks in particular should be able to deal with new customers based on those customers’ previous registrations with other banks. Why, they ask, should a bank put you through all that identity proofing palaver when you must have already passed muster at any number of banks before? Why can’t your new bank pick up the fact that you’ve been identified already? The plea to federate makes a lot of sense, but as I’ve argued previously, the devil is in the legals.

Funnily enough, a clue as to the nature of this problem is contained in the disclaimers on many of the identerati’s blogs and Twitter accounts:

"These are personal opinions only and do not reflect the position of my employer".

Come on. We all know that’s bullshit.

The bloggers I’m talking about are thought leaders at their employers. Many of them have written the book on identity. They're chairing the think tanks. What they say goes! So their blogs do in fact reflect very closely what their employers think.

So why the disclaimer? It's a legal technicality. A company’s lawyers do not want the firm held liable for the consequences of a random reader following an opinion provided outside the very tightly controlled conditions of a consulting contract; the lawyers do not want any remarks in a blog to be taken as advice.

And it's the same with federated identity. Accepting another bank's identification of an individual is something that cannot be done casually. Regardless of the common sense embodied in federated identity, the banks’ lawyers are saying to all institutions, sure, we know you're all putting customers through the same identity proofing protocols, but unless there is a contract in place, you must not rely on another bank's process; you have to do it yourself.

Now, there is a way to chip away at the tall walls of legal habit. This is going to sound a bit semantic, but we are talking about legal technicalities here, and semantics is the name of the game. Instead of Bank X representing to Bank Y that X can provide the "Identity" of a new customer, Bank X could provide a digitally notarised copy of some of the elements of the identity proofing. Elements could be provided as digitally signed messages saying "Here's a copy of Steve’s gas bill" or "Here's a copy of Steve’s birth certificate which we have previously verified". We could all stop messing around with abstract identities (which in the fine print mean different things to different Relying Parties) and instead drop down a level and exchange information about verified claims, or "identity assertions". Individual RPs could then pull together the elements of identity they need, add them up to an identification fit for their own purpose, and avoid the implications of having third parties "provide identity". The semantics would be easier if we only sought to provide elements of identity. All IdPs could be simplified and streamlined as Attribute Providers.
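For illustration, here is one way Bank X might package verified elements for Bank Y to assemble into its own identification. The field names and the sufficiency test are mine, and the notary signature itself is elided; this only sketches the shape of the exchange:

```python
# Hypothetical "elements of identity" as Bank X might notarise them.
# In a real system each element would carry Bank X's digital signature;
# that detail is elided here.
verified_elements = [
    {"element": "utility_bill", "holder_name": "Steve",
     "verified_by": "Bank X", "verified_on": "2013-04-02"},
    {"element": "birth_certificate", "holder_name": "Steve",
     "verified_by": "Bank X", "verified_on": "2013-04-02"},
]

def sufficient_for(rp_required, elements):
    """The RP pulls together the elements it needs for its own purpose."""
    held = {e["element"] for e in elements}
    return set(rp_required) <= held

# Bank Y's own standard: it wants a utility bill and a birth certificate.
print(sufficient_for(["utility_bill", "birth_certificate"], verified_elements))  # True
```

Note that nothing here asserts an abstract "identity"; Bank Y adds up the elements to an identification fit for its own purpose.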

See also An identity claims exchange bus and Identity is in the I of the beholder.

Posted in Trust, Internet, Federated Identity

I never trusted trust

From the archives.

  • "It is often put simply that in e-business, authentication means that you know who you're dealing with. Authentication is inevitably cited as one of the four or five 'pillars of security' (the others being integrity, non-repudiation, confidentiality and, sometimes, availability).
  • "To be a little more precise, let's examine the functional definition of authentication adopted by the Asia Pacific Economic Co-operation (APEC) E-Security Task Group, namely the means by which the recipient of a transaction or message can make an assessment as to whether to accept or reject that transaction.
  • "Note that this definition does not have identity as an essential element, let alone the complex notion of 'trust'. Identity and trust all too frequently complicate discussions around authentication. Of course, personal identity is important in many cases, but it should not be enshrined in the definition of authentication. Rather, the fundamental issue is one’s capacity to act in the transaction at hand. Depending on the application, this may have more to do with credentials, qualifications, memberships and account status, than identity per se, especially in business transactions."

Making Sense of your Authentication Options in e-Business
Journal of the PricewaterhouseCoopers Cryptographic Centre of Excellence, No. 5, 2001.

See also http://lockstep.com.au/library/quotes.

Posted in Identity, Trust

Surfacing identity

Editorial Note 19 May 2014: I changed "assertions" to "attributes" in the body of the blog, to use the more popular term right now. See how Bob Pinheiro in his comment rightly used the terms attributes/assertions interchangeably. I'm sure myself attributes, assertions and claims are synonymous for the purposes of "identity management".

The metaphor of a spectrum is often used to describe a sliding scale of knowingness. The degree to which someone is known is shown to range from zero (anonymity), up to some maximum (i.e. "verified identity") passing through pseudonymity and self-asserted identity along the way. It's a useful way of characterising some desirable features of identity management; it's definitely good to show that in different settings, we need to know different things about people. But the spectrum is something of an oversimplification, and it contradicts modern risk management. While it's great to legitimise the plurality of identities (by illustrating how we can maintain several identities at different points on a spectrum), the metaphor is problematic. Spectra are linear, with just one independent variable whereas risk management is multi-dimensional. The metaphor implies that identities can be ordered from weak to strong -- they can't -- and insidiously suggests that identities at the right hand end of the scale are superior.

A Digital Identity is a set of claims (aka attributes) that are meaningful in some context [Ref: Kim Cameron's Laws of Identity]. When an Identity Provider (IdP) identifies me in their context, what they're doing is testing and vouching for a closed set of n attributes: {A1, A2, ..., An}. When a Relying Party (RP) wants to identify me, they need to be satisfied about a number of particular attributes relevant to their business; let's say there are m of them: {Ai, Aii, ..., Am}. These sets need not coincide; the things about me that matter to an RP may or may not be the same things that the IdP is able to assert about me.

Meaningful Identity Federation requires, at the very least, that (1) the RP's m attributes are a subset of the IdP's n attributes, and (2) the IdP has tested each attribute to an acceptable level of confidence for the RP's purposes. When designing a federation, the sets of attributes for all anticipated RPs need to be defined in advance, together with the required confidence levels. Closing the "attribute space" and quantifying all its dimensions is a huge challenge.

When we look at identification risk management in a more multi-dimensional way, each identity looks more like a surface in a multidimensional space than a simple point on a 1D line. For example, let's imagine that a general purpose IdP ascertains and vouches for six attributes: given name, home address, date of birth, educational qualifications, residency and gender. The IdP gauges the accuracy with which it can assert each attribute as follows:

[Figure: the IdP's attribute confidence surface]


  • A1 Given name: 90%
  • A2 Address: 90%
  • A3 DOB: 90%
  • A4 Gender: 35%
  • A5 Qualifications: 25%
  • A6 Residency: 25%


For this Identity Provider to be useful to any given Relying Party, the attributes need to be of interest to the RP, and they have to be asserted with a minimum accuracy. Consider RP1, a bank, which needs to be sure of a customer's name, address and date of birth to at least 80% confidence under applicable KYC rules, and doesn't need to know anything else. We can plot RP1's identity expectation and compare it with the IdP's attributes. All well and good in this case, for the IdP covers the RP:

[Figure: RP1's requirements plotted against the IdP's attribute surface]


Now consider RP2, an adult social networking service. All it wants to know is that its anonymous customers are at least 18 years of age. Its requirement for Attribute 3 is 90%, and it doesn't care about anything else. So again, the IdP meets the needs of this RP (assuming that the identity management technology allows for selective disclosure of just the relevant attribute and hides all the others):

[Figure: RP2's requirements plotted against the IdP's attribute surface]


Finally, let's look at a hospital employing a casual doctor. Credentialing rules and malpractice risk means that the hospital is more interested in the individual's qualifications and residency (which must be known with 90% confidence), than their name and address (50%). And now we see that RP3's requirements are not covered by this particular IdP:

[Figure: RP3's requirements plotted against the IdP's attribute surface]
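The three comparisons amount to a simple coverage test, sketched here in Python. The confidence figures are the ones from the worked example; the function and variable names are mine:

```python
# The IdP's attribute surface: confidence (%) per attribute, as above.
IDP = {"given_name": 90, "address": 90, "dob": 90,
       "gender": 35, "qualifications": 25, "residency": 25}

def idp_covers(rp_needs, idp=IDP):
    """True iff every attribute the RP needs is asserted with enough confidence."""
    return all(idp.get(attr, 0) >= minimum for attr, minimum in rp_needs.items())

RP1 = {"given_name": 80, "address": 80, "dob": 80}   # bank, under KYC rules
RP2 = {"dob": 90}                                    # adult service: age only
RP3 = {"qualifications": 90, "residency": 90,
       "given_name": 50, "address": 50}              # hospital credentialing

print(idp_covers(RP1), idp_covers(RP2), idp_covers(RP3))  # True True False
```

The test makes plain why the surface metaphor beats the spectrum: RP3 fails not because the IdP is "weaker" overall, but because its confidence is low in exactly the dimensions RP3 cares about.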


Returning to the idea of a spectrum, there is no sliding scale from anonymity up to "full" identity. Neither can trust in an identity be pinpointed somewhere between LOA 1 and LOA 4. In general, the more serious an identity gets, the more complex and multivariate is the set of attributes that it covers. I'm afraid the pseudonymous social logon experience at LOA 1 doesn't pave the way to more serious multifaceted identity federation "at the other end" of a spectrum. It's not like simply turning up the heat to step up from cold to hot.

Posted in Trust, Identity, Federated Identity

Designing out identification uncertainty

A few of us have been debating Levels of Assurance on Twitter. It seems crucial that we look at authentication at enrolment or registration time separately from authentication at transaction time, as Jim Fenton touched on in his comment at http://lockstep.com.au/blog/2011/03/11/nstic-and-banking#comments.

A big problem in the discourse is that the processes of checking identity at enrolment/registration time and at transaction time are both called "authentication".

I contend that using quaternary LOAs to gauge someone we’re about to transact with is a very big change from the way we do routine business today, and that any major change to business liability arrangements, even if it is worthwhile, introduces such legal complexities that it can kill federated identity initiatives. Interestingly, lots of others also say LOAs are a big change, but for different reasons! Paul Madsen @paulmadsen and Phil Hunt @independentid argue that trust in the real world is analogue, that it is determined on a continuum, and that we are never 100% sure of who someone really is.

That’s true, and it’s crucial to consider the uncertainty when authenticating someone at enrolment time. But I think it’s a moot point when we authenticate a counter-party at transaction time. During a great many routine transactions, the counter-party is either authorised to deal or they are not. The question becomes binary, not analogue, and not even quaternary! When a pharmacist processes a prescription, they want to know whether it was signed by a doctor, or not. When I withdraw money at the bank, the teller checks whether I am the account holder, or not. The authority to sign purchase orders, tax returns, audit reports, legal advice, radiology reports, insurance loss estimates, survey reports etc. is always binary. When the Relying Parties to any of these sorts of transactions process them, I do not believe they spend any time at all pondering the analogue trustworthiness of the sender, or the percentage probability that the sender is not really who they say they are.

So what’s going on here? I admit there is uncertainty at the time of registration, but where has it gone by the time RPs make their real-time binary decisions to accept or not? The answer is that uncertainty and its consequences are calculated in advance, at design time, and factored into enrolment protocols and other systemic risk management mechanisms, in such a way that during routine transactions the RP need not worry.

In detail, consider doctors’ registration. The process for credentialing doctors is not perfect; let’s say 0.1% of doctors out there are frauds of some sort, carrying an authority that is not true. I contend that all those relying on a doctor’s credentials for routine transactions (pharmacists, pathologists, other doctors, hospital administrators, health insurers etc.) authenticate the doctor on a binary basis, by checking whether they seem to hold a valid credential. If the doctor holds a credential, the RP simply assumes it’s genuine (naturally the RP will need to check for revocation and tampering, but if the credential is mechanically valid, then the RP assumes it’s genuine). This 'blind' assumption by the RP results in a finite system-wide error rate in prescriptions, insurance claims and so on. There are system-wide quality control mechanisms that in effect monitor this error rate and initiate corrections when it gets too high (or when the odd spectacular screw-up occurs, such as a fraudulent doctor killing a patient).

So there is a cascade of identification risk decisions being made at different points in the identity life cycle:

1. At design time, health system managers, policy makers and regulators work out what the averaged consequences are of misidentifying doctors, and create processes and rules to manage the risk. These include up-front credentialing procedures, as well as downstream safety nets, and ongoing monitoring and review of the rules.
2. At registration time (when they get through college), candidate doctors are evaluated against those rules. Many different inputs will be involved; some, like reference checks, might be near-analogue. The process will be judgement-based, and usually iterative, to deal with exceptions like overseas-trained doctors, and may take time to execute. Yet it results in an authority to practise that is black-and-white [actually doctors typically carry a portfolio of specific black-and-white authorisations, pertaining to e.g. writing a prescription, writing a prescription for narcotics, carrying out certain operations, filing a public health insurance claim, admitting rights to a given private hospital, signing a pathology report etc.]
3. At transaction time, when a doctor asserts their claim to be a doctor, the RP will make a go/no-go decision, usually entirely automatically. If the RP follows all the rules of the transaction system, they will generally bear no liability if a mis-identification has occurred upstream. RPs really want transaction-time authentication to be a highly mechanical process, with as little human intervention as possible; pharmacists, for example, never want to be in a position of having to make a fresh judgement on each and every routine prescription about the trustworthiness of the doctor.
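The cascade above can be sketched in a few lines; all the identifiers, actions and the tolerated error rate here are invented for illustration:

```python
# Design time: regulators fix the rules and a tolerated system-wide
# error rate, policed by downstream safety nets (not shown here).
TOLERATED_ERROR_RATE = 0.001

# Registration time: the judgement-based process ends in a black-and-white
# portfolio of specific authorisations per practitioner.
portfolios = {
    "DR-12345": {"prescribe", "prescribe_narcotics", "claim_insurance"},
}
revoked = set()

# Transaction time: the RP's decision is mechanical and binary.
def authorised(provider_no, action):
    return (provider_no not in revoked
            and action in portfolios.get(provider_no, set()))
```

Note that nothing analogue survives to transaction time: the RP asks one yes/no question, and all the earlier uncertainty has been absorbed into the registration rules and the revocation list.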

My experience is that a great many economically important transactions can be accepted/rejected based on a very small number of assertions (usually just one) that the Subject can present directly to the RP. There need not be any real-time Q&A back and forth between the RP and the Subject's IdP at transaction time. For example, a doctor’s Provider Number can be baked into a digital certificate issued by a recognised authority and asserted for each transaction via digital signature; RPs don’t need to know anything else about a doctor in order to accept or reject the vast majority of e-health communications. If a revocation check is necessary, it should be done at the medical credential issuer; an independent IdP adds no value here. Anyone remember a company called Valicert? They tried to intermediate status checks for digital certificates, but they failed to deliver any real value; RPs found they were better off going direct to a CA to check whether a certificate was valid.
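A toy sketch of that "one assertion is enough" pattern, with an HMAC standing in for the issuer's digital signature (unlike a real certificate, the verifier here shares the issuer's key, so treat this purely as an illustration of the mechanics) and an invented Provider Number:

```python
import hashlib
import hmac

# Stand-in for the recognised authority's signing key. With real
# certificates this would be a private key, and RPs would verify
# against the corresponding public key instead.
ISSUER_KEY = b"registration-authority-secret"

def issue_credential(provider_no):
    # The Provider Number is baked into the signed credential.
    tag = hmac.new(ISSUER_KEY, provider_no.encode(), hashlib.sha256).hexdigest()
    return provider_no, tag

def rp_accepts(provider_no, tag):
    # Mechanically valid signature over the Provider Number => accept.
    # (A real RP would also consult the issuer's revocation service.)
    expected = hmac.new(ISSUER_KEY, provider_no.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The RP never needs a third-party IdP in the loop: one signed attribute from the credential issuer settles the binary question.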

Posted in Trust, Security, Identity

Reading Peter Steiner's Internet dog

How are we to read Peter Steiner's famous cartoon "On the Internet, nobody knows you're a dog"? It wasn't an editorial cartoon, so Steiner wasn't trying to make a point. I believe he was just trying to be funny.

Why is Internet dog funny? I think it's because dogs are mischievous (especially the ordinary mutts in question). Dogs chew your slippers when you're not looking. So imagine what fun they would have on the Internet. Given the chance, they would probably sell your slippers on eBay.

Technologists especially latched onto the cartoon and gave it deeper meanings, particularly relating to "trust". Whether or not the cartoon triggered it, it coincided with a rush of interest in the topic. Through most of the 1990s, hordes of people became preoccupied with "trust" as a precondition for e-business. Untold hours were spent researching, debating, deconstructing and redefining "trust", as if the human race didn't really understand it. Really? Was there ever really a "trust" problem per se? Did the advent of the Internet truly demand such earnest reappraisal?

No. We should read the Steiner cartoon as being all about fidelity not trust. It goes without saying that you wouldn't trust a dog. The challenge online is really pretty prosaic: it is to tell what someone is. Trust then follows from that knowledge in context.

I maintain that by and large we trust people well enough in the real world. There's no end of conventions, rules and instincts for establishing trust - none perfect, but perfectly good enough, and of course, evolving all the time. It's true that establishing trust in new business relationships is subtle and multi-pathed, but in routine business transactions - the sort that the Internet is good for - trust is not subtle at all. The only thing that matters in most transactions is the parties' formal credentials (not even their identities) in the context at hand. For example, a pharmacist doesn't "trust" the doctor as such when filling a prescription. Medicos, accountants, engineers, bankers, lawyers, architects and so on have professional qualifications that authorise them to perform certain transactions. Consider that in the traditional mercantile world, the shopkeeper or sales assistant is typically a total stranger, but we know that consumer protection legislation, credit card agreements and big companies' reputations all keep us safe. So we don't actually "trust" most people we do business with at all. We don't have to.

There is an old Italian proverb that perfectly sums up most business:

It is good to trust, but it is better not to.

That should be the defining slogan of Internet sociology, not "no one knows you're a dog". If we weren't over-doing trust, the transition from real world to digital would not be so daunting, and the foundational concepts of identity would not need re-defining.

Posted in Trust, Privacy, Internet, Identity, Culture, Cloud