Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

Privacy watch

Update 22 September 2014

Last week, Apple suddenly went from silent to expansive on privacy, and the thrust of my blog written straight after the Apple Watch announcement is now wrong. Apple posted a letter from CEO Tim Cook at www.apple.com/privacy along with a document that sets out how "We’ve built privacy into the things you use every day".

The paper is very interesting. It's a sophisticated and balanced account of policy, business strategy and technology elements that go to create privacy. Apple highlights that they:

  • forswear the exploitation of customer data
  • do not scan content or messages
  • do not let their small "iAd" business take data from other Apple departments
  • require certain privacy protective practices on the part of their health app developers.

They have also provided quite decent information about how Siri and health data are handled.

Apple's stated privacy posture is all about respect and self-restraint. Setting out these principles and commitments is a very welcome development indeed. I congratulate them.

Today Apple launched their much anticipated wrist watch, described by CEO Tim Cook as "the most personal device they have ever developed". He got that right!

Rather more than a watch, it's a sort of guardian angel. The Apple Watch has Siri built-in, along with new haptic sensors and buzzers, a heartbeat monitor, accelerometer, and naturally the GPS and Wi-Fi geolocation capability to track your speed and position throughout the day. So they say "Apple Watch is an all-day fitness tracker and a highly advanced sports watch in a single device".

Apple Watch

The Apple Watch will be a paragon of digital disruption. To understand and master disruption today requires the coordination of mobility, Big Data, the cloud and user interfaces. These cannot be treated as isolated technologies, so when a company like Apple controls them all, at scale, real transformation follows.

Thus Apple is one of the few businesses that can make promises like this: "Over time, Apple Watch gets to know you the way a good personal trainer would". In this we hear echoes of the smarts that power Siri, and we are reminded that amid the novel intimacy we have with these devices, many serious privacy problems have yet to be resolved.

The Apple Event today was a play in four acts:
Act I: the iPhone 6 release;
Act II: Apple Pay launch;
Act III: the Apple Watch announcement;
Act IV: U2 played live and released their new album free on iTunes!

It was fascinating to watch the thematic differences across these acts. With Apple Pay, they stressed security and privacy; we were told about the Secure Element, the way card numbers are replaced by random numbers (tokenization), and an architecture where Apple cannot see how much you spend nor where you spend it. On the other hand, when it came to the Apple Watch and its integrated health sensors, privacy wasn't mentioned, not at all. We are left to deduce that aggregating personal health data at Apple's servers is part of a broader plan.
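
For readers unfamiliar with tokenization, here's a minimal sketch of the general idea (in Python, and emphatically not Apple's actual design): the merchant only ever handles a random surrogate, and only the party holding the token vault can map it back to the real card number.

    import secrets

    class TokenVault:
        """Toy token vault mapping random tokens to real card numbers (PANs).
        Illustrative only; real payment tokenization adds per-transaction
        cryptograms, domain restrictions and HSM-backed storage."""

        def __init__(self):
            self._vault = {}  # token -> PAN, held only by the payment network

        def tokenize(self, pan):
            token = "tok_" + secrets.token_hex(8)  # random, unrelated to the PAN
            self._vault[token] = pan
            return token

        def detokenize(self, token):
            return self._vault[token]              # only the vault can reverse it

    vault = TokenVault()
    device_token = vault.tokenize("4111111111111111")

    # The merchant (and anyone intercepting the transaction) sees only the token.
    print("Merchant sees:", device_token)

    # The payment network resolves it back to the real card when authorising.
    print("Network resolves to PAN ending", vault.detokenize(device_token)[-4:])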

The cornerstones of data privacy include Collection Limitation, Use Limitation (or "Purpose Specification") and Openness. Custodians of our Personally Identifiable Information (PII) should refrain from collecting and retaining PII they don't really need; they should specify what they do with PII and restrict unrelated secondary usage; and they should tell people what they're doing, generally in a Privacy Policy. With Siri, Apple sadly fails all these tests. (See Update 22 September 2014 above.)

The Apple Privacy Policy is altogether silent on Siri. The document details the sorts of information collected through its overt business processes like registration, sales and support, but it says nothing about the voice recordings and transcripts of Siri communications. Neither does the Siri FAQ mention what is done with all that data. It's quite an omission, seeing that when you dictate an SMS or an email to Siri, Apple retains a copy of communications that are normally out of bounds for your telecomms carrier.

It's been left to journalists to try and find out what Apple does with the information it mines from Siri. Wired magazine eventually discovered that Apple retains masked Siri voice recordings for six months; it then purportedly de-identifies them and keeps them for a further 18 months, for research. Yet even these explanations don't touch on the extracted contents of the communications, nor the metadata, like the trends and correlations that go to Siri's learning. If the purpose of Siri is ostensibly to automate the operation of the iPhone and its apps, then Apple should refrain from using the by-products of Siri's voice processing for anything else. But we just don't know what they do, and Apple imposes no self-restraint. (See Update 22 September 2014 above.)

We should hope for radically greater transparency with the Apple Watch and its health apps. Most of the watch's data processing and analytics will be carried out in the cloud. So Apple will come to hold detailed records of its users' exercise regimes, their performance figures, trend data and correlations. These are health records. Inevitably, health applications will take in other medical data, like food diaries entered by users, statistics imported from other databases, and detailed measurements from Internet-connected scales, blood pressure monitors and even medical devices. Apple will see what we're doing to improve our health, day by day, year on year. They will come to know more about what's making us healthy and what's not than we do ourselves.

Apple Watch Activity App

Now, the potential benefits from this sort of personal technology to self-managed care and preventative medicine are enormous. But so are the data management and privacy obligations.

Within the US, Apple will doubtless be taking steps to avoid falling under the stringent HIPAA regulations, yet in the rest of the world a more subtle but far-reaching problem looms. Many broad-based data privacy regimes forbid the collection of health information without consent. And the laws of the European Union, Australia, New Zealand and elsewhere are generally technology neutral. This means that data collected directly from patients or doctors, and fresh data collected by way of automated algorithms, are treated essentially the same way. So when a sophisticated health management app running in the cloud somewhere mines all that exercise and lifestyle data, and starts to make inferences about health and wellbeing, great care needs to be taken that the individuals concerned know what's going on in advance, and have given their informed consent.

One of the deep privacy challenges in Big Data is that data miners don't know what they're going to find. Even with the best will in the world, a company can struggle to say in its Privacy Policy what PII it expects to extract (and thus collect) in future from the raw data it collects today. At Constellation Research we've been fleshing out a new sort of compact between businesses and individuals that seeks to keep users abreast of developments in data analytics, and promises to provide people with proper control of personal Big Data results.

It ought to be possible to expressly opt in to Big Data processes when you can understand the pros and cons and the net benefits, and to later opt out, and opt back in again, as the benefit equation shifts over time. But even visualising the products of Big Data is hard; I believe graphical user interfaces (GUIs) to allow people to comprehend and actively control the process will be one of the great software design problems of our age.

Apple are obviously preeminent in GUI and user experience innovation. You would think if anyone can create the novel yet intuitive interfaces desperately needed to control Big Data PII, Apple can. But first they will have to embrace their responsibilities for the increasingly intimate details they are helping themselves to. If the Apple Watch is "the most personal device they've ever designed" then let's see privacy and data protection commitments to match.

Posted in Privacy, e-health, Constellation Research, Cloud, Big Data

BlackBerry Security Summit, 29 July 2014

Summary: BlackBerry is poised for a fresh and well differentiated play in the Internet of Things, with its combination of handset hardware security, its uniquely rated QNX operating system kernel, and its experience with the FIDO device authentication protocols.

To put it plainly, BlackBerry is not cool.

And neither is security.

But maybe two wrongs can make a right, in terms of a compelling story. BlackBerry's security story has always been strong, it's getting stronger, and it could save them.

Today I attended the BlackBerry Security Summit in New York City (Disclosure: my travel and accommodation were paid by BlackBerry). The event was announced very recently; none of my colleagues had heard of it. So what was the compelling need to put on a security show in New York? It turned out to be the 9:00am announcement that BlackBerry is acquiring the German voice security specialists Secusmart. BlackBerry and Secusmart have worked together for a long time; their stated aim is to put a real secure phone in the "hand of every President and every Chancellor".

Secusmart CEO Hans-Christoph Quelle is a forceful champion of voice security; in this age of evidently routine spying by states and competitors alike, there is enormous demand building for counter-surveillance in telephony and messaging. Secusmart is also responsible for the highly rated Micro SD cards that BlackBerry proudly use as removable security modules in their handsets. And this is where the Secusmart tie-up really resonates for me. It comes hot on the heels of last week's Cloud Security Summit, where there was so much support for personal Hardware Security Modules (HSMs), be they Micro SD cards, USB keys, NFC Secure Elements, the good old "Trusted Platform Module" (TPM) or any number of proprietary chip sets.

Today's event also showcased BlackBerry's QNX division (acquired in 2010) and its secure operating system. CEO John Chen reckons that the software in 50% of connected cars runs on the QNX OS (and in high reliability settings like power stations, wind turbines and even gaming machines, the penetration is even higher). And so he is positioning BlackBerry as a major player in the Internet of Things.

We heard from QNX founder Dan Dodge about the elegance of their system. Dodge stressed that at just 100,000 lines of code, the OS is small enough for his team to know it inside out. There is not a single line of code in their OS that QNX did not write themselves. In contrast, such mastery is utterly impossible in the 15,000,000 lines that make up Linux or the estimated 50-70 million lines in Windows. It happens that I've recently lamented the parlous state of software quality and the need to return to first principles security. So I am on Dan Dodge's wavelength.

BlackBerry's security people had a little bit to say about identity as well, and apparently more's to come. For now, they are flagging that with 250 million customers in their messaging system, BBM represents "one of the biggest identity systems in the world". And as such the company does plan to "federate" it somehow. They reminded us at the same time of the BlackBerry Cloud slated for launch in December.

Going forward, the importance of strong, physical Two Factor Authentication for accessing the cloud is almost a given now. And the smartphone is fast becoming the predominant access mechanism, so the combination of secure elements, handsets and high security infrastructure is potent.

There's a lot that BlackBerry is keeping close to its chest, but for me one extant piece of the IoT puzzle was conspicuously absent today: the role of the FIDO Alliance protocols. After all, BlackBerry has been a FIDO Board Member for a long time. It seems to me that FIDO's protocols for exchanging verified authentication signals and information about devices should be an important element of BlackBerry's play in both its software infrastructure and its devices.

In closing, I'll revisit the very first thing we heard at today's event. It was a video testimonial, telling us "If you need nuclear security, you need BlackBerry". As I said, security really isn't cool. Jazzing up the company's ability to deliver "nuclear" grade to demanding clients is actually not the right message. Security in the Internet of Things -- and therefore in everyday life -- may turn out to be just as important.

We basically know that nuclear power plants are inherently risky; we know that planes will occasionally fall out of the sky. Paradoxically, the community has a reasonable appetite for risk and failures in very complex systems like those. Individually and/or collectively we have decided we just can't live without electricity and travel and so we've come to settle on a roughly acceptable finite cost in terms of failures. But when the mundanities of life go digital, the tolerance of failure will drop. When our cars and thermostats and light switches are connected to the Internet, and when a bug or a script kiddie's stunt can soon send whole neighbourhoods into a spin, consumers won't stand for it.

So the very best security we can currently engineer is in fact going to be necessary at scale for smart appliances, wearables, connected homes, smart meters and networked cars. We need a different gauge for this type of security, and it's going to be very tough to engineer and deploy economically. But right now, with its deep understanding of dependable OS's and commitment to high quality device hardware, it seems to me BlackBerry has a head-start in the Internet of Things.

Posted in Software engineering, Security, Identity, Cloud

Postcard from Monterey 3 #CISmcc

Days 3 and 4 at CIS Monterey.

Andre Durand's Keynote

The main sessions at the Cloud Identity Summit (namely days three and four overall) kicked off with keynotes from Ping Identity chief Andre Durand, New Zealand technology commentator Ben Kepes, and Ping Technical Director Mark Diodati. I'd like to concentrate on Andre's speech for it was truly fresh.

Andre has an infectious enthusiasm for identity, and is a magnificent host to boot. As I recall, his CIS keynote last year in Napa was, pretty simply, a dedication to the industry he loves. Not that there's anything wrong with that. But this year he went a whole lot further, with a rich deep dive into some things we take for granted: identity tokens and the multitude of security domains that bound our daily lives.

It's famously been said that "identity is the new perimeter" and Andre says that view informs all they do at Ping. It's easy I think to read that slogan to mean security priorities (and careers) are moving from firewalls to IDAM, but the meaning runs deeper. Identity is meaningless without context, and each context has an edge that defines it. Identity is largely about boundaries, and closure.

  • MyPOV and as an aside: The move to "open" identities, which has powered IDAM for over a decade, is subject to natural limits that arise precisely because identities are perimeters. All identities are closed in some way. My identity as an employee means nothing beyond the business activities of my employer; my identity as an American Express Cardholder has no currency at stores that don't accept Amex; my identity as a Qantas OneWorld frequent flyer gets me nowhere at United Airlines (nor very far at American, much to my surprise). We discovered years ago that PKI works well in closed communities like government, pharmaceutical supply chains and the GSM network, but that general purpose identity certificates are hopeless. So we would do well to appreciate that "open" cross-domain identity management is actually a special case and that closed IDAM systems are the general case.

Andre reviewed the amazing zoo of hardware tokens we use from day to day. He gave scores of examples, including driver licenses of course but license plates too; house key, car key, garage opener, office key; the insignias of soldiers and law enforcement officers; airline tickets, luggage tags and boarding passes; the stamps on the arms of nightclub patrons and the increasingly sophisticated bracelets of theme park customers; and tattoos. Especially vivid was Andre's account of how his little girl on arriving at CIS during the set-up was not much concerned with all the potential playthings but was utterly rapt to get her ID badge, for it made her "official".

Andre Durand's CIS keynote: tokens make us 'official'

Tokens indeed have always had talismanic powers.

Then we were given a fly-on-the-wall slide show of how Andre typically starts his day. By 7:30am he has accessed half a dozen token-controlled physical security zones, from his home and garage, through the road system, the car park, the office building, the elevator, the company offices and his own corner office. And he hasn't even logged into cyberspace yet! He left unsaid whether or not all these domains might be "federated".

  • MyPOV: Isn't it curious that we never seem to beg for 'Single Sign On' of our physical keys and spaces? I suspect we know instinctively that one-key-fits-all would be ridiculously expensive to retrofit and would require fantastical cooperation between physical property controllers. We only try to federate virtual domains because the most common "keys" - passwords - suck, and because we tend to underestimate the cost of cooperation amongst digital RPs.
Andre Durand's CIS keynote: the properties of tokens

Tokens are, as Andre reminded us, on hand when you need them, easy to use, easy to revoke, and hard to steal (at least without being noticed). And they're non-promiscuous in respect of the personal information they disclose about their bearers. It's a wondrous set of properties, which we should perhaps be more conscious of in our work. And tokens can be used off-line.

  • MyPOV: The point about tokens working offline is paramount. It's a largely forgotten value. Andre's compelling take on tokens makes for a welcome contrast to the rarely questioned predominance of the cloud. Managing and resolving identity in the cloud complicates architectures, concentrates more of our personal data, and puts privacy at risk (for it's harder to unweave all the traditionally independent tracks of our lives).

In closing, Andre asked a rhetorical question which was probably forming in most attendees' minds: What is the ultimate token? His answer had a nice twist. I thought he'd say it's the mobile device. With so much value now remote, multi-factor cloud access control is crucial; the smart phone is the cloud control du jour and could easily become the paragon of tokens. But no, Andre considers that a group of IDAM standards could be the future "universal token" insofar as they beget interoperability and portability.

He said of the whole IDAM industry "together we are networking identity". That's a lovely sentiment and I would never wish to spoil Andre Durand's distinctive inclusiveness, but on that point technically he's wrong, for really we are networking attributes! More on that below and in my previous #CISmcc diary notes.

The identity family tree

My own CISmcc talk came at the end of Day 4. I think it was well received; the tweet stream was certainly keen and picked up the points I most wanted to make. Attendance was great, for which I should probably thank Andre Durand, because he staged the Closing Beach Party straight afterwards.

I'll post an annotated copy of my slides shortly. In brief I presented my research on the evolution of digital identity. There are plenty of examples of how identity technologies and identification processes have improved over time, with steadily stronger processes, regulations and authenticators. It's fascinating too how industries adopt authentication features from one another. Internet banking for example took the one-time password fob from late 90's technology companies, and the Australian PKI de facto proof-of-identity rules were inspired by the standard "100 point check" mandated for account origination.

Authentication Family Tree: bank identity evolves

Clearly identity techniques shift continuously. What I want to do is systematise these shifts under a single unifying "phylogeny"; that is, a rigorously worked-out family tree. I once used the metaphor of a family tree in a training course to help people organise their thinking about authentication, but the inter-relationships between techniques were guesswork on my part. Now I'm curious whether there is a real family tree that can explain the profusion of identities we have been working so long on simplifying, often to little avail.

Authentication Family Tree (v1.0)

True Darwinian evolution requires there to be replicators that correspond to the heritable traits. Evolution results when the proportions of those replicators in the "gene pool" drift over generations as survival pressures in the environment filter beneficial traits. The definition of Digital Identity as a set of claims or attributes provides a starting point for a Darwinian treatment. I observe that identity attributes are like "Memes" - the inherited units of culture first proposed by biologist Richard Dawkins. In my research I am trying to define sets of available "characters" corresponding to technological, business and regulatory features of our diverse identities, and I'm experimenting with phylogenetic modelling programs to see what patterns emerge in sets of character traits shared by those identities.

Authentication Family Tree: memome characters
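
To make the idea a little more concrete, here is a toy sketch of the kind of character matrix I have in mind. The identities, the characters and the scorings are all invented for illustration, and plain average-linkage (UPGMA) clustering over Hamming distances stands in for a proper phylogenetics package.

    # Toy character matrix: rows are identities, columns are heritable traits.
    # Everything here is invented for illustration; real analysis would score
    # many more characters and use dedicated phylogenetics software.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage

    identities = ["bank login", "payment card", "employee badge",
                  "student ID", "Medicare card"]

    # Characters: issued in person, hardware token, regulated issuer,
    # expires, bound to payments, carries a photo (1 = trait present).
    characters = np.array([
        [0, 1, 1, 1, 1, 0],   # bank login with an OTP fob
        [1, 1, 1, 1, 1, 0],   # payment card
        [1, 1, 0, 1, 0, 1],   # employee badge
        [1, 0, 0, 1, 0, 1],   # student ID
        [1, 0, 1, 1, 0, 0],   # Medicare card
    ])

    # Hamming distance between trait vectors, then average-linkage (UPGMA),
    # the simplest distance-based way of building a tree from shared traits.
    tree = linkage(pdist(characters, metric="hamming"), method="average")
    print(tree)  # each row: the two clusters merged, their distance, new size

The pattern of merges is the family tree; the research question is whether real identity traits cluster in a way that explains which federations work and which don't.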

So what? A rigorous scientific model for identity evolution would have many benefits. First and foremost it would have explanatory power. I do not believe that as an industry we have a satisfactory explanation for the failure of such apparently good ideas as Information Cards. Nor for promising federation projects like the Australian banking sector's "Trust Centre" and "MAMBO" lifetime portable account number. I reckon we have been "over federating" identity; my hunch is that identities have evolved to fit particular niches in the business ecosystem to such an extent that taking a student ID for instance and using it to log on to a bank is like dropping a saltwater fish into a freshwater tank. A stronger understanding of how attributes are organically interrelated would help us better plan federated identity, and to even do "memetic engineering" of the attributes we really want to re-use between applications and contexts.

If a phylogenetic tree can be revealed, it would confirm the 'secret lives' of attributes and thereby lend more legitimacy to the Attributes Push (which coincidentally some of us first spotted at a previous CIS, in 2013). It would also provide evidence that identification risks in local environments are why identities have come to be the way they are. In turn, we could pay more respect to authentication's idiosyncrasies, instead of trying to pigeonhole them into four rigid Levels of Assurance. At Sunday's NSTIC session, CTO Paul Grassi floated the idea of getting rid of LOAs. That would be a bold move of course; it could be helped along by a fresh focus on attributes. And of course we kept hearing throughout CIS Monterey about the FIDO Alliance with its devotion to authentication through verified device attributes, and its strategy to stay away from the abstract business of identities.

Authentication Family Tree: FIDO

Reflections on CIS 2014

I spoke with many people at CIS about what makes this event so different. There's the wonderful family program of course, and the atmosphere that creates. And there's the paradoxical collegiality. Ping has always done a marvelous job of collaborating in various standards groups, and likewise with its conference, Ping's people work hard to create a professional, non-competitive environment. There are a few notable absentees of course but all the exhibitors and speakers I spoke to - including Ping's direct competitors - endorsed CIS as a safe and important place to participate in the identity community, and to do business.

But as a researcher and analyst, the Cloud Identity Summit is where I think you can see the future. People report hearing about things for the first time at a CIS, only to find those things coming true a year or two later. It's because there are so many influencers here.

Last year one example was the Attributes Push. This year, the emphasis on Attributes has become entirely mainstream. For example, the NSTIC pilot partner ID.me (a start-up business focused on improving veterans' access to online discounts through better verification of entitlements) talks proudly of their ability to convey attributes and reduce the exposure of identity. And Paul Grassi proposes much more focus on Attributes from 2015.

Another example is the "Authorization Agent" (AZA) proposed for SSO in mobile platforms, which was brand new when Paul Madsen presented it at CIS Napa in 2013. Twelve months on, AZA has broadened into the Native Apps (NAPPS) OpenID Working Group.

Then there are the things that are nearly completely normalised. Take mobile devices. They figured in just about every CISmcc presentation, but were rarely called out. Mobile is simply the way things are now.


Hardware stores

The mobile form factor is now taken for granted. And the cryptographic capabilities now standard in most handsets (and increasingly embedded in smart things and wearables) are getting a whole lot of express attention. Hardware crypto was a major theme at CIS. I've already made much of Andre Durand's keynote on tokens, but it was the same throughout the event.

    • There was a session on hybrid Physical and Logical Access Control Systems (PACS-LACS) featuring the US Government's PIV-I smartcard standard and the major ongoing R&D on that platform sponsored by DHS.
    • Companies like SecureKey are devoted to hardware-based keys, increasingly embedded in "street IDs" like driver licenses, and are working with numerous players deep in the SIM and smartcard supply chains.
    • The FIDO Alliance is fundamentally about hardware based identity security measures, leveraging embedded key pairs to attest to the pedigree of authenticator models and the attributes that they transmit on behalf of their verified users (the pattern is sketched in code after this list). FIDO promises to open up the latent authentication power of many hundreds of millions of devices already featuring Secure Elements of one kind or another. FIDO realises PKI the way nature intended all along.
    • The good old concept of "What You See Is What You Sign" (WYSIWYS) is making a comeback, with mobile platform players appreciating that users of smartphones need reliable cues in the UX as to the integrity of transaction data served up in their rich operating systems. Clearly some exciting R&D lies ahead.
    • In a world of formal standards, we should also acknowledge the informal standards around us - the benchmarks and conventions that represent the 'real way' to do things. Hardware based security is taken increasingly for granted. The FIDO protocols are based on key pairs that people just seem to assume (correctly) will be generated in the compliant devices during registration. And Apple with its Touch ID has helped to 'train' end users that biometric templates must never leave the safety of a controlled hardware end point. FIDO of course makes that a hard standard.
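
As promised above, here is a rough sketch (in Python, using the widely available cryptography library) of the device-held key pair pattern that FIDO builds on. It is the general shape only, not the actual U2F/UAF message formats: the private key stays on the device, and the relying party only ever sees a public key and signatures over its own fresh challenges.

    # Simplified sketch of on-device key pairs answering a server challenge.
    # General pattern only, not the actual FIDO U2F/UAF wire protocol.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Registration: the authenticator generates a key pair in hardware and
    # releases only the public key. The private key never leaves the device.
    device_private_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = device_private_key.public_key()

    # Authentication: the relying party issues a fresh random challenge...
    challenge = os.urandom(32)

    # ...the device signs it with its on-board private key...
    signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # ...and the relying party verifies against the public key it registered.
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Challenge signed by the registered device")  # verify() raises if not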

What's next?

In my view, the Cloud Identity Summit is the only not-to-be missed event on the IDAM calendar. So long may it continue. And if CIS is where you go to see the future, what's next?

    • Judging by CISmcc, I reckon we're going to see entire sessions next year devoted to Continuous Authentication, in which signals are collected from wearables and the Internet of Things at large, to gain insights into the state of the user at every important juncture.
    • With the disciplined separation of abstract identities from concrete attributes, we're going to need a Digital Identity Stack for reference. FIDO's pyramid is on the right track, but it needs some work. I'm not sure the pyramid is the right visualisation; for one thing it evokes Maslow's Hierarchy of Needs, in which the pinnacle corresponds to luxuries, not essentials!
    • Momentum will grow around Relationships. Kantara's new Identity Relationship Management (IRM) WG was talked about in the CISmcc corridors. I am not sure we're all using the word in the same way, but it's a great trend, for Digital Identity is only really a means to an end, and it's the relationships they support that make identities important.

So there's much to look forward to!

See you again next year (I hope) in Monterey!

Posted in Smartcards, PKI, Language, Identity, Federated Identity, Cloud

Postcard from Monterey #CISmcc

First Day Reflections from CIS Monterey.

Follow along on Twitter at #CISmcc (for the Monterey Conference Centre).

The Cloud Identity Summit really is the top event on the identity calendar. The calibre of the speakers, the relevance and currency of the material, the depth and breadth of the cohort, and the international spread are all unsurpassed. It's been great to meet old cyber-friends in "XYZ Space" at last -- like Emma Lindley from the UK and Lance Peterman. And to catch up once again with talented folks like Steffen Sorensen from New Zealand.

A day or two before, Ian Glazer of Salesforce asked in a tweet what we were expecting to get out of CIS. And I replied that I hoped to change my mind about something. It's unnerving to have your understanding and assumptions challenged by the best in the field ... OK, sometimes it's outright embarrassing ... but that's what these events are all about. A very wise lawyer said to me once, around 1999 at the dawn of e-commerce, that he had changed his mind about authentication a few times up to that point, and that he fully expected to change his mind again and again.

I spent most of Saturday in OpenID Foundation workshops. OIDF chair Don Thibeau enthusiastically stressed two new(ish) initiatives: Mobile Connect, in conjunction with the mobile carrier trade association GSM Association @GSMA, and HIE Connect for the health sector. For the uninitiated, HIE means Health Information Exchange, namely a hub for sharing structured e-health records among hospitals, doctors, pharmacists, labs, e-health records services, allied health providers, insurers, drug & device companies, researchers and carers; for the initiated, we know there is some language somewhere in which the letters H.I.E. stand for "Not My Lifetime".

But seriously, one of the best (and pleasantly surprising) things about HIE Connect, as the OIDF folks tell it, is the way its leaders unflinchingly take for granted the importance of privacy in the exchange of patient health records. Because honestly, privacy is not a given in e-health. There are champions on the new frontiers like genomics who actually say privacy may not be in the interests of the patients (or more's the point, the genomics businesses). And too many engineers in my opinion still struggle with privacy as something they can effect. So it's great -- and believe me, really not obvious -- to hear the HIE Connect folks -- including Debbie Bucci from the US Dept of Health and Human Services, and Justin Richer of Mitre and MIT -- dealing with it head-on. There is a compelling fit for the OAuth and OIDC protocols here, with their ability to manage discrete pieces of information about users (patients) and to permission them all separately. Having said that, Don and I agree that e-health records permissioning and consent is one of the great UI/UX challenges of our time.
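
To illustrate that fit, here is a hypothetical sketch of a fine-grained OAuth authorisation request. The endpoint, client ID and scope names are invented for the example (loosely in the style of SMART on FHIR) and are not any particular HIE's API; the point is simply that each slice of the record can be permissioned, and revoked, separately.

    # Hypothetical example of asking a patient to permission discrete slices
    # of their record via separate OAuth scopes. All names here are invented.
    from urllib.parse import urlencode

    AUTHZ_ENDPOINT = "https://hie.example.org/oauth2/authorize"  # hypothetical

    params = {
        "response_type": "code",
        "client_id": "allied-health-app",                        # hypothetical
        "redirect_uri": "https://app.example.org/callback",
        # Each scope is a distinct, revocable grant over one part of the
        # record: medications and immunisations here, but not, say,
        # mental health notes or genomic results.
        "scope": "openid patient/Medication.read patient/Immunization.read",
        "state": "af0ifjsldkj",
    }

    print(AUTHZ_ENDPOINT + "?" + urlencode(params))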

Justin also highlighted that the RESTful patterns emerging for fine-grained permissions management in healthcare are not confined to healthcare. Debbie added that the ability to query rare events without undoing privacy is also going to be a core defining challenge in the Internet of Things.

MyPOV: We may well see tremendous use cases for the fruits of HIE Connect before they're adopted in healthcare!

In the afternoon, we heard from Canadian and British projects that have been working with the Open Identity Exchange (OIX) program now for a few years each.

Emma Lindley presented the work they've done in the UK Identity Assurance Program (IDAP) with social security entitlements recipients. These are not always the first types of users we think of for sophisticated IDAM functions, but in Britain, local councils see enormous efficiency dividends from speeding up the issuance of, for example, disabled parking permits, not to mention reducing imposters, who cost money and breed so much resentment towards the genuinely entitled. Emma said one Attributes Exchange beta project reduced the time taken to get a 'Blue Badge' permit from 10 days to 10 minutes. She went on to describe the new "Digital Sources of Trust" initiative, which promises to reconnect under-banked and under-documented sections of society with mainstream financial services. Emma told me the much-abused word "transformational" really does apply here.

MyPOV: The Digital Divide is an important issue for me, and I love to see leading edge IDAM technologies and business processes being used to do something about it -- and relatively quickly.

Then Andre Boysen of SecureKey led a discussion of the Canadian identity ecosystem, which he said has stabilised nicely around four players: Federal Government, Provincial Govt, Banks and Carriers. Lots of operations and infrastructure precedents from the payments industry have carried over.
Andre calls the smart driver license of British Columbia the convergence of "street identity and digital identity".

MyPOV: That's great news - and yet comparable jurisdictions like Australia and the USA still struggle to join governments and banks and carriers in an effective identity synthesis without creating great privacy and commercial anxieties. All three cultures are similarly allergic to identity cards, but only in Canada have they managed to supplement drivers licenses with digital identities with relatively high community acceptance. In nearly a decade, Australia has been at a standstill in its national understanding of smartcards and privacy.

For mine, the CIS Quote of the Day came from Scott Rice of the OpenID Foundation. We all know the stark problem in our industry of the under-representation of Relying Parties in the grand federated identity projects. IdPs and carriers so dominate IDAM. Scott asked us to imagine a situation where "the auto industry was driven by steel makers". Governments wouldn't put up with that for long.

Can someone give us the figures? I wonder if Identity and Access Management is already more economically important than cars?!

Cheers from Monterey, Day 1.

Posted in Smartcards, Security, Identity, Federated Identity, e-health, Cloud, Biometrics, Big Data

The IdP is Dead!

"All Hail the Relyingpartyrati!"

I presented my ecological theory of identity to the Cloud Identity Summit last week.

The quote "The IdP is Dead! Hail the Relyingpartyrati" is from my conclusion (reflecting the running inside joke at CIS that something has to die each year). One of my ideas is that because identification is carried out by Relying Parties, it's more correct (and probably liberating) to think of identity as being created by the RP. The best option for what we call "Identity Providers" today may be to switch to providing more specific Attributes or identity assertions. In fact, the importance of attributes kept recurring throughout CIS; Andrew Nash for example said at one point that "attributes are at least as interesting if not more so than identities".

[Now it has emerged during the debates that people might use the word "attribute" in slightly different ways. I said at CIS that I treat 'attributes', 'assertions' and 'claims' interchangeably; I think we should focus functionally on reliable exchange of whatever factoids a Relying Party needs to know about a Subject in order to be able to transact. I'll blog in more detail about Attributes shortly.]
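
As a toy illustration of what I mean by exchanging factoids rather than identities, here is a sketch of an Attribute Provider vouching for a single attribute about a subject. In practice this would be a SAML assertion or a signed JWT; the HMAC key, field names and values here are invented for the example.

    # Toy attribute assertion: the provider vouches for one factoid, not a
    # whole identity. Real systems use SAML assertions or signed JWTs; the
    # shared key and field names here are invented for illustration.
    import hashlib, hmac, json, time

    PROVIDER_KEY = b"shared-secret-with-relying-party"  # stand-in for real PKI

    def issue_attribute(subject_ref, attribute, value):
        claim = {
            "iss": "pharmacy-board.example.org",  # the Attribute Provider
            "sub": subject_ref,                   # opaque reference, not a name
            "attr": attribute,
            "value": value,
            "iat": int(time.time()),
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["sig"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    # The Relying Party learns only that this subject holds a current pharmacist
    # registration: no name, no date of birth, no other attributes.
    print(json.dumps(issue_attribute("subj-8c1f", "pharmacist_registration",
                                     "current"), indent=2))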

To summarise my CIS talk:

Federated Identity is easier said than done. In response, we should drop down a level, and instead of trading in abstract high level identities, let's instead federate concrete component attributes. We don't really need Identity Providers as such; we need a marketplace of Attribute Providers from which Relying Parties can get exactly the right information they need to identify their users.


  • identities evolve over time in response to risk factors in the natural business environment; we need to understand why authentication has got the way it is before we rush in to federate disparate methods
  • identities appear to be "memetic", composed of heritable traits relating to business rules and practices, conventions, standards, regulations, technologies, algorithms, cryptographic parameters, form factors and so on
  • the dreaded identity silos are actually ecological niches; taking an identity from the context in which it evolved and using it in another is like taking a salt water fish and dropping it into a fresh water tank
  • if identity is memetic then we should be able to sequence digital identities into their constituent memes, and thence re-engineer them more carefully to match desired new applications
  • it is an over-simplification to think of a (one dimensional) identity spectrum; instead each RP's "identification" requirements are multi-dimensional, and best visualised as a surface.

Posted in Identity, Federated Identity, Cloud

Taking logon seriously

To a great extent, many of the challenges in information security boil down to human factors engineering. We tend to have got the security-convenience trade-off in infosec badly wrong. The computer password is a relic of the 1960s, devised by technicians, for technicians. If we look at traditional security, we see that people are universally habituated to good practices with keys and locks.

The terrible experience of Wired writer Mat Honan being hacked created one of those classic overnight infosec sensations. He's become the poster boy for the movement to 'kill the password'. His follow up post of that name was tweeted over two thousand times in two days.

Why are we so late to this realisation? Why haven't we had proper belts-and-braces access security for our computers ever since the dawn of e-commerce? We all saw this coming -- the digital economy would become the economy; the information superhighway would become more important than the asphalt one; our computing devices would become absolutely central to all we do.

It's conspicuous to me that we have always secured our serious real world assets with proper keys. Our cars, houses, offices and sheds all have keys. Many of us would have been issued with special high security keys in the workplace. Cars these days have very serious keys indeed, with mechanical and electronic anti-copying design features. It's all bog standard.

Mul-T-Lock key and car keys


But for well over a decade now, cyber security advocates have spoken earnestly about Two Factor Authentication as if it's something new and profound. And what's worse, IT people have let the term Two Factor Authentication (which in my view necessarily entails a physical hardware device - something you will be aware of when you've lost it) become bastardised in various ways. People now talk of "Multi Factor Authentication", including Knowledge Based Authentication, as if it's equivalent to 2FA.

For a few extra bucks we could build proper physically keyed security into all our computers and networked devices. The ubiquity of contactless interfaces by Wi-Fi or NFC opens the way for a variety of radio frequency keys in different form factors for log on.
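
As a bare-bones sketch of what such a key could do in place of a password (assuming a symmetric secret provisioned into the token; real products vary, and many use public key methods instead): the secret never leaves the token, and the host only ever sees one-off responses to fresh challenges, so there is nothing reusable to phish or replay.

    # Bare-bones sketch of challenge-response logon with a physical key.
    # The secret lives only inside the token's secure hardware; the host
    # checks a one-off response to a fresh challenge, so nothing replayable
    # ever crosses the NFC or Wi-Fi link.
    import hashlib, hmac, os

    class HardwareKey:
        """Stand-in for the secure element inside an NFC fob or USB key."""
        def __init__(self, secret):
            self._secret = secret                 # never leaves the token

        def respond(self, challenge):
            return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    # Enrolment: host and token share a secret once (in practice injected at
    # manufacture or during a pairing ceremony).
    secret = os.urandom(32)
    token = HardwareKey(secret)

    # Logon: the host sends a fresh random challenge, the token answers,
    # and the host compares against its own computation.
    challenge = os.urandom(16)
    response = token.respond(challenge)
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    print("Logon", "granted" if hmac.compare_digest(response, expected) else "denied")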

There's something weird about the computing UX that has long created different standards for looking at the cyber world and the real world. A personal story illustrates the point. About nine years ago, I met with a big e-commerce platform provider that was experiencing a boom in fraud against the online merchants it was hosting. They wanted to offer their merchant tenants better security against hijackers. I suggested including a USB key for mutual authentication and strong digital signatures, but the notion of any physical token was rejected out of hand. They could not stomach the idea that the merchant might be inconvenienced in the event they misplaced their key. What an astonishing double standard! I asked them to imagine being a small business owner, who one day drives to the office to find they've left their door key behind. What do you want to do? Have some magic protocol that opens the door for you, or do you put up with the reality of having to turn around and get your keys? Could they not see that softening the pain of losing one's keys by creating magic remedies was going to compromise security?

We are universally habituated to physical keys and key rings. They offer a brilliant combination of usability and security. If we had comparably easy-to-use physical keys for accessing virtual assets, we could easily manage a suite of 10 or 15 or more distinct digital identities, just as we manage that many real world keys. Serious access security for our computers would be simple, if we just had the will to engineer our hardware properly.

Posted in Security, Cloud

If it sounds too good to be true, it probably is

Imagine a new secretarial agency that provides you with a Personal Assistant. They're a really excellent PA. They look after your diary, place calls, make bookings, plan your travel, send messages for you, take dictation. Like all good PAs, they get to know you, so they'll even help decide where to have dinner.

And you'll never guess: there's no charge!

But ... at the end of each day, the PA reports back to their agency, and provides a full transcript of all you've said, everyone you've been in touch with, everything you've done. The agency won't say what they plan to do with all this data, how long they'll keep it, nor who they'll share it with.

If you're still interested in this deal, here's the PA's name: Siri.

Seriously now ... Siri may be a classic example of the unfair bargain at the core of free social media. Natural language processing is a fabulous idea of course, and will improve the usability of smart phones many times over. But Siri is only "free" because Apple are harvesting personal information with the intent to profit from it. A cynic could even call it a Trojan Horse.

There wouldn't be anything wrong with this bargain if Apple were up-front about it. In their Privacy Policy they should detail what Personal Information they are collecting out of all the voice data; they should explain why they collect it, what they plan to do with it, how long they will retain it, and how they might limit secondary usage. It's not good enough to vaguely reserve their rights to "use personal information to help us develop, deliver, and improve our products, services, content, and advertising".

Apple's Privacy Policy today (dated 21 June 2010 [*]) in fact makes no mention of voice data at all, nor the import of contacts and other PI from the iPhone to help train its artificial intelligence algorithms.

I myself will decline to use Siri while the language processing is done in the cloud, and while Apple does not constrain its use of my voice data. I'll wait for NLP to be done on the device with the data kept private. And I'd happily pay for that app.

Update 28 Nov 2011

Apple updated their Privacy Policy in October, but curiously, the document still makes no mention of Siri, nor voice data in general. By rights (literally in Europe) Apple's Privacy Policy should detail amongst other things why it retains identifiable voice data, and what future use it plans to make of the data.

Posted in Social Networking, Social Media, Privacy, Cloud

Reading Peter Steiner's Internet dog

How are we to read Peter Steiner's famous cartoon "On the Internet, nobody knows you're a dog"? It wasn't an editorial cartoon, so Steiner wasn't trying to make a point. I believe he was just trying to be funny.

Why is Internet dog funny? I think it's because dogs are mischievous (especially the ordinary mutts in question). Dogs chew your slippers when you're not looking. So imagine what fun they would have on the Internet. Given the chance, they would probably sell your slippers on eBay.

Technologists especially latched onto the cartoon and gave it deeper meanings, particularly relating to "trust". Whether or not the cartoon triggered it, it coincided with a rush of interest in the topic. Through most of the 1990s, hordes of people became preoccupied with "trust" as a precondition for e-business. Untold hours were spent researching, debating, deconstructing and redefining "trust", as if the human race didn't really understand it. Really? Was there ever really a "trust" problem per se? Did the advent of the Internet truly demand such earnest reappraisal?

No. We should read the Steiner cartoon as being all about fidelity not trust. It goes without saying that you wouldn't trust a dog. The challenge online is really pretty prosaic: it is to tell what someone is. Trust then follows from that knowledge in context.

I maintain that by and large we trust people well enough in the real world. There's no end of conventions, rules and instincts for establishing trust - none perfect, but perfectly good enough, and of course, evolving all the time. It's true that establishing trust in new business relationships is subtle and multi-pathed, but in routine business transactions - the sort that the Internet is good for - trust is not subtle at all. The only thing that matters in most transactions is the parties' formal credentials (not even their identities) in the context at hand. For example, a pharmacist doesn't "trust" the doctor as such when filling a prescription. Medicos, accountants, engineers, bankers, lawyers, architects and so on have professional qualifications that authorise them to perform certain transactions. Consider that in the traditional mercantile world, the shopkeeper or sales assistant is typically a total stranger, but we know that consumer protection legislation, credit card agreements and big companies' reputations all keep us safe. So we don't actually "trust" most people we do business with at all. We don't have to.

There is an old Italian proverb that perfectly sums up most business:

It is good to trust, but it is better not to.

That should be the defining slogan of Internet sociology, not "no one knows you're a dog". If we weren't over-doing trust, the transition from real world to digital would not be so daunting, and the foundational concepts of identity would not need re-defining.

Posted in Trust, Privacy, Internet, Identity, Culture, Cloud

Smile! You're on Candid Apple

Apple is reported to have acquired the "Polar Rose" technology that allows photos to be tagged with names through automated facial recognition.

The iPhone FAQ site says:
Interesting uses for the technology include automatically tagging people in photos and recognizing FaceTime callers from contact information. As the photographs taken on the iPhone improve, various image analysis algorithms could also be used to automatically classify and organize photos by type or subject.
Apple's iPhoto currently recognizes faces in pictures for tagging purposes. It's possible Apple is looking to improve and expand this functionality. Polar Rose removed its free tagging services for Facebook and Flickr earlier this month, citing interest from larger companies in licensing their technology.

The privacy implications are many and varied. Fundamentally, such technology will see hitherto anonymous image data converted into personal information, at those informopolies like Google, Facebook and Apple which hold vast personal photo archives.

Facial recognition systems obviously need to be trained. Members will upload photos, name the people in the photos, and then have the algorithm run over other images in the database. So it seems that Apple (in this case) will have lists of the all-important bindings between biometric templates and names. What stops them running the algorithm, and creating new bindings, over any other images they happen to have in their databases? Apple has already shown a propensity to hang on to rich geolocation data generated by the iPhone, and a reluctance to specify what they intend to do with that data.

If facial recognition worked well, then the shady possibilities are immediately obvious. Imagine that I have been snapped in a total stranger's photo -- say some tourist on the Manly ferry -- and they've uploaded the image to a host of some sort. What if the host, or some third party data miner, runs the matching algorithm over the stranger's photo and recognises me in it? If they're quick, a cunning business might SMS me a free ice cream offer, seeing I'm heading towards the corso. Or they might work out I'm a visitor, because the day before I was snapped in Auckland, and they can start to fill in my travel profile.

This is probably sci-fi for now, because in fact, facial recognition doesn't work at all well when image capture conditions aren't tightly controlled. But this is no cause for complacency, for the very inaccuracy of the biometric method might make the privacy implications even worse.

To analyse this, as with any privacy assessment, we should start with information flows. Consider what's going on when a photo is uploaded to this kind of system. Say my friend Alice discloses to Apple that "manly ferry 11dec2010.jpg" is an image of Steve Wilson. Apple has then collected Personal Information about me, and they've done it indirectly, which under Australia's privacy regime is something they're supposed to inform me of as soon as practical.

Then Apple reduces the image to a biometric template, like "Steve Wilson sample 001.bio". The Australian Law Reform Commission has recommended that biometric data be treated as Sensitive Information, and that collection be subject to express consent. That is, a company won't be allowed to collect facial recognition template data without getting permission first.

Setting aside that issue for a moment, consider what happens when later, someone runs the algorithm against a bunch of other images, and it generates some matches: e.g. "uluru 30jan2008.jpg"-is-an-image-of-Steve-Wilson. It doesn't actually matter whether the match is true or false: it's still a brand new piece of Personal Information about me, created and collected by Apple, without my knowledge.

I reckon both false matches and true matches satisfy the definition of Personal Information in the Australian Privacy Act, which includes "an opinion ... whether true or not".

Remember: The failures of biometrics often cause greater privacy problems than do their successes.

Posted in Social Media, Privacy, Identity, Cloud

Is the cloud sustainable?

The value proposition of cloud computing is basically that backend or server-side computing is somehow better than front-end or client side. History suggests that the net benefit tends to swing like a pendulum between front and back. I don't think cloud computing will last, for there is an inexorable trend towards the client. It seems people like to keep their computing close.

It's often said that cloud computing is not unlike time-slice computing of the 1960s. Or the network computers of the 1980s. These are telling comparisons. So what was the attraction of backend computing in past eras?

In the 1960s, hardware was fiercely expensive and few could afford more than dumb terminals. Moore's Law fixed that problem.

In the 1980s, it was software that was expensive. The basic value proposition of the classic Sun NetPC was that desktop apps from you-know-who were too costly. But software prices have dropped, and the Free and Open Source movements in a sense outcompeted the network computer.

In the current cycle I think the differences between front and back are more complex (as is the business environment) and there are a number of different reasons to shift once more to the backend. For consumers until recently, it had to do with the cost of storage; filesharing for photos and the like made sense while terabytes were unaffordable but already that has changed.

A good deal of cloud 'migration' is happening by stealth, with great new IT services having their origins in the cloud. I'm thinking of course of Facebook and its ilk. A generation seems to be growing up having never experienced fat client e-mail or building their own website; they aren't moving anything to the cloud; they have been born up there and have never experienced anything else. A fascinating dynamic is how Facebook is now trying to attract businesses.

For corporates, much of the benefit of cloud computing relates to compliance. In particular, security, PCI-DSS and data breach disclosure obligations are proving prohibitive for smaller organisations, and outsourcing their IT to cloud providers makes sense.

Yet compliance costs at present are artificially high and are bound to fall. The PCI regime for instance is proving to be a wild goose chase, which will end sooner or later when proper security measures are finally deployed to prevent replay of payment card numbers. Information security in general is expensive largely because our commodity PCs, appliances and desktop apps aren't so well engineered. This has to change -- even if it takes another decade -- and when it does, the safety margin of outsourcing services will drop, and once again, people will probably prefer to do their computing closer to home.

Still, if cloud computing provides corporates with lower compliance costs for another ten years, then that will be a pretty good trot.

Posted in Security, Privacy, Cloud