The identerati sometimes refer to the challenge of “binding carbon to silicon”. That’s a poetic way of describing how the field of Identity and Access Management (IDAM) is concerned with associating carbon-based life forms (as geeks fondly refer to people) with computers (or silicon chips).
To securely bind users’ identities or attributes to their computerised activities is indeed a technical challenge. In most conventional IDAM systems, there is only circumstantial evidence of who did what and when, in the form of access logs and audit trails, most of which can be tampered with or counterfeited by a sufficiently determined fraudster. To create a lasting, tamper-resistant impression of what people do online requires some sophisticated technology (in particular, digital signatures created using hardware-based cryptography).
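The tamper-resistant impression described above can be sketched in a few lines. This is a toy illustration only: a shared-secret HMAC stands in for the hardware-based digital signature the text refers to, and the key name is invented. In a real deployment each log entry would be signed with a private key held in a secure element, so verifiers would need only the public key.

```python
import hashlib
import hmac

# Hypothetical key material; in practice this would live in hardware.
SECRET = b"demo-key-held-in-hardware"

def seal(entry: str) -> tuple[str, str]:
    """Attach an integrity tag to an audit log entry."""
    tag = hmac.new(SECRET, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def verify(entry: str, tag: str) -> bool:
    """Check that an entry has not been altered since it was sealed."""
    expected = hmac.new(SECRET, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

entry, tag = seal("2015-09-01T10:02:11Z alice approved payment #4471")
assert verify(entry, tag)                               # intact entry checks out
assert not verify(entry.replace("alice", "bob"), tag)   # tampering is detected
```

The point of the sketch is the asymmetry it creates: an attacker who can edit the log can no longer forge a matching tag, which is what lifts the record above mere circumstantial evidence.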
On the other hand, working out looser associations between people and computers is the stock-in-trade of social networking operators and Big Data analysts. So many signals are emitted as a side effect of routine information processing today that even the shyest of users may be uncovered by third parties with sufficient analytics know-how and access to data.
So privacy is in peril. For the past two years, big data breaches have only got bigger: witness the losses at Target (110 million records), eBay (145 million), Home Depot (109 million) and JPMorgan Chase (83 million), to name a few. Breaches have got deeper, too. Most notably, in June 2015 the U.S. federal government’s Office of Personnel Management (OPM) revealed it had been hacked, with the loss of detailed background profiles on 15 million past and present employees.
I see a terrible systemic weakness in the standard practice of information security. Look at the OPM breach: what was going on that led to application forms for employees dating back 15 years remaining in a database accessible from the Internet? What was the real need for this availability? Instead of relying on firewalls and access policies to protect valuable data from attack, enterprises need to review which data needs to be online at all.
We urgently need to reduce the exposed attack surface of our information assets. But in the information age, the default has become to make data as available as possible. This liberality is driven both by the convenience of having all possible data on hand, just in case it might be handy one day, and by the plummeting cost of mass storage. But it's also the result of a technocratic culture that knows "knowledge is power," and gorges on data.
In communications theory, Metcalfe’s Law states that the value of a network is proportional to the square of the number of devices that are connected. This is an objective mathematical reality, but technocrats have transformed it into a moral imperative. Many think it axiomatic that good things come automatically from inter-connection and information sharing; that is, the more connection the better. Openness is an unexamined rallying call for both technology and society. “Publicness” advocate Jeff Jarvis wrote (admittedly provocatively) that: “The more public society is, the safer it is”. And so a sort of forced promiscuity is shaping up as the norm on the Internet of Things. We can call it "superconnectivity", with a nod to the special state of matter where electrical resistance drops to zero.
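Metcalfe's observation can be made concrete with a little arithmetic. A network of n nodes has n(n-1)/2 potential pairwise links, which is why nominal "value" grows roughly with the square of n; a minimal sketch:

```python
# Metcalfe's Law: a network of n connected devices has n*(n-1)/2
# potential pairwise links, so its nominal value grows roughly as n^2.
def potential_links(n: int) -> int:
    return n * (n - 1) // 2

# Doubling the number of devices roughly quadruples the connections:
assert potential_links(10) == 45
assert potential_links(20) == 190   # about 4x, not 2x
```

The arithmetic is uncontroversial; it is the leap from "more links" to "more good", as the paragraph above argues, that deserves scrutiny.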
In thinking about privacy on the IoT, a key question is this: how much of the data emitted from Internet-enabled devices will actually be personal data? If great care is not taken in the design of these systems, the unfortunate answer will be most of it.
My latest investigation into IoT privacy uses the example of the Internet connected motor car. "Rationing Identity on the Internet of Things" will be released soon by Constellation Research.
And don't forget Constellation's annual innovation summit, Connected Enterprise at Half Moon Bay outside San Francisco, November 4th-6th. Early bird registration closes soon.
In the latest course of a 15-month security feast, BlackBerry has announced it is acquiring mobile device management (MDM) provider Good Technology. The deal is said to be definitive, for US$425 million in cash.
As BlackBerry boldly re-positions itself as a managed service play in the Internet of Things, adding an established MDM capability to its portfolio will bolster its claim -- which still surprises many -- to be handset neutral. But the Good buy is much more than that. It has to be seen in the context of John Chen's drive for cross-sector security and privacy infrastructure for the IoT.
As I reported from the recent BlackBerry Security Summit in New York, the company has knitted together a comprehensive IoT security fabric. Look at how they paint their security platform:
And see how Good will slip neatly into the Platform Services column. It's the latest in what is now a $575 million investment in non-organic security growth (following purchases of Secusmart, Watchdox, Movirtu and Athoc).
According to BlackBerry,
- Good will bring complementary capabilities and technologies to BlackBerry, including secure applications and containerization that protects end user privacy. With Good, BlackBerry will expand its ability to offer cross-platform EMM solutions that are critical in a world with varying deployment models such as bring-your-own-device (BYOD); corporate owned, personally enabled (COPE); as well as environments with multiple user interfaces and operating systems. Good has expertise in multi-OS management with 64 percent of activations from iOS devices, followed by a broad Android and Windows customer base.(1) This experience combined with BlackBerry’s strength in BlackBerry 10 and Android management – including Samsung KNOX-enabled devices – will provide customers with increased choice for securely deploying any leading operating system in their organization.
The strategic acquisition of Good Technology will also give the Identity-as-a-Service sector a big kick. IDaaS has become a crowded space, with at least ten vendors (CA, Centrify, IBM, Microsoft, Okta, OneLogin, Ping, SailPoint, Salesforce, VMware) competing strongly around a pretty well settled set of features and functions. BlackBerry themselves launched an IDaaS a few months ago. At the Security Summit, I asked their COO Marty Beard what is going to distinguish their offering in such a tight market, and he said, simply, mobility. Presto!
But IDaaS is set to pivot. We all know that mobility is now the locus of security, and we've seen VMware parlay its AirWatch investment into a competitive new cloud identity service. With so many entrenched IDaaS vendors, this must be more than a catch-up play.
Here's the thing. I foresee identity actually disappearing from the user experience, which more and more will just be about the apps. I discussed this development in a really fun "Identity Innovators" video interview recorded with Ping at the recent Cloud Identity Summit. For identity to become seamless with the mobile application UX, we need two things. Firstly, federation protocols so that different pieces of software can hand over attributes and authentication signals to one another, and these are all in place now. But secondly we also need fully automated mobile device management as a service, and that's where Good truly fits with the growing BlackBerry platform.
Now stay tuned for new research coming soon via Constellation on the Internet of Things, identity, privacy and software reliability.
See also The State of Identity Management in 2015.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. For years, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is and is not doing about it. A few weeks ago, statistics for calendar year 2014 came out.
As we reported last time, despite APCA's optimistic boosting of 3D Secure and education measures for many years, Card Not Present (CNP) online fraud was not falling as hoped. And what we see now in the latest numbers is the second biggest jump in CNP fraud ever! CY 2014 online card fraud losses were very nearly AU$300M, up 42% in 12 months.
Again, APCA steadfastly rationalises in its press release (PDF) that high losses simply reflect the popularity of online shopping. That's cold comfort to the card holders and merchants who are affected.
APCA has a love-ignore relationship with 3D Secure. This is one of the years when 3D Secure goes unmentioned. Instead the APCA presser talks up tokenization, I think for the first time. Yet the payments industry has had tokenization for about a decade. It's just another band-aid over the one fundamental crack in the payment card system: nothing stops stolen card numbers being replayed.
A proper fix for replay attack is easily within reach, one which would re-use the same cryptography that solves skimming and carding, and would restore a seamless payment experience for card holders. See my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).
The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry is still yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.
This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.
With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones - the exemplar being Apple Pay for Card Present business - it beggars belief that we have yet to modernise CNP payments for web and mobile shopping.
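The essence of the anti-replay argument above can be sketched in code. This is an illustrative toy, not the EMV protocol: the key handling and names are invented, and an HMAC stands in for the chip's cryptogram function. What matters is that the card signs the transaction details together with a transaction counter, so a captured cryptogram is worthless when replayed.

```python
import hashlib
import hmac

# Hypothetical per-card key, which in reality never leaves the chip.
CARD_KEY = b"secret-inside-the-chip"

def cryptogram(pan: str, amount_cents: int, counter: int) -> str:
    """EMV-style dynamic cryptogram over the transaction details."""
    msg = f"{pan}|{amount_cents}|{counter}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()

seen_counters: set[int] = set()

def issuer_accepts(pan: str, amount_cents: int, counter: int, mac: str) -> bool:
    """Issuer-side check: valid cryptogram, and counter never seen before."""
    if counter in seen_counters:  # replayed transaction
        return False
    if not hmac.compare_digest(cryptogram(pan, amount_cents, counter), mac):
        return False
    seen_counters.add(counter)
    return True

mac = cryptogram("4111111111111111", 5000, counter=1)
assert issuer_accepts("4111111111111111", 5000, 1, mac)      # first use: approved
assert not issuer_accepts("4111111111111111", 5000, 1, mac)  # replay: rejected
```

A stolen database of card numbers is of little use against such a scheme, because the attacker does not hold the card's key and cannot mint fresh cryptograms.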
On July 23, BlackBerry hosted its second annual Security Summit, once again in New York City. As with last year’s event, this was a relatively intimate gathering of analysts and IT journalists, brought together for the lowdown on BlackBerry’s security and privacy vision.
By his own account, CEO John Chen has met plenty of scepticism over his diverse and, some say, chaotic product and services portfolio. And yet it’s beginning to make sense. There is a strong credible thread running through Chen’s initiatives. It all has to do with the Internet of Things.
Disclosure: I traveled to the BlackBerry Security Summit as a guest of BlackBerry, which covered my transport and accommodation.
The Growth Continues
In 2014, John Chen opened the show with the announcement he was buying the German voice encryption firm Secusmart. That acquisition appears to have gone well for all concerned; they say nobody has left the new organisation in the 12 months since. News of BlackBerry’s latest purchase - of crisis communications platform AtHoc - broke a few days before this year’s Summit, and it was only the most recent addition to the family. In the past 12 months, BlackBerry has been busy spending $150M on inorganic growth, picking up:
Chen has also overseen an additional $100M expenditure in the same timeframe on organic security expansion (over and above baseline product development). Amongst other things BlackBerry has:
The Growth Explained - Secure Mobile Communications
Executives from different business units and different technology horizontals all organised their presentations around what is now a comprehensive security product and services matrix. It looks like this (before adding AtHoc):
BlackBerry is striving to lead in Secure Mobile Communications. In that context the highlights of the Security Summit for mine were as follows.
The Internet of Things
BlackBerry’s special play is in the Internet of Things. It’s the consistent theme that runs through all their security investments, because as COO Marty Beard says, IoT involves a lot more than machine-to-machine communications. It’s more about how to extract meaningful data from unbelievable numbers of devices, with security and privacy. That is, IoT for BlackBerry is really a security-as-a-service play.
Chief Security Officer David Kleidermacher repeatedly stressed the looming challenge of “how to patch and upgrade devices at scale”.
- MyPOV: Functional upgrades for smart devices will of course be part and parcel of IoT, but at the same time, we need to work much harder to significantly reduce the need for reactive security patches. I foresee an angry consumer revolt if things that never were computers start to behave and fail like computers. A radically higher standard of quality and reliability is required. Just look at the Jeep Uconnect debacle, where it appears Chrysler eventually thought better of foisting a patch on car owners and instead opted for a much more expensive vehicle recall. It was BlackBerry’s commitment to ultra high reliability software that really caught my attention at the 2014 Security Summit, and it convinces me they grasp what’s going to be required to make ubiquitous computing properly seamless.
Refreshingly, COO Beard preferred to talk about economic value of the IoT, rather than the bazillions of devices we are all getting a little jaded about. He said the IoT would bring about $4 trillion of required technology within a decade, and that the global economic impact could be $11 trillion.
BlackBerry’s real time operating system QNX is in 50 million cars today.
AtHoc is a secure crisis communications service, with its roots in the first responder environment. It’s used by three million U.S. government workers today, and the company is now pushing into healthcare.
Founder and CEO Guy Miasnik explained that emergency communications involves more than just outbound alerts to people dealing with disasters. Critical to crisis management is the secure inbound collection of information from remote users. AtHoc is also not just about data transmission (as important as that is); it works at the application layer too, enabling sophisticated workflow management. This allows procedures to be defined for certain events, for example, guiding sets of users and devices through expected responses and escalating issues if things don’t get done as expected.
We heard more about BlackBerry’s collaboration with Oxford University on the Centre for High Assurance Computing Excellence, first announced in April at the RSA Conference. CHACE is concerned with a range of fundamental topics, including formal methods for verifying program correctness (an objective that resonates with BlackBerry’s secure operating system division QNX) and new security certification methodologies, with technical approaches based on the Common Criteria of ISO 15408 but with more agile administration to reduce that standard’s overhead and infamous rigidity.
CSO Kleidermacher announced that CHACE will work with the Diabetes Technology Society on a new healthcare security standards initiative. The need for improved medical device security was brought home vividly by an enthralling live demonstration of hacking a hospital drug infusion pump. These vulnerabilities have been exposed before at hacker conferences but BlackBerry’s demo was especially clear and informative, and crafted for a non-technical executive audience.
- MyPOV: The message needs to be broadcast loud and clear: there are life-critical machines in widespread use, built on commercial computing platforms, without any careful thought for security. It’s a shameful and intolerable situation.
I was impressed by BlackBerry’s privacy line. It's broader and more sophisticated than most security companies, going way beyond the obvious matters of encryption and VPNs. In particular, the firm champions identity plurality. For instance, WorkLife by BlackBerry, powered by Movirtu technology, realizes multiple identities on a single phone. BlackBerry is promoting this capability in the health sector especially, where there is rarely a clean separation of work and life for professionals. Chen said he wants to “separate work and private life”.
The health sector in general is one of the company’s two biggest business development priorities (the other being automotive). In addition to sophisticated telephony like virtual SIMs, they plan to extend AtHoc into healthcare messaging, and have tasked the CHACE think-tank with medical device security. These actions complement BlackBerry’s fine words about privacy.
So BlackBerry’s acquisition plan has gelled. It now has perhaps the best secure real time OS for smart devices, a hardened device-independent Mobile Device Management backbone, new data-centric privacy and rights management technology, remote certificate management, and multi-layered emergency communications services that can be diffused into mission-critical rules-based e-health settings and, eventually, automated M2M messaging. It’s a powerful portfolio that makes strong sense in the Internet of Things.
BlackBerry says IoT is 'much more than device-to-device'. It’s more important to be able to manage secure data emitted from ubiquitous devices in enormous volumes, and to service those things – and their users – seamlessly. For BlackBerry, the Internet of Things is really all about the service.
Identity online is a vexed problem. The majority of Internet fraud today can be related to weaknesses in the way we authenticate people electronically. Internet identity is terribly awkward too. Unfortunately today we still use password techniques dating back to 1960s mainframes that were designed for technicians, by technicians.
Our identity management problems also stem from over-reach. For one thing, the information era heralded new ways to reach and connect with people, with almost no friction. We may have taken too literally the old saw “information wants to be free.” Further, traditional ways of telling who people are, through documents and “old boys networks”, create barriers, which are anathema to new school Internet thinkers.
For the past 10-to-15 years, a heady mix of ambitions has informed identity management theory and practice: improve usability, improve security and improve “trust.” Without ever pausing to unravel the rainbow, the identity and access management industry has created grandiose visions of global “trust frameworks” to underpin a utopia of seamless stranger-to-stranger business and life online.
Well-resourced industry consortia and private-public partnerships have come and gone over the past decade or more. Numerous “trust” start-up businesses have launched and failed. Countless new identity gadgets, cryptographic algorithms and payment schemes have been tried.
And yet the identity problem is still with us. Why is identity online so strangely resistant to these well-meaning efforts to fix it? In particular, why is federated identity so dramatically easier said than done?
Identification is a part of risk management. In business, service providers use identity to manage the risk that they might be dealing with the wrong person. Different transactions carry different risks, and identification standards are varied accordingly. Conversely, if a provider cannot be sure enough who someone is, they now have the tools to withhold or limit their services. For example, when an Internet customer signs in from an unusual location, payment processors can put a cap on the dollar amounts they will authorize.
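The dollar-cap example above is a simple instance of risk-proportionate authorisation, and can be sketched in a few lines. The function name and the cap values here are invented for illustration; real processors use far richer risk signals than a single location flag.

```python
# Hypothetical sketch of risk-proportionate authorisation: when a sign-in
# looks unusual, the processor doesn't refuse service outright; it simply
# lowers the amount it is willing to authorise.
def authorised_limit(amount_cents: int, usual_location: bool) -> int:
    # Illustrative caps: $100,000 for a familiar context, $200 otherwise.
    cap = 10_000_000 if usual_location else 20_000
    return min(amount_cents, cap)

assert authorised_limit(50_000, usual_location=True) == 50_000   # $500 approved in full
assert authorised_limit(50_000, usual_location=False) == 20_000  # capped at $200
```

The design point is that identification feeds a graded risk decision rather than a binary yes/no, which matches how different transactions carry different risks.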
Across our social and business walks of life, we have distinct ways of knowing people, which yield a rich array of identities by which we know and show who we are to others. These identities have evolved over time to suit different purposes. Different relationships rest on different particulars, and so identities naturally become specific, not general.
The human experience of identity is one of ambiguity and contradictions. Each of us simultaneously holds a weird and wonderful ensemble of personal, family, professional and social identities. Each is different, sometimes radically so. Some of us lead quite secret lives, and I’m not thinking of anything salacious, but maybe just the role-playing games that provide important escapes from the humdrum.
Most of us know how it feels when identities collide. There’s no better example than what I call the High School Reunion Effect: that strange dislocation you feel when you see acquaintances for the first time in decades. You’ve all moved on, you’ve adopted new personae in new contexts – not the least of which is the one defined by a spouse and your own new family. Yet you find yourself re-winding past identities, relating to your past contemporaries as you all once were, because it was those school relationships, now fossilised, that defined you.
Frankly, we’ve made a mess of the pivotal analogue-to-digital conversion of identity. In real life we know identity is malleable and relative, yet online we’ve rendered it crystalline and fragile.
We’ve come close to the necessary conceptual clarity. Some 10 years ago a network of “identerati” led by Kim Cameron of Microsoft composed the “Laws of Identity,” which contained a powerful formulation of the problem to be addressed. The Laws defined Digital Identity as “a set of claims made [about] a digital subject.”
Your Digital Identity is a proxy for a relationship, pointing to a suite of particulars that matter about you in a certain context. When you apply for a bank account, when you subsequently log on to Internet banking, when you log on to your work extranet, or to Amazon or PayPal or Twitter, or if you want to access your electronic health record, the relevant personal details are different each time.
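The "set of claims" formulation can be made tangible with a small data sketch. All the contexts, field names and values below are invented for illustration; the point is simply that each relying party sees a different, minimal claim set about the same person.

```python
# Identity as a set of claims: each relying party holds only the
# particulars that matter in its own context (values are invented).
claims_by_context = {
    "bank":          {"customer_no": "0045-7721", "account_type": "cheque"},
    "work_extranet": {"employee_id": "E1309", "role": "engineer"},
    "health_record": {"patient_id": "MRN-88210", "insurer": "ACME Health"},
}

# No single "identity" spans the contexts; the same person presents a
# different proxy relationship to each relying party.
assert claims_by_context["bank"].keys() != claims_by_context["work_extranet"].keys()
assert "patient_id" not in claims_by_context["bank"]
```

Seen this way, "who you are" online is always relative to a relationship, which is why the next section treats privacy as the flip side of the same coin.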
The flip side of identity management is privacy. If authentication concerns what a Relying Party needs to know about you, then privacy is all about what they don’t need to know. Privacy amounts to information minimization; security professionals know this all too well as the “Need to Know” principle.
All attempts at grand global identities to date have failed. The Big Certification Authorities of the 1990s reckoned a single, all-purpose digital certificate would meet the needs of all business, but they were wrong. Ever more sophisticated efforts since then have also failed, such as the Information Card Foundation, Liberty Alliance and the Australian banking sector’s Trust Centre.
Significantly, federation for non-trivial identities only works within regulatory monocultures – for example the US Federal Bridge CA, or the Scandinavian BankID network – where special legislation authorises banks and governments to identify customers by the one credential. The current National Strategy for Trusted Identities in Cyberspace has pondered legislation to manage liability but has balked. The regulatory elephant remains in the room.
As an aside, obviously social identities like Facebook and Twitter handles federate very nicely, but these are issued by organisations that don't really know who we are, and they're used by web sites that don't really care who we are; social identity federation is a poor model for serious identity management.
A promising identity development today is the Open Identity Exchange’s Attribute Exchange Network, a new architecture seeking to organise how identity claims may be traded. The Attribute Exchange Network resonates with a growing realization that, in the words of Andrew Nash, a past identity lead at Google and at PayPal, “attributes are at least as interesting as identities – if not more so.”
If we drop down a level and deal with concrete attribute data instead of abstract identities, we will start to make progress on the practical challenges in authentication: better resistance to fraud and account takeover, easier account origination and better privacy.
My vision is that by 2019 we will have a fresh marketplace of Attribute Providers. The notion of “Identity Provider” should die off, for identity is always in the eye of the Relying Party. What we need online is an array of respected authorities and agents that can vouch for our particulars. Banks can provide reliable electronic proof of our payment card numbers; government agencies can attest to our age and biographical details; and a range of private businesses can stand behind attributes like customer IDs, membership numbers and our retail reputations.
In five years’ time I expect we will adopt a much more precise language to describe how to deal with people online, and it will reflect more faithfully how we’ve transacted throughout history. As the old Italian proverb goes: it is nice to “trust” but it’s better not to.
This article first appeared as "Abandoning identity in favor of attributes" in Secure ID News, 2 December, 2014.
Bank robber Willie Sutton, when asked why he robbed banks, answered "That's where the money is". It's the same with breaches. Large databases are the targets of people who want data. It's that simple.
Having said that, there are different sorts of breaches, with different causes. Most high profile breaches are obviously driven by financial crime, where attackers typically grab payment card details. Breaches are what powers most card fraud. Organised crime gangs don't pilfer card numbers one at a time from people's computers or insecure websites, so the standard advice to consumers (change your passwords every month, look for the browser padlock) is nice, but don't expect it to do anything to stop mass card fraud.
Instead of blaming end user failings, we need to really turn up the heat on enterprise IT. The personal data held by big merchant organisations (including even mundane operations like car parking chains) is now worth many hundreds of millions of dollars. If this kind of value was in the form of cash or gold, you'd see Fort Knox-style security around it. Literally. But how much money does even the biggest enterprise invest in security? And what do they get for their money?
The grim reality is that no amount of conventional IT security today can prevent attacks on assets worth billions of dollars. The simple economics is against us. It's really more a matter of luck than good planning that some large organisations have yet to be breached (and that's only so far as we know).
Organised crime is truly organised. If it's card details they want, they go after the big data stores, at payments processors and large retailers. The sophistication of these attacks is amazing even to security pros. The attack on Target's Point of Sale terminals for instance was in the "can't happen" category.
The other types of criminal breach include mischief, as when the iCloud photos of celebrities were leaked last year, hacktivism, and political or cyber terrorist attacks, like the one on Sony.
There's some evidence that identity thieves are turning now to health data to power more complex forms of crime. Instead of stealing and replaying card numbers, identity thieves can use deeper, broader information like patient records to either commit fraud against health system payers, or to open bogus accounts and build them up into complex scams. The recent Anthem database breach involved extensive personal records on 80 million individuals; we have yet to see how these details will surface in the identity black markets.
The ready availability of stolen personal data is one factor we find to be driving Identity and Access Management (IDAM) innovation; see "The State of Identity Management in 2015". Next generation IDAM will eventually make stolen data less valuable, but for the foreseeable future, all enterprises holding large customer datasets will remain prime targets for identity thieves.
Now let's not forget simple accidents. The Australian government for example has had some clangers, though these can happen to any big organisation. A few months ago a staffer accidentally attached the wrong file to an email, and thus released the passport details of the G20 leaders. Before that, we saw a spreadsheet holding personal details of thousands of asylum seekers mistakenly pasted into the HTML of a government website.
A lesson I want to bring out here is the terrible complexity and fragility of our IT systems. It doesn't take much for human error to have catastrophic results. Who among us has not accidentally hit 'Reply All' or attached the wrong file to an email? If you did an honest Threat & Risk Assessment on these sorts of everyday office systems, you'd have to conclude they are not safe to handle sensitive data nor to be operated by most human beings. But of course we simply can't afford not to use office IT. We've created a monster.
Again, criminal elements know this. The expert cryptographer Bruce Schneier once said "amateurs hack systems, professionals hack people". Access control on today's sprawling complex computer systems is generally poor, leaving the way open for inside jobs. Just look at the Chelsea Manning case, one of the worst breaches of all time, made possible by granting too high access privileges to too many staffers.
Outside government, access control is worse, and so is access logging - so system administrators often can't tell there's even been a breach until circumstantial evidence emerges. I am sure the majority of breaches are occurring without anyone knowing. It's simply inevitable.
Look at hotels. There are occasional reports of hotel IT breaches, but they are surely happening continuously. The guest details held by hotels are staggering: payment card details, license plates, travel itineraries including airline flight details, even passport numbers at some places. And these days, with global hotel chains, the whole booking database is available to a rogue employee from any place in the world, 24-7.
Please, don't anyone talk to me about PCI-DSS! The Payment Card Industry Data Security Standards for protecting cardholder details haven't had much effect at all. Some of the biggest breaches of all time have affected top tier merchants and payments processors which appear to have been PCI compliant. Yet the lawyers for the payments institutions will always argue that such-and-such a company wasn't "really" compliant. And the PCI auditors walk away from any liability for what happens in between audits. You can understand their position; they don't want to be accountable for wrongdoing or errors committed behind their backs. However, cardholders and merchants are caught in the middle. If a big department store passes its PCI audits, surely we can expect it to be reasonably secure year-round? No, it turns out that the day after a successful audit, an IT intern can mis-configure a firewall or forget a patch; all those defences become useless, and the audit is rendered meaningless.
Which reinforces my point about the fragility of IT: it's impossible to make lasting security promises anymore.
In any case, PCI is really just a set of data handling policies and promises. They improve IT security hygiene, and ward off amateur attacks. But they are useless against organised crime or inside jobs.
There is an increasingly good argument to outsource data management. Rather than maintain brittle databases in the face of so much risk, companies are instead turning to large reputable cloud services, where the providers have the scale, resources and attention to detail to protect data in their custody. I previously looked at what matters in choosing cloud services from a geographical perspective in my Constellation Research report "Why Cloud Geography Matters in a Post-Snowden/NSA Era". And in forthcoming research I'll examine a broader set of contract-related KPIs to help buyers make the right choice of cloud service provider.
If you asked me what to do about data breaches, I'd say the short-to-medium term solution is to get with the strength and look for managed security services from specialist providers. In the longer term, we will have to see grassroots re-engineering of our networks and platforms, to harden them against penetration, and to lessen the opportunity for identity theft.
In the meantime, you can hope for the best, if you plan for the worst.
Actually, no, you can't hope.
The Australian government is to revamp the troubled Personally Controlled Electronic Health Record (PCEHR). In line with the Royle Review from Dec 2013, it is reported that patient participation is to change from the current Opt-In model to Opt-Out; see "Govt to make e-health records opt-out" by Paris Cowan, IT News.
That is to say, patient data from hospitals, general practice, pathology and pharmacy will be added by default to a central longitudinal health record, unless patients take steps (yet to be specified) to disable sharing.
The main reason for switching the consent model is simply to increase the take-up rate. But it's a much bigger change than many seem to realise.
The government is asking the community to trust it to hold essentially all medical records. Are the PCEHR's security and privacy safeguards up to scratch to take on this grave responsibility? I argue the answer is no, on two grounds.
Firstly, there is the practical matter of the PCEHR's security performance to date. It's not good, going by publicly available information. On multiple occasions, prescription details have been uploaded from community pharmacies to the wrong patient's records. Various excuses have been made for these errors, with blame sheeted home to the pharmacies. But from a systems perspective -- and health care is all about systems -- you cannot pass the buck like that. Pharmacists are using a PCEHR system that was purportedly designed for them, and one that was subject to system-wide threat & risk assessments that informed the architecture and design of not just the electronic records system but also the patient and healthcare provider identification modules. How can the PCEHR allow such basic errors to occur?
Secondly and really fundamentally, you simply cannot invert the consent model as if it's a switch in the software. The privacy approach is deep in the DNA of the system. Not only must PCEHR security be demonstrably better than experience suggests, but it must be properly built in, not retrofitted.
Let me explain how the consent approach crops up deep in the architecture of something like PCEHR. During analysis and design, threat & risk assessments (TRAs) and privacy impact assessments (PIAs) are undertaken, to identify things that can go wrong, and to specify security and privacy controls. These controls generally comprise a mix of technology, policy and process mechanisms. For example, if there is a risk of patient data being sent to the wrong person or system, that risk can be mitigated a number of ways, including authentication, user interface design, encryption, contracts (that obligate receivers to act responsibly), and provider and patient information. The latter are important because, as we all should know, there is no such thing as perfect security. Mistakes are bound to happen.
One of the most fundamental privacy controls is participation. Individuals usually have the ultimate option of staying away from an information system if they (or their advocates) are not satisfied with the security and privacy arrangements. Now, these are complex matters to evaluate, and it's always best to assume that patients do not in fact have a complete understanding of the intricacies, the pros and cons, and the net risks. People need time and resources to come to grips with e-health records, so a default opt-in affords them that breathing space. And it errs on the side of caution, by requiring a conscious decision to participate. In stark contrast, a default opt-out policy embodies a position that the scheme operator believes it knows best, and is prepared to make the decision to participate on behalf of all individuals.
Such a position strikes many as beyond the pale, just on principle. But if opt-out is the adopted policy position, then clearly it has to be based on a risk assessment where the pros indisputably outweigh the cons. And this is where making a late switch to opt-out is unconscionable.
You see, in an opt-in system, whenever analysis and design identify a risk that cannot be managed down to negligible levels by technology and process, the ultimate safety net is that people don't have to use the PCEHR at all. Falling back on opt-in participation is a formal risk management move, part of the risk manager's toolkit. In an opt-in system, patients sign an agreement in which they knowingly accept some residual risk, and the whole security design is predicated on that.
Look at the most recent PIA done on the PCEHR in 2011; section 9.1.6 "Proposed solutions - legislation" makes it clear that opt-in participation is core to the existing architecture. The PIA makes a "critical legislative recommendation" including:
- a number of measures to confirm and support the 'opt in' nature of the PCEHR for consumers (Recommendations 4.1 to 4.3) [and] preventing any extension of the scope of the system, or any change to the 'opt in' nature of the PCEHR.
The PIA at section 2.2 also stresses that a "key design feature of the PCEHR System ... is opt in – if a consumer or healthcare provider wants to participate, they need to register with the system." And that the PCEHR is "not compulsory – both consumers and healthcare providers choose whether or not to participate".
A PDF copy of the PIA report, which was publicly available at the Dept of Health website for a few years after 2011, is archived here.
The fact is that if the government changes the PCEHR from opt-in to opt-out, it will invalidate the security and privacy assessments done to date. The PIAs and TRAs will have to be repeated, and the project must be prepared for major redesign.
The Royle Review report (PDF) did in fact recommend "a technical assessment and change management plan for an opt-out model ..." (Recommendation 14) but I am not aware that such a review has taken place.
To look at the seriousness of this another way, think about "Privacy by Design", the philosophy that's being steadily adopted across government. In 2014 NEHTA wrote in a submission (PDF) to the Australian Privacy Commissioner:
- The principle that entities should employ “privacy by design” by building privacy into their processes, systems, products and initiatives at the design stage is strongly supported by NEHTA. The early consideration of privacy in any endeavour ensures that the end product is not only compliant but meets the expectations of stakeholders.
One of the tenets of Privacy by Design is that you cannot bolt on privacy after a design is done. Privacy must be designed into the fabric of any system from the outset. All the way along, PCEHR has assumed opt-in, and the last PIA enshrined that position.
If the government was to ignore its own Privacy by Design credo, and not revisit the PCEHR architecture, it would be an amazing breach of the public's trust in the healthcare system.
Every now and then, a large organisation in the media spotlight will experience the special pain of having a password accidentally revealed in the background of a photograph or TV spot. Security commentator Graham Cluley has recorded a lot of these misadventures, most recently at a British national rail control room, and before that, in the Superbowl nerve centre and an emergency response agency.
Security folks love their schadenfreude but what are we to make of these SNAFUs? Of course, nobody is perfect. And some plumbers have leaky taps.
But these cases hold much deeper lessons. These are often critical infrastructure providers (consider that on financial grounds, there may be more at stake in Superbowl operations than the railways). The outfits making kindergarten security mistakes will have been audited many times over. So how on earth do they pass?
Posting passwords on the wall is not a random error - it's systemic. Some administrators do it out of habit, or desperation. They know it's wrong, but they do it anyway, and they do it with such regularity it gets caught on TV.
I really want to know: did none of the security auditors at any of these organisations ever notice the passwords in plain view? Or do the personnel do a quick clean-up on the morning of each audit, only to revert to reality in between? Either way, here's yet more proof that security audit, frankly, is a sick joke, and that security policies aren't worth the paper they're printed on.
Security orthodoxy holds that people and process are more fundamental than technology, and that people are the weakest link. That's why we have security management processes and security audits. It's why whole industries have been built around security process standards like ISO 27000. So it's unfathomable to me that companies with passwords caught on camera can have ever passed their audits.
Security isn't what people think it is. Instead of meticulous procedures and hawk-eyed inspections, too often it's just simple people going through the motions. Security isn't intellectually secure. The things we do in the name of "security" don't make us secure.
Let's not dismiss password flashing as a temporary embarrassment for some poor unfortunates. This should be humiliating for the whole information security industry. We need another way.
Picture credits: Graham Cluley.
The State Of Identity Management in 2015
Constellation Research recently launched the "State of Enterprise Technology" series of research reports. These assess the current enterprise innovations which Constellation considers most crucial to digital transformation, and provide snapshots of the future usage and evolution of these technologies.
My second contribution to the state-of-the-state series is "Identity Management Moves from Who to What". Here's an excerpt from the report:
In spite of all the fuss, personal identity is not usually important in routine business. Most transactions are authorized according to someone’s credentials, membership, role or other properties, rather than their personal details. Organizations actually deal with many people in a largely impersonal way. People don’t often care who someone really is before conducting business with them. So in digital Identity Management (IdM), one should care less about who a party is than what they are, with respect to attributes that matter in the context we’re in. This shift in focus is coming to dominate the identity landscape, for it simplifies a traditionally multi-disciplined problem set. Historically, the identity management community has made too much of identity!
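The shift from "who" to "what" can be made concrete with a small sketch of attribute-based authorization: access is granted when a subject's verified attributes satisfy a policy, with no personal identifiers involved. This is a minimal illustration written for this article; the names (`Attribute`, `authorize`) are hypothetical and not from any real library.

```python
# Hypothetical sketch: authorization driven by attributes, not personal identity.
from dataclasses import dataclass

@dataclass(frozen=True)
class Attribute:
    name: str
    value: str

def authorize(presented: set, required: set) -> bool:
    """Grant access if the presented attributes satisfy the policy,
    without ever needing to know who the subject personally is."""
    return required <= presented

# A pharmacist's verified credentials, with no personal identifiers at all
presented = {
    Attribute("role", "pharmacist"),
    Attribute("registration", "current"),
}
policy = {Attribute("role", "pharmacist")}

print(authorize(presented, policy))  # True: the role matters, the identity does not
```

The point of the sketch is that the policy never mentions a name or an account: swap in a different pharmacist with the same attributes and the decision is unchanged.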
Six Digital Identity Trends for 2015
1. Mobile becomes the center of gravity for identity. The mobile device brings convergence for a decade of progress in IdM. For two-factor authentication, the cell phone is its own second factor, protected against unauthorized use by PIN or biometric. Hardly anyone ever goes anywhere without their mobile - service providers can increasingly count on that without disenfranchising many customers. Best of all, the mobile device itself joins authentication to the app, intimately and seamlessly, in the transaction context of the moment. And today’s phones have powerful embedded cryptographic processors and key stores for accurate mutual authentication, and mobile digital wallets, as Apple’s Tim Cook highlighted at the recent White House Cyber Security Summit.
2. Hardware is the key – and holds the keys – to identity. Despite the lure of the cloud, hardware has re-emerged as pivotal in IdM. All really serious security and authentication takes place in secure dedicated hardware, such as SIM cards, ATMs, EMV cards, and the new Trusted Execution Environment mobile devices. Today’s leading authentication initiatives, like the FIDO Alliance, are intimately connected to standard cryptographic modules now embedded in most mobile devices. Hardware-based identity management has arrived just in the nick of time, on the eve of the Internet of Things.
3. The “Attributes Push” will shift how we think about identity. In the words of Andrew Nash, CEO of Confyrm Inc. (and previously the identity leader at PayPal and Google), “Attributes are at least as interesting as identities, if not more so.” Attributes are to identity as genes are to organisms – they are really what matters about you when you’re trying to access a service. By fractionating identity into attributes and focusing on what we really need to reveal about users, we can enhance privacy while automating more and more of our everyday transactions.
The Attributes Push may recast social logon. Until now, Facebook and Google have been widely tipped to become “Identity Providers”, but even these giants have found federated identity easier said than done. A dark horse in the identity stakes – LinkedIn – may take the lead with its superior holdings in verified business attributes.
4. The identity agenda is narrowing. For 20 years, brands and organizations have obsessed about who someone is online. And even before we’ve solved the basics, we over-reached. We've seen entrepreneurs trying to monetize identity, and identity engineers trying to convince conservative institutions like banks that “Identity Provider” is a compelling new role in the digital ecosystem. Now at last, the IdM industry agenda is narrowing toward more achievable and more important goals - precise authentication instead of general identification.
5. A digital identity stack is emerging. The FIDO Alliance and others face a challenge in shifting and improving the words people use in this space. Words, of course, matter, as do visualizations. IdM has suffered for too long under loose and misleading metaphors. One of the most powerful abstractions in IT was the OSI networking stack. A comparable sort of stack may be emerging in IdM.
6. Continuity will shape the identity experience. Continuity will make or break the user experience as the lines blur between real world and virtual, and between the Internet of Computers and the Internet of Things. But at the same time, we need to preserve clear boundaries between our digital personae, or else privacy catastrophes await. “Continuous” (also referred to as “Ambient”) Authentication is a hot new research area, striving to provide more useful and flexible signals about the instantaneous state of a user at any time. There is an explosion in devices now that can be tapped for Continuous Authentication signals, and by the same token, rich new apps in health, lifestyle and social domains, running on those very devices, that need seamless identity management.
A snapshot of my report "Identity Moves from Who to What" is available for download at Constellation Research. It expands on the points above, and sets out recommendations for enterprises to adopt the latest identity management thinking.
I have just updated my periodic series of research reports on the FIDO Alliance. The fourth report, "FIDO Alliance Update: On Track to a Standard" is available at Constellation Research (for free for a time).
The Identity Management industry leader publishes its protocol specifications at v1.0, launches a certification program, and attracts support in Microsoft Windows 10.
The FIDO Alliance is the fastest-growing Identity Management (IdM) consortium we have seen. Comprising technology vendors, solutions providers, consumer device companies, and e-commerce services, the FIDO Alliance is working on protocols and standards to strongly authenticate users and personal devices online. With a fresh focus and discipline in this traditionally complicated field, FIDO envisages simply “doing for authentication what Ethernet did for networking”.
Launched in early 2013, the FIDO Alliance has now grown to over 180 members. Included are technology heavyweights like Google, Lenovo and Microsoft; almost every SIM and smartcard supplier; payments giants Discover, MasterCard, PayPal and Visa; several banks; and e-commerce players like Alibaba and Netflix.
FIDO is radically different from any IdM consortium to date. We all know how important it is to fix passwords: They’re hard to use, inherently insecure, and lie at the heart of most breaches. The Federated Identity movement seeks to reduce the number of passwords by sharing credentials, but this invariably confounds the relationships we have with services and complicates liability when more parties rely on fewer identities.
In contrast, FIDO’s mission is refreshingly clear: Take the smartphones and devices most of us are intimately connected to, and use the built-in cryptography to authenticate users to services. A registered FIDO-compliant device, when activated by its user, can send verified details about the device and the user to service providers, via standardized protocols. FIDO leverages the ubiquity of sophisticated handsets and the tidal wave of smart things. The Alliance focuses on device level protocols without venturing to change the way user accounts are managed or shared.
The centerpieces of FIDO’s technical work are two protocols, called UAF and U2F, for exchanging verified authentication signals between devices and services. Several commercial applications have already been released under the UAF and U2F specifications, including fingerprint-based payments apps from Alibaba and PayPal, and Google’s Security Key from Yubico. After a rigorous review process, both protocols are published now at version 1.0, and the FIDO Certified Testing program was launched in April 2015. And Microsoft announced that FIDO support would be built into Windows 10.
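The shape of that exchange -- server issues a fresh challenge, the device signs it locally after the user unlocks it, the server verifies against the registered key -- can be sketched in a few lines. To keep this dependency-free I substitute an HMAC shared secret for the per-service public-key pairs that real FIDO devices hold in secure hardware; all the function names here are illustrative, not from the UAF or U2F specifications.

```python
# Illustrative challenge-response flow in the spirit of FIDO U2F/UAF.
# Real FIDO uses public-key signatures from a secure element; this sketch
# uses HMAC so the flow can be shown without external libraries.
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)   # in real FIDO, kept in secure hardware

def server_issue_challenge() -> bytes:
    return secrets.token_bytes(16)     # fresh nonce defeats replay attacks

def device_sign(challenge: bytes, key: bytes) -> bytes:
    # The device signs the challenge locally, after the user unlocks it
    # (PIN or biometric); the unlock secret never leaves the device.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, registered_key: bytes) -> bool:
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
response = device_sign(challenge, device_key)
print(server_verify(challenge, response, device_key))  # True
```

Because each challenge is a fresh nonce, a captured response is useless for any later login, which is the property that makes this pattern so much stronger than a reusable password.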
With its focus, pragmatism and membership breadth, FIDO is today’s go-to authentication standards effort. In this report, I look at what the FIDO Alliance has to offer vendors and end user communities, and its critical success factors.