
Good, better, BlackBerry

In the latest course of a 15-month security feast, BlackBerry has announced it is acquiring mobile device management (MDM) provider Good Technology. The deal is said to be definitive, for US$425 million in cash.

As BlackBerry boldly re-positions itself as a managed service play in the Internet of Things, adding an established MDM capability to its portfolio will bolster its claim -- which still surprises many -- to be handset neutral. But the Good buy is much more than that. It has to be seen in the context of John Chen's drive for cross-sector security and privacy infrastructure for the IoT.

As I reported from the recent BlackBerry Security Summit in New York, the company has knitted together a comprehensive IoT security fabric. Look at how they paint their security platform:

BBY Security Platform In Action

And see how Good will slip neatly into the Platform Services column. It's the latest in what is now a $575 million investment in non-organic security growth (following purchases of Secusmart, Watchdox, Movirtu and AtHoc).

According to BlackBerry,

    • Good will bring complementary capabilities and technologies to BlackBerry, including secure applications and containerization that protects end user privacy. With Good, BlackBerry will expand its ability to offer cross-platform EMM solutions that are critical in a world with varying deployment models such as bring-your-own-device (BYOD); corporate owned, personally enabled (COPE); as well as environments with multiple user interfaces and operating systems. Good has expertise in multi-OS management with 64 percent of activations from iOS devices, followed by a broad Android and Windows customer base.(1) This experience combined with BlackBerry’s strength in BlackBerry 10 and Android management – including Samsung KNOX-enabled devices – will provide customers with increased choice for securely deploying any leading operating system in their organization.


The strategic acquisition of Good Technology will also give the Identity-as-a-Service sector a big kick. IDaaS has become a crowded space, with at least ten vendors (CA, Centrify, IBM, Microsoft, Okta, OneLogin, Ping, SailPoint, Salesforce, VMware) competing strongly around a pretty well settled set of features and functions. BlackBerry themselves launched an IDaaS a few months ago. At the Security Summit, I asked their COO Marty Beard what would distinguish their offering in such a tight market, and he said, simply, mobility. Presto!

But IDaaS is set to pivot. We all know that mobility is now the locus of security, and we've seen VMware parlay its AirWatch investment into a competitive new cloud identity service. With so many entrenched IDaaS vendors, this must be more than a catch-up play.

Here's the thing. I foresee identity actually disappearing from the user experience, which more and more will just be about the apps. I discussed this development in a really fun "Identity Innovators" video interview recorded with Ping at the recent Cloud Identity Summit. For identity to become seamless with the mobile application UX, we need two things. Firstly, we need federation protocols so that different pieces of software can hand over attributes and authentication signals to one another; these are all in place now. Secondly, we also need fully automated mobile device management as a service, and that's where Good truly fits with the growing BlackBerry platform.
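
To make the first point concrete, here is a minimal sketch of a federation hand-off, in Python with only the standard library: one piece of software signs a bundle of attributes plus an authentication signal, and another verifies the signature before trusting the claims. The token format, key handling and claim names are hypothetical simplifications; real protocols such as SAML and OpenID Connect do the same job with far richer metadata.

    # Minimal sketch of a federation hand-off: an "identity provider" signs a
    # bundle of attributes plus an authentication signal, and a relying
    # application verifies it before trusting the claims. Key and claim names
    # are hypothetical; real protocols (SAML, OpenID Connect) carry much more.
    import base64, hashlib, hmac, json, time

    SHARED_KEY = b"demo-only-shared-secret"   # assumption: key exchanged out of band

    def issue_token(attributes, auth_method):
        claims = {
            "attributes": attributes,          # e.g. name, role
            "auth_method": auth_method,        # the authentication signal
            "issued_at": int(time.time()),
        }
        payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
        signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + signature

    def accept_token(token):
        payload, signature = token.rsplit(".", 1)
        expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise ValueError("signature check failed - do not trust these claims")
        return json.loads(base64.urlsafe_b64decode(payload))

    token = issue_token({"name": "Alice", "role": "clinician"}, "password+otp")
    print(accept_token(token))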

Now stay tuned for new research coming soon via Constellation on the Internet of Things, identity, privacy and software reliability.

See also The State of Identity Management in 2015.

Posted in Security, Identity, Federated Identity, Constellation Research, Big Data

Card Not Present fraud trends (sadly) back to normal

The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. For years, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is and is not doing about it. A few weeks ago, statistics for calendar year 2014 came out.

CNP trends pic to CY 2014

As we reported last time, despite APCA's optimistic boosting of 3D Secure and education measures for many years, Card Not Present (CNP) online fraud was not falling as hoped. And what we see now in the latest numbers is the second biggest jump in CNP fraud ever! CY 2014 online card fraud losses were very nearly AU$300M, up 42% in 12 months.

Again, APCA steadfastly rationalises in its press release (PDF) that high losses simply reflect the popularity of online shopping. That's cold comfort to the card holders and merchants who are affected.

APCA has a love-ignore relationship with 3D Secure. This is one of the years when 3D Secure goes unmentioned. Instead the APCA presser talks up tokenization, I think for the first time. Yet the payments industry has had tokenization for about a decade. It's just another band-aid over the one fundamental crack in the payment card system: nothing stops stolen card numbers being replayed.

A proper fix to replay attack is easily within reach; it would re-use the same cryptography that solves skimming and carding, and would restore a seamless payment experience for card holders. See my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).
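
To see why tokenization and 3D Secure are band-aids while chip cryptography is a cure, consider this toy sketch (my own illustration, not a description of EMV internals): a stolen static card number works just as well for a thief as for the genuine holder, whereas a per-transaction cryptogram computed inside the chip is worthless if replayed.

    # Toy illustration of why static card numbers replay but per-transaction
    # cryptograms (the EMV-style approach) do not. The keys, field names and
    # MAC construction are hypothetical simplifications, not real EMV.
    import hashlib, hmac

    CARD_KEY = b"secret-key-inside-the-chip"   # never leaves the secure element

    def cryptogram(pan, amount_cents, counter):
        message = f"{pan}|{amount_cents}|{counter}".encode()
        return hmac.new(CARD_KEY, message, hashlib.sha256).hexdigest()

    def issuer_verifies(pan, amount_cents, counter, mac, last_seen_counter):
        fresh = counter > last_seen_counter                 # rejects replays
        genuine = hmac.compare_digest(mac, cryptogram(pan, amount_cents, counter))
        return fresh and genuine

    # Replaying yesterday's transaction data fails, because the counter has
    # already been used by the issuer:
    mac = cryptogram("5123450000000008", 4250, 17)
    print(issuer_verifies("5123450000000008", 4250, 17, mac, last_seen_counter=16))  # True
    print(issuer_verifies("5123450000000008", 4250, 17, mac, last_seen_counter=17))  # False: replay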


The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere, underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry has yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.

This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.

With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones - the exemplar being Apple Pay for Card Present business - it beggars belief that we have yet to modernise CNP payments for web and mobile shopping.

Posted in Security, Payments, Fraud

BlackBerry Security Summit 2015

On July 23, BlackBerry hosted its second annual Security Summit, once again in New York City. As with last year’s event, this was a relatively intimate gathering of analysts and IT journalists, brought together for the lowdown on BlackBerry’s security and privacy vision.

By his own account, CEO John Chen has met plenty of scepticism over his diverse and, some say, chaotic product and services portfolio. And yet it’s beginning to make sense. There is a strong credible thread running through Chen’s initiatives. It all has to do with the Internet of Things.

Disclosure: I traveled to the Blackberry Security Summit as a guest of Blackberry, which covered my transport and accommodation.

The Growth Continues

In 2014, John Chen opened the show with the announcement he was buying the German voice encryption firm Secusmart. That acquisition appears to have gone well for all concerned; they say nobody has left the new organisation in the 12 months since. News of BlackBerry’s latest purchase - of crisis communications platform AtHoc - broke a few days before this year’s Summit, and it was only the most recent addition to the family. In the past 12 months, BlackBerry has been busy spending $150M on inorganic growth, picking up:

  • Secusmart - voice & message encryption (announced at the inaugural Security Summit 2014)
  • Movirtu - innovative virtual SIM solutions for holding multiple cell phone numbers on one chip
  • Watchdox - document security and rights management, for “data centric privacy”, and
  • AtHoc (announced but not yet complete; see more details below).

    Chen has also overseen an additional $100M expenditure in the same timeframe on organic security expansion (over and above baseline product development). Amongst other things BlackBerry has:

  • "rekindled" Certicom, a specialist cryptography outfit acquired back in 2009 for its unique IP in elliptic curve cryptography, and spun out a new managed PKI service (a generic illustration of elliptic curve signing follows this list).
  • And it has created its own Enterprise Identity-as-a-Service (IDaaS) solution. From what I saw at the Summit, BlackBerry is playing catch-up in cloud-based IDAM, but they do have an edge in mobility over the specialist identity vendors in what is now a crowded identity services marketplace.
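
    For readers unfamiliar with elliptic curve cryptography, the sketch below shows the basic sign-and-verify operation it enables, using the open-source Python cryptography package. It is a generic illustration of ECDSA, not Certicom's IP or BlackBerry's managed PKI service.

        # Generic illustration of elliptic curve digital signatures (ECDSA)
        # using the open-source 'cryptography' package; not Certicom-specific.
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.exceptions import InvalidSignature

        private_key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256 curve
        public_key = private_key.public_key()

        message = b"firmware image or certificate request"
        signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

        try:
            public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
            print("signature valid")
        except InvalidSignature:
            print("signature invalid")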

    The Growth Explained - Secure Mobile Communications

    Executives from different business units and different technology horizontals all organised their presentations around what is now a comprehensive security product and services matrix. It looks like this (before adding AtHoc):

    BBY Security Platform In Action

    BlackBerry is striving to lead in Secure Mobile Communications. In that context the highlights of the Security Summit for mine were as follows.

    The Internet of Things

    BlackBerry’s special play is in the Internet of Things. It’s the consistent theme that runs through all their security investments, because as COO Marty Beard says, IoT involves a lot more than machine-to-machine communications. It’s more about how to extract meaningful data from unbelievable numbers of devices, with security and privacy. That is, IoT for BlackBerry is really a security-as-a-service play.

    Chief Security Officer David Kleidermacher repeatedly stressed the looming challenge of “how to patch and upgrade devices at scale”.

      • MyPOV: Functional upgrades for smart devices will of course be part and parcel of IoT, but at the same time, we need to work much harder to significantly reduce the need for reactive security patches. I foresee an angry consumer revolt if things that never were computers start to behave and fail like computers. A radically higher standard of quality and reliability is required. Just look at the Jeep Uconnect debacle, where it appears Chrysler eventually thought better of foisting a patch on car owners and instead opted for a much more expensive vehicle recall. It was BlackBerry’s commitment to ultra high reliability software that really caught my attention at the 2014 Security Summit, and it convinces me they grasp what’s going to be required to make ubiquitous computing properly seamless.

    Refreshingly, COO Beard preferred to talk about economic value of the IoT, rather than the bazillions of devices we are all getting a little jaded about. He said the IoT would bring about $4 trillion of required technology within a decade, and that the global economic impact could be $11 trillion.

    BlackBerry’s real time operating system QNX is in 50 million cars today.


    AtHoc is a secure crisis communications service, with its roots in the first responder environment. It’s used by three million U.S. government workers today, and the company is now pushing into healthcare.

    Founder and CEO Guy Miasnik explained that emergency communications involves more than just outbound alerts to people dealing with disasters. Critical to crisis management is the secure inbound collection of info from remote users. AtHoc is also not just about data transmission (as important as that is); it works also at the application layer, enabling sophisticated workflow management. This allows procedures to be defined for certain events, for example, guiding sets of users and devices through expected responses and escalating issues if things don't get done as expected.
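
    To give a feel for the kind of rules-based escalation described here, the following is my own hypothetical sketch (not AtHoc code): notify the first tier of responders, and widen the notification to further tiers as time passes without an acknowledgment.

        # Hypothetical sketch of an escalation rule of the kind a rules-based
        # crisis-communications workflow might apply; illustrative only.

        def who_to_notify(tiers, acknowledged, minutes_elapsed, minutes_per_tier=10):
            """tiers: ordered responder groups, e.g. [["ward nurse"], ["duty manager"], ["executive"]].
            Escalate one tier for every 'minutes_per_tier' without an acknowledgment."""
            if acknowledged:
                return []                                   # someone responded: stop notifying
            tier_index = min(minutes_elapsed // minutes_per_tier, len(tiers) - 1)
            return [person for tier in tiers[: tier_index + 1] for person in tier]

        tiers = [["ward nurse"], ["duty manager"], ["executive on call"]]
        print(who_to_notify(tiers, acknowledged=False, minutes_elapsed=0))    # ['ward nurse']
        print(who_to_notify(tiers, acknowledged=False, minutes_elapsed=25))   # all three tiers
        print(who_to_notify(tiers, acknowledged=True, minutes_elapsed=25))    # []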


    We heard more about BlackBerry’s collaboration with Oxford University on the Centre for High Assurance Computing Excellence, first announced in April at the RSA Conference. CHACE is concerned with a range of fundamental topics, including formal methods for verifying program correctness (an objective that resonates with BlackBerry’s secure operating system division QNX) and new security certification methodologies, with technical approaches based on the Common Criteria of ISO 15408 but with more agile administration to reduce that standard’s overhead and infamous rigidity.

    CSO Kleidermacher announced that CHACE will work with the Diabetes Technology Society on a new healthcare security standards initiative. The need for improved medical device security was brought home vividly by an enthralling live demonstration of hacking a hospital drug infusion pump. These vulnerabilities have been exposed before at hacker conferences but BlackBerry’s demo was especially clear and informative, and crafted for a non-technical executive audience.

      • MyPOV: The message needs to be broadcast loud and clear: there are life-critical machines in widespread use, built on commercial computing platforms, without any careful thought for security. It’s a shameful and intolerable situation.


    I was impressed by BlackBerry’s privacy line. It's broader and more sophisticated than that of most security companies, going way beyond the obvious matters of encryption and VPNs. In particular, the firm champions identity plurality. For instance, WorkLife by BlackBerry, powered by Movirtu technology, realizes multiple identities on a single phone. BlackBerry is promoting this capability in the health sector especially, where there is rarely a clean separation of work and life for professionals. Chen said he wants to “separate work and private life”.

    The health sector in general is one of the company’s two biggest business development priorities (the other being automotive). In addition to sophisticated telephony like virtual SIMs, they plan to extend AtHoc into healthcare messaging, and have tasked the CHACE think-tank with medical device security. These actions complement BlackBerry’s fine words about privacy.


    So BlackBerry’s acquisition plan has gelled. It now has perhaps the best secure real time OS for smart devices, a hardened device-independent Mobile Device Management backbone, new data-centric privacy and rights management technology, remote certificate management, and multi-layered emergency communications services that can be diffused into mission-critical rules-based e-health settings and, eventually, automated M2M messaging. It’s a powerful portfolio that makes strong sense in the Internet of Things.

    BlackBerry says IoT is 'much more than device-to-device'. It’s more important to be able to manage secure data being ejected from ubiquitous devices in enormous volumes, and to service those things – and their users – seamlessly. For BlackBerry, the Internet of Things is really all about the service.

    Posted in Software engineering, Security, Privacy, PKI, e-health, Constellation Research, Cloud, Big Data

    Man made software in His own image

    In 2002, a couple of Japanese visitors to Australia swapped passports with each other before walking through an automatic biometric border control gate being tested at Sydney airport. The facial recognition algorithm falsely matched each of them to the other's passport photo. These gentlemen were in fact part of an international aviation industry study group and were in the habit of trying to fool biometric systems then being trialed around the world.

    When I heard about this successful prank, I quipped that the algorithms were probably written by white people - because we think all Asians look the same. Colleagues thought I was making a typical sick joke, but actually I was half-serious. It did seem to me that the choice of facial features thought to be most distinguishing in a facial recognition model could be culturally biased.

    Since that time, border control face recognition has come a long way, and I have not heard of such errors for many years. Until today.

    The San Francisco Chronicle of July 21 carries a front page story about the cloud storage services of Google and Flickr mislabeling some black people as gorillas (see updated story, online). It's a quite incredible episode. Google has apologized. Its Chief Architect of social, Yonatan Zunger, seems mortified judging by his tweets as reported, and is investigating.

    The newspaper report quotes machine learning experts who suggest programmers with limited experience of diversity may be to blame. That seems plausible to me, although I wonder where exactly the algorithm R&D gets done, and how much control is to be had over the biometric models and their parameters along the path from basic research to application development.

    So man has literally made software in his own image.

    The public is now being exposed to Self Driving Cars, which are heavily reliant on machine vision, object recognition and artificial intelligence. If this sort of software can't tell people from apes in static photos given lots of processing time, how does it perform in real time, with fleeting images, subject to noise, and with much greater complexity? It's easy to imagine any number of real life scenarios where an autonomous car will have to make a split-second decision between two pretty similar looking objects appearing unexpectedly in its path.

    The general expectation is that Self Driving Cars (SDCs) will be tested to exhaustion. And so they should. But if cultural partiality is affecting the work of programmers, it's possible that testers suffer the same blind spots without knowing it. Maybe the offending photo labeling programs were never verified with black people. So how are the test cases for SDCs being selected? What might happen when an SDC ventures into environments and neighborhoods where its programmers have never been?

    Everybody in image processing and artificial intelligence should be humbled by the racist photo labeling. With the world being eaten by software, we need to reflect really deeply on how such design howlers arise. And, frankly, double-check whether we're ready to let computer programs behind the wheel.

    Posted in Social Media, Culture, Biometrics

    Breaking down digital identity

    Identity online is a vexed problem. The majority of Internet fraud today can be related to weaknesses in the way we authenticate people electronically. Internet identity is terribly awkward too. Unfortunately today we still use password techniques dating back to 1960s mainframes that were designed for technicians, by technicians.

    Our identity management problems also stem from over-reach. For one thing, the information era heralded new ways to reach and connect with people, with almost no friction. We may have taken too literally the old saw “information wants to be free.” Further, traditional ways of telling who people are, through documents and “old boys’ networks”, create barriers, which are anathema to new-school Internet thinkers.

    For the past 10-to-15 years, a heady mix of ambitions has informed identity management theory and practice: improve usability, improve security and improve “trust.” Without ever pausing to unravel the rainbow, the identity and access management industry has created grandiose visions of global “trust frameworks” to underpin a utopia of seamless stranger-to-stranger business and life online.

    Well-resourced industry consortia and private-public partnerships have come and gone over the past decade or more. Numerous “trust” start-up businesses have launched and failed. Countless new identity gadgets, cryptographic algorithms and payment schemes have been tried.

    And yet the identity problem is still with us. Why is identity online so strangely resistant to these well-meaning efforts to fix it? In particular, why is federated identity so dramatically easier said than done?

    Identification is a part of risk management. In business, service providers use identity to manage the risk that they might be dealing with the wrong person. Different transactions carry different risks, and identification standards are varied accordingly. Conversely, if a provider cannot be sure enough who someone is, they now have the tools to withhold or limit their services. For example, when an Internet customer signs in from an unusual location, payment processors can put a cap on the dollar amounts they will authorize.
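
    As a toy illustration of that kind of risk-based decision (my own example, not any processor's actual rules), the identification signals available at sign-in can drive a sliding authorisation limit:

        # Toy illustration of risk-based authorisation limits: weaker
        # identification signals mean a lower cap on what will be authorised.
        # Thresholds and signal names are made up for illustration.

        def authorisation_cap(usual_location, device_recognised, strong_authentication):
            cap = 10_000                       # dollars, for a fully recognised customer
            if not usual_location:
                cap = min(cap, 500)            # unusual location: tighten the limit
            if not device_recognised:
                cap = min(cap, 200)
            if not strong_authentication:
                cap = min(cap, 100)
            return cap

        print(authorisation_cap(usual_location=False, device_recognised=True, strong_authentication=True))   # 500
        print(authorisation_cap(usual_location=True, device_recognised=True, strong_authentication=True))    # 10000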

    Across our social and business walks of life, we have distinct ways of knowing people, which yields a rich array of identities by which we know and show who we are to others. These identities have evolved over time to suit different purposes. Different relationships rest on different particulars, and so identities naturally become specific, not general.

    The human experience of identity is one of ambiguity and contradictions. Each of us simultaneously holds a weird and wonderful ensemble of personal, family, professional and social identities. Each is different, sometimes radically so. Some of us lead quite secret lives, and I’m not thinking of anything salacious, but maybe just the role-playing games that provide important escapes from the humdrum.

    Most of us know how it feels when identities collide. There’s no better example than what I call the High School Reunion Effect: that strange dislocation you feel when you see acquaintances for the first time in decades. You’ve all moved on, you’ve adopted new personae in new contexts – not the least of which is the one defined by a spouse and your own new family. Yet you find yourself re-winding past identities, relating to your past contemporaries as you all once were, because it was those school relationships, now fossilised, that defined you.

    Frankly, we’ve made a mess of the pivotal analogue-to-digital conversion of identity. In real life we know identity is malleable and relative, yet online we’ve rendered it crystalline and fragile.

    We’ve come close to the necessary conceptual clarity. Some 10 years ago a network of “identerati” led by Kim Cameron of Microsoft composed the “Laws of Identity,” which contained a powerful formulation of the problem to be addressed. The Laws defined Digital Identity as “a set of claims made [about] a digital subject.”

    Your Digital Identity is a proxy for a relationship, pointing to a suite of particulars that matter about you in a certain context. When you apply for a bank account, when you subsequently log on to Internet banking, when you log on to your work extranet, or to Amazon or PayPal or Twitter, or if you want to access your electronic health record, the relevant personal details are different each time.

    The flip side of identity management is privacy. If authentication concerns what a Relying Party needs to know about you, then privacy is all about what they don’t need to know. Privacy amounts to information minimization; security professionals know this all too well as the “Need to Know” principle.
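
    The "Need to Know" principle can be made concrete: model each relationship as the set of claims that the particular Relying Party actually requires, and release nothing else. The claim names and contexts below are hypothetical; this is a sketch of information minimisation, not any deployed scheme.

        # Information minimisation: each Relying Party sees only the claims its
        # relationship actually requires. Claim and context names are hypothetical.
        from datetime import date

        SUBJECT_CLAIMS = {
            "legal_name": "Alice Example",
            "date_of_birth": "1980-05-17",
            "payment_card": "5123 45.. .... 0008",
            "employee_id": "E-4471",
            "health_record_id": "HR-220913",
        }

        NEED_TO_KNOW = {
            "bank": {"legal_name", "payment_card"},
            "work_extranet": {"employee_id"},
            "liquor_store": {"over_18"},          # a derived claim, not the birth date itself
            "health_portal": {"health_record_id"},
        }

        def claims_for(context):
            needed = NEED_TO_KNOW[context]
            released = {k: v for k, v in SUBJECT_CLAIMS.items() if k in needed}
            if "over_18" in needed:
                dob = date.fromisoformat(SUBJECT_CLAIMS["date_of_birth"])
                released["over_18"] = (date.today() - dob).days >= 18 * 365.25  # date itself withheld
            return released

        print(claims_for("liquor_store"))     # {'over_18': True} - nothing else disclosed
        print(claims_for("work_extranet"))    # {'employee_id': 'E-4471'}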

    All attempts at grand global identities to date have failed. The Big Certification Authorities of the 1990s reckoned a single, all-purpose digital certificate would meet the needs of all business, but they were wrong. Ever more sophisticated efforts since then have also failed, such as the Infocard Foundation, Liberty Alliance and the Australian banking sector’s Trust Centre.

    Significantly, federation for non-trivial identities only works within regulatory monocultures – for example the US Federal Bridge CA, or the Scandinavian BankID network – where special legislation authorises banks and governments to identify customers by the one credential. The current National Strategy for Trusted Identities in Cyberspace has pondered legislation to manage liability but has balked. The regulatory elephant remains in the room.

    As an aside, obviously social identities like Facebook and Twitter handles federate very nicely, but these are issued by organisations that don't really know who we are, and they're used by web sites that don't really care who we are; social identity federation is a poor model for serious identity management.

    A promising identity development today is the Open Identity Foundation’s Attribute Exchange Network, a new architecture seeking to organise how identity claims may be traded. The Attribute Exchange Network resonates with a growing realization that, in the words of Andrew Nash, a past identity lead at Google and at PayPal, “attributes are at least as interesting as identities – if not more so.”

    If we drop down a level and deal with concrete attribute data instead of abstract identities, we will start to make progress on the practical challenges in authentication: better resistance to fraud and account takeover, easier account origination and better privacy.

    My vision is that by 2019 we will have a fresh marketplace of Attribute Providers. The notion of “Identity Provider” should die off, for identity is always in the eye of the Relying Party. What we need online is an array of respected authorities and agents that can vouch for our particulars. Banks can provide reliable electronic proof of our payment card numbers; government agencies can attest to our age and biographical details; and a range of private businesses can stand behind attributes like customer IDs, membership numbers and our retail reputations.
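
    A marketplace of Attribute Providers could work along these lines (a sketch under my own assumptions, not a reference to any existing scheme): each authority signs only the attribute it stands behind, and a Relying Party checks each assertion against that authority's public key. The example uses the open-source Python cryptography package.

        # Sketch of attributes vouched for by separate providers: each provider
        # signs only the attribute it stands behind. Provider names and
        # attributes are hypothetical.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        providers = {name: Ed25519PrivateKey.generate() for name in ("bank", "registry_office")}

        def attest(provider, attribute, value):
            message = f"{attribute}={value}".encode()
            return {"attribute": attribute, "value": value, "provider": provider,
                    "signature": providers[provider].sign(message)}

        def relying_party_accepts(assertion, trusted_public_keys):
            message = f"{assertion['attribute']}={assertion['value']}".encode()
            try:
                trusted_public_keys[assertion["provider"]].verify(assertion["signature"], message)
                return True
            except (KeyError, InvalidSignature):
                return False

        public_keys = {name: key.public_key() for name, key in providers.items()}
        card_claim = attest("bank", "payment_card_verified", "true")
        age_claim = attest("registry_office", "over_18", "true")
        print(relying_party_accepts(card_claim, public_keys), relying_party_accepts(age_claim, public_keys))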

    In five years' time I expect we will adopt a much more precise language to describe how to deal with people online, and it will reflect more faithfully how we’ve transacted throughout history. As the old Italian proverb goes: It is nice to “trust” but it’s better not to.

    This article first appeared as "Abandoning identity in favor of attributes" in Secure ID News, 2 December, 2014.

    Posted in Trust, Social Networking, Security, Identity, Federated Identity

    On "Trust Frameworks"

    The Australian government's new Digital Transformation Office (DTO) is a welcome initiative, and builds on a generally strong e-government program of many years standing.

    But I'm a little anxious about one plank of the DTO mission: the development of a "Trusted Digital Identity Framework".

    We've had several attempts at this sort of thing over many years, and we really need a fresh approach next time around.

    I hope we don't re-hash the same old hopes for "trust" and "identity" as we have for over 15 years. The real issues can be expressed more precisely. How do we get reliable signals about the people and entities we're trying to deal with online? How do we equip individuals to be able to show relevant signals about themselves, sufficient to get something done online? What are the roles of governments and markets in all this?

    Frankly, I loathe the term "Trust Framework"! Trust is hugely distracting if not irrelevant. And we already have frameworks in spades.

    Posted in Internet, Language, Trust

    Apply for a SuperNova Award - Recognising leaders in digital business

    Every year the Constellation SuperNova Awards recognise eight individuals for their leadership in digital business. Nominate yourself or someone you know by August 7, 2015.

    The SuperNova Awards honour leaders that demonstrate excellence in the application and adoption of new and emerging technologies. In its fifth year, the SuperNova Awards program will recognise eight individuals who demonstrate true leadership in digital business through their application of new and emerging technologies. Constellation Research is searching for leaders and corporate teams who have innovatively applied disruptive technologies to their businesses, to adapt to the rapidly-changing digital business environment. Special emphasis will be given to projects that seek to redefine how the enterprise uses technology on a large scale.

    We’re searching for the boldest, most transformative technology projects out there. Apply for a SuperNova Award by filling out the application here: http://www.constellationr.com/node/3137/apply

    SuperNova Award Categories

    • Consumerization of IT & The New C-Suite - The Enterprise embraces consumer tech, and perfects it.
    • Data to Decisions - Using data to make informed business decisions.
    • Digital Marketing Transformation - Put away that megaphone. Marketing in the digital age requires a new approach.
    • Future of Work - The processes and technologies addressing the rapidly shifting work paradigm.
    • Matrix Commerce - Commerce responds to changing realities from the supply chain to the storefront.
    • Next Generation Customer Experience - Customers in the digital age demand seamless service throughout all lifecycle stages and across all channels.
    • Safety and Privacy - Not 'security'. Safety and Privacy is the art and science of protecting information assets, including your most important assets: your people.
    • Technology Optimization & Innovation - Innovative methods to balance innovation and budget requirements.

    Five reasons to apply for a SuperNova Award

    • Exposure to the SuperNova Award judges, comprised of the top influencers in enterprise technology
    • Case study highlighting the achievements of the winners written by Constellation analysts
    • Complimentary admission to the SuperNova Award Gala Dinner and Constellation's Connected Enterprise for all finalists November 4-6, 2015 (NB: lodging and travel not included)
    • One year unlimited access to Constellation's research library
    • Winners featured on Constellation's blog and weekly newsletter.

    Learn more about the SuperNova Awards.

    What to expect when applying for a SuperNova Award. Tips and sample application.

    Posted in Constellation Research, Cloud, Big Data

    Once more to the breach!

    Bank robber Willie Sutton, when asked why he robbed banks, answered "That's where the money is". It's the same with breaches. Large databases are the targets of people who want data. It's that simple.

    Having said that, there are different sorts of breaches and corresponding causes. Most high profile breaches are obviously driven by financial crime, where attackers typically grab payment card details. Breaches are what powers most card fraud. Organised crime gangs don't pilfer card numbers one at a time from people's computers or insecure websites (so the standard advice to consumers to change their passwords every month and to make sure they see a browser padlock is nice, but don't think it will do anything to stop mass card fraud).

    Instead of blaming end user failings, we need to really turn up the heat on enterprise IT. The personal data held by big merchant organisations (including even mundane operations like car parking chains) is now worth many hundreds of millions of dollars. If this kind of value was in the form of cash or gold, you'd see Fort Knox-style security around it. Literally. But how much money does even the biggest enterprise invest in security? And what do they get for their money?

    The grim reality is that no amount of conventional IT security today can prevent attacks on assets worth billions of dollars. The simple economics is against us. It's really more a matter of luck than good planning that some large organisations have yet to be breached (and that's only so far as we know).

    Organised crime is truly organised. If it's card details they want, they go after the big data stores, at payments processors and large retailers. The sophistication of these attacks is amazing even to security pros. The attack on Target's Point of Sale terminals for instance was in the "can't happen" category.

    The other types of criminal breach include mischief, as when the iCloud photos of celebrities were leaked last year, hacktivism, and political or cyber terrorist attacks, like the one on Sony.

    There's some evidence that identity thieves are turning now to health data to power more complex forms of crime. Instead of stealing and replaying card numbers, identity thieves can use deeper, broader information like patient records to either commit fraud against health system payers, or to open bogus accounts and build them up into complex scams. The recent Anthem database breach involved extensive personal records on 80 million individuals; we have yet to see how these details will surface in the identity black markets.

    The ready availability of stolen personal data is one factor we find to be driving Identity and Access Management (IDAM) innovation; see "The State of Identity Management in 2015". Next generation IDAM will eventually make stolen data less valuable, but for the foreseeable future, all enterprises holding large customer datasets will remain prime targets for identity thieves.

    Now let's not forget simple accidents. The Australian government for example has had some clangers, though these can happen to any big organisation. A few months ago a staffer accidentally attached the wrong file to an email, and thus released the passport details of the G20 leaders. Before that, we saw a spreadsheet holding personal details of thousands of asylum seekers get mistakenly pasted into the HTML of a government website.

    A lesson I want to bring out here is the terrible complexity and fragility of our IT systems. It doesn't take much for human error to have catastrophic results. Who among us has not accidentally hit 'Reply All' or attached the wrong file to an email? If you did an honest Threat & Risk Assessment on these sorts of everyday office systems, you'd have to conclude they are not safe to handle sensitive data nor to be operated by most human beings. But of course we simply can't afford not to use office IT. We've created a monster.

    Again, criminal elements know this. The expert cryptographer Bruce Schneier once said "amateurs hack systems, professionals hack people". Access control on today's sprawling complex computer systems is generally poor, leaving the way open for inside jobs. Just look at the Chelsea Manning case, one of the worst breaches of all time, made possible by granting too high access privileges to too many staffers.

    Outside government, access control is worse, and so is access logging - so system administrators often can't tell there's even been a breach until circumstantial evidence emerges. I am sure the majority of breaches are occurring without anyone knowing. It's simply inevitable.

    Look at hotels. There are occasional reports of hotel IT breaches, but they are surely happening continuously. The guest details held by hotels are staggering - payment card details, license plates, travel itineraries including airline flight details, even passport numbers at some places. And these days, with global hotel chains, the whole booking database is available to a rogue employee from any place in the world, 24-7.

    Please, don't anyone talk to me about PCI-DSS! The Payment Card Industry Data Security Standards for protecting cardholder details haven't had much effect at all. Some of the biggest breaches of all time have affected top tier merchants and payments processors which appear to have been PCI compliant. Yet the lawyers for the payments institutions will always argue that such-and-such a company wasn't "really" compliant. And the PCI auditors walk away from any liability for what happens in between audits. You can understand their position; they don't want to be accountable for wrongdoing or errors committed behind their backs. However, cardholders and merchants are caught in the middle. If a big department store passes its PCI audits, surely we can expect it to be reasonably secure all year long? No, it turns out that the day after a successful audit, an IT intern can mis-configure a firewall or forget a patch; all those defences become useless, and the audit is rendered meaningless.

    Which reinforces my point about the fragility of IT: it's impossible to make lasting security promises anymore.

    In any case, PCI is really just a set of data handling policies and promises. They improve IT security hygiene, and ward off amateur attacks. But they are useless against organised crime or inside jobs.

    There is an increasingly good argument to outsource data management. Rather than maintain brittle databases in the face of so much risk, companies are instead turning to large reputable cloud services, where the providers have the scale, resources and attention to detail to protect data in their custody. I previously looked at what matters in choosing cloud services from a geographical perspective in my Constellation Research report "Why Cloud Geography Matters in a Post-Snowden/NSA Era". And in forthcoming research I'll examine a broader set of contract-related KPIs to help buyers make the right choice of cloud service provider.

    If you asked me what to do about data breaches, I'd say the short-to-medium term solution is to get with the strength and look for managed security services from specialist providers. In the longer term, we will have to see grassroots re-engineering of our networks and platforms, to harden them against penetration, and to lessen the opportunity for identity theft.

    In the meantime, you can hope for the best, if you plan for the worst.

    Actually, no, you can't hope.

    Posted in Constellation Research, Security

    The government cannot simply opt-out of opt-in

    The Australian government is to revamp the troubled Personally Controlled Electronic Health Record (PCEHR). In line with the Royle Review from Dec 2013, it is reported that patient participation is to change from the current Opt-In model to Opt-Out; see "Govt to make e-health records opt-out" by Paris Cowan, IT News.

    That is to say, patient data from hospitals, general practice, pathology and pharmacy will be added by default to a central longitudinal health record, unless patients take steps (yet to be specified) to disable sharing.

    The main reason for switching the consent model is simply to increase the take-up rate. But it's a much bigger change than many seem to realise.

    The government is asking the community to trust it to hold essentially all medical records. Are the PCEHR's security and privacy safeguards up to scratch to take on this grave responsibility? I argue the answer is no, on two grounds.

    Firstly there is the practical matter of PCEHR's security performance to date. It's not good, based on publicly available information. On multiple occasions, prescription details have been uploaded from community pharmacy to the wrong patient's records. There have been a few excuses made for this error, with blame sheeted home to the pharmacy. But from a system's perspective -- and health care is all about the systems -- you cannot pass the buck like that. Pharmacists are using a PCEHR system that was purportedly designed for them. And it was subject to system-wide threat & risk assessments that informed the architecture and design of not just the electronic records system but also the patient and healthcare provider identification modules. How can it be that the PCEHR allows such basic errors to occur?

    Secondly and really fundamentally, you simply cannot invert the consent model as if it's a switch in the software. The privacy approach is deep in the DNA of the system. Not only must PCEHR security be demonstrably better than experience suggests, but it must be properly built in, not retrofitted.

    Let me explain how the consent approach crops up deep in the architecture of something like PCEHR. During analysis and design, threat & risk assessments (TRAs) and privacy impact assessments (PIAs) are undertaken, to identify things that can go wrong, and to specify security and privacy controls. These controls generally comprise a mix of technology, policy and process mechanisms. For example, if there is a risk of patient data being sent to the wrong person or system, that risk can be mitigated a number of ways, including authentication, user interface design, encryption, contracts (that obligate receivers to act responsibly), and provider and patient information. The latter are important because, as we all should know, there is no such thing as perfect security. Mistakes are bound to happen.

    One of the most fundamental privacy controls is participation. Individuals usually have the ultimate option of staying away from an information system if they (or their advocates) are not satisfied with the security and privacy arrangements. Now, these are complex matters to evaluate, and it's always best to assume that patients do not in fact have a complete understanding of the intricacies, the pros and cons, and the net risks. People need time and resources to come to grips with e-health records, so a default opt-in affords them that breathing space. And it errs on the side of caution, by requiring a conscious decision to participate. In stark contrast, a default opt-out policy embodies a position that the scheme operator believes it knows best, and is prepared to make the decision to participate on behalf of all individuals.

    Such a position strikes many as beyond the pale, just on principle. But if opt-out is the adopted policy position, then clearly it has to be based on a risk assessment where the pros indisputably out-weigh the cons. And this is where making a late switch to opt-out is unconscionable.

    You see, in an opt-in system, during analysis and design, whenever a risk is identified that cannot be managed down to negligible levels by way of technology and process, the ultimate safety net is that people don't need to use the PCEHR. It is a formal risk management ploy (a part of the risk manager's toolkit) to sometimes fall back on the opt-in policy. In an opt-in system, patients sign an agreement in which they accept some risk. And the whole security design is predicated on that.

    Look at the most recent PIA done on the PCEHR in 2011; section 9.1.6 "Proposed solutions - legislation" makes it clear that opt-in participation is core to the existing architecture. The PIA makes a "critical legislative recommendation" including:

      • a number of measures to confirm and support the 'opt in' nature of the PCEHR for consumers (Recommendations 4.1 to 4.3) [and] preventing any extension of the scope of the system, or any change to the 'opt in' nature of the PCEHR.

    The PIA at section 2.2 also stresses that a "key design feature of the PCEHR System ... is opt in – if a consumer or healthcare provider wants to participate, they need to register with the system." And that the PCEHR is "not compulsory – both consumers and healthcare providers choose whether or not to participate".

    A PDF copy of the PIA report, which was publicly available at the Dept of Health website for a few years after 2011, is archived here.

    The fact is that if the government changes the PCEHR from opt-in to opt-out, it will invalidate the security and privacy assessments done to date. The PIAs and TRAs will have to be repeated, and the project must be prepared for major redesign.

    The Royle Review report (PDF) did in fact recommend "a technical assessment and change management plan for an opt-out model ..." (Recommendation 14) but I am not aware that such a review has taken place.

    To look at the seriousness of this another way, think about "Privacy by Design", the philosophy that's being steadily adopted across government. In 2014 NEHTA wrote in a submission (PDF) to the Australian Privacy Commissioner:

      • The principle that entities should employ “privacy by design” by building privacy into their processes, systems, products and initiatives at the design stage is strongly supported by NEHTA. The early consideration of privacy in any endeavour ensures that the end product is not only compliant but meets the expectations of stakeholders.

    One of the tenets of Privacy by Design is that you cannot bolt on privacy after a design is done. Privacy must be designed into the fabric of any system from the outset. All the way along, PCEHR has assumed opt-in, and the last PIA enshrined that position.

    If the government were to ignore its own Privacy by Design credo, and not revisit the PCEHR architecture, it would be an amazing breach of the public's trust in the healthcare system.

    Posted in Security, Privacy, e-health

    The security joke is on us all

    Every now and then, a large organisation in the media spotlight will experience the special pain of having a password accidentally revealed in the background of a photograph or TV spot. Security commentator Graham Cluley has recorded a lot of these misadventures, most recently at a British national rail control room, and before that, in the Superbowl nerve centre and an emergency response agency.

    Security folks love their schadenfreude but what are we to make of these SNAFUs? Of course, nobody is perfect. And some plumbers have leaky taps.

    [Images: passwords caught on camera at a rail control room, the Superbowl nerve centre, and on Sky]

    But these cases hold much deeper lessons. These are often critical infrastructure providers (consider that on financial grounds, there may be more at stake in Superbowl operations than the railways). The outfits making kindergarten security mistakes will have been audited many times over. So how on earth do they pass?

    Posting passwords on the wall is not a random error - it's systemic. Some administrators do it out of habit, or desperation. They know it's wrong, but they do it anyway, and they do it with such regularity it gets caught on TV.

    I really want to know: did none of the security auditors at any of these organisations ever notice the passwords in plain view? Or do the personnel do a quick clean up on the morning of each audit, only to revert to reality in between audits? Either way, here's yet more proof that security audit, frankly, is a sick joke. And that security practices aren't worth the paper they're printed on.

    Security orthodoxy holds that people and process are more fundamental than technology, and that people are the weakest link. That's why we have security management processes and security audits. It's why whole industries have been built around security process standards like ISO 27000. So it's unfathomable to me that companies with passwords caught on camera can have ever passed their audits.

    Security isn't what people think it is. Instead of meticulous procedures and hawk-eyed inspections, too often it's just simple people going through the motions. Security isn't intellectually secure. The things we do in the name of "security" don't make us secure.

    Let's not dismiss password flashing as a temporary embarrassment for some poor unfortunates. This should be humiliating for the whole information security industry. We need another way.

    Picture credits: Graham Cluley.

    Posted in Security