Lockstep

Once more to the breach!

Bank robber Willie Sutton, when asked why he robbed banks, answered "That's where the money is". It's the same with breaches. Large databases are the targets of people who want data. It's that simple.

Having said that, there are different sorts of breaches and corresponding causes. Most high profile breaches are obviously driven by financial crime, where attackers typically grab payment card details. Breaches are what powers most card fraud. Organised crime gangs don't pilfer card numbers one at a time from people's computers or insecure websites (so the standard advice to consumers to change their passwords every month and look for the browser padlock is nice, but don't expect it to do anything to stop mass card fraud).

Instead of blaming end user failings, we need to really turn up the heat on enterprise IT. The personal data held by big merchant organisations (including even mundane operations like car parking chains) is now worth many hundreds of millions of dollars. If this kind of value was in the form of cash or gold, you'd see Fort Knox-style security around it. Literally. But how much money does even the biggest enterprise invest in security? And what do they get for their money?

The grim reality is that no amount of conventional IT security today can prevent attacks on assets worth billions of dollars. The simple economics is against us. It's really more a matter of luck than good planning that some large organisations have yet to be breached (and that's only so far as we know).

Organised crime is truly organised. If it's card details they want, they go after the big data stores, at payments processors and large retailers. The sophistication of these attacks is amazing even to security pros. The attack on Target's Point of Sale terminals for instance was in the "can't happen" category.

The other types of criminal breach include mischief, as when the iCloud photos of celebrities were leaked last year, hacktivism, and political or cyber terrorist attacks, like the one on Sony.

There's some evidence that identity thieves are turning now to health data to power more complex forms of crime. Instead of stealing and replaying card numbers, identity thieves can use deeper, broader information like patient records to either commit fraud against health system payers, or to open bogus accounts and build them up into complex scams. The recent Anthem database breach involved extensive personal records on 80 million individuals; we have yet to see how these details will surface in the identity black markets.

The ready availability of stolen personal data is one factor we find to be driving Identity and Access Management (IDAM) innovation; see "The State of Identity Management in 2015". Next generation IDAM will eventually make stolen data less valuable, but for the foreseeable future, all enterprises holding large customer datasets will remain prime targets for identity thieves.

Now let's not forget simple accidents. The Australian government, for example, has had some clangers, though these can happen to any big organisation. A few months ago a staffer accidentally attached the wrong file to an email, and thus released the passport details of the G20 leaders. Before that, we saw a spreadsheet holding personal details of thousands of asylum seekers mistakenly embedded in the HTML of a government website.

A lesson I want to bring out here is the terrible complexity and fragility of our IT systems. It doesn't take much for human error to have catastrophic results. Who among us has not accidentally hit 'Reply All' or attached the wrong file to an email? If you did an honest Threat & Risk Assessment on these sorts of everyday office systems, you'd have to conclude they are not safe to handle sensitive data, nor to be operated by most human beings. But of course we simply can't afford not to use office IT. We've created a monster.

Again, criminal elements know this. The expert cryptographer Bruce Schneier once said "amateurs hack systems, professionals hack people". Access control on today's sprawling, complex computer systems is generally poor, leaving the way open for inside jobs. Just look at the Chelsea Manning case, one of the worst breaches of all time, made possible by granting overly broad access privileges to too many staffers.

Outside government, access control is worse, and so is access logging - so system administrators often can't tell there's even been a breach until circumstantial evidence emerges. I am sure the majority of breaches are occurring without anyone knowing. It's simply inevitable.

Look at hotels. There are occasional reports of hotel IT breaches, but they are surely happening continuously. The guest details held by hotels are staggering - payment card details, licence plates, travel itineraries including airline flight details, even passport numbers at some establishments. And these days, with global hotel chains, the whole booking database is available to a rogue employee from anywhere in the world, 24-7.

Please, don't anyone talk to me about PCI-DSS! The Payment Card Industry Data Security Standards for protecting cardholder details haven't had much effect at all. Some of the biggest breaches of all time have affected top tier merchants and payments processors which appear to have been PCI compliant. Yet the lawyers for the payments institutions will always argue that such-and-such a company wasn't "really" compliant. And the PCI auditors walk away from any liability for what happens in between audits. You can understand their position; they don't want to be accountable for wrongdoing or errors committed behind their backs. However, cardholders and merchants are caught in the middle. If a big department store passes its PCI audits, surely we can expect it to be reasonably secure year-round? No, it turns out that the day after a successful audit, an IT intern can mis-configure a firewall or forget a patch; all those defences become useless, and the audit is rendered meaningless.
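
To make the fragility concrete, here is a minimal sketch (my own illustration, not anything mandated or supplied by PCI-DSS) of the kind of continuous configuration check that a point-in-time audit cannot substitute for. It simply compares the live firewall ruleset against a hash of the ruleset approved at audit time; the baseline hash value and the use of iptables-save are assumptions made for the sake of the example.

    # Illustrative sketch only: detect drift from an audited firewall configuration.
    # The baseline hash and the use of iptables-save are assumptions, not PCI-DSS requirements.
    import hashlib
    import subprocess
    import sys

    APPROVED_BASELINE_SHA256 = "replace-with-hash-of-audited-ruleset"  # hypothetical value

    def current_ruleset() -> bytes:
        """Dump the live firewall rules (iptables-save is one common source on Linux)."""
        return subprocess.run(["iptables-save"], check=True, capture_output=True).stdout

    def main() -> int:
        digest = hashlib.sha256(current_ruleset()).hexdigest()
        if digest != APPROVED_BASELINE_SHA256:
            print(f"ALERT: firewall rules differ from the audited baseline ({digest})")
            return 1  # non-zero exit lets a scheduler or monitoring agent raise an alarm
        print("Firewall rules still match the audited baseline")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Run daily from a scheduler, a check of this sort turns "we passed the audit last March" into something closer to "nothing has drifted since the audit", which is the property that actually matters to cardholders and merchants.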

Which reinforces my point about the fragility of IT: it's impossible to make lasting security promises anymore.

In any case, PCI is really just a set of data handling policies and promises. They improve IT security hygiene, and ward off amateur attacks. But they are useless against organised crime or inside jobs.

There is an increasingly good argument to outsource data management. Rather than maintain brittle databases in the face of so much risk, companies are instead turning to large reputable cloud services, where the providers have the scale, resources and attention to detail to protect data in their custody. I previously looked at what matters in choosing cloud services from a geographical perspective in my Constellation Research report "Why Cloud Geography Matters in a Post-Snowden/NSA Era". And in forthcoming research I'll examine a broader set of contract-related KPIs to help buyers make the right choice of cloud service provider.

If you asked me what to do about data breaches, I'd say the short-to-medium term solution is to get with the strength and look for managed security services from specialist providers. In the longer term, we will have to see grassroots re-engineering of our networks and platforms, to harden them against penetration, and to lessen the opportunity for identity theft.

In the meantime, you can hope for the best, if you plan for the worst.

Actually, no, you can't hope.

Posted in Constellation Research, Security

The government cannot simply opt out of opt-in

The Australian government is to revamp the troubled Personally Controlled Electronic Health Record (PCEHR). In line with the Royle Review from Dec 2013, it is reported that patient participation is to change from the current Opt-In model to Opt-Out; see "Govt to make e-health records opt-out" by Paris Cowan, IT News.

That is to say, patient data from hospitals, general practice, pathology and pharmacy will be added by default to a central longitudinal health record, unless patients take steps (yet to be specified) to disable sharing.

The main reason for switching the consent model is simply to increase the take-up rate. But it's a much bigger change than many seem to realise.

The government is asking the community to trust it to hold essentially all medical records. Are the PCEHR's security and privacy safeguards up to scratch to take on this grave responsibility? I argue the answer is no, on two grounds.

Firstly there is the practical matter of PCEHR's security performance to date. It's not good, based on publicly available information. On multiple occasions, prescription details have been uploaded from community pharmacy to the wrong patient's records. There have been a few excuses made for this error, with blame sheeted home to the pharmacy. But from a system's perspective -- and health care is all about the systems -- you cannot pass the buck like that. Pharmacists are using a PCEHR system that was purportedly designed for them. And it was subject to system-wide threat & risk assessments that informed the architecture and design of not just the electronic records system but also the patient and healthcare provider identification modules. How can it be that the PCEHR allows such basic errors to occur?

Secondly and really fundamentally, you simply cannot invert the consent model as if it's a switch in the software. The privacy approach is deep in the DNA of the system. Not only must PCEHR security be demonstrably better than experience suggests, but it must be properly built in, not retrofitted.

Let me explain how the consent approach crops up deep in the architecture of something like PCEHR. During analysis and design, threat & risk assessments (TRAs) and privacy impact assessments (PIAs) are undertaken, to identify things that can go wrong, and to specify security and privacy controls. These controls generally comprise a mix of technology, policy and process mechanisms. For example, if there is a risk of patient data being sent to the wrong person or system, that risk can be mitigated a number of ways, including authentication, user interface design, encryption, contracts (that obligate receivers to act responsibly), and provider and patient information. The latter are important because, as we all should know, there is no such thing as perfect security. Mistakes are bound to happen.
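
As one concrete illustration (my own, not a description of the PCEHR's actual design), an automated control against "wrong patient" errors could be as simple as refusing an upload when the patient identifier fails a check-digit test or does not match the record being written to. The Luhn-checked numeric identifier assumed below is purely for the sake of the example.

    # Illustrative sketch only: a check-digit guard against "wrong patient" uploads.
    # The identifier format (a numeric ID carrying a Luhn check digit) is an assumption
    # for illustration, not a statement of how the PCEHR validates identifiers.

    def luhn_valid(identifier: str) -> bool:
        """Return True if the numeric identifier passes the Luhn check-digit test."""
        if not identifier.isdigit():
            return False
        total = 0
        # Double every second digit from the right; subtract 9 when doubling exceeds 9.
        for position, digit in enumerate(reversed(identifier)):
            d = int(digit)
            if position % 2 == 1:
                d = d * 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def safe_to_upload(submitted_id: str, record_id: str) -> bool:
        """Refuse an upload if the identifier is malformed or does not match the target record."""
        return luhn_valid(submitted_id) and submitted_id == record_id

    if __name__ == "__main__":
        print(safe_to_upload("79927398713", "79927398713"))  # True: well-formed and matching
        print(safe_to_upload("79927398710", "79927398713"))  # False: check digit fails

No such check eliminates mistakes, of course, which is exactly why the question of who carries the residual risk matters so much.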

One of the most fundamental privacy controls is participation. Individuals usually have the ultimate option of staying away from an information system if they (or their advocates) are not satisfied with the security and privacy arrangements. Now, these are complex matters to evaluate, and it's always best to assume that patients do not in fact have a complete understanding of the intricacies, the pros and cons, and the net risks. People need time and resources to come to grips with e-health records, so an opt-in model affords them that breathing space. And it errs on the side of caution, by requiring a conscious decision to participate. In stark contrast, a default opt-out policy embodies a position that the scheme operator believes it knows best, and is prepared to make the decision to participate on behalf of all individuals.

Such a position strikes many as beyond the pale, just on principle. But if opt-out is the adopted policy position, then clearly it has to be based on a risk assessment where the pros indisputably outweigh the cons. And this is where making a late switch to opt-out is unconscionable.

You see, in an opt-in system, during analysis and design, whenever a risk is identified that cannot be managed down to negligible levels by way of technology and process, the ultimate safety net is that people don't need to use the PCEHR. Falling back on the opt-in policy in this way is a formal risk management device, part of the risk manager's toolkit. In an opt-in system, patients sign an agreement in which they accept some risk. And the whole security design is predicated on that.

Look at the most recent PIA done on the PCEHR in 2011; section 9.1.6 "Proposed solutions - legislation" makes it clear that opt-in participation is core to the existing architecture. The PIA makes a "critical legislative recommendation" including:

    • a number of measures to confirm and support the 'opt in' nature of the PCEHR for consumers (Recommendations 4.1 to 4.3) [and] preventing any extension of the scope of the system, or any change to the 'opt in' nature of the PCEHR.

The PIA at section 2.2 also stresses that a "key design feature of the PCEHR System ... is opt in – if a consumer or healthcare provider wants to participate, they need to register with the system." And that the PCEHR is "not compulsory – both consumers and healthcare providers choose whether or not to participate".

A PDF copy of the PIA report, which was publicly available at the Dept of Health website for a few years after 2011, is archived here.

The fact is that if the government changes the PCEHR from opt-in to opt-out, it will invalidate the security and privacy assessments done to date. The PIAs and TRAs will have to be repeated, and the project must be prepared for major redesign.

The Royle Review report (PDF) did in fact recommend "a technical assessment and change management plan for an opt-out model ..." (Recommendation 14) but I am not aware that such a review has taken place.

To look at the seriousness of this another way, think about "Privacy by Design", the philosophy that's being steadily adopted across government. In 2014 NEHTA wrote in a submission (PDF) to the Australian Privacy Commissioner:

    • The principle that entities should employ “privacy by design” by building privacy into their processes, systems, products and initiatives at the design stage is strongly supported by NEHTA. The early consideration of privacy in any endeavour ensures that the end product is not only compliant but meets the expectations of stakeholders.

One of the tenets of Privacy by Design is that you cannot bolt on privacy after a design is done. Privacy must be designed into the fabric of any system from the outset. All the way along, PCEHR has assumed opt-in, and the last PIA enshrined that position.

If the government were to ignore its own Privacy by Design credo, and not revisit the PCEHR architecture, it would be an amazing breach of the public's trust in the healthcare system.

Posted in Security, Privacy, e-health

The security joke is on us all

Every now and then, a large organisation in the media spotlight will experience the special pain of having a password accidentally revealed in the background of a photograph or TV spot. Security commentator Graham Cluley has recorded a lot of these misadventures, most recently at a British national rail control room, and before that, in the Superbowl nerve centre and an emergency response agency.

Security folks love their schadenfreude but what are we to make of these SNAFUs? Of course, nobody is perfect. And some plumbers have leaky taps.

[Images: rail control room password, Superbowl password, and Sky password, each caught on camera]

But these cases hold much deeper lessons. These are often critical infrastructure providers (consider that on financial grounds, there may be more at stake in Superbowl operations than the railways). The outfits making kindergarten security mistakes will have been audited many times over. So how on earth do they pass?

Posting passwords on the wall is not a random error - it's systemic. Some administrators do it out of habit, or desperation. They know it's wrong, but they do it anyway, and they do it with such regularity it gets caught on TV.

I really want to know: did none of the security auditors at any of these organisations ever notice the passwords in plain view? Or do the personnel do a quick clean up on the morning of each audit, only to revert to reality in between audits? Either way, here's yet more proof that security audit, frankly, is a sick joke. And that written security policies aren't worth the paper they're printed on.

Security orthodoxy holds that people and process are more fundamental than technology, and that people are the weakest link. That's why we have security management processes and security audits. It's why whole industries have been built around security process standards like ISO 27000. So it's unfathomable to me that companies with passwords caught on camera could ever have passed their audits.

Security isn't what people think it is. Instead of meticulous procedures and hawk-eyed inspections, too often it's just simple people going through the motions. Security isn't intellectually secure. The things we do in the name of "security" don't make us secure.

Let's not dismiss password flashing as a temporary embarrassment for some poor unfortunates. This should be humiliating for the whole information security industry. We need another way.

Picture credits: Graham Cluley.

Posted in Security