Lockstep

Suspension of Disbelief and digital safety

If the digital economy is really the economy, then it's high time we moved beyond hoping that we can simply train users to be safe online. Is the real economy only for heroes who can protect themselves in the jungle, writing their own code as if they were carrying their own guns? Or do we as a community build structures and standards, and insist on technologies that work for all?

For most people, the World Wide Web experience is still a lot like watching cartoons on TV. The human-machine interface is almost the same. The images and actions are just as synthetic; crucially, nothing on a web browser is real. Almost anything goes -- just as the Roadrunner defies gravity in besting Coyote, there are no laws of physics that temper the way one bit of multimedia leads to the next. Yes, there is a modicum of user feedback in the way we direct some of the action when browsing and e-shopping, but it's quite illusory; for the most part all we're really doing is flicking channels across a billion pages.

It's the suspension of disbelief when browsing that lies at the heart of many of the safety problems we're now seeing. Inevitably we lose our bearings in the totally synthetic World Wide Web. Without even realising it, we're taken in by a virtual reality, and we become captive to social engineering.

But I don't think it's possible to tackle online safety by merely countering users' credulity. Education is not the silver bullet, because the Internet is really so technologically complex and abstract that it lies beyond the comprehension of most lay people.

Using the Internet 'safely' today requires deep technical skills, comparable to the level of expertise needed to operate an automobile circa 1900. Back then you needed to be able to do all your own mechanics [roughly akin to the mysteries of maintaining anti-virus software], look after the engine [i.e. configure the operating system and firewall], navigate the chaotic emerging road network [there is as yet no trusted directory for the Internet, nor any road rules], and even figure out how to fuel the contraption [consumer IT supply chains are about as primitive as the gasoline industry was 100 years ago]. The analogy with the early car industry becomes especially sharp for me when I hear utopian open source proponents argue that writing one's own software is the best way to be safe online.

The Internet is so critical (I'd have thought this was needless to say) that we need ways of working online that don't require us to all be DIY experts.

I wrote a first draft of this blog six years ago, and at that time I called for patience in building digital literacy and sophistication. "It took decades for safe car and road technologies to evolve, and the Internet is still really in its infancy," I said in 2009. But I'm less relaxed about this now, on the brink of the Internet of Things. It's great that policy makers like the US FTC are calling on connected device makers to build in security and privacy, but I suspect the Internet of Things will require the same degree of activist oversight and regulation as the auto industry does, for the sake of public order and the economy. Do we have the appetite to temper breakneck innovation with safety rules?

Posted in Culture, Internet, Security

Consumerization of Authentication

For the second year running, the FIDO Alliance hosted a consumer authentication showcase at CES, the gigantic Consumer Electronics Show in Las Vegas, this year featuring four FIDO Alliance members.

This is a watershed in Internet security and privacy - never before has authentication been a headline consumer issue.

Sure we've all talked about the password problem for ten years or more, but now FIDO Alliance members are doing something about it, with easy-to-use solutions designed specifically for mass adoption.

The FIDO Alliance is designing the authentication plumbing for everything online. They are creating new standards and technical protocols allowing secure personal devices (phones, personal smart keys, wearables, and soon a range of regular appliances) to securely transmit authentication data to cloud services and other devices, in some cases eliminating passwords altogether.
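
To make this concrete, here is a minimal sketch of the kind of challenge-response flow that FIDO protocols build on: public key cryptography, with the private key held on the user's personal device. This is my own illustration in Python using the third-party cryptography package, not the actual UAF or U2F wire protocol.

    # Conceptual sketch only - not the FIDO UAF/U2F wire protocol.
    # Requires the third-party 'cryptography' package.
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Registration: the key pair is generated on the user's device;
    # only the public key is ever given to the online service.
    device_key = Ed25519PrivateKey.generate()          # never leaves the device
    registered_public_key = device_key.public_key()    # stored by the service

    # Authentication: the service issues a fresh random challenge ...
    challenge = os.urandom(32)

    # ... the device signs it locally (typically after a PIN or biometric check) ...
    assertion = device_key.sign(challenge)

    # ... and the service checks the assertion against the registered public key.
    try:
        registered_public_key.verify(assertion, challenge)
        print("Authenticated")
    except InvalidSignature:
        print("Rejected")

Because each challenge is random and the response proves possession of a key that never leaves the device, there is no shared secret sitting on a server for phishers or database thieves to harvest.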

See also my ongoing FIDO Alliance research at Constellation.

Posted in Privacy, Identity, Constellation Research, Security

We cannot pigeon-hole risk

In electronic business, Relying Parties (RPs) need to understand their risks of dealing with the wrong person (say a fraudulent customer, or a disgruntled ex-employee), determine what they really need to know about those people in order to help manage risk, and then in many cases, design a registration process for bringing those people into the business fold. With federated identity, the aim is to offload the registration and other overheads onto an Identity Provider (IdP). But evaluating IdPs and forging identity management arrangements has proven to be enormously complex, and the federated identity movement has been looking for ways to streamline and standardize the process.

One approach is to categorise different classes of IdP, matched to different transaction types. "Levels of Assurance" (LOAs) have been loosely standardised by many governments and in some federated identity frameworks, like the Kantara Initiative. The US Electronic Authentication Guideline, NIST SP 800-63, is one of the preeminent de facto standards, adopted by the National Strategy for Trusted Identities in Cyberspace (NSTIC). But over the years, adoption of SP 800-63 in business has been disappointing, and now NIST has announced a review.

One of my problems with LOAs is simply stated: I don't believe it's possible to pigeon-hole risk.

With risk management, the devil is in the detail. Risk Management standards like ISO 31000 require organisations to start by analysing the threats that are peculiar to their environment. It's folly to take short cuts here, and it's also well recognised that you cannot "outsource" liability.

To my mind, the LOA philosophy goes against risk management fundamentals. Coming up with an LOA rating is an intermediate step that takes an RP's risk analysis and squeezes it into a bin (losing lots of information as a result); the bin is then used to shortlist candidate IdPs, before going into detailed due diligence where all those risk details have to be put back on the table.
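
To see how much information that binning step throws away, consider a toy sketch, entirely my own and not drawn from SP 800-63 or any other framework, of two hypothetical RPs whose quite different threat profiles collapse into the very same level:

    # Toy illustration of lossy LOA binning; the RPs and scores are hypothetical.
    rp_threat_profiles = {
        "online_pharmacy": {"identity_fraud": 4, "payment_fraud": 2, "privacy_breach": 5},
        "ticket_reseller": {"identity_fraud": 2, "payment_fraud": 5, "privacy_breach": 1},
    }

    def to_loa(profile):
        """Collapse a whole threat profile into one level from 1 to 4 (the lossy step)."""
        worst = max(profile.values())
        return min(4, 1 + worst // 2)

    for rp, profile in rp_threat_profiles.items():
        print(rp, "-> LOA", to_loa(profile))

    # Both RPs land on "LOA 3", yet the controls each actually needs are quite
    # different, so due diligence has to reopen all the detail the level discarded.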

I think we all know by now of cases where RPs have looked at candidate IdPs at a given LOA, been less than satisfied with the available offerings, and have felt the need for an intermediate level, something like "LOA two and a half" (this problem was mentioned at CIS 2014 more than once, and I have seen it first hand in the UK IDAP).

Clearly what's going on here is that an RP's idea of "LOA 2" differs from a given IdP's idea of the same LOA 2. This is because everyone's risk appetite and threat profile is different. Moreover, the detailed prescription of "LOA 2" must differ from one identity provider to the next. When an RP thinks they need "LOA 2.5", what they're really asking for is a customised identification. If an off-the-shelf "LOA 2" isn't what it seems, then there can't be any hope for an agreed intermediate LOA 2.5. Even if an IdP and an RP agree in one instance, soon enough we will get a fresh call for "LOA 2.75 please".

We cannot pigeonhole risk. Attaching chunky, one-dimensional Levels of Assurance is misleading. There is no getting away from the need for detailed analysis of the threats, and therefore of the authentication measures required.

Posted in Security, Identity, Federated Identity

Making cyber safe like cars

This is an updated version of arguments made in Lockstep's submission to the 2009 Cyber Crime Inquiry by the Australian federal government.

In stark contrast to other fields, cyber safety policy is almost exclusively preoccupied with user education. It's really an obsession. Governments and industry groups churn out volumes of well-meaning and technically reasonable security advice, but for the average user, this material is overwhelming. There is a subtle implication that security is for experts, and that the Internet isn't safe unless you go to extremes. Moreover, even if consumers do their very best online, their personal details can still be stolen in massive criminal raids on databases that hardly anyone even knows exist.

Too much onus is put on regular users protecting themselves online, and this blinds us to potential answers to cybercrime. In other walks of life, we accept a balanced approach to safety, and governments are less reluctant to impose standards than they are on the Internet. Road safety for instance rests evenly on enforceable road rules, car technology innovation, certified automotive products, mandatory quality standards, traffic management systems, and driver training and licensing. Education alone would be nearly worthless.

Around cybercrime we have a bizarre allergy to technology. We often hear that 'preventing data breaches is not a technology issue', which may be politically correct but is faintly ridiculous. Nobody would ever say that preventing car crashes is 'not a technology issue'.

Credit card fraud and ID theft in general are in dire need of concerted technological responses. Consider that our Card Not Present (CNP) payments processing arrangements were developed many years ago for mail orders and telephone orders. It was perfectly natural to co-opt the same processes when the Internet arose, since it seemed to be just another communications medium. But the Internet turned out to be more than an extra channel: it connects everyone to everything, around the clock.

The Internet has given criminals x-ray vision into people's banking details, and perfect digital disguises with which to defraud online merchants. There are opportunities for crime now that are radically different, both quantitatively and qualitatively, from what went before. In particular, because identity data is available by the terabyte and digital systems cannot tell copies from originals, identity takeover is child's play.

You don't even need to have ever shopped online to fall foul of CNP fraud. Most stolen credit card numbers are obtained en masse by criminals breaking into obscure backend databases. These attacks go on behind the scenes, out of sight of even the most careful online customers.

So the standard cyber security advice misses the point. Consumers are told earnestly to look out for the "HTTPS" padlock that purportedly marks a site as secure, to have a firewall, to keep their PCs "patched" and their anti-virus up to date, to only shop online at reputable merchants, and to avoid suspicious looking sites (as if cyber criminals aren't sufficiently organised to replicate legitimate sites in their entirety). But none of this advice touches on the problem of coordinated massive heists of identity data.

Merchants are on the hook for unwieldy and increasingly futile security overheads. When a business wishes to accept credit card payments, it's straightforward in the real world to install a piece of bank-approved terminal equipment. But to process credit cards online, shopkeepers have to sign up to onerous PCI-DSS requirements that in effect require even small business owners to become IT security specialists. But to what end? No audit regime will ever stop organised crime. To stem identity theft, we need to make stolen IDs less valuable.

All this points to urgent public policy matters for governments and banks. It is not enough to put the onus on individuals to guard against ad hoc attacks on their credit cards. Systemic changes and technological innovation are needed to render stolen personal data useless to thieves. It's not that the whole payments processing system is broken; rather, it is vulnerable at just one point where stolen digital identities can be abused.
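
One way to picture the kind of innovation required: a static card number is valuable to thieves precisely because any copy of it is as good as the original, whereas a cryptogram generated afresh for each transaction by a key held in the cardholder's device cannot be replayed. The following is a toy sketch of that contrast, my own illustration and not any actual payments protocol.

    # Toy contrast between replayable static card data and a one-time cryptogram.
    import hashlib, hmac, os

    card_number = "4111111111111111"     # static data: a stolen copy is as good as the original

    def merchant_accepts_static(number):
        return number == card_number      # a breached database defeats this check completely

    # Alternative: the cardholder's device holds a secret key and signs each
    # transaction together with a fresh, issuer-supplied nonce.
    device_secret = os.urandom(32)        # never leaves the device

    def sign_transaction(amount, nonce):
        msg = "{}:{}".format(amount, nonce.hex()).encode()
        return hmac.new(device_secret, msg, hashlib.sha256).hexdigest()

    def issuer_verifies(amount, nonce, cryptogram):
        return hmac.compare_digest(sign_transaction(amount, nonce), cryptogram)

    nonce_1 = os.urandom(16)
    genuine = sign_transaction("99.00", nonce_1)
    print(issuer_verifies("99.00", nonce_1, genuine))    # True: legitimate purchase

    # A thief who copies that cryptogram cannot reuse it on a new transaction:
    nonce_2 = os.urandom(16)
    print(issuer_verifies("250.00", nonce_2, genuine))   # False: the stolen copy is worthless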

Digital identities are the keys to our personal kingdoms. As such they really need to be treated as seriously as car keys, which have become very high tech indeed. Modern car keys cannot be duplicated at a suburban locksmith. It's possible you've come across office and filing cabinet keys that carry government security certifications. And we never use the same keys for our homes and offices; we wouldn't even consider it (which points to the basic weirdness in Single Sign On and identity federation).

In stark contrast to car keys, almost no attention is paid to the pedigree of digital identities. Technology neutrality has bred a bewildering array of ad hoc authentication methods, including SMS messages, one-time password generators, password calculators, grid cards and picture passwords; at the same time we've done nothing at all to inhibit the re-use of stolen IDs.

It's high time government and industry got working together on a uniform and universal set of smart identity tools to properly protect consumers online.

Stay tuned for more of my thoughts on identity safety, inspired by recent news that health identifiers may be back on the table in the gigantic U.S. e-health system. The security and privacy issues are large but the cyber safety technology is at hand!

Posted in Fraud, Identity, Internet, Payments, Privacy, Security

RSS error - you may have missed three blog posts

Sorry followers, but I had an error in the HTML of a mid-December blog post, and my RSS feed was probably broken. You might have missed these three posts:

The consumerization of security

Increasingly, commentators are calling into question the state of information security. It's about time. We infosec professionals need to take action before our customers force us to.

Standard security is just not intellectually secure. Information Security Management Systems and security audits are based on discredited quality management frameworks like ISO 9000 and waterfall methodologies. The derivative PCI-DSS regime mitigates accidental losses and amateur attacks but is farcically inadequate in the face of organised crime. The economics of perimeter security are simply daft: many databases are now worth billions of dollars to identity thieves, but they're protected by meagre firewalls and administrators with superuser privileges on $40K salaries.

Threat & Risk Assessments have their roots in Failure Modes, Effects and Criticality Analysis (FMECA), which is hopeless in the highly non-linear and unpredictable world of software, where a trivial mistake in one part of a program can have unlimited impact on the whole system; witness the #gotofail episode. Software is so easy to write and businesses are so obsessed with time to market that the world now rests on layer upon layer of bloated spaghetti code. The rapidity of software development has trumped quality and UI design. We have fragile home computers that are impossibly complex to operate safely, and increasingly, Internet-connected home appliances with the same characteristics.
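
For what it's worth, the #gotofail flaw is a neat illustration of that non-linearity. The sketch below is a Python analogue of the pattern, not Apple's actual code (which was C, with a duplicated "goto fail;" statement): one stray pasted line quietly turns the decisive check into dead code.

    # Python analogue of the #gotofail class of bug (not Apple's actual C code):
    # a single pasted line makes the crucial signature check unreachable.
    SUCCESS, FAIL = 0, -1

    def verify_signed_key_exchange(params_hash_ok, signature_ok):
        err = SUCCESS
        if not params_hash_ok:
            return FAIL
        return err        # pasted by accident: always returns SUCCESS here,
                          # so the check below can never run
        if not signature_ok:
            return FAIL
        return err

    # A forged handshake now "verifies":
    print(verify_signed_key_exchange(params_hash_ok=True, signature_ok=False))   # prints 0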

We can't adequately protect credit card numbers, yet we're joy-riding like a 12-year-old on a stolen motorcycle into an Internet of Things.

We're going to have to fix complexity and quality before security stands a chance.

Maybe the market will come to the rescue. Consumers seem to tolerate crappy computer quality to some degree, doubtless weighing up the benefits of being online versus the hassle of the occasional blue screen or hard drive crash. But when things like cars, thermostats and swimming pool filters, which don't need to be computers, become computers, consumers may make a harsher judgement of technology reliability.

Twenty years ago when I worked in medical device software -- pre-Internet, let alone the Internet of Things -- I recall an article about quality which predicted the public would paradoxically put up with more bugs in flight control software than they would in a light switch. In a way, that analysis predicted one of the driving forces for technology today: consumerization.

Posted in Security