Lockstep

The dogs bark but the caravan moves on

Question: Can you guess when this was written?

e-security in slow motion

Australia has for many years enjoyed international pre-eminence in e-security. We boast several world-class academic security centres, special concentrations of investment in security R&D, large scale e-government rollouts, much successful commercialisation from CRCs, some of the first progressive electronic transactions legislation, and pioneering IT testing schemes. Australia has indeed punched above its weight in security and trust.

Yet despite all this, everywhere today we find deep confusion. Industry struggles to measure and justify security investments; banks agonise over two-factor authentication and so send mixed messages about their attitudes to identity theft. Vital, compelling programmes are being stymied by an inability to make security happen transparently, economically, reliably and effectively. There are irresistible forces – mostly forces for good – behind e-health, national security, online financial services, electronic government and electronic voting. The nation will not put these programmes on hold. But an inability to think clearly and act decisively on security and trust is fast becoming endemic. Should even one of these social programmes collapse for want of proper security, the results would be unconscionable.

There is a lack of sophistication at most levels of the “security debate”. For example, the natural tension between counter-terrorism and privacy protection is met by idealism on both sides: privacy advocates refer to inalienable rights while defence analysts appeal to the greater good. But security and trust need not be a zero-sum game. New security technologies – smartcards, anonymity protocols, permissions management infrastructure and resilient architectures, to name a few – bring the promise of trusted secure solutions benefiting all.

The pressing challenge for e-security practitioners today is to find better ways to articulate, quantify and specify security practice, and so break out of the apparent slow motion in which much of the field finds itself. We must:

  • progress from art to engineering, through standardised toolkits, handbooks, harmonised professional qualifications and so on
  • modularise security building blocks in order to maximise architectural purity, break down vendor lock-in, and indirectly provide better commercial incentives for specialist companies in the Australian market
  • at the same time, embed security methods and standards across all information systems to the same extent the automotive industry has done with safety engineering, where drivers have come not only to trust complex engineering but also to think and talk meaningfully about safety and performance
  • deconstruct the language used to describe e-security, moving on from the silly metaphors of locks and keys and passports, to engender true trust, where people are neither lulled into a false sense of security nor left dazed and confused.

Strategic e-security objectives

I offer the following distillation of security objectives over the next five to ten years in Australia, spanning business, government and the public:

  • Best practice for the protection of aggregated and/or distributed collections of sensitive information has yet to be accepted beyond the rarefied world of certain federal agencies. Developments in longitudinal health records, inter-organisational data pooling under the banking sector’s Basel II Capital Accord, and international counter-terrorism-related data mining have already outstripped our understanding of concomitant protection profiles and practicable security architectures.
  • Similarly, information security techniques for Critical Infrastructure Protection are far from standardised. Assurance and risk management standards might be widely recognised, but management standards do not define protection profiles or best practice. Organisations’ actual infrastructure protection remains ad hoc, and is extremely sensitive to the skills and resources of the individual security practitioners who happen to be on hand.
  • Trusted e-business modules and appliances are needed at the front and back ends of Internet commerce systems. The required business outcome is that users are able to trust that their Internet connection is intrinsically clean and safe, in much the same way as they trust a telephone connection.
  • The trustworthiness of Open Source software remains something of an article of faith. As Open Source moves into the mainstream, the traditional “ecological” view of Open Source quality needs to be bolstered. No convincing, transparent methodology has yet been developed for assuring the security of Open Source modules while preserving the community’s spirit and rapid responsiveness.
  • Identity Management is much more than a buzzword, yet most available approaches remain proprietary and narrowly focussed on particular commercial technologies. Organisations in many sectors – most notably banking, telecommunications, health and social security – are striving for the “single view of customer” to bring about improved service levels, enhanced up-sell and cross-sell, reduced identity fraud and reduced costs.

Answer: I once applied for the job of Security & Trust Program Leader at NICTA. Part of the application process was to write a vision paper for Australian e-security. The text above is a near verbatim extract from my essay.

It was written over six years ago, in March 2004. I feel it could have been written today.

Posted in Security

Is the cloud sustainable?

The value proposition of cloud computing is basically that backend or server-side computing is somehow better than front-end or client-side. History suggests that the net benefit tends to swing like a pendulum between front and back. I don't think cloud computing will last, for there is an inexorable trend towards the client. It seems people like to keep their computing close.

It's often said that cloud computing is not unlike the time-sharing computing of the 1960s. Or the network computers of the 1990s. These are telling comparisons. So what was the attraction of backend computing in past eras?

In the 1960s, hardware was fiercely expensive and few could afford more than dumb terminals. Moore's Law fixed that problem.

In the 1990s, it was software that was expensive. The basic value proposition of the classic Sun NetPC was that desktop apps from you-know-who were too costly. But software prices have dropped, and the Free and Open Source movements in a sense outcompeted the network computer.

In the current cycle, I think the differences between front and back are more complex (as is the business environment), and there are a number of different reasons to shift once more to the backend. For consumers, until recently, it had to do with the cost of storage: file sharing for photos and the like made sense while terabytes were unaffordable, but already that has changed.

A good deal of cloud 'migration' is happening by stealth, with great new IT services having their origins in the cloud. I'm thinking of course of Facebook and its ilk. A generation seems to be growing up having never experienced fat-client e-mail or building their own website; they aren't moving anything to the cloud; they were born up there and have never experienced anything else. A fascinating dynamic is how Facebook is now trying to attract businesses.

For corporates, much of the benefit of cloud computing relates to compliance. In particular, security, PCI-DSS and data breach disclosure obligations are proving prohibitive for smaller organisations, and outsourcing their IT to cloud providers makes sense.

Yet compliance costs at present are artificially high and are bound to fall. The PCI regime, for instance, is proving to be a wild goose chase, which will end sooner or later when proper security measures are finally deployed to prevent replay of payment card numbers. Information security in general is expensive largely because our commodity PCs, appliances and desktop apps aren't so well engineered. This has to change -- even if it takes another decade -- and when it does, the safety margin of outsourcing services will drop, and once again, people will probably prefer to do their computing closer to home.
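To make the replay point concrete, here is a minimal sketch (my own illustration, not any deployed card scheme; all names and fields are hypothetical) of dynamic transaction authentication: if each payment carries a one-time cryptogram computed by the cardholder's device over the transaction details and a counter, a database full of stolen card numbers becomes useless for replay.

    # Minimal sketch of dynamic transaction authentication (illustrative
    # only, not any specific card scheme). A secret held in the cardholder's
    # device MACs each transaction together with an ever-increasing counter,
    # so a captured card number and cryptogram cannot be replayed.
    import hashlib
    import hmac

    def make_cryptogram(card_secret: bytes, pan: str, amount_cents: int,
                        counter: int) -> str:
        """Compute a one-time MAC over the transaction details."""
        msg = f"{pan}|{amount_cents}|{counter}".encode()
        return hmac.new(card_secret, msg, hashlib.sha256).hexdigest()

    def verify_transaction(card_secret: bytes, pan: str, amount_cents: int,
                           counter: int, cryptogram: str,
                           last_seen_counter: int) -> bool:
        """Issuer-side check: the counter must advance and the MAC must match."""
        if counter <= last_seen_counter:
            return False  # stale counter: a replayed transaction is rejected
        expected = make_cryptogram(card_secret, pan, amount_cents, counter)
        return hmac.compare_digest(expected, cryptogram)

    # A captured transaction verifies once, and only once:
    secret = b"device-held secret"
    c = make_cryptogram(secret, "4111111111111111", 2500, counter=7)
    assert verify_transaction(secret, "4111111111111111", 2500, 7, c, 6)
    assert not verify_transaction(secret, "4111111111111111", 2500, 7, c, 7)

Under this kind of scheme, stored card data loses its street value, and with it much of the rationale for the PCI audit treadmill.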

Still, if cloud computing provides corporates with lower compliance costs for another ten years, then that will be a pretty good trot.

Posted in Security, Privacy, Cloud

NSTIC delayed -- for the wrong reasons

Predictably, ratification by the President of the US National Strategy for Trusted Identities in Cyberspace has been delayed. The discussion paper was only released last June, and its champions pressed for Obama's signature by October. That was never going to happen. We're talking about sophisticated IT here in a technology-neutral policy environment, not a recipe for speedy resolution.

I myself would have hoped for a delay and a review, but on technical rather than political grounds. There are all manner of real problems with NSTIC as presented, which need to be worked through.

1. It's really not an "ecosystem". True ecosystems grow and evolve naturally; they are not architected. Yes, of course there is a marketplace of authentication services and products, but to call it an ecosystem is an attempt to elevate it above the hurly-burly of competitive IT. This is marketing, not ecology. Using such language aims to position the architecture as something kinda saintly and deserving of government stimulus.

2. The NSTIC paper is really an uplift of the OIX whitepaper. OIX itself is the latest incarnation of a long line of security industry consortia, dating from the Liberty Alliance through Kantara and the Information Card Foundation. The steady recycling of federated identity concepts is either a sign that the foundations are not yet stable, or that something is not quite right with the basic premise. Either way, these are not the hallmarks of a new industry that government would normally throw money at.

3. The NSTIC paper is silent on important matters like who exactly will step up to the plate and act as Identity and Attribute Providers. If we're talking general-purpose Identity Providers at high levels of assurance, then we're back on the merry-go-round of Big PKI. High assurance identities tend to become siloed, and useless for cross-domain transactions, because nobody is willing to underwrite liability for misidentification when the stakes are high. I've written recently about this.

4. The biggest technical problem with NSTIC and federated identity in general is that it is still so complex. There are way too many complicating generalisations, and too few simplifying assumptions.

5. The identity metasystem is much more novel than people think. Federated identity calls for orthodox, risk-averse organisations like banks and government agencies to re-imagine themselves as "Identity Providers" and to allow their "identities" to be used in brand new contexts. To make this attractive, some schemes have tried to create new revenue opportunities for the players, but this only complicates things even further. The legal novelty is huge: How does a bank write a contract with a customer that allows the customer to use their bank-issued identity to do business with counterparties that the bank doesn't know? And in transactions that the bank hasn't even thought of yet? Of course you can't write such a contract, and so the federated identity arrangements are full of fine print, restrictions, liability caps ... all the stuff that bogged down Big PKI.
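For what it's worth, the mechanics are skeletal compared with the legalities. The sketch below is my own illustration (hypothetical names, using the third-party Python cryptography package; not NSTIC or any real scheme): an Identity Provider signs an assertion, and any relying party holding the IdP's public key can verify it, including counterparties the IdP has never heard of. Nothing in the verification step answers the contract and liability questions above.

    # Skeletal federated identity flow (an illustrative sketch only).
    # Requires the third-party 'cryptography' package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The Identity Provider (say, a bank) holds a signing key.
    idp_key = Ed25519PrivateKey.generate()
    idp_public_key = idp_key.public_key()

    # The IdP asserts a claim about its customer.
    assertion = b"subject=alice;claim=verified-customer;idp=somebank"
    signature = idp_key.sign(assertion)

    # Any relying party with the IdP's public key can verify the assertion,
    # including one the IdP has no contract with and has never heard of.
    try:
        idp_public_key.verify(signature, assertion)
        print("Assertion verified; but who carries the liability?")
    except InvalidSignature:
        print("Assertion rejected")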

Come on! If anyone is serious about ecological thinking in this space, then it is high time to re-examine why federated identity is so much easier said than done.

The reason, I suggest, is that identities have evolved. Each one of the identities one has -- with banks, government agencies, employers, professional associations and so on -- is really a proxy for the relationship we have in each context. These relationships have conventions and rules and terms & conditions that have evolved over long periods of time, and which rest on crucial simplifying assumptions. We all know identities are context dependent, but it seems that we don't collectively appreciate why this is so. It's because the context (environment) has bred the form (or 'genetics') of each identity, and it's no simple matter to take a stable form and expect it to work properly in a totally different niche.
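A toy data structure (my own construction, with hypothetical names; not anything from the identity literature) makes the same point: once an identity carries its issuing context and terms with it, reusing it in a new niche is visibly not a free operation.

    # Toy model of identity-as-relationship (illustrative only).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Identity:
        holder: str
        issuer: str    # the relationship that bred this identity
        context: str   # the niche it evolved in, e.g. "retail banking"
        terms: tuple   # conditions that evolved within that context

    def accept(identity: Identity, relying_context: str) -> bool:
        # Within its native context the identity works; outside it, the
        # relying party holds none of the conventions the identity rests on.
        return identity.context == relying_context

    alice = Identity("Alice", "Acme Bank", "retail banking",
                     ("liability capped", "approved uses only"))
    assert accept(alice, "retail banking")
    assert not accept(alice, "health records")  # stable form, wrong niche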

Posted in Security, Identity

Security is dead

Is Security Dead, in the same sense as "Quality is Dead", with reference to the formulaic, fashionable, industrialised Total Quality Movement?

Does anyone else see the parallels between infosec and TQM? Both are Politically Correct, proselytising, fervent, and obsessively process-driven. In both quality management and security management we’ve seen a dizzying progression of ever fatter standards (the ISO 9001 and ISO 27000 series), ever more detailed corporate procedure manuals, and truly endless audits.

Want better quality? Want better security? Then you’d better write another Work Instruction and hold another training course! Invariably the response to every new breach is to remediate the security policy.

I see the early days of a long overdue security backlash. The ISO 27001 and PCI-DSS regimes are finally being exposed as robotic. The fad is passing, the hangover is palpable, and critical reappraisal of policy-based security management is imminent. It's such a shame that the security audit industry wasn't recognised sooner as a repeat of the quality movement two decades ago.

Posted in Security

Social Networking in a bubble

Malcolm Gladwell recently wrote a sober assessment of online activism in The New Yorker of October 4, 2010 (see http://www.newyorker.com/reporting/2010/10/04/101004fa_fact_gladwell?currentPage=all).

Gladwell argues that Online Social Networking is less effective than some would have us believe, for several reasons, including the historical lesson that protest movements are best marshalled via strict hierarchies, not loose networks.

I reckon there’s another factor that exaggerates the efficacy of Online Social Networking, and more generally undermines privacy and security online. A sort of suspension of disbelief helps to animate cyberspace. The cues we receive online are unreliable, and our responses unnatural. The medium itself is a huge problem. Until the advent of blogs, user-generated content and social media, the online experience was passive, not much different from watching cartoon shows on TV. Web 1.0 was unreal; almost anything goes. Just as the Roadrunner defies gravity in besting Coyote, there are no laws of physics to moderate how we careen through cyberspace. The loss of one’s bearings online is the root of much cybercrime, and the lack of friction (both physical and social) kills privacy.

But it’s much worse now that Web 2.0 is interactive. As with a professional magic show, audience participation creates compelling mental expectations and amplifies the illusion. The twits are having an increasingly unreal time, smug in an oddly pre-Copernican theatre that places each of them at the centre of the universe.

Posted in Culture, Internet, Social Networking