That is, information security is not intellectually secure. Almost every precept of orthodox information security is ready for a shake-up. Infosec practices are built on crumbling foundations.
UPDATE: I’ve been selected to speak on this topic at the 2014 AusCERT Conference – the biggest information security event in Australasia.
The recent spate of data breaches, at Target, Snapchat, Adobe Systems and RSA to name just a few, shows that orthodox information security is simply not up to the task of securing serious digital assets. We have to face facts: no amount of today's conventional security is ever going to protect assets worth billions of dollars.
Our approach to infosec is based on old management process standards (which can be traced back to ISO 9000) and a ponderous technology neutrality that overly emphasises people and processes. The things we call “Information Security Management Systems” are actually not systems that any engineer would recognise but instead are flabby sets of documents and audit procedures.
“Continuous security improvement” in reality is continuous document engorgement.
Most ISMSs sit passively on shelves and share drives doing nothing for 12 months, until the next audit, when the papers become the centre of attention (not the actual security). Audit has become a sick joke. ISO 27000 and PCI assessors have the nerve to tell us their work only provides a snapshot, and that if a breach occurs between visits, it's not their fault. In other words, they admit that their audits do not predict performance between audits. While nobody is looking, our credit card numbers are about as secure as Schrödinger's cat!
The deep problem is that computer systems have become so complex and so fragile that they are no longer manageable by traditional means. Our standard security tools, including Threat & Risk Assessment and hierarchical layered network design, are rooted in conventional engineering. Failure Modes, Effects and Criticality Analysis works well in linear systems, where small perturbations have small effects, but IT is utterly unlike this. The smallest, most trivial omission in software or in a server configuration can have dire and unlimited consequences. It's like we're playing Jenga.
UPDATE: Barely a month after I wrote this blog, we heard about the "goto fail" bug in Apple's iOS SSL routines, which resulted from one spurious line of code. It might have been more obvious to the programmer or to any code reviewer had the code been indented consistently or had curly braces been used rigorously.
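For the record, the offending pattern looked essentially like this. Below is a minimal, compilable sketch, not Apple's actual source (the real bug was in the SSLVerifySignedServerKeyExchange routine of SecureTransport); the function and stub names here are hypothetical, but the control-flow flaw is the same:

```c
#include <stdio.h>

/* Stubs standing in for the real hashing and signature routines.
 * verify_signature() always fails, simulating a forged certificate. */
static int hash_update(int step)  { (void)step; return 0; } /* 0 = success */
static int verify_signature(void) { return -1; }            /* always fails */

static int verify_server_key_exchange(void)
{
    int err;

    if ((err = hash_update(1)) != 0)
        goto fail;
    if ((err = hash_update(2)) != 0)
        goto fail;
        goto fail;   /* the spurious duplicate line: with no braces it runs
                        unconditionally, jumping past the signature check
                        while err is still 0 ("success") */
    if ((err = verify_signature()) != 0)   /* never reached */
        goto fail;

fail:
    return err;
}

int main(void)
{
    /* Prints 0: the forged signature is reported as valid. */
    printf("verify result: %d\n", verify_server_key_exchange());
    return 0;
}
```

Had the second goto been enclosed in braces with its if statement, it would have been harmless; had unreachable-code warnings been treated as errors, the dead signature check would have been flagged. One stray line, catastrophic and silent failure: exactly the nonlinearity that conventional risk analysis cannot capture.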
Security needs to be re-thought from the ground up. We need some bigger ideas.
We need less rigid, less formulaic security management structures, to encourage people at the coal face to exercise their judgement and skill. We need straight-talking CISOs with deep technical experience of how computers really work, not 'suits' more focused on the C-suite than on the dev teams. We have to stop writing impenetrable, hierarchical security policies and SOPs (in the "waterfall" manner that we recognised decades ago does little good in software development). And we need to equate security with software quality and reliability, and demand that adequate time and resources be allowed for the detailed work to be done right.
If we can't even protect credit card numbers today, then, standing as we are on the brink of the Internet of Things, we urgently need to do things differently.