Improving the position of the CISO

Over the years, we security professionals have tried all sorts of things to make better connections with other parts of the business. We have broadened our qualifications, developed new Return on Security Investment tools, preached that security is a "business enabler", and strived to talk about solutions and not boring old technologies. But we've had mixed success.

Once when I worked as a principal consultant for a large security services provider, a new sales VP came into the company with a fresh approach. She was convinced that the customer conversation had to switch from technical security to something more meaningful to the wider business: Risk Management. For several months after that I joined call after call with our sales teams, all to no avail. We weren't improving our lead conversions; in fact, with banks we seemed to be going backwards. And then it dawned on me: there isn't much anyone can tell bankers about risk that they don't already know.

Joining the worlds of security and business is easier said than done. So what is the best way for security line managers to engage with their peers? How can they truly contribute to new business instead of being limited to protecting old business? In a new investigation at Constellation Research, I've been looking at how classical security analysis skills and tools can be leveraged for strategic information management.

Remember that the classical frame for managing security is "Confidentiality-Integrity-Availability" or C-I-A. This is how we conventionally look at defending enterprise information assets; threats to security are seen in terms of how critical data may be exposed to unauthorised access, or lost, damaged or stolen, or otherwise made inaccessible to legitimate users. The stock-in-trade for the Chief Information Security Officer (CISO) is the enterprise information asset register and the continuous exercise of Threat & Risk Assessment around those assets.

I suggest that this way of looking at assets can be extended, shifting from a defensive mindset to a strategic, forward outlook. When the CISO has developed a bird's-eye view of their organisation's information assets, they are ideally positioned to map the value of those assets more completely. What is it that makes information valuable, exactly? It depends on the business - and security professionals are very good at looking at context. In financial services or technology, for example, companies compete on the basis of their original research, so it's the lead time to discovery that sets them apart. In healthcare and retail, on the other hand, the completeness of customer records is a critical differentiator, because it allows better quality relationships to be created. And when dealing with sensitive personal information, as in the travel and hospitality industry, the consent and permissions attached to data records determine how they may be leveraged for new business. These are the sorts of things that make different data valuable in different contexts.

CISOs are trained to look at data through different prisms and to assess data in different dimensions. I've found that CISOs are therefore ideally qualified to bring a fresh approach to building the value of enterprise information assets. They can take a more pro-active role in information management, and carve out a new strategic place for themselves in the C-suite.

My new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory

From Information Security to Digital Competitiveness

Exploring new strategic opportunities for CIOs and CISOs.

For as long as we've had a distinct information security profession, it has been said that security needs to be a "business enabler". But what exactly does that mean? How can security professionals advance from their inherently defensive postures, into more strategic positions, and contribute actively to the growth of the business? This is the focus of my latest work at Constellation Research. It turns out that security professionals have special tools and skills ideally suited to a broader strategic role in information management.

The role of Chief Information Security Officer (CISO) is a tough one. Security is red hot. Not a week goes by without news of another security breach.

Information now is the lifeblood of most organisations; CISOs and their teams are obviously crucial in safeguarding that. But a purely defensive mission seldom allows for much creativity, or a positive reputation amongst one's peers. A predominantly reactive work mode -- as important as it is from day to day -- can sometimes seem precarious. The good news for CISOs' career security and job satisfaction is that they happen to have special latent skills to innovate and build out those most important digital assets.

Information assets are almost endless: accounts, ledgers and other legal records, sales performance, stock lists, business plans, R&D plans, product designs, market analyses and forecasts, customer data, employee files, audit reports, patent specifications and trade secrets. But what is it about all this information that actually needs protecting? What exactly makes any data valuable? These questions take us into the mind of the CISO.

Security management is formally all about the right balance of Confidentiality, Integrity and Availability in the context of the business. Different businesses have different needs in these three dimensions.

Think of famous industrial secrets like the recipes for KFC or Coca-Cola. These demand the utmost confidentiality and integrity, but the availability of the information can be low (nay, must be low) because it is accessed as a whole so seldom. Medical records too have traditionally needed confidentiality more than availability, but that's changing. Complex modern healthcare demands electronic records, and these do need high availability, especially in emergency care settings.

In contrast, for public information like stock prices there is no value in confidentiality whatsoever, and instead, availability and integrity are paramount. On the other hand, market-sensitive information that listed companies periodically report to stock exchanges must have very strict confidentiality for a relatively brief period.

Security professionals routinely compile Information Asset Inventories and plan for appropriate C-I-A for each type of data held. From there, a Threat & Risk Assessment (TRA) is generally undertaken, to examine the adverse events that might compromise the Confidentiality, Integrity and/or Availability. The likelihood and the impact of each adverse event are estimated and multiplied together to gauge the practical risk posed by each known threat. By prioritising counter-measures for the identified threats, in line with the organisation's risk appetite, the TRA helps guide a rational program of investment in security.
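
To make the arithmetic concrete, here's a minimal sketch of that likelihood-times-impact ranking in C. The threat names and the 1-to-5 scales are illustrative assumptions of mine, not drawn from any real asset register:

```c
#include <stdio.h>
#include <stdlib.h>

/* One row of a toy Threat & Risk Assessment. */
typedef struct {
    const char *threat;
    int likelihood;   /* 1 (rare) .. 5 (almost certain) */
    int impact;       /* 1 (negligible) .. 5 (severe)   */
} ThreatEntry;

/* Sort descending by likelihood x impact. */
static int by_risk_desc(const void *a, const void *b) {
    const ThreatEntry *x = a, *y = b;
    return (y->likelihood * y->impact) - (x->likelihood * x->impact);
}

int main(void) {
    ThreatEntry tra[] = {
        { "Insider exfiltration of customer records", 2, 5 },
        { "Ransomware on file servers",               3, 4 },
        { "Defacement of public website",             3, 2 },
    };
    size_t n = sizeof tra / sizeof tra[0];

    /* Rank threats to guide counter-measure spend. */
    qsort(tra, n, sizeof tra[0], by_risk_desc);
    for (size_t i = 0; i < n; i++)
        printf("risk %2d  %s\n",
               tra[i].likelihood * tra[i].impact, tra[i].threat);
    return 0;
}
```

A real TRA then weighs these scores against the organisation's risk appetite before committing budget; the sketch only shows the ranking mechanics.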

Now, their practical experience can put CISOs in a special position to enhance their organisation's information assets, rather than merely hardening those assets against negative impacts.

Here's where the CISO's mindset comes into play in a new way. The real value of information lies not so much in the data itself as in its qualities. Remember the cynical old saw "It's not what you know, it's who you know". There's a serious side to the saying, which highlights that really useful information has pedigree.

So the real action is in the metadata; that is, data about data. It may have got a bad rap recently thanks to surveillance scandals, but various thinkers have long promoted the importance of metadata. For example, back in the 1980s, Citibank CEO Walter Wriston famously said "information about money will become almost as important as money itself". What a visionary endorsement of metadata!

The important latent skill I want to draw out for CISOs is their practiced ability to deal with the qualities of data. To bring greater value to the business, CISOs can start thinking about the broader pedigree of data and not merely its security qualities. They should spread their wings beyond C-I-A, to evaluate all sorts of extra dimensions, like completeness, reliability, originality, currency, privacy and regulatory compliance.
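
To show what I mean, here is a hypothetical sketch of a register entry extended beyond C-I-A in just this way. The field names and ratings are my own illustrative assumptions, not any standard schema:

```c
#include <stdio.h>

/* A toy asset register entry: the classic C-I-A ratings,
   plus some of the broader value dimensions discussed above. */
typedef struct {
    const char *asset;
    int confidentiality, integrity, availability;  /* classic C-I-A, rated 1-5 */
    int completeness, currency, consent;           /* extra value dimensions, 1-5 */
} AssetEntry;

int main(void) {
    AssetEntry reg[] = {
        { "Customer purchase histories", 4, 4, 3,  5, 4, 2 },
        { "Published stock prices",      1, 5, 5,  5, 5, 5 },
    };
    for (size_t i = 0; i < sizeof reg / sizeof reg[0]; i++)
        printf("%-28s C%d I%d A%d | complete %d current %d consent %d\n",
               reg[i].asset,
               reg[i].confidentiality, reg[i].integrity, reg[i].availability,
               reg[i].completeness, reg[i].currency, reg[i].consent);
    return 0;
}
```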

The core strategic questions for the modern CISO are these: What is it about your corporate information that gives you competitive advantage? What exactly makes information valuable?

The CISO has the mindset and the analytical tools to surface these questions and positively engage their executive peers in finding the answers.

My new Constellation Research report will be published soon.

Posted in Security, Privacy, Management theory, Constellation Research

Revisiting software professionalism

The ongoing debate (or spat) on Twitter about the "No Estimates" movement had me reaching for the archives.

Some now say that being forced to provide estimates is somehow counter-productive for software developers. I've long thought about programming productivity, and the paradox that software is too soft.

Some programmers want special treatment. In effect, "No Estimates" proponents are claiming their particular work is not amenable to traditional metrics and management. Now in a way, they're right; there is as yet no such thing as software "engineering". There are none of the handbooks or standards that feature in chemical, mechanical and electrical engineering. But nevertheless, if a programmer knows what they're doing - if they know their subject matter and how their code behaves - then providing estimates is not all that difficult. Disclaiming one's ability to predict how long a task will take is a weird way to try and engage with the business.

Software is definitely a difficult medium. It's highly non-linear, and breeds amazing complexity. But a great many of today's problems, like the recent #gotofail and Heartbleed scandals, are manifestly due to chaotic development practices.

As such, programmers are part of the problem.

I once wrote a letter to the editor of ComputerWorld about this ...


IT Governance

Yes indeed, IT is made the scapegoat for a great many project disasters (ComputerWorld 28 September, 2005, page 1). But it may prove fruitless to force orthodox project management and corporate governance methodologies onto big IT projects. And at the same time, IT "professionals" are not entirely free of blame.

So the KPMG Global IT Project Management Survey found that the vast majority of technology projects run over budget. In the main, "technology" means software, whether we build or buy. The "software crisis" - the systemic inability to estimate software projects accurately and to deliver what's promised - is about 40 years old. And it's more subtle than KPMG suggests in blaming corporate governance. It is fashionable at the moment to look to governance to rectify business problems but in this case, it really is a technology issue.

Software project management truly is different from all other technical fields, for software does not obey the laws of nature. Building skyscrapers, tunnels, dams and bridges is relatively predictable. You start with site surveys and foundations, erect a sturdy framework, fill in the services, fit it out, and take away the scaffolding. Specifications don't change much over a several-year project, and the tools don't change at all.

But with software, you can start a big project anywhere you like, and before the spec is signed off. Metaphorically speaking, the plumbing can go in before the framework. Hell, you don't even need a framework! Nothing physical holds a software system up.

And software coding is fast and furious. In a single day, a programmer can create a system more complex than an airport that might take 10,000 person-years to build. So software development is fun. Let's be honest: it's why the majority of programmers chose their craft in the first place.

Ironically it's the rapidity of programming that contributes the most to project overruns. We only use software in information systems because it's fast to write and easy to modify. So the temptation is irresistible to keep specs fluid and to change requirements at any time. Famously, the differences between prototype, "beta release" and product are marginal and arbitrary. Management and marketing take advantage of this fact, and unfortunately software engineers themselves yield too readily to the attraction of the last minute tweak.

The same dynamics of course afflict third party software components. They tend to change too often and fail to meet expectations, making life hell for IT systems integrators.

It won't be until software engineering develops the tools, standards and culture of a true profession that any of this will change. Then corporate governance will have something to govern in big technology projects. Meanwhile, programmers will remain more like playwrights than engineers, and just as manageable.

Posted in Software engineering, Management theory

A software engineer's memoir (work in progress)

I'm an ex software "engineer" [I have reservations about that term] with some life experience of ultra-high-reliability development practices. It's fascinating how much of what I learned about software quality in the 1980s and 90s is relevant to infosec today.

I've had a trip down memory lane triggered by Karen Sandler's presentation at LinuxConf12 in Ballarat http://t.co/xvUkkaGl and her paper "Killed by code".

The software in implantable defibrillators

I'm still working my way through Karen Sandler's materials. So this post is a work in progress.

What's really stark on first viewing of Karen's talk is the culture she experienced and how it differs from the implantable defib industry I knew in its beginnings 25 years ago.

Karen had an incredibly hard and very off-putting time getting the company that made her defib to explain their software. But when we started in this field, every single person in the company -- and many of our doctors -- would have been able to answer the question "What software does this defib run on?" The answer was "ours". And moreover, the FDA was highly aware of software quality issues. The whole medical device industry was still on edge from the notorious Therac-25 episode, a watershed in software verification.

A personal story

I was part of the team that wrote the code for the world's first software-controlled implantable cardioverter/defibrillator (ICD).

In 1990 Telectronics (a tragic legend of Australian technology) released the model 4210, which was just the fourth or fifth ICD on the market (the first few being hard-wired devices from CPI Inc. and Telectronics). The computing technology was severely restricted by several design factors, most especially ultra low power consumption, and a very limited number of microprocessor vendors that would warrant their chips for use in medical devices. The 4210 defib used a semi-customised 8 bit micro-controller based on the 6502, and a 32 KB byte-organised SRAM chip that held the entire executable. The micro clocked at 128kHz, fully eight times slower than the near identical micro in the Apple II a decade earlier. The software had to be efficient, not only to ensure it could make some very tough real time rendezvous, but to keep the power consumption down; the micro consumed about 30% of the device's power over its nominal five year lifetime.

Software development

We wrote mostly in C, with some assembly coding for the kernel and some performance-sensitive routines. The kernel was of our own design: multi-tasking, with hard real-time performance requirements (in particular, for obvious reasons the system had to respond within tight specs to heartbeat interrupts, and we had to show we were never going to miss one!). We also wrote the C compiler.
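
Showing you will never miss an interrupt comes down to bounding the worst-case execution time of the critical path -- by hand if necessary -- rather than measuring an average. Here's a toy sketch of that budgeting style in C; the cycle counts and the deadline are invented for illustration, not the 4210's real figures:

```c
#include <stdio.h>
#include <stdint.h>

#define CLOCK_HZ       128000UL  /* the 4210's 128 kHz clock */
#define DEADLINE_TICKS 64        /* hypothetical service budget per heartbeat event */

/* Hypothetical worst-case cycle costs of each step on the critical path. */
enum { SAVE_CTX = 8, READ_SENSE = 20, CLASSIFY = 24, RESTORE_CTX = 8 };

int main(void) {
    uint32_t worst_case = SAVE_CTX + READ_SENSE + CLASSIFY + RESTORE_CTX;
    printf("worst-case service: %lu ticks of %d budget (%.2f ms at %lu Hz)\n",
           (unsigned long)worst_case, DEADLINE_TICKS,
           1000.0 * worst_case / CLOCK_HZ, CLOCK_HZ);
    /* A hard real-time argument bounds the longest possible path;
       it never relies on averages or measurement luck. */
    return worst_case <= DEADLINE_TICKS ? 0 : 1;
}
```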

The 4210's software was 40,000 lines of C, developed by a team of 5-6 over several years; the total effort was 25 person-years. Some of the testing and pre-release validation is described in my blog post about coding being like playwriting. The final code inspection involved a team of five working five-to-six hour days for two months, reading aloud and understanding every single line. When occasion called for checking assembly instructions, we sometimes took turns with pencil and paper pretending to be the accumulators, the index registers, the program counter and so on. No stone was left unturned.

The final walk-through was quite a personnel management challenge. One of the senior engineers (a genius who also wrote our kernel and compiler) lobbied for inspecting the whole executable because he didn't want to rely on the correctness of the compiler -- but that would have taken six months. So we compromised by walking through only the assembly code for the critical modules, like the tachycardia detector and the interrupt handlers.

I mentioned that the kernel and compiler were home-grown. So this meant that the company controlled literally every single bit of code running in its defibs. And I reiterate we had several individuals who knew the source code end to end.

By the way, these days I will come across developers in the smartcard industry who find it hard working on applets that are 5 or 10 kilobytes small. Compare say a digital signing applet with an ICD, with its cardiac monitoring algorithms, treatment algorithms, telemetry controller, data logging and operating system all squeezed into 32KB.

Reliability

We amassed several thousand implant-years of experience with the 4210 before it was superseded. After release, we found two or three minor bugs, which we fixed with software upgrades. None would have caused a misfire, neither false positive nor false negative.

Yes, for the upgrade we could write into the RAM over a proprietary telemetry protocol. In fact the main reason for the one major software upgrade in the field was to add error correction, because after hundreds of device-years we noticed higher than expected bit flips from natural background radiation. That's a helluva story in itself. Had the code been in ROM, we couldn't have changed it -- but then we wouldn't have had to change it for bit flips either.
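
As a generic illustration only (not the 4210's actual scheme), here is the textbook approach to surviving isolated bit flips: a Hamming(7,4) single-error-correcting code, sketched in C:

```c
#include <stdio.h>
#include <stdint.h>

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
   Bit layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4. */
static uint8_t hamming_encode(uint8_t nibble) {
    uint8_t d1 = (nibble >> 0) & 1, d2 = (nibble >> 1) & 1;
    uint8_t d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 << 0 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Correct any single flipped bit and return the 4 data bits. */
static uint8_t hamming_decode(uint8_t cw) {
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t s3 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t syndrome = (uint8_t)(s1 | s2 << 1 | s3 << 2); /* 1-based error position */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));            /* flip the bad bit back */
    return (uint8_t)(((cw >> 2) & 1)      | ((cw >> 4) & 1) << 1 |
                     ((cw >> 5) & 1) << 2 | ((cw >> 6) & 1) << 3);
}

int main(void) {
    uint8_t cw = hamming_encode(0xB);  /* store the nibble 1011 */
    cw ^= 1u << 4;                     /* simulate a radiation-induced bit flip */
    printf("recovered nibble: 0x%X\n", hamming_decode(cw)); /* prints 0xB */
    return 0;
}
```

The price is three parity bits for every four data bits; whatever the overhead of the scheme you choose, the principle is the same: recompute the syndrome on every read, and flip the bit it points to.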

Morals of the story

Anyway, some of the morals of the story so far:


  • Software then was cool and topical, and the whole company knew how to talk about it. The real experts -- the dozen or so people in Sydney directly involved in the development -- were all well known worldwide by the executives, the sales reps, the field clinical engineers, and regulatory affairs. And we got lots of questions (in contrast to Karen Sandler's experience, where all the cardiologists and company people said nobody ever asked about the code).

  • Everything about the software was controlled by the company: the operating system, the chip platform, the compiler, the telemetry protocol.

  • We had a team of people that knew the code like the backs of their hands. Better in fact. It was reliable and, in hindsight, impregnable. Not that we worried about malware back in 1987-1990.

Where has software development gone?

So the sorts of issues that Karen Sandler is raising now, over two decades on, are astonishing to me on so many levels.


  • Why would anyone decide to write life support software on someone else's platform?

  • Why would they use wifi or Bluetooth for telemetry?

  • And if the medical device companies cut corners in software development, one wonders what the defence industry is doing with its drone flight controllers and other "smart" weaponry, running countless millions of lines of opaque software.

[TO BE CONTINUED]

Posted in Software engineering, Management theory

The end of standards

A colleague drew my attention to what he called "yet another management standard". Which got me thinking about where our preoccupation with standards might be heading and where it might end.

Most modern risk management standards allow for exception management. If a company has a formal procedure in place -- for example a Disaster Recovery Plan -- but something out of the ordinary comes up, then the latest standards provide management with flexibility to vary their response to suit their particular circumstances; in other words, management can generally waive regular procedures and "accept the risk". The company can remain in compliance with management systems and standards if it documents these exceptions carefully.

So ... what if a company says "the hell with this latest management standard, we don't want to have anything to do with it". If the standard allows for exceptions, then the company may still be in compliance with the standard by not being in compliance with it.

How about that: a standard you cannot help but comply with!

And then we wouldn't need auditors. We might even start to make some real progress.



Here's a less facetious analysis of the perils of over-standardisation: http://lockstep.com.au/blog/2010/12/21/no-algorithm-for-management.

Posted in Management theory

Policy fads and policy failures

In cyber security, user awareness, education and training have long gone past their Use By Date. We have technological problems that need technological fixes, yet governments and businesses remain averse to investing in real security. Instead, the long-standing management fad is to 'audit ourselves' out of trouble, and to over-play user awareness as a security measure when the systems users are forced to work with are inherently insecure.

It’s a massive systemic failure in the security profession.

We see a policy and compliance fixation everywhere. The dominant philosophy in security is obsessed with process. The international information security standard ISO 27001 is a management system standard; it has almost nothing to say universally about security technology. Instead the focus is on documentation and audit. Box ticking. It’s intellectually a carbon-copy of the ISO 9001 quality management standard, and we all know the limitations of that.

Or do we? Remember that those who don't know the lessons of history are condemned to repeat them. I urge all infosec practitioners to read this decade-old article: Is ISO 9000 really a standard? -- it should ring some bells.

Education, policy and process are almost totally useless in fighting ID theft. Consider this: the CD-ROMs with 25,000,000 financial records, lost in the mail by British civil servants in 2007, were valued at 1.5 billion pounds at the going rate on the stolen identity black market -- about 60 pounds per record. With stolen data being so immensely valuable, just how is security policy ever going to stop insiders cashing in on such treasure?

In another case, after data was lost by the Australian Tax Office, there was earnest criticism that the data should have been encrypted. But so what if it had been? What common encryption method could not be cracked by organised crime when there are millions and millions of dollars' worth of value to be gained?

The best example of process- and policy-dominated security is probably the Payment Card Industry Data Security Standard, PCI-DSS. The effectiveness of PCI-DSS and its onerous compliance regime was considered by a US Homeland Security Congressional Committee in March 2009. In hearings, the National Retail Federation submitted that "PCI has been plagued by poor execution ... The PCI guidelines are onerous, confusing, and are constantly changing". They noted the irony that "the credit card companies' rules require merchants to store credit card data that many retailers do not want to keep" (emphasis in original). The committee chair remarked that "The essential flaw with the PCI Standard is that it allows companies to check boxes, but not necessarily be secure. Compliance does not equal security. We have to get beyond check box security."

To really stop ID theft, we need proper technological preventative measures, not more policies and feel-good audits.

The near-exclusive emphasis on user education and awareness is a subtle form of blame shifting. It is simply beyond the capacity of regular users to tell pharming sites from real sites, or even to spot all phishing e-mails. And what of the feasibility of training people to "shop safely" online? It's a flimsy proposition, considering that the biggest cases of credit card theft have occurred at the backend databases of department store chains and payments processors. Most stolen card details in circulation probably originate from regular in-store Card Present transactions, not from Internet sites. The lesson: even if you never ever shop online, you can have your card details stolen and abused behind your back. All the breathless advice about looking out for the padlock is moot.

In other walks of life we don’t put all the onus on user education. Think about car safety. Yes good driving practices are important, but the major focus is on legislated standards for automotive technology, and enforceable road rules. In contrast, Internet security is dominated by a wild west, everyone-for-themselves mentality, leading to a confusing patchwork of security gizmos, proprietary standards and no common benchmarks.

Posted in Security, Management theory, Internet

There is no algorithm for good management

There is a malaise in security. One problem is that as a "profession", we've tried to mechanise security management, as if it were just like generic manufacturing, amenable to ISO 9000-like management standards. We use essentially the same process and policy templates for all businesses. Don't get me wrong: process is important, and we do want our security responses to be repeatable and uniform. But not robotic. The truth is, there is no algorithm for doing the right thing. Moreover, there can never be a universal management algorithm, and an underlying naive faith in such a thing is dulling our collective management skills.

An algorithm is a repeatable set of instructions or recipe that can be followed to automatically perform some task or solve some structured problem. Given the same conditions and the same inputs, an algorithm will always produce the same results. But no algorithm can cope with unexpected inputs or events; an algorithm’s designer needs to have a complete view of all input circumstances in advance.

Mathematicians have long known that some surprisingly simple tasks cannot be done algorithmically, or at least not efficiently. The classic 'travelling salesman' problem, of how to plot the shortest course through multiple connected towns, has no known recipe that scales beyond a handful of towns. There is no way to trisect an angle using only a compass and straightedge. And there is no general way to tell whether any given computer program is ever going to stop.

So when security is concerned so much of the time with the unexpected, we should be doubly careful about formulaic management approaches, especially template policies and checklist-based security audits!

Ok, but what's the alternative? This is extremely challenging, but we need to think outside the check box.

Like any complex management field, security is all about problem solving. There's never going to be a formula for it. Rather, we need to put smart people on the job and let them get on with it, using their experience and their wits. Good security, like good design, frankly involves a bit of magic. We can foster security excellence through genuine expertise, teamwork, research, innovation and agility. We need security leaders who have the courage to treat new threats and incidents on their merits, trust their professional instincts, try new things, break the mould, and have the sense to avoid management fads.

I have to say I remain pessimistic. These are not good times for courageous managers. The first rule of career risk management is to make sure everyone agrees in advance to whatever you plan to do, so the blame can be shared when something goes wrong. This is probably the real reason people are drawn to algorithms in management: they can be documented, reviewed, signed off, and put back on the shelf to wait for a disaster and the inevitable audit. So long as everyone did what they said they were going to do in response to an incident, nobody is to blame.

So I'd like to see a lawsuit against a company with a perfect ISO 27001 record which still got breached, where the lawyers' case is that it is unreasonable to rely on algorithms to manage in the real world.

Posted in Security, Science, Management theory