
Thinking creatively about information assets in retail and hospitality

In my last blog, Improving the Position of the CISO, I introduced new research I've done on extending the classic "Confidentiality-Integrity-Availability" (C-I-A) frame for security analysis to cover the other qualities of enterprise information assets. The idea is to build a comprehensive account of what makes information valuable in the context of the business, leveraging the traditional tools and skills of the CISO. After all, security professionals are particularly good at looking at context. Instead of restricting themselves to defending information assets against harm, CISOs can help enhance those assets by building up their other competitive attributes.

Let's look at some examples of how this would work, in some classic Big Data applications in retail and hospitality.

Companies in these industries have long been amassing detailed customer databases under the auspices of loyalty programs. Supermarkets have logged our every purchase for many years, so they can for instance put together new deals on our favourite items, from our preferred brands, or from competitors trying to get us to switch brands. Likewise, hotels track when we stay and what we do, so they can personalise our experience, tailor new packages for us, and try to cross-sell services they predict we'll find attractive. Behind the scenes, the data also helps operations plan demand, fine-tune logistics and so on.

Big Data techniques amplify the value of information assets enormously, but they can take us into complicated territory. Consider for example the potential for loyalty information to be parlayed into insurance and other financial services products. Supermarkets find they now have access to a range of significant indicators of health & lifestyle risk factors which are hugely valuable in insurance calculations ... if only the data is permitted to be re-purposed like that.

The question is, what is it about the customer database of a given store or hotel that gives it an edge over its competitors? There are many more attributes to think creatively about beyond C-I-A! (A sketch of how these might be captured in an asset register follows the list.)

  • Utility
    It's important to rigorously check that the raw data, the metadata and any derived analytics can actually be put to different business purposes.
    • Are data formats well-specified, and technically and semantically interoperable?
    • What would it cost to improve interoperability as needed?
    • Is the data physically available to your other business systems?
    • Does the rest of the business know what's in the data sets?
  • Completeness
    • Do you know more about your customers than your competitors do?
    • Do you supplement and enrich raw customer behaviours with questionnaires, or linked data?
    • How far back in time do the records go?
    • Do you understand the reasons for any major gaps? Do the gaps themselves tell you anything?
    • What sort of metadata do you have? For example, do you retain time & location, trend data, changes, origins and so on?
  • Currency & Accuracy
    • Is your data up to date? Remember that accuracy can diminish over time, so the sheer age of a long term database can have a downside.
    • What mechanisms are there to keep data up to date?
  • Permissions & Consent
    • Have customers consented to secondary usage of data?
    • Is the consent specific, blanket or bundled?
    • Might customers be surprised and possibly put off to learn how their loyalty data is utilised?
    • Do the terms & conditions of participation in a loyalty program cover what you wish to do with the data?
    • Do the Ts&Cs (which might have been agreed to in the past) still align with the latest plans for data usage?
    • Are there opportunities to refresh the Ts&Cs?
    • Are there opportunities for customers to negotiate the value you can offer for re-purposing the data?

  • Compliance
    When businesses derive new insights from data, it is possible that they are synthesising brand new Personal Information, and non-obvious privacy obligations can go along with that. The competitive advantage of Big Data can be squandered if regulations are overlooked, especially in international environments.
    • So where is the data held, and where does it flow?
    • Are applications for your data compliant with applicable regulations?
    • Is health information or similar sensitive Personal Information extracted or synthesised, and do you have specific consent for that?
    • Can you meet the Access & Correction obligations in many data protection regulations?
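
To make this concrete, here is a minimal sketch in C of how an asset register entry might carry these extra attributes alongside the classic C-I-A ratings. All the field names and ratings are my own illustrations, not drawn from any standard or product:

    /* Hypothetical extended asset register entry: the classic C-I-A
       ratings are kept, and the extra competitive attributes discussed
       above are recorded alongside them. Illustrative names only. */
    enum rating { R_NONE, R_LOW, R_MEDIUM, R_HIGH };

    struct asset_entry {
        const char *name;              /* e.g. "loyalty program database" */
        /* classic security qualities */
        enum rating confidentiality, integrity, availability;
        /* extended competitive qualities */
        enum rating utility;           /* interoperable, reachable by other systems */
        enum rating completeness;      /* coverage, history, metadata richness */
        enum rating currency;          /* up to date, with mechanisms to keep it so */
        enum rating consent;           /* specificity of customer permissions */
        enum rating compliance;        /* standing under data protection law */
        const char *terms_version;     /* which Ts&Cs the data was collected under */
    };

    /* Example entry for a supermarket loyalty database */
    static const struct asset_entry loyalty_db = {
        "loyalty program database",
        R_HIGH, R_HIGH, R_MEDIUM,
        R_MEDIUM, R_HIGH, R_MEDIUM, R_LOW, R_MEDIUM,
        "2013 program terms"
    };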

For more detail, my new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory, Privacy, Security

Improving the position of the CISO

Over the years, we security professionals have tried all sorts of things to make better connections with other parts of the business. We have broadened our qualifications, developed new Return on Security Investment tools, preached that security is a "business enabler", and strived to talk about solutions and not boring old technologies. But we've had mixed success.

Once when I worked as a principal consultant for a large security services provider, a new sales VP came into the company with a fresh approach. She was convinced that the customer conversation had to switch from technical security to something more meaningful to the wider business: Risk Management. For several months after that I joined call after call with our sales teams, all to no avail. We weren't improving our lead conversions; in fact with banks we seemed to be going backwards. And then it dawned on me: there isn't much anyone can tell bankers about risk they don't already know.

Joining the worlds of security and business is easier said than done. So what is the best way for security line managers to engage with their peers? How can they truly contribute to new business instead of being limited to protecting old business? In a new investigation I've done at Constellation Research I've been looking at how classical security analysis skills and tools can be leveraged for strategic information management.

Remember that the classical frame for managing security is "Confidentiality-Integrity-Availability" or C-I-A. This is how we conventionally look at defending enterprise information assets; threats to security are seen in terms of how critical data may be exposed to unauthorised access, or lost, damaged or stolen, or otherwise made inaccessible to legitimate users. The stock-in-trade for the Chief Information Security Officer (CISO) is the enterprise information asset register and the continuous exercise of Threat & Risk Assessment around those assets.

I suggest that this way of looking at assets can be extended, shifting from a defensive mindset to a strategic, forward outlook. When the CISO has developed a birds-eye view of their organisation's information assets, they are ideally positioned to map the value of the assets more completely. What is it that makes information valuable exactly? It depends on the business - and security professionals are very good at looking at context. So for example, in financial services or technology, companies can compete on the basis of their original research, so it's the lead time to discovery that sets them apart. On the other hand, in healthcare and retail, the completeness of customer records is a critical differentiator for it allows better quality relationships to be created. And when dealing with sensitive personal information, as in the travel and hospitality industry, the consent and permissions attached to data records determine how they may be leveraged for new business. These are the sorts of things that make different data valuable in different contexts.

CISOs are trained to look at data through different prisms and to assess data in different dimensions. I've found that CISOs are therefore ideally qualified to bring a fresh approach to building the value of enterprise information assets. They can take a more pro-active role in information management, and carve out a new strategic place for themselves in the C-suite.

My new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory

From Information Security to Digital Competitiveness

Exploring new strategic opportunities for CIOs and CISOs.

For as long as we've had a distinct information security profession, it has been said that security needs to be a "business enabler". But what exactly does that mean? How can security professionals advance from their inherently defensive postures, into more strategic positions, and contribute actively to the growth of the business? This is the focus of my latest work at Constellation Research. It turns out that security professionals have special tools and skills ideally suited to a broader strategic role in information management.

The role of Chief Information Security Officer (CISO) is a tough one. Security is red hot. Not a week goes by without news of another security breach.

Information now is the lifeblood of most organisations; CISOs and their teams are obviously crucial in safeguarding that. But a purely defensive mission seldom allows for much creativity, or a positive reputation amongst one's peers. A predominantly reactive work mode -- as important as it is from day to day -- can sometimes seem precarious. The good news for CISOs' career security and job satisfaction is that they happen to have special latent skills for innovating and building out those most important digital assets.

Information assets are almost endless: accounts, ledgers and other legal records, sales performance, stock lists, business plans, R&D plans, product designs, market analyses and forecasts, customer data, employee files, audit reports, patent specifications and trade secrets. But what is it about all this information that actually needs protecting? What exactly makes any data valuable? These questions take us into the mind of the CISO.

Security management is formally all about the right balance of Confidentiality, Integrity and Availability in the context of the business. Different businesses have different needs in these three dimensions.

Think of the famous industrial secrets like the recipes for KFC or Coca Cola. These demand the utmost confidentiality and integrity, but the availability of the information can be low (nay, must be low) because it is accessed as a whole so seldom. Medical records too have traditionally needed confidentiality more than availability, but that's changing. Complex modern healthcare demands electronic records, and these do need high availability especially in emergency care settings.

In contrast, for public information like stock prices there is no value in confidentiality whatsoever, and instead, availability and integrity are paramount. On the other hand, market-sensitive information that listed companies periodically report to stock exchanges must have very strict confidentiality for a relatively brief period.

Security professionals routinely compile Information Asset Inventories and plan for appropriate C-I-A for each type of data held. From there, a Threat & Risk Assessment (TRA) is generally undertaken, to examine the adverse events that might compromise the Confidentiality, Integrity and/or Availability. The likelihood and the impact of each adverse event are estimated and multiplied together to gauge the practical risk posed by each known threat. By prioritising counter-measures for the identified threats, in line with the organisation's risk appetite, the TRA helps guide a rational program of investment in security.
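
As a toy illustration of that arithmetic, here is a sketch in C; real TRAs use richer scales and a good deal of judgement, so the scales and figures below are invented purely for illustration:

    /* Toy Threat & Risk Assessment: risk = likelihood x impact, with
       threats sorted so the biggest risks are treated first. All
       threats, scales and scores are invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    struct threat {
        const char *name;
        int likelihood;   /* 1 (rare) .. 5 (almost certain) */
        int impact;       /* 1 (negligible) .. 5 (catastrophic) */
    };

    static int by_risk_desc(const void *a, const void *b)
    {
        const struct threat *ta = a, *tb = b;
        return (tb->likelihood * tb->impact) - (ta->likelihood * ta->impact);
    }

    int main(void)
    {
        struct threat threats[] = {
            { "customer database exposed to unauthorised access", 3, 5 },
            { "key server inaccessible to legitimate users",      2, 4 },
            { "backup media lost in transit",                     2, 3 },
        };
        size_t n = sizeof threats / sizeof threats[0];

        qsort(threats, n, sizeof threats[0], by_risk_desc);
        for (size_t i = 0; i < n; i++)
            printf("risk %2d  %s\n",
                   threats[i].likelihood * threats[i].impact,
                   threats[i].name);
        return 0;
    }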

Their practical experience now puts CISOs in a special position to enhance their organisation's information assets, rather than restricting themselves to hardening information against negative impacts.

Here's where the CISO's mindset comes into play in a new way. The real value of information lies not so much in the data itself as in its qualities. Remember the cynical old saw "It's not what you know, it's who you know". There's a serious side to the saying, which highlights that really useful information has pedigree.

So the real action is in the metadata; that is, data about data. It may have got a bad rap recently thanks to surveillance scandals, but various thinkers have long promoted the importance of metadata. For example, back in the 1980s, Citibank CEO Walter Wriston famously said "information about money will become almost as important as money itself". What a visionary endorsement of metadata!

The important latent skill I want to draw out for CISOs is their practised ability to deal with the qualities of data. To bring greater value to the business, CISOs can start thinking about the broader pedigree of data and not merely its security qualities. They should spread their wings beyond C-I-A, to evaluate all sorts of extra dimensions, like completeness, reliability, originality, currency, privacy and regulatory compliance.

The core strategic questions for the modern CISO are these: What is it about your corporate information that gives you competitive advantage? What exactly makes information valuable?

The CISO has the mindset and the analytical tools to surface these questions and positively engage their executive peers in finding the answers.

My new Constellation Research report will be published soon.

Posted in Security, Privacy, Management theory, Constellation Research

Revisiting software professionalism

The ongoing debate (or spat) on Twitter about the "No Estimates" movement had me reaching for the archives.

Some now say that being forced to provide estimates is somehow counter-productive for software developers. I've long thought about programming productivity, and the paradox that software is too soft.

Some programmers want special treatment. In effect, "No Estimates" proponents are claiming their particular work is not amenable to traditional metrics and management. Now in a way, they're right; there is as yet no such thing as software "engineering". There are none of the handbooks or standards that feature in chemical, mechanical and electrical engineering. But nevertheless, if a programmer knows what they're doing - if they know their subject matter and how their code behaves - then providing estimates is not all that difficult. Disclaiming one's ability to predict how long a task will take is a weird way to try and engage with the business.

Software is definitely a difficult medium. It's highly non-linear, and breeds amazing complexity. But a great many of today's problems, like the recent #gotofail and Heartbleed scandals, are manifestly due to chaotic development practices.

As such, programmers are part of the problem.

I once wrote a letter to the editor of ComputerWorld about this ...


IT Governance

Yes indeed, IT is made the scapegoat for a great many project disasters (ComputerWorld 28 September, 2005, page 1). But it may prove fruitless to force orthodox project management and corporate governance methodologies onto big IT projects. And at the same time, IT "professionals" are not entirely free of blame.

So the KPMG Global IT Project Management Survey found that the vast majority of technology projects run over budget. In the main, "technology" means software, whether we build or buy. The "software crisis" - the systemic inability to estimate software projects accurately and to deliver what's promised - is about 40 years old. And it's more subtle than KPMG suggests in blaming corporate governance. It is fashionable at the moment to look to governance to rectify business problems but in this case, it really is a technology issue.

Software project management truly is different from all other technical fields, for software does not obey the laws of nature. Building skyscrapers, tunnels, dams and bridges is relatively predictable. You start with site surveys and foundations, erect a sturdy framework, fill in the services, fit it out, and take away the scaffolding. Specifications don't change much over a several year project, and the tools don't change at all.

But with software, you can start a big project anywhere you like, and before the spec is signed off. Metaphorically speaking, the plumbing can go in before the framework. Hell, you don't even need a framework! Nothing physical holds a software system up.

And software coding is fast and furious. In a single day, a programmer can create a system more complex than an airport that might take 10,000 person-years to build. So software development is fun. Let's be honest: it's why the majority of programmers chose their craft in the first place.

Ironically it's the rapidity of programming that contributes the most to project overruns. We only use software in information systems because it's fast to write and easy to modify. So the temptation is irresistible to keep specs fluid and to change requirements at any time. Famously, the differences between prototype, "beta release" and product are marginal and arbitrary. Management and marketing take advantage of this fact, and unfortunately software engineers themselves yield too readily to the attraction of the last minute tweak.

The same dynamics of course afflict third party software components. They tend to change too often and fail to meet expectations, making life hell for IT systems integrators.

It won't be until software engineering develops the tools, standards and culture of a true profession that any of this will change. Then corporate governance will have something to govern in big technology projects. Meanwhile, programmers will remain more like playwrights than engineers, and just as manageable.

Posted in Software engineering, Management theory

gotofail and a defence of purists

The widely publicised and very serious "gotofail" bug in iOS7 took me back ...

Early in my career I spent seven years in a very special software development environment. I didn't know it at the time, but this experience set the scene for much of my understanding of information security two decades later. I was in a team with a rigorous software development lifecycle; we attained ISO 9001 certification way back in 1998. My company deployed 30 software engineers in product development, 10 of whom were dedicated to testing. Other programmers elsewhere independently wrote manufacture test systems. We spent a lot of time researching leading edge development methodologies, such as Cleanroom, and formal specification languages like Z.

We wrote our own real time multi-tasking operating system; we even wrote our own C compiler and device drivers! Literally every single bit of the executable code was under our control. "Anal" doesn't even begin to describe our corporate culture.

Why all the fuss? Because at Telectronics Pacing Systems, over 1986-1990, we wrote the code for the world's first software controlled implantable defibrillator, the Guardian 4210.

The team spent relatively little time actually coding; we were mostly occupied writing and reviewing documents. And then there were the code inspections. We walked through pseudo-code during spec reviews, and source code during unit validation. And before finally shipping the product, we inspected the entire 40,000 lines of source code. That exercise took five people two months.

For critical modules, like the kernel and error correction routines, we walked through the compiled assembly code. We took the time to simulate the step-by-step operation of the machine code using pen and paper, each team member role-playing parts of the microprocessor (Phil would pretend to be the accumulator, Lou the program counter, me the index register). By the end of it all, we had several people who knew the defib's software like the back of their hand.

And we had demonstrably the most reliable real time software ever written. After amassing several thousand implant-years, we measured a bug rate of less than one in 10,000 lines.

The implant software team had a deserved reputation as pedants. Over 25 person years, the average rate of production was one line of debugged C per team member per day. We were painstaking, perfectionist, purist. And argumentative! Some of our debates were excruciating to behold. We fought over definitions of “verification” and “validation”; we disputed algorithms and test tools, languages and coding standards. We were even precious about code layout, which seemed pretty silly to some at the time.

Yet 20 years later, purists are looking good.

Last week saw widespread attention to a bug in Apple's iOS operating system which rendered website security impotent. The problem arose from a single superfluous line of code – an extra goto statement – that nullified checking of SSL connections, leaving users totally vulnerable to fake websites. The Twitterverse nicknamed the flaw #gotofail.
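
To see how one line can do so much damage, here is a small self-contained analogue of the flaw, modelled on the pattern in the published SecureTransport routine but with invented names:

    /* Condensed analogue of the #gotofail pattern (invented names, not
       Apple's source). The first goto is conditional; the accidental
       duplicate below it is NOT, because there are no braces. Everything
       after it is dead code, so the final check never runs, and err
       still holds 0 from the earlier successful step. */
    #include <stdio.h>

    static int check(int ok) { return ok ? 0 : -1; }  /* stand-in for a hash/verify step */

    static int verify_connection(void)
    {
        int err;
        if ((err = check(1)) != 0)     /* this step succeeds: err = 0 */
            goto fail;
            goto fail;                 /* spurious duplicate: always taken */
        if ((err = check(0)) != 0)     /* the real signature check: never reached */
            goto fail;
    fail:
        return err;                    /* returns 0: "success" despite the skipped check */
    }

    int main(void)
    {
        printf("verify_connection() = %d\n", verify_connection());  /* prints 0 */
        return 0;
    }

Because there are no braces, the second goto executes unconditionally, and the function reports success without ever reaching the final check.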

There are all sorts of interesting quality control questions in the #gotofail experience.


  • Was the code inspected? Do companies even do code inspections these days?
  • The extra goto was said to be a recent change to the source; if that's the case, what regression testing was performed on the change?
  • How are test cases selected?
  • For something as important as SSL, are there not test rigs with simulated rogue websites to stress test security systems before release?

There seem to have been egregious shortcomings at every level: code design, code inspection, and testing.

A lot of attention is being given to the code layout. The spurious goto is indented in such a way that it appears to be part of a branch, but it is not. If curly braces were used religiously, or if an automatic indenting tool was applied, then the bug would have been more obvious (assuming that the code gets inspected). I agree of course that layout and coding standards are important, but there is a much more robust way to make source code clearer.

Beyond the lax testing and quality control, there is also a software-theoretic question in all this that is getting hardly any attention: Why are programmers using ANY goto statements at all?

I was taught at college and later at Telectronics to avoid goto statements at all cost. Yes, on rare occasions a goto statement makes the code more compact, but with care, a program can almost always be structured to be compact in other ways. Don't programmers care anymore about elegance in logic design? Don't they make efforts to set out their code in a rigorous structured manner?

The conventional wisdom is that goto statements make source code harder to understand, harder to test and harder to maintain. Kernighan and Ritchie - UNIX pioneers and authors of the classic C programming textbook - called the goto statement "infinitely abusable" and advised that it "be used sparingly, if at all". Before them, one of programming's giants, Edsger Dijkstra, wrote in 1968 that "The go to statement ... is too much an invitation to make a mess of one's program"; see Go To Statement Considered Harmful. The goto creates spaghetti code. Even Pascal, the landmark structured programming language, admits the goto only grudgingly, requiring every label to be declared in advance. At Telectronics our coding standard prohibited gotos in all implantable software, without exception.

Hard to understand, hard to test and hard to maintain is exactly what we see in the flawed iOS7 code. The critical bug would never have happened had Apple, too, banned the goto.
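
For comparison, here is my restructuring of the same analogue without any goto (my sketch, not Apple's code). A single exit point is preserved, but a duplicated line can no longer silently bypass a check:

    /* The same logic with no goto: each step runs only if every prior
       step succeeded, and there is a single exit point. */
    #include <stdio.h>

    static int check(int ok) { return ok ? 0 : -1; }  /* same stand-in as above */

    static int verify_connection(void)
    {
        int err = check(1);
        if (err == 0)
            err = check(0);            /* the signature check always runs */
        /* a single cleanup point would go here */
        return err;
    }

    int main(void)
    {
        printf("verify_connection() = %d\n", verify_connection());  /* prints -1 */
        return 0;
    }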

Now, I am hardly going to suggest that fanatical coding standards and intellectual rigor are sufficient to make software secure (see also "Security Isn't Secure"). It's unlikely that many commercial developers will be able to cost-justify exhaustive code walkthroughs when millions of lines are involved even in the humble mobile phone. It’s not as if lives depend on commercial software.

Or do they?!

Let’s leave aside that vexed question for now and return to fundamentals.

The #gotofail episode will become a textbook example not just of poor attention to detail, but of the importance of disciplined logic, rigor, elegance, and fundamental coding theory.

A still deeper lesson in all this is the fragility of software. Prof Arie van Deursen nicely describes the iOS7 routine as "brittle". I want to suggest that all software is tragically fragile. It takes just one line of silly code to bring security to its knees. The sheer non-linearity of software – the ability for one line of software anywhere in a hundred million lines to have unbounded impact on the rest of the system – is what separates development from conventional engineering practice. Software doesn’t obey the laws of physics. No non-trivial software can ever be fully tested, and we have gone too far for the software we live with to be comprehensively proof read. We have yet to build the sorts of software tools and best practice and habits that would merit the title "engineering".

I’d like to close with a philosophical musing that might have appealed to my old mentors at Telectronics. Post-modernists today can rejoice that the real world has come to pivot precariously on pure text. It is weird and wonderful that technicians are arguing about the layout of source code – as if they are poetry critics.

We have come to depend daily on great obscure texts, drafted not by people we can truthfully call "engineers" but by a largely anarchic community we would be better off calling playwrights.

Posted in Security, Management theory, Software engineering

Security Isn't Secure

That is, information security is not intellectually secure. Almost every precept of orthodox information security is ready for a shake-up. Infosec practices are built on crumbling foundations.

UPDATE: I've been selected to speak on this topic at the 2014 AusCERT Conference - the biggest information security event in Australasia.

The recent tragic experience of data breaches -- at Target, Snapchat, Adobe Systems and RSA to name a very few -- shows that orthodox information security is simply not up to the task of securing serious digital assets. We have to face facts: no amount of today's conventional security is ever going to protect assets worth billions of dollars.

Our approach to infosec is based on old management process standards (which can be traced back to ISO 9000) and a ponderous technology neutrality that overly emphasises people and processes. The things we call "Information Security Management Systems" are actually not systems that any engineer would recognise but instead are flabby sets of documents and audit procedures.

"Continuous security improvement" in reality is continuous document engorgement.

Most ISMSs sit passively on shelves and share drives doing nothing for 12 months, until the next audit, when the papers become the centre of attention (not the actual security). Audit has become a sick joke. ISO 27000 and PCI assessors have the nerve to tell us their work only provides a snapshot, and that if a breach occurs between visits, it's not their fault. By their own admission, then, audits do not predict performance between audits. While nobody is looking, our credit card numbers are about as secure as Schrödinger's Cat!

The deep problem is that computer systems have become so very complex and so very fragile that they are not manageable by traditional means. Our standard security tools, including Threat & Risk Assessment and hierarchical layered network design, are rooted in conventional engineering. Failure Modes, Effects & Criticality Analysis works well in linear systems, where small perturbations have small effects, but IT is utterly unlike this. The smallest, most trivial omission in software or in a server configuration can have dire and unlimited consequences. It's like we're playing Jenga.

Update: Barely a month after I wrote this blog, we heard about the "goto fail" bug in the Apple iOS SSL routines, which resulted from one spurious line of code. It might have been more obvious to the programmer and/or any code reviewer had the code been indented differently or if curly braces were used rigorously.

Security needs to be re-thought from the ground up. We need some bigger ideas.

We need less rigid, less formulaic security management structures, to encourage people at the coal face to exercise their judgement and skill. We need straight-talking CISOs with deep technical experience in how computers really work, not 'suits' more focused on the C-suite than the dev teams. We have to stop writing impenetrable hierarchical security policies and SOPs (in the "waterfall" manner we recognised decades ago does little good in software development). And we need to equate security with software quality and reliability, and demand that adequate time and resources be allowed for the detailed work to be done right.

If we can't protect credit card numbers today, we urgently need to do things differently, standing as we are on the brink of the Internet of Things.

Posted in Security, Management theory, Software engineering

A software engineer's memoir (work in progress)

I'm an ex software "engineer" [I have reservations about that term] with some life experience of ultra-high-reliability development practices. It's fascinating how much of what I learned about software quality in the 1980s and 90s is relevant to infosec today.

I've had a trip down memory lane, triggered by Karen Sandler's presentation at LinuxConf12 in Ballarat (http://t.co/xvUkkaGl) and her paper "Killed by code".

The software in implantable defibrillators

I'm still working my way through Karen Sandler's materials. So this post is a work in progress.

What's really stark on first viewing of Karen's talk is the culture she experienced and how it differs from the implantable defib industry I knew in its beginnings 25 years ago.

Karen had an incredibly hard and very off-putting time getting the company that made her defib to explain their software. But when we started in this field, every single person in the company -- and many of our doctors -- would have been able to answer the question "What software does this defib run on?" The answer was "ours". And moreover, the FDA were highly aware of software quality issues. The whole medical device industry was still on edge from the notorious Therac-25 episode, a watershed in software verification.

A personal story

I was part of the team that wrote the code for the world's first software controlled implantable cardioverter/defibrillator (ICD).

In 1990 Telectronics (a tragic legend of Australian technology) released the model 4210, which was just the fourth or fifth ICD on the market (the first few being hard-wired devices from CPI Inc. and Telectronics). The computing technology was severely restricted by several design factors, most especially ultra low power consumption, and a very limited number of microprocessor vendors that would warrant their chips for use in medical devices. The 4210 defib used a semi-customised 8 bit micro-controller based on the 6502, and a 32 KB byte-organised SRAM chip that held the entire executable. The micro clocked at 128kHz, fully eight times slower than the near identical micro in the Apple II a decade earlier. The software had to be efficient, not only to ensure it could make some very tough real time rendezvous, but to keep the power consumption down; the micro consumed about 30% of the device's power over its nominal five year lifetime.

Software development

We wrote mostly in C, with some assembly coding for the kernel and some performance sensitive routines. The kernel was of our own design, multi-tasking, with hard real time performance requirements (in particular, for obvious reasons the system had to respond within tight specs to heart beat interrupts and we had to show we weren't ever going to miss an interrupt!) We also wrote the C compiler.
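
To give a flavour of that discipline, here is a purely illustrative sketch in C, with invented names and thresholds, and nothing like the actual Telectronics source. The point is the classic hard real time one: the handler does the bare minimum at interrupt level, so its worst case execution time can be bounded, and all analysis is deferred to a scheduled task:

    /* Illustrative only: minimal work at interrupt level (record the
       beat time, set a flag), analysis deferred to a scheduled task.
       The main() below crudely simulates a run of sensed beats. */
    #include <stdio.h>

    #define RATE_LIMIT_MS 400                    /* invented tachycardia threshold */

    static volatile unsigned long beat_time_ms;  /* written at interrupt level */
    static volatile int beat_pending;
    static unsigned long last_beat_ms;

    /* In a real device this would be the heartbeat-sense ISR. */
    static void heartbeat_isr(unsigned long now_ms)
    {
        beat_time_ms = now_ms;
        beat_pending = 1;
    }

    /* Run by the scheduler, never at interrupt level. */
    static void detection_task(void)
    {
        if (!beat_pending)
            return;
        beat_pending = 0;
        unsigned long interval = beat_time_ms - last_beat_ms;
        last_beat_ms = beat_time_ms;
        if (interval < RATE_LIMIT_MS)
            printf("fast interval %lu ms: consider therapy\n", interval);
    }

    int main(void)
    {
        unsigned long beats_ms[] = { 1000, 1800, 2100, 2350 };
        for (int i = 0; i < 4; i++) {
            heartbeat_isr(beats_ms[i]);
            detection_task();
        }
        return 0;
    }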

The 4210's software was 40,000 lines of C, developed by a team of 5-6 over several years; the total effort was 25 person-years. Some of the testing and pre-release validation is described in my blog post about coding being like play writing. The final code inspection involved a team of five working five-to-six hour days for two months, reading aloud and understanding every single line. When occasion called for checking assembly instructions, sometimes we took turns with pencil and paper pretending to be the accumulators, the index registers, the program counter and so on. No stone was left unturned.

The final walk-through was quite a personnel management challenge. One of the senior engineers (a genius who also wrote our kernel and compiler) lobbied for inspecting the whole executable because he didn't want to rely on the correctness of the compiler -- but that would have taken six months. So we compromised by walking through only the assembly code for the critical modules, like the tachycardia detector and the interrupt handlers.

I mentioned that the kernel and compiler were home-grown. So this meant that the company controlled literally every single bit of code running in its defibs. And I reiterate we had several individuals who knew the source code end to end.

By the way, these days I will come across developers in the smartcard industry who find it hard working on applets that are 5 or 10 kilobytes small. Compare say a digital signing applet with an ICD, with its cardiac monitoring algorithms, treatment algorithms, telemetry controller, data logging and operating system all squeezed into 32KB.

Reliability

We amassed several thousand implant-years of experience with the 4210 before it was superseded. After release, we found two or three minor bugs, which we fixed with software upgrades. None would have caused a misfire, neither a false positive nor a false negative.

Yes, for the upgrade we could write into the RAM over a proprietary telemetry protocol. In fact the main reason for the one major software upgrade in the field was to add error correction, because after hundreds of device-years we noticed higher than expected bit flips from natural background radiation. That's a helluva story in itself. Had the code been in ROM we couldn't have changed it, but then we wouldn't have needed to change it for bit flips either.
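
By way of illustration only (this is not a description of the actual 4210 upgrade), one classic software defence against bit flips is triple modular redundancy with periodic scrubbing. A minimal sketch in C:

    /* Illustrative only: each critical byte is stored three times; a
       bitwise majority vote masks any single bit flip, and a periodic
       scrub rewrites all copies so flips cannot accumulate. */
    #include <stdio.h>

    struct tmr_byte { unsigned char a, b, c; };

    static unsigned char tmr_vote(const struct tmr_byte *t)
    {
        return (unsigned char)((t->a & t->b) | (t->b & t->c) | (t->a & t->c));
    }

    static void tmr_scrub(struct tmr_byte *t)
    {
        unsigned char v = tmr_vote(t);    /* corrected value */
        t->a = t->b = t->c = v;           /* refresh all copies */
    }

    int main(void)
    {
        struct tmr_byte x = { 0x5A, 0x5A, 0x5A };
        x.b ^= 0x08;                      /* simulate a single bit flip */
        tmr_scrub(&x);                    /* majority vote masks and repairs it */
        printf("%02X %02X %02X\n", (unsigned)x.a, (unsigned)x.b, (unsigned)x.c);
        return 0;                         /* prints 5A 5A 5A */
    }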

Morals of the story

Anyway, some of the morals of the story so far:


  • Software then was cool and topical, and the whole company knew how to talk about it. The real experts -- the dozen or so people in Sydney directly involved in the development -- were all well known worldwide by the executives, the sales reps, the field clinical engineers, and regulatory affairs. And we got lots of questions (in contrast to Karen Sandler's experience, where the cardiologists and company people said nobody ever asked about the code).

  • Everything about the software was controlled by the company: the operating system, the chip platform, the compiler, the telemetry protocol.

  • We had a team of people that knew the code like the backs of their hands. Better in fact. It was reliable and, in hindsight, impregnable. Not that we worried about malware back in 1987-1990.

Where has software development gone?

So the sorts of issues that Karen Sandler is raising now, over two decades on, are astonishing to me on so many levels.


  • Why would anyone decide to write life support software on someone else's platform?

  • Why would they use wifi or Bluetooth for telemetry?

  • And if the medical device companies cut corners in software development, one wonders what the defence industry is doing with their drone flight controllers and other "smart" weaponry with its countless millions of lines of opaque software.

[TO BE CONTINUED]

Posted in Software engineering, Management theory

The end of standards

A colleague drew my attention to what he called "yet another management standard". Which got me thinking about where our preoccupation with standards might be heading and where it might end.

Most modern risk management standards allow for exception management. If a company has a formal procedure in place -- for example a Disaster Recovery Plan -- but something out of the ordinary comes up, then the latest standards provide management with flexibility to vary their response to suit their particular circumstances; in other words, management can generally waive regular procedures and "accept the risk". The company can remain in compliance with management systems and standards if it documents these exceptions carefully.

So ... what if a company says "the hell with this latest management standard, we don't want to have anything to do with it". If the standard allows for exceptions, then the company may still be in compliance with the standard by not being in compliance with it.

How about that: a standard you cannot help but comply with!

And then we wouldn't need auditors. We might even start to make some real progress.



Here's a less facetious analysis of the perils of over-standardisation: http://lockstep.com.au/blog/2010/12/21/no-algorithm-for-management.

Posted in Management theory

Programming is like playwriting

The software-as-a-profession debate continues largely untouched by each generation's innovations in production methods. Most of us in the 90s thought that formal methods, reuse and enforced modularity would introduce to software some of the hallmarks of real engineering: predictability, repeatability, measurability and quality. Yet despite Object Oriented methods and sophisticated CASE tools, many of the human traits of software-as-a-craft remain with us.

The "software crisis" – the systemic inability to estimate software projects accurately, to deliver what's promised, and to meet quality expectations – is over 40 years old. Its causes are multiple and subtle, and despite ample innovation in languages, tools and methodologies, debates continue over what ails the software industry. The latest skirmish is a provocative suggestion from Forrester analyst Mark Gualtieri that we shift the emphasis from independent Quality Assurance back onto developers’ own responsibilities, or abandon QA altogether. There has been a strong reaction! The general devotion to QA I think is aided and abetted by today’s widespread fashion for corporate governance.

But we should listen to radical ideas like Gualtieri’s, rather than maintain a slavish devotion to orthodoxies. We should recognise that software engineering is inherently different from conventional engineering, because software itself is a different kind of material. Its properties make it less amenable to regular governance.

Simply, software does not obey the laws of physics. Building skyscrapers, tunnels, dams and bridges is relatively predictable. You start with site surveys and foundations, erect a sturdy framework and all sorts of temporary formers, flesh out the structure, fill in the services like power and plumbing, do the fit-out, and finally take away all the scaffolding.

Specifications for real engineering projects don’t change much, even over several years for really big initiatives. And the engineering tools don't change at all.

Software is utterly unlike this. You can start writing software anywhere you like, and before the spec is signed off. There aren’t any raw materials to specify and buy, and no quantity surveyors to deal with. Metaphorically speaking, the plumbing can go in before the framework. Hell, you don't even need a framework! Nothing physical holds a software system up. Flimsy software, as a material, is indistinguishable from the best code. No laws of physics dictate that you start at the bottom and work your way slowly upwards, using a symbiosis of material properties and gravity to keep your construction stable and well-behaved as it grows. The process of real world engineering is thick with natural constraints that ensure predictability (just imagine how wobbly a house would be if you could lay the bricks from the top down) whereas software development processes are almost totally arbitrary, except for the odd stricture imposed by high level languages.

Real world systems are naturally compartmentalised. If a bearing fails in an air-conditioning plant in the basement, it’s not going to affect the integrity of any of the floor plates. On the other hand, nothing physically decouples lines of code; a bug in one part of the program can impinge on almost any other part (which incidentally renders traditional failure modes and effects analysis impossible). We only modularise software by artificial means, like banning goto statements and self-modifying code.

Coding is fast and furious. In a single day, a programmer can create a system probably more complex than an airport that takes more than 10,000 person-years to build. And software development is tremendous creative fun. Let's be honest: it's why the majority of programmers chose their craft in the first place.

Ironically the rapidity of programming contributes significantly to software project overruns. We only use software in information systems because it's faster to make and easier to modify than wired logic. So the temptation is irresistible to keep specs fluid and to accommodate new requirements at any time. Famously, the differences between prototype, beta and production product are marginal and arbitrary. Management and marketing take advantage of this fact, and unfortunately software engineers themselves yield too readily to the attraction of the last minute tweak.

I suggest programming is more like play writing than engineering, and many programmers (especially the really good ones!) are just as manageable as poets.

In both software and play writing, structure is almost entirely arbitrary. Because neither obey the laws of physics, the structure of software and plays comes from the act of composition. A good software engineer will know their composition from end to end. But another programmer can always come along and edit the work, inserting their own code as they see fit. It is received wisdom in programming that most bugs arise from imprudent changes made to old code.

Messing with a carefully written piece of software is fraught with danger, just as it is with a finished play. I could take Hamlet for instance, and hack it as easily as I might hack an old program -- add a character or two, or a whole new scene -- but the entire internal logic of the play would almost certainly be wrecked. It would be “buggy”.

I was a software development manager for some years in the cardiac pacemaker industry. We developed the world’s first software controlled automatic implantable defibrillator. It had several tens of thousands of lines of C, developed at a rate of about one tested line of code per person per day. We quantified it as the most reliable real time software ever written at the time.

I believe the outstanding quality resulted from a handful of special grassroots techniques:


  • We had independent software test teams that developed their own test cases and tools.
  • We did obsessive source code inspections on all units before integration. And in the end, before we shipped, we did an end-to-end walkthrough of the frozen software. It took six of us two months straight. So we had several people who knew the entire object intimately.
  • We did early informal design reviews. As team leader, I favoured having my developers do a whiteboard presentation to the team of their early design ideas, no more than 48 hours after being given responsibility for a module. This helped prevent designers latching onto misconceptions at the formative stages.
  • We took our time. I was concerned that the CASE tools we introduced in the mid 90s might make code rather too easy to trot out, so at the same time I set a new rule that developers had to turn their workstations off for a whole day once a week, and work with pen and paper.
  • My internal coding standard included a requirement that when starting a new module, developers write their comments before they write their code, and that their comments describe ‘why’ not ‘what’ (see the sketch after this list). Code is all syntax; the meaning and intent of any software can only be found in the natural language comments.
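
As an illustration of the 'why not what' rule, here is a made-up fragment (my own invention, not pacemaker code):

    /* Everything here is hypothetical, for illustration only. */
    #include <stdbool.h>

    static bool just_switched_mode;
    static void discard_next_sample(void) { /* stub for illustration */ }

    void sample_handler(void)
    {
        /* WHY-style comment (required): it records intent the code alone
           cannot express. Ignore the first sample after a mode switch:
           the sense amplifier needs a cycle to settle, and acting on an
           unsettled reading could trigger a false detection. */
        if (just_switched_mode)
            discard_next_sample();

        /* A banned WHAT-style comment would merely restate the syntax,
           e.g. "increment i" alongside "i++". */
    }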

Code is so very unlike the stuff of other professions – soil and gravel, metals and alloys, nuts and bolts, electronics, even human flesh and blood - that the metaphor of engineering in the phrase “software engineering” may be dangerously misplaced. By co-opting the term we might have started out on the wrong foot, underestimating the fundamental challenge of forging a software profession. It won't be until software engineering develops the normative tools and standards, culture and patience of a true profession that the software crisis will turn around. And then corporate governance will have something to govern in software development.

Posted in Management theory, Language, Culture, Software engineering

Policy fads and policy failures

In cyber security, user awareness, education and training have long gone past their Use By Date. We have technological problems that need technological fixes, yet governments and businesses remain averse to investing in real security. Instead, the long standing management fad is to 'audit ourselves' out of trouble, and to over-play user awareness as a security measure when the systems we make them use are inherently insecure.

It’s a massive systemic failure in the security profession.

We see a policy and compliance fixation everywhere. The dominant philosophy in security is obsessed with process. The international information security standard ISO 27001 is a management system standard; it has almost nothing to say universally about security technology. Instead the focus is on documentation and audit. Box ticking. It’s intellectually a carbon-copy of the ISO 9001 quality management standard, and we all know the limitations of that.

Or do we? Remember that those who don’t know the lessons of history are condemned to repeat it. I urge all infosec practitioners to read this decade old article: Is ISO 9000 really a standard? -- it should ring some bells.

Education, policy and process are almost totally useless in fighting ID theft. Consider this: the CD ROMs with 25,000,000 financial records, lost in the mail by British civil servants in 2007, were valued at 1.5 billion pounds at the going rate on the stolen-identity black market; that works out to around 60 pounds per record. With stolen data being so immensely valuable, just how is security policy ever going to stop insiders cashing in on such treasure?

In another case, after data was lost by the Australian Tax Office, there was earnest criticism that the data should have been encrypted. But so what if it was? What common encryption method could not be cracked by organised crime if there were millions and millions of dollars' worth of value to be gained?

The best example of process and policy-dominated security is probably the Payment Card Industry Data Security Standard, PCI-DSS. The effectiveness of PCI-DSS and its onerous compliance regime was considered by a US Homeland Security Congressional Committee in March 2009. In hearings, the National Retail Federation submitted that “PCI has been plagued by poor execution ... The PCI guidelines are onerous, confusing, and are constantly changing”. They noted the irony that “the credit card companies’ rules require merchants to store credit card data that many retailers do not want to keep” (emphasis in original). The committee chair remarked that “The essential flaw with the PCI Standard is that it allows companies to check boxes, but not necessarily be secure. Compliance does not equal security. We have to get beyond check box security.”

To really stop ID theft, we need proper technological preventative measures, not more policies and feel-good audits.

The near exclusive emphasis on user education and awareness is a subtle form of blame shifting. It is simply beyond the capacity of regular users to tell pharming sites from real sites, or even to spot all phishing e-mails. What about the feasibility of training people to "shop safely" online? It's a flimsy proposition, considering that the biggest cases of credit card theft have occurred at backend databases of department store chains and payments processors. Most stolen card details in circulation probably originate from regular in-store Card Present transactions, and not from Internet sites. The lesson is even if you never ever shop online, you can have your card details stolen and abused behind your back. All the breathless advice about looking out for the padlock is moot.

In other walks of life we don’t put all the onus on user education. Think about car safety. Yes good driving practices are important, but the major focus is on legislated standards for automotive technology, and enforceable road rules. In contrast, Internet security is dominated by a wild west, everyone-for-themselves mentality, leading to a confusing patchwork of security gizmos, proprietary standards and no common benchmarks.

Posted in Security, Management theory, Internet