Lockstep

There is no algorithm for good management

There is a malaise in security. One problem is that, as a "profession", we've tried to mechanise security management, as if it were generic manufacturing, amenable to ISO 9000-style management standards. We use essentially the same process and policy templates for all businesses. Don't get me wrong: process is important, and we do want our security responses to be repeatable and uniform. But not robotic. The truth is, there is no algorithm for doing the right thing. Moreover, there can never be a universal management algorithm, and an underlying naive faith in such a thing is dulling our collective management skills.

An algorithm is a repeatable recipe: a set of instructions that can be followed to perform some task or solve some structured problem automatically. Given the same conditions and the same inputs, an algorithm will always produce the same results. But no algorithm can cope with unexpected inputs or events; an algorithm's designer needs a complete view of all possible input circumstances in advance.
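To make this concrete, here's a minimal sketch in Python (the scenario and names are hypothetical) of how an algorithm is perfectly predictable within the inputs its designer anticipated, and helpless outside them:

    # A deterministic recipe: the same input always yields the same output,
    # but only for the inputs the designer foresaw.
    def classify_alert(severity: str) -> str:
        playbooks = {
            "low": "log and review weekly",
            "medium": "raise a ticket with the on-call analyst",
            "high": "invoke the incident response plan",
        }
        if severity not in playbooks:
            # The recipe has no answer for conditions nobody anticipated.
            raise ValueError(f"no playbook for unexpected input: {severity!r}")
        return playbooks[severity]

    print(classify_alert("high"))  # same input, same output, every time
    # classify_alert("novel")     # raises ValueError: the algorithm can't cope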

Mathematicians have long known that some surprisingly simple tasks cannot be done algorithmically, or at least not efficiently. The classic 'travelling salesman' problem, of plotting the shortest course through multiple connected towns, has no known efficient recipe for success. There is no way to trisect an angle using only a compass and an unmarked straightedge. And there is no general test that can tell whether any given computer program is ever going to stop.
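That last result is Turing's halting problem, and the classic diagonal argument for it is short enough to sketch in Python (the oracle below is hypothetical; that it cannot exist is the whole point):

    def halts(program, arg) -> bool:
        # Hypothetical oracle: True iff program(arg) eventually stops.
        raise NotImplementedError("provably impossible in general")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on itself.
        if halts(program, program):
            while True:
                pass    # predicted to halt, so loop forever
        else:
            return      # predicted to loop, so halt immediately

    # Does paradox(paradox) halt? If the oracle answers yes, paradox loops
    # forever; if it answers no, paradox halts at once. Either answer is
    # wrong, so no general halting oracle can exist.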

So when security is so often concerned with the unexpected, we should be doubly careful about formulaic management approaches, especially template policies and checklist-based security audits!

OK, but what's the alternative? This is extremely challenging, but we need to think outside the check box.

Like any complex management field, security is all about problem solving. There's never going to be a formula for it. Rather, we need to put smart people on the job and let them get on with it, using their experience and their wits. Good security, like good design, frankly involves a bit of magic. We can foster security excellence through genuine expertise, teamwork, research, innovation and agility. We need security leaders who have the courage to treat new threats and incidents on their merits, trust their professional instincts, try new things, break the mould, and have the sense to avoid management fads.

I have to say I remain pessimistic. These are not good times for courageous managers. The first rule of career risk management is to make sure everyone agrees in advance to whatever you plan to do, so the blame can be shared when something goes wrong. This is probably the real reason people are drawn to algorithms in management: they can be documented, reviewed, signed off, and put back on the shelf to await a disaster and the inevitable audit. So long as everyone did what they said they were going to do in response to an incident, nobody is to blame.

So I'd like to see a lawsuit against a company with a perfect ISO 27001 record that still got breached, where the lawyers' case is that it is unreasonable to rely on algorithms to manage in the real world.

Posted in Security, Science, Management theory

CNP fraud continues to rise in Australia

Every six months, the Australian Payments Clearing Association (APCA) releases card fraud statistics for the preceding 12-month period. They have just released the figures for FY2010 at http://www.apca.com.au/Public/apca01_live.nsf/WebPageDisplay/Payment_Statistics.

Lockstep monitors these figures and plots the trend data.

The graph shows card fraud in three major categories over the past four financial years.

As everyone knows, chip cards stifle skimming and counterfeiting, leaving Card Not Present (CNP) fraud as the preferred mode for organised crime. APCA therefore expects CNP fraud to continue to rise in the short term (the current rate of CNP fraud growth is around 25% p.a.).
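To put that growth rate in perspective, a back-of-envelope calculation (assuming the roughly 25% p.a. rate simply persists) shows CNP fraud doubling about every three years:

    # Constant 25% p.a. growth implies a doubling time of log(2)/log(1.25).
    import math

    growth_rate = 0.25
    doubling_time = math.log(2) / math.log(1 + growth_rate)
    print(f"CNP fraud doubling time: {doubling_time:.1f} years")  # ~3.1 years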

The jury is still out on whether PCI-DSS really helps curtail CNP fraud. It is supposed to stem the flow of stolen account details, yet plenty of big PCI "compliant" organisations continue to be breached, putting tens of millions of card numbers at a time into the hands of fraudsters.

CNP fraud is in decline in the UK, good news that is attributed to 3D Secure. Yet merchant and consumer acceptance of 3D Secure is not getting any better, and its long-term success is uncertain. There is little sign of Verified by Visa or MasterCard SecureCode in Australia so far.

Posted in Security, Fraud

'Cybernfreude' and Wikileaks

Wikileaks has long been invaluable. So I hate to think that in flooding us with mostly mundane diplomatic cables, they may have overplayed their hand. By humiliating governments, they may have provoked the authorities into truly radical controls over the Internet. And for what? To feed the front pages of an increasingly tabloid press. Honestly, there hasn't been a single revelation all week in the Slatternly Morning Herald befitting investigative journalism. Shrill gossip has trumped real scandal. The signal-to-noise ratio is now so low that it devalues and demeans Wikileaks' other beneficiaries.

I signed GetUp's petition in support of Wikileaks, but I ducked the protest march. I suspect many of the protestors are simply relishing the embarrassment of our loathed political leaders. Wikileaks should be bigger than this. The movement now seems to be sustained largely by what I would call cybernfreude: taking pleasure in the online misfortune of others.

Posted in Social Media, Security, Privacy

Smile! You're on Candid Apple

Apple is reported to have acquired the "Polar Rose" technology that allows photos to be tagged with names through automated facial recognition.

The iPhone FAQ site says:
Interesting uses for the technology include automatically tagging people in photos and recognizing FaceTime callers from contact information. As the photographs taken on the iPhone improve, various image analysis algorithms could also be used to automatically classify and organize photos by type or subject.
Apple's iPhoto currently recognizes faces in pictures for tagging purposes. It's possible Apple is looking to improve and expand this functionality. Polar Rose removed its free tagging services for Facebook and Flickr earlier this month, citing interest from larger companies in licensing their technology.

The privacy implications are many and varied. Fundamentally, such technology will see hitherto anonymous image data converted into personal information, at informopolies like Google, Facebook and Apple, which hold vast personal photo archives.

Facial recognition systems obviously need to be trained. Members will upload photos, name the people in the photos, and then have the algorithm run over other images in the database. So it seems that Apple (in this case) will hold the all-important bindings between biometric templates and names. What stops them running the algorithm, and making new bindings, over any other images they happen to have in their databases? Apple has already shown a propensity to hang on to rich geolocation data generated by the iPhone, and a reluctance to specify what they intend to do with that data.

If facial recognition worked well, the shady possibilities would be immediately obvious. Imagine that I have been snapped in a total stranger's photo -- say by some tourist on the Manly ferry -- and they've uploaded the image to a host of some sort. What if the host, or some third party data miner, runs the matching algorithm over the stranger's photo and recognises me in it? If they're quick, a cunning business might SMS me a free ice cream offer, seeing I'm heading towards the Corso. Or they might work out that I'm a visitor, because the day before I was snapped in Auckland, and start to fill in my travel profile.

This is probably sci-fi for now, because in fact, facial recognition doesn't work at all well when image capture conditions aren't tightly controlled. But this is no cause for complacency, for the very inaccuracy of the biometric method might make the privacy implications even worse.

To analyse this, as with any privacy assessment, we should start with the information flows. Consider what's going on when a photo is uploaded to this kind of system. Say my friend Alice discloses to Apple that "manly ferry 11dec2010.jpg" is an image of Steve Wilson. Apple has then collected Personal Information about me, and has done so indirectly, which under Australia's privacy regime is something they're supposed to inform me of as soon as practicable.

Then Apple reduces the image to a biometric template, like "Steve Wilson sample 001.bio". The Australian Law Reform Commission has recommended that biometric data be treated as Sensitive Information, and that collection be subject to express consent. That is, a company won't be allowed to collect facial recognition template data without getting permission first.

Setting that issue aside for a moment, consider what happens later, when someone runs the algorithm against a bunch of other images and it generates some matches: e.g. "uluru 30jan2008.jpg"-is-an-image-of-Steve-Wilson. It doesn't actually matter whether the match is true or false: it's still a brand new piece of Personal Information about me, created and collected by Apple, without my knowledge.

I reckon both false matches and true matches satisfy the definition of Personal Information in the Australian Privacy Act, which includes "an opinion ... whether true or not".
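To make the flow concrete, here is a minimal sketch in Python (the names, file names and record structure are all hypothetical) of how the original tag and every subsequent match, true or false, mint fresh Personal Information records in the provider's hands:

    from dataclasses import dataclass

    @dataclass
    class PersonalInfoRecord:
        subject: str     # whom the information is about
        content: str     # the image-to-name binding
        source: str      # how the provider came to hold it
        consented: bool  # did the subject agree to the collection?

    records = []

    # Step 1: Alice tags a photo, so the provider collects Personal
    # Information about Steve indirectly (from Alice, not from Steve).
    records.append(PersonalInfoRecord(
        subject="Steve Wilson",
        content="'manly ferry 11dec2010.jpg' is an image of Steve Wilson",
        source="indirect disclosure by Alice",
        consented=False,
    ))

    # Step 2: the matcher runs over the rest of the archive. Each hit,
    # right or wrong, is a brand-new record (an "opinion ... whether
    # true or not") created and collected by the provider.
    for image in ["uluru 30jan2008.jpg"]:
        records.append(PersonalInfoRecord(
            subject="Steve Wilson",
            content=f"'{image}' is, in the algorithm's opinion, an image of Steve Wilson",
            source="facial recognition match",
            consented=False,
        ))

    for record in records:
        print(record)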

Remember: The failures of biometrics often cause greater privacy problems than do their successes.

Posted in Social Media, Privacy, Identity, Cloud

Wikileaks' 'defenders' take the law into their own hands

On ABC Radio PM yesterday, Columbia Law School professor Tim Wu suggested that Wikileaks has so got under the skin of the US Government that it might take radical steps to curtail the Internet:

If as a result of WikiLeaks we see a real change in the federal government's attitude towards the Internet, if it sways the argument towards we've really got to control this thing for national security reasons, it could be the beginning of the real closing of the internet in the United States. It's an American invention originally and when sort of its home base begins to turn on it, if the federal government really starts to turn on the Internet and try to close it down, it could be a turning point.

If so, then what will really convince the administration to take such steps is the anarchic actions of hacktivists rushing in to ‘defend’ Wikileaks by bringing down payment provider websites. With friends like these, Wikileaks doesn’t need enemies.

No matter what we might think of the unilateral actions taken by PayPal, MasterCard, Visa et al, there can be no justifying vigilantes like Anonymous taking the law into their own hands.

If it were ever proven that a government had mounted a DDoS attack against Wikileaks, the blogosphere would rightly scream blue bloody murder. But too many are lionising the DDoS attacks undertaken by a self-appointed cyber militia.

One of Wikileaks’ key assets is the moral high ground. Their true supporters should roundly condemn all hacktivism, or we will all go down the gurgler of double standards.

Posted in Social Media, Security

No easy fix for federated identity liability

One of the many open questions in the proposed National Strategy for Trusted Identities in Cyberspace (NSTIC) is whether government will need to step in and legislate around liability allocation. Like federated identity itself, this is easier said than done.

The NSTIC discussion paper states:

  • This Strategy defines an Identity Ecosystem where one entity vets and establishes identities and another entity accepts them. To date, the appropriate apportionment of liability has prevented the cross-sector issuance and acceptance of identity credentials. The Federal Government must address this barrier through liability reform in order to establish the multi-directional trust required by transaction participants (p28).

It is true that liability allocation has impeded federation but I don't think we've collectively thought deeply enough about why this is the case. Legislators won't find a quick way to "reform" liability as called for by the NSTIC paper.

The identity ecosystem paradigm (yes, paradigm) is premised on the intuition that when Alice has gone to all the trouble of establishing her identity with a bank or government agency or e-store, she should be able to leverage that identity so that other service providers can strike up a fresh relationship with her. But in practice, this dream is impossible to achieve without all sorts of constraints.

All identity practitioners agree on the truism that identity is context dependent. But have we underestimated just how context-dependent identities are? Have we been too optimistic about our ability to engineer the changes of context that are implicit in federated identity?

The trouble is that what we think of as Alice's "identity" is really a proxy or shorthand for a specific relationship she has with a particular provider. The rules by which she is conventionally identified vary from one provider to another, because each has its own business needs. Establishing a common set of rules is one of the insurmountable challenges in federated identity. Firstly, it is logically impossible to set rules for unforeseen applications and Relying Parties, so federated identities come with fine print constraining the applications in which a user is allowed to use their identity (it's a lot like Big PKI all over again). This not only limits what we hoped would be universal identities; it leads to a bigger practical problem.

Once we agree on a set of uniform identification rules, sufficient for at least a nice big set of applications, it turns out that none of the existing Identity Providers (the oft-cited candidates being banks, governments, telcos and social IdPs) will actually be following those rules already. They will all have to modify their registration procedures to align with the federation's rules. This is very costly; banks in particular don't readily change their KYC rules. There is great risk as well in investing in these changes, for the business model for making money from federated identities is still unproven. And so extant digital identities are in fact not useful for much at all beyond their original contexts.
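A schematic sketch may help. The fields below are hypothetical, loosely modelled on SAML-style audience restrictions: the issuer must enumerate in advance the Relying Parties and uses it will stand behind, which is precisely what defeats the dream of a universal identity:

    from dataclasses import dataclass

    @dataclass
    class IdentityAssertion:
        subject: str
        issuer: str
        identification_rules: str           # the KYC regime actually followed
        allowed_relying_parties: list[str]  # issuer accepts liability only here
        allowed_uses: list[str]             # ... and only for these purposes

    assertion = IdentityAssertion(
        subject="alice",
        issuer="Example Bank",
        identification_rules="Example Bank retail KYC v4",
        allowed_relying_parties=["examplebank.example"],
        allowed_uses=["internet banking"],
    )

    def acceptable(a: IdentityAssertion, relying_party: str, use: str) -> bool:
        # An unforeseen Relying Party or use falls outside the fine print,
        # so the issuer carries no liability and the assertion is useless there.
        return (relying_party in a.allowed_relying_parties
                and use in a.allowed_uses)

    print(acceptable(assertion, "examplebank.example", "internet banking"))  # True
    print(acceptable(assertion, "e-store.example", "age verification"))      # False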

Another way of looking at the problem is to consider how identity providers manage their risk. Currently, banks/agencies/telcos create digital identities for their customers as part of a relationship governed by explicit Ts&Cs. For instance, banking customers are usually forbidden from using their Internet banking OTP tokens to authenticate themselves to any other services (I have seen at least one Australian federated identity scheme collapse because re-writing and re-executing these agreements was too hard). The good thing about the oft-derided identity silos is that they allow issuers to manage risk by tightly defining the context in which their customers use their "identities". When we try to break open the silos and turn banks into general purpose Identity Providers, we compromise their ability to manage their risk. The ultimate promise of federated identity is that the customer will be able to use a bank-issued identity, for instance, in all manner of other applications over which the bank has no control. This is a promise banks cannot keep, unless there is new legislation to address the liability. The only places where cross-sector identity federation seems to work are where specific laws have been passed to protect Identity Providers, such as in Scandinavia.

The reason why banks and other classic candidate IdPs have found federation easier said than done is that it's fiendishly difficult to manage liability for mis-identification in unforeseen applications. As far as I am aware, when new laws have enabled cross-sector federation, it has been restricted to banking and government, and even then, with tight constraints on the types of transactions allowed. Outside these legislated conditions, RPs are left to their own devices to manage authentication risks, and siloed identity relationships persist, as a natural consequence.

Posted in Security, Identity, Federated Identity

Generic verisimilitude

"Generic verisimilitude" is a nice big word! It means the accepted visual language that conveys reality in genre movies. In other words, cinematic cliches.

I've always been bemused by the sideways figure-of-eight black frame that tells us when a movie character is looking through binoculars. Have movie makers ever actually used binoculars? You don't get the sideways "8"; you just see a single black circle. But it's not the worst example.

I saw "Rachel Getting Married" a year or two ago, and thought it was pretty good except for the madly excessive handicam wobble. I got thinking about that and realised what a terrible artifice it is. Ironically, handicam wobble has become the leading sign of generic verisimilitude in 'gritty' moviemaking, yet the wobble is entirely fictional.

One of the marvels of the human brain is the way it produces a steady image as we move around. We can walk, run, jump up and down even on a trampoline, and our steadfast perception of the world is that it stands still. This complicated feat of cognition is thought to involve feedback mechanisms that allow the brain to compensate for the visual field shifting around on our retinas as the skull moves, sorting out which movements are apparent because we're moving, and which movements are really out there. It's a really vital survival tool; you couldn't chase down a gazelle on the savanna if your cognition was confused by your own mad dashing about.

So, if the world doesn't actually look to me like it shifts when I move, what is the point of a film maker foisting this jerkiness upon us? If I were really in the place of the cinematographer, no matter how much I danced about, I wouldn't see any wobble.

Moreover, motion pictures are the most voyeuristic art form. The whole cinematographic conceit is that you couldn't possibly be in the same room as the people you're privileged to be spying on. So again, why the "realism" of the handicam wobble, which is meant to make us feel we're actually part of the action?

It's odd that, with the audience already suspending disbelief and putty in their hands, filmmakers inject these falsehoods into the visual language of otherwise hyper-realistic movies.

UPDATED 10 Sep 2012

Another example. I was watching a mockumentary on TV, set in the present and featuring Gen Yers, in which the protagonists made a home movie. When we see their movie, it is sepia-coloured and has vertical scratch lines. Now, when was the last time anyone used film rather than digital video to make a home movie? I wonder what young people even make of this tricked-up home movie look?

Another example. NASA posts mosaic pictures from the Mars rover - like this one http://twitpic.com/at9ps7 - with the patchwork edges preserved, and with colour matching worse than what you can get these days with free panorama software on a mobile phone. With all their image processing powers, why wouldn't NASA smooth out the component pictures? Are they inviting us to imagine standing on Mars like a tourist with our own point-and-click camera?

Posted in Science, Popular culture