Lockstep

Classic Facebook stalking horse

Yesterday Instagram made its first move towards delivering the real value in its acquisition by Facebook. They revised their Privacy Policy and Terms of Use to allow greater sharing of photos with Facebook and other businesses, especially advertisers. Instagram posted a new set of Terms on Monday, the shit hit the fan, and today they back-pedalled.

The mea culpa is a classic, straight out of the Zuckerberg copybook. They say they were misunderstood. They say they don't want to sell photos to ad men. They say members will always own their photos. But ownership is a red herring and the whole exercise is likely a stalking horse, designed to distract people from more significant issues around metadata and Facebook's ever deepening ability to infer PII.

Firstly, let's be clear that greater sharing follows the acquisition as night follows day. I noted at the time that the only way to understand Facebook's billion-dollar spend on Instagram is the value to be mined from the mother lode of photo data. In particular, image analysis and facial recognition grant Instagram and Facebook x-ray vision into their members' daily lives. They can work out what people are doing, with whom they're doing it, when and where. With these tools, they're moving quickly from collecting Personally Identifiable Information when it is volunteered by users, to PII that is observed and inferred. The quality and quantity of the PII flux are driven up dramatically. No longer is the lifeblood of Facebook -- the insights they have on 15% of the world's population -- filtered by what users elect to post and Like and tag; now that information is raw, unexpurgated and automated.

Now ask where the money in photo data is to be made. It's not in selling candid snapshots of folks enjoying branded products. It's in the intelligence that image data yield about how people lead their lives. This intelligence is Facebook's one and only asset.

So it is metadata that we need to worry about. In its initial update to the Terms, Instagram said this: "[You] agree that a business or other entity may pay us to display your username, likeness, photos (along with any associated metadata), and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you." In over 6,000 words, "metadata" is mentioned just twice, parenthetically, and without any definition. Metadata is figuring more and more in the privacy discourse, and that's great, but we need to look beyond the usual stuff like geolocation and camera type embedded in the JPEGs. Much more important now is the latent identifiable personal content in images. Image analysis and image search provide endless new possibilities for infomopolies to extract value from photos.
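
To see how much routine metadata travels inside an ordinary photo, here's a minimal Python sketch using the Pillow library. The filename is hypothetical and the tags actually present vary by camera and app; and note this only scratches the surface, since the latent personal content of the image itself is not in the EXIF at all.

  # Minimal sketch: list the EXIF metadata embedded in a JPEG (Pillow library).
  # "holiday.jpg" is a hypothetical file; which tags appear depends on the camera/app.
  from PIL import Image, ExifTags

  img = Image.open("holiday.jpg")
  for tag_id, value in img.getexif().items():
      name = ExifTags.TAGS.get(tag_id, tag_id)   # translate numeric tag IDs to names
      print(name, value)                          # e.g. Make, Model, DateTime, Software
  # GPS coordinates sit in a separate EXIF sub-directory (IFD) and need an extra lookup.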

A great deal of this week's outcry has focused on things like the lack of compensation, and all of Instagram's apology today is about the ownership of photos. But ownership is moot if they reserve the right to use and disclose metadata in any way they like. What actually matters is the individual's ability to understand and control what is done with any PII about them, including metadata. When the German privacy regulator acted against Facebook's facial recognition practices earlier this year, the principle it applied from OECD-style legislation is that there are limits to what can be collected about individuals without their consent. The regulator ruled it unlawful for Facebook to extract biometric information from images when users innocently think they're only tagging people in photos.

So when I read Instagram's excuse, I don't see any truly meaningful self-restraint in the way they can exploit image data. Their switch is not even a tactical retreat, for as yet, they're not giving anything up.

Posted in Social Networking, Privacy, Big Data

Don't mix business and pleasure

At the recent Gartner Identity & Access Summit, analyst Earl Perkins spoke of the potential for Facebook to be used as an enterprise IdP. I'd like to see these sorts of speculations dampened a little by filtering them through the understanding that identity is a proxy for relationship.

Here's the practical difficulty that shows why we must reframe what we're talking about. If Facebook were to be an Identity Issuer, they would have to be clear about what enterprises really need to know about their staff, customers, partners and so on. There is no standardised answer to that; every business gets to know its people in their own peculiar ways. Does Facebook with its x-ray vision into our personal lives have anything to offer enterprises? If we work out which assertions might be vouched for by Facebook, how would they be underwritten exactly?

And I really mean exactly, because liability is what kills off most identity federations. The idea of re-using identity across contexts is easier said than done. Banks have tried and tried again to federate identities amongst themselves. The Australian experience (of Trust Centre and MAMBO) was that banks found re-using each other's issued IDs too hard because of the legal complexity, even when they're all operating under the same laws and regulations! So how on earth will business make the jump to using Facebook as an IdP when they have yet to figure out banks as IdPs?

I'd surely like to hear from Facebook themselves about how they see their IdP business developing. They're being very coy about even the early forays like Facedeals, which is using biometric data from Facebook to check people into stores by facial recognition. It's a pretty serious app, with very serious privacy ramifications, amplified by the fact that German regulators have thrown the book at Facebook for being underhanded with photo tagging. Under the circumstances, I would have expected Facedeals to have a Privacy Policy, and Facebook to make some public announcements about how they support the third party consumption of their biometric templates. But no, neither has happened.

The old saw "don't mix business and pleasure" turns out to predict the cyber world challenges of bringing social identities and business identities together. I have concluded that identity is metaphorical. Each identity is really a proxy for a relationship, and most of our intuitions about identity need to be reframed in terms of relationships. We're not talking simply about names! The types of relationship we entertain socially (and are free to curate for ourselves) may be fundamentally irreconcilable with the identities provided to us by businesses as a way to manage their risks, as is their prerogative.

Posted in Social Networking, Identity, Federated Identity

Any ideas to curtail CNP fraud?

The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. Lockstep monitors these figures and plots the trend data. The latest stats were released this week, for FY2012.

Here's the latest picture of Australian payment card fraud growth over the past seven financial years FY2006-12.

[Chart: CNP fraud trends to FY2012]

Compared with FY2011:

  • Total card fraud is up 25%
  • CNP fraud is up 27%
  • CNP fraud remains at just under three quarters (72%) of all card fraud

As with the CY2011 stats we discussed last July, card fraud has again grown in all categories at once, not just Card Not Present, and this is unusual. The explanation may be a burst of skimming and counterfeiting in late 2011 which would be reflected in both the FY2012 and CY2011 numbers.

APCA's press release this week notes that card fraud has dropped in the past six months, contrasting financial year 2012 ($189M) with calendar year 2011 ($198M). This may not be a statistically valid comparison. We should expect seasonal buying habits to cause asymmetries within any 12-month period, making FY against CY a case of apples and oranges. Indeed, this looks like the first time APCA themselves have plotted CY and FY stats together. It certainly makes the latest figures look better.

Time will tell whether the trend is changing. The long term trend is that CNP fraud has grown at 38% p.a. on average, from $27M in FY2006 to $189M in FY2012. A 5% drop in the past six months may not mean much. The $189M loss most recently reported is probably close to the true trend.
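
The figures quoted above are easy to check; a quick back-of-the-envelope calculation (in Python, purely for the arithmetic) also shows how modest the claimed six-month improvement is:

  # Back-of-the-envelope check of the growth figures quoted above.
  start, end, years = 27e6, 189e6, 6          # CNP fraud: $27M in FY2006, $189M in FY2012
  cagr = (end / start) ** (1 / years) - 1
  print(f"average annual CNP fraud growth: {cagr:.1%}")       # about 38% p.a.

  drop = (198 - 189) / 198                    # CY2011 ($198M) vs FY2012 ($189M)
  print(f"apparent fall in the APCA comparison: {drop:.1%}")  # roughly 5%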

APCA says "Broadly, the value of CNP fraud reflects growing retail activity in the online space, with many more businesses ... moving online". That's true, but the question is: what will we do about it? Bank robbers rob banks because that's where the money is. Think about the road toll: it reflects the popularity of driving, but we don't simply put up with it!

In any case, a cardholder's exposure to CNP fraud has nothing to do with whether they themselves shop online! Stolen card data are replayed online by criminals because they can. The online boom provides more places to use stolen cards but it's not where the criminals get most of their cards. Instead, it appears that account numbers are mostly obtained from massive database breaches at processors and large bricks-and-mortar retailers, like Heartland Payments, Global Payments, and Hannaford. So it's not fair to play down CNP fraud as relating to the cost of going digital, because it hurts people who haven't gone digital.

I'm afraid payments regulators seem light on ideas for actually rectifying CNP fraud.

Until recently, APCA actively promoted 3D Secure (Verified by Visa or Mastercard SecureCode) as a response to CNP fraud. In June 2011, APCA went so far as to say "retailers should be looking at a 3D Secure solution for their online checkout". But their most recent press release makes no mention of 3D Secure at all.

It seems to me that 3D Secure, after many years of disappointing performance and terrible take-up, is now too contentious to rate a mention from Australia's regulators.

In my view, the industry needs to treat CNP fraud as seriously as it did skimming and carding. The industry should not resign itself to increasing rates of fraud just because online shopping is on the rise.

CNP fraud is not a technologically tough problem. It's just the digital equivalent of analogue skimming and carding, and it could be stopped just as effectively by using chips to protect cardholder data online.
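
To make the replay point concrete, here's a purely conceptual Python sketch; it is not EMV's actual protocol, and the key handling, field names and card number are all made up for illustration. The idea is simply that a secret held inside the chip authenticates each transaction, so intercepted card data is worthless to a fraudster.

  # Conceptual sketch only: a chip-held secret key signs each transaction, so stolen,
  # static card data can't be replayed the way a bare card number can.
  import hmac, hashlib, os

  card_key = os.urandom(16)        # secret kept inside the chip; never leaves the card

  def cryptogram(pan, amount_cents, merchant, counter):
      # Bind the card's secret key to the details of this particular transaction.
      msg = f"{pan}|{amount_cents}|{merchant}|{counter}".encode()
      return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

  # Every transaction produces a fresh code the issuer can verify with its copy of the
  # key; replaying yesterday's code (or a harvested card number) fails verification.
  print(cryptogram("5123450000000001", 4250, "Example Store", 41))  # hypothetical PAN
  print(cryptogram("5123450000000001", 4250, "Example Store", 42))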

Posted in Security, Payments, Fraud

If Facebook were honest

The first and foremost privacy principle in any data protection regime is Collection Limitation. A classic instance is Australia's National Privacy Principle NPP 1, which requires that an organisation refrain from collecting Personal Information unless (a) there is a clear need to collect that information; (b) the collection is done by fair means, and (c) the individual concerned is made aware of the collection and the reasons for it.

In accordance with the Collection Principle (and others besides), a conventional privacy notice or privacy policy should give a full account of what Personal Information an organisation collects (including that which it creates internally) and why it collects it.

And herein lies a fundamental challenge for most online social networks: if they were honest about the Collection Principle, they would have to say "We collect information about you to make money".

The core business model of many Online Social Networks is to exploit Personal Information, in many and varied ways. There's a bargain for Personal Information inherent in commercial social media. Some say the bargain is obvious to today's savvy netizens; it's said that everybody knows there is no such thing as a free lunch. But I am not so sure. I doubt that the average Facebook user really grasps what's going on. The bargain for their information is opaque and unfair.

From the outset, Facebook founder Mark Zuckerberg was tellingly enthusiastic for information built up in his system to be used by others. In 2004, he told a colleague "if you ever need info about anyone at Harvard, just ask".

Facebook has experienced a more or less continuous string of privacy controversies, including the "Beacon" sharing feature in 2007, which automatically imported members' activities on external websites and re-posted the information on Facebook for others to see. Facebook's privacy missteps almost always relate to the company using the data it collects in unforeseen and barely disclosed ways. Yet this is surely what Facebook's investors expect the company to be doing: innovating in the commercial exploitation of personal information. An inherent clash with privacy arises from the fact that Facebook is a pure-play information company: its only significant asset is the information it holds about its members. The market expects this asset to be monetised and maximised. Logically, anything that checks the network's flux in Personal Information -- such as the restraints inherent in privacy protection, whether adopted from within or imposed from without -- must affect the company's fortunes.

Facebook's business model is enhanced by promiscuity amongst its members, so there is an apparent conflict of interest in the firm's privacy posture. The more information its members are willing to divulge, the greater is Facebook's value. Zuckerberg is far from a passive bystander in this; he has long tried to train his members to abandon privacy norms, in order to generate ever more information flux upon which the site depends. He is brazenly quick to judge what he sees as broader societal shifts. Interviewed at the 2010 TechCrunch conference, he said:

[In] the last five or six years, blogging has taken off in a huge way and all these different services that have people sharing all this information. People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time. We view it as our role in the system to constantly be innovating and be updating what our system is to reflect what the current social norms are.

It is rather too early to draw this sort of sweeping generalisation from the behaviours of a self-selected cohort of socially hyperactive users. Without underestimating the empirical importance of Facebook to hundreds of millions of people, surely one of the overriding characteristics of OSN as a pastime is simply that it is fun. There is a sort of suspension of disbelief at work when people act in this digital world, divorced from normal social cues, which may lead them to lower their guard. Facebook users are not fully briefed on the consequences of their actions, and so their behaviour is to some extent directed by the site designers; it has not evolved naturally as Zuckerberg would have us believe.

Yet promiscuity is not in fact the source of the most valuable social data. Facebook has a particularly sorry history of hiding its most effective collection methods from view. Facial recognition is perhaps the best example. While Facebook has offered photo tagging for years, it was only in early 2012 that it started to talk plainly about how it constructs biometric templates from tags, and how it runs those templates over stored photo data to come up with tag suggestions. Meanwhile, the application of facial recognition is quietly expanding beyond what Facebook reveals, with the likes of Facedeals starting to leverage Facebook's templates in ways that are not disclosed in any Privacy Policy anywhere.

Privacy is largely about transparency. Businesses owe it to their members and customers to honestly disclose what data is collected and why. While social networks continue to obfuscate the true exchange of Personal Information for commercial value, we cannot take seriously their claims to respect our privacy.

Posted in Social Networking, Privacy

Security-convenience trade-off: What trade-off?

As mentioned last month, the security-convenience trade-off in computer security is radically different from that of traditional locks and keys. Regular users are so habituated to door keys that they don't even think about the trade-offs! Keys are so easy to use that nobody bothers to make them "easier" with the equivalent of Single Sign On (imagine asking your boss to re-key the office door and all the filing cabinets just so you could use the same key for work as for your home and car; it would be preposterous).

The cyber security-convenience trade-off could be radically re-jigged if we adopted serious physical keys for our computing devices. The usability dilemma online is really all about human factors engineering.

It's instructive to look at the evolution of door keys. For centuries we've used the same basic form factor: as the Oxford dictionary puts it, "a small piece of shaped metal with incisions cut to fit the wards of a particular lock, which is inserted into a lock and turned to open or close it".

The UX is universal, while under the covers, security R&D has driven long and steady improvement.

[Images: an old key; a classic Yale pin-tumbler key; a high-security key; a Mercedes key; an Audi emergency key]


And the most recent smart car keys still have a mechanical emergency key for when the electronics fails!

Posted in Security