Lockstep


Simply Secure is not simply private

Another week, another security collaboration launch!

"Simply Secure" calls itself “a small but growing organization [with] expertise in usability research, design, software development, and product management". Their mission has to do with improving the security functions that built-in so badly in most software today. Simply Secure is backed by Google and Dropbox, and supported by a diverse advisory board.

It's early days (actually early day, singular) so it might be churlish to point out that Simply Secure's strategic messaging is a little uneven ... except that the words being used to describe it shed light on the clarity of the thinking.

My first exposure to Simply Secure came last night, when I read an article in the Guardian by Cory Doctorow (who is one of their advisers). Doctorow places enormous emphasis on privacy; the word "privacy" outnumbers "security" 16 to three in the body of his column. Another admittedly shorter report about the launch by The Next Web doesn't mention privacy at all. And then there's the Simply Secure blog post, which cites privacy a great deal but every single time in conjunction with security, as in "security and privacy". That repeated phrasing conveys, to me at least, some discomfort. As I say, it's early days and the team is doubtless sorting out how to weigh and progress these closely related objectives.

But I hope they do it quickly. On the face of it, Simply Secure might only scratch the surface of privacy.

Doctorow's Guardian article is mostly concerned with encryption and the terrible implementations that have plagued us since the dawn of the Internet. It's definitely important that we improve here – and radically. If the Simply Secure initiative does nothing but make encryption easier to integrate into commodity software, that would be a great thing. I'm all for it. But it won't necessarily or even probably lead to better privacy, because privacy is about restraint not secrecy or anonymity.
As we go about our lives, we actually want to be known by others, but we want those who know us to be restrained in what they do with the knowledge they have about us. Privacy is the protection you need when your affairs are not secret.

I know Doctorow knows this – I've seen his terrific little speech on the steps of Comic-Con about PRISM. So I'm confused by his focus on cryptography.

How far does encryption get us? If we're using social networks, or if we're shopping and opting in to loyalty programs or select targeted marketing, or if we're sharing our medical records with relatives, medicos, hospitals and researchers, then encryption becomes moot. We need mechanisms to restrain what the receivers of our personal information do with it. We all know the business model at work behind "free" online services; using encryption to protect privacy in social networking for instance would be like using an armoured van to deliver your valuables to Bernie Madoff.

Another limitation of user-centric or user-managed encryption has to do with Big Data. A great deal of personal information about us is created and collected unseen behind our backs, by sensors, and by analytics processes that manage to work out who we are by linking disparate data streams together. How could Simply Secure ameliorate those sorts of problems? If its vision includes encryption at rest as well as in transit, then how will the user control or even see all the secondary uses of their encrypted personal information?

There's a combativeness in Doctorow's explanation of Simply Secure and his tweets from yesterday on the topic. His aim is expressly to thwart the surveillance state, which in his view includes a symbiosis (if not conspiracy) between government and internet companies, where the former gets its dirty work done by the latter. I'm sure he and I both find that abhorrent in equal measure. But I argue the proper response to these egregious behaviours is political, not technological (and political in the broad sense; I love that Snowden talks as much about accountability, legal processes, transparency and research as he does about encryption). If you think the government is exploiting the exploiters, then DIY encryption is a pretty narrow counter-measure. This is not the sort of society we want to live in, so let's work to change the establishment, rather than try to take it on in a crypto shoot-out.

Yes security technology is important but it's not nearly as important for privacy as the Rule of Law. Data privacy regimes instil restraint. The majority of businesses come to know that they are not at liberty to over-collect personal information, nor to re-use personal information unexpectedly and without consent. A minority of organisations flout data privacy principles, for example by slyly refining raw data into valuable personal knowledge, exploiting the trust citizens and users put in them. Some of these outfits flourish in the United States – the Canary Islands of privacy. Worldwide, the policing of privacy is patchy indeed, yet there have been spectacular legal victories in Europe and elsewhere against the excessive practices of really big companies like Facebook with their biometric data mining of photo albums, and Google's drift net-like harvesting of traffic from unencrypted Wi-Fi networks.

Pragmatically, I'm afraid encryption is such a fragile privacy measure. Once secrecy is penetrated, we need regulations to stem exploitation of our personal information.

By all means, let's improve cryptographic engineering and I wish the Simply Secure initiative all the best. So long as they don't call security privacy.

Posted in Security, Privacy, Language, Big Data

New Paper Coming: The collision between Big Data and privacy law

I have a new academic paper due to be published in October, in the Australian Journal of Telecommunications and the Digital Economy. Here is an extract.

The collision between Big Data and privacy law

Abstract

We live in an age where billionaires are self-made on the back of the most intangible of assets – the information they have about us. The digital economy is awash with data. It's a new and endlessly re-usable raw material, increasingly left behind by ordinary people going about their lives online. Many information businesses proceed on the basis that raw data is up for grabs; if an entrepreneur is clever enough to find a new vein of it, they can feel entitled to tap it in any way they like. However, some tacit assumptions underpinning today's digital business models are naive. Conventional data protection laws, older than the Internet, limit how Personal Information is allowed to flow. These laws turn out to be surprisingly powerful in the face of 'Big Data' and the 'Internet of Things'. On the other hand, orthodox privacy management was not framed for new Personal Information being synthesised tomorrow from raw data collected today. This paper seeks to bridge a conceptual gap between data analytics and privacy, and sets out extended Privacy Principles to better deal with Big Data.

Introduction

'Big Data' is a broad term capturing the extraction of knowledge and insights from unstructured data. While data processing and analysis is as old as computing, the term 'Big Data' has recently attained special meaning, thanks to the vast rivers of raw data that course unseen through the digital economy, and the propensity for entrepreneurs to tap that resource for their own profit, or to build new analytic tools for enterprises. Big Data represents one of the biggest challenges to privacy and data protection society has seen. Never before has so much Personal Information been available so freely to so many.

Big Data promises vast benefits for a great many stakeholders (Michael & Miller 2013: 22-24) but the benefits may be jeopardized by the excesses of a few overly zealous businesses. Some online business models are propelled by a naive assumption that data in the 'public domain' is up for grabs. Many think the law has not kept pace with technology, but technologists often underestimate the strength of conventional data protection laws and regulations. In particular, technology neutral privacy principles are largely blind to the methods of collection, and barely distinguish between directly and indirectly collected data. As a consequence, the extraction of Personal Information from raw data constitutes an act of collection and as such is subject to longstanding privacy statutes. Privacy laws such as that of Australia don't even use the words 'public' and 'private' to qualify the data flows concerned (Privacy Act 1988).

On the other hand, orthodox privacy policies and static data usage agreements do not cater for the way Personal Information can be synthesised tomorrow from raw data collected today. Privacy management must evolve to become more dynamic, instead of being preoccupied with unwieldy policy documents and simplistic technical notices about cookies.

Thus the fit between Big Data and data privacy standards is complex and sometimes surprising. While existing laws are not to be underestimated, there is a need for data privacy principles to be extended, to help individuals remain abreast of what's being done with information about them, and to foster transparency regarding the new ways for personal information to be generated.

Conclusion: Making Big Data privacy real

A Big Data dashboard like the one described could serve several parallel purposes in aid of progressive privacy principles. It could reveal dynamically to users what PII can be collected about them through Big Data; it could engage users in a fair and transparent value-for-PII exchange; and it could enable dynamic consent, where users are able to opt in to Big Data processes, and opt out and in again, over time, as their understanding of the PII bargain evolves.

Big Data holds big promises, for the benefit of many. There are grand plans for population-wide electronic health records, new personalised financial services that leverage massive retail databases, and electricity grid management systems that draw on real-time consumption data from smart meters in homes, to extend the life of aging 'poles and wires' while reducing greenhouse gas emissions. The value to individuals and operators alike of these programs is amplified as computing power grows, new algorithms are researched, and more and more data sets are joined together. Likewise, the privacy risks are compounded. The potential value of Personal Information in the modern Big Data landscape cannot be represented in a static business model, and neither can the privacy pros and cons be captured in a fixed policy document. New user interfaces and visualisations like a 'Big Data dashboard' are needed to bring dynamic extensions to traditional privacy principles, and help people appreciate and intelligently negotiate the insights that can be extracted about them from the raw material that is data.

Posted in Privacy, Big Data

Schrodinger's Privacy: A Master Class

Master Class: How to Protect Your Customer's Digital Identity and Personal Data

A Social Media Week Sydney event #SMWSydney
Law Lounge, Sydney University Law School
New Law School Building
Eastern Ave, Camperdown
Fri, Sep 26, 10:00 AM - 11:30 AM

How can you navigate privacy fact and fiction, without the geeks and lawyers boring each other to death?

It's often said that technology has outpaced privacy law. Many digital businesses seem empowered by this brash belief. And so they proceed with apparent impunity to collect and monetise as much Personal Information as they can get their hands on.

But it's a myth!

Some of the biggest corporations in the world, including Google and Facebook, have been forcefully brought to book by privacy regulations. So, we have to ask ourselves:

  • what does privacy law really mean for social media in Australia?
  • is privacy "good for business"?
  • is privacy "not a technology issue"?
  • how can digital businesses navigate fact & fiction, without their geeks and lawyers boring each other to death?

In this Social Media Week Master Class I will:

  • unpack what's "creepy" about certain online practices
  • show how to rate data privacy issues objectively
  • analyse classic misadventures with geolocation, facial recognition, and predicting when shoppers are pregnant
  • critique photo tagging and crowd-sourced surveillance
  • explain why Snapchat is worth more than three billion dollars
  • analyse the regulatory implications of Big Data, Biometrics, Wearables and The Internet of Things.

We couldn't have timed this Master Class better, coming two weeks after the announcement of the Apple Watch, which will figure prominently in the class!

So please come along, for a fun and in-depth look at social media, digital technology, the law, and decency.

Register here.

About the presenter

Steve Wilson is a technologist who stumbled into privacy 12 years ago. He rejected those well-meaning slogans (like "Privacy Is Good For Business!") and instead dug into the relationships between information technology and information privacy. Now he researches and develops design patterns to help sort out privacy, alongside all the other competing requirements of security, cost, usability and revenue. His latest publications include:

  • "The collision between Big Data and privacy law" due out in October in the Australian Journal of Telecommunications and the Digital Economy.

Posted in Social Networking, Social Media, Privacy, Internet, Biometrics, Big Data

Privacy watch

Today Apple launched their much anticipated wrist watch, described by CEO Tim Cook as "the most personal device they have ever developed". He got that right!

Rather more than a watch, it's a sort of guardian angel. The Apple Watch has Siri built-in, along with new haptic sensors and buzzers, a heartbeat monitor, accelerometer, and naturally the GPS and Wi-Fi geolocation capability to track your speed and position throughout the day. So they say "Apple Watch is an all-day fitness tracker and a highly advanced sports watch in a single device".

[Image: Apple Watch]

The Apple Watch will be a paragon of digital disruption. To understand and master disruption today requires the coordination of mobility, Big Data, the cloud and user interfaces. These cannot be treated as isolated technologies, so when a company like Apple controls them all, at scale, real transformation follows.

Thus Apple is one of the few businesses that can make promises like this: "Over time, Apple Watch gets to know you the way a good personal trainer would". In this we hear echoes of the smarts that power Siri, and we are reminded that amid the novel intimacy we have with these devices, many serious privacy problems have yet to be resolved.

The Apple Event today was a play in four acts:
Act I: the iPhone 6 release;
Act II: Apple Pay launch;
Act III: the Apple Watch announcement;
Act IV: U2 played live and released their new album free on iTunes!

It was fascinating to watch the thematic differences across these acts. With Apple Pay, they stressed security and privacy; we were told about the Secure Element, the way card numbers are replaced by random numbers (tokenization), and an architecture where Apple cannot see how much you spend nor where you spend it. On the other hand, when it came to the Apple Watch and its integrated health sensors, privacy wasn't mentioned, not at all. We are left to deduce that aggregating personal health data at Apple's servers is a part of a broader plan.

The cornerstones of data privacy include Collection Limitation, Use Limitation (or "Purpose Specification") and Openness. Custodians of our Personally Identifiable Information (PII) should refrain from collecting and retaining PII they don't really need; they should specify what they do with PII and restrict unrelated secondary usage; and they should tell people what they're doing, generally in a Privacy Policy. With Siri, Apple sadly fails all these tests.

The Apple Privacy Policy is altogether silent on Siri. The document details the sorts of information collected through its overt business processes like registration, sales and support, but it says nothing about the voice recordings and transcripts of Siri communications. Neither does the Siri FAQ mention what is done with all that data. It's quite an omission, seeing that when you dictate an SMS or an email to Siri, Apple retains a copy of communications that are normally out of bounds for your telecomms carrier.

It's been left to journalists to try and find out what Apple does with the information it mines from Siri. Wired magazine discovered eventually that Apple retains masked Siri voice recordings for six months; it then purportedly de-identifies them and keeps them for a further 18 months, for research. Yet even these explanations don't touch on the extracted contents of the communications, nor the metadata, like the trends and correlations that go to Siri's learning. If the purpose of Siri is ostensibly to automate the operation of the iPhone and its apps, then Apple should refrain from using the by-products of Siri's voice processing for anything else. But we just don't know what they do, and Apple imposes no self-restraint.

We should hope for radically greater transparency with the Apple Watch and its health apps. Most of the watch's data processing and analytics will be carried out in the cloud. So Apple will come to hold detailed records of its users' exercise regimes, their performance figures, trend data and correlations. These are health records. Inevitably, health applications will take in other medical data, like food diaries entered by users, statistics imported from other databases, and detailed measurements from Internet-connected scales, blood pressure monitors and even medical devices. Apple will see what we're doing to improve our health, day by day, year on year. They will come to know more about what's making us healthy and what's not than we do ourselves.

[Image: Apple Watch Activity app]

Now, the potential benefits from this sort of personal technology to self-managed care and preventative medicine are enormous. But so are the data management and privacy obligations.

Within the US, Apple will doubtless be taking steps to avoid falling under the stringent HIPAA regulations, yet in the rest of the world, a more subtle but far-reaching problem looms. Many broad based data privacy regimes forbid the collection of health information without consent. And the laws of the European Union, Australia, New Zealand and elsewhere are generally technology neutral. This means that data collected directly from patients or doctors, and fresh data collected by way of automated algorithms are treated essentially the same way. So when a sophisticated health management app running in the cloud somewhere mines all that exercise and lifestyle data, and starts to make inferences about health and wellbeing, great care needs to be taken that the individuals concerned know what's going on in advance, and have given their informed consent.

One of the deep privacy challenges in Big Data is that data miners don't know what they're going to find. Even with the best will in the world, a company can struggle to say in its Privacy Policy what PII it expects to extract (and thus collect) in future from the raw data it collects today. At Constellation Research we've been fleshing out a new sort of compact between businesses and individuals that seeks to keep users abreast of developments in data analytics, and promises to provide people with proper control of personal Big Data results.

It ought to be possible to expressly opt in to Big Data processes when you can understand the pros and cons and the net benefits, and to later opt out, and opt back in again, as the benefit equation shifts over time. But even visualising the products of Big Data is hard; I believe graphical user interfaces (GUIs) to allow people to comprehend and actively control the process will be one of the great software design problems of our age.
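To make that concrete, here is a minimal sketch (in Python, with purely hypothetical field and purpose names) of how a dynamic consent ledger might be recorded so that the most recent opt-in or opt-out decision for each Big Data purpose is the one that counts. It is an illustration of the idea only, not any particular product's data model.

```python
# A minimal sketch of a "dynamic consent" ledger, assuming hypothetical
# field and purpose names; an illustration of the idea, not any product's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ConsentEvent:
    purpose: str        # e.g. "exercise-trend-analytics" (made-up purpose label)
    granted: bool       # True = opt in, False = opt out
    timestamp: datetime

@dataclass
class ConsentLedger:
    user_id: str
    events: List[ConsentEvent] = field(default_factory=list)

    def record(self, purpose: str, granted: bool) -> None:
        self.events.append(ConsentEvent(purpose, granted, datetime.now(timezone.utc)))

    def is_permitted(self, purpose: str) -> bool:
        # The most recent opt-in or opt-out decision for this purpose wins;
        # silence means no consent.
        decisions = [e for e in self.events if e.purpose == purpose]
        return decisions[-1].granted if decisions else False

# A user opts in, later opts out, then opts back in as the bargain evolves.
ledger = ConsentLedger("user-123")
ledger.record("exercise-trend-analytics", granted=True)
ledger.record("exercise-trend-analytics", granted=False)
ledger.record("exercise-trend-analytics", granted=True)
print(ledger.is_permitted("exercise-trend-analytics"))   # True
```

The point of the ledger is that consent becomes a time series rather than a one-off checkbox, which is exactly the dynamism that static privacy policies lack.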

Apple are obviously preeminent in GUI and user experience innovation. You would think if anyone can create the novel yet intuitive interfaces desperately needed to control Big Data PII, Apple can. But first they will have to embrace their responsibilities for the increasingly intimate details they are helping themselves to. If the Apple Watch is "the most personal device they've ever designed" then let's see privacy and data protection commitments to match.

Posted in Privacy, e-health, Constellation Research, Cloud, Big Data

Engaging engineers in privacy

Updated from original post January 2013.

I have come to believe that a systemic conceptual shortfall affects typical technologists’ thinking about privacy. It may be that engineers tend to take literally the well-meaning slogan that “privacy is not a technology issue”. And I say this in all seriousness.

Online, we’re talking about data privacy, or data protection, but systems designers bring to work a spectrum of personal outlooks about privacy in the human sphere. Yet what matters is the precise wording of data privacy law, like Australia’s Privacy Act. To illustrate the difference, here’s the sort of experience I’ve had time and time again.

During the course of conducting a PIA in 2011, I spent time with the development team working on a new government database. These were good, senior people, with sophisticated understanding of information architecture, and they’d received in-house privacy training. But they harboured restrictive views about privacy. An important clue was the way they habitually referred to “private” information rather than Personal Information (or equivalently, Personally Identifiable Information, PII). After explaining that Personal Information is the operable term in Australian legislation, and reviewing its definition as essentially any information about an identifiable person, we found that the team had not appreciated the extent of the PII in their system. They had overlooked that most of their audit logs collect PII, albeit indirectly and automatically, and that information about clients in their register provided by third parties was also PII (despite it being intuitively ‘less private’ by virtue of originating from others).

I attributed these blind spots to the developers’ loose framing of “private” information. Online and in privacy law alike, things are very crisp. The definition of PII as any data relating to an individual whose identity is readily apparent sets a low bar, embracing a great many data classes and, by extension, informatics processes. It might be counter-intuitive that PII originating from so many places (even the public domain) falls under privacy regulations, yet the definition of PII is clear cut and readily factored into systems analysis. After getting that, the team engaged in the PIA with fresh energy, and we found and rectified several privacy risks that had gone unnoticed.

Here are some more of the recurring misconceptions I’ve noticed over the past decade:


  • “Personal” Information is sometimes taken to mean especially delicate information such as payment card details, rather than any information pertaining to an identifiable individual; see also this exchange with US data breach analyst Jake Kouns over the Epsilon incident in 2011 in which tens of millions of user addresses were taken from a bulk email house;
  • the act of collecting PII is sometimes regarded only in relation to direct collection from the individual concerned; technologists can overlook that PII provided by a third party to a data custodian is nevertheless being collected by the custodian; likewise technologists may not appreciate that generating PII internally, through event logging for instance, also represents collection (see the sketch below).
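By way of illustration, here is a contrived Python sketch, with made-up names, showing how a perfectly ordinary audit log statement amounts to a collection of PII, because the entries relate to identifiable individuals even though nobody filled in a form.

```python
# A contrived illustration with made-up names: an ordinary audit log line is
# itself a collection of Personal Information, because its contents relate to
# an identifiable individual, even though no form was ever filled in.
import logging

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)

def record_access(username: str, client_ip: str, record_id: str) -> None:
    # The username and IP address both relate to an identifiable person, so
    # writing this entry is an (indirect, automatic) collection of PII.
    logging.info("user=%s ip=%s viewed record=%s", username, client_ip, record_id)

record_access("jcitizen", "203.0.113.7", "CL-0042")
```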

These instances and others show that many ICT practitioners suffer important gaps in their understanding. Security professionals in particular may be forgiven for thinking that most legislated Privacy Principles are legal technicalities irrelevant to them, for generally only one of the principles in any given set is overtly about security; see:


  • no. 5 of the OECD Privacy Principles
  • no. 4 of the Fair Information Practice Principles in the US
  • no. 8 of the Generally Accepted Privacy Principles of the US and Canadian accounting bodies,
  • no. 4 of the older National Privacy Principles of Australia, and
  • no. 11 of the new Australian National Privacy Principles.

Yet all of the privacy principles in these regimes are impacted by information technology and security practices; see Mapping Privacy requirements onto the IT function, Privacy Law & Policy Reporter, v10.1 & 10.2, 2003. I believe the gaps in the privacy knowledge of ICT practitioners are not random but are systemic, probably resulting from privacy training for non-privacy professionals not being properly integrated with their particular world views.

To properly deal with data privacy, ICT practitioners need to have privacy framed in a way that leads to objective design requirements. Luckily there already exist several unifying frameworks for systematising the work of development teams. One tool that resonates strongly with data privacy practice is the Threat & Risk Assessment (TRA).

A TRA is for analysing infosec requirements and is widely practiced in the public and private sectors in Australia. There are a number of standards that guide the conduct of TRAs, such as ISO 31000. A TRA is used to systematically catalogue all foreseeable adverse events that threaten an organisation’s information assets, identify candidate security controls to mitigate those threats, and prioritise the deployment of controls to bring all risks down to an acceptable level. The TRA process delivers real-world management decisions, understanding that non-zero risks are ever present, and that no organisation has an unlimited security budget.

The TRA exercise is readily extensible to help Privacy by Design. A TRA can expressly incorporate privacy as an aspect of information assets worth protecting, alongside the conventional security qualities of confidentiality, integrity and availability ("C.I.A.").

[Table: Asset inventory for a privacy-extended TRA, from Lockstep's AusCERT 2013 "Designing Privacy by Design" presentation]

A crucial subtlety here is that privacy is not the same as confidentiality, yet they are frequently conflated. A fuller understanding of privacy leads designers to consider the Collection, Use, Disclosure and Access & Correction principles, over and above confidentiality when they analyse information assets. The table above illustrates how privacy related factors can be accounted for alongside “C.I.A.”. In another blog post I discuss the selection of controls to mitigate privacy threats, within a unified TRA framework.
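As a rough illustration of the idea (not a real TRA template, and with hypothetical field names), one row of such an asset inventory might carry privacy qualities alongside the conventional C.I.A. ratings like this:

```python
# A rough sketch only, with hypothetical field names: one row of an asset
# inventory that carries privacy qualities alongside the usual C.I.A. ratings.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationAsset:
    name: str
    # Conventional security qualities, rated e.g. Low / Medium / High
    confidentiality: str
    integrity: str
    availability: str
    # Privacy qualities, reflecting the Collection, Use, Disclosure and
    # Access & Correction principles over and above confidentiality
    contains_pii: bool
    collection_purpose: str = ""
    permitted_uses: List[str] = field(default_factory=list)
    permitted_disclosures: List[str] = field(default_factory=list)

audit_log = InformationAsset(
    name="Application audit log",
    confidentiality="Medium", integrity="High", availability="Medium",
    contains_pii=True,   # usernames and IP addresses make the log PII
    collection_purpose="Fraud detection and troubleshooting",
    permitted_uses=["incident investigation"],
    permitted_disclosures=["regulator, on lawful request"],
)
print(audit_log)
```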

And in this post I look at how the definitional uncertainties in privacy and the unfolding identifiability of PII should not cause security professionals much anxiety - because they're trained to deal with uncertainties and likelihoods.

We continue to actively research the closer integration of security and privacy practices.

Posted in Security, Privacy

It's not too late for privacy

Have you heard the news? "Privacy is dead!"

The message is urgent. It's often shouted in prominent headlines, with an implied challenge. The new masters of the digital universe urge the masses: C'mon, get with the program! Innovate! Don't be so precious! Don't you grok that Information Wants To Be Free? Old fashioned privacy is holding us back!

The stark choice posited between privacy and digital liberation is rarely examined with much intellectual rigor. Often, "privacy is dead" is just a tired fatalistic response to the latest breach or eye-popping digital development, like facial recognition, or a smartphone's location monitoring. In fact, those who earnestly assert that privacy is over are almost always trying to sell us something, be it sneakers, or a political ideology, or a wanton digital business model.

Is it really too late for privacy? Is the "genie out of the bottle"? Even if we accepted the ridiculous premise that privacy is at odds with progress, no, it's not too late, for a couple of reasons. Firstly, the pessimism (or barely disguised commercial opportunism) generally confuses secrecy for privacy. And secondly, frankly, we ain't seen nothin' yet!

Conflating privacy and secrecy

Technology certainly has laid us bare. Behavioral modeling, facial recognition, Big Data mining, natural language processing and so on have given corporations X-Ray vision into our digital lives. While exhibitionism has been cultivated and normalised by the informopolists, even the most guarded social network users may be defiled by data prospectors who, without consent, upload their contact lists, pore over their photo albums, and mine their shopping histories.

So yes, a great deal about us has leaked out into what some see as an infinitely extended neo-public domain. And yet we can be public and retain our privacy at the same time. Just as we have for centuries of civilised life.

It's true that privacy is a slippery concept. The leading privacy scholar Daniel Solove once observed that "Privacy is a concept in disarray. Nobody can articulate what it means."

Some people seem defeated by privacy's definitional difficulties, yet information privacy is simply framed, and corresponding data protection laws are elegant and readily understood.

Information privacy is basically a state where those who know us are restrained in what they do with the knowledge they have about us. Privacy is about respect, and protecting individuals against exploitation. It is not about secrecy or even anonymity. There are few cases where ordinary people really want to be anonymous. We actually want businesses to know - within limits - who we are, where we are, what we've done and what we like ... but we want them to respect what they know, to not share it with others, and to not take advantage of it in unexpected ways. Privacy means that organisations behave as though it's a privilege to know us. Privacy can involve businesses and governments giving up a little bit of power.

Many have come to see privacy as literally a battleground. The grassroots Cryptoparty movement came together around the heady belief that privacy means hiding from the establishment. Cryptoparties teach participants how to use Tor and PGP, and they spread a message of resistance. They take inspiration from the Arab Spring where encryption has of course been vital for the security of protestors and organisers. One Cryptoparty I attended in Sydney opened with tributes from Anonymous, and a number of recorded talks by activists who ranged across a spectrum of political issues like censorship, copyright, national security and Occupy.

I appreciate where they're coming from, for the establishment has always overplayed its security hand, and run roughshod over privacy. Even traditionally moderate Western countries have governments charging like china shop bulls into web filtering and ISP data retention, all in the name of a poorly characterised terrorist threat. When governments show little sympathy for netizenship, and absolutely no understanding of how the web works, it's unsurprising that sections of society take up digital arms in response.

Yet going underground with encryption is a limited privacy stratagem, because do-it-yourself encryption is incompatible with the majority of our digital dealings. The most nefarious and least controlled privacy offences are committed not by government but by Internet companies, large and small. To engage fairly and squarely with businesses, consumers need privacy protections, comparable to the safeguards against unscrupulous merchants we enjoy, uncontroversially, in traditional commerce. There should be reasonable limitations on how our Personally Identifiable Information (PII) is used by all the services we deal with. We need department stores to refrain from extracting health information from our shopping habits, merchants to not use our credit card numbers as customer reference numbers, shopping malls to not track patrons by their mobile phones, and online social networks to not x-ray our photo albums by biometric face recognition.

Encrypting everything we do would only put it beyond reach of the companies we obviously want to deal with. Look for instance at how the cryptoparties are organised. Some cryptoparties manage their bookings via the US event organiser Eventbrite, to which attendees have to send a few personal details. So ironically, when registering for a cryptoparty, you cannot use encryption!

The central issue is this: going out in public does not neutralise privacy. It never did in the physical world and it shouldn't be the case in cyberspace either. Modern society has long rested on balanced consumer protection regulations to curb the occasional excesses of business and government. Therefore we ought not to respond to online privacy invasions as if the digital economy is a new Wild West. We should not have to hide away if privacy is agreed to mean respecting the PII of customers, users and citizens, and restraining what data custodians do with that precious resource.

Data Mining and Data Refining

We're still in the early days of the social web, and the information innovation has really only just begun. There is incredible value to be extracted from mining the underground rivers of data coursing unseen through cyberspace, and refining that raw material into Personal Information.

Look at what the data prospectors and processors have managed to do already.


  • Facial recognition transforms vast stores of anonymous photos into PII, without consent, and without limitation. Facebook's deployment of biometric technology was covert and especially clever. For years they encouraged users to tag people they knew in photos. It seemed innocent enough but through these fun and games, Facebook was crowd-sourcing the facial recognition templates and calibrating their constantly evolving algorithms, without ever mentioning biometrics in their privacy policy or help pages. Even now Facebook's Data Use Policy is entirely silent on biometric templates and what they allow themselves to do with them.

    It's difficult to overstate the value of facial recognition to businesses like Facebook when they have just one asset: knowledge about their members and users. Combined with image analysis and content addressable graphical memory, facial recognition lets social media companies work out what we're doing, when, where and with whom. I call it piracy. Billions of everyday images have been uploaded over many years by users for ostensibly personal purposes, without any clue that technology would emerge to convert those pictures into a commercial resource.

    Third party services like Facedeals are starting to emerge, using Facebook's photo resources for commercial facial recognition in public. And the most recent facial recognition entrepreneurs like Name Tag App boast of scraping images from any "public" photo databases they can find. But as we shall see below, in many parts of the world there are restrictions on leveraging public-facing databases, because there is a legal difference between anonymous data and identified information.

  • Some of the richest stores of raw customer data are aggregated in retailer databases. The UK supermarket chain Tesco for example is said to hold more data about British citizens than the government does. For years of course data analysts have combed through shopping history for marketing insights, but their predictive powers are growing rapidly. An infamous example is Target's covert development of methods to identify customers who are pregnant based on their buying habits. Some Big Data practitioners seem so enamoured with their ability to extract secrets from apparently mundane data, they overlook that PII collected indirectly by algorithm is subject to privacy law just as if it was collected directly by questionnaire. Retailers need to remember this as they prepare to parlay their massive loyalty databases into new financial services ventures.
  • Natural Language Processing (NLP) is the secret sauce in Apple's Siri, allowing her to take commands and dictation. Every time you dictate an email or a text message to Siri, Apple gets hold of telecommunications content that is normally out of bounds to the phone companies. Siri is like a free PA that reports your daily activities back to the secretarial agency. There is no mention at all of Siri in Apple's Privacy Policy despite the limitless collection of intimate personal information.
  • And looking ahead, Google Glass in the privacy stakes will probably surpass both Siri and facial recognition. If actions speak louder than words, imagine the value to Google of seeing through Glass exactly what we do in real time. Digital companies wanting to know our minds won't need us to expressly "like" anything anymore; they'll be able to tell our preferences from our unexpurgated behaviours.

The surprising power of data protection regulations

There's a widespread belief that technology has outstripped privacy law, yet it turns out technology neutral data privacy law copes well with most digital developments. OECD privacy principles (enacted in over 100 countries) and the US FIPPs (Fair Information Practice Principles) require that companies be transparent about what PII they collect and why, and limit the ways in which PII is used for unrelated purposes.

Privacy advocates can take heart from several cases where existing privacy regulations have proven effective against some of the informopolies' trespasses. And technologists and cynics who think privacy is hopeless should heed the lessons.


  • Google StreetView cars, while they drive up and down photographing the world, also collect Wi-Fi hub coordinates for use in geo-location services. In 2010 it was discovered that the StreetView software was also collecting unencrypted Wi-Fi network traffic, some of which contained Personal Information like user names and even passwords. Privacy Commissioners in Australia, Japan, Korea, the Netherlands and elsewhere found Google was in breach of their data protection laws. Google explained that the collection was inadvertent, apologized, and destroyed all the wireless traffic that had been gathered.

    The nature of this privacy offence has confused some commentators and technologists. Some argue that Wi-Fi data in the public domain is not private, and “by definition” (so they like to say) categorically could not be private. Accordingly some believed Google was within its rights to do whatever it liked with such found data. But that reasoning fails to grasp the technicality that Data Protection laws in Europe, Australia and elsewhere do not essentially distinguish “public” from “private”. In fact the word “private” doesn’t even appear in Australia’s “Privacy Act”. If data is identifiable, then privacy rights generally attach to it irrespective of how it is collected.

  • Facebook photo tagging was ruled unlawful by European privacy regulators in mid 2012, on the grounds it represents a collection of PII (by the operation of the biometric matching algorithm) without consent. By late 2012 Facebook was forced to shut down facial recognition and tag suggestions in the EU. This was quite a show of force over one of the most powerful companies of the digital age. More recently Facebook has started to re-introduce photo tagging, prompting the German privacy regulator to reaffirm that this use of biometrics is counter to their privacy laws.

It's never too late

So, is it really too late for privacy? Outside the United States at least, established privacy doctrine and consumer protections have taken technocrats by surprise. They have found, perhaps counter intuitively, that they are not as free as they thought to exploit all personal data that comes their way.

Privacy is not threatened so much by technology as it is by sloppy thinking and, I'm afraid, by wishful thinking on the part of some vested interests. Privacy and anonymity, on close reflection, are not the same thing, and we shouldn't want them to be! It's clearly important to be known by others in a civilised society, and it's equally important that those who do know us, are reasonably restrained in how they use that knowledge.

Posted in Social Networking, Social Media, Privacy

Getting the security privacy balance wrong

National security analyst Dr Anthony Bergin of the Australian Strategic Policy Institute wrote of the government’s data retention proposals in the Sydney Morning Herald of August 14. I am a privacy advocate who accepts that law enforcement needs new methods to deal with terrorism. In fact, I trust there is a case for greater data retention in order to weed out terrorist preparations, but I reject Bergin’s patronising call that “Privacy must take a back seat to security”. He speaks soothingly of balance yet he rejects privacy out of hand. As such his argument for balance is anything but balanced.

Suspicions are rightly raised by the murkiness of the Australian government’s half-baked data retention proposals and by our leaders’ excruciating inability to speak cogently even about the basics. They bandy about metaphors for metadata that are so bad, they smack of misdirection. Telecommunications metadata is vastly more complex than addresses on envelopes; for one thing, because cell phones use dynamic IP addresses, working out who made a call requires far more data than ASIO and the AFP are letting on (more on this by Internet expert Geoff Huston here).

The way authorities jettison privacy so casually is of grave concern. Either they do not understand privacy, or they’re paying lip service to it. In truth, data privacy is simply about restraint. Organisations must explain what personal data they collect, why they collect it, who else gets to access the data, and what they do with it. These principles are not at all at odds with national security. If our leaders are genuine in working with the public on a proper balance of privacy and security, then long-standing privacy principles about proportionality, transparency and restraint provide the perfect framework in which to hold the debate. Ed Snowden himself knows this; people should look beyond the trite hero-or-pariah characterisations and listen to his balanced analysis of national security and civil rights.

Cryptographers have a saying: There is no security in obscurity. Nothing is gained by governments keeping the existence of surveillance programs secret or unexplained, but the essential trust of the public is lost when their privacy is treated with contempt.

Posted in Trust, Security, Privacy

Postcard from Monterey 2 #CISmcc

Second Day Reflections from CIS Monterey.

Follow along on Twitter at #CISmcc (for the Monterey Conference Centre).

The Attributes push

At CIS 2013 in Napa a year ago, several of us sensed a critical shift in focus amongst the identerati - from identity to attributes. OIX launched the Attributes Exchange Network (AXN) architecture, important commentators like Andrew Nash were saying, 'hey, attributes are more interesting than identity', and my own #CISnapa talk went so far as to argue we should forget about identity altogether. There was a change in the air, but still, it was all pretty theoretical.

Twelve months on, and the Attributes push has become entirely practical. If there was a Word Cloud for the NSTIC session, my hunch is that "attributes" would dominate over "identity". Several live NSTIC pilots are all about the Attributes.

ID.me is a new company started by US military veterans, with the aim of improving access for the veterans community to discounted goods and services and other entitlements. Founders Matt Thompson and Blake Hall are not identerati -- they're entirely focused on improving online access for their constituents to a big and growing range of retailers and services, and offer a choice of credentials for proving veterans' bona fides. It's central to the ID.me model that users reveal as little as possible about their personal identities, while having their veterans' status and entitlements established securely and privately.

Another NSTIC pilot Relying Party is the financial service sector infrastructure provider Broadridge. Adrian Chernoff, VP for Digital Strategy, gave a compelling account of the need to change business models to take maximum advantage of digital identity. Broadridge recently announced a JV with Pitney Bowes called Inlet, which will enable the secure sharing of discrete and validated attributes - like name, address and social security number - in an NSTIC compliant architecture.

Mind Altering

Yesterday I said in my #CISmcc diary that I hoped to change my mind about something here, and half way through Day 2, I was delighted it was already happening. I've got a new attitude about NSTIC.

Over the past six months, I had come to fear NSTIC had lost its way. It's hard to judge totally accurately when lurking on the webcast from Sydney (at 4:00am) but the last plenary seemed pedestrian to me. And I'm afraid to say that some NSTIC committees have got a little testy. But today's NSTIC session here was a turning point. Not only are there a number of truly exciting pilots showing real progress, but Jeremy Grant has credible plans for improving accountability and momentum, and the new technology lead Paul Grassi is thinking outside the box and speaking out of school. The whole program seems fresh all over again.

In a packed presentation, Grassi impressed me enormously on a number of points:

  • Firstly, he advocates a pragmatic NSTIC-focused extension of the old US government Authentication Guide NIST SP 800-63. Rather than a formal revision, a companion document might be most realistic. Along the way, Grassi really nailed an issue which we identity professionals need to talk about more: language. He said that there are words in 800-63 that are "never used anywhere else in systems development". No wonder, as he says, it's still "hard to implement identity"!
  • Incidentally I chatted some more with Andrew Hughes about language; he is passionate about terms, and highlights that our term "Relying Party" is an especially terrible distraction for Service Providers whose reason-for-being has nothing to do with "relying" on anyone!
  • Secondly, Paul Grassi wants to "get very aggressive on attributes", including emphasis on practical measurement (since that's really what NIST is all about). I don't think I need to say anything more about that than Bravo!
  • And thirdly, Grassi asked "What if we got rid of LOAs?!". This kind of iconoclastic thinking is overdue, and was floated as part of a broad push to revamp the way government's orthodox thinking on Identity Assurance is translated to the business world. Grassi and Grant don't say LOAs can or should be abandoned by government, but they do see that shoving the rounded business concepts of identity into government's square hole has not done anyone much credit.

Just one small part of NSTIC annoyed me today: the persistent idea that federation hubs are inherently simpler than one-to-one authentication. They showed the following classic sort of 'before and after' shots, where it seems self-evident that a hub (here the Federal Cloud Credential Exchange FCCX) reduces complexity. The reality is that multilateral brokered arrangements between RPs and IdPs are far more complex than simple bilateral direct contracts. And moreover, the new forms of agreements are novel and untested in real world business. The time and cost and unpredictability of working out these new arrangements is not properly accounted for and has often been fatal to identity federations.

[Slides: FCCX federation 'before' and 'after' architecture diagrams]


The dog barks and this time the caravan turns around

One of the top talking points at #CISmcc has of course been FIDO. The FIDO Alliance goes from strength to strength; we heard they have over 130 members now (remember it started with four or five less than 18 months ago). On Saturday afternoon there was a packed-out FIDO show case with six vendors showing real FIDO-ready products. And today there was a three hour deep dive into the two flagship FIDO protocols UAF (which enables better sharing of strong authentication signals such that passwords may be eliminated) and U2F (which standardises and strengthens Two Factor Authentication).

FIDO's marketing messages are improving all the time, thanks to a special focus on strategic marketing which was given its own working group. In particular, the Alliance is steadily clarifying the distinction between identity and authentication, and sticking adamantly to the latter. In other words, FIDO is really all about the attributes. FIDO leaves identity as a problem to be addressed further up the stack, and dedicates itself to strengthening the authentication signal sent from end-point devices to servers.

The protocol tutorials were excellent, going into detail about how "Attestation Certificates" are used to convey the qualities and attributes of authentication hardware (such as device model, biometric modality, security certifications, elapsed time since last user verification etc) thus enabling nice fine-grained policy enforcement on the RP side. To my mind, UAF and U2F show how nature intended PKI to have been used all along!
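To give a feel for what that fine-grained policy enforcement might look like on the RP side, here is a schematic Python sketch. The field names are illustrative only; they are not the actual UAF or U2F message formats.

```python
# A schematic sketch of RP-side policy enforcement driven by attested
# authenticator attributes. Field names are illustrative only; they are not
# the actual FIDO UAF or U2F message formats.
from dataclasses import dataclass

@dataclass
class AttestedAuthenticator:
    device_model: str
    biometric_modality: str               # e.g. "fingerprint", or "none"
    security_certification: str           # e.g. "FIPS 140-2 Level 2"
    seconds_since_user_verification: int

def rp_policy_allows(auth: AttestedAuthenticator) -> bool:
    # Example policy: a certified device, a biometric check, and a user
    # verification event within the last five minutes.
    return (
        auth.security_certification.startswith("FIPS")
        and auth.biometric_modality != "none"
        and auth.seconds_since_user_verification <= 300
    )

claim = AttestedAuthenticator("ExampleKey-2", "fingerprint", "FIPS 140-2 Level 2", 42)
print("transaction permitted:", rp_policy_allows(claim))
```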

Some confusion remains as to why FIDO has two protocols. I heard some quiet calls for UAF and U2F to converge, yet that would seem to put the elegance of U2F at risk. And it's noteworthy that U2F is being taken beyond the original one time password 2FA, with at least one biometric vendor at the showcase claiming to use it instead of the heavier UAF.

Surprising use cases

Finally, today brought more fresh use cases from cohorts of users we socially privileged identity engineers for the most part rarely think about. Another NSTIC pilot partner is AARP, a membership organization providing "information, advocacy and service" to older people, retirees and other special needs groups. AARP's Jim Barnett gave a compelling presentation on the need to extend from the classic "free" business models of Internet services, to new economically sustainable approaches that properly protect personal information. Barnett stressed that "free" has been great and 'we wouldn't be where we are today without it' but it's just not going to work for health records for example. And identity is central to that.

There's so much more I could report if I had time. But I need to get some sleep before another packed day. All this changing my mind is exhausting.

Cheers again from Monterey.

Posted in Security, Privacy, PKI, Language, Identity, Federated Identity, e-health

Webinar: Big Privacy

I'm presenting a Constellation Research webinar next week on my latest research into "Big Privacy" (June 18th in the US / June 19th in Australia). I hope you can join us.

Register here.

We live in an age where billionaires are self-made on the back of the most intangible of assets – the information they have amassed about us. That information used to be volunteered in forms and questionnaires and contracts but increasingly personal information is being observed and inferred.

The modern world is awash with data. It’s a new and infinitely re-usable raw material. Most of the raw data about us is an invisible by-product of our mundane digital lives, left behind by the gigabyte by ordinary people who do not perceive it let alone understand it.

Many Big Data and digital businesses proceed on the basis that all this raw data is up for grabs. There is a particular widespread assumption that data in the "public domain" is free-for-all, and if you’re clever enough to grab it, then you’re entitled to extract whatever you can from it.

In the webinar, I'll try to show how some of these assumptions are naive. The public is increasingly alarmed about Big Data and averse to unbridled data mining. Excessive data mining isn't just subjectively 'creepy'; it can be objectively unlawful in many parts of the world. Conventional data protection laws turn out to be surprisingly powerful in the face of Big Data. Data miners ignore international privacy laws at their peril!

Today there are all sorts of initiatives trying to forge a new technology-privacy synthesis. They go by names like "Privacy Engineering" and "Privacy by Design". These are well meaning efforts but they can be a bit stilted. They typically overlook the strengths of conventional privacy law, and they can miss an opportunity to engage the engineering mind.

It’s not politically correct but I believe we must admit that privacy is full of contradictions and competing interests. We need to be more mature about privacy. Just as there is no such thing as perfect security, there can never be perfect privacy either. And this is where the professional engineering mindset should be brought in, to help deal with conflicting requirements.

If we’re serious about Privacy by Design and Privacy Engineering then we need to acknowledge the tensions. That’s some of the thinking behind Constellation's new Big Privacy compact. To balance privacy and Big Data, we need to hold a conversation with users that respects the stresses and strains, and involves them in working through the new privacy deal.

The webinar will cover these highlights of the Big Privacy pact:

    • Respect and Restraint
    • Super transparency
    • And a fair deal for Personal Information.

Have a disruptive technology implementation story? Get recognised for your leadership. Apply for the 2014 SuperNova Awards for leaders in disruptive technology.

Posted in Social Media, Privacy, Constellation Research, Biometrics, Big Data

Three billion was a Snap

The latest Snowden revelations include the NSA's special programs for extracting photos from the Internet and identifying the people in them. Amongst other things the NSA uses their vast information resources to correlate location cues in photos -- buildings, streets and so on -- with satellite data, to work out where people are. They even search especially for passport photos, because these are better fodder for facial recognition algorithms. The audacity of these government surveillance activities continues to surprise us, and their secrecy is abhorrent.

Yet an ever greater scale of private sector surveillance has been going on for years in social media. With great pride, Facebook recently revealed its R&D in facial recognition. They showcased the brazenly named "DeepFace" biometric algorithm, which is claimed to be 97% accurate in recognising faces from regular images. Facebook has made a swaggering big investment in biometrics.

Data mining needs raw material, there's lots of it out there, and Facebook has been supremely clever at attracting it. It's been suggested that 20% of all photos now taken end up in Facebook. Even three years ago, Facebook held 10,000 times as many photographs as the Library of Congress:

Largest photo libraries
[Picture courtesy of the now retired 1000memories.com blog]

And Facebook will spend big buying other photo lodes. Last year they tried to buy Snapchat for the spectacular sum of three billion dollars. The figure had pundits reeling. How could a start-up company with 30 people be worth so much? All the usual dot com comparisons were made; the offer seemed a flight of fancy.

But no, the offer was a rational consideration for the precious raw material that lies buried in photo data.

Snapchat generates at least 100 million new images every day. Three billion dollars was, pardon me, a snap. I figure that at a ballpark internal rate of return of 10%, a $3B investment is equivalent to $300M p.a. so even if the Snapchat volume stopped growing, Facebook would have been paying one cent for every new snap, in perpetuity.
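For those who like to check the arithmetic, here it is spelled out, using the figures above (the 10% rate of return is my ballpark assumption):

```python
# Back-of-envelope check of the "one cent per snap" figure; the 10% internal
# rate of return is the ballpark assumption stated above.
offer = 3_000_000_000                      # the reported $3B offer
annual_equivalent = offer * 0.10           # ≈ $300M per year
snaps_per_year = 100_000_000 * 365         # at least 100 million new images a day
cost_per_snap = annual_equivalent / snaps_per_year
print(f"${cost_per_snap:.3f} per snap")    # ≈ $0.008, i.e. roughly a cent
```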

These days, we have learned from Snowden and the NSA that communications metadata is just as valuable as the content of our emails and phone calls. So remember that it's the same with photos. Each digital photo comes from a device that embeds within the image metadata usually including the time and place of when the picture was taken. And of course each Instagram or Snapchat is a social post, sent by an account holder with a history and rich context in which the image yields intimate real time information about what they're doing, when and where.
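If you want to see that metadata for yourself, a few lines of Python with the Pillow imaging library will do it. The file name is a placeholder, and the GPS handling shown here assumes a recent Pillow release.

```python
# A minimal sketch of reading the time-and-place metadata embedded in a photo,
# using the Pillow library; "photo.jpg" is a placeholder path, and the GPS
# handling assumes a recent Pillow release.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()

# Map numeric EXIF tag IDs to readable names
named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
print("Taken at:", named.get("DateTime"))

# GPS coordinates live in a sub-directory of the EXIF data
gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
print("GPS tags:", {ExifTags.GPSTAGS.get(k, k): v for k, v in gps.items()})
```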

The hallmark of the Snapchat service is transience: all those snaps are supposed to flit from one screen to another before vaporising. Now of course that idea is contestable; enthusiasts worked out pretty quickly how to retrieve snaps from old memory. And in any case, transience is a red herring, perhaps a deliberate distraction, because the metadata matters more, and Snapchat admits in its Privacy Policy that it pretty well keeps the lot:

When you access or use our Services, we automatically collect information about you, including:
  • Usage Information: When you send or receive messages via our Services, we collect information about these messages, including the time, date, sender and recipient of the Snap. We also collect information about the number of messages sent and received between you and your friends and which friends you exchange messages with most frequently.
  • Log Information: We log information about your use of our websites, including your browser type and language, access times, pages viewed, your IP address and the website you visited before navigating to our websites.
  • Device Information: We may collect information about the computer or device you use to access our Services, including the hardware model, operating system and version, MAC address, unique device identifier, phone number, International Mobile Equipment Identity ("IMEI") and mobile network information. In addition, the Services may access your device's native phone book and image storage applications, with your consent, to facilitate your use of certain features of the Services.
  • Location Information: With your consent, we may collect information about the location of your device to facilitate your use of certain features of our Services, determine the speed at which your device is traveling, add location-based filters to your Snaps (such as local weather), and for any other purpose described in this privacy policy.

Snapchat goes on to declare it may use any of this information to "personalize and improve the Services and provide advertisements, content or features that match user profiles or interests" and it reserves the right to share any information with "vendors, consultants and other service providers who need access to such information to carry out work on our behalf".

So back to the data mining: nothing stops Snapchat -- or a new parent company -- running biometric facial recognition over the snaps as they pass through the servers, to extract additional "profile" information. And there's an extra kicker that makes Snapchats extra valuable for biometric data miners. The vast majority of Snapchats are selfies. So if you extract a biometric template from a snap, you already know who it belongs to, without anyone having to tag it. Snapchat would provide a hundred million auto-calibrations every day for facial recognition algorithms! On Facebook, the privacy-aware turn off photo tagging, but with Snapchats, self identification is inherent to the experience and is unlikely to ever be disabled.

NSA has all your selfies

As I've discussed before, the morbid thrill of Snowden's spying revelations has tended to overshadow his sober observations that when surveillance by the state is probably inevitable, we need to be discussing accountability.

While we're all ventilating about the NSA, it's time we also attended to private sector spying and properly debated the restraints that may be appropriate on corporate exploitation of social data.

Personally I'm much more worried that an informopoly has all my selfies.

Have a disruptive technology implementation story? Get recognised for your leadership. Apply for the 2014 SuperNova Awards for leaders in disruptive technology.

Posted in Social Networking, Social Media, Privacy, Biometrics, Big Data