Big Privacy: a new Constellation Research Report

I've just completed a major new Constellation Research report looking at how today's privacy practices cope with Big Data. The report draws together my longstanding research on the counter-intuitive strengths of technology-neutral data protection laws, and melds it with my new Constellation colleagues' vast body of work in data analytics. The synergy is honestly exciting and illuminating.

Big Data promises tremendous benefits for a great many stakeholders but the potential gains are jeopardised by the excesses of a few. Some cavalier online businesses are propelled by a naive assumption that data in the "public domain" is up for grabs, and with that they often cross a line.

For example, there are apps and services now that will try to identify pictures you take of strangers in public, by matching them biometrically against data supersets compiled from social networking sites and other publicly accessible databases. Many find such offerings quite creepy, but they may be at a loss as to what to do about it, or even how to think through the issues objectively. Yet the very metaphor of data mining holds some of the clues. If, as some say, raw data is like crude oil, just waiting to be mined and exploited by enterprising prospectors, then surely there are limits, akin to mining permits?

Many think the law has not kept pace with technology, and that digital innovators are free to do what they like with any data they can get their hands on. But technologists repeatedly underestimate the strength of conventional data protection laws and regulations. The extraction of PII from raw data may be interpreted under technology-neutral privacy principles as an act of Collection, and as such is subject to existing statutes. Around the world, Google thus found they are not actually allowed to gather Personal Data that happens to be available in unencrypted Wi-Fi transmissions as StreetView cars drive by homes and offices. And Facebook found they are not actually allowed to automatically identify people in photos through face recognition without consent. And Target would probably find, if they tried it outside the USA, that they cannot flag selected female customers as possibly pregnant by analysing their buying habits.

On the other hand, orthodox privacy policies and static user agreements do not cater for the way personal data can be conjured tomorrow from raw data collected today. Traditional privacy regimes require businesses to spell out what personally identifiable information (PII) they collect and why, and to restrict secondary usage. Yet with Big Data, with the best will in the world, a company might not know what data analytics will yield down the track. If mutual benefits for business and customer alike might be uncovered, a freeze-frame privacy arrangement may be counter-productive.

Thus the fit between data analytics and data privacy standards is complex and sometimes surprising. While existing laws are not to be underestimated, we do need something new. As far as I know, it was Ray Wang, in his Harvard Business Review blog, who first called for a fresh privacy compact between users and businesses.

The spirit of data privacy is simply framed: organisations that know us should respect the knowledge they have, they should be open about what they know, and they should be restrained in what they do with it. In the Age of Big Data, let's have businesses respect the intelligence they extract from data mining, just as they should respect the knowledge they collect directly through forms and questionnaires.

I like the label "Big Privacy"; it is grandly optimistic, like "Big Data" itself, and at the same time implies a challenge to do better than regular privacy practices.

Ontario Privacy Commissioner Dr Ann Cavoukian writes about Big Privacy, describing it simply as "Privacy By Design writ large". But I think there must be more to it than that. Big Data is quantitatively but also qualitatively different from ordinary data analysis.

To summarise the basic elements of a Big Data compact:

  • Respect and Restraint: In the face of Big Data’s temptations, remember that privacy is not only about what we do with PII; just as important is what we choose not to do.
  • Super transparency: Who knows what lies ahead in Big Data? If data privacy means being open about what PII is collected and why, then advanced privacy means going further, telling people more about the business models and the sorts of results data mining is expected to return.
  • Engage customers in a fair deal for PII: Information businesses ought to set out what PII is really worth to them (especially when it is extracted in non-obvious ways from raw data) and offer a fair "price" for it, whether in the form of "free" products and services, or explicit payment.
  • Really innovate in privacy: There’s a common refrain that “privacy hampers innovation” but often that's an intellectually lazy cover for reserving the right to strip-mine PII. Real innovation lies in business practices which create and leverage PII while honoring privacy principles.

My report, "Big Privacy" Rises to the Challenges of Big Data may be downloaded from the Constellation Research website.

Posted in Social Media, Privacy, Constellation Research, Big Data

Who's listening to Ed Snowden?

In one of the most highly anticipated sessions ever at the annual South-by-Southwest (SXSW) culture festival, NSA whistle blower Ed Snowden appeared via live video link from Russia. He joined two privacy and security champions from the American Civil Liberties Union – Chris Soghoian and Ben Wizner – to canvass the vexed tensions between intelligence and law enforcement, personal freedom, government accountability and digital business models.

These guys traversed difficult ground, with respect and much nuance. They agreed the issues are tough, and that proper solutions are non-obvious and slow-coming. The transcript is available here.

Yet afterwards the headlines and tweet stream were dominated by "Snowden's Tips" for personal online security. It was as if Snowden had been conducting a self-help workshop or a Cryptoparty. He was reported to recommend we encrypt our hard drives, encrypt our communications, and use Tor (the free and open source anonymising network and browser). These are mostly fine suggestions but I am perplexed why they should be the main takeaways from a complex discussion. Are people listening to Snowden's broader and more general policy lessons? I fear not. I believe people still conflate secrecy and privacy. At the macro level, the confusion makes it difficult to debate national security policy properly; at a micro level, even if crypto were practical for typical citizens, it is not a true privacy measure. Citizens need so much more than secrecy technologies, whether it's SSL-always-on at web sites, or do-it-yourself encryption.

Ed Snowden is a remarkably measured and thoughtful commentator on national security. Despite being hounded around the world, he is not given to sound bites. His principal concerns appear to be around public accountability, oversight and transparency. He speaks of the strengths and weaknesses of the governance systems already in place; he urges Congress to hold security agency heads to account.

When drawn on questions of technology, he doesn't dispense casual advice; instead he calls for multifaceted responses to our security dilemmas: more cryptological research, better random number generators, better testing, more robust cryptographic building blocks and more careful product design. Deep, complicated engineering stuff.

So how did the media, both mainstream and online alike, distill Snowden's sweeping analysis of politics, policy and engineering into three sterile and quasi-survivalist snippets?

Partly it's due to the good old sensationalism of all modern news media: everyone likes a David-and-Goliath angle where individuals face off against pitiless governments. And there's also the ruthless compression: newspapers cater for an audience with school-age reading levels and attention spans, and Twitter clips our contributions to 140 characters.

But there is also a deeper over-simplification of privacy going on which inhibits our progress.

Too often, people confuse privacy with secrecy. Privacy gets framed as a need to hide from prying eyes, and from that starting position many advocates descend into a combative, everyone-for-themselves mindset.

However privacy has very little to do with secrecy. We shouldn't have to go underground to enjoy that fundamental human right to be let alone. The social reality is that most of us wish to lead rich and quite public lives. We actually want others to know us – to know what we do, what we like, and what we think – but all within limits. Digital privacy (or more clinically, data protection) is not about hiding; rather it is a state where those who know us are restrained in what they do with the knowledge they have about us.

Privacy is the protection you need when your affairs are not confidential!

So encryption is a sterile and very limited privacy measure. As the SXSW panellists agreed, today's encryption tools really are the preserve of deep technical specialists. Ben Wizner quipped that if the question is how can average users protect themselves online, and the answer is Tor, then "we have failed".

And the problems with cryptography are not just usability and customer experience. A fundamental challenge with the best encryption is that everyone needs to be running the tools. You cannot send out encrypted email unilaterally – you need to first make sure all your correspondents have installed the right software and that you have exchanged trusted copies of each other's encryption keys, or they won't be able to unscramble your messages.

Chris Soghoian also nailed the business problem that current digital revenue models are largely incompatible with encryption. The wondrous free services we enjoy from the Googles and Facebooks of the world are funded in the main by mining our data streams, figuring out our interests, habits and connections, and monetising that synthesised information. The web is in fact bankrolled by surveillance – by Big Business as opposed to government.

End-to-end encryption prevents data mining and would ruin the business model of the companies we've become attached to. If we were to get serious with encryption, we may have to cough up the true price for our modern digital lifestyles.

The SXSW privacy and security panellists know all this. Snowden in particular spent much of his time carefully reiterating many of the basics of data privacy. For instance, he echoed the Collection Limitation Principle when he said of large companies that they "can't collect any data; [they] should only collect data and hold it for as long as necessary for the operation of the business". And the Openness Principle: "data should not be collected without people's knowledge and consent". If I were to summarise Snowden's SXSW presentation, I'd say privacy will only be improved by reforming the practices of both governments and big businesses, and by putting far more care into digital product development. Ed Snowden himself doesn't promote neat little technology tips.

It's still early days for the digital economy. We're experiencing an online re-run of the Wild West, with humble users understandably feeling forced to take measures into their own hands. So many individuals have become hungry for defensive online tools and tips. But privacy is more about politics and regulation than technology. I hope that people listen more closely to Ed Snowden on policy, and that his lasting legacy is more about legal reform and transparency than Do-It-Yourself encryption.

Posted in Security, Privacy, Internet

The strengths and weaknesses of Data Privacy in the Age of Big Data

This is the abstract of a current privacy conference proposal.

Synopsis

Many Big Data and online businesses proceed on a naive assumption that data in the "public domain" is up for grabs; technocrats are often surprised that conventional data protection laws can be interpreted to cover the extraction of PII from raw data. On the other hand, orthodox privacy frameworks don't cater for the way PII can be created in future from raw data collected today. This presentation will bridge the conceptual gap between data analytics and privacy, and offer new dynamic consent models to civilize the trade in PII for goods and services.

Abstract

It’s often said that technology has outpaced privacy law, yet by and large that's just not the case. Technology has certainly outpaced decency, with Big Data and biometrics in particular becoming increasingly invasive. However OECD data privacy principles set out over thirty years ago still serve us well. Outside the US, rights-based privacy law has proven effective against today's technocrats' most worrying business practices, based as they are on taking liberties with any data that comes their way. To borrow from Niels Bohr, technologists who are not surprised by data privacy have probably not understood it.

The cornerstone of data privacy in most places is the Collection Limitation principle, which holds that organizations should not collect Personally Identifiable Information beyond their express needs. It is the conceptual cousin of security's core Need-to-Know Principle, and the best starting point for Privacy-by-Design. The Collection Limitation principle is technology neutral and thus blind to the manner of collection. Whether PII is collected directly by questionnaire or indirectly via biometric facial recognition or data mining, data privacy laws apply.

It’s not for nothing we refer to "data mining". But few of those unlicensed data gold diggers seem to understand that the synthesis of fresh PII from raw data (including the identification of anonymous records like photos) is merely another form of collection. The real challenge in Big Data is that we don’t know what we’ll find as we refine the techniques. With the best will in the world, it is hard to disclose in a conventional Privacy Policy what PII might be collected through synthesis down the track. The age of Big Data demands a new privacy compact between organisations and individuals. High minded organizations will promise to keep people abreast of new collections and will offer ways to opt in, and out and in again, as the PII-for-service bargain continues to evolve.

Posted in Social Networking, Privacy, Biometrics, Big Data

How are we to defend anonymity?

If anonymity is important, what is the legal basis for defending it?
I find that conventional data privacy law in most places around the world already protects anonymity, insofar as the act of de-anonymization represents an act of PII Collection - the creation of a named record. As such, de-anonymization cannot lawfully be performed without an express need to do so, or consent.

Foreword

Cynics have been asking the same rhetorical question "is privacy dead?" for at least 40 years. Certainly information technology and ubiquitous connectivity have made it nearly impossible to hide, and so anonymity is critically ill. But privacy is not the same thing as secrecy; privacy is a state where those who know us respect the knowledge they have about us. Privacy generally doesn't require us to hide from anyone; it requires restraint on the part of those who hold Personal Information about us.

The typical public response to data breaches, government surveillance and invasions like social media facial recognition is vociferous. People in general energetically assert their rights not to be tracked online, nor to have their personal information exploited behind their backs. These reactions show that the idea of privacy is alive and well.

The end of anonymity perhaps

Against a backdrop of spying revelations and excesses by social media companies, especially in regard to facial recognition, there have been recent calls for a "new jurisprudence of anonymity"; see Yale law professor Jed Rubenfeld writing in the Washington Post of 13 Jan 2014. I wonder if there is another way to crack the nut, because any new jurisprudence is going to take a very long time.

Instead, I suggest we leverage the way most international privacy law and privacy experience -- going back decades -- is technology neutral with regard to the method of collection. In some jurisdictions, like Australia, the term "collection" is not even defined in privacy law. Instead, the law just uses the normal plain English sense of the word when it frames principles like Collection Limitation: basically, you are not allowed to collect (by any means) Personally Identifiable Information without a reasonable, express need. It means that if PII gets into a data system, the system is accountable under privacy law for that PII, no matter how it got there.

This technology neutral view of PII collection has satisfying ramifications for all the people who intuit that Big Data has got too "creepy". We can argue that if a named record is produced afresh by a Big Data process (especially if that record is produced without the named person being aware of it, and from raw data that was originally collected for some other purpose) then that record has logically been collected. Whether PII is collected directly, or collected indirectly, or is in fact created by an obscure process, privacy law is largely agnostic.

Prof Rubenfeld wrote:

  • "The NSA program isn’t really about gathering data. It's about mining data. All the data are there already, digitally stored and collected by telecom giants, just waiting." [italics in original]

I suggest that the output of the data mining, if it is personally identifiable -- and especially if it has been rendered identifiable by processing previously anonymous raw data -- is a fresh collection by the mining operation. As such, the miners should be accountable for their newly minted PII, just as though they had gathered it directly from the persons concerned.

For now, I don't want to go further and argue the rights and wrongs of surveillance. I just want to show a new way to frame the privacy questions in surveillance and big data, making use of existing jurisprudence. If I am right and the NSA is in effect collecting PII as it goes about its data mining, then that provides a possibly fresh understanding of what's going on, within which we can objectively analyse the rights and wrongs.

I am actually the first to admit that within this frame, the NSA might still be justified in mining data, and there might be no actual technical breach of information privacy law, if for instance the NSA enjoys a law enforcement exemption. These are important questions that need to be debated, but elsewhere (see my recent blog on our preparedness to actually have such a debate). My purpose right now is to frame a way to defend anonymity using as much existing legal infrastructure as possible.

But Collection is not limited everywhere

There is an important legal-technical question in all this: Is the collection of PII actually regulated? In Europe, Australia, New Zealand and dozens of other countries, collection is limited, but in the USA, there is no general restriction against collecting PII. America has no broad data protection law, and in any case, the Fair Information Practice Principles (FIPPs) don't include a Collection Limitation principle.

So there may be few regulations in the USA that would carry my argument there! Nevertheless, surely we can use international jurisprudence in Collection Limitation instead of creating new American jurisprudence around anonymity?

So I'd like to put the following questions to Jed Rubenfeld:


  • Do technology neutral Collection Limitation Principles in theory provide a way to bring de-anonymised data into scope for data privacy laws? Is this a way to address peoples' concerns with Big Data?
  • How does international jurisprudence around Collection Limitation translate to American schools of legal thought?
  • Does this way of looking at the problem create new impetus for Collection Limitation to be introduced into American privacy principles, especially the FIPPs?

Appendix: "Applying Information Privacy Norms to Re-Identification"

In 2013 I presented some of these ideas to an online symposium at the Harvard Law School Petrie-Flom Center, on the Law, Ethics & Science of Re-identification Demonstrations. What follows is an extract from that presentation, in which I spell out carefully the argument -- which was not obvious to some at the time -- that when genetics researchers combine different data sets to demonstrate re-identification of donated genomic material, they are in effect collecting patient PII. I argue that this type of collection should be subject to ethics committee approval just as if the researchers were collecting the identities from the patients directly.

... I am aware of two distinct re-identification demonstrations that have raised awareness of the issues recently. In the first, Yaniv Erlich [at MIT's Whitehead Institute] used what I understand are new statistical techniques to re-identify a number of subjects who had donated genetic material anonymously to the 1000 Genomes project. He did this by correlating genes in the published anonymous samples with genes in named samples available from genealogical databases. The 1000 Genomes consent form reassured participants that re-identification would be "very hard". In the second notable demo, Latanya Sweeney re-identified volunteers in the Personal Genome Project using her previously published method of combining a few demographic values (such as date of birth, sex and postal code) extracted from the otherwise anonymous records.

A great deal of the debate around these cases has focused on the consent forms and the research subjects’ expectations of anonymity. These are important matters for sure, yet for me the ethical issue in de-anonymisation demonstrations is more about the obligations of third parties doing the identification who had nothing to do with the original informed consent arrangements. The act of recording a person’s name against erstwhile anonymous data represents a collection of personal information. The implications for genomic data re-identification are clear.

Let’s consider Subject S who donates her DNA, ostensibly anonymously, to a Researcher R1, under some consent arrangement which concedes there is a possibility that S will be re-identified. And indeed, some time later, an independent researcher R2 does identify S and links her to the DNA sample. The fact is that R2 has collected personal information about S. If R2 has no relationship with S, then S has not consented to this new collection of her personal information.

Even if the consent form signed at the time of the original collection includes a disclaimer that absolute anonymity cannot be guaranteed, re-identifying the DNA sample later represents a new collection, one that has been undertaken without any consent. Given that S has no knowledge of R2, there can be no implied consent in her original understanding with R1, even if absolute anonymity was disclaimed.

Naturally the re-identification demonstrations have served a purpose. It is undoubtedly important that the limits of anonymity be properly understood, and the work of Yaniv and Latanya contributes to that. Nevertheless, these demonstrations were undertaken without the knowledge, much less the consent, of the individuals concerned. I contend that bioinformaticians using clever techniques to attach names to anonymous samples need ethics approval, just as they would if they were taking fresh samples from the people concerned.

See also my letter to the editor of Science magazine.

Posted in Privacy, Big Data

The Snapchat data breach

Yesterday it was reported by The Verge that anonymous hackers have accessed Snapchat's user database and posted 4.6 million user names and phone numbers. In an apparent effort to soften the blow, two digits of the phone numbers were redacted. So we might assume this is a "white hat" exercise, designed to shame Snapchat into improving their security. Indeed, a few days ago Snapchat themselves said they had been warned of vulnerabilities in their APIs that would allow a mass upload of user records.

The response of many has been, well, so what? Some people have casually likened Snapchat's list to a public White Pages; others have played it down as "just email addresses".

Let's look more closely. The leaked list was not in fact public names and phone numbers; it was user names and phone numbers. User names might often be email addresses, but these are typically aliases; people frequently choose email addresses that reveal little or nothing of their real world identity. We should assume that someone with an obscure email address intends to keep their real identity to themselves.

Identity theft has become a highly organised criminal enterprise. Crime gangs patiently acquire multiple data sets over many months, sometimes years, gradually piecing together detailed personal profiles. It's been shown time and time again by privacy researchers (perhaps most notably Latanya Sweeney) that re-identification is enabled by linking diverse data sets. And for this purpose, email addresses and phone numbers are superbly valuable indices for correlating an individual's various records. Your email address is common across most of your social media registrations. And your phone number allows your real name and street address to be looked up from reverse White Pages. So the Snapchat breach could be used to join aliases or email addresses to real names and addresses via the phone numbers. For a social engineering attack on a call centre -- or even to open a new bank account -- an identity thief can go an awful long way with real name, street address, email address and phone number.
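
To make the linkage concrete, here is a minimal Python sketch, using entirely invented data, of how a leaked list of aliases and phone numbers could be joined to a hypothetical reverse directory to put real names and street addresses against obscure user names. It illustrates the indexing role phone numbers play; it is not a description of any actual attack tool.

    # Invented data: a fragment of a Snapchat-style leak, alias -> phone number.
    leaked = [
        {"alias": "skihound77", "phone": "0414 000 111"},
        {"alias": "jb_random",  "phone": "0414 000 222"},
    ]

    # A hypothetical reverse White Pages: phone number -> name and street address.
    reverse_directory = {
        "0414 000 111": {"name": "Jane Citizen", "address": "1 Example St, Sydney"},
        "0414 000 222": {"name": "John Bloggs",  "address": "2 Sample Rd, Melbourne"},
    }

    # The join: the phone number is the index that links an obscure alias
    # to a real name and address, yielding a richer profile than either set alone.
    for record in leaked:
        match = reverse_directory.get(record["phone"])
        if match:
            print(record["alias"], "->", match["name"], "at", match["address"])

Note that redacting two digits only multiplies the candidate numbers per record by a hundred; it slows this kind of join down rather than preventing it.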

I was asked in an interview to compare the theft of phone numbers with the theft of social security numbers. I surprised the interviewer when I said phone numbers are probably even more valuable to the highly organised ID thief, for they can be used to index names in public directories, and to link different data sets, in ways that SSNs (or credit card numbers for that matter) cannot.

So let us start to treat all personal information -- especially when aggregated in bulk -- more seriously! And let's be more cautious in the way we categorise personal or Personally Identifiable Information (PII).

Importantly, most regulatory definitions of PII already embody the proper degree of caution. Look carefully at the US government definition of Personally Identifiable Information:

  • information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual (underline added).

This means that items of data can constitute PII if other data can be combined to identify the person concerned. That is, the fragments are regarded as PII even if it is the whole that does the identifying.

And remember that the middle I in PII stands for Identifiable, and not, as many people presume, Identifying. To meet the definition of PII, data need not uniquely identify a person, it merely needs to be directly or indirectly identifiable with a person. And this is how it should be when we heed the way information technologies enable identification through linkages.

Almost anywhere else in the world, data stores like Snapchat's would automatically fall under data protection and information privacy laws; regulators would take a close look at whether the company had complied with the OECD Privacy Principles, and whether Snapchat's security measures were fit for purpose given the PII concerned. But in the USA, companies and commentators alike still have trouble working out how serious these breaches are. Each new breach is treated in an ad hoc manner, often with people finessing the difference between credit card numbers -- as in the recent Target breach -- and "mere" email addresses like those in the Snapchat and Epsilon episodes.

Surely the time has come to simply give proper regulatory protection to all PII.

Posted in Social Networking, Social Media, Security, Privacy, Identity, Fraud, Big Data

Facebook's challenge to the Collection Limitation Principle

An extract from our chapter in the forthcoming Encyclopedia of Social Network Analysis and Mining (to be published by Springer in 2014).

Stephen Wilson, Lockstep Consulting, Sydney, Australia.
Anna Johnston, Salinger Privacy, Sydney, Australia.

Key Points

  • Facebook's business practices pose a risk of non-compliance with the Collection Limitation Principle (OECD Privacy Principle No. 1, and corresponding Australian National Privacy Principles NPP 1.1 through 1.4).
  • Privacy problems will likely remain while Facebook's business model remains unsettled, for the business is largely based on collecting and creating as much Personal Information as it can, for subsequent and as yet unspecified monetization.
  • If an OSN business doesn't know how it is eventually going to make money from Personal Information, then it has a fundamental difficulty with the Collection Limitation principle.

Introduction

Facebook is an Internet and societal phenomenon. Launched in 2004, in just a few years it has claimed a significant proportion of the world's population as regular users, becoming by far the most dominant Online Social Network (OSN). With its success has come a good deal of controversy, especially over privacy. Does Facebook herald a true shift in privacy values? Or, despite occasional reckless revelations, are most users no more promiscuous than they were eight years ago? We argue it's too early to draw conclusions about society as a whole from the OSN experience to date. In fact, under laws that currently stand, many OSNs face a number of compliance risks in dozens of jurisdictions.

Over 80 countries worldwide now have enacted data privacy laws, around half of which are based on privacy principles articulated by the OECD. Amongst these are the Collection Limitation Principle which requires businesses to not gather more Personal Information than they need for the tasks at hand, and the Use Limitation Principle which dictates that Personal Information collected for one purpose not be arbitrarily used for others without consent.

Overt collection, covert collection (including generation) and "innovative" secondary use of Personal Information are the lifeblood of Facebook. While Facebook's founder would have us believe that social mores have changed, a clash with orthodox data privacy laws creates challenges for the OSN business model in general.

This article examines a number of areas of privacy compliance risk for Facebook. We focus on how Facebook collects Personal Information indirectly, through the import of members' email address books for "finding friends", and by photo tagging. Taking Australia's National Privacy Principles from the Privacy Act 1988 (Cth) as our guide, we identify a number of potential breaches of privacy law, and issues that may be generalised across all OECD-based privacy environments.

Terminology

Australian law tends to use the term "Personal Information" rather than "Personally Identifiable Information" although they are essentially synonymous for our purposes.

Terms of reference: OECD Privacy Principles and Australian law

The Organisation for Economic Cooperation and Development has articulated eight privacy principles for helping to protect personal information. The OECD Privacy Principles are as follows:

  • 1. Collection Limitation Principle
  • 2. Data Quality Principle
  • 3. Purpose Specification Principle
  • 4. Use Limitation Principle
  • 5. Security Safeguards Principle
  • 6. Openness Principle
  • 7. Individual Participation Principle
  • 8. Accountability Principle

Of most interest to us here are principles one and four:

  • Collection Limitation Principle: There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.
  • Use Limitation Principle: Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with [the Purpose Specification] except with the consent of the data subject, or by the authority of law.

At least 89 countries have some sort of data protection legislation in place [Greenleaf, 2012]. Of these, more than 30 jurisdictions have derived their particular privacy regulations from the OECD principles. One example is Australia.

We will use Australia's National Privacy Principles (NPPs) in the Privacy Act 1988 as our terms of reference for analysing some of Facebook's systemic privacy issues. In Australia, Personal Information is defined as: "information or an opinion (including information or an opinion forming part of a database), whether true or not, and whether recorded in a material form or not, about an individual whose identity is apparent, or can reasonably be ascertained, from the information or opinion".

Indirect collection of contacts

One of the most significant collections of Personal Information by Facebook is surely the email address book of those members that elect to have the site help "find friends". This facility provides Facebook with a copy of all contacts from the address book of the member's nominated email account. It's the very first thing that a new user is invited to do when they register. Facebook refer to this as "contact import" in the Data Use Policy (accessed 10 August 2012).

"Find friends" is curtly described as "Search your email for friends already on Facebook". A link labelled "Learn more" in fine print leads to the following additional explanation:

  • "Facebook won't share the email addresses you import with anyone, but we will store them on your behalf and may use them later to help others search for people or to generate friend suggestions for you and others. Depending on your email provider, addresses from your contacts list and mail folders may be imported. You should only import contacts from accounts you've set up for personal use." [underline added by us].

Without any further elaboration, new users are invited to enter their email address and password if they have a cloud-based email account (such as Hotmail, Gmail, Yahoo and the like). These types of services have an API through which any third party application can programmatically access the account, after presenting the user name and password.

It is entirely possible that casual users will not fully comprehend what is happening when they opt in to have Facebook "find friends". Further, there is no indication that, by default, imported contact details are shared with everyone. The underlined text in the passage quoted above shows Facebook reserves the right to use imported contacts to make direct approaches to people who might not even be members.

Importing contacts represents an indirect collection by Facebook of Personal Information of others, without their authorisation or even knowledge. The short explanatory information quoted above is not provided to the individuals whose details are imported and therefore does not constitute a Collection Notice. Furthermore, it leaves the door open for Facebook to use imported contacts for other, unspecified purposes. The Data Use Policy imposes no limitations as to how Facebook may make use of imported contacts.

Privacy harms are possible in social networking if members blur the distinction between work and private lives. Recent research has pointed to the risky use of Facebook by young doctors, involving inappropriate discussion of patients [Moubarak et al, 2010]. Even if doctors are discreet in their online chat, we are concerned that they may fall foul of the Find Friends feature exposing their connections to named patients. Doctors on Facebook who happen to have patients in their web mail address books can have associations between individuals and their doctors become public. In mental health, sexual health, family planning, substance abuse and similar sensitive fields, naming patients could be catastrophic for them.

While most healthcare professionals may use a specific workplace email account which would not be amenable to contacts import, many allied health professionals, counselors, specialists and the like run their sole practices as small businesses, and naturally some will use low cost or free cloud-based email services. Note that the substance of a doctor's communications with their patients over web mail is not at issue here. The problem of exposing associations between patients and doctors arises simply from the presence of a name in an address book, even if the email was only ever used for non-clinical purposes such as appointments or marketing.

Photo tagging and biometric facial recognition

One of Facebook's most "innovative" forms of Personal Information Collection would have to be photo tagging and the creation of biometric facial recognition templates.

Photo tagging and "face matching" has been available in social media for some years now. On photo sharing sites such as Picasa, this technology "lets you organize your photos according to the people in them" in the words of the Picasa help pages. But in more complicated OSN settings, biometrics has enormous potential to both enhance the services on offer and to breach privacy.

In thinking about facial recognition, we start once more with the Collection Principle. Importantly, nothing in the Australian Privacy Act circumscribes the manner of collection; no matter how a data custodian comes to be in possession of Personal Information (being essentially any data about a person whose identity is apparent) they may be deemed to have collected it. When one Facebook member tags another in a photo on the site, then the result is that Facebook has overtly but indirectly collected PI about the tagged person.

Facial recognition technologies are deployed within Facebook to allow its servers to automatically make tag suggestions; in our view this process constitutes a new type of Personal Information Collection, on a potentially vast scale.

Biometric facial recognition works by processing image data to extract certain distinguishing features (like the separation of the eyes, nose, ears and so on) and computing a numerical data set known as a template that is highly specific to the face, though not necessarily unique. Facebook's online help indicates that they create templates from multiple tagged photos; if a user removes a tag from one of their photos, that image is not used in the template.

Facebook subsequently makes tag suggestions when a member views photos of their friends. They explain the process thus:

  • "We are able to suggest that your friend tag you in a picture by scanning and comparing your friend‘s pictures to information we've put together from the other photos you've been tagged in".

So we see that Facebook must be more or less continuously checking images from members' photo albums against its store of facial recognition templates. When a match is detected, a tag suggestion is generated and logged, ready to be displayed next time the member is online.
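
As a rough illustration of the matching step (my own toy sketch, not Facebook's actual algorithm), one can think of a template as a fixed-length feature vector and of tag suggestion as a nearest-match search against stored templates. The numbers and the threshold below are invented purely to show the shape of the process:

    import numpy as np

    # Toy templates: fixed-length feature vectors built from photos already
    # tagged with members' names. All numbers are invented for illustration.
    templates = {
        "Alice": np.array([0.91, 0.30, 0.27, 0.05]),
        "Bob":   np.array([0.12, 0.85, 0.49, 0.17]),
    }

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # A newly uploaded, untagged face yields its own feature vector.
    new_face = np.array([0.88, 0.33, 0.30, 0.08])

    # Matching against stored templates attaches a (tentative) name to an
    # otherwise anonymous photo -- the moment of indirect Collection.
    for name, template in templates.items():
        score = cosine_similarity(new_face, template)
        if score > 0.98:          # illustrative threshold only
            print("Tag suggestion:", name, round(score, 3))

The privacy significance lies in the last step: the anonymous image acquires a name.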

What concerns us is that the proactive creation of biometric matches constitutes a new type of PI Collection, for Facebook must be attaching names -- even tentatively, as metadata -- to photos. This is a covert and indirect process.

Photos of anonymous strangers are not Personal Information, but metadata that identifies people in those photos most certainly is. Thus facial recognition is converting hitherto anonymous data -- uploaded in the past for personal reasons unrelated to photo tagging let alone covert identification -- into Personal Information.

Facebook limits the ability to tag photos to members who are friends of the target. This is purportedly a privacy enhancing feature, but unfortunately Facebook has nothing in its Data Use Policy to limit the use of the biometric data compiled through tagging. Restricting tagging to friends is likely to actually benefit Facebook for it reduces the number of specious or mischievous tags, and it probably enhances accuracy by having faces identified only by those who know the individuals.

A fundamental clash with the Collection Limitation Principle

In Australian privacy law, as with the OECD framework, the first and foremost privacy principle concerns Collection. Australia's National Privacy Principle NPP 1 requires that an organisation refrain from collecting Personal Information unless (a) there is a clear need to collect that information; (b) the collection is done by fair means, and (c) the individual concerned is made aware of the collection and the reasons for it.

In accordance with the Collection Principle (and others besides), a conventional privacy notice and/or privacy policy must give a full account of what Personal Information an organisation collects (including that which it creates internally) and for what purposes. And herein lies a fundamental challenge for most online social networks.

The core business model of many Online Social Networks is to take advantage of Personal Information, in many and varied ways. From the outset, Facebook founder, Mark Zuckerberg, appears to have been enthusiastic for information built up in his system to be used by others. In 2004, he told a colleague "if you ever need info about anyone at Harvard, just ask" (as reported by Business Insider). Since then, Facebook has experienced a string of privacy controversies, including the "Beacon" sharing feature in 2007, which automatically imported members' activities on external websites and re-posted the information on Facebook for others to see.

Facebook's privacy missteps are characterised by the company using the data it collects in unforeseen and barely disclosed ways. Yet this is surely what Facebook's investors expect the company to be doing: innovating in the commercial exploitation of personal information. The company's huge market valuation derives from a widespread faith in the business community that Facebook will eventually generate huge revenues. An inherent clash with privacy arises from the fact that Facebook is a pure play information company: its only significant asset is the information it holds about its members. There is a market expectation that this asset will be monetized and maximised. Logically, anything that checks the network's flux in Personal Information -- such as the restraints inherent in privacy protection, whether adopted from within or imposed from without -- must affect the company's future.

Conclusion

Perhaps the toughest privacy dilemma for innovation in commercial Online Social Networking is that these businesses still don't know how they are going to make money from their Personal Information lode. Even if they wanted to, they cannot tell what use they will eventually make of it, and so a fundamental clash with the Collection Limitation Principle remains.

Acknowledgements

An earlier version of this article was originally published by LexisNexis in the Privacy Law Bulletin (2010).

References

  • Greenleaf G., "Global Data Privacy Laws: 89 Countries, and Accelerating", Privacy Laws & Business International Report, Issue 115, Special Supplement, February 2012; Queen Mary School of Law Legal Studies Research Paper No. 98/2012.
  • Moubarak G., Guiot A., et al., "Facebook activity of residents and fellows and its impact on the doctor–patient relationship", J Med Ethics, 15 December 2010.

Posted in Social Networking, Social Media, Privacy

Are we ready to properly debate surveillance and privacy?

The cover of Newsweek magazine on 27 July 1970 featured an innocent couple being menaced by cameras and microphones and new technologies like computer punch cards and paper tape. The headline hollered “IS PRIVACY DEAD?”.

The same question has been posed every few years ever since.

In 1999, Sun Microsystems boss Scott McNealy told us we have “zero privacy” and urged us to “get over it”; in 2008, Ed Giorgio from the Office of the US Director of National Intelligence chillingly asserted that “privacy and security are a zero-sum game”; Facebook’s Mark Zuckerberg proclaimed in 2010 that privacy was no longer a “social norm”. And now the scandal around secret surveillance programs like PRISM and the Five Eyes’ related activities looks like another fatal blow to privacy. But the fact that cynics, security zealots and information magnates have been asking the same rhetorical question for over 40 years suggests that the answer is No!

PRISM, as revealed by whistle blower Ed Snowden, is a Top Secret electronic surveillance program of the US National Security Agency (NSA) to monitor communications traversing most of the big Internet properties including, allegedly, Apple, Facebook, Google, Microsoft, Skype, Yahoo and YouTube. Relatedly, intelligence agencies have evidently also been obtaining comprehensive call records from major telephone companies, eavesdropping on international optic fibre cables, and breaking into the cryptography many take for granted online.

In response, forces lined up at tweet speed on both sides of the stereotypical security-privacy divide. The “hawks” say privacy is a luxury in these times of terror, if you've done nothing wrong you have nothing to fear from surveillance, and in any case, much of the citizenry evidently forgoes privacy in the way they take to social networking. On the other side, libertarians claim this indiscriminate surveillance is the stuff of the Stasi, and by destroying civil liberties, we let the terrorists win.

Governments of course are caught in the middle. President Obama defended PRISM on the basis that we cannot have 100% security and 100% privacy. Yet frankly that’s an almost trivial proposition. It's motherhood. And it doesn’t help to inform any measured response to the law enforcement challenge, for we don’t have any tools that would let us design a computer system to an agreed specification in the form of, say “98% Security + 93% Privacy”. It’s silly to use the language of “balance” when we cannot measure the competing interests objectively.

Politicians say we need a community debate over privacy and national security, and they’re right (if not fully conscientious in framing the debate themselves). Are we ready to engage with these issues in earnest? Will libertarians and hawks venture out of their respective corners in good faith, to explore this difficult space?

I suggest one of the difficulties is that all sides tend to confuse privacy with secrecy. They’re not the same thing.

Privacy is a state of affairs where those who have Personal Information (PII) about us are constrained in how they use it. In daily life, we have few absolute secrets, but plenty of personal details. Not many people wish to live their lives underground; on the contrary we actually want to be well known by others, so long as they respect what they know about us. Secrecy is a sufficient but not necessary condition for privacy. Robust privacy regulations mandate strict limits on what PII is collected, how it is used and re-used, and how it is shared.

Therefore I am a privacy optimist. Yes, obviously too much PII has burst its banks in cyberspace, yet it is not necessarily the case that any “genie” is “out of the bottle”.

If PII falls into someone’s hands, privacy and data protection legislation around the world provides strong protection against re-use. For instance, in Australia Google was found to have breached the Privacy Act when its StreetView cars recorded unencrypted Wi-Fi transmissions; the company cooperated in deleting the data concerned. In Europe, Facebook’s generation of tag suggestions without consent by biometric processes was ruled unlawful; regulators there forced Facebook to cease facial recognition and delete all old templates.

We might have a better national security debate if we more carefully distinguished privacy and secrecy.

I see no reason why Big Data should not be a legitimate tool for law enforcement. I have myself seen powerful analytical tools used soon after a terrorist attack to search out patterns in call records in the vicinity to reveal suspects. Until now, there has not been the technological capacity to use these tools pro-actively. But with sufficient smarts, raw data and computing power, it is surely a reasonable proposition that – with proper and transparent safeguards in place – population-wide communications metadata can be screened to reveal organised crimes in the making.

A more sophisticated and transparent government position might ask the public to give up a little secrecy in the interests of national security. The debate should not be polarised around the falsehood that security and privacy are at odds. Instead we should be debating and negotiating appropriate controls around selected metadata to enable effective intelligence gathering while precluding unexpected re-use. If (and only if) credible and verifiable safeguards can be maintained to contain the use and re-use of personal communications data, then so can our privacy.

For me the awful thing about PRISM is not that metadata is being mined; it’s that we weren’t told about it. Good governments should bring the citizenry into their confidence.

Are we prepared to honestly debate some awkward questions?

  • Has the world really changed in the past 10 years such that surveillance is more necessary now? Should the traditional balances of societal security and individual liberties enshrined in our traditional legal structures be reviewed for a modern world?
  • Has the Internet really changed the risk landscape, or is it just another communications mechanism? Is the Internet properly accommodated by centuries-old constitutions?
  • How can we have confidence in government authorities to contain their use of communications metadata? Is it possible for trustworthy new safeguards to be designed?

Many years ago, cryptographers adopted a policy of transparency. They have forsaken secret encryption algorithms, so that the maths behind these mission critical mechanisms is exposed to peer review and ongoing scrutiny. Secret algorithms are fragile in the long term because it’s only a matter of time before someone exposes them and weakens their effectiveness. Security professionals have a saying: “There is no security in obscurity”.

For precisely the same reason, we must not have secret government monitoring programs either. If the case is made that surveillance is a necessary evil, then it would actually be in everyone’s interests for governments to run their programs out in the open.

Posted in Trust, Security, Privacy, Internet, Big Data

Measuring anonymity

As we head towards 2014, de-identification of personal data sets is going to be a hot issue. I saw several things at last week's Constellation Connected Enterprise conference (CCE) that will make sure of this!

First, recall that in Australia a new definition of Personal Information (PI or "PII") means that anonymous data that can potentially be re-identified in future may have to be classified as PII today. I recently discussed how security and risk practitioners can deal with the uncertainty in re-identifiability.

And there's a barrage of new tracking, profiling and interior geo-location technologies (like Apple's iBeacon) which typically come with a promise of anonymity. See for example Tesco's announcement of face scanning for targeting adverts at their UK petrol stations.

The promise of anonymity is crucial, but it is increasingly hard to keep. Big Data techniques that join de-identified information to other data sets are able to find correlations and reverse the anonymisation process. The science of re-identification started with the work of Dr Latanya Sweeney, who famously identified a former governor and his medical records using zip codes and electoral roll data; more recently we've seen DNA "hackers" who can unmask anonymous DNA donors by joining genomic databases to public family tree information.

At CCE we saw many exciting Big Data developments, which I'll explore in more detail in coming weeks. Business Intelligence as-a-service is expanding rapidly, and is being flipped by innovative vendors to align (whether consciously or not) with customer-centric Vendor Relationship Management models of doing business. And there are amazing new tools for enriching unstructured data, like newly launched Paxata's Adaptive Data Preparation Platform. More to come.

With the ability to re-identify data come Big Responsibilities. I believe that to help businesses meet their privacy promises, we're going to need new tools to measure de-identification and hence gauge the risk of re-identification. It seems that some new generation data analytics products will allow us to run what-if scenarios to help understand the risks.
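
To give a flavour of what such measurement tools might do, here is a minimal sketch of one simple, long-established metric, k-anonymity: the size of the smallest group of records sharing the same quasi-identifiers. The records, the choice of quasi-identifiers and the coarsening step are all invented for illustration; real products would support much richer what-if analysis.

    from collections import Counter

    # Invented records indexed by quasi-identifiers: year of birth, sex, postcode.
    records = [
        {"yob": 1954, "sex": "M", "postcode": "2000"},
        {"yob": 1954, "sex": "M", "postcode": "2001"},
        {"yob": 1971, "sex": "F", "postcode": "2010"},
        {"yob": 1971, "sex": "F", "postcode": "2011"},
    ]
    quasi_identifiers = ("yob", "sex", "postcode")

    def k_anonymity(rows, qi):
        # k is the size of the smallest group sharing the same quasi-identifiers;
        # k = 1 means at least one person stands alone and is easier to re-identify.
        groups = Counter(tuple(row[col] for col in qi) for row in rows)
        return min(groups.values())

    print("k as released:", k_anonymity(records, quasi_identifiers))         # 1

    # A what-if scenario: coarsen postcodes to their first two digits and re-measure.
    coarsened = [dict(r, postcode=r["postcode"][:2]) for r in records]
    print("k after coarsening:", k_anonymity(coarsened, quasi_identifiers))  # 2

Low values of k flag the records most at risk, and running this sort of what-if scenario before any release is exactly the kind of measurement the new tools should automate.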

Just before CCE I also came across some excellent awareness raising materials from Voltage Security in Cupertino. Voltage CTO Terence Spies shared with me his "Deidentification Taxonomy" reproduced here with his kind permission. Voltage are leaders in Format Preserving Encryption and Tokenization -- typically used to hide credit card numbers from thieves in payment systems -- and they're showing how the tools may be used more broadly for de-identifying databases. I like the way Terence has characterised the reversibility (or not) of de-identification approaches, and further broken out various tokenization technologies.

[Figure: De-identification taxonomy – Terence Spies, Voltage Security, October 2013. Reproduced with permission.]
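
To illustrate the reversibility axis of the taxonomy (this is my own toy sketch, not Voltage's technology), compare a tokenization scheme that keeps a secured vault of token-to-value mappings with a keyed hash that preserves linkability across records but cannot be reversed without the key:

    import hashlib, hmac, secrets

    vault = {}  # token -> original value; in practice this store must be tightly secured

    def tokenize(value):
        # Reversible: swap the value for a random token and keep the mapping.
        token = secrets.token_hex(8)
        vault[token] = value
        return token

    def detokenize(token):
        return vault[token]

    SECRET_KEY = secrets.token_bytes(32)

    def pseudonymise(value):
        # Irreversible without the key: a keyed hash that stays consistent
        # across records, so the data can still be linked and analysed.
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    card = "4111 1111 1111 1111"
    token = tokenize(card)
    print(token, "->", detokenize(token))   # recoverable, via the vault
    print(pseudonymise(card))               # consistent, but not reversible

The two choices carry different risk profiles: a vault can be breached or compelled, whereas a keyed hash can only be undone by whoever holds the key.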

These are the foundations of the important new science of de-identification. Privacy engineers need to work hard at de-identification and at understanding re-identification risk, so that consumers do not lose faith in the important promises that so much of the data collected from their daily movements through cyberspace is indeed anonymous.

Posted in Security, Privacy, Constellation Research, Big Data

Is it Personal Information or not? Embrace the uncertainty.

The Australian parliament recently revised our definition of Personal Information (or, roughly equivalently, what Americans call Personally Identifiable Information, or PII). We have lowered the bar somewhat, with a new regime that will categorise even more examples of data as Personal Information (PI). And this has triggered fresh anxieties amongst security practitioners about different interpretations of the law. But I like to think the more liberal definition provides an opportunity for security professionals to actually embrace privacy practice, precisely because, more than ever, privacy management is about uncertainty and risk.

Since 1988, the Australian Privacy Act has defined Personal Information as:

  • information or an opinion ... whether true or not, and whether recorded in a material form or not, about an individual whose identity is apparent, or can reasonably be ascertained, from the information or opinion (underline added).

The Privacy Amendment (Enhancing Privacy Protection) Act 2012 says that PI is:

  • information or an opinion about an identified individual, or an individual who is reasonably identifiable: (a) whether the information or opinion is true or not; and (b) whether the information or opinion is recorded in a material form or not (underline added).

What matters for the present discussion is that the amendments remove the previous condition that identification of the individual be possible from the information itself. So under the new definition, we are required to treat data as PI if there is a reasonable likelihood that the individual may be identified from it in future, by any means!

  • [Note that much of the discussion that follows applies equally to the US concept of PII. The US General Services Administration defines Personally Identifiable Information as information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual (underline added). This means that items of data can constitute PII if other data can be combined to identify the person concerned. The fragments are regarded as PII even if it is the whole that does the identifying.
  • Crucially, the middle I in PII stands for Identifiable, and not, as many people presume, Identifying. PII need not uniquely identify a person, it merely needs to be identifiable with a person. ]

On more than one occasion in the last few weeks, I've been asked -- semi-rhetorically -- if the new Australian definition means that even an IP or MAC address nowadays could count as "Personal Information". And I've said that in short, yes it appears so!

Some security people are uncomfortable with this, but why? When it comes down to it, what is so worrying about having to take care of Personal Information? In Australia and in 100-odd other jurisdictions with OECD based data protection laws, it means that data custodians are required to handle their information assets in accordance with certain sets of Privacy Principles. This is not trivial but neither is it necessarily onerous. If the obligations in the Privacy Principles are examined in a timely manner, alongside compliance, security and information management, then they can be accommodated as just another facet of organisational hygiene.

So for instance, consider a large database of 'anonymous' records indexed by MAC address. This is just the sort of data that's being collected by retailers with in-store cell phone tracking systems, and used to study how customers move through the facility and interact with stores and merchandise. Strictly speaking, if the records are not identifiable then they are not PII and data protection laws do not apply. But the new definition of Personal Information in Australia means IT designers need to consider the prospect of the records becoming identifiable in the event that another data set comes into play. And why not? If anonymous data becomes identified then the data custodian will suddenly find themselves in scope for privacy laws, so it's prudent to plan for that scenario now. Depending on the custodian's risk appetite, any large potentially identifiable data set should be managed with regard to Privacy Principles. These would dictate that the collection of records be limited to what's required for clear business purposes; that records collected for one purpose not be casually used for other unrelated purposes; and that the organisation be open about what data it collects and why. These sorts of measures are really pretty sensible.
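As an illustration of that kind of planning, here is a rough and entirely hypothetical Python sketch of how an in-store tracking system might derive a pseudonymous visitor ID from a MAC address using a keyed hash with a rotation period, so the analytics still work while the raw identifier is never stored.

    import hashlib
    import hmac
    from datetime import date

    SECRET_KEY = b"rotate-and-manage-me-separately"   # hypothetical key, held apart from the data

    def visitor_id(mac_address: str, day: date) -> str:
        # Mixing the date into the keyed hash means IDs cannot be linked
        # across rotation periods, which limits how identifiable the data
        # set becomes if another data source ever surfaces.
        message = f"{mac_address}|{day.isoformat()}".encode()
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

    record = {
        "visitor": visitor_id("aa:bb:cc:dd:ee:ff", date.today()),
        "zone": "menswear",
        "dwell_seconds": 240,
    }

Nothing in this sketch removes the obligation to think about re-identification, but it does document an assumption (raw MAC addresses are discarded) that can be revisited if circumstances change.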

Security practitioners I've spoken with about PI and identifiability are also upset about the ambiguity in the definition of Personal Information, and that classic bit of qualified legalese: "reasonably" identifiable. They complain that the identifiability of a piece of data is relative and fluid, and they don't want to have to interpret the legal definition. But I'm struck here by an inconsistency, because security management is all about uncertainty.

Yes, identifiability changes over time and in response to organisational developments. But security professionals and ICT architects should treat the future identification of a piece of unnamed data as just another sort of threat. The probability that data becomes identifiable depends on variables (such as the emergence of other data sets, changes of circumstance, or developments in data analysis) that are a lot like the factors routinely evaluated during risk assessment.

To deal with identifiability and the classification of data as PI or not PI, you should look at the following:


  • consider the broad context of your data assets, how they are used, and how they are linked to other data sets
  • think about how your data assets might grow and evolve over time
  • look at business pressures and plans to expand the detail and value of data
  • make assumptions, and document them, as you do with any business analysis
  • and plan to review periodically.

Many organisations maintain a formal Information Assets Inventory and/or an Information Classification regime, and these happen to be ideal management mechanisms in which to classify data as PI or not PI. That decision should be made against the backdrop of the organisation's risk appetite. How conservative or adventurous are you in respect of other risks? If you happen to mis-classify Personal Information, what could be the consequences, and how would the organisation respond? Do some scenario planning, and involve legal, risk and compliance. While you're at it, take the chance to raise awareness outside IT of how information is managed.
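For what it's worth, the classification decision and its assumptions can be captured in the inventory itself. The sketch below is a hypothetical Python rendering of such an entry; the field names, values and dates are illustrative, not any standard.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class InformationAsset:
        name: str
        description: str
        classified_as_pi: bool                       # current PI / non-PI call
        assumptions: list = field(default_factory=list)
        linked_data_sets: list = field(default_factory=list)
        next_review: date = date(2014, 6, 30)         # illustrative review date

    store_tracking = InformationAsset(
        name="in-store movement analytics",
        description="Dwell times and paths, keyed by pseudonymous visitor ID",
        classified_as_pi=False,
        assumptions=[
            "Raw MAC addresses are discarded after pseudonymisation",
            "No loyalty or Wi-Fi login data is joined to these records",
        ],
    )

If one of those documented assumptions stops holding, the entry gets reclassified as PI and the Privacy Principles apply in full.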

Be prepared to review and change your classifications from non-PI to PI over time. Remember that security managers should always be prepared for change. Embrace the uncertainty in Personal Information!

Truly, privacy can be tackled by IT professionals in much the same way as security. There are no certainties in security and it's the same in privacy. We will never have perfect privacy; rather, privacy management is really about putting reasonable arrangements in place for controlling the flow of Personal Information.

So, if something that's anonymous today might be identified later, you're going to have to deal with that eventually. Why not start the planning now, treat identifiability as just another threat, and roll your privacy and security management together?

Update 30 Sept 2013

I've just come across the 2010 IAPP essay The changing meaning of 'personal data' by William B. Baker and Anthony Matyjaszewski. I think it's an excellent survey of the issues; it's very valuable for its span across dozens of different international jurisdictions. And it's accessible to non-lawyers.

The essay looks specifically at the question of whether IP addresses can be PII, and highlights a trend in the US towards conceding that IP addresses combined with other data can identify individuals, and may therefore count as PII:

Privacy regulators in the European Union regard dynamic IP addresses as personal information. Even though dynamic IP addresses change over time, and cannot be directly used to identify an individual, the EU Article 29 Working Party believes that a copyright holder using "reasonable means" can obtain a user's identity from an IP address when pursuing abusers of intellectual property rights. More recently, other European privacy regulators have voiced similar views regarding permanent IP addresses, noting that they can be used to track and, eventually, identify individuals.
This contrasts sharply to the approach taken in the United States under laws such as COPPA where, a decade ago, the FTC considered whether to classify even static IP addresses as personal information but ultimately rejected the idea out of concern that it would unnecessarily increase the scope of the law. In the past few years, however, the FTC has begun to suggest that IP addresses should be considered PII for much the same reasons as their European counterparts. Indeed, in a recent consent decree, the FTC included within the definition of "nonpublic, individually-identifiable information" an “IP address (or other "persistent identifier")." And the HIPAA Privacy Rule treats IP addresses as a form of "protected health information" by listing them as a type of data that must be removed from PHI for deidentification purposes.

Baker & Matyjaszewski stress that "foreseeability [of re-identification] may simply be a function of one's ingenuity". And indeed it is. But I would reiterate that foreseeing all sorts of potential adverse events is exactly what security professionals do, day in, day out. Nobody really knows what the chances are of a web site being hacked, or of a trusted employee going feral, but risk assessments routinely involve us gauging these eventualities, which we do by making assumptions, writing them down, drawing conclusions, and reviewing them from time to time. Privacy threats are no different, including the uncertainty about whether data may one day be rendered identifiable.

Posted in Security, Privacy

What's really happening to privacy?

The cover of Newsweek magazine on 27 July 1970 featured a cartoon couple cowed by computer and communications technology, and the urgent all-caps headline “IS PRIVACY DEAD?”

[Image: Newsweek cover, 27 July 1970: "Is Privacy Dead?"]

Four decades on, Newsweek is dead, but we’re still asking the same question.

Every generation or so, our notions of privacy are challenged by a new technology. In the 1880s (when Warren and Brandeis developed the first privacy jurisprudence) it was photography and telegraphy; in the 1970s it was computing and consumer electronics. And now it’s the Internet, a revolution that has virtually everyone connected to everyone else (and soon everything) everywhere, and all of the time. Some of the world’s biggest corporations now operate with just one asset – information – and a vigorous “publicness” movement rallies around the purported liberation of shedding what writers like Jeff Jarvis (in his 2011 book “Public Parts”) say are old-fashioned inhibitions. Online Social Networking, e-health, crowd sourcing and new digital economies appear to have shifted some of our societal fundamentals.

However the past decade has seen a dramatic expansion in the number of countries legislating data protection laws, in response to citizens’ insistence that their privacy is as precious as ever. And consumerized cryptography promises absolute secrecy. Privacy has long stood in opposition to the march of invasive technology: it is the classical immovable object met by an irresistible force.

So how robust is privacy? And will the latest technological revolution finally change privacy forever?

Soaking in information

We live in a connected world. Young people today may have grown tired of hearing what a difference the Internet has made, but a crucial question is whether relatively new networking technologies and sheer connectedness are exerting novel stresses to which social structures have yet to adapt. If “knowledge is power” then the availability of information probably makes individuals today more powerful than at any time in history. Search, maps, Wikipedia, Online Social Networks and 3G are taken for granted. Unlimited deep technical knowledge is available in chat rooms; universities are providing a full gamut of free training via Massive Open Online Courses (MOOCs). The Internet empowers many to organise in ways that are unprecedented, for political, social or business ends. Entirely new business models have emerged in the past decade, and there are indications that political models are changing too.

Most mainstream observers still tend to talk about the “digital” economy but many think the time has come to drop the qualifier. Important services and products are, of course, becoming inherently digital and whole business categories such as travel, newspapers, music, photography and video have been massively disrupted. In general, information is the lifeblood of most businesses. There are countless technology billionaires whose fortunes have been made in industries that did not exist twenty or thirty years ago. Moreover, some of these businesses only have one asset: information.

Banks and payments systems are getting in on the action, innovating at a hectic pace to keep up with developments in digital financial services. There is a bewildering array of new alternative currencies like Linden dollars, Facebook Credits and Bitcoins – all of which can be traded for “real” (reserve bank-backed) money in a number of exchanges of varying reputation. At one time it was possible for Entropia Universe gamers to withdraw dollars at ATMs against their virtual bank balances.

New ways to access finance have arisen, such as peer-to-peer lending and crowd funding. Several so-called direct banks in Australia exist without any branch infrastructure. Financial institutions worldwide are desperate to keep up, launching amongst other things virtual branches and services inside Online Social Networks (OSNs) and even virtual worlds. Banks are of course keen to not have too many sales conducted outside the traditional payments system where they make their fees. Even more strategically, banks want to control not just the money but the way the money flows, because it has dawned on them that information about how people spend might be even more valuable than what they spend.

Privacy in an open world

For many of us, on a personal level, real life is a dynamic blend of online and physical experiences. The distinction between digital relationships and flesh-and-blood ones seems increasingly arbitrary; in fact we probably need new words to describe online and offline interactions more subtly, without implying a dichotomy.

Today’s privacy challenges are about more than digital technology: they really stem from the way the world has opened up. The enthusiasm of many for such openness – especially in Online Social Networking – has been taken by some commentators as a sign of deep changes in privacy attitudes. Facebook's Mark Zuckerberg for instance said in 2010 that “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people - and that social norm is just something that has evolved over time”. And yet serious academic investigation of the Internet’s impact on society is (inevitably) still in its infancy. Social norms are constantly evolving but it’s too early to tell if they have reached a new and more permissive steady state. The views of information magnates in this regard should be discounted given their vested interest in their users' promiscuity.

At some level, privacy is about being closed. And curiously for a fundamental human right, the desire to close off parts of our lives is relatively fresh. Arguably it’s even something of a “first world problem”. Formalised privacy appears to be an urban phenomenon, unknown as such to people in villages where everyone knew everyone – and their business. It was only when large numbers of people congregated in cities that they became concerned with privacy. For then they felt the need to structure the way they related to large numbers of people – family, friends, work mates, merchants, professionals and strangers – in multi-layered relationships. So privacy was born of the first industrial revolution. It has taken prosperity and active public interest to create the elaborate mechanisms that protect our personal privacy from day to day and which we take for granted today: the postal services, direct dial telephones, telecommunications regulations, individual bedrooms in large houses, cars in which we can escape for a while, and now of course the mobile handset.

In control

Privacy is about respect and control. Simply put, if someone knows me, then they should respect what they know; they should exercise restraint in how they use that knowledge, and be guided by my wishes. Generally, privacy is not about anonymity or secrecy. Of course, if we live life underground then unqualified privacy can be achieved, yet most of us exist in diverse communities where we actually want others to know a great deal about us. We want merchants to know our shipping address and payment details, healthcare providers to know our intimate details, hotels to know our travel plans and so on. Practical privacy means that personal information is not shared arbitrarily, and that individuals retain control over the tracks of their lives.

Big Data: Big Future

Big Data tools are being applied everywhere, from sifting telephone call records to spot crimes in the planning, to DNA and medical research. Every day, retailers use sophisticated data analytics to mine customer data, ostensibly to better uncover true buyer sentiments and continuously improve their offerings. Some department stores are interested in predicting such major life changing events as moving house or falling pregnant, because then they can target whole categories of products to their loyal customers.

Real time Big Data will become embedded in our daily lives, through several synchronous developments. Firstly, computing power, storage capacity and high-speed Internet connectivity all continue to improve at exponential rates. Secondly, there are more and more “signals” for data miners to choose from. No longer do you have to consciously tell your OSN what you like or what you’re doing, because new augmented reality devices are automatically collecting audio, video and locational data, and trading it around a complex web of digital service providers. Thirdly, miniaturisation is leading to a whole range of smart appliances, smart cars and even smart clothes with built-in or ubiquitous computing.

The privacy risks are obvious, and yet the benefits are huge. So how should we think about the balance in order to optimise the outcome? Let’s remember that information powers the new digital economy, and the business models of many major new brands like Facebook, Twitter, Foursquare and Google incorporate a bargain for Personal Information. We obtain fantastic services from these businesses “for free” but in reality they are enabled by all that information we give out as we search, browse, like, friend, tag, tweet and buy.

The more innovation we see ahead, the more certain it seems that data will be the core asset of cyber enterprises. To retain and even improve our privacy in the unfolding digital world, we must be able to visualise the data flows that we’re engaged in, evaluate what we get in return for our information, and determine a reasonable trade-off between costs and benefits.

Is Privacy Dead? If the same rhetorical question needs to be asked over and over for decades, then it’s likely the answer is no.

Posted in Social Networking, Privacy, Internet, Culture, Big Data