Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

The Privacy Shield - another blunt weapon

For many years, American businesses have enjoyed a degree of special treatment under European data privacy laws. The so-called "Safe Harbor" arrangement was negotiated by the US Department of Commerce so that companies could self-declare broad compliance with EU data protection rules. Normally, organisations are not permitted to move Personally Identifiable Information (PII) about Europeans beyond the EU unless the destination has equivalent privacy measures in place. The "Safe Harbor" arrangement was a shortcut around full compliance; as such it was widely derided by privacy advocates outside the USA, and for some years it had been questioned by the more activist regulators in Europe. It seemed inevitable that the arrangement would eventually be annulled, as it was last October.

With the threat of most personal data flows from Europe into America being halted, US and EU trade officials have worked overtime for five months to strike a new deal. Today (January 29) the US Department of Commerce announced the "EU-US Privacy Shield".

The Privacy Shield is good news for commerce of course. But I hope that in the excitement, American businesses don't lose sight of the broader sweep of privacy law. Even better would be to look beyond compliance, and take the opportunity to rethink privacy, because there is more to it than security and regulatory short cuts.

The Privacy Shield and the earlier Safe Harbor arrangement really only address one corner of European data protection law, namely transborder flows. The transborder data flow rules basically say you must not move personal data from an EU state into a jurisdiction where the privacy protections are weaker than in Europe. Many countries have the same sort of laws, including Australia. Normally, as a business, you would have to demonstrate to a European data protection authority (DPA) that your information handling complies with EU laws, either by situating your data centre in a similar jurisdiction, or by implementing legally binding measures for safeguarding data to EU standards. This is why so many cloud service providers are now building fresh infrastructure in the EU.

But there is more to privacy than security and data centre location. American businesses must not think that just because there is a new get-out-of-jail clause for transborder flows, their privacy obligations are met. Much more important than raw data security are the bedrocks of privacy: Collection Limitation, Usage Limitation, and Transparency.

Basic data privacy laws the world over require organisations to exercise restraint and openness. That is, Personal Information must not be collected without a real demonstrated need (or without consent); once collected for a primary purpose, Personal Information should not be used for unrelated secondary purposes; and individuals must be given reasonable notice of what personal data is being collected about them, how it is collected, and why. It's worth repeating: general data protection is not unique to Europe; at last count, over 100 countries around the world had passed similar laws; see Prof Graham Greenleaf's Global Tables of Data Privacy Laws and Bills, January 2015.

Quite apart from Safe Harbor, American businesses have suffered some major privacy missteps. The Privacy Shield isn't going to make overall privacy better by magic.

For instance, Google in 2010 was caught over-collecting personal information through its StreetView cars. It is widely known (and perfectly acceptable) that mapping companies use the positions of unique WiFi routers in their geolocation databases, and Google collects WiFi IDs and coordinates via its StreetView cars. The privacy problem was that some of the StreetView cars were also collecting unencrypted WiFi traffic (for "research purposes") whenever they came across it. In over a dozen countries around the world, Google admitted it had breached local privacy laws by collecting excessive PII, apologised for the overreach, explained it as inadvertent, and deleted all the WiFi records in question. The matter was settled in just a few months in places like Korea, Japan and Australia. But in the US, where there is no general collection limitation privacy rule, Google has been defending what it did. Absent general data privacy protection, the strongest legislation that seems to apply to the StreetView case is wiretap law, but its application to the Internet is complex. And so the legal action has dragged on for years, and it's still not resolved.

I don't know why Google doesn't see that a privacy breach in the rest of the world is a privacy breach in the US, and, instead of fighting it, concede that the collection of WiFi traffic was unnecessary and wrong.

Other proof that European privacy law is deeper and broader than the Privacy Shield is found in social networking mishaps. Over the years, many of Facebook's business practices, for instance, have been found unlawful in the EU. Recently there was the final ruling against "Find Friends", which uploads the contact details of third parties without their consent. Before that there was the long-running dispute over biometric photo tagging. When Facebook generates tag suggestions, what it is doing is running facial recognition algorithms over the photos in its vast store of albums, without the consent of the people in those photos. Identifying otherwise anonymous people without consent (and without restraint as to what might be done next with that new PII) appears to be unlawful under the Collection Limitation and Usage Limitation principles.

In 2012, Facebook was required to shut down their photo tagging in Europe. They have been trying to re-introduce it ever since. Whether they are successful or not will have nothing to do with the "Privacy Shield".

The Privacy Shield comes into a troubled trans-Atlantic privacy environment. Whether or not the new EU-US arrangement fares better than the Safe Harbor remains to be seen. But in any case, since the Privacy Shield really aims to free up business access to data, sadly it's unlikely to do much good for true privacy.

The examples cited here are special cases of the collision of Big Data with data privacy, which is one of my special interest areas at Constellation Research. See for example "Big Privacy" Rises to the Challenges of Big Data.

Posted in Social Networking, Social Media, Privacy, Biometrics, Big Data

Finding friends gets a little harder

The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.

My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see "Privacy Compliance Problems for Facebook", IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.

Here’s a recap of our analysis.

One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.

New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:

    • Import contacts from your account and store them on Facebook's servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.

This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.

A key point under international privacy regulations is that this importing of contacts represents an indirect collection of the PII of others (people who happen to be in a member’s email address book), without their knowledge, let alone their authorisation.

By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.

It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details from address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people whose details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.

But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.

Posted in Social Networking, Privacy

Breaking down digital identity

Identity online is a vexed problem. The majority of Internet fraud today can be traced to weaknesses in the way we authenticate people electronically. Internet identity is terribly awkward too. Unfortunately, we still use password techniques dating back to 1960s mainframes: technology designed for technicians, by technicians.

Our identity management problems also stem from over-reach. For one thing, the information era heralded new ways to reach and connect with people with almost no friction. We may have taken too literally the old saw “information wants to be free.” Further, traditional ways of telling who people are, through documents and “old boys’ networks”, create barriers, which are anathema to new-school Internet thinkers.

For the past 10-to-15 years, a heady mix of ambitions has informed identity management theory and practice: improve usability, improve security and improve “trust.” Without ever pausing to unravel the rainbow, the identity and access management industry has created grandiose visions of global “trust frameworks” to underpin a utopia of seamless stranger-to-stranger business and life online.

Well-resourced industry consortia and private-public partnerships have come and gone over the past decade or more. Numerous “trust” start-up businesses have launched and failed. Countless new identity gadgets, cryptographic algorithms and payment schemes have been tried.

And yet the identity problem is still with us. Why is identity online so strangely resistant to these well-meaning efforts to fix it? In particular, why is federated identity so dramatically easier said than done?

Identification is a part of risk management. In business, service providers use identity to manage the risk that they might be dealing with the wrong person. Different transactions carry different risks, and identification standards are varied accordingly. Conversely, if a provider cannot be sure enough who someone is, they now have the tools to withhold or limit their services. For example, when an Internet customer signs in from an unusual location, payment processors can put a cap on the dollar amounts they will authorize.
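
To make this concrete, here is a minimal sketch of risk-based authorization in Python. The field names, confidence scores and dollar thresholds are all invented for illustration; real processors tune such rules against observed fraud rates.

    from dataclasses import dataclass

    @dataclass
    class SignIn:
        user_id: str
        usual_country: str          # where this customer normally signs in from
        country: str                # where this sign-in is coming from
        passed_second_factor: bool

    def authorization_cap(signin: SignIn) -> float:
        """Scale the dollar cap to how sure we are it's the right person."""
        confidence = 1.0
        if signin.country != signin.usual_country:
            confidence -= 0.5       # unusual location weakens identification
        if not signin.passed_second_factor:
            confidence -= 0.3       # single-factor sign-in weakens it further
        if confidence >= 0.9:
            return 10_000.0         # high confidence: normal limit
        if confidence >= 0.5:
            return 1_000.0          # medium confidence: capped
        return 0.0                  # too uncertain: withhold the service

    print(authorization_cap(SignIn("alice", "AU", "AU", True)))   # 10000.0
    print(authorization_cap(SignIn("alice", "AU", "RO", False)))  # 0.0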

Across our social and business walks of life, we have distinct ways of knowing people, yielding a rich array of identities by which we know and show who we are to others. These identities have evolved over time to suit different purposes. Different relationships rest on different particulars, and so identities naturally become specific, not general.

The human experience of identity is one of ambiguity and contradictions. Each of us simultaneously holds a weird and wonderful ensemble of personal, family, professional and social identities. Each is different, sometimes radically so. Some of us lead quite secret lives, and I’m not thinking of anything salacious, but maybe just the role-playing games that provide important escapes from the humdrum.

Most of us know how it feels when identities collide. There’s no better example than what I call the High School Reunion Effect: that strange dislocation you feel when you see acquaintances for the first time in decades. You’ve all moved on, you’ve adopted new personae in new contexts – not the least of which is the one defined by a spouse and your own new family. Yet you find yourself re-winding past identities, relating to your past contemporaries as you all once were, because it was those school relationships, now fossilised, that defined you.

Frankly, we’ve made a mess of the pivotal analogue-to-digital conversion of identity. In real life we know identity is malleable and relative, yet online we’ve rendered it crystalline and fragile.

We’ve come close to the necessary conceptual clarity. Some 10 years ago a network of “identerati” led by Kim Cameron of Microsoft composed the “Laws of Identity,” which contained a powerful formulation of the problem to be addressed. The Laws defined Digital Identity as “a set of claims made [about] a digital subject.”
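
Taken literally, that formulation is easy to model. Here is an illustrative sketch (my own toy types in Python, not Cameron's formal schema) of identities as sets of claims, with one person holding two disjoint, context-specific identities:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Claim:
        issuer: str      # who makes the claim
        subject: str     # the digital subject the claim is about
        attribute: str
        value: str

    # One person, two context-specific identities: different issuers,
    # different particulars, and no need for either context to know
    # about the other.
    bank_identity = {
        Claim("ExampleBank", "subj-123", "account_holder", "true"),
        Claim("ExampleBank", "subj-123", "card_number", "5123..."),
    }
    work_identity = {
        Claim("EmployerCorp", "subj-456", "role", "engineer"),
        Claim("EmployerCorp", "subj-456", "staff_id", "E-0042"),
    }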

Your Digital Identity is a proxy for a relationship, pointing to a suite of particulars that matter about you in a certain context. When you apply for a bank account, when you subsequently log on to Internet banking, when you log on to your work extranet, or to Amazon or PayPal or Twitter, or if you want to access your electronic health record, the relevant personal details are different each time.

The flip side of identity management is privacy. If authentication concerns what a Relying Party needs to know about you, then privacy is all about what they don’t need to know. Privacy amounts to information minimization; security professionals know this all too well as the “Need to Know” principle.

All attempts at grand global identities to date have failed. The Big Certification Authorities of the 1990s reckoned a single, all-purpose digital certificate would meet the needs of all business, but they were wrong. Ever more sophisticated efforts since then have also failed, such as the Information Card Foundation, the Liberty Alliance and the Australian banking sector’s Trust Centre.

Significantly, federation for non-trivial identities only works within regulatory monocultures – for example the US Federal Bridge CA, or the Scandinavian BankID network – where special legislation authorises banks and governments to identify customers by the one credential. The current National Strategy for Trusted Identities in Cyberspace has pondered legislation to manage liability but has balked. The regulatory elephant remains in the room.

As an aside, obviously social identities like Facebook and Twitter handles federate very nicely, but these are issued by organisations that don't really know who we are, and they're used by web sites that don't really care who we are; social identity federation is a poor model for serious identity management.

A promising identity development today is the Open Identity Exchange’s Attribute Exchange Network, a new architecture seeking to organise how identity claims may be traded. The Attribute Exchange Network resonates with a growing realization that, in the words of Andrew Nash, a past identity lead at Google and at PayPal, “attributes are at least as interesting as identities – if not more so.”

If we drop down a level and deal with concrete attribute data instead of abstract identities, we will start to make progress on the practical challenges in authentication: better resistance to fraud and account takeover, easier account origination and better privacy.

My vision is that by 2019 we will have a fresh marketplace of Attribute Providers. The notion of “Identity Provider” should die off, for identity is always in the eye of the Relying Party. What we need online is an array of respected authorities and agents that can vouch for our particulars. Banks can provide reliable electronic proof of our payment card numbers; government agencies can attest to our age and biographical details; and a range of private businesses can stand behind attributes like customer IDs, membership numbers and our retail reputations.
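
To illustrate the shape of that marketplace, here is a hedged sketch of a single attribute attestation. The provider name, the attribute and the use of a shared-secret MAC are all assumptions made for the sake of a short runnable example; a real scheme would use public key signatures, so the Relying Party never holds the issuer's key.

    import hashlib
    import hmac

    # Illustrative only: in practice this key stays with the attribute
    # provider, and verification uses the provider's public key instead.
    ISSUER_KEY = b"demo-key-held-by-attribute-provider"

    def attest(subject: str, attribute: str, value: str) -> tuple[str, str]:
        """The attribute provider vouches for one particular, nothing more."""
        message = f"{subject}|{attribute}|{value}"
        tag = hmac.new(ISSUER_KEY, message.encode(), hashlib.sha256).hexdigest()
        return message, tag

    def verify(message: str, tag: str) -> bool:
        """The Relying Party checks the attestation; it never sees a birthdate."""
        expected = hmac.new(ISSUER_KEY, message.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(tag, expected)

    message, tag = attest("subj-123", "over_18", "true")
    print(verify(message, tag))  # True: the single claim stands on its own

The design point is minimal disclosure: the Relying Party learns that a respected authority stands behind the one attribute it needs, and nothing else about the person.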

In five years’ time I expect we will adopt a much more precise language to describe how to deal with people online, and it will reflect more faithfully how we’ve transacted throughout history. As the old Italian proverb goes: it is nice to “trust” but it’s better not to.

This article first appeared as "Abandoning identity in favor of attributes" in Secure ID News, 2 December, 2014.

Posted in Trust, Social Networking, Security, Identity, Federated Identity

Identity Management Moves from Who to What

The State Of Identity Management in 2015

Constellation Research recently launched the "State of Enterprise Technology" series of research reports. These assess the current enterprise innovations which Constellation considers most crucial to digital transformation, and provide snapshots of the future usage and evolution of these technologies.

My second contribution to the state-of-the-state series is "Identity Management Moves from Who to What". Here's an excerpt from the report:

Introduction

In spite of all the fuss, personal identity is not usually important in routine business. Most transactions are authorized according to someone’s credentials, membership, role or other properties, rather than their personal details. Organizations actually deal with many people in a largely impersonal way. People don’t often care who someone really is before conducting business with them. So in digital Identity Management (IdM), one should care less about who a party is than what they are, with respect to attributes that matter in the context we’re in. This shift in focus is coming to dominate the identity landscape, for it simplifies a traditionally multi-disciplined problem set. Historically, the identity management community has made too much of identity!

Six Digital Identity Trends for 2015

[Image: State of identity management summary]

1. Mobile becomes the center of gravity for identity. The mobile device brings convergence for a decade of progress in IdM. For two-factor authentication, the cell phone is its own second factor, protected against unauthorized use by PIN or biometric. Hardly anyone ever goes anywhere without their mobile - service providers can increasingly count on that without disenfranchising many customers. Best of all, the mobile device itself joins authentication to the app, intimately and seamlessly, in the transaction context of the moment. And today’s phones have powerful embedded cryptographic processors and key stores for accurate mutual authentication, and mobile digital wallets, as Apple’s Tim Cook highlighted at the recent White House Cyber Security Summit.

2. Hardware is the key – and holds the keys – to identity. Despite the lure of the cloud, hardware has re-emerged as pivotal in IdM. All really serious security and authentication takes place in secure dedicated hardware, such as SIM cards, ATMs, EMV cards, and the Trusted Execution Environments of new mobile devices. Today’s leading authentication initiatives, like the FIDO Alliance, are intimately connected to the standard cryptographic modules now embedded in most mobile devices. Hardware-based identity management has arrived just in the nick of time, on the eve of the Internet of Things.

3. The “Attributes Push” will shift how we think about identity. In the words of Andrew Nash, CEO of Confyrm Inc. (and previously the identity leader at PayPal and Google), “Attributes are at least as interesting as identities, if not more so.” Attributes are to identity as genes are to organisms – they are really what matters about you when you’re trying to access a service. By fractionating identity into attributes and focusing on what we really need to reveal about users, we can enhance privacy while automating more and more of our everyday transactions.

The Attributes Push may recast social logon. Until now, Facebook and Google have been widely tipped to become “Identity Providers”, but even these giants have found federated identity easier said than done. A dark horse in the identity stakes – LinkedIn – may take the lead with its superior holdings in verified business attributes.

4. The identity agenda is narrowing. For 20 years, brands and organizations have obsessed about who someone is online. And even before we’ve solved the basics, we over-reached. We've seen entrepreneurs trying to monetize identity, and identity engineers trying to convince conservative institutions like banks that “Identity Provider” is a compelling new role in the digital ecosystem. Now at last, the IdM industry agenda is narrowing toward more achievable and more important goals - precise authentication instead of general identification.

[Image: Digital identity stack]

5. A digital identity stack is emerging. The FIDO Alliance and others face a challenge in shifting and improving the words people use in this space. Words, of course, matter, as do visualizations. IdM has suffered for too long under loose and misleading metaphors. One of the most powerful abstractions in IT was the OSI networking stack. A comparable sort of stack may be emerging in IdM.

6. Continuity will shape the identity experience. Continuity will make or break the user experience as the lines blur between real world and virtual, and between the Internet of Computers and the Internet of Things. But at the same time, we need to preserve clear boundaries between our digital personae, or else privacy catastrophes await. “Continuous” (also referred to as “Ambient”) Authentication is a hot new research area, striving to provide more useful and flexible signals about the instantaneous state of a user at any time. There is an explosion in devices now that can be tapped for Continuous Authentication signals, and by the same token, rich new apps in health, lifestyle and social domains, running on those very devices, that need seamless identity management.

A snapshot of my report "Identity Moves from Who to What" is available for download at Constellation Research. It expands on the points above, and sets out recommendations for enterprises to adopt the latest identity management thinking.

Posted in Social Networking, Security, Privacy, Identity, FIDO Alliance, Federated Identity, Constellation Research, Biometrics, Big Data, Trust

The State of the State of Privacy

Constellation Research analysts are wrapping up a very busy 2014 with a series of "State of the State" reports. For my part I've looked at the state of privacy, which I feel is entering its adolescent stage.

Here's a summary.

1. Consumers have not given up privacy - they've been tricked out of it.
The impression is easily formed that people just don’t care about privacy anymore, but in fact people are increasingly frustrated with privacy invasions. They’re tired of social networks mining users’ personal lives; they are dismayed that video game developers can raid a phone’s contact lists with impunity; they are shocked by the deviousness of Target analyzing women’s shopping histories to detect pregnant customers; and they are revolted by the way magnates help themselves to operational data like Uber’s passenger movements for fun or allegedly for harassment – just because they can.

2. Private sector surveillance is overshadowed by government intrusion, but is arguably just as bad.
Edward Snowden’s revelations of a massive military-industrial surveillance effort were of course shocking, but they should not steal all the privacy limelight. In parallel with and well ahead of government spy programs, the big OSNs and search engine companies have been gathering breathtaking amounts of data, all in the interests of targeted advertising. These data stores have come to the attention of the FBI and CIA who must be delighted that someone else has done so much of their spying for them. These businesses boast that they know us better than we know ourselves. That’s chilling. We need to break through into a post-Snowden world.

3. The U.S. is the Canary Islands of privacy.
The United States remains the only major economy without broad-based information privacy laws. There are almost no restraints on what American businesses may do with personal information they collect from their customers, or synthesize from their operations. In the rest of the world, most organizations must restrict their collection of data, limit the repurposing of data, and disclose their data handling practices in full. Individuals may want to move closer to European-style privacy protection, while many corporations prefer the freedom they have in America to hang on to any data they like while they figure out how to make money out of it. Digital companies like to call this “innovation” and grandiose claims are made about its criticality for the American economy, but many consumers would prefer the sort of innovation that respects their privacy while delivering value-for-data.

4. Privacy is more about politics than technology.
Privacy can be seen as a power play between individual rights and the interests of governments and businesses. Most of us actually want businesses to know quite a lot about us, but we expect them to respect what they know and to be restrained in how they use it. Privacy is less about what organizations do with information than what they choose not to do with it. Hence, privacy cannot be a technology issue. It is not about keeping things secret but rather, keeping them close. Privacy is actually the protection we need when things are not secret.

5. Land grab for “public” data accelerates.

[Image: Image analysis as a cracking tower]
Data is an immensely valuable raw material. We should re-frame unstructured data as “information ore”: more than data mining, Big Data is really about data refining. But unlike the stuff of traditional extractive industries, data seems inexhaustible, and the cost of extraction is near zero. A huge amount of Big Data activity is propelled by the misconception that data in the public domain is free for all. The reality is that many data protection laws govern the collection and use of personal data regardless of where it comes from. That is, personal data in the “public domain” is in fact encumbered. This is counter-intuitive to many, yet plenty of public resources are regulated, including minerals, electromagnetic spectrum and intellectual property.

6. Data literacy will be key to digital safety.
Computer literacy is one thing, but data literacy is different and less tangible. We have strong privacy intuitions that have evolved over centuries but in cyberspace we lose our bearings. We don’t have the familiar social cues when we go online, so now we need to develop new ones. And we need to build up a common understanding of how data flows in the digital economy. Today we train kids in financial literacy to engender a first-hand sense of how commerce works; data literacy may become even more important as a life skill. It's more than being able to work an operating system, a device and umpteen apps. It means having meaningful mental models of what goes on in computers. Without understanding this, we can’t construct effective privacy policies or privacy labeling.

7. Privacy will get worse before it gets better.
Privacy is messy, even where data protection rules are well entrenched. Consider the controversial Right To Be Forgotten in Europe, which requires search engine operators to provide a mechanism for individuals to request removal of old, inaccurate and harmful reports from results. The new rule has been derived from existing privacy principles, which treat the results of search algorithms as a form of synthesis rather than a purely objective account of history, and therefore hold the search companies partly responsible for the offense their processes might produce. Yet, there are plenty of unintended consequences, and collisions with other jurisprudence. The sometimes urgent development of new protections for old civil rights is never plain sailing.

My report "Privacy Enters Adolescence" can be downloaded here. It expands on the points above, and sets out recommendations for improving awareness of how Personal Data flows in the digital economy, negotiating better deals in the data-for-value bargain, the conduct of Privacy Impact Assessments, and developing a "Privacy Bill of Rights".

Posted in Social Networking, Social Media, Internet, Constellation Research, Big Data

Too smart?

An Engadget report today, "Hangouts eavesdrops on your chats to offer 'smart suggestions'" describes a new "spy/valet" feature being added to Google's popular video chat tool.

  • "Google's Hangouts is gaining a handy, but slightly creepy new feature today. The popular chat app will now act as a digital spy-slash-valet by eavesdropping on your conversations to offer 'smart suggestions.' For instance, if a pal asks 'where are you?' it'll immediately prompt you to share your location, then open a map so you can pin it precisely."

It's sad that this sort of thing still gets meekly labeled as "creepy". The privacy implications are serious and pretty easy to see.

Google is evidently processing the text of Hangouts messages as they fly through its system: extracting linguistic cues, interpreting what's being said with Artificial Intelligence, deriving new meaning and insights, and offering suggestions.
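
A toy sketch in Python shows how little machinery the basic idea needs. The patterns and actions here are invented, and the real feature would use trained language models rather than keyword rules, but the privacy point is the same either way: the service is reading the conversation.

    import re

    # Toy stand-in for the NLP behind "smart suggestions":
    # pattern -> suggested action.
    SUGGESTION_RULES = {
        r"\bwhere are you\b": "offer to share current location",
        r"\b(lunch|dinner|coffee)\b": "suggest nearby venues",
        r"\bwhat time\b": "offer calendar availability",
    }

    def smart_suggestions(message: str) -> list[str]:
        """Scan one chat message and return any matching suggestions."""
        text = message.lower()
        return [action for pattern, action in SUGGESTION_RULES.items()
                if re.search(pattern, text)]

    print(smart_suggestions("Hey, where are you? Want to grab coffee?"))
    # ['offer to share current location', 'suggest nearby venues']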

We need some clarification about whether any covert tests of this technology have been undertaken during the R&D phase. A company obviously doesn't launch a new product like this without a lot of research, feasibility testing, prototyping and testing. Serious work on 'smart suggestions' would not start without first testing how it works in real life. So I wonder if any of this evaluation was done covertly on live data? Are Google researchers routinely eavesdropping on hangouts to develop the 'smart suggestions' technology?

If so, is such data usage covered by their Privacy Policy (you know, under the usual "in order to improve our service" justification)? And is usage sanctioned internationally in the stricter privacy regimes?

Many people have said to me I'm jumping the gun, and that Google would probably test the new Hangouts feature on its own employees. Perhaps, but given that scanning gmails is situation normal for Google, and they have a "privacy" culture that joins up all their business units so that data may be re-purposed almost without limit, I feel sure that running AI algorithms on text without telling people would be par for the course.

In development and in operation, we need to know what steps are taken to protect the privacy of hangout data. What personally identifiable data and metadata is retained for other purposes? Who inside Google is granted access to the data, and especially to the synthesised insights? How long does any secondary usage persist? Are particularly sensitive matters (like health data, financial details, corporate intellectual property and so on) filtered out?

This is well beyond "creepy". Hangouts and similar video chat tools are certainly wonderful technologies. We're using them routinely for teaching, video conferencing, collaboration and consultation. The tools may become entrenched in corporate meetings, telecommuting, healthcare and the professions. But if I am talking with my doctor, or discussing patents with my legal team, or having a clandestine chat with a lover, I clearly do not want any unsolicited contributions from the service provider. More fundamentally, I want assurance that no machine is ever tapping into these sorts of communications, running AI algorithms, and creating new insights. If I'm wrong about covert testing on live data, then Google could do what Apple did and publish an Open Letter clarifying its data usage practices and strategies.

Come to think of it, if Google is running natural language processing algorithms over the Hangouts stream, might they be augmenting their gmail scanning the same way? Their business model is to extract insights about users from any data they can get their hands on. Until now it's been a crude business of picking out keywords and using them to profile users' interests and feed targeted advertising. But what if they could get deeper information about us through AI? Is there any sign from their historical business practices that Google would not do this? And what if they can extract sensitive information like mental health indications? Even with good intent and transparency, predicting health from social media is highly problematic, as shown by the "Samaritans Radar" experience.

Artificial Intelligence is one of the new frontiers. Hot on the heels of the successes of IBM Watson, we're seeing Natural Language Processing and analytics rapidly penetrate business and now consumer applications. Commentators are alternately telling us that AI will end humanity, and not to worry about it. For now, I call on people to simply think clearly through the implications, such as for privacy. If AI programs are clever enough to draw deep insights about us from what we say, then the "datapreneurs" in charge of those algorithms need to remember they are just as accountable for privacy as if they had asked us to reveal all by filling out a questionnaire.

Posted in Social Networking, Social Media, Privacy, Internet, Big Data

The Prince of Data Mining

Facial recognition is digital alchemy. It's the prince of data mining.

Facial recognition takes previously anonymous images and conjures people's identities. It's an invaluable capability. Once they can pick out faces in crowds, trawling surreptitiously through anyone and everyone's photos, the social network businesses can work out what we're doing, when and where we're doing it, and who we're doing it with. The companies figure out what we like to do without us having to 'like' or favorite anything.

So Google, Facebook, Apple et al have invested hundreds of megabucks in face recognition R&D and in buying technology start-ups. And they have spent billions of dollars buying images, and especially faces, going back to Google's acquisition of Picasa in 2004 and, most recently, Facebook's ill-fated $3 billion offer for Snapchat.

But if most people find face recognition rather too creepy, then there is cause for optimism. The technocrats have gone too far. What many of them still don't get is this: if you take anonymous data (in the form of photos) and attach names to that data (which is what Facebook photo tagging does: it guesses who the people in photos are, attaches putative names to records, and invites users to confirm them) then you Collect Personal Information. Around the world, existing pre-biometrics-era black letter Privacy Law says you can't Collect PII, even indirectly like that, without an express reason and without consent.

When automatic facial recognition converts anonymous data into PII, it crosses a bright line in the law.
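
In data-flow terms the step across that line is tiny, which is why it is so easy to miss. Here is a schematic sketch; the embedding function and the names are invented placeholders standing in for a real face-recognition model.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Photo:
        pixels: bytes                      # anonymous data: no privacy law engaged
        tagged_name: Optional[str] = None  # once set, this record is PII

    def face_embedding(pixels: bytes) -> int:
        """Toy placeholder for a real face-recognition model."""
        return sum(pixels) % 1000

    KNOWN_FACES = {face_embedding(b"alice-face"): "Alice Example"}

    def auto_tag(photo: Photo) -> Photo:
        """Attaching a name is a Collection of Personal Information,
        engaging consent and purpose rules wherever they apply."""
        name = KNOWN_FACES.get(face_embedding(photo.pixels))
        if name is not None:
            photo.tagged_name = name  # anonymous data becomes PII right here
        return photo

    print(auto_tag(Photo(b"alice-face")).tagged_name)  # Alice Example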

Posted in Social Networking, Privacy, Biometrics, Big Data

Crowd sourcing private sector surveillance

A repeated refrain of cynics and “infomopolists” alike is that privacy is dead. People are supposed to know that anything on the Internet is up for grabs. In some circles this thinking turns into digital apartheid; some say if you’re so precious about your privacy, just stay offline.

But socialising and privacy are hardly mutually exclusive; we don’t walk around in public with our names tattooed on our foreheads. Why can’t we participate in online social networks in a measured, controlled way without submitting to the operators’ rampant X-ray vision? There is nothing inevitable about trading off privacy for conviviality.

The privacy dangers in Facebook and the like run much deeper than the self-harm done by some people’s overly enthusiastic sharing. Promiscuity is actually not the worst problem; nor is the notorious difficulty of navigating complex and ever-changing privacy settings.

The advent of facial recognition presents far more serious and subtle privacy challenges.

Facebook has invested heavily in face recognition technology, and not just for fun. Facebook uses it in effect to crowd-source the identification and surveillance of its members. With facial recognition, Facebook is building up detailed pictures of what people do, when, where and with whom.

You can be tagged without consent in a photo taken and uploaded by a total stranger.

The majority of photos uploaded to personal albums over the years were not intended for anything other than private viewing.

Under the privacy law of Australia and data protection regulations in dozens of other jurisdictions, what matters is whether data is personally identifiable. The Commonwealth Privacy Act 1988 (as amended in 2014) defines “Personal Information” as: “information or an opinion about an identified individual, or an individual who is reasonably identifiable”.

Whenever Facebook attaches a member’s name to a photo, they are converting hitherto anonymous data into Personal Information, and in so doing, they become subject to privacy law. Automated facial recognition represents an indirect collection of Personal Information. Yet too many people still underestimate the privacy implications; some technologists naively claim that faces are “public” and that people can have no expectation of privacy in their facial images, ignoring that information privacy, as explained, turns on the identifiability and identification of data; the words “public” and “private” don’t even figure in the Privacy Act!

If a government was stealing into our photo albums, labeling people and profiling them, there would be riots. It's high time that private sector surveillance - for profit - is seen for what it is, and stopped.

Posted in Social Networking, Social Media, Privacy, Biometrics

Four Corners' 'Privacy Lost': A demonstration of the Collection Principle

Tonight, Australian Broadcasting Corporation’s Four Corners program aired a terrific special, "Privacy Lost" written and produced by Martin Smith from the US public broadcaster PBS’s Frontline program.

Here we have a compelling demonstration of the importance and primacy of Collection Limitation for protecting our privacy.

UPDATE: The program we saw in Australia turns out to be a condensed version of PBS's two part The United States of Secrets from May 2014.

About the program

Martin Smith summarises brilliantly what we know about the NSA’s secret surveillance programs, thanks to the revelations of Ed Snowden, the Guardian’s Glenn Greenwald and the Washington Post’s Barton Gellman; he adds interviews with Julia Angwin (author of “Dragnet Nation”), Chris Hoofnagle (UC Berkeley), Steven Levy (Wired), Christopher Soghoian (ACLU) and Tim Wu (“The Master Switch”), to name a few. Even if you’re thoroughly familiar with the Snowden story, I highly recommend “Privacy Lost” or the original "United States of Secrets" (which, unlike the Four Corners edition, can be streamed online).

The program is a ripping re-telling of Snowden’s expose, against the backdrop of George W. Bush’s PATRIOT Act and the mounting suspicions through the noughties of NSA over-reach. There are freshly told accounts of the intrigues: of secret optic fibre splitters installed very early on in AT&T’s facilities, of scandals over National Security Letters, and of the very rare case of the web hosting company Calyx, which challenged their constitutionality (and yet today, with the letter withdrawn, remains unable to tell us what the FBI was seeking). The real theme of Smith’s take on surveillance then emerges when he looks at the rise of data-driven businesses -- first with search, then advertising, and most recently social networking -- and the “data wars” between Google, Facebook and Microsoft.

In my view, the interplay between government surveillance and digital businesses is the most important part of the Snowden epic, and it receives the proper emphasis here. The depth and breadth of surveillance conducted by the private sector, and the insights it reveals about what people might be up to, create irresistible opportunities for the intelligence agencies. Hoofnagle tells us how the FBI loves Facebook. And we see the discovery of how the NSA exploits the tracking done by the ad companies, most notably Google’s “PREF” cookie.

One of the peak moments in “Privacy Lost” comes when Gellman and his specialist colleague Ashkan Soltani present their evidence about the PREF cookie to Google, offering the company an opportunity to comment before the story broke in the Washington Post. The article ran on December 13, 2013; we're told it was only then that the true depth of the privacy problem was revealed.

My point of view

Smith takes as a given that excessive intrusion into private affairs is wrong, without getting into the technical aspects of privacy (such as frameworks for data protection, and various Privacy Principles). Neither does he unpack the actual privacy harms. And that’s fine -- a TV program is not the right place to canvass such technical arguments.

When Gellman and Soltani reveal that the NSA is using Google’s tracking cookie, the government gets joined irrefutably to the private sector in a mass surveillance apparatus. And yet I am not sure the harm is dramatically worse when the government knows what Facebook and Google already know.

Privacy harms are tricky to work out. Yet obviously no harm can come from abusing Personal Information if that information is not collected in the first place! I take away from “Privacy Lost” a clear impression of the risks created by the data wars. We are imperiled by the voracious appetite of digital businesses that hang on indefinitely to masses of data about us, while they figure out ever cleverer ways to make money out of it. This is why Collection Limitation is the first and foremost privacy protection. If a business or government doesn't have a sound and transparent reason for having Personal Information about us, then they should not have it. It’s as simple as that.

Martin Smith has highlighted the symbiosis between government and private sector surveillance. The data wars not only made dozens of billionaires but they did much of the heavy lifting for the NSA. And this situation is about to get radically more fraught. On the brink of the Internet of Things, we need to question if we want to keep drowning in data.

Posted in Social Networking, Social Media, Security, Privacy, Internet

Schrodinger's Privacy: A Master Class

Master Class: How to Protect Your Customer's Digital Identity and Personal Data

A Social Media Week Sydney event #SMWSydney
Law Lounge, Sydney University Law School
New Law School Building
Eastern Ave, Camperdown
Fri, Sep 26 - 10:00 AM - 11:30 AM

How can you navigate privacy fact and fiction, without the geeks and lawyers boring each other to death?

It's often said that technology has outpaced privacy law. Many digital businesses seem empowered by this brash belief. And so they proceed with apparent impunity to collect and monetise as much Personal Information as they can get their hands on.

But it's a myth!

Some of the biggest corporations in the world, including Google and Facebook, have been forcefully brought to book by privacy regulations. So, we have to ask ourselves:

  • what does privacy law really mean for social media in Australia?
  • is privacy "good for business"?
  • is privacy "not a technology issue"?
  • how can digital businesses navigate fact & fiction, without their geeks and lawyers boring each other to death?

In this Social Media Week Master Class I will:

  • unpack what's "creepy" about certain online practices
  • show how to rate data privacy issues objectively
  • analyse classic misadventures with geolocation, facial recognition, and predicting when shoppers are pregnant
  • critique photo tagging and crowd-sourced surveillance
  • explain why Snapchat is worth more than three billion dollars
  • analyse the regulatory implications of Big Data, Biometrics, Wearables and The Internet of Things.

We couldn't have timed this Master Class better, coming two weeks after the announcement of the Apple Watch, which will figure prominently in the class!

So please come along for a fun and in-depth look at social media, digital technology, the law, and decency.

Register here.

About the presenter

Steve Wilson is a technologist who stumbled into privacy 12 years ago. He rejected the well-meaning slogans (like "Privacy Is Good For Business!") and instead dug into the relationships between information technology and information privacy. Now he researches and develops design patterns to help sort out privacy alongside all the other competing requirements of security, cost, usability and revenue. His latest publications include:

  • "The collision between Big Data and privacy law" due out in October in the Australian Journal of Telecommunications and the Digital Economy.

Posted in Social Networking, Social Media, Privacy, Internet, Biometrics, Big Data