The cover of Newsweek magazine on 27 July 1970 featured a cartoon couple cowering before computer and communications technology, and the urgent all-caps headline “IS PRIVACY DEAD?”
Four decades on, Newsweek is dead, but we’re still asking the same question.
Every generation or so, our notions of privacy are challenged by a new technology. In the 1880s (when Warren and Brandeis developed the first privacy jurisprudence) it was photography and telegraphy; in the 1970s it was computing and consumer electronics. And now it’s the Internet, a revolution that has virtually everyone connected to everyone else (and soon everything) everywhere, and all of the time. Some of the world’s biggest corporations now operate with just one asset – information – and a vigorous “publicness” movement rallies around the purported liberation of shedding what writers like Jeff Jarvis (in his 2011 book “Public Parts”) call old-fashioned inhibitions. Online Social Networking, e-health, crowdsourcing and new digital economies appear to have shifted some of our societal fundamentals.
However, the past decade has seen a dramatic expansion in the number of countries legislating data protection laws, in response to citizens’ insistence that their privacy is as precious as ever. And consumerized cryptography promises absolute secrecy. Privacy has long stood in opposition to the march of invasive technology: it is the classical immovable object met by an irresistible force.
So how robust is privacy? And will the latest technological revolution finally change privacy forever?
Soaking in information
We live in a connected world. Young people today may have grown tired of hearing what a difference the Internet has made, but a crucial question is whether relatively new networking technologies and sheer connectedness are exerting novel stresses to which social structures have yet to adapt. If “knowledge is power” then the availability of information probably makes individuals today more powerful than at any time in history. Search, maps, Wikipedia, Online Social Networks and 3G are taken for granted. Unlimited deep technical knowledge is available in chat rooms; universities are providing a full gamut of free training via Massive Open Online Courses (MOOCs). The Internet empowers many to organise in ways that are unprecedented, for political, social or business ends. Entirely new business models have emerged in the past decade, and there are indications that political models are changing too.
Most mainstream observers still tend to talk about the “digital” economy but many think the time has come to drop the qualifier. Important services and products are, of course, becoming inherently digital, and whole business categories such as travel, newspapers, music, photography and video have been massively disrupted. In general, information is the lifeblood of most businesses. There are countless technology billionaires whose fortunes have been made in industries that did not exist twenty or thirty years ago. Moreover, some of these businesses only have one asset: information.
Banks and payments systems are getting in on the action, innovating at a hectic pace to keep up with financial services development. There is a bewildering array of new alternative currencies like Linden dollars, Facebook Credits and Bitcoins – all of which can be traded for “real” (reserve bank-backed) money in a number of exchanges of varying reputation. At one time it was possible for Entropia Universe gamers to withdraw dollars at ATMs against their virtual bank balances.
New ways to access finance have arisen, such as peer-to-peer lending and crowd funding. Several so-called direct banks in Australia exist without any branch infrastructure. Financial institutions worldwide are desperate to keep up, launching amongst other things virtual branches and services inside Online Social Networks (OSNs) and even virtual worlds. Banks are of course keen to not have too many sales conducted outside the traditional payments system where they make their fees. Even more strategically, banks want to control not just the money but the way the money flows, because it has dawned on them that information about how people spend might be even more valuable than what they spend.
Privacy in an open world
For many of us, on a personal level, real life is a dynamic blend of online and physical experiences. The distinction between digital relationships and flesh-and-blood ones seems increasingly arbitrary; in fact we probably need new words to describe online and offline interactions more subtly, without implying a dichotomy.
Today’s privacy challenges are about more than digital technology: they really stem from the way the world has opened up. The enthusiasm of many for such openness – especially in Online Social Networking – has been taken by some commentators as a sign of deep changes in privacy attitudes. Facebook's Mark Zuckerberg, for instance, said in 2010 that “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people - and that social norm is just something that has evolved over time”. And yet serious academic investigation of the Internet’s impact on society is (inevitably) still in its infancy. Social norms are constantly evolving but it’s too early to tell if they have reached a new and more permissive steady state. The views of information magnates in this regard should be discounted given their vested interest in their users' promiscuity.
At some level, privacy is about being closed. And curiously for a fundamental human right, the desire to close off parts of our lives is relatively fresh. Arguably it’s even something of a “first world problem”. Formalised privacy appears to be an urban phenomenon, unknown as such to people in villages where everyone knew everyone – and their business. It was only when large numbers of people congregated in cities that they became concerned with privacy. For then they felt the need to structure the way they related to large numbers of people – family, friends, work mates, merchants, professionals and strangers – in multi-layered relationships. So privacy was born of the first industrial revolution. It has taken prosperity and active public interest to create the elaborate mechanisms that protect our personal privacy from day to day and which we take for granted today: the postal services, direct dial telephones, telecommunications regulations, individual bedrooms in large houses, cars in which we can escape for a while, and now of course the mobile handset.
Privacy is about respect and control. Simply put, if someone knows me, then they should respect what they know; they should exercise restraint in how they use that knowledge, and be guided by my wishes. Generally, privacy is not about anonymity or secrecy. Of course, if we live life underground then unqualified privacy can be achieved, yet most of us exist in diverse communities where we actually want others to know a great deal about us. We want merchants to know our shipping address and payment details, healthcare providers to know our intimate details, hotels to know our travel plans and so on. Practical privacy means that personal information is not shared arbitrarily, and that individuals retain control over the tracks of their lives.
Big Data: Big Future
Big Data tools are being applied everywhere, from sifting telephone call records to spot crimes in the planning, to DNA and medical research. Every day, retailers use sophisticated data analytics to mine customer data, ostensibly to better uncover true buyer sentiments and continuously improve their offerings. Some department stores are interested in predicting such major life changing events as moving house or falling pregnant, because then they can target whole categories of products to their loyal customers.
Real-time Big Data will become embedded in our daily lives, through several concurrent developments. Firstly, computing power, storage capacity and high speed Internet connectivity all continue to improve at exponential rates. Secondly, there are more and more “signals” for data miners to choose from. No longer do you have to consciously tell your OSN what you like or what you’re doing, because new augmented reality devices are automatically collecting audio, video and locational data, and trading it around a complex web of digital service providers. And thirdly, miniaturisation is leading to a whole range of smart appliances, smart cars and even smart clothes with built-in, ubiquitous computing.
The privacy risks are obvious, and yet the benefits are huge. So how should we think about the balance in order to optimise the outcome? Let’s remember that information powers the new digital economy, and the business models of many major new brands like Facebook, Twitter, Foursquare and Google incorporate a bargain for Personal Information. We obtain fantastic services from these businesses “for free” but in reality they are enabled by all that information we give out as we search, browse, like, friend, tag, tweet and buy.
The more innovation we see ahead, the more certain it seems that data will be the core asset of cyber enterprises. To retain and even improve our privacy in the unfolding digital world, we must be able to visualise the data flows that we’re engaged in, evaluate what we get in return for our information, and strike a reasonable trade-off between costs and benefits.
Is Privacy Dead? If the same rhetorical question needs to be asked over and over for decades, then it’s likely the answer is no.
The mea culpa is a classic, straight out of the Zuckerberg copybook. They say they were misunderstood. They say they don't want to sell photos to ad men. They say members will always own their photos. But ownership is a red herring and the whole exercise is likely a stalking horse, designed to distract people from more significant issues around metadata and Facebook's ever deepening ability to infer PII.
Firstly, let's be clear that greater sharing follows the acquisition as night follows day. I noted at the time that the only way to understand Facebook's billion dollar spend on Instagram is around the value to be mined from the mother lode of photo data. In particular, image analysis and facial recognition grant Instagram and Facebook x-ray vision into their members' daily lives. They can work out what people are doing, with whom they're doing it, when and where. With these tools, they're moving quickly from collecting Personally Identifiable Information (PII) when it is volunteered by users, to PII that is observed and inferred. The quality and quantity of the PII flux are driven up dramatically. No longer is the lifeblood of Facebook -- the insights they have on 15% of the world's population -- filtered by what users elect to post and Like and tag, but now that information is raw, unexpurgated and automated.
Now ask where the money in photo data is to be made. It's not in selling candid snapshots of folks enjoying branded products. It's in the intelligence that image data yield about how people lead their lives. This intelligence is Facebook's one and only asset.
So it is metadata that we need to worry about. In its initial update to the Terms, Instagram said this: "[You] agree that a business or other entity may pay us to display your username, likeness, photos (along with any associated metadata), and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you." In over 6,000 words, "metadata" is mentioned just twice, parenthetically, and without any definition. Metadata is figuring more and more in the privacy discourse, and that's great, but we need to look beyond the usual stuff like geolocation and camera type embedded in the JPEGs. Much more important now is the latent identifiable personal content in images. Image analysis and image search provide endless new possibilities for infomopolies to extract value from photos.
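To make the geolocation point concrete: EXIF GPS tags store coordinates as degree/minute/second rationals plus a hemisphere reference, which converts trivially to a precise decimal location. A minimal sketch of that conversion (the example coordinates are illustrative, not taken from any real photo):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to a signed decimal.

    ref is the hemisphere reference tag: "N", "S", "E" or "W".
    """
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern latitudes and western longitudes are negative by convention.
    if ref in ("S", "W"):
        decimal = -decimal
    return decimal

# A camera that geotags its photos embeds values like these in every shot.
latitude = dms_to_decimal(33, 52, 4.8, "S")   # roughly Sydney's latitude
print(latitude)
```

One function call and an uploaded snapshot becomes a pin on a map; that is the sort of quiet conversion buried in "any associated metadata".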
A great deal of this week's outcry has focused on things like the lack of compensation, and all of Instagram's apology today is around the ownership of photos. But ownership is moot if they reserve their right to use and disclose metadata in any way they like. What actually matters is the individual's ability to understand and control what is done with any PII about them, including metadata. When the German privacy regulator acted against Facebook's facial recognition practices earlier this year, the principle they applied from OECD style legislation is that there are limits to what can be collected about individuals without their consent. The regulator ruled it unlawful for Facebook to extract biometric information from images when their users innocently think they're only tagging people in photos.
So when I read Instagram's excuse, I don't see any truly meaningful self-restraint in the way they can exploit image data. Their switch is not even a tactical retreat, for as yet, they're not giving anything up.
At the recent Gartner Identity & Access Summit, analyst Earl Perkins spoke of the potential for Facebook to be used as an enterprise Identity Provider (IdP). I'd like to see these sorts of speculations dampened a little by filtering them through the understanding that identity is a proxy for relationship.
Here's the practical difficulty that shows why we must reframe what we're talking about. If Facebook were to be an Identity Issuer, they would have to be clear about what enterprises really need to know about their staff, customers, partners and so on. There is no standardised answer to that; every business gets to know its people in their own peculiar ways. Does Facebook with its x-ray vision into our personal lives have anything to offer enterprises? If we work out which assertions might be vouched for by Facebook, how would they be underwritten exactly?
And I really mean exactly, because liability is what kills off most identity federations. The idea of re-using identity across contexts is easier said than done. Banks have tried and tried again to federate identities amongst themselves. The Australian experience (of Trust Centre and MAMBO) was that banks find it too complex to re-use each others' issued IDs because of the legal complexity, even when they're all operating under the same laws and regulations! So how on earth will businesses make the jump to using Facebook as an IdP when they have yet to figure out banks as IdPs?
The old saw "Don't Mix Business And Pleasure" turns out to predict the cyber world challenges of bringing social identities and business identities together. I have concluded that identity is metaphorical. Each identity is really a proxy for a relationship, and most of our intuitions about identity need to be reframed in terms of relationships. We're not talking simply about names! The types of relationship we entertain socially (and are free to curate for ourselves) may be fundamentally irreconcilable with the identities provided to us by businesses as a way to manage their risks, as is their prerogative.
The first and foremost privacy principle in any data protection regime is Collection Limitation. A classic instance is Australia's National Privacy Principle NPP 1, which requires that an organisation refrain from collecting Personal Information unless (a) there is a clear need to collect that information; (b) the collection is done by fair means, and (c) the individual concerned is made aware of the collection and the reasons for it.
And herein lies a fundamental challenge for most online social networks: if they were honest about the Collection Principle, they would have to say "We collect information about you to make money".
The core business model of many Online Social Networks is to exploit Personal Information, in many and varied ways. There's a bargain for Personal Information inherent in commercial social media. Some say the bargain is obvious to today's savvy netizens; it's said that everybody knows there is no such thing as a free lunch. But I am not so sure. I doubt that the average Facebook user really grasps what's going on. The bargain for their information is opaque and unfair.
From the outset, Facebook founder Mark Zuckerberg was tellingly enthusiastic for information built up in his system to be used by others. In 2004, he told a colleague "if you ever need info about anyone at Harvard, just ask".
Facebook has experienced a more or less continuous string of privacy controversies, including the "Beacon" sharing feature in 2007, which automatically imported members' activities on external websites and re-posted the information on Facebook for others to see. Facebook's privacy missteps almost always relate to the company using the data it collects in unforeseen and barely disclosed ways. Yet this is surely what Facebook's investors expect the company to be doing: innovating in the commercial exploitation of personal information. An inherent clash with privacy arises from the fact that Facebook is a pure play information company: its only significant asset is the information it holds about its members. The market expects this asset to be monetised and maximised. Logically, anything that checks the network's flux in Personal Information -- such as the restraints inherent in privacy protection, whether adopted from within or imposed from without -- must affect the company's future.
Facebook's business model is enhanced by promiscuity amongst its members, so there is an apparent conflict of interest in the firm's privacy posture. The more information its members are willing to divulge, the greater is Facebook's value. Zuckerberg is far from a passive bystander in this; he has long tried to train his members to abandon privacy norms, in order to generate ever more information flux upon which the site depends. He is brazenly quick to judge what he sees as broader societal shifts. Interviewed at the 2010 TechCrunch conference, he said:
[In] the last five or six years, blogging has taken off in a huge way and all these different services that have people sharing all this information. People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time. We view it as our role in the system to constantly be innovating and be updating what our system is to reflect what the current social norms are.
It is rather too early to draw this sort of sweeping generalisation from the behaviours of a specially self-selected cohort of socially hyperactive users. Without underestimating the empirical importance of Facebook to hundreds of millions of people, surely one of the over-riding characteristics of OSN as a pastime is simply that it is fun. There is a sort of suspension of disbelief at work when people act in this digital world, divorced from normal social cues which may lead them to lower their guard. Facebook users are not fully briefed on the consequences of their actions, and so their behaviour to some extent is being directed by the site designers; it has not evolved naturally as Zuckerberg would have us believe.
Yet promiscuity is not in fact the source of the most valuable social data. Facebook has a particularly sorry history of hiding its most effective collection methods from view. Facial recognition is perhaps the best example. While it has offered photo tagging for years, it was only in early 2012 that Facebook started to talk plainly about how it constructs biometric templates from tags, and how it runs those templates over stored photo data to come up with tag suggestions. Meanwhile, the application of facial recognition is quietly expanding beyond what they reveal, with the likes of Facedeals for example starting to leverage Facebook's templates, in ways that are not disclosed in any Privacy Policies anywhere.
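It's worth spelling out the mechanics. A biometric "template" is just a fixed-length feature vector computed from a face image; generating tag suggestions means comparing the vector from a new photo against every stored template. The vectors and threshold below are invented for illustration (real systems derive much longer vectors from a trained recognition model), but the comparison step is the essence of how innocent tagging becomes identification:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical templates: in a real system these are computed from a
# member's tagged photos by a face-recognition model, not hand-written.
stored_template = [0.2, 0.8, 0.1, 0.5]        # built up from past tags
face_in_new_photo = [0.21, 0.79, 0.12, 0.48]  # extracted from a new upload

if cosine_similarity(stored_template, face_in_new_photo) > 0.95:
    print("tag suggestion: probable match")
```

The user sees only a friendly "Is this Alice?" prompt; the biometric matching that produced it runs silently over the whole photo store.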
Privacy is largely about transparency. Businesses owe it to their members and customers to honestly disclose what data is collected and why. While social networks continue to obfuscate the true exchange of Personal Information for commercial value, we cannot take seriously their claims to respect our privacy.
It's an urgent, impatient sort of line in the sand, drawn by the new masters of the digital universe as a challenge to everyone else. C'mon, get with the program! Innovate! Don't be so precious - so very 20th century! Don't you dig that Information Wants To Be Free? Clearly, old-fashioned privacy is holding us back!
The stark choice posited between privacy and digital liberation is rarely examined with much diligence; often it's actually a fatalistic response to the latest breach or the latest eye popping digital development. In fact, those who earnestly assert that privacy is dead are almost always trying to sell us something, be it a political ideology, or a social networking prospectus, or sneakers targeted at an ultra-connected, geolocated, behaviorally qualified nano market segment.
Is it really too late for privacy? Is the genie out of the bottle? Even if we accepted the ridiculous premise that privacy is at odds with progress, no, it's not too late: firstly because the pessimism (or commercial opportunism) generally mistakes secrecy for privacy, and secondly because, frankly, we ain't seen nothin' yet!
Technology certainly has laid us bare. Behavioural modeling, facial recognition, Big Data mining, natural language processing and so on have given corporations x-ray vision into our digital lives. While exhibitionism has been cultivated and normalised by the infomopolists, even the most guarded social network users may be defiled by Big Data wizards who without consent upload their contact lists, pore over their photo albums, and mine their shopping histories, as is their wanton business model.
So yes, a great deal about us has leaked out into what some see as an extended public domain. And yet we can be public and retain our privacy at the same time.
Some people seem defeated by privacy's definitional difficulties, yet information privacy is simply framed, and corresponding data protection laws readily understood. Information privacy is basically a state where those who know us are restrained in what they can do with the knowledge they have about us. Privacy is about respect, and protecting individuals against exploitation. It is not about secrecy or even anonymity. There are few cases where ordinary people really want to be anonymous. We actually want businesses to know -- within limits -- who we are, where we are, what we've done, what we like, but we want them to respect what they know, to not share it with others, and to not take advantage of it in unexpected ways. Privacy means that organisations behave as though it's a privilege to know us.
Many have come to see privacy as literally a battleground. The grassroots Cryptoparty movement has come together around a belief that privacy means hiding from the establishment. Cryptoparties teach participants how to use Tor and PGP, and spread a message of resistance. They take inspiration from the Arab Spring where encryption has of course been vital for the security of protestors and organisers. The one Cryptoparty I've attended so far in Sydney opened with tributes from Anonymous, and a number of recorded talks by activists who ranged across a spectrum of social and technosocial issues like censorship, copyright, national security and Occupy. I appreciate where they're coming from, for the establishment has always overplayed its security hand. Even traditionally moderate Western countries have governments charging like china shop bulls into web filtering and ISP data retention, all in the name of a poorly characterised terrorist threat. When governments show little sympathy for netizenship, and absolutely no understanding of how the web works, it's unsurprising that sections of society take up digital arms in response.
So ironically, when registering for a cryptoparty, you could not use encryption! For privacy, you have to either trust Eventbrite to have a reasonable policy and to stick to it, or rely on government regulations, if applicable. When registering, you give a little Personal Information to the organisers, and you expect that they will be restrained in what they do with it.
Going out in public never was a license for others to invade our privacy. We ought not to respond to online privacy invasions as if cyberspace is a new Wild West. We have always relied on regulatory systems of consumer protection to curb the excesses of business and government, and we should insist on the same in the digital age. We should not have to hide away if privacy is agreed to mean respecting the PII of customers, users and citizens, and restraining what data custodians do with that precious resource.
I ask anyone who thinks it's too late to reassert our privacy to think for a minute about where we're heading. We're still in the early days of the social web, and the information "innovators" have really only just begun. Look at what they've done so far:
- Big Data. The most notorious recent example of the power of data mining comes from Target's covert research into identifying customers who are pregnant based on their buying habits. Big Data practitioners are so enamoured with their ability to extract secrets from "public" data they seem blithely unaware that by generating fresh PII from their raw materials they are in fact collecting it as far as Information Privacy Law is concerned. As such, they’re legally liable for the privacy compliance of their cleverly synthesised data, just as if they had expressly gathered it all by questionnaire.
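A toy sketch shows why inference counts as collection: the model's output is a brand-new piece of PII that never appeared in the raw data. The product list and weights here are entirely invented for illustration; real retail analytics fit such weights statistically over millions of shopping baskets:

```python
# Invented predictor products and weights, for illustration only.
PREDICTOR_WEIGHTS = {
    "unscented lotion": 0.3,
    "prenatal vitamins": 0.9,
    "large tote bag": 0.1,
}

def pregnancy_score(purchases):
    """Score a purchase history against the predictor products.

    The returned score is synthesised PII: it describes the shopper,
    yet she never disclosed it to anyone.
    """
    score = sum(PREDICTOR_WEIGHTS.get(item, 0.0) for item in purchases)
    return min(score, 1.0)  # cap at certainty

print(pregnancy_score(["prenatal vitamins", "unscented lotion"]))  # prints 1.0
```

Nothing in the input was sensitive on its own; the sensitive fact materialises only in the output, which is exactly why privacy law treats the synthesised result as collected Personal Information.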
As an aside, I'm not one of those who fret that technology has outstripped privacy law. Principles-based Information Privacy law copes well with most of this technology. OECD privacy principles (enacted in over seventy countries) and the US FIPPs require that companies be transparent about what PII they collect and why, that they limit the ways in which PII is used for unrelated purposes, and that they control how it may be disclosed. These principles are decades old and yet they were recently re-affirmed by German regulators over Facebook's surreptitious use of facial recognition. I expect that Siri will attract like scrutiny as it rolls out in continental Europe.
So what's next?
- Google Glass may, in the privacy stakes, surpass both Siri and facial recognition of static photos. If actions speak louder than words, imagine the value to Google of digitising and knowing exactly what we do in real time.
- Facial recognition as a Service and the sale of biometric templates may be tempting for the photo sharing sites. If and when biometric authentication spreads into retail payments and mobile device security, these systems will face the challenge of enrollment. It might be attractive to share face templates previously collected by Facebook and voice prints by Apple.
So, is it really too late for privacy? The infomopolists and national security zealots may hope so, but surely even cynics will see there is a great deal at stake, and that it might be just a little too soon to rush to judge something as important as this.
We think we're talking about a thing when we refer to identity provisioning, or "Bring Your Own Identity", or the choice of identity that's axiomatic in NSTIC. The Laws of Identity encouraged us to think in terms of identity as a commodity, but at the same time the Laws cannily defined Digital Identity as a "set of claims".
So identity is not a thing.
Rather, identity is a state of affairs: Identity is How I Am Known. [Update February 2013. I am embarrassed to admit I have only just discovered the work of Goffman and the dramaturgical analysis of identity. Goffman found that identity is an emergent property of social interaction, that it comes dynamically from the roles we play, and that it is formed by the way we believe others see us. That is, personal identity is partly impressed upon us. This is the sort of view I have arrived at with Digital Identity. Read on ...]
Digital identity is really just the conspicuous surface of a relationship we have with the Identity Provider (IdP). That relationship grows over time, starting from the evidence of identity (like the legislated "100 point" check in Australian banking) gathered at registration time, after which the IdP issues our identifier. But the identifier is really just a proxy for the relationship we have with a service provider, a relationship which can be deep and unfolding, and usually more complex than any identifier on its own would suggest. The original evidence of identity is just a boundary condition; it might be common across several relationships for a time, but it's really not what the ongoing relationship is all about.
So what can it mean to try and exercise a choice of identity? In business it's the Relying Party that bears most of the risk if an identity is wrong, and so it is that the Relying Party is very often the IdP, for then they can best manage their risk. And here the choice of business identity is moot. If you don't have an identity that meets the RP's needs, then they have the prerogative to turn you away. Think about a store that doesn't accept Diners Club; do you have any prospect of negotiating with them to pay by Diners if that's your choice of card? Can it make any difference to the store owner that you might have extra credentials to present in real time?
However, in social dealings, identity is different. Here we do narrate our own life stories, we curate our own identities.
What's going on here? How do we reconcile these contradictions across our plurality of identities? It might help to describe two different orders of Digital Identity:
- Expressed Identities that we control for ourselves and exercise in social circles, and
- Impressed Identities that are bestowed upon us by employers, businesses and government. We have little or no control over how the Impressed identities are created, save for the ultimate power to simply decline a job, a bank account or a passport if we don't like the conditions that go with them.
And every now and then, Expressed and Impressed identities come into conflict, never more viscerally than in what I call the High School Reunion Effect. Most of us have probably experienced the psychic dislocation of meeting old school friends for the first time in decades at a reunion. You've changed; they've changed; our current lives and contexts are unknown and unknowable to our old peers. Instead the group context is frozen in time, and we all struggle to relate to one another according to old identities, while editing ourselves to reflect the new individuals that we have become in new contexts. But here's the thing: our old identities actually return, to varying degrees, impressed by how the group as a whole used to be. So identity is plastic.
High school reunions showcase the dynamic mixture of Impressed and Expressed identities. The way we choose to express ourselves is molded to a point to fit an inter-personal context impressed upon us by a community.
Another example - of greater practical importance - of the tension between impressed and expressed identity is the "Real Name" policies of Google and Facebook. Here we saw a mighty clash of the rights of people to define how they are known in distinct spheres, and the interests of network operators to "know" their users for commercial purposes. Perhaps that type of conflict would be better understood if we saw how different orders of identity have different degrees of freedom? Identity is literally relative.
And then there is the Bring Your Own Identity movement, another battle ground where competing intuitions about identity are playing out. Here the claimed right to use whatever identification method one likes butts up against the enterprise's need to set its own standards for authentication technology and identification risk management. Some BYOI advocates say this is not just about user convenience; businesses may save serious money through BYOI because it will save them from issuing their own IDs, just as BYOD is thought to reduce device support costs. But in most cases, the cost to the business of mapping and interfacing all the expressed identities that users might elect to bring simply exceeds the cost of the organisation impressing IDs for itself.
Digital Identity is a heady intersection of social, technological, business and political frames of reference. Our intuitions - not surprisingly really - can fail us in cyberspace. I reckon progress in NSTIC and similar initiatives will depend on us appreciating that identity online isn't always what it seems.
Most people think that Apple's Siri is the coolest thing they've ever seen on a smart phone. It certainly is a milestone in practical human-machine interfaces, and will be widely copied. The combination of deep search plus natural language processing (NLP) plus voice recognition is dynamite.
And Siri also marks a new milestone in privacy invasion. I predict Siri will become the poster girl for PII piracy, the exemplar of the sly bargain for Personal Information at the heart of most social media.
If you haven't had the pleasure ... Siri is a wondrous new function built into the latest iPhone. It’s the state-of-the-art in artificial intelligence and NLP. You speak directly to Siri, ask her questions (yes, she's female) and tell her what to do with many of your other apps. Siri integrates with mail, text messaging, maps, search, weather, calendar and so on. Ask her "Will I need an umbrella in the morning?" and she'll look up the weather for you – after checking your calendar to see what city you’ll be in tomorrow. It's amazing.
Natural Language Processing is a fabulous idea of course. It radically improves the usability of smart phones, and even their safety with much improved hands-free operation.
An important technical detail is that NLP is very demanding on computing power. In fact it's beyond the capability of today's smart phones, even though each of them alone is more powerful than all of NASA's computers in 1969. So all Siri's hard work is actually done on Apple's servers scattered around the planet. That is, all your interactions with Siri are sent into the cloud.
Imagine Siri was a human personal assistant. Imagine she's looking after your diary, placing calls for you, booking meetings, planning your travel, taking dictation, sending emails and text messages for you, reminding you of your appointments, even your significant other’s birthday. She's getting to know you all the while, learning your habits, your preferences, your personal and work-a-day networks.
And she's free!
Now, wouldn't the offer of a free human PA strike you as too good to be true?
When you dictate your emails and text messages to Siri, you're providing Apple with content that's usually off limits to carriers, phone companies and ISPs. Siri is an end run around telecommunications intercept laws.
Of course there are many, many examples where free social media apps mask a commercial bargain. Face recognition is the classic case. It was first made available on photo sharing sites as a neat way to organise one's albums, but then Facebook went further, inviting photo tags from users and then automatically identifying people in other photos on others' pages. What's happening behind the scenes is that Facebook is running its face recognition templates over the billions of photos in its databases (which were originally uploaded for personal use long before face recognition was deployed). Given their business model and their track record, we can be certain that Facebook is using face recognition to identify everyone they possibly can, and thence work out fresh associations between countless people and situations accidentally caught on camera.

Combine this with image processing and visual search technology (like Google's "Goggles") and the big social media companies have an incredible new eye in the sky. They can work out what we're doing, when, where and with whom. Nobody will need to expressly "like" anything anymore when Facebook can see what cars we're driving, what brands we're wearing, where we spend our vacations, what we're eating, what makes us laugh. Apple, Facebook and others have understandably invested hundreds of millions of dollars in image recognition start-ups and intellectual property; with these tools they convert the hitherto anonymous image collections in Picasa, Flickr and the like into content-addressable PII gold mines. It's the next frontier of Big Data.
Now, there wouldn't be much wrong with these sorts of arrangements if the social media corporations were up-front about them. In their Privacy Policies they should detail what Personal Information they are extracting and collecting from all the voice and image data; they should explain why they collect this information, what they plan to do with it, how long they will retain it, and how they promise to limit secondary usage. They should explain that biometrics technology allows them to generate brand new PII out of members' snapshots and utterances. And they should acknowledge that by rendering data identifiable, they become accountable in many places under privacy and data protection laws for its safekeeping as PII. It's just not good enough to vaguely reserve their rights to "use personal information to help us develop, deliver, and improve our products, services, content, and advertising". They should treat their customers -- and all those innocents about whom they collect PII indirectly -- with proper respect, and stop pretending that 'service improvement' is what they're up to.
Siri and face recognition herald a radical new type of privatised surveillance, on a breathtaking scale. While Facebook stealthily x-rays photo albums without consent, Apple now has even more intimate access to our daily routines and personal habits. And they don't even pay a penny for our thoughts.
As cool as Siri may be, I myself will decline to use any natural language processing while the software runs in the cloud, and while the service providers refuse to restrain their use of my voice data. I'll wait for NLP to be done on my device with my data kept private.
And I'd happily pay cold hard cash for that kind of app, instead of having an infomopoly embed itself in my personal affairs.
After the scandal broke of how the iPhone app "Path" was accessing users' address books and transmitting them back to base, many in the developer community said they thought this was pretty common. The good folks over at Veracode decided to check, so they built another app that simply scans all code on your device for signs that the address book is being accessed. Believe it or not, the Apple operating system has a standard call, available to every app, called "ABAddressBookCopyArrayOfAllPeople".
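The core of such a scanner is simple enough to sketch. The Python snippet below is purely illustrative (it is not Veracode's tool, and real scanners would parse the Mach-O symbol table rather than raw bytes): it flags a binary that appears to reference the address book API simply by searching for the symbol name.

```python
# Illustrative sketch: flag app binaries that reference the iOS address book
# API by searching their raw bytes for the symbol name. A real scanner would
# inspect the Mach-O symbol table; a substring search is a crude stand-in.

SYMBOL = b"ABAddressBookCopyArrayOfAllPeople"

def references_address_book(binary: bytes) -> bool:
    """Return True if the binary appears to call the address book API."""
    return SYMBOL in binary

# Toy examples: one "binary" embedding the symbol, one without it.
suspicious = b"\x00\x01" + SYMBOL + b"\x02"
clean = b"\x00\x01CFStringCreateWithCString\x02"

print(references_address_book(suspicious))  # True
print(references_address_book(clean))       # False
```

The point of the sketch is just how little it takes to detect this: the symbol sits in plain sight in any app that statically links the call.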
Talking to the Veracode Research team about this iOS address book madness, the consensus was that none of this should come as a surprise to anyone who's been following mobile development or security research for mobile platforms.
This is terrific work.
Despite the Veracode team's reaction, I'm sure most of the public - even the technologically informed public - would indeed be very surprised to know any old app can freely access their contact lists. If developers are not surprised, perhaps they look at privacy differently?
What probably will surprise many technologists is that under black letter privacy law in Australia, Europe and elsewhere, it would be an offence for the company deploying the app to access contact information on a phone without a good reason and/or user consent (let alone to do it without any notice at all as was the case with Path). As Kriegsman writes in the Veracode article, it's hard to imagine why many of these apps have any cause to call ABAddressBookCopyArrayOfAllPeople.
Developers sometimes seem to think that if information is accessible to them, then it's fair game for re-use or innocent "research". The classic example was the collection of wifi transmissions by Google Street View cars. Many said at the time that if data is in the "public domain" then it's free to be collected and used. And they were very surprised indeed to learn that this presumption is simply wrong at law. Many privacy laws are generally blind to where Personally Identifiable Information is collected. If information is identifiable, and if you have no business collecting it, then you're not allowed to. It's black and white.
Journalist Farhad Manjoo at Slate recently lampooned the privacy interests of Facebook users, quipping sarcastically that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Funny.
A stripper might seem the archetype of promiscuity, but she has a great deal of control over what's going on. There are strict limits to what she does and, moreover, to what others - including the club - are allowed to do to her. Strip club customers are banned from taking photos and exploiting the performers' exuberance, and only the most unscrupulous club would itself take advantage of the show for secondary purposes.
Facebook offers no such protection to their own members.
While people do need to be prudent on the Internet, the real privacy problem with Facebook is not the promiscuity of some of its members, but the blatant and boundless way that it pirates personal information. Regardless of the privacy settings, Facebook reserves all rights to do anything it likes with PI, behind the backs of even its most reserved users. That is the fundamental and persistent privacy breach. It's obscene.
Update 5 Dec 2011
Farhad Manjoo took me to task on Twitter and the Slate site [though his comments at Slate have since disappeared] saying I misunderstood the strip club analogy. He said what he really meant was propriety, not modesty: visitors to strip clubs shouldn't expect propriety and Facebook users shouldn't expect privacy. But I don't see how refining the metaphor makes his point any clearer or, to be frank, any less odious. I haven't been to a lot of strip clubs, but I think that their patrons know pretty much what to expect. Facebook on the other hand is deceptive (and has been officially determined to be so by the FTC). Strip clubs are overt; Facebook is tricky.
Some of us -- including both Manjoo and me -- have realised that everything Facebook does is calculated to extract commercial value from the Personal Information it collects and creates. But I don't belittle Facebook's users for falling for the trickery.
A repeated refrain of Facebook’s apologists is that privacy is dead. People are supposed to know that anything on the Internet is up for grabs. It’s digital apartheid: the new Brown Shirts say if you’re so precious about your privacy, just stay offline.
But socialising and privacy are hardly mutually exclusive; we don’t walk around in public with our names tattooed on our foreheads. Why can't we participate in these networks in a measured, controlled way without submitting to the operators' rampant X-ray vision? And why can’t the apologists see how they’re being sucked into generating the vast fortunes of Zuckerberg et al? There's nothing inevitable about trading off privacy for conviviality -- it's just more lucrative that way for the PI robber-barons.
The privacy dangers of Facebook are real and present and run so much deeper than the self-harm done by some peoples’ overly enthusiastic sharing. The privacy of millions in the mainstream of Facebook is imperilled. Facebook crowd-sources the identification and constant surveillance of its members. With facial recognition, Facebook is building up detailed pictures of what people do, when, where and with whom. I can be tagged without consent in a photo that was not taken by me, and not uploaded by me. The majority of photos in the cloud were not uploaded for this purpose. And look closely: When you remove a tag, Facebook does not remove the underlying biometric template, nor do they undertake to stop using the template that has been gifted to them by innocents who just think tagging their mates is kinda cool.
It’s not cool, it's insidious! And it’s rapaciously commercial, like everything else they do. Facebook places no limitations whatsoever on the secondary uses it makes of the Personally Identifiable Information it generates (which in itself is at odds with European and other privacy law, hence the lawsuits now underway in Germany).
You know, if a government was stealing into our photo albums, labelling people and profiling them, there would be riots.