A Social Media Week Sydney event #SMWSydney
Law Lounge, Sydney University Law School
New Law School Building
Eastern Ave, Camperdown
Fri, Sep 26, 10:00 AM – 11:30 AM
How can you navigate privacy fact and fiction, without the geeks and lawyers boring each other to death?
It's often said that technology has outpaced privacy law. Many digital businesses seem empowered by this brash belief. And so they proceed with apparent impunity to collect and monetise as much Personal Information as they can get their hands on.
But it's a myth!
Some of the biggest corporations in the world, including Google and Facebook, have been forcefully brought to book by privacy regulators. So, we have to ask ourselves:
- what does privacy law really mean for social media in Australia?
- is privacy "good for business"?
- is privacy "not a technology issue"?
- how can digital businesses navigate fact & fiction, without their geeks and lawyers boring each other to death?
In this Social Media Week Master Class I will:
- unpack what's "creepy" about certain online practices
- show how to rate data privacy issues objectively
- analyse classic misadventures with geolocation, facial recognition, and predicting when shoppers are pregnant
- critique photo tagging and crowd-sourced surveillance
- explain why Snapchat is worth more than three billion dollars
- analyse the regulatory implications of Big Data, Biometrics, Wearables and The Internet of Things.
We couldn't have timed this Master Class better, coming two weeks after the announcement of the Apple Watch, which will figure prominently in the class!
So please come along, for a fun and in-depth look at social media, digital technology, the law, and decency.
About the presenter
Steve Wilson is a technologist who stumbled into privacy 12 years ago. He rejected those well-meaning slogans (like "Privacy Is Good For Business!") and instead dug into the relationships between information technology and information privacy. Now he researches and develops design patterns to help sort out privacy, alongside all the other competing requirements of security, cost, usability and revenue. His latest publications include:
- "Big Privacy: The new standard for Big Data Privacy" from Constellation Research, and
- "The collision between Big Data and privacy law" due out in October in the Australian Journal of Telecommunications and the Digital Economy.
It's long been said that if you're getting something for free online, then you're not the customer, you're the product. It's a reference to the one-sided bargain for personal information that powers so many social businesses - the way that "infomopolies" as I call them exploit the knowledge they accumulate about us.
Now it's been revealed that we're even lower than product: we're lab rats.
Facebook data scientist Adam Kramer, with collaborators from UCSF and Cornell, this week reported on a study in which they tested how Facebook users respond psychologically to alternately positive and negative posts. Their experimental technique is at once ingenious and shocking. They took the real-life posts of nearly 700,000 Facebook members, and manipulated them, turning them slightly up- or down-beat. And then Kramer et al. measured the emotional tone in how people reading those posts reacted in their own feeds. See "Experimental evidence of massive-scale emotional contagion through social networks", Adam Kramer, Jamie Guillory & Jeffrey Hancock, Proceedings of the National Academy of Sciences, v111.24, 17 June 2014.
The resulting scandal has been well-reported by many, including Kashmir Hill in Forbes, whose blog post nicely covers how the affair has unfolded, and includes a response by Adam Kramer himself.
Plenty has been written already about the dodgy (or non-existent) ethics approval, and the entirely contemptible claim that users gave "informed consent" to have their data "used" for research in this way. I want to draw attention here to Adam Kramer's unvarnished description of their motives. His response to the furore (provided by Hill in her blog) is, as she puts it, tone deaf. Kramer makes no attempt whatsoever at a serious scientific justification for this experiment:
- "The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product ... [We] were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.
That is, this large scale psychological experiment was simply for product development.
Some apologists have, I hear, countered that social network feeds are manipulated all the time, notably by advertisers, to produce emotional responses.
Now that's interesting, because for their A/B experiment, Kramer and his colleagues took great pains to make sure the subjects were unaware of the manipulation. After all, the results would be meaningless if people knew what they were reading had been emotionally fiddled with.
In contrast, the ad industry has always insisted that today's digital consumers are super savvy, and they know the difference between advertising and real-life. Advertising is therefore styled as just a bit of harmless fun. But this line is I think further exposed by the Facebook Experiment as self-serving mythology, crafted by people who are increasingly expert at covertly manipulating perceptions, and who now have the data, collected dishonestly, to prove it.
My Constellation Research colleague Alan Lepofsky has been working on new ways to characterise users in cyberspace. Frustrated with the oversimplified cliche of the "Digital Millennials", Alan has developed a fresh framework for categorizing users according to their comfort with technology and their actual knowledge of it. See his new research report "Segmenting Audiences by Digital Proficiency".
This sort of schema could help frame the answers to some vital open questions. In today's maelstrom of idealism and hyperbole, we're struggling to predict how things are going to turn out, and to build appropriate policies and management structures. We are still guessing how the digital revolution is really going to change the human condition. We're not yet rigorously measuring the sorts of true changes, if any, that the digital transformation is causing.
We hold such disparate views about cyberspace right now. When the Internet does good – for example through empowering marginalized kids at schools, fueling new entrepreneurship, or connecting disadvantaged communities – it is described as a power for good, a true "paradigm shift". But when it does bad – as when kids are bullied online or when phishing scams hook inexperienced users – then the Internet is said to be just another communications medium. Such inconsistent attitudes are with us principally because the medium is still so new. Yet we all know how important it is, and that far reaching policy decisions are being made today. So it’s good to see new conceptual frameworks for analyzing the range of ways that people engage with and utilise the Internet.
Vast fortunes are being made through online business models that purport to feed a natural hunger to be social. With its vast reach and zero friction, the digital medium might radically amplify aspects of the social drive, quite possibly beyond what nature intended. As supremely communal beings, we humans have evolved elaborate social bearings for getting on in diverse groups, and we've built social conventions that govern how we meet, collaborate, refer, partner, buy and sell, amalgamate, marry, and split. We are incredibly adept at reading body language, spotting untruths, and gaming each other for protection or for personal advantage. In cyberspace, few of the traditional cues are available to us; we literally lose our bearings online. And therefore naive Internet users fall prey to spam, fake websites and all manner of scams.
How are online users adapting to their new environment and evolving new instincts? I expect there will be interesting correlations between digital resilience and the sophistication measures in Alan’s digital proficiency framework. We might expect Digital Natives to be better equipped inherently to detect and respond to online threats, although they might be somewhat more at risk by virtue of being more active. I wonder too if the risk-taking behavior which exacerbates some online risks for adolescents would be relatively more common amongst Digital Immigrants? By the same token, the Digital Skeptics who are knowledgeable yet uncomfortable may be happy staying put in that quadrant, or only venturing out for selected cyber activities, because they’re consciously managing their digital exposure.
We certainly do need new ways like Alan's Digital Proficiency Framework to understand society’s complex "Analog to Digital" conversion. I commend it to you.
In one of the most highly anticipated sessions ever at the annual South-by-Southwest (SXSW) culture festival, NSA whistle blower Ed Snowden appeared via live video link from Russia. He joined two privacy and security champions from the American Civil Liberties Union – Chris Soghoian and Ben Wizner – to canvass the vexed tensions between intelligence and law enforcement, personal freedom, government accountability and digital business models.
These guys traversed difficult ground, with respect and much nuance. They agreed the issues are tough, and that proper solutions are non-obvious and slow-coming. The transcript is available here.
Yet afterwards the headlines and tweet stream were dominated by "Snowden's Tips" for personal online security. It was as if Snowden had been conducting a self-help workshop or a Cryptoparty. He was reported to recommend we encrypt our hard drives, encrypt our communications, and use Tor (the special free-and-open-source encrypted browser). These are mostly fine suggestions but I am perplexed why they should be the main takeaways from a complex discussion. Are people listening to Snowden's broader and more general policy lessons? I fear not. I believe people still conflate secrecy and privacy. At the macro level, the confusion makes it difficult to debate national security policy properly; at a micro level, even if crypto were practical for typical citizens, it is not a true privacy measure. Citizens need so much more than secrecy technologies, whether it's SSL-always-on at web sites, or do-it-yourself encryption.
Ed Snowden is a remarkably measured and thoughtful commentator on national security. Despite being hounded around the world, he is not given to sound bites. His principal concerns appear to be around public accountability, oversight and transparency. He speaks of the strengths and weaknesses of the governance systems already in place; he urges Congress to hold security agency heads to account.
When drawn on questions of technology, he doesn't dispense casual advice; instead he calls for multifaceted responses to our security dilemmas: more cryptological research, better random number generators, better testing, more robust cryptographic building blocks and more careful product design. Deep, complicated engineering stuff.
So how did the media, both mainstream and online alike, distill Snowden's sweeping analysis of politics, policy and engineering into three sterile and quasi-survivalist snippets?
Partly it's due to the good old sensationalism of all modern news media: everyone likes a David-and-Goliath angle where individuals face off against pitiless governments. And there's also the ruthless compression: newspapers cater for an audience with school-age reading levels and attention spans, and Twitter clips our contributions to 140 characters.
But there is also a deeper over-simplification of privacy going on which inhibits our progress.
Too often, people confuse privacy for secrecy. Privacy gets framed as a need to hide from prying eyes, and from that starting position, many advocates descend into a combative, everyone-for-themselves mindset.
However privacy has very little to do with secrecy. We shouldn't have to go underground to enjoy that fundamental human right to be let alone. The social reality is that most of us wish to lead rich and quite public lives. We actually want others to know us – to know what we do, what we like, and what we think – but all within limits. Digital privacy (or more clinically, data protection) is not about hiding; rather it is a state where those who know us are restrained in what they do with the knowledge they have about us.
Privacy is the protection you need when your affairs are not confidential!
So encryption is a sterile and very limited privacy measure. As the SXSW panellists agreed, today's encryption tools really are the preserve of deep technical specialists. Ben Wizner quipped that if the question is how can average users protect themselves online, and the answer is Tor, then "we have failed".
And the problems with cryptography are not just usability and customer experience. A fundamental challenge with the best encryption is that everyone needs to be running the tools. You cannot send out encrypted email unilaterally – you need to first make sure all your correspondents have installed the right software and they've got trusted copies of your encryption keys, or they won't be able to unscramble your messages.
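To make the prerequisite concrete, here is a toy sketch in Python. This is emphatically not real cryptography – the "sealing" step is a placeholder, and the key directory is hypothetical – but it shows the structural point: unless your correspondent has already opted in by publishing a key, there is simply nothing to encrypt to, and no amount of unilateral effort on the sender's side can fix that.

```python
# Toy model of the key-distribution prerequisite for end-to-end encryption.
# The "cipher" below is a stand-in (it just reverses the text); the point is
# the protocol shape, not the maths.

public_key_directory = {}  # keys published by correspondents who run the tools

def publish_key(user, key):
    """A correspondent opts in by publishing a public key."""
    public_key_directory[user] = key

def send_encrypted(sender, recipient, message):
    """Sending fails unless the recipient has already published a key."""
    key = public_key_directory.get(recipient)
    if key is None:
        # No key on file: encryption cannot proceed unilaterally.
        raise LookupError(recipient + " has not published a key; cannot encrypt")
    # Stand-in "encryption": record which key the message was sealed under.
    return {"to": recipient, "sealed_with": key, "ciphertext": message[::-1]}

publish_key("alice", "alice-pubkey-01")
print(send_encrypted("bob", "alice", "hello"))  # works: Alice published a key
try:
    send_encrypted("bob", "carol", "hello")     # fails: Carol never opted in
except LookupError as err:
    print("error:", err)
```

Real tools (PGP, say) have exactly this shape: the hard part is not the cipher but getting every correspondent enrolled with trusted keys beforehand.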
Chris Soghoian also nailed the business problem that current digital revenue models are largely incompatible with encryption. The wondrous free services we enjoy from the Googles and Facebooks of the world are funded in the main by mining our data streams, figuring out our interests, habits and connections, and monetising that synthesised information. The web is in fact bankrolled by surveillance – by Big Business as opposed to government.
End-to-end encryption prevents data mining and would ruin the business model of the companies we've become attached to. If we were to get serious with encryption, we may have to cough up the true price for our modern digital lifestyles.
The SXSW privacy and security panellists know all this. Snowden in particular spent much of his time carefully reiterating many of the basics of data privacy. For instance he echoed the Collection Limitation Principle when he said of large companies that they "can't collect any data; [they] should only collect data and hold it for as long as necessary for the operation of the business". And the Openness Principle: "data should not be collected without people's knowledge and consent". If I was to summarise Snowden's SXSW presentation, I'd say privacy will only be improved by reforming the practices of both governments and big businesses, and by putting far more care into digital product development. Ed Snowden himself doesn't promote neat little technology tips.
It's still early days for the digital economy. We're experiencing an online re-run of the Wild West, with humble users understandably feeling forced to take measures into their own hands. So many individuals have become hungry for defensive online tools and tips. But privacy is more about politics and regulation than technology. I hope that people listen more closely to Ed Snowden on policy, and that his lasting legacy is more about legal reform and transparency than Do-It-Yourself encryption.
The cover of Newsweek magazine on 27 July 1970 featured an innocent couple being menaced by cameras and microphones and new technologies like computer punch cards and paper tape. The headline hollered “IS PRIVACY DEAD?”.
The same question has been posed every few years ever since.
In 1999, Sun Microsystems boss Scott McNealy told us we have “zero privacy” and should “get over it”; in 2008, Ed Giorgio from the Office of the US Director of National Intelligence chillingly asserted that “privacy and security are a zero-sum game”; Facebook’s Mark Zuckerberg proclaimed in 2010 that privacy was no longer a “social norm”. And now the scandal around secret surveillance programs like PRISM and the Five Eyes’ related activities looks like another fatal blow to privacy. But the fact that cynics, security zealots and information magnates have been asking the same rhetorical question for over 40 years suggests that the answer is No!
PRISM, as revealed by whistle blower Ed Snowden, is a Top Secret electronic surveillance program of the US National Security Agency (NSA) to monitor communications traversing most of the big Internet properties including, allegedly, Apple, Facebook, Google, Microsoft, Skype, Yahoo and YouTube. Relatedly, intelligence agencies have evidently also been obtaining comprehensive call records from major telephone companies, eavesdropping on international optic fibre cables, and breaking into the cryptography many take for granted online.
In response, forces lined up at tweet speed on both sides of the stereotypical security-privacy divide. The “hawks” say privacy is a luxury in these times of terror, if you've done nothing wrong you have nothing to fear from surveillance, and in any case, much of the citizenry evidently abrogates privacy in the way they take to social networking. On the other side, libertarians claim this indiscriminate surveillance is the stuff of the Stasi, and by destroying civil liberties, we let the terrorists win.
Governments of course are caught in the middle. President Obama defended PRISM on the basis that we cannot have 100% security and 100% privacy. Yet frankly that’s an almost trivial proposition. It's motherhood. And it doesn’t help to inform any measured response to the law enforcement challenge, for we don’t have any tools that would let us design a computer system to an agreed specification in the form of, say, “98% Security + 93% Privacy”. It’s silly to use the language of “balance” when we cannot measure the competing interests objectively.
Politicians say we need a community debate over privacy and national security, and they’re right (if not fully conscientious in framing the debate themselves). Are we ready to engage with these issues in earnest? Will libertarians and hawks venture out of their respective corners in good faith, to explore this difficult space?
I suggest one of the difficulties is that all sides tend to confuse privacy for secrecy. They’re not the same thing.
Privacy is a state of affairs where those who have Personal Information (PII) about us are constrained in how they use it. In daily life, we have few absolute secrets, but plenty of personal details. Not many people wish to live their lives underground; on the contrary we actually want to be well known by others, so long as they respect what they know about us. Secrecy is a sufficient but not necessary condition for privacy. Robust privacy regulations mandate strict limits on what PII is collected, how it is used and re-used, and how it is shared.
Therefore I am a privacy optimist. Yes, obviously too much PII has burst its banks in cyberspace, yet it is not necessarily the case that any “genie” is “out of the bottle”.
If PII falls into someone’s hands, privacy and data protection legislation around the world provides strong protection against re-use. For instance, in Australia Google was found to have breached the Privacy Act when its StreetView cars recorded unencrypted Wi-Fi transmissions; the company cooperated in deleting the data concerned. In Europe, Facebook’s generation of tag suggestions without consent by biometric processes was ruled unlawful; regulators there forced Facebook to cease facial recognition and delete all old templates.
We might have a better national security debate if we more carefully distinguished privacy and secrecy.
I see no reason why Big Data should not be a legitimate tool for law enforcement. I have myself seen powerful analytical tools used soon after a terrorist attack to search out patterns in call records in the vicinity to reveal suspects. Until now, there has not been the technological capacity to use these tools pro-actively. But with sufficient smarts, raw data and computing power, it is surely a reasonable proposition that – with proper and transparent safeguards in place – population-wide communications metadata can be screened to reveal organised crimes in the making.
A more sophisticated and transparent government position might ask the public to give up a little secrecy in the interests of national security. The debate should not be polarised around the falsehood that security and privacy are at odds. Instead we should be debating and negotiating appropriate controls around selected metadata to enable effective intelligence gathering while precluding unexpected re-use. If (and only if) credible and verifiable safeguards can be maintained to contain the use and re-use of personal communications data, then so can our privacy.
For me the awful thing about PRISM is not that metadata is being mined; it’s that we weren’t told about it. Good governments should bring the citizenry into their confidence.
Are we prepared to honestly debate some awkward questions?
- Has the world really changed in the past 10 years such that surveillance is more necessary now? Should the traditional balances of societal security and individual liberties enshrined in our traditional legal structures be reviewed for a modern world?
- Has the Internet really changed the risk landscape, or is it just another communications mechanism? Is the Internet properly accommodated by centuries-old constitutions?
- How can we have confidence in government authorities to contain their use of communications metadata? Is it possible for trustworthy new safeguards to be designed?
Many years ago, cryptographers adopted a policy of transparency. They have forsaken secret encryption algorithms, so that the maths behind these mission critical mechanisms is exposed to peer review and ongoing scrutiny. Secret algorithms are fragile in the long term because it’s only a matter of time before someone exposes them and weakens their effectiveness. Security professionals have a saying: “There is no security in obscurity”.
For precisely the same reason, we must not have secret government monitoring programs either. If the case is made that surveillance is a necessary evil, then it would actually be in everyone’s interests for governments to run their programs out in the open.
The cover of Newsweek magazine on 27 July 1970 featured a cartoon couple cowered by computer and communications technology, and the urgent all-caps headline “IS PRIVACY DEAD?”
Four decades on, Newsweek is dead, but we’re still asking the same question.
Every generation or so, our notions of privacy are challenged by a new technology. In the 1880s (when Warren and Brandeis developed the first privacy jurisprudence) it was photography and telegraphy; in the 1970s it was computing and consumer electronics. And now it’s the Internet, a revolution that has virtually everyone connected to everyone else (and soon everything) everywhere, and all of the time. Some of the world’s biggest corporations now operate with just one asset – information – and a vigorous “publicness” movement rallies around the purported liberation of shedding what are said by writers like Jeff Jarvis (in his 2011 book “Public Parts”) to be old fashioned inhibitions. Online Social Networking, e-health, crowd sourcing and new digital economies appear to have shifted some of our societal fundamentals.
However the past decade has seen a dramatic expansion of countries legislating data protection laws, in response to citizens’ insistence that their privacy is as precious as ever. And consumerized cryptography promises absolute secrecy. Privacy has long stood in opposition to the march of invasive technology: it is the classical immovable object met by an irresistible force.
So how robust is privacy? And will the latest technological revolution finally change privacy forever?
Soaking in information
We live in a connected world. Young people today may have grown tired of hearing what a difference the Internet has made, but a crucial question is whether relatively new networking technologies and sheer connectedness are exerting novel stresses to which social structures have yet to adapt. If “knowledge is power” then the availability of information probably makes individuals today more powerful than at any time in history. Search, maps, Wikipedia, Online Social Networks and 3G are taken for granted. Unlimited deep technical knowledge is available in chat rooms; universities are providing a full gamut of free training via Massive Open Online Courses (MOOCs). The Internet empowers many to organise in ways that are unprecedented, for political, social or business ends. Entirely new business models have emerged in the past decade, and there are indications that political models are changing too.
Most mainstream observers still tend to talk about the “digital” economy but many think the time has come to drop the qualifier. Important services and products are, of course, becoming inherently digital and whole business categories such as travel, newspapers, music, photography and video have been massively disrupted. In general, information is the lifeblood of most businesses. There are countless technology billionaires whose fortunes have been made in industries that did not exist twenty or thirty years ago. Moreover, some of these businesses only have one asset: information.
Banks and payments systems are getting in on the action, innovating at a hectic pace to keep up with financial services development. There is a bewildering array of new alternative currencies like Linden dollars, Facebook Credits and Bitcoins – all of which can be traded for “real” (reserve bank-backed) money in a number of exchanges of varying reputation. At one time it was possible for Entropia Universe gamers to withdraw dollars at ATMs against their virtual bank balances.
New ways to access finance have arisen, such as peer-to-peer lending and crowd funding. Several so-called direct banks in Australia exist without any branch infrastructure. Financial institutions worldwide are desperate to keep up, launching amongst other things virtual branches and services inside Online Social Networks (OSNs) and even virtual worlds. Banks are of course keen to not have too many sales conducted outside the traditional payments system where they make their fees. Even more strategically, banks want to control not just the money but the way the money flows, because it has dawned on them that information about how people spend might be even more valuable than what they spend.
Privacy in an open world
For many of us, on a personal level, real life is a dynamic blend of online and physical experiences. The distinction between digital relationships and flesh-and-blood ones seems increasingly arbitrary; in fact we probably need new words to describe online and offline interactions more subtly, without implying a dichotomy.
Today’s privacy challenges are about more than digital technology: they really stem from the way the world has opened up. The enthusiasm of many for such openness – especially in Online Social Networking – has been taken by some commentators as a sign of deep changes in privacy attitudes. Facebook's Mark Zuckerberg for instance said in 2010 that “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people - and that social norm is just something that has evolved over time”. And yet serious academic investigation of the Internet’s impact on society is (inevitably) still in its infancy. Social norms are constantly evolving but it’s too early to tell if they have reached a new and more permissive steady state. The views of information magnates in this regard should be discounted given their vested interest in their users' promiscuity.
At some level, privacy is about being closed. And curiously for a fundamental human right, the desire to close off parts of our lives is relatively fresh. Arguably it’s even something of a “first world problem”. Formalised privacy appears to be an urban phenomenon, unknown as such to people in villages when everyone knew everyone – and their business. It was only when large numbers of people congregated in cities that they became concerned with privacy. For then they felt the need to structure the way they related to large numbers of people – family, friends, work mates, merchants, professionals and strangers – in multi-layered relationships. So privacy was born of the first industrial revolution. It has taken prosperity and active public interest to create the elaborate mechanisms that protect our personal privacy from day to day and which we take for granted today: the postal services, direct dial telephones, telecommunications regulations, individual bedrooms in large houses, cars in which we can escape for a while, and now of course the mobile handset.
Privacy is about respect and control. Simply put, if someone knows me, then they should respect what they know; they should exercise restraint in how they use that knowledge, and be guided by my wishes. Generally, privacy is not about anonymity or secrecy. Of course, if we live life underground then unqualified privacy can be achieved, yet most of us exist in diverse communities where we actually want others to know a great deal about us. We want merchants to know our shipping address and payment details, healthcare providers to know our intimate details, hotels to know our travel plans and so on. Practical privacy means that personal information is not shared arbitrarily, and that individuals retain control over the tracks of their lives.
Big Data: Big Future
Big Data tools are being applied everywhere, from sifting telephone call records to spot crimes in the planning, to DNA and medical research. Every day, retailers use sophisticated data analytics to mine customer data, ostensibly to better uncover true buyer sentiments and continuously improve their offerings. Some department stores are interested in predicting such major life changing events as moving house or falling pregnant, because then they can target whole categories of products to their loyal customers.
Real time Big Data will become embedded in our daily lives, through several synchronous developments. Firstly computing power, storage capacity and high speed Internet connectivity all continue to improve at exponential rates. Secondly, there are more and more “signals” for data miners to choose from. No longer do you have to consciously tell your OSN what you like or what you’re doing, because new augmented reality devices are automatically collecting audio, video and locational data, and trading it around a complex web of digital service providers. And miniaturisation is leading to a whole range of smart appliances, smart cars and even smart clothes with built-in or ubiquitous computing.
The privacy risks are obvious, and yet the benefits are huge. So how should we think about the balance in order to optimise the outcome? Let’s remember that information powers the new digital economy, and the business models of many major new brands like Facebook, Twitter, Foursquare and Google incorporate a bargain for Personal Information. We obtain fantastic services from these businesses “for free” but in reality they are enabled by all that information we give out as we search, browse, like, friend, tag, tweet and buy.
The more innovation we see ahead, the more certain it seems that data will be the core asset of cyber enterprises. To retain and even improve our privacy in the unfolding digital world, we must be able to visualise the data flows that we’re engaged in, evaluate what we get in return for our information, and determine a reasonable trade of costs and benefits.
Is Privacy Dead? If the same rhetorical question needs to be asked over and over for decades, then it’s likely the answer is no.
Many of the identerati campaign on Twitter and on the blogosphere for a federated new order, where banks in particular should be able to deal with new customers based on those customers’ previous registrations with other banks. Why, they ask, should a bank put you through all that identity proofing palaver when you must have already passed muster at any number of banks before? Why can’t your new bank pick up the fact that you’ve been identified already? The plea to federate makes a lot of sense, but as I’ve argued previously, the devil is in the legals.
Funnily enough, a clue as to the nature of this problem is contained in the disclaimers on many of the identerati’s blogs and Twitter accounts:
"These are personal opinions only and do not reflect the position of my employer".
Come on. We all know that’s bullshit.
The bloggers I’m talking about are thought leaders at their employers. Many of them have written the book on identity. They're chairing the think tanks. What they say goes! So their blogs do in fact reflect very closely what their employers think.
So why the disclaimer? It's a legal technicality. A company’s lawyers do not want the firm held liable for the consequences of a random reader following an opinion provided outside the very tightly controlled conditions of a consulting contract; the lawyers do not want any remarks in a blog to be taken as advice.
And it's the same with federated identity. Accepting another bank's identification of an individual is something that cannot be done casually. Regardless of the common sense embodied in federated identity, the banks’ lawyers are saying to all institutions, sure, we know you're all putting customers through the same identity proofing protocols, but unless there is a contract in place, you must not rely on another bank's process; you have to do it yourself.
Now, there is a way to chip away at the tall walls of legal habit. This is going to sound a bit semantic, but we are talking about legal technicalities here, and semantics is the name of the game. Instead of Bank X representing to Bank Y that X can provide the "Identity" of a new customer, Bank X could provide a digitally notarised copy of some of the elements of the identity proofing. Elements could be provided as digitally signed messages saying "Here's a copy of Steve’s gas bill" or "Here's a copy of Steve’s birth certificate which we have previously verified". We could all stop messing around with abstract identities (which in the fine print mean different things to different Relying Parties) and instead drop down a level and exchange information about verified claims, or "identity assertions". Individual RPs could then pull together the elements of identity they need, add them up to an identification fit for their own purpose, and avoid the implications of having third parties "provide identity". The semantics would be easier if we only sought to provide elements of identity. All Identity Providers (IdPs) could be simplified and streamlined as Attribute Providers.
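To make the idea concrete, here is a minimal sketch of an attribute assertion flow: "Bank X" signs one verified element of identity, and a Relying Party checks the signature before folding that element into its own fit-for-purpose identification. All names and claim labels are hypothetical, and a shared-secret HMAC stands in for the public-key digital signatures a real scheme would use, purely to keep the sketch dependency-free.

```python
import hashlib
import hmac
import json

# Placeholder for Bank X's signing key; a real Attribute Provider would
# use an asymmetric key pair, so RPs could verify without shared secrets.
SECRET = b"bank-x-signing-key"

def issue_assertion(subject: str, claim: str, value: str) -> dict:
    """Bank X packages one verified element of identity as a signed message."""
    payload = json.dumps(
        {"issuer": "Bank X", "subject": subject, "claim": claim, "value": value},
        sort_keys=True,
    )
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_assertion(assertion: dict) -> bool:
    """A Relying Party checks an assertion is untampered before relying on it."""
    expected = hmac.new(
        SECRET, assertion["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, assertion["signature"])

# An RP accumulates individual verified claims, not a whole "identity".
assertion = issue_assertion("Steve", "utility-bill-sighted", "gas bill, verified")
print(verify_assertion(assertion))
```

The point of the design is that each message asserts only one verified fact, so the Relying Party, not the provider, decides how many such elements add up to an identification fit for its purpose.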
Journalist Farhad Manjoo at Slate recently lampooned the privacy interests of Facebook users, quipping sarcastically that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Funny.
A stripper might seem the archetype of promiscuity, but she has a great deal of control over what's going on. There are strict limits to what she does and moreover, what others, including the club, are allowed to do to her. Strip club customers are banned from taking photos and exploiting the performers' exuberance, and only the most unscrupulous club would itself take advantage of the show for secondary purposes.
Facebook offers no such protection to its own members.
While people do need to be prudent on the Internet, the real privacy problem with Facebook is not the promiscuity of some of its members, but the blatant and boundless way that it pirates personal information. Regardless of the privacy settings, Facebook reserves all rights to do anything it likes with PI, behind the backs of even its most reserved users. That is the fundamental and persistent privacy breach. It's obscene.
Update 5 Dec 2011
Farhad Manjoo took me to task on Twitter and the Slate site [though his comments at Slate have since disappeared] saying I misunderstood the strip club analogy. He said what he really meant was propriety, not modesty: visitors to strip clubs shouldn't expect propriety and Facebook users shouldn't expect privacy. But I don't see how refining the metaphor makes his point any clearer or, to be frank, any less odious. I haven't been to a lot of strip clubs, but I think that their patrons know pretty much what to expect. Facebook on the other hand is deceptive (and has been officially determined to be so by the FTC). Strip clubs are overt; Facebook is tricky.
Some of us -- including both Manjoo and me -- have realised that everything Facebook does is calculated to extract commercial value from the Personal Information it collects and creates. But I don't belittle Facebook's users for falling for the trickery.
It is no exaggeration to characterise the theft of personal information as an epidemic. Personal information in digital form is the lifeblood of banking and payments, government services, healthcare, a great deal of retail commerce, and entertainment. But personal records―especially digital identities―are stolen in the millions by organised criminals, to appropriate not just money but also the broader and fast growing intangible assets of “digital natives”. The Internet has given criminals x-ray vision into people’s details, and perfect digital disguises with which to defraud business and governments.
Credit card fraud over the Internet is the model cyber crime. Child’s play to perpetrate, and fuelled by a thriving black market in stolen personal data, online card fraud represents 70% of all card fraud in Australia, continues to grow at 30-50% p.a., and cost over A$120 million here in 2010 (see http://lockstep.com.au/blog/2011/09/27/au-cnp-fraud-cy2010). The importance of this crime goes beyond the gross losses, for some of the proceeds are going to fund terrorism, as acknowledged by the US Homeland Security Committee.
Yet there is a deeper lesson in online card fraud: it needs to be seen as a special case of digital identity theft. ID theft is perpetrated by sophisticated organised crime gangs, behind the backs of the best trained and best behaved users, aided and abetted by insiders corrupted by enormous rewards. No amount of well meaning security policy or user awareness can defeat the profit motives of today’s online fraudsters.
As the digital economy is to the wider economy, so cyber crime is to crime at large. And yet the e-business environment remains stuck in a Wild West stage of development: it’s everyone for themselves! There is no consistency in the gadgets foisted upon consumers to access online businesses and services; worse, most are flawed and readily subverted by hackers. We could build security deep into our transaction platforms to prevent identity theft, phishing, website spoofing and spam―the requisite building blocks, like digital signature toolkits and personal smart devices, are now ubiquitous―but instead, almost all attention turns to user awareness. Yet education has reached its use-by date, rendered utterly obsolete by the industrialisation of cybercrime (see also http://lockstep.com.au/library/online_banking_review/obr-lockstep-200810-many-hand.pdf). Almost everyone now knows they need a firewall and anti-virus software, but these are misguided measures when most identities are stolen in other channels, utterly beyond users' control. The predominant technology-neutral policy position of government and the banking industry has not fostered market-driven innovation as hoped, but has instead created a leadership vacuum, leaving consumers to fend for themselves.
To really curtail cyber crime we need the sort of concerted and balanced effort that typifies security in all other walks of life, like transportation, energy and finance. Car owners don't fit their own seat belts and airbags as after-market nice-to-haves; bank customers don’t need to install their own security screens; bank robbers are not kept at bay by security audits alone. The time has come, now that we’re constructing the digital economy, to embrace intelligent security technologies that can actually prevent identity theft and cyber crime.
Too many analyses of Google's and Facebook's Real Names policy take a narrow view of pseudonyms, conceding only that they may benefit for example "[dissidents] in Egypt, China, colonial America [and] whistle-blowers inside corporations and labour unions" (see Berin Szoka's "What's in a Pseudo-name?").
There's evidently a belief that regular upstanding citizens have no need for pseudonyms, and a veiled suspicion that wanting one means you must have something to hide. Yet in truth, a great many ordinary Internet users have developed pseudonymous habits to protect themselves in the Wild West that is cyberspace today.
To frustrate the efforts of junk mailers and spammers, it's standard practice amongst many to use multiple e-mail addresses, or to fib about their location or their age when filling in forms. And where does the Real Names creed leave all the advice we've been giving our kids for years in social networking, to hide their age, their location and any identifying details?
It's important for everyone -- not just Mid-Eastern freedom fighters -- to have the autonomy to represent themselves how they like in social settings.
What a twisted world is cyberspace these days! Think about it: Why the hell is the onus on users to defend their use of nicknames, when it ought to be the informopolies that justify imposing their self-serving rules on how we users refer to ourselves? We don't go around in public with our 'real names' tattooed on our foreheads! No "Social network" should be dictating how we socialise!