The cover of Newsweek magazine on 27 July 1970 featured a cartoon couple cowering before computer and communications technology, under the urgent all-caps headline “IS PRIVACY DEAD?”
Four decades on, Newsweek is dead, but we’re still asking the same question.
Every generation or so, our notions of privacy are challenged by a new technology. In the 1880s (when Warren and Brandeis developed the first privacy jurisprudence) it was photography and telegraphy; in the 1970s it was computing and consumer electronics. And now it’s the Internet, a revolution that has virtually everyone connected to everyone else (and soon everything) everywhere, and all of the time. Some of the world’s biggest corporations now operate with just one asset – information – and a vigorous “publicness” movement rallies around the purported liberation of shedding what writers like Jeff Jarvis (in his 2011 book “Public Parts”) say are old-fashioned inhibitions. Online Social Networking, e-health, crowdsourcing and new digital economies appear to have shifted some of our societal fundamentals.
However, the past decade has seen a dramatic expansion in the number of countries legislating data protection laws, in response to citizens’ insistence that their privacy is as precious as ever. And consumerised cryptography promises absolute secrecy. Privacy has long stood in opposition to the march of invasive technology: it is the classical immovable object met by an irresistible force.
So how robust is privacy? And will the latest technological revolution finally change privacy forever?
Soaking in information
We live in a connected world. Young people today may have grown tired of hearing what a difference the Internet has made, but a crucial question is whether relatively new networking technologies and sheer connectedness are exerting novel stresses to which social structures have yet to adapt. If “knowledge is power” then the availability of information probably makes individuals today more powerful than at any time in history. Search, maps, Wikipedia, Online Social Networks and 3G are taken for granted. Unlimited deep technical knowledge is available in chat rooms; universities are providing a full gamut of free training via Massive Open Online Courses (MOOCs). The Internet empowers many to organise in ways that are unprecedented, for political, social or business ends. Entirely new business models have emerged in the past decade, and there are indications that political models are changing too.
Most mainstream observers still tend to talk about the “digital” economy but many think the time has come to drop the qualifier. Important services and products are, of course, becoming inherently digital, and whole business categories such as travel, newspapers, music, photography and video have been massively disrupted. In general, information is the lifeblood of most businesses. There are countless technology billionaires whose fortunes have been made in industries that did not exist twenty or thirty years ago. Moreover, some of these businesses only have one asset: information.
Banks and payments systems are getting in on the action, innovating at a hectic pace to keep up with developments in financial services. There is a bewildering array of new alternative currencies like Linden dollars, Facebook Credits and Bitcoins – all of which can be traded for “real” (reserve bank-backed) money in a number of exchanges of varying reputation. At one time it was possible for Entropia Universe gamers to withdraw dollars at ATMs against their virtual bank balances.
New ways to access finance have arisen, such as peer-to-peer lending and crowd funding. Several so-called direct banks in Australia exist without any branch infrastructure. Financial institutions worldwide are desperate to keep up, launching amongst other things virtual branches and services inside Online Social Networks (OSNs) and even virtual worlds. Banks are of course keen not to have too many sales conducted outside the traditional payments system where they make their fees. Even more strategically, banks want to control not just the money but the way the money flows, because it has dawned on them that information about how people spend might be even more valuable than what they spend.
Privacy in an open world
For many of us, on a personal level, real life is a dynamic blend of online and physical experiences. The distinction between digital relationships and flesh-and-blood ones seems increasingly arbitrary; in fact we probably need new words to describe online and offline interactions more subtly, without implying a dichotomy.
Today’s privacy challenges are about more than digital technology: they really stem from the way the world has opened up. The enthusiasm of many for such openness – especially in Online Social Networking – has been taken by some commentators as a sign of deep changes in privacy attitudes. Facebook's Mark Zuckerberg, for instance, said in 2010 that “People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people - and that social norm is just something that has evolved over time”. And yet serious academic investigation of the Internet’s impact on society is (inevitably) still in its infancy. Social norms are constantly evolving but it’s too early to tell if they have reached a new and more permissive steady state. The views of information magnates in this regard should be discounted given their vested interest in their users' promiscuity.
At some level, privacy is about being closed. And curiously for a fundamental human right, the desire to close off parts of our lives is relatively fresh. Arguably it’s even something of a “first world problem”. Formalised privacy appears to be an urban phenomenon, unknown as such to people in villages where everyone knew everyone – and their business. It was only when large numbers of people congregated in cities that they became concerned with privacy. For then they felt the need to structure the way they related to large numbers of people – family, friends, work mates, merchants, professionals and strangers – in multi-layered relationships. So privacy was born of the first industrial revolution. It has taken prosperity and active public interest to create the elaborate mechanisms that protect our personal privacy from day to day and which we take for granted today: the postal services, direct dial telephones, telecommunications regulations, individual bedrooms in large houses, cars in which we can escape for a while, and now of course the mobile handset.
Privacy is about respect and control. Simply put, if someone knows me, then they should respect what they know; they should exercise restraint in how they use that knowledge, and be guided by my wishes. Generally, privacy is not about anonymity or secrecy. Of course, if we live life underground then unqualified privacy can be achieved, yet most of us exist in diverse communities where we actually want others to know a great deal about us. We want merchants to know our shipping address and payment details, healthcare providers to know our intimate details, hotels to know our travel plans and so on. Practical privacy means that personal information is not shared arbitrarily, and that individuals retain control over the tracks of their lives.
Big Data: Big Future
Big Data tools are being applied everywhere, from sifting telephone call records to spot crimes in the planning, to DNA and medical research. Every day, retailers use sophisticated data analytics to mine customer data, ostensibly to better uncover true buyer sentiments and continuously improve their offerings. Some department stores are interested in predicting such major life changing events as moving house or falling pregnant, because then they can target whole categories of products to their loyal customers.
Real time Big Data will become embedded in our daily lives, through several concurrent developments. Firstly, computing power, storage capacity and high speed Internet connectivity all continue to improve at exponential rates. Secondly, there are more and more “signals” for data miners to choose from. No longer do you have to consciously tell your OSN what you like or what you’re doing, because new augmented reality devices are automatically collecting audio, video and locational data, and trading it around a complex web of digital service providers. And miniaturisation is leading to a whole range of smart appliances, smart cars and even smart clothes with built-in, ubiquitous computing.
The privacy risks are obvious, and yet the benefits are huge. So how should we think about the balance in order to optimise the outcome? Let’s remember that information powers the new digital economy, and the business models of many major new brands like Facebook, Twitter, Foursquare and Google incorporate a bargain for Personal Information. We obtain fantastic services from these businesses “for free” but in reality they are enabled by all that information we give out as we search, browse, like, friend, tag, tweet and buy.
The more innovation we see ahead, the more certain it seems that data will be the core asset of cyber enterprises. To retain and even improve our privacy in the unfolding digital world, we must be able to visualise the data flows that we’re engaged in, evaluate what we get in return for our information, and determine a reasonable trade of costs and benefits.
Is Privacy Dead? If the same rhetorical question needs to be asked over and over for decades, then it’s likely the answer is no.
Biometrics seems to be going gangbusters in the developing world. I fear we're seeing a new wave of technological imperialism. In this post I will examine whether the biometrics field is mature enough for the lofty social goal of empowering the world's poor and disadvantaged with "identity".
The independent Center for Global Development has released a report "Identification for Development: The Biometrics Revolution" which looks at 160 different identity programs using biometric technologies. By and large, it's a study of the vital social benefits to poor and disadvantaged peoples when they gain an official identity and are able to participate more fully in their countries and their markets.
The CGD report covers some of the kinks in how biometrics work in the real world, like the fact that a minority of people cannot enroll at all, and subsequently need to be treated carefully and fairly. But I feel the report takes biometric technology for granted. In contrast, independent experts have shown there is insufficient science for biometric performance to be predicted in the field. I conclude biometrics are not ready to support such major public policy initiatives as ID systems.
The state of the science of biometrics
I recently came across a weighty assessment of the science of biometrics presented by one of the gurus, Jim Wayman, and his colleagues to the NIST IBPC 2010 biometric testing conference. The paper entitled "Fundamental issues in biometric performance testing: A modern statistical and philosophical framework for uncertainty assessment" should be required reading for all biometrics planners and pundits.
Here are some important extracts:
[Technology] testing on artificial or simulated databases tells us only about the performance of a software package on that data. There is nothing in a technology test that can validate the simulated data as a proxy for the “real world”, beyond a comparison to the real world data actually available. In other words, technology testing on simulated data cannot logically serve as a proxy for software performance over large, unseen, operational datasets. [p15, emphasis added].
In a scenario test, [False Non Match Rate and False Match Rate] are given as rates averaged over total transactions. The transactions often involve multiple data samples taken of multiple persons at multiple times. So influence quantities extend to sampling conditions, persons sampled and time of sampling. These quantities are not repeatable across tests in the same lab or across labs, so measurands will be neither repeatable nor reproducible. We lack metrics for assessing the expected variability of these quantities between tests and models for converting that variability to uncertainty in measurands. [p17].
To explain, a biometric "technology test" is when a software package is exercised on a standardised data set, usually in a bake-off such as NIST's own biometric performance tests over the years. And a "scenario test" is when the biometric system is tested in the lab using actual test subjects. The meaning of the two dense sentences emphasised by me in the extracts is: technology test results from one data set do not predict performance on any other data set or scenario, and biometrics practitioners still have no way to predict the accuracy of their solutions in the real world.
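To make the terminology concrete, here is a minimal sketch (mine, not the paper's) of how a False Non Match Rate and False Match Rate are computed from matcher scores at a decision threshold. The scores and threshold below are invented, which is precisely Wayman's point: the resulting rates are properties of the particular sample tested, not of the algorithm itself.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (FNMR, FMR) for a score-based matcher, where a higher
    score means a stronger claimed match."""
    # False non-match: a same-person comparison falls below the threshold.
    fnm = sum(1 for s in genuine_scores if s < threshold)
    # False match: a different-person comparison reaches the threshold.
    fm = sum(1 for s in impostor_scores if s >= threshold)
    return fnm / len(genuine_scores), fm / len(impostor_scores)

# Hypothetical comparison scores from one small test population.
genuine = [0.91, 0.88, 0.55, 0.97, 0.86]   # same-person comparisons
impostor = [0.12, 0.35, 0.72, 0.08, 0.20]  # different-person comparisons

fnmr, fmr = error_rates(genuine, impostor, threshold=0.7)
print(f"FNMR = {fnmr:.0%}, FMR = {fmr:.0%}")  # FNMR = 20%, FMR = 20%
```

Swap in a different set of subjects, sampling conditions or threshold and both rates change, which is why the quoted passage insists such figures are neither repeatable nor reproducible across tests.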
The authors go on:
[To] report false match and false non-match performance metrics for [iris and face recognition] without reporting on the percentage of data subjects wearing contact lenses, the period of time between collection of the compared image sets, the commercial systems used in the collection process, pupil dilation, and lighting direction is to report "nothing at all". [pp17-18].
And they conclude, amongst other things:
[False positive and false negative] measurements have historically proved to be neither reproducible nor repeatable except in very limited cases of repeated execution of the same software package against a static database on the same equipment. Accordingly, "technology" test metrics have not aligned well with "scenario" test metrics, which have in turn failed to adequately predict field performance. [p22].
The limitations of biometric testing have repeatedly been stressed by no less an authority than the US FBI. In their State-of-the-Art Biometric Excellence Roadmap (SABER) Report, the FBI cautions that:
For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application. [p4.10]
The SABER report also highlighted a widespread weakness in biometric testing, namely that accuracy measurements usually only look at accidental errors:
The intentional spoofing or manipulation of biometrics invalidates the “zero effort imposter” assumption commonly used in performance evaluations. When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different. [p1.4]
A few years ago, the Future of Identity in the Information Society Consortium ("FIDIS", a research network funded by the European Community’s Sixth Framework Program) wrote a major report on forensics and identity systems. FIDIS looked at the spoofability of many biometrics modalities in great detail (pp 28-69). These experts concluded:
Concluding, it is evident that the current state of the art of biometric devices leaves much to be desired. A major deficit in the security that the devices offer is the absence of effective liveness detection. At this time, the devices tested require human supervision to be sure that no fake biometric is used to pass the system. This, however, negates some of the benefits these technologies potentially offer, such as high-throughput automated access control and remote authentication. [p69]
Biometrics in public policy
To me, this is an appalling and astounding state of affairs. The prevailing public understanding of how these technologies work is utopian, based probably on nothing more than science fiction movies and the myth of biometric uniqueness. In stark contrast, scientists warn there is no telling how biometrics will work in the field, and the FBI warns that bench testing doesn't predict resistance to attack. It's very much like the manufacturer of a safe confessing to a bank manager that they don't know how it would stand up in an actual burglary.
This situation has bedeviled enterprise and financial services security for years. Without anyone admitting it, it's possible that the slow uptake of biometrics in retail and banking (save for Japan and their odd hand vein ATMs) is a result of hard headed security officers backing off when they look deep into the tech. But biometrics is going gangbusters in the developing world, with vendors thrilling to this much bigger and faster moving market.
The stakes are so very high in national ID systems, especially in the developing world, where resistance to their introduction is relatively low, for various reasons. I'm afraid there is great potential for technological imperialism, given the historical opacity of this industry and its reluctance to engage with the issues.
To be sure vendors are not taking unfair advantage of the developing world ID market, they need to answer some questions:
- Firstly, how do they respond to Jim Wayman, the FIDIS Consortium and the FBI? Is it possible to predict how fingerprint readers, face recognition and iris scanners are going to operate, over years and years, in remote and rural areas?
- In particular, how good is liveness detection? Can these solutions be trusted in unattended operation for such critical missions as e-voting?
- What contingency plans are in place for biometric ID theft? Can the biometric be cancelled and reissued if compromised? Wouldn't it be catastrophic for the newly empowered identity holder to find themselves cut out of the system if their biometric can no longer be trusted?
It's an urgent, impatient sort of line in the sand, drawn by the new digital masters of the universe as a challenge to everyone else. C'mon, get with the program! Innovate! Don't be so precious - so very 20th century! Don't you dig that Information Wants To Be Free? Clearly, old fashioned privacy is holding us back!
The stark choice posited between privacy and digital liberation is rarely examined with much diligence; often it's actually a fatalistic response to the latest breach or the latest eye popping digital development. In fact, those who earnestly assert that privacy is dead are almost always trying to sell us something, be it a political ideology, or a social networking prospectus, or sneakers targeted at an ultra-connected, geolocated, behaviorally qualified nano market segment.
Is it really too late for privacy? Is the genie out of the bottle? Even if we accepted the ridiculous premise that privacy is at odds with progress, no it's not too late, firstly because the pessimism (or commercial opportunism) generally confuses secrecy for privacy, and secondly because frankly, we ain't seen nothin' yet!
Technology certainly has laid us bare. Behavioural modeling, facial recognition, Big Data mining, natural language processing and so on have given corporations x-ray vision into our digital lives. While exhibitionism has been cultivated and normalised by the infomopolists, even the most guarded social network users may be defiled by Big Data wizards who without consent upload their contact lists, pore over their photo albums, and mine their shopping histories, as is their wanton business model.
So yes, a great deal about us has leaked out into what some see as an extended public domain. And yet we can be public and retain our privacy at the same time.
Some people seem defeated by privacy's definitional difficulties, yet information privacy is simply framed, and corresponding data protection laws readily understood. Information privacy is basically a state where those who know us are restrained in what they can do with the knowledge they have about us. Privacy is about respect, and protecting individuals against exploitation. It is not about secrecy or even anonymity. There are few cases where ordinary people really want to be anonymous. We actually want businesses to know -- within limits -- who we are, where we are, what we've done, what we like, but we want them to respect what they know, to not share it with others, and to not take advantage of it in unexpected ways. Privacy means that organisations behave as though it's a privilege to know us.
Many have come to see privacy as literally a battleground. The grassroots Cryptoparty movement has come together around a belief that privacy means hiding from the establishment. Cryptoparties teach participants how to use Tor and PGP, and spread a message of resistance. They take inspiration from the Arab Spring where encryption has of course been vital for the security of protestors and organisers. The one Cryptoparty I've attended so far in Sydney opened with tributes from Anonymous, and a number of recorded talks by activists who ranged across a spectrum of social and technosocial issues like censorship, copyright, national security and Occupy. I appreciate where they're coming from, for the establishment has always overplayed its security hand. Even traditionally moderate Western countries have governments charging like china shop bulls into web filtering and ISP data retention, all in the name of a poorly characterised terrorist threat. When governments show little sympathy for netizenship, and absolutely no understanding of how the web works, it's unsurprising that sections of society take up digital arms in response.
Ironically, when registering for a cryptoparty, you could not use encryption! For privacy, you have to either trust Eventbrite to have a reasonable policy and to stick to it, or rely on government regulations, if applicable. When registering, you give a little Personal Information to the organisers, and you expect that they will be restrained in what they do with it.
Going out in public never was a license for others to invade our privacy. We ought not to respond to online privacy invasions as if cyberspace is a new Wild West. We have always relied on regulatory systems of consumer protection to curb the excesses of business and government, and we should insist on the same in the digital age. We should not have to hide away if privacy is agreed to mean respecting the PII of customers, users and citizens, and restraining what data custodians do with that precious resource.
I ask anyone who thinks it's too late to reassert our privacy to think for a minute about where we're heading. We're still in the early days of the social web, and the information "innovators" have really only just begun. Look at what they've done so far:
- Big Data. The most notorious recent example of the power of data mining comes from Target's covert research into identifying customers who are pregnant based on their buying habits. Big Data practitioners are so enamoured with their ability to extract secrets from "public" data they seem blithely unaware that by generating fresh PII from their raw materials they are in fact collecting it as far as Information Privacy Law is concerned. As such, they’re legally liable for the privacy compliance of their cleverly synthesised data, just as if they had expressly gathered it all by questionnaire.
As an aside, I'm not one of those who fret that technology has outstripped privacy law. Principles-based Information Privacy law copes well with most of this technology. OECD privacy principles (enacted in over seventy countries) and the US FIPPs require that companies be transparent about what PII they collect and why, and that they limit the ways in which PII is used for unrelated purposes, and how it may be disclosed. These principles are decades old and yet they have recently been re-affirmed by German regulators over Facebook's surreptitious use of facial recognition. I expect that Siri will attract like scrutiny as it rolls out in continental Europe.
So what's next?
- Google Glass may, in the privacy stakes, surpass both Siri and facial recognition of static photos. If actions speak louder than words, imagine the value to Google of digitising and knowing exactly what we do in real time.
- Facial recognition as a Service and the sale of biometric templates may be tempting for the photo sharing sites. If and when biometric authentication spreads into retail payments and mobile device security, these systems will face the challenge of enrollment. It might be attractive to share face templates previously collected by Facebook and voice prints by Apple.
So, is it really too late for privacy? The infomopolists and national security zealots may hope so, but surely even cynics will see there is a great deal at stake, and that it might be just a little too soon to rush to judge something as important as this.
In information security we've been saddled for years with the tacit assumption that deep down we each have one "true" identity, and that the best way to resolve rights and responsibilities is to render that identity as unique. This "singular identity" paradigm has had a profound and unhelpful influence on security and its sub-disciplines like authentication, PKI, biometrics and federated identity management.
Federated Identity is basically a sort of mash-up of the things that are known about us in different contexts. When describing federated identity, its proponents often point out how drivers licences are presented to boot-strap a new relationship. But it is a category error to abstract this case as an example of Federated ID, because while a licence might prove your identity when joining a video store, it does not persist in that relationship. Instead the individual is given a new identity: that of a video store member.
A less trivial example is your identity as an employee. When you sign on, HR might sight your driver licence to make sure they get your legal name correct. But thereafter you carry a company ID badge - your identity in that context. You do not present your driver licence to get in the door at work.
Federated Identity posits, often implicitly, that we only really need one identity. The "Identity 2.0" movement properly stresses the multiplicity of our relationships but it usually seeks to hang all relationships off one ID. The beguiling yet utopian OSCON2005 presentation by Dick Hardt shows vividly how many ways there are to be known (although Hardt went a step too far when he tried to create a single, albeit fuzzy, uber identity transcending all contexts).
I favor an alternate view - that each of us actually exercises a portfolio of separate identities and that we switch between them in different contexts. This is not an academic distinction; it really makes a big difference where you draw the line on how much you need to know to set a unique identity.
I am an authorised signatory to my company's corporate bank account. I happen to hold my personal bank account at the same institution, and thus I have two different key cards from the same bank. Technically, when I bank on behalf of my company, I exercise a different identity than when I bank for myself, even if I am in the same branch or at the same ATM. There is no "federation" between my corporate and personal identities; it is not even sensible to think in terms of my personal identity "plus" my corporate attributes when I am conducting business banking. After all, so much corporate law concerns separating the identity of a company's people from the company itself. And I think this is more than a technicality too because I truly feel like a different person when I'm conducting Lockstep banking compared to personal banking. I think it's because I am two different people.
Kim Cameron's seminal Laws of Identity deliberately promoted the plurality of identity. Cameron included a fresh definition of digital identity as "a set of claims made by one digital subject about itself or another digital subject". He knew that this relativist definition might be unfamiliar, admitting that it "does not jive with some widely held beliefs - for example that within a given context, identities have to be unique".
That "widely held belief" seems to be a special product of the computer age. Before the advent of "Identity Management", we lived happily in a world of plural identities. Each of us could be by turns a citizen, an employee, a chartered professional, a customer, a bank account holder, a credit cardholder, a patient, a club member, another club official, and so on. It was seemingly only after we started getting computer accounts that it occurred to people to think in terms of one "primary" identity threading a number of secondary roles. Conventional Access Control insists on a singular authentication of who I am, followed by multiple authorisations of what I am entitled to do. This principle was laid down by computer scientists in the 1970s.
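The contrast between the two paradigms can be sketched in a few lines of code. This is an illustration of my own, with invented names and credentials: in the conventional model one authenticated identity carries every entitlement, whereas in the plural model each context issues its own credential and access checks never cross contexts.

```python
# Conventional access control: authenticate one "true" identity,
# then authorise each action against that identity's entitlements.
ENTITLEMENTS = {"jane": {"open_door", "approve_payroll"}}

def singular_access(user, action):
    """One identity, many authorisations."""
    return action in ENTITLEMENTS.get(user, set())

# Plural-identity model: each context issues a separate credential,
# and there is no master identity linking them together.
CREDENTIALS = {
    ("acme_hr", "employee_40321"): {"open_door"},
    ("bank", "corporate_signatory_7"): {"approve_payment"},
    ("bank", "personal_account_2"): {"withdraw"},
}

def plural_access(context, credential, action):
    """Authorisation is scoped to the context that issued the identity."""
    return action in CREDENTIALS.get((context, credential), set())

print(singular_access("jane", "open_door"))                        # True
print(plural_access("bank", "corporate_signatory_7", "withdraw"))  # False
```

In the plural sketch, the same flesh-and-blood person may hold both bank credentials, yet the corporate signatory identity cannot withdraw from the personal account: the system never needs to know the two identities coincide, which is exactly the privacy property argued for above.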
The idea that we need to establish a true identity before granting access to particular services is unhelpful to many modern online services. Consider the importance of confidentiality in "apomediation" (where people seek medical information from non technical but "expert" patients) and online psychological counselling. Few will enrol in these important new patient-managed healthcare services if they have to identify themselves rather than use an alias. Instead, participants in medical social networking will feel strongly that their avatars' identities in and of themselves are real. Likewise, in virtual worlds and in role playing online games, it's conventional wisdom that participants can adopt distinctly different personae compared to their workaday identities.
Despite the efforts of Kim Cameron and others, and despite the all-too-familiar experience of exercising a range of IDs, the singular identity paradigm has proved hard to shake. In defiance of the plurality that features in the Laws of Identity, most federated identity formulations actually reuse identities across totally unrelated contexts, in order to conveniently hang multiple roles off the one identity.
The old paradigm also explains the surprisingly easy acceptance of biometrics. The very idea of biometric authentication plays straight into the world view that each user has one "true" identity. Yet these technologies are deeply problematic; in practice their accuracy is disappointing; worse, in the event a biometric is ever stolen, it's impossible with any of today's solutions to cancel and re-issue the identity. Biometrics' overwhelming intuitive appeal must be based on an idea that what matters in all transactions is the biological person. But it's not. In most real world transactions, the role is all that matters. Only rarely (such as when investigating fraud) do we go to the forensic extreme of knowing the person.
There are grave risks if we insist on the individual being bodily involved in routine transactions. It would make everything intrinsically linked, violating inherently and irreversibly the most fundamental privacy principle: Don't collect personal information when it's not required.
Why are so many people willing to embrace biometrics in spite of their risks and imperfections? It may be because we've been inadvertently seduced by the idea of a single identity.
Yet another breathless report crossed my desk via Twitter this morning, in which the rise of mobile payments is predicted to lead to cards and cash "disappearing", in this case by 2020. Notably, this hyperventilation comes not from a tech vendor but instead from a "research" company.
So I started to wonder why the success of mobile payments (or any other disruptive technology) is so often framed in terms of winner-take-all. Surely we can imagine new payments modalities being super successful without having to see plastic cards and cash disappear? It might just be that press releases and Twitter tend towards polar language. More likely, and not unrelatedly, it's because a lot of people really think this way.
It's especially ironic given how the term "ecosystem" tops most Buzzword Bingo cards these days. If commentators were to actually think ecologically for a minute they'd realise that the extinction of a Family or Order at the hands of another is very rare indeed.
Once again, in relation to charges levelled against their own, politicians have claimed that like everyone else, they deserve the presumption of innocence. But the old saw "innocent until proven guilty" is no universal human right. It is merely a corollary of the 18th century Blackstone's Formulation: "Better that ten guilty persons escape than that one innocent suffer".
For persons in positions of trust -- politicians, police officers, customs officers, judges and so on -- different calculations apply. The community cuts public officers less slack, because the consequences of their misconduct are far reaching. When only one bad apple can spoil the barrel, Blackstone's Formulation patently does not apply. It is probably better that 10 innocent politicians (or police officers or airport baggage handlers) lose their jobs than for one wrongdoer to stay in place.
If politicians agree to be held to higher standards than members of the public, then as part of the bargain, they cede the presumption of innocence.
These days it’s common to hear the modest disclaimer that there are some questions science can’t answer. I most recently came across such a show of humility from Dr John Kirk, speaking on ABC Radio National’s Ockham’s Razor. Kirk says that “science cannot adjudicate between theism and atheism” and insists that science cannot bridge the divide between physics and metaphysics. Yet surely the long history of science shows that divide is not hard and fast.
Science is not merely about the particular answers; it’s about the steady campaign on all that is knowable.
Science demystifies. Way before having all the detailed answers, each fresh scientific wave works to banish the mysterious, that which previously lay beyond human comprehension.
Textbook examples are legion where new sciences have rendered previously fearsome phenomena as firstly explicable and then often manageable: astronomy, physiology, meteorology, sedimentology, seismology, microbiology, psychology and neurology, to name a few.
It's sometimes said that in science, the questions matter more than the answers. Good scientists ask good questions, but great ones show where there is no question anymore.
Once something profound is no longer beyond understanding, that awareness permeates society. Each wave of scientific advance is usually signaled by new technologies, but more vital to the human condition is that science gives us confidence. In an enlightened society, those with no scientific training at all still appreciate that science gets how the world works. Over time this tacit rational confidence has energised modernity, supplanting astrologers, shamans, witch doctors, and even the churches. Laypeople may not know how televisions work, nor nuclear medicine, semiconductors, anaesthetics, antibiotics or fibre optics, but they sure know it’s not by magic.
The arc of science parts mystery’s curtain. Contrary to John Kirk's partitions, science frequently renders the metaphysical as natural and empirically knowable. My favorite example: To the pre-Copernican mind, the Sun was perfect and ethereal, but when Galileo trained his new telescope upon it, he saw spots. These imperfections were shocking enough, but the real paradigm shift came when Galileo observed the sunspots to move across the face, disappear and then return hours later on the other limb. Thus the Sun was shown―in what must have truly been a heart-stopping epiphany―to be a sphere turning on its axis: geometric, humble, altogether of this world, and very reasonably the centre of a solar system as Copernicus had reasoned a few decades earlier. This was science exercising its most profound power, titrating the metaphysical.
An even more dramatic turn was Darwin's discovery that all the world’s living complexity was explicable without god. He thus dispelled teleology (the search for ultimate reason). He not only neutralised the Argument from Design for the existence of god, but also the very need for god. The deepest lesson of Darwinism is that there is simply no need to ask "What am I doing here?", because the wondrous complexity of all of biology, including humanity's own existence, is seen to have arisen through natural selection, without a designer and, moreover, without a reason. Darwin himself felt keenly the gravity of this outcome and what it would mean to his deeply religious wife, and for that reason he kept his work secret for so long. It seems philosophers appreciate the deep lessons of Darwinism more than our modest scientists: Karl Marx saw that evolution “deals the death-blow to teleology” and Friedrich Nietzsche claimed “God is dead ... we have killed him”.
So why shouldn’t we expect science to continue? Why should we doubt ― or perhaps fear ― its power to remove all mystery? Of course many remaining riddles are very hard indeed, and I know there’s no guarantee science will be able to solve them. But I don't see the logic of rejecting the possibility that it will. Some physicists feel they’re homing in on why the physical constants have their special values. And many cognitive scientists and philosophers of the mind suspect a theory of consciousness is within reach. I’m not saying anyone really gets consciousness yet, but surely most would agree that it no longer feels like a total enigma.
Science is more than the books it produces. It’s the power to keep writing new ones.
Reference: John Kirk, “Why is science such a worry?”, Ockham's Razor, ABC Radio National, 18 December 2011. http://www.abc.net.au/radionational/programs/ockhamsrazor/ockham27s-razor-18-december-2011/3725968
Journalist Farhad Manjoo at Slate recently lampooned the privacy interests of Facebook users, quipping sarcastically that "the very idea of making Facebook a more private place borders on the oxymoronic, a bit like expecting modesty at a strip club". Funny.
A stripper might seem the archetype of promiscuity but she has a great deal of control over what's going on. There are strict limits to what she does and moreover, what others including the club are allowed to do to her. Strip club customers are banned from taking photos and exploiting the actors' exuberance, and only the most unscrupulous club would itself take advantage of the show for secondary purposes.
Facebook offers no such protection to their own members.
While people do need to be prudent on the Internet, the real privacy problem with Facebook is not the promiscuity of some of its members, but the blatant and boundless way that it pirates personal information. Regardless of the privacy settings, Facebook reserves all rights to do anything it likes with PI, behind the backs of even its most reserved users. That is the fundamental and persistent privacy breach. It's obscene.
Update 5 Dec 2011
Farhad Manjoo took me to task on Twitter and the Slate site [though his comments at Slate have since disappeared] saying I misunderstood the strip club analogy. He said what he really meant was propriety, not modesty: visitors to strip clubs shouldn't expect propriety and Facebook users shouldn't expect privacy. But I don't see how refining the metaphor makes his point any clearer or, to be frank, any less odious. I haven't been to a lot of strip clubs, but I think that their patrons know pretty much what to expect. Facebook on the other hand is deceptive (and has been officially determined to be so by the FTC). Strip clubs are overt; Facebook is tricky.
Some of us -- including both Manjoo and me -- have realised that everything Facebook does is calculated to extract commercial value from the Personal Information it collects and creates. But I don't belittle Facebook's users for falling for the trickery.
I'm going to follow my own advice and not accept the premise of Google's and Facebook's Real Names policy that it is somehow good for quality. My main rebuttal of Real Names is that it's a commercial tactic, not a well-grounded social policy.
But here are a few other points I would make if I did want to argue the merits of anonymity - a quality and basic right I honestly thought was unimpeachable!
Nothing to hide? Puhlease!
Much of the case for Real Names riffs on the tired old 'nothing to hide' argument. This tough-love kind of view that respectable people should not be precious about privacy tends to be the preserve of middle class, middle aged white men who through accident of birth have never personally experienced persecution, or had grounds to fear it.
I wish more of the privileged captains of the Internet could imagine that expressing one's political or religious views (for example) brings personal risks to many of the dispossessed or disadvantaged in the world. And as Identity Woman points out, we're not just talking about resistance fighters in the Middle East but also women in 21st century America who are pilloried for challenging the sexist status quo!
Some have argued that people who fear for their own safety should take their networking offline. That's an awfully harsh perpetuation of the digital divide. I don't deny that there are other ways for evil states to track us down online, and that using pseudonyms is no guarantee of safety. The Internet is indeed a risky place for conducting resistance for those who have mortal fears of surveillance. But ask the people who recently rose up on the back of social media if the risks were worth it, and the answer will be yes. Now ask them if the balance changes under a Real Names policy. And who benefits?
Some of the Internet metaphors are so bad they’re not even wrong
Some continue to compare the Internet with a "public square" and suggest there should be no expectation of privacy. In response, I note first of all that the public-private dichotomy is a red herring. Information privacy law is about controlling the flow of Personally Identifiable Information. Most privacy law doesn't care whether PII has come from the public domain or not: corporations and governments are not allowed to exploit PII harvested without consent.
Let's remember the standard set piece of spy movies where agents retreat to busy squares to have their most secret conversations. One's everyday activities in "public" are actually protected in many ways by the nature of the traditional social medium. Our voices don't carry far, and we can see who we're talking to. Our disclosures are limited to the people in our vicinity, we can whisper or use body language to obfuscate our messages, there is no retention of our PII, and so on. These protections are shattered by information technologies.
If Google's and Facebook's call for the end of anonymity were to extend to public squares, we'd be talking about installing CCTVs, tattooing people's names on their foreheads, recording everyone's comings and goings, and providing those records to any old private company to make whatever commercial use they see fit.
Medical OSN apartheid
What about medical social networking, one of the next frontiers for patient-centric care, especially in mental health? Are patients supposed to use their real names for "transparency" and "integrity"? Of course not: studies show that participation in healthcare in general depends on privacy, and many patients decline to seek treatment if they fear they will be exposed.
Now, Real Names advocates would no doubt seek to make medical OSNs a special case, but that would imply an expectation that all healthcare discussions be taken out of regular social circles. That's just not how real-life socialising occurs.
Anonymity != criminality
There's a recurring angle that anonymity is somehow unlawful or unscrupulous. This attitude is based more on guesswork than criminology. If there were serious statistics on crime being aided and abetted by anonymity then we could debate this point, but there aren't. All we have are wild pronouncements like Eugene Kaspersky's call for an Internet Passport. It seems to me that a great deal of crime is enabled by having too much identity online. It's ludicrous that I should hand over so much Personal Information to establish my bona fides in silly little transactions, when we all know that data is being hoovered up and used behind our backs by identity thieves.
And the idea that OSNs have crime prevention at heart when they force us to use "real names" is a little disingenuous when their response to bullying, child pornography, paedophilia and so on has for so long been characterised by keeping themselves at a cool distance.
What’s real anyway?
What’s so real about "real names" anyway? It's not as if Google or Facebook can check them (in fact, when it suited their purposes, the OSNs previously disclaimed any ability to verify names).
But more to the point, given names are arbitrary. It's perfectly normal for people growing up not to "identify with" the names their parents picked for them (or indeed not to identify with their parents at all). We all put some distance between our adult selves and our childhoods. A given family name is no more real in any social sense than any other handle we choose for ourselves.
In a favorite West Wing episode, the press secretary advises VP running mate Leo McGarry that he doesn't have to "accept the premise of the question". Let's remember this when engaging with the self-appointed social scientists and public policy makers at Google, Facebook et al who insist we use "real names" on the Internet.
It's terrific that Google’s Real Names policy has been soundly rebutted so widely, with earnest and worthy defences of the right to anonymity. I especially like the posts by Identity Woman, danah boyd, and Alexis Madrigal at The Atlantic, who compellingly relates how his own position shifted on the questions as he thought them through.
But at the same time I am disappointed so many defenders of freedom have been drawn into arguing the pros and cons of "transparency". The Namesake infographic (which dates from May, before the Real Names furore broke out, and was reprised by Mashable last week) dumbs down the debate by accepting it as a fight between extremes. Frustratingly, it grants legitimacy to Zuckerberg’s mad idea that having two identities shows a lack of integrity.
As an aside, using the label "transparency" sub-textually reframes identity with a pro-Real Names bias, especially when juxtaposed against "anonymity" which sounds shady. Is it really fair to call it "transparency" when forcing people to reveal more than is necessary about themselves when they’re socialising?
This issue is really not about transparency at all. Let’s say loud and clear: the Real Names policies of Facebook and Google+ are self-serving commercial tactics intended to maximise the commercial value of their networked stores of Personal Information.
Obviously these informopolies add more value to their network data when they can index it with precision. The use of multiple personae disaggregates the metadata held by OSNs and reduces its value to advertisers and all other PI pirates. In fact, reserving the right for individuals to disaggregate their PI is one of the cornerstones of information privacy. Thus in Australia we forbid businesses from reusing government-issued identifiers like Medicare numbers and driver licence numbers.
We should not accept the premise that a Real Names policy serves any user-positive purpose, like "transparency", or that it forces better integrity in how people conduct themselves socially. The idea that bloggers are less than honest when not named is, ironically, utterly devoid of social nuance. At every turn, we instinctively compartmentalise our personae, revealing what matters when we interact in different circles – home, work, social, medical – and instinctively holding back what doesn't.
"Online Social Networks" should not seek to change the way we socialise.
We must not allow gurus like Zuckerberg to get away with self-serving philosophies like 'we all have one true identity'. He really has no deep insights into the human condition. What he has is a mind-boggling personal fortune based entirely on knowledge about people, harvested under largely false pretences, and which is diluted when those people are allowed to name themselves socially as they do in real life.