Lockstep

Mobile: +61 (0) 414 488 851
Email: swilson@lockstep.com.au

Cyber Security Summit - Part 2

This is Part 2 of my coverage of the White House #CyberSecuritySummit; see Part 1 here.

On February 13th, at President Obama's request, a good number of the upper echelon of Internet experts gathered at Stanford University in Silicon Valley to work out what to do next about cybersecurity and consumer protection online. The Cyber Security Summit was organised around Obama's signing of a new Executive Order to create cyber threat information sharing hubs and standards that foster sharing while protecting privacy, and it was meant to maintain the momentum of his cybersecurity and privacy legislative program.

The main session of the summit traversed very few technical security issues. The dominant theme was intelligence sharing: how can business and government share what they know in real time about vulnerabilities and emerging cyber attacks? Just a couple of speakers made good points about preventative measures. Intel President Renee James highlighted the importance of a "baseline of computing security"; MasterCard CEO Ajay Banga was eloquent on how innovation can thrive in a safety-oriented regulated environment like road infrastructure and road rules. Apart from these few deviations, the summit had a distinct military intelligence vibe, in keeping with the cyber warfare trope beloved by politicians.

On the one hand, it would be naive to expect such an event to make actual progress. And I don't mind a political showcase if it secures the commitment of influencers and builds awareness. But on the other hand, the root causes of our cybersecurity dilemma have been well known for years, and this esteemed gathering seemed oblivious to them.

Where's the serious talk of preventing cyber security problems? Where is the attention to making e-business platforms and digital economy infostructure more robust?

Personal Information today is like nitroglycerin - it has to be handled with the utmost care, lest it blow up in your face. So we have the elaborate and brittle measures of PCI-DSS or the HIPAA security rules, rendered useless by the slightest operational slip-up.

How about rendering personal information safer online, so it cannot be stolen, co-opted, modified and replayed? If stolen information couldn't be used by identity thieves with impunity, we would neutralise the bulk of today's cybercrime. This is how EMV Chip & PIN payment security works. Personal data and purchase details are combined in a secure chip and digitally signed under the customer's control, to prove to the merchant that the transaction was genuine. The signed transaction data cannot be easily hacked (thanks Jim Fenton for the comment; see below); stolen identity data is useless to a thief if they don't control the chip; a stolen chip is only good for single transactions (and only if the PIN is stolen as well) rather than the mass fraud perpetrated after raiding large databases.
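To make the replay-resistance concrete, here is a toy sketch of the idea, not actual EMV: real cards compute cryptograms (ARQCs) under issuer-derived keys, whereas this stand-in simply uses an HMAC over the transaction details. The key names and values are illustrative only, but the point survives the simplification: without the key sealed in the chip, stolen account data cannot produce a valid transaction, and a captured transaction cannot be replayed with a fresh nonce.

```python
import hashlib
import hmac
import json

# Secret key held only inside the card's secure chip; it never leaves the
# hardware. Real EMV derives per-card keys from an issuer master key --
# this constant is purely a stand-in for illustration.
CHIP_KEY = b"issuer-derived-card-key"

def chip_sign(pan: str, amount: int, nonce: str) -> str:
    """Combine identity and purchase details and 'sign' them inside the chip."""
    txn = json.dumps({"pan": pan, "amount": amount, "nonce": nonce}, sort_keys=True)
    return hmac.new(CHIP_KEY, txn.encode(), hashlib.sha256).hexdigest()

def issuer_verify(pan: str, amount: int, nonce: str, mac: str) -> bool:
    """The issuer recomputes the cryptogram and compares in constant time."""
    txn = json.dumps({"pan": pan, "amount": amount, "nonce": nonce}, sort_keys=True)
    expected = hmac.new(CHIP_KEY, txn.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

mac = chip_sign("5123450000000008", 4250, "terminal-nonce-001")
assert issuer_verify("5123450000000008", 4250, "terminal-nonce-001", mac)

# A thief who altered the amount, or replayed the cryptogram under a
# different single-use terminal nonce, fails verification.
assert not issuer_verify("5123450000000008", 9999, "terminal-nonce-001", mac)
assert not issuer_verify("5123450000000008", 4250, "terminal-nonce-002", mac)
```

Stolen card numbers alone are thus worthless: the fraudster would need the key inside the chip, and even then each cryptogram is good for one transaction only.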

It's obvious (isn't it?) that we need to do something radically different before the Internet of Things turns into a digital cesspool. The good news for privacy and security in ubiquitous computing is that most smart devices can come with Secure Elements and built-in digital signature capability, so that all the data they broadcast can be given pedigree. We should be able to tell for sure that every piece of information flowing in the IoT has come from a genuine device, with definite attributes, operating with the consent of its owner.

The technical building blocks for a properly secure IoT are at hand. Machine-to-Machine (M2M) identity modules (MIMs) and Trusted Execution Environments (TEEs) provide safe key storage and cryptographic functionality. The FIDO Alliance protocols leverage this embedded hardware and enable personal attributes to be exchanged reliably. Only a couple of years ago, Vint Cerf in an RSA Conference keynote speculated that ubiquitous public key cryptography would play a critical role in the Internet of Things, but he didn't know how exactly.
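To illustrate the FIDO-style pattern those building blocks enable, here is a deliberately toy challenge-response sketch. It uses textbook RSA with absurdly small primes purely to show the flow; real authenticators generate ECDSA or similar keys inside the Secure Element or TEE, and those private keys never leave the hardware.

```python
import hashlib
import secrets

# Textbook RSA with toy parameters, only to demonstrate the protocol shape.
# Never use key sizes like this in practice.
p, q = 61, 53
n, e = p * q, 17                     # public key, registered with the server
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, held by the "device"

def device_sign(challenge: bytes) -> int:
    """Sign a server challenge; in real FIDO this happens inside the SE/TEE."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def server_verify(challenge: bytes, sig: int) -> bool:
    """The server holds only the public key (n, e) -- no shared password."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = secrets.token_bytes(16)   # fresh per login: defeats replay
sig = device_sign(challenge)
assert server_verify(challenge, sig)
assert not server_verify(challenge, (sig + 1) % n)  # tampered signature fails
```

Note what the server never sees: no password, no biometric, no secret at all, only a public key and a one-off signed challenge, which is why a breach of the server's database yields nothing replayable.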

In fact, we have known what to do with this technology for years.

At the close of the Cyber Security Summit, President Obama signed his Executive Order -- in ink. The irony of using a pen to sign a cybersecurity order seemed lost on all concerned. And it is truly tragic.

Clinton and Ahern, 1998
In 1998, Bill Clinton and his Irish counterpart Bertie Ahern signed a US-Ireland communiqué on e-commerce. At that time, the two leaders used smartcards. In 2003, Bill Gates espoused the importance of chip technology for authentication.

We probably wouldn't need a cybersecurity summit in 2015 if serious transaction security had been built into the cyber infrastructure over a decade ago.

Posted in Smartcards, Security

Obama's Cybersecurity Summit

The White House Summit on Cybersecurity and Consumer Protection was hosted at Stanford University on Friday February 13. I followed the event from Sydney, via the live webcast.

It would be naive to expect the White House Cybersecurity Summit to have been less political. President Obama and his colleagues were in their comfort zone, talking up America's recent economic turnaround, and framing their recent wins squarely within Silicon Valley where the summit took place. With a few exceptions, the first two hours were more about green energy, jobs and manufacturing than cyber security. It was a lot like a lost episode of The West Wing.

The exceptions were important. Some speakers really nailed important security issues. I especially liked the morning contributions from Intel President Renee James and MasterCard CEO Ajay Banga. James highlighted that Intel has worked for 10 years to improve "the baseline of computing security", making her one of the few speakers to get anywhere near the inherent insecurity of our cyber infrastructure. The shocking truth is that cyberspace is built on terrible foundations; the software development practices and operating systems that carry the economy today were not built for the job. For mine, the Summit was too much about military/intelligence themed information sharing, and not enough about why our systems are such a shambles. I know it's a dry subject but if they're serious about security, policy makers really have to engage with software quality and reliability, instead of thrilling to kids learning to code. Software development practices are to blame for many of our problems; more on software failures here.

Ajay Banga was one of several speakers to urge the end of passwords. He summed up the authentication problem very nicely: "Stop making us remember things in order to prove who we are". He touched on MasterCard's exploration of continuous authentication bracelets and biometrics (more news of which coincidentally came out today). It's important however that policy makers' understanding of digital infrastructure resilience, cybercrime and cyber terrorism isn't skewed by everyone's favourite security topic - customer authentication. It's in need of repair, yet it is not to blame for the vast majority of breaches. Mom and Pop struggle with passwords and they deserve better, but most stolen personal data is lifted by organised criminals en masse from poorly secured back-end databases. Replacing customer passwords or giving everyone biometrics is not going to solve the breach epidemic.

Banga also indicated that the Information Highway should be more like road infrastructure. He highlighted that national routes are regulated, drivers are licensed, there are rules of the road, standardised signs, and enforcement. All these infrastructure arrangements leave plenty of room for innovation in car design, but it's accepted that "all cars have four wheels".

Tim Cook was then the warm-up act before Obama. Many on Twitter unkindly branded Cook's speech as an ad for Apple, paid for by the White House, but I'll accentuate the positives. Cook continues to campaign against business models that monetize personal data. He repeated his promise made after the ApplePay launch that they will not exploit the data they have on their customers. He put privacy before security in everything he said.

Cook painted a vision where digital wallets hold your passport, driver license and other personal documents, under the user's sole control, and without trading security for convenience. I trust that he's got the mobile phone Secure Element in mind; until we can sort out cybersecurity at large, I can't support the counter trend towards cloud-based wallets. The world's strongest banks still can't guarantee to keep credit card numbers safe, so we're hardly ready to put our entire identities in the cloud.

In his speech, President Obama reiterated his recent legislative agenda for information sharing, uniform breach notification, student digital privacy, and a Consumer Privacy Bill of Rights. He stressed the need for private-public partnership and cybersecurity responsibility to be shared between government and business. He confirmed the new Cyber Threat Intelligence Integration Center. And as flagged just before the summit, the president signed an Executive Order that will establish cyber threat information sharing "hubs" and standards to foster sharing while protecting privacy.

Obama told the audience that cybersecurity "is not an ideological issue". Of course that message was actually for Congress which is deliberating over his cyber legislation. But let's take a moment to think about how ideology really does permeate this arena. Three quasi-religious disputes come to mind immediately:

  • Free speech trumps privacy. The ideals of free speech have been interpreted in the US in such a way that makes broad-based privacy law intractable. The US is one of only two major nations now without a general data protection statute (the other is China). It seems this impasse is rarely questioned anymore by either side of the privacy debate, but perhaps the scope of the First Amendment has been allowed to creep out too far, for now free speech rights are in effect being granted even to computers. Look at the controversy over the "Right to be Forgotten" (RTBF), where Google is being asked to remove certain personal search results if they are irrelevant, old and inaccurate. Jimmy Wales claims this requirement harms "our most fundamental rights of expression and privacy". But we're not talking about speech here, or even historical records, but rather the output of a computer algorithm, and a secret algorithm at that, operated in the service of an advertising business. The vociferous attacks on RTBF are very ideological indeed.
  • "Innovation" trumps privacy. It's become an unexamined mantra that digital businesses require unfettered access to information. I don't dispute that some of the world's richest ever men, and some of the world's most powerful ever corporations, have relied upon the raw data that exudes from the Internet. It's just like the riches uncovered by the black gold rush of the 1800s. But it's an ideological jump to extrapolate that all cyber innovation or digital entrepreneurship must continue the same way. Rampant data mining is laying waste to consumer confidence and trust in the Internet. Some reasonable degree of consumer rights regulation seems inevitable, and just, if we are to avert a digital Tragedy of the Commons.
  • National Security trumps privacy. I am a rare privacy advocate who actually agrees that the privacy-security equilibrium needs to be adjusted. I believe the world has changed since some of our foundational values were codified, and civil liberties are just one desirable property of a very complicated social system. However, I call out one dimensional ideology when national security enthusiasts assert that privacy has to take a back seat. There are ways to explore a measured re-calibration of privacy, to maintain proportionality, respect and trust.

President Obama described the modern technological world as a "magnificent cathedral" and he made an appeal to "values embedded in the architecture of the system". We should look critically at whether the values of entrepreneurship, innovation and competitiveness embedded in the way digital business is done in America could be adjusted a little, to help restore the self-control and confidence that consumers keep telling us is evaporating online.

Posted in Trust, Software engineering, Security, Internet

The state of the state: Privacy enters Adolescence

Constellation Research recently launched the "State of Enterprise Technology" series of research reports. The series assesses the current state of the enterprise technologies Constellation considers crucial to digital transformation, and provides snapshots of the future usage and evolution of these technologies. Constellation will continue to publish reports in our State of Enterprise Technology series throughout Q1.

My first contribution to this series, "Privacy Enters Adolescence", focuses on Safety and Privacy. I've looked at data privacy in 2015, and identified seven trends you should be aware of in order to protect your customers' information.

Here's an excerpt from the report:

Digital Safety and Privacy

Constellation's business theme of Digital Safety and Privacy is all about the art and science of maximizing the information assets of a business, including its most important assets – its people. Our research in this theme enables clients to capitalize on cloud, mobility, Big Data and the Internet of Things, without compromising the digital safety of the business, or the privacy and trust of their end users.

Seven Digital Safety and Privacy Trends for 2015


  • Consumers have not given up privacy - they've been tricked out of it. The impression is easily formed that people just don’t care about privacy anymore. Yet there is no proof that privacy is dead. In fact, a robust study of young adults has shown no major difference between them and older people on the importance of privacy.
  • Private sector surveillance is overshadowed by government intrusion, but is arguably just as bad. There is nothing inevitable about private sector surveillance. Consumers are waking up to the fact that digital business models are generating unprecedented fortunes on the back of the personal data they are giving away in loyalty programs, social networks, search, cloud email, and fitness trackers. Most people remain blissfully ignorant of what's being done with all that data, but we see budding signs of resentment from consumers whose every interaction is exploited without their consent.
  • The U.S. is the Canary Islands of privacy. The United States remains the only major economy without broad-based information privacy laws.
  • Privacy is more about politics than technology. Privacy can be seen as a power play between individual rights and the interests of governments and businesses.
  • The land grab for "public" data accelerates. Data is an immensely valuable raw material. More than data mining, Big Data is really about data refining. And unlike the stuff of traditional extraction industries, data seems inexhaustible, and the cost of extraction is near zero. Something akin to land rights for privacy may be the future.
  • Data literacy will be key to digital safety. Computer literacy is one thing, but data literacy is different and less well defined so far. When we go online, we don’t have the familiar social cues, so now we need to develop new ones. And we need to build up a common understanding of how data flows in the digital economy. Data literacy is more than being able to work an operating system, a device and umpteen apps: it means having meaningful mental models of what goes on in computers.
  • Privacy will get worse before it gets better. Privacy is messy, even in jurisdictions where data protection rules are well entrenched. Consider the controversial new Right to Be Forgotten ruling of the European Court of Justice, which resulted in plenty of unintended consequences, and collisions with other jurisprudence, namely the United States' protection of free speech.

My report "Privacy Enters Adolescence" can be downloaded here. It expands on the points above, and sets out recommendations for improving awareness of how personal data flows in the digital economy, negotiating better deals in the data-for-value bargain, and the conduct of Privacy Impact Assessments.

Posted in Social Media, Privacy, Cloud, Big Data

Suspension of Disbelief and digital safety

If the digital economy really is the economy, then it's high time we moved beyond hoping that we can simply train users to be safe online. Is the real economy only for heroes who can protect themselves in the jungle, writing their own code as if they're carrying their own guns? Or do we as a community build structures and standards, and insist on technologies that work for all?

For most people, the World Wide Web experience is still a lot like watching cartoons on TV. The human-machine interface is almost the same. The images and actions are just as synthetic; crucially, nothing on a web browser is real. Almost anything goes -- just as the Roadrunner defies gravity in besting Coyote, there are no laws of physics that temper the way one bit of multimedia leads to the next. Yes, there is a modicum of user feedback in the way we direct some of the action when browsing and e-shopping, but it's quite illusory; for the most part all we're really doing is flicking channels across a billion pages.

It's the suspension of disbelief when browsing that lies at the heart of many of the safety problems we're now seeing. Inevitably we lose our bearings in the totally synthetic World Wide Web. We don't even realise it, we're taken in by a virtual reality, and we become captive to social engineering.

But I don't think it's possible to tackle online safety by merely countering users' credulity. Education is not the silver bullet, because the Internet is really so technologically complex and abstract that it lies beyond the comprehension of most lay people.

Using the Internet 'safely' today requires deep technical skills, comparable to the level of expertise needed to operate an automobile circa 1900. Back then you needed to be able to do all your own mechanics [roughly akin to the mysteries of maintaining anti-virus software], look after the engine [i.e. configure the operating system and firewall], navigate the chaotic emerging road network [there's yet no trusted directory for the Internet, nor any road rules], and even figure out how to fuel the contraption [consumer IT supply chains are about as primitive as the gasoline industry was 100 years ago]. The analogy with the early car industry becomes especially sharp for me when I hear utopian open source proponents argue that writing one's own software is the best way to be safe online.

The Internet is so critical (I'd have thought this was needless to say) that we need ways of working online that don't require us to all be DIY experts.

I wrote a first draft of this blog six years ago, and at that time I called for patience in building digital literacy and sophistication. "It took decades for safe car and road technologies to evolve, and the Internet is still really in its infancy," I said in 2009. But I'm less relaxed about this now, on the brink of the Internet of Things. It's great that policy makers like the US FTC are calling on connected device makers to build in security and privacy, but I suspect the Internet of Things will require the same degree of activist oversight and regulation as the auto industry does, for the sake of public order and the economy. Do we have the appetite to temper breakneck innovation with safety rules?

Posted in Culture, Internet, Security

Consumerization of Authentication

For the second year running, the FIDO Alliance hosted a consumer authentication showcase at CES, the gigantic Consumer Electronics Show in Las Vegas, this year featuring four FIDO Alliance members.

This is a watershed in Internet security and privacy - never before has authentication been a headline consumer issue.

Sure we've all talked about the password problem for ten years or more, but now FIDO Alliance members are doing something about it, with easy-to-use solutions designed specifically for mass adoption.

The FIDO Alliance is designing the authentication plumbing for everything online. They are creating new standards and technical protocols allowing secure personal devices (phones, personal smart keys, wearables, and soon a range of regular appliances) to securely transmit authentication data to cloud services and other devices, in some cases eliminating passwords altogether.

See also my ongoing FIDO Alliance research at Constellation.

Posted in Privacy, Identity, Constellation Research, Security

We cannot pigeon-hole risk

In electronic business, Relying Parties (RPs) need to understand their risks of dealing with the wrong person (say a fraudulent customer or a disgruntled ex-employee), determine what they really need to know about those people in order to help manage risk, and then in many cases, design a registration process for bringing those people into the business fold. With federated identity, the aim is to offload the registration and other overheads onto an Identity Provider (IdP). But evaluating IdPs and forging identity management arrangements has proven to be enormously complex, and the federated identity movement has been looking for ways to streamline and standardize the process.

One approach is to categorise different classes of IdP, matched to different transaction types. "Levels of Assurance" (LOAs) have been loosely standardised by many governments and in some federated identity frameworks, like the Kantara Initiative. The US Authentication Guideline NIST SP 800-63 is one of the preeminent de facto standards, adopted by the National Strategy for Trusted Identities in Cyberspace (NSTIC). But over the years, adoption of SP 800-63 in business has been disappointing, and now NIST has announced a review.

One of my problems with LOAs is simply stated: I don't believe it's possible to pigeon-hole risk.

With risk management, the devil is in the detail. Risk Management standards like ISO 31000 require organisations to start by analysing the threats that are peculiar to their environment. It's folly to take short cuts here, and it's also well recognised that you cannot "outsource" liability.

To my mind, the LOA philosophy goes against risk management fundamentals. Coming up with an LOA rating is an intermediate step: it takes an RP's risk analysis and squeezes it into a bin (losing lots of information as a result), which is then used to shortlist candidate IdPs, before detailed due diligence puts all those risk details back on the table.

I think we all know by now of cases where RPs have looked at candidate IdPs at a given LOA, been less than satisfied with the available offerings, and have felt the need for an intermediate level, something like "LOA two and a half" (this problem was mentioned at CIS 2014 more than once, and I have seen it first hand in the UK IDAP).

Clearly what's going on here is that an RP's idea of "LOA 2" differs from a given IdP's idea of the same LOA 2. This is because everyone's risk appetite and threat profile is different. Moreover, the detailed prescription of "LOA 2" must differ from one identity provider to the next. When an RP thinks they need "LOA 2.5", what they're really asking for is a customised identification. If an off-the-shelf "LOA 2" isn't what it seems, then there can't be any hope for an agreed intermediate LOA 2.5. Even if an IdP and an RP agree in one instance, soon enough we will get a fresh call for "LOA 2.75, please".

We cannot pigeonhole risk. Attaching chunky, one-dimensional Levels of Assurance is misleading. There is no getting away from detailed analysis of the threats, and of the authentication arrangements needed to counter them.
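The information loss is easy to demonstrate with a sketch. The bins and scores below are hypothetical, not from any published LOA scheme; the point is only that two relying parties with quite different threat profiles collapse into the same headline level.

```python
def loa(risk_score: float) -> int:
    """Squeeze a detailed risk score (0-100) into one of four coarse bins."""
    for level, ceiling in enumerate([25, 50, 75, 100], start=1):
        if risk_score <= ceiling:
            return level
    return 4

# Hypothetical RPs with detailed, distinct threat analyses.
bank_rp   = {"remote fraud": 70, "insider misuse": 40}
health_rp = {"record tampering": 55, "snooping": 52}

# Binning on the worst-case score erases everything that distinguishes them.
assert loa(max(bank_rp.values())) == loa(max(health_rp.values())) == 3
```

Both shortlist "LOA 3" identity providers, yet their due diligence questions remain entirely different, which is exactly where the calls for an "LOA 2.5" come from.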

Posted in Security, Identity, Federated Identity

Making cyber safe like cars

This is an updated version of arguments made in Lockstep's submission to the 2009 Cyber Crime Inquiry by the Australian federal government.

In stark contrast to other fields, cyber safety policy is almost exclusively preoccupied with user education. It's really an obsession. Governments and industry groups churn out volumes of well-meaning and technically reasonable security advice, but for the average user, this material is overwhelming. There is a subtle implication that security is for experts, and that the Internet isn't safe unless you go to extremes. Moreover, even if consumers do their very best online, their personal details can still be stolen in massive criminal raids on databases that hardly anyone even knows exist.

Too much onus is put on regular users protecting themselves online, and this blinds us to potential answers to cybercrime. In other walks of life, we accept a balanced approach to safety, and governments are less reluctant to impose standards than they are on the Internet. Road safety for instance rests evenly on enforceable road rules, car technology innovation, certified automotive products, mandatory quality standards, traffic management systems, and driver training and licensing. Education alone would be nearly worthless.

Around cybercrime we have a bizarre allergy to technology. We often hear that 'preventing data breaches is not a technology issue', which may be politically correct but is faintly ridiculous. Nobody would ever say that preventing car crashes is 'not a technology issue'.

Credit card fraud and ID theft in general are in dire need of concerted technological responses. Consider that our Card Not Present (CNP) payments processing arrangements were developed many years ago for mail orders and telephone orders. It was perfectly natural to co-opt the same processes when the Internet arose, since it seemed simply to be just another communications medium. But the Internet turned out to be more than an extra channel: it connects everyone to everything, around the clock.

The Internet has given criminals x-ray vision into peoples' banking details, and perfect digital disguises with which to defraud online merchants. There are now opportunities for crime that are radically different, both quantitatively and qualitatively, from what went before. In particular, because identity data is available by the terabyte and digital systems cannot tell copies from originals, identity takeover is child's play.

You don't even need to have ever shopped online to run foul of CNP fraud. Most stolen credit card numbers are obtained en masse by criminals breaking into obscure backend databases. These attacks go on behind the scenes, out of sight of even the most careful online customers.

So the standard cyber security advice misses the point. Consumers are told earnestly to look out for the "HTTPS" padlock that purportedly marks a site as secure, to have a firewall, to keep their PCs "patched" and their anti-virus up to date, to only shop online at reputable merchants, and to avoid suspicious looking sites (as if cyber criminals aren't sufficiently organised to replicate legitimate sites in their entirety). But none of this advice touches on the problem of coordinated massive heists of identity data.

Merchants are on the hook for unwieldy and increasingly futile security overheads. When a business wishes to accept credit card payments, it's straightforward in the real world to install a piece of bank-approved terminal equipment. But to process credit cards online, shopkeepers have to sign up to onerous PCI-DSS requirements that in effect require even small business owners to become IT security specialists. But to what end? No audit regime will ever stop organised crime. To stem identity theft, we need to make stolen IDs less valuable.

All this points to urgent public policy matters for governments and banks. It is not enough to put the onus on individuals to guard against ad hoc attacks on their credit cards. Systemic changes and technological innovation are needed to render stolen personal data useless to thieves. It's not that the whole payments processing system is broken; rather, it is vulnerable at just one point where stolen digital identities can be abused.

Digital identities are the keys to our personal kingdoms. As such they really need to be treated as seriously as car keys, which have become very high tech indeed. Modern car keys cannot be duplicated at a suburban locksmith. It's possible you've come across office and filing cabinet keys that carry government security certifications. And we never use the same keys for our homes and offices; we wouldn't even consider it (which points to the basic weirdness in Single Sign On and identity federation).

In stark contrast to car keys, almost no attention is paid to the pedigree of digital identities. Technology neutrality has bred a bewildering array of ad hoc authentication methods, including SMS messages, one time password generators, password calculators, grid cards and picture passwords; at the same time we've done nothing at all to inhibit the re-use of stolen IDs.

It's high time government and industry got working together on a uniform and universal set of smart identity tools to properly protect consumers online.

Stay tuned for more of my thoughts on identity safety, inspired by recent news that health identifiers may be back on the table in the gigantic U.S. e-health system. The security and privacy issues are large but the cyber safety technology is at hand!

Posted in Fraud, Identity, Internet, Payments, Privacy, Security

RSS error - you may have missed three blog posts

Sorry followers, but I had an error in the HTML of a mid December blog post, and my RSS feed was probably broken. You might have missed these three posts:

The consumerization of security

Increasingly, commentators are calling into question the state of information security. It's about time. We infosec professionals need to take action before our customers force us to.

Standard security is just not intellectually secure. Information Security Management Systems and security audits are based on discredited quality management frameworks like ISO 9000 and waterfall methodologies. The derivative PCI-DSS regime mitigates accidental losses and amateur attacks but is farcically inadequate in the face of organised crime. The economics of perimeter security are simply daft: many databases are now worth billions of dollars to identity thieves, but they're protected by meagre firewalls and administrators with superuser privileges on $40K salaries.

Threat & Risk Assessments have their roots in Failure Modes, Effects and Criticality Analysis (FMECA), which is hopeless in the highly non-linear and unpredictable world of software, where a trivial mistake in one part of a program can have unlimited impact on the whole system; witness the #gotofail episode. Software is so easy to write and businesses are so obsessed with time to market that the world now rests on layer upon layer of bloated spaghetti code. The rapidity of software development has trumped quality and UI design. We have fragile home computers that are impossibly complex to operate safely, and increasingly, Internet-connected home appliances with the same characteristics.

We can't adequately protect credit card numbers, yet we're joy-riding like a 12-year old on a stolen motorcycle into an Internet of Things.

We're going to have to fix complexity and quality before security stands a chance.

Maybe the market will come to the rescue. Consumers seem to tolerate crappy computer quality to some degree, doubtless weighing up the benefits of being online versus the hassle of the occasional blue screen or hard drive crash. But when things like cars, thermostats and swimming pool filters, which don't need to be computers, become computers, consumers may make a harsher judgement of technology reliability.

Twenty years ago when I worked in medical device software -- pre-Internet, let alone the Internet of Things -- I recall an article about quality which predicted the public would paradoxically put up with more bugs in flight control software than they would in a light switch. In a way, that analysis predicted one of the driving forces for technology today: consumerization.

Posted in Security

The State of the State of Privacy

Constellation Research analysts are wrapping up a very busy 2014 with a series of "State of the State" reports. For my part I've looked at the state of privacy, which I feel is entering its adolescent stage.

Here's a summary.

1. Consumers have not given up privacy - they've been tricked out of it.
The impression is easily formed that people just don’t care about privacy anymore, but in fact people are increasingly frustrated with privacy invasions. They’re tired of social networks mining their users’ personal lives; they are dismayed that video game developers can raid a phone’s contact list with impunity; they are shocked by the deviousness of Target analyzing women’s shopping histories to detect pregnant customers; and they are revolted by the way magnates help themselves to operational data like Uber’s passenger movements for fun or allegedly for harassment – just because they can.

2. Private sector surveillance is overshadowed by government intrusion, but is arguably just as bad.
Edward Snowden’s revelations of a massive military-industrial surveillance effort were of course shocking, but they should not steal all the privacy limelight. In parallel with and well ahead of government spy programs, the big Online Social Networks (OSNs) and search engine companies have been gathering breathtaking amounts of data, all in the interests of targeted advertising. These data stores have come to the attention of the FBI and CIA, who must be delighted that someone else has done so much of their spying for them. These businesses boast that they know us better than we know ourselves. That’s chilling. We need to break through into a post-Snowden world.

3. The U.S. is the Canary Islands of privacy.
The United States remains the only major economy without broad-based information privacy laws. There are almost no restraints on what American businesses may do with personal information they collect from their customers, or synthesize from their operations. In the rest of the world, most organizations must restrict their collection of data, limit the repurposing of data, and disclose their data handling practices in full. Many individuals would like America to move closer to European-style privacy protection, while many corporations prefer the freedom they have in America to hang on to any data they like while they figure out how to make money out of it. Digital companies like to call this “innovation” and grandiose claims are made about its criticality for the American economy, but many consumers would prefer the sort of innovation that respects their privacy while delivering value-for-data.

4. Privacy is more about politics than technology.
Privacy can be seen as a power play between individual rights and the interests of governments and businesses. Most of us actually want businesses to know quite a lot about us, but we expect them to respect what they know and to be restrained in how they use it. Privacy is less about what organizations do with information than what they choose not to do with it. Hence, privacy cannot be a technology issue. It is not about keeping things secret but rather, keeping them close. Privacy is actually the protection we need when things are not secret.

5. Land grab for “public” data accelerates.

[Image: image analysis as a cracking tower]
Data is an immensely valuable raw material. We should re-frame unstructured data as “information ore”. More than data mining, Big Data is really about data refining, but unlike the stuff of traditional extractive industries, data seems inexhaustible, and the cost of extraction is near zero. A huge amount of Big Data activity is propelled by the misconception that data in the public domain is free for all. The reality is that many data protection laws govern the collection and use of personal data regardless of where it comes from. That is, personal data in the “public domain” is in fact encumbered. This is counter-intuitive to many, yet many public resources are regulated - including minerals, electromagnetic spectrum and intellectual property.

6. Data literacy will be key to digital safety.
Computer literacy is one thing, but data literacy is different and less tangible. We have strong privacy intuitions that have evolved over centuries but in cyberspace we lose our bearings. We don’t have the familiar social cues when we go online, so now we need to develop new ones. And we need to build up a common understanding of how data flows in the digital economy. Today we train kids in financial literacy to engender a first-hand sense of how commerce works; data literacy may become even more important as a life skill. It's more than being able to work an operating system, a device and umpteen apps. It means having meaningful mental models of what goes on in computers. Without understanding this, we can’t construct effective privacy policies or privacy labeling.

7. Privacy will get worse before it gets better.
Privacy is messy, even where data protection rules are well entrenched. Consider the controversial Right To Be Forgotten in Europe, which requires search engine operators to provide a mechanism for individuals to request removal of old, inaccurate and harmful reports from results. The new rule has been derived from existing privacy principles, which treat the results of search algorithms as a form of synthesis rather than a purely objective account of history, and therefore hold the search companies partly responsible for the offense their processes might produce. Yet, there are plenty of unintended consequences, and collisions with other jurisprudence. The sometimes urgent development of new protections for old civil rights is never plain sailing.

My report "Privacy Enters Adolescence" can be downloaded here. It expands on the points above, and sets out recommendations for improving awareness of how Personal Data flows in the digital economy, negotiating better deals in the data-for-value bargain, the conduct of Privacy Impact Assessments, and developing a "Privacy Bill of Rights".

Posted in Social Networking, Social Media, Internet, Constellation Research, Big Data