
Improving the position of the CISO

Over the years, we security professionals have tried all sorts of things to make better connections with other parts of the business. We have broadened our qualifications, developed new Return on Security Investment tools, preached that security is a "business enabler", and strived to talk about solutions and not boring old technologies. But we've had mixed success.

Once when I worked as a principal consultant for a large security services provider, a new sales VP came into the company with a fresh approach. She was convinced that the customer conversation had to switch from technical security to something more meaningful to the wider business: Risk Management. For several months after that I joined call after call with our sales teams, all to no avail. We weren't improving our lead conversions; in fact, with banks we seemed to be going backwards. And then it dawned on me: there isn't much anyone can tell bankers about risk that they don't already know.

Joining the worlds of security and business is easier said than done. So what is the best way for security line managers to engage with their peers? How can they truly contribute to new business instead of being limited to protecting old business? In a new investigation at Constellation Research, I've been looking at how classical security analysis skills and tools can be leveraged for strategic information management.

Remember that the classical frame for managing security is "Confidentiality-Integrity-Availability" or C-I-A. This is how we conventionally look at defending enterprise information assets; threats to security are seen in terms of how critical data may be exposed to unauthorised access, or lost, damaged or stolen, or otherwise made inaccessible to legitimate users. The stock-in-trade for the Chief Information Security Officer (CISO) is the enterprise information asset register and the continuous exercise of Threat & Risk Assessment around those assets.

I suggest that this way of looking at assets can be extended, shifting from a defensive mindset to a strategic, forward outlook. When the CISO has developed a birds-eye view of their organisation's information assets, they are ideally positioned to map the value of the assets more completely. What is it that makes information valuable exactly? It depends on the business - and security professionals are very good at looking at context. For example, in financial services or technology, companies can compete on the basis of their original research, so it's the lead time to discovery that sets them apart. On the other hand, in healthcare and retail, the completeness of customer records is a critical differentiator, for it allows better quality relationships to be created. And when dealing with sensitive personal information, as in the travel and hospitality industry, the consent and permissions attached to data records determine how they may be leveraged for new business. These are the sorts of things that make different data valuable in different contexts.

CISOs are trained to look at data through different prisms and to assess data in different dimensions. I've found that CISOs are therefore ideally qualified to bring a fresh approach to building the value of enterprise information assets. They can take a more pro-active role in information management, and carve out a new strategic place for themselves in the C-suite.

My new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory

Letter: Online threats do damage

A letter to the editor of The Saturday Paper, published Nov 15, 2014.

In his otherwise fresh and sympathetic “Web of abuse” (November 8-14), Martin McKenzie-Murray unfortunately concludes by focusing on the ability of victims of digital hate to “[rationally] assess their threat level”. More to the point, symbolic violence is still violent. The threat of sexual assault by men against women is inherently terrifying and damaging, whether it is carried out or not. Any attenuation of the threat of rape dehumanises all of us.

There’s a terrible double standard among cyber-libertarians. When good things happen online – such as the Arab Spring, WikiLeaks, social networking and free education – they call the internet a transformative force for good. Yet they can play down digital hate crimes as “not real”, and disown their all-powerful internet as just another communications medium.

Stephen Wilson, Five Dock, NSW.

Posted in Culture, Internet, Popular culture

The Prince of Data Mining

Facial recognition is digital alchemy. It's the prince of data mining.

Facial recognition takes previously anonymous images and conjures people's identities. It's an invaluable capability. Once they can pick out faces in crowds, trawling surreptitiously through anyone and everyone's photos, the social network businesses can work out what we're doing, when and where we're doing it, and who we're doing it with. The companies figure out what we like to do without us having to 'like' or favorite anything.

So Google, Facebook, Apple et al have invested hundreds of megabucks in face recognition R&D and in buying technology start-ups. And they spend billions of dollars buying images and especially faces, going back to Google's acquisition of Picasa in 2004, and most recently, Facebook's ill-fated $3 billion offer for Snapchat.

But if most people find face recognition rather too creepy, then there is cause for optimism. The technocrats have gone too far. What many of them still don't get is this: If you take anonymous data (in the form of photos) and attach names to that data (which is what Facebook photo tagging does - it guesses who the people in photos are, attaches putative names to records, and invites users to confirm them) then you Collect Personal Information. Around the world, existing pre-biometrics era black letter Privacy Law says you can't Collect PII, even indirectly like that, without an express reason and without consent.

When automatic facial recognition converts anonymous data into PII, it crosses a bright line in the law.

Posted in Social Networking, Privacy, Biometrics, Big Data

From Information Security to Digital Competitiveness

Exploring new strategic opportunities for CIOs and CISOs.

For as long as we've had a distinct information security profession, it has been said that security needs to be a "business enabler". But what exactly does that mean? How can security professionals advance from their inherently defensive postures, into more strategic positions, and contribute actively to the growth of the business? This is the focus of my latest work at Constellation Research. It turns out that security professionals have special tools and skills ideally suited to a broader strategic role in information management.

The role of Chief Information Security Officer (CISO) is a tough one. Security is red hot. Not a week goes by without news of another security breach.

Information now is the lifeblood of most organisations; CISOs and their teams are obviously crucial in safeguarding that. But a purely defensive mission seldom allows for much creativity, or a positive reputation amongst one's peers. A predominantly reactive work mode -- as important as it is from day to day -- can sometimes seem precarious. The good news for CISOs' career security and job satisfaction is they happen to have special latent skills to innovate and build out those most important digital assets.

Information assets are almost endless: accounts, ledgers and other legal records, sales performance, stock lists, business plans, R&D plans, product designs, market analyses and forecasts, customer data, employee files, audit reports, patent specifications and trade secrets. But what is it about all this information that actually needs protecting? What exactly makes any data valuable? These questions take us into the mind of the CISO.

Security management is formally all about the right balance of Confidentiality, Integrity and Availability in the context of the business. Different businesses have different needs in these three dimensions.

Think of the famous industrial secrets like the recipes for KFC or Coca Cola. These demand the utmost confidentiality and integrity, but the availability of the information can be low (nay, must be low) because it is accessed as a whole so seldom. Medical records too have traditionally needed confidentiality more than availability, but that's changing. Complex modern healthcare demands electronic records, and these do need high availability, especially in emergency care settings.

In contrast, for public information like stock prices there is no value in confidentiality whatsoever, and instead, availability and integrity are paramount. On the other hand, market-sensitive information that listed companies periodically report to stock exchanges must have very strict confidentiality for a relatively brief period.

Security professionals routinely compile Information Asset Inventories and plan for appropriate C-I-A for each type of data held. From there, a Threat & Risk Assessment (TRA) is generally undertaken, to examine the adverse events that might compromise the Confidentiality, Integrity and/or Availability. The likelihood and the impact of each adverse event are estimated and multiplied together to gauge the practical risk posed by each known threat. By prioritising counter-measures for the identified threats, in line with the organisation's risk appetite, the TRA helps guide a rational program of investment in security.
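
To make that arithmetic concrete, here is a minimal sketch in Python of how TRA scoring might be tabulated. The threat entries, rating scales and risk-appetite threshold below are hypothetical illustrations only, not prescriptions.

    # Minimal sketch of Threat & Risk Assessment scoring.
    # Risk is gauged as likelihood x impact; counter-measures are then
    # prioritised for the highest-scoring threats, in line with risk appetite.
    from dataclasses import dataclass

    @dataclass
    class Threat:
        asset: str          # the information asset at risk
        event: str          # the adverse event (loss of C, I or A)
        likelihood: int     # 1 (rare) .. 5 (almost certain)
        impact: int         # 1 (negligible) .. 5 (severe)

        @property
        def risk(self) -> int:
            return self.likelihood * self.impact

    threats = [
        Threat("Customer records", "Unauthorised disclosure (C)", likelihood=3, impact=5),
        Threat("Product designs", "Theft by an insider (C)", likelihood=2, impact=4),
        Threat("Trading platform", "Outage during market hours (A)", likelihood=2, impact=5),
        Threat("Audit logs", "Tampering (I)", likelihood=1, impact=4),
    ]

    RISK_APPETITE = 8  # illustrative threshold above which treatment is required

    for t in sorted(threats, key=lambda t: t.risk, reverse=True):
        action = "treat" if t.risk > RISK_APPETITE else "accept / monitor"
        print(f"{t.asset:18} {t.event:32} risk={t.risk:2}  -> {action}")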

Now their practical experience can put CISOs in a special position to enhance their organisation's information assets, rather than merely hardening those assets against negative impacts.

Here's where the CISO's mindset comes into play in a new way. The real value of information lies not so much in the data itself as in its qualities. Remember the cynical old saw "It's not what you know, it's who you know". There's a serious side to the saying, which highlights that really useful information has pedigree.

So the real action is in the metadata; that is, data about data. It may have got a bad rap recently thanks to surveillance scandals, but various thinkers have long promoted the importance of metadata. For example, back in the 1980s, Citibank CEO Walter Wriston famously said "information about money will become almost as important as money itself". What a visionary endorsement of metadata!

The important latent skill I want to draw out for CISOs is their practiced ability to deal with the qualities of data. To bring greater value to the business, CISOs can start thinking about the broader pedigree of data and not merely its security qualities. They should spread their wings beyond C-I-A, to evaluate all sorts of extra dimensions, like completeness, reliability, originality, currency, privacy and regulatory compliance.
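
As an illustration of what that extended view might look like, an asset register entry could carry those broader dimensions alongside C-I-A. The sketch below is hypothetical; the field names and ratings are mine, not a standard schema.

    # Illustrative only: extending an information asset register entry beyond
    # C-I-A to broader "pedigree" dimensions. Field names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class InformationAsset:
        name: str
        # Classical security dimensions (required rating, 1 = low .. 3 = high)
        confidentiality: int
        integrity: int
        availability: int
        # Broader value and pedigree dimensions
        completeness: float = 0.0        # fraction of records considered complete
        reliability: str = "unknown"     # provenance / trust in the source
        originality: bool = False        # e.g. original research vs. public data
        currency_days: int = 0           # age of the data in days
        consented_uses: list = field(default_factory=list)  # permissions attached

    customer_db = InformationAsset(
        name="Customer records",
        confidentiality=3, integrity=3, availability=2,
        completeness=0.85, reliability="first-party", originality=False,
        currency_days=30, consented_uses=["service delivery", "billing"],
    )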

The core strategic questions for the modern CISO are these: What is it about your corporate information that gives you competitive advantage? What exactly makes information valuable?

The CISO has the mindset and the analytical tools to surface these questions and positively engage their executive peers in finding the answers.

My new Constellation Research report will be published soon.

Posted in Security, Privacy, Management theory, Constellation Research

An unpublished letter on The Right To Be Forgotten

In response to "The Solace of Oblivion", Jeffrey Toobin, The New Yorker, September 29th, 2014.

The "Right to be Forgotten" is an unfortunate misnomer for a balanced data control measure decided by the European Court of Justice. The new rule doesn't seek to erase the past but rather to restore some of its natural distance. Privacy is not about secrecy but moderation. Yet restraint is toxic to today's information magnates, and the response of some to even the slightest throttling of their control of data has been shrill. Google doth protest too much when it complains that having to adjust its already very elaborate search filters makes it an unwilling censor.

The result of a multi-billion dollar R&D program, Google's search engine is a great deal more than a latter-day microfiche. Its artificial intelligence tries to predict what users are really looking for, and as a result, its insights are all the more valuable to Google's real clients -- the advertisers. No search result is a passive reproduction of data from a "public domain". Google makes the public domain public. So if search reveals Personal Information that was hitherto impossible to find, then Google should at least participate in helping to temper the unintended privacy consequences.

Stephen Wilson
October 1, 2014.

Posted in Big Data, Internet, Privacy

The worst privacy misconception of all

I was discussing definitions of Personally Identifiable Information (PII) with some lawyers today, one of whom took exception to the US General Services Administration definition: "information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual". This lawyer concluded rather hysterically that under such a definition, "nobody can use the internet without a violation".

Similarly, I've seen engineers in Australia recoil at the possibility that IP and MAC Addresses might be treated as PII because it is increasingly easy to link them to the names of device owners. I was recently asked "Why are they stopping me collecting IP addresses?". The answer is, they're not.

There are a great many misconceptions about privacy, but the idea that 'if it's personal you can't use it' is by far the worst.

Nothing in any broad-based data privacy law I know of says personal information cannot be collected or used.

Rather, what data privacy laws actually say is: if you're collecting and using PII, be careful.

Privacy is about restraint. The general privacy laws of Australia, Europe and 100-odd countries say things like don't collect PII without consent, don't collect PII beyond what you demonstrably need, don't use PII collected for one purpose for other unrelated purposes, tell individuals if you can what PII you hold about them, give people access to the PII you have, and do not retain PII for longer than necessary.

Such rules are entirely reasonable, and impose marginal restrictions on the legitimate conduct of business. And they align very nicely with standard security practice which promotes the Need To Know principle and the Principle of Least Privilege.

Compliance with Privacy Principles does add some overhead to data management compared with anonymous data. If re-identification techniques and ubiquitous inter-connectedness mean that hardly any data is going to stay anonymous anymore, then yes, privacy laws mean that data should be treated more cautiously than was previously the case. And what exactly is wrong with that?

If data is the new gold then it's time data custodians took more care.

Posted in Big Data, Privacy

PKI as nature intended

Few technologies are so fundamental and yet so derided at the same time as public key infrastructure. PKI is widely thought of as obsolete or generically intrusive, yet it is ubiquitous in SIM cards, SSL, chip and PIN cards, and cable TV. Technically, public key infrastructure is a generic term for a management system for keys and certificates; there have always been endless ways to build PKIs (note the plural) for different communities, technologies, industries and outcomes. And yet “PKI” has all too often come to mean just one way of doing identity management. In fact, PKI doesn’t necessarily have anything to do with identity at all.

This blog is an edited version of a feature I once wrote for SC Magazine. It is timely in the present day to re-visit the principles that make for good PKI implementations and contextualise them in one of the most contemporary instances of PKI: the FIDO Alliance protocols for secure attribute management. In my view, FIDO realises PKI ‘as nature intended’.

“Re-thinking PKI”

In their earliest conceptions in the early-to-mid 1990s, digital certificates were proposed to authenticate nondescript transactions between parties who had never met. Certificates were construed as the sole means for people to authenticate one another. Most traditional PKI was formulated with no other context; the digital certificate was envisaged to be your all-purpose digital identity.

Orthodox PKI has come in for spirited criticism. From the early noughties, many commentators pointed to a stark paradox: online transaction volumes and values were increasing rapidly, in almost all cases without the help of overt PKI. Once thought to be essential, with its promise of "non-repudiation", PKI seemed anything but, even for significant financial transactions.

There were many practical problems in “big” centralised PKI models. The traditional proof of identity for general purpose certificates was intrusive; the legal agreements were complex and novel; and private key management was difficult for lay people. So the one-size-fits-all electronic passport failed to take off. But PKI's critics sometimes throw the baby out with the bathwater.

In the absence of any specific context for its application, “big” PKI emphasized proof of personal identity. Early certificate registration schemes co-opted identification benchmarks like that of the passport. Yet hardly any regular business transactions require parties to personally identify one another to passport standards.

”Electronic business cards”

Instead in business we deal with others routinely on the basis of their affiliations, agency relationships, professional credentials and so on. The requirement for orthodox PKI users to submit to strenuous personal identity checks over and above their established business credentials was a major obstacle in the adoption of digital certificates.

It turns out that the 'killer applications' for PKI overwhelmingly involve transactions with narrow contexts, predicated on specific credentials. The parties might not know each other personally, but invariably they recognize and anticipate each other's qualifications, as befitting their business relationship.

Successful PKI came to be characterized by closed communities of interest, prior out-of-band registration of members, and in many cases, special-purpose application software featuring additional layers of context, security and access controls.

So digital certificates are much more useful when implemented as application-specific 'electronic business cards,' than as one-size-fits-all electronic passports. And, by taking account of the special conditions that apply to different e-business processes, we have the opportunity to greatly simplify the registration processes, user experience and liability arrangements that go with PKI.

The real benefits of digital signatures

There is a range of potential advantages in using PKI, including its cryptographic strength and resistance to identity theft (when implemented with private keys in hardware). Many of its benefits are shared with other technologies, but at least two are unique to PKI.

First, digital signatures provide robust evidence of the origin and integrity of electronic transactions, persistent over time and over 'distance’ (that is, the separation of sender and receiver). This greatly simplifies audit logging, evidence collection and dispute resolution, and cuts the future cost of investigation and fraud. If a digitally signed document is archived and checked at a later date, the quality of the signature remains undiminished over many years, even if the public key certificate has long since expired. And if a digitally signed message is passed from one relying party to another and on to many more, passing through all manner of intermediate systems, everyone still receives an identical, verifiable signature code authenticating the original message.

Electronic evidence of the origin and integrity of a message can, of course, be provided by means other than a digital signature. For example, the authenticity of typical e-business transactions can usually be demonstrated after the fact via audit logs, which indicate how a given message was created and how it moved from one machine to another. However, the quality of audit logs is highly variable and it is costly to produce legally robust evidence from them. Audit logs are not always properly archived from every machine, they do not always directly evince data integrity, and they are not always readily available months or years after the event. They are rarely secure in themselves, and they usually need specialists to interpret and verify them. Digital signatures on the other hand make it vastly simpler to rewind transactions when required.

Secondly, digital signatures and certificates are machine readable, allowing the credentials or affiliations of the sender to be bound to the message and verified automatically on receipt, enabling totally paperless transacting. This is an important but often overlooked benefit of digital signatures. When processing a digital certificate chain, relying party software can automatically tell that:

    • the message has not been altered since it was originally created
    • the sender was authorized to launch the transaction, by virtue of credentials or other properties endorsed by a recognized Certificate Authority
    • the sender's credentials were valid at the time they sent the message; and
    • the authority which signed the certificate was fit to do so.

One reason we can forget about the importance of machine readability is that we have come to expect person-to-person email to be the archetypal PKI application, since email is the classic example used to illustrate PKI in action. There is an implicit suggestion in most PKI marketing and training that, in regular use, we should manually click on a digital signature icon, examine the certificate, check which CA issued it, read the policy qualifier, and so on. Yet the overwhelming experience of PKI in practice is that it suits special purpose and highly automated applications, where the usual receiver of signed transactions is in fact a computer.
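
As a rough illustration of that automation, the sketch below shows the kind of checks relying party software performs on receipt of a signed message, using Python's 'cryptography' package. It is deliberately simplified: it assumes RSA keys, verifies only one link of the certificate chain, checks validity at the time of verification, and omits revocation checking, all of which a real deployment would handle.

    # Simplified sketch of automated relying-party checks on a signed message.
    # Assumes RSA keys; requires a recent 'cryptography' release (>= 42) for
    # the *_utc validity properties. Not a full certification path validation.
    from datetime import datetime, timezone

    from cryptography import x509
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_signed_message(message: bytes, signature: bytes,
                              sender_cert_pem: bytes, issuer_cert_pem: bytes) -> bool:
        sender_cert = x509.load_pem_x509_certificate(sender_cert_pem)
        issuer_cert = x509.load_pem_x509_certificate(issuer_cert_pem)

        # 1. Origin and integrity: the signature verifies under the sender's public key
        try:
            sender_cert.public_key().verify(
                signature, message, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False

        # 2. The sender's certificate is within its validity period at checking time
        now = datetime.now(timezone.utc)
        if not (sender_cert.not_valid_before_utc <= now <= sender_cert.not_valid_after_utc):
            return False

        # 3. The certificate was endorsed by the expected authority
        try:
            issuer_cert.public_key().verify(
                sender_cert.signature, sender_cert.tbs_certificate_bytes,
                padding.PKCS1v15(), sender_cert.signature_hash_algorithm)
        except InvalidSignature:
            return False

        return True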

Characterising good applications

Reviewing the basic benefits of digital signatures allows us to characterize the types of e-business applications that merit investment in PKI.

Applications for which digital signatures are a good fit tend to have reasonably high transaction volumes, fully automatic or straight-through processing, and multiple recipients or multiple intermediaries between sender and receiver. In addition, there may be significant risk of dispute or legal ramifications, necessitating high quality evidence to be retained over long periods of time. These include:

    • Tax returns
    • Customs reporting
    • E-health care
    • Financial trading
    • Insurance
    • Electronic conveyancing
    • Superannuation administration
    • Patent applications.

This view of the technology helps to explain why many first-generation applications of PKI were problematic. Retail internet banking is a well-known example of e-business which flourished without the need for digital certificates. A few banks did try to implement certificates, but generally found them difficult to use. Most later reverted to more conventional access control and backend security mechanisms. Yet with hindsight, retail funds transfer transactions did not have an urgent need for PKI, since they could make use of existing backend payment systems. Funds transfer is characterized by tightly closed arrangements, a single relying party, built-in limits on the size of each transaction, and near real-time settlement. A threat and risk assessment would show that access to internet banking can rest on simple password authentication, in exactly the same way as antecedent phone banking schemes.

Trading complexity for specificity

As discussed, orthodox PKI was formulated with the tacit assumption that there is no specific context for the transaction, so the digital certificate is the sole means for authenticating the sender. Consequently, the traditional schemes emphasized high standards of personal identity, exhaustive contracts and unusual legal devices like Relying Party Agreements. They also often resorted to arbitrary 'reliance limits', which have little meaning for most of the applications listed above. Notoriously, traditional PKI requires users to read and understand certification practice statements (CPS).

All that overhead stemmed from not knowing what the general-purpose digital certificate was going to be used for. On the other hand, if particular digital certificates are constrained to defined applications, then the complexity surrounding their specific usage can be radically reduced.

The role of PKI in all contemporary 'killer applications' is fundamentally to help automate the online processing of electronic transactions between parties with well-defined credentials. This is in stark contrast to the way PKI has historically been portrayed, where strangers Alice and Bob use their digital certificates to authenticate context-free general messages, often presumed to be sent by email. In reality, serious business messages are never sent stranger-to-stranger with no context or cues as to the parties' legitimacy.

Using generic email is like sending a fax on plain paper. Instead, business messaging is usually highly structured. Parties have an expectation that only certain types of transactions are going to occur between them and they equip themselves accordingly (for instance, a health insurance office is not set up to handle tax returns). The sender is authorized to act in defined types of transactions by virtue of professional credentials, a relevant license, an affiliation with some authority, endorsement by their employer, and so on. And the receiver recognizes the source of those credentials. The sender and receiver typically use prescribed forms and/or special purpose application software with associated user agreements and license conditions, adding context and additional layers of security around the transaction.

PKI got smart

When PKI is used to help automate the online processing of transactions between parties in the context of an existing business relationship, we should expect the legal arrangements between the parties to still apply. For business applications where digital certificates are used to identify users in specific contexts, the question of legal liability should be vastly simpler than it is in the general purpose PKI scenario where the issuer does not know what the certificates might be used for.

The new vision for PKI means the technology and processes should be no more of a burden on the user than a bank card. Rather than imagine that all public key certificates are like general purpose electronic passports, we can deploy multiple, special purpose certificates, and treat them more like electronic business cards. A public key certificate issued on behalf of a community of business users and constrained to that community can thereby stand for any type of professional credential or affiliation.

We can now automate and embed the complex cryptography deeply into smart devices -- smartcards, smart phones, USB keys and so on -- so that all terms and conditions for use are application focused. As far as users are concerned, a smartcard can be deployed in exactly the same way as any magnetic stripe card, without any need to refer to - or be limited by - the complex technology contained within (see also Simpler PKI is on the cards). Any application-specific smartcard can be issued under rules and controls that are fit for their purpose, as determined by the community of users or an appropriate recognized authority. There is no need for any user to read a CPS. Communities can determine their own evidence-of-identity requirements for issuing cards, instead of externally imposed personal identity checks. Deregulating membership rules dramatically cuts the overheads traditionally associated with certificate registration.

Finally, if we constrain the use of certificates to particular applications then we can factor the intended usage into PKI accreditation processes. Accreditation could then allow for particular PKI scheme rules to govern liability. By 'black-boxing' each community's rules and arrangements, and empowering the community to implement processes that are fit for its purpose, the legal aspects of accreditation can be simplified, reducing one of the more significant cost components of the whole PKI exercise (having said that, it never ceases to amaze how many contemporary healthcare PKIs still cling onto face-to-face passport grade ID proofing as if that's the only way to do digital certificates).

Fast forward

The preceding piece is a lightly edited version of the article ”Rethinking PKI” that first appeared in Secure Computing Magazine in 2003. Now, over a decade later, we’re seeing the same principles realised by the FIDO Alliance.

The FIDO protocols U2F and UAF enable specific attributes of a user and their smart devices to be transmitted to a server. Inherent to the FIDO methods are digital certificates that confer attributes and not identity; relatively large numbers of private keys stored locally in the users’ devices (without the users needing to be aware of them as such); and digital signatures automatically applied to protocol messages to bind the relevant attributes to the authentication exchanges.
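
As a rough sketch of the underlying idea (and emphatically not the actual U2F or UAF wire formats), the relying party's core check is to verify the authenticator's signature over its challenge against the public key registered for that device. The example below assumes an ECDSA P-256 key, typical of U2F authenticators, and uses Python's 'cryptography' package.

    # Highly simplified illustration of a FIDO-style assertion check: verify the
    # device's signature over the server challenge and usage counter using the
    # public key registered for that device. Not the real protocol encoding.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def verify_assertion(registered_public_key_pem: bytes,
                         challenge: bytes, counter: int, signature: bytes) -> bool:
        public_key = serialization.load_pem_public_key(registered_public_key_pem)
        # The signed data binds the server's challenge to the device's usage counter
        signed_data = challenge + counter.to_bytes(4, "big")
        try:
            public_key.verify(signature, signed_data, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False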

Surely, this is how PKI should have been deployed all along.

Posted in Security, PKI, Internet, Identity

Crowd sourcing private sector surveillance

A repeated refrain of cynics and “infomopolists” alike is that privacy is dead. People are supposed to know that anything on the Internet is up for grabs. In some circles this thinking turns into digital apartheid; some say if you’re so precious about your privacy, just stay offline.

But socialising and privacy are hardly mutually exclusive; we don’t walk around in public with our names tattooed on our foreheads. Why can’t we participate in online social networks in a measured, controlled way without submitting to the operators’ rampant X-ray vision? There is nothing inevitable about trading off privacy for conviviality.

The privacy dangers in Facebook and the like run much deeper than the self-harm done by some people's overly enthusiastic sharing. Promiscuity is actually not the worst problem; nor is the notorious difficulty of navigating complex and ever-changing privacy settings.

The advent of facial recognition presents far more serious and subtle privacy challenges.

Facebook has invested heavily in face recognition technology, and not just for fun. Facebook uses it in effect to crowd-source the identification and surveillance of its members. With facial recognition, Facebook is building up detailed pictures of what people do, when, where and with whom.

You can be tagged without consent in a photo taken and uploaded by a total stranger.

The majority of photos uploaded to personal albums over the years were not intended for anything other than private viewing.

Under the privacy law of Australia and data protection regulations in dozens of other jurisdictions, what matters is whether data is personally identifiable. The Commonwealth Privacy Act 1988 (as amended in 2014) defines “Personal Information” as: “information or an opinion about an identified individual, or an individual who is reasonably identifiable”.

Whenever Facebook attaches a member’s name to a photo, they are converting hitherto anonymous data into Personal Information, and in so doing, they become subject to privacy law. Automated facial recognition represents an indirect collection of Personal Information. However, too many people still underestimate the privacy implications; some technologists naively claim that faces are “public” and that people can have no expectation of privacy in their facial images, ignoring that information privacy, as explained, is about the identifiability and identification of data. The words “public” and “private” don’t even figure in the Privacy Act!

If a government was stealing into our photo albums, labeling people and profiling them, there would be riots. It's high time that private sector surveillance - for profit - is seen for what it is, and stopped.

Posted in Social Networking, Social Media, Privacy, Biometrics

Dumbing down Snowden

Ed Snowden was interviewed today as part of the New Yorker festival. This TechCrunch report says Snowden "was asked a couple of variants on the question of what we can do to protect our privacy. His first answer called for a reform of government policies." He went on to add some remarks about Google, Facebook and encryption, and that's what the report chose to focus on. The TechCrunch headline: "Snowden's Privacy Tips".

Mainstream and even technology media reportage does Snowden a terrible disservice and takes the pressure off government policy.

I've listened to the New Yorker online interview. After being asked by a listener what they should do about privacy, Snowden gave a careful, nuanced, and comprehensive answer over five minutes. His very first line was that this is an incredibly complex topic, and he did well to stick to plain language throughout. He canvassed a great many issues including: the need for policy reform, the 'Nothing to Hide' argument, the inversion of civil rights when governments ask us to justify the right to be left alone, the collusion of companies and governments, the poor state of product security and usability, the chilling effect on industry of government intervention in security, metadata, and the radicalisation of computer scientists today being comparable with that of physicists in the Cold War.

Only after all that, and a follow up question about 'ordinary people', did Snowden say 'don't use Dropbox'.

Consistently, when Snowden is asked what to do about privacy, his answers are primarily about politics not technology. When pressed, he dispenses the odd advice about using Tor and disk encryption, but Snowden's chief concerns (as I have discussed in depth previously) are around accountability, government transparency, better cryptology research, better security product quality, and so on. He is no hacker.

I am simply dismayed how Snowden's sophisticated analyses are dumbed down to security tips. He has never been a "cyber Agony Aunt". The proper response to NSA overreach has to be agitation for regime change, not do-it-yourself cryptography. That is Snowden's message.

Posted in Social Media, Security, Privacy, Internet

Four Corners' 'Privacy Lost': A demonstration of the Collection Principle

Tonight, Australian Broadcasting Corporation’s Four Corners program aired a terrific special, "Privacy Lost" written and produced by Martin Smith from the US public broadcaster PBS’s Frontline program.

Here we have a compelling demonstration of the importance and primacy of Collection Limitation for protecting our privacy.

UPDATE: The program we saw in Australia turns out to be a condensed version of PBS's two part The United States of Secrets from May 2014.

About the program

Martin Smith summarises brilliantly what we know about the NSA’s secret surveillance programs, thanks to the revelations of Ed Snowden, the Guardian’s Glenn Greenwald and the Washington Post’s Barton Gellman; he holds many additional interviews with Julia Angwin (author of “Dragnet Nation”), Chris Hoofnagle (UC Berkeley), Steven Levy (Wired), Christopher Soghoian (ACLU) and Tim Wu (“The Master Switch”), to name a few. Even if you’re thoroughly familiar with the Snowden story, I highly recommend “Privacy Lost” or the original "United States of Secrets" (which unlike the Four Corners edition can be streamed online).

The program is a ripping re-telling of Snowden’s expose, against the backdrop of George W. Bush’s PATRIOT Act and the mounting suspicions through the noughties of NSA over-reach. There are freshly told accounts of the intrigues, of secret optic fibre splitters installed very early on in AT&T’s facilities, scandals over National Security Letters, and the very rare case of the web hosting company Calyx who challenged their constitutionality (and yet today, with the letter withdrawn, remains unable to tell us what the FBI was seeking). The real theme of Smith’s take on surveillance then emerges, when he looks at the rise of data-driven businesses -- first with search, then advertising, and most recently social networking -- and the “data wars” between Google, Facebook and Microsoft.

In my view, the interplay between government surveillance and digital businesses is the most important part of the Snowden epic, and it receives the proper emphasis here. The depth and breadth of surveillance conducted by the private sector, and the insights revealed about what people might be up to, create irresistible opportunities for the intelligence agencies. Hoofnagle tells us how the FBI loves Facebook. And we see the discovery of how the NSA exploits the tracking that’s done by the ad companies, most notably Google’s “PREF” cookie.

One of the peak moments in “Privacy Lost” comes when Gellman and his specialist colleague Ashkan Soltani present their evidence about the PREF cookie to Google – offering an opportunity for the company to comment before the story is to break in the Washington Post. The article ran on December 13, 2013; we're told it was then that the true depth of the privacy problem was revealed.

My point of view

Smith takes as a given that excessive intrusion into private affairs is wrong, without getting into the technical aspects of privacy (such as frameworks for data protection, and various Privacy Principles). Neither does he unpack the actual privacy harms. And that’s fine -- a TV program is not the right place to canvass such technical arguments.

When Gellman and Soltani reveal that the NSA is using Google’s tracking cookie, the government gets joined irrefutably to the private sector in a mass surveillance apparatus. And yet I am not sure the harm is dramatically worse when the government knows what Facebook and Google already know.

Privacy harms are tricky to work out. Yet obviously no harm can come from abusing Personal Information if that information is not collected in the first place! I take away from “Privacy Lost” a clear impression of the risks created by the data wars. We are imperiled by the voracious appetite of digital businesses that hang on indefinitely to masses of data about us, while they figure out ever cleverer ways to make money out of it. This is why Collection Limitation is the first and foremost privacy protection. If a business or government doesn't have a sound and transparent reason for having Personal Information about us, then they should not have it. It’s as simple as that.

Martin Smith has highlighted the symbiosis between government and private sector surveillance. The data wars not only made dozens of billionaires but they did much of the heavy lifting for the NSA. And this situation is about to get radically more fraught. On the brink of the Internet of Things, we need to question if we want to keep drowning in data.

Posted in Social Networking, Social Media, Security, Privacy, Internet