FIDO Alliance - Update

You can be forgiven if the FIDO Alliance is not on your radar screen. It was launched barely 18 months ago to help solve the "password crisis" online, but it's already proven to be one of the most influential security bodies yet.

The typical Internet user has dozens of accounts and passwords. Not only are passwords a pain in the arse; poor password practices are increasingly implicated in fraud, and in terrible misadventures like the recent "iCloud Hack" which exposed celebrities' personal details.

With so many of our assets, our business and our daily lives happening in cyberspace, we desperately need better ways to prove who we are online – and even more importantly, prove what we are entitled to do there.

The FIDO Alliance is a new consortium of identity management vendors, product companies and service providers working on strong authentication standards. FIDO’s vision is to tap the powers of smart devices – smart phones today and wearables tomorrow – to log users on to online services more securely and more conveniently.

FIDO was founded by Lenovo, PayPal, and security technology companies AGNITiO, Nok Nok Labs and Validity Sensors, and launched in February 2013. Since then the Alliance has grown to over 130 members. Two new authentication standards have been published for peer review, half a dozen companies showcased FIDO-Ready solutions at the 2014 Consumer Electronics Show (CES) in Las Vegas, and PayPal has released its ground-breaking pay-by-fingerprint app for the Samsung Galaxy S5.

The FIDO Alliance includes technology heavyweights like Google, Lenovo, Microsoft and Samsung; payments giants Discover, MasterCard, PayPal and Visa; financial services companies such as Aetna, Bank of America and Goldman Sachs; and e-commerce players like Netflix and Salesforce.com. There are also a couple of dozen biometrics vendors, many leading Identity and Access Management (IDAM) solution and service providers, and almost every cell phone SIM and smartcard supplier in the world.

I have been watching FIDO since its inception and reporting on it for Constellation Research. The third update in my series of research reports on FIDO is now available and can be downloaded here. The report looks in depth at what the Alliance has to offer vendors and end user communities, its critical success factors, and how and why this body is poised to shake up authentication like never before.

Posted in Constellation Research, Identity, Security, Smartcards

Privacy watch

Update 22 September 2014

Last week, Apple suddenly went from silent to expansive on privacy, and the thrust of my blog straight after the Apple Watch announcement is now wrong. Apple posted a letter from CEO Tim Cook at www.apple.com/privacy along with a document that sets out how "We’ve built privacy into the things you use every day".

The paper is very interesting. It's a sophisticated and balanced account of policy, business strategy and technology elements that go to create privacy. Apple highlights that they:

  • forswear the exploitation of customer data
  • do not scan content or messages
  • do not let their small "iAd" business take data from other Apple departments
  • require certain privacy protective practices on the part of their health app developers.

They have also provided quite decent information about how Siri and health data are handled.

Apple's stated privacy posture is all about respect and self-restraint. Setting out these principles and commitments is a very welcome development indeed. I congratulate them.

Today Apple launched their much anticipated wrist watch, described by CEO Tim Cook as "the most personal device they have ever developed". He got that right!

Rather more than a watch, it's a sort of guardian angel. The Apple Watch has Siri built-in, along with new haptic sensors and buzzers, a heartbeat monitor, accelerometer, and naturally the GPS and Wi-Fi geolocation capability to track your speed and position throughout the day. So they say "Apple Watch is an all-day fitness tracker and a highly advanced sports watch in a single device".

[Image: Apple Watch]

The Apple Watch will be a paragon of digital disruption. To understand and master disruption today requires the coordination of mobility, Big Data, the cloud and user interfaces. These cannot be treated as isolated technologies, so when a company like Apple controls them all, at scale, real transformation follows.

Thus Apple is one of the few businesses that can make promises like this: "Over time, Apple Watch gets to know you the way a good personal trainer would". In this we hear echoes of the smarts that power Siri, and we are reminded that amid the novel intimacy we have with these devices, many serious privacy problems have yet to be resolved.

The Apple Event today was a play in four acts:
Act I: the iPhone 6 release;
Act II: Apple Pay launch;
Act III: the Apple Watch announcement;
Act IV: U2 played live and released their new album free on iTunes!

It was fascinating to watch the thematic differences across these acts. With Apple Pay, they stressed security and privacy; we were told about the Secure Element, the way card numbers are replaced by random numbers (tokenization), and an architecture where Apple cannot see how much you spend or where you spend it. On the other hand, when it came to the Apple Watch and its integrated health sensors, privacy wasn't mentioned, not at all. We are left to deduce that aggregating personal health data at Apple's servers is part of a broader plan.

The cornerstones of data privacy include Collection Limitation, Use Limitation (or "Purpose Specification") and Openness. Custodians of our Personally Identifiable Information (PII) should refrain from collecting and retaining PII they don't really need; they should specify what they do with PII and restrict unrelated secondary usage; and they should tell people what they're doing, generally in a Privacy Policy. With Siri, Apple sadly fails all these tests. (See Update 22 September 2014 above.)

The Apple Privacy Policy is altogether silent on Siri. The document details the sorts of information collected through its overt business processes like registration, sales and support, but it says nothing about the voice recordings and transcripts of Siri communications. Neither does the Siri FAQ mention what is done with all that data. It's quite an omission, seeing that when you dictate an SMS or an email to Siri, Apple retains a copy of communications that are normally out of bounds for your telecomms carrier.

It's been left to journalists to try and find out what Apple does with the information it mines from Siri. Wired magazine discovered eventually that Apple retains masked Siri voice recordings for six months; it then purportedly de-identifies them and keeps them for a further 18 months, for research. Yet even these explanations don't touch on the extracted contents of the communications, nor the metadata, like the trends and correlations that go to Siri's learning. If the purpose of Siri is ostensibly to automate the operation of the iPhone and its apps, then Apple should refrain from using the by-products of Siri's voice processing for anything else. But we just don't know what they do, and Apple imposes no self-restraint. (See Update 22 September 2014 above.)

We should hope for radically greater transparency with the Apple Watch and its health apps. Most of the watch's data processing and analytics will be carried out in the cloud. So Apple will come to hold detailed records of its users' exercise regimes, their performance figures, trend data and correlations. These are health records. Inevitably, health applications will take in other medical data, like food diaries entered by users, statistics imported from other databases, and detailed measurements from Internet-connected scales, blood pressure monitors and even medical devices. Apple will see what we're doing to improve our health, day by day, year on year. They will come to know more about what's making us healthy and what's not than we do ourselves.

[Image: the Apple Watch Activity app]

Now, the potential benefits from this sort of personal technology to self-managed care and preventative medicine are enormous. But so are the data management and privacy obligations.

Within the US, Apple will doubtless be taking steps to avoid falling under the stringent HIPAA regulations, yet in the rest of the world, a more subtle but far-reaching problem looms. Many broad based data privacy regimes forbid the collection of health information without consent. And the laws of the European Union, Australia, New Zealand and elsewhere are generally technology neutral. This means that data collected directly from patients or doctors, and fresh data collected by way of automated algorithms, are treated essentially the same way. So when a sophisticated health management app running in the cloud somewhere mines all that exercise and lifestyle data, and starts to make inferences about health and wellbeing, great care needs to be taken that the individuals concerned know what's going on in advance, and have given their informed consent.

One of the deep privacy challenges in Big Data is that data miners don't know what they're going to find. Even with the best will in the world, a company can struggle to say in its Privacy Policy what PII it expects to extract (and thus collect) in future from the raw data it collects today. At Constellation Research we've been fleshing out a new sort of compact between businesses and individuals that seeks to keep users abreast of developments in data analytics, and promises to provide people with proper control of personal Big Data results.

It ought to be possible to expressly opt in to Big Data processes when you can understand the pros and cons and the net benefits, and to later opt out, and opt back in again, as the benefit equation shifts over time. But even visualising the products of Big Data is hard; I believe graphical user interfaces (GUIs) to allow people to comprehend and actively control the process will be one of the great software design problems of our age.

Apple are obviously preeminent in GUI and user experience innovation. You would think if anyone can create the novel yet intuitive interfaces desperately needed to control Big Data PII, Apple can. But first they will have to embrace their responsibilities for the increasingly intimate details they are helping themselves to. If the Apple Watch is "the most personal device they've ever designed" then let's see privacy and data protection commitments to match.

Posted in Privacy, e-health, Constellation Research, Cloud, Big Data

Constellation Connected Enterprise 2014

Constellation's Connected Enterprise (CCE) is an immersive innovation summit for senior business leaders. The theme of this year’s Connected Enterprise is Dominate Digital Disruption. There will be over 200 early adopters at CCE, at Half Moon Bay outside San Francisco, discovering and sharing how digital businesses can realise their brand promises, transform their business models, increase revenues, reduce costs, and improve compliance.

CCE is a three day executive retreat, comprising more or less continuous keynotes from visionaries and interactive best practices panels. There are deep one-on-one interviews with market makers, and numerous new enterprise technology demos. And there's the Constellation SuperNova Awards Gala Dinner.

Register before September 30 to take advantage of early bird pricing. Use code BBLG14 for VIP privileges throughout the event.

See you there!

Posted in Constellation Research

Safeguarding the pedigree of identifiers

The problem of identity takeover

The root cause of much identity theft and fraud today is the sad fact that customer reference numbers and personal identifiers are so easy to copy. Simple numerical data like bank account numbers and health IDs can be stolen from many different sources, and replayed in bogus transactions.

Our personal data nowadays is leaking more or less constantly, through breached databases, websites, online forms, call centres and so on, to such an extent that customer reference numbers on their own are no longer reliable. Privacy consequently suffers because customers are required to assert their identity through circumstantial evidence, like name and address, birth date, mother’s maiden name and other pseudo-secrets. All this data in turn is liable to be stolen and used against us, leading to spiralling identity fraud.

To restore the reliability of personal identifiers, we need to know their pedigree. We need to know that a presented number is genuine, that it originated from a trusted authority, it’s been stored safely by its owner, and it’s been presented with the owner’s consent.

"Notarising" personal data in chip devices

There are ways of issuing personal data to a smart chip device that prevent those data from being stolen, copied and claimed by anyone else. One way to do so is to encapsulate and notarise personal data in a unique digital certificate issued to a chip. Today, a great many personal devices routinely embody cryptographically suitable chips for this purpose, including smart phones, SIM cards, “Secure Elements”, smartcards and many wearable computers.

Consider an individual named Smith to whom Organisation A has issued a unique customer reference number N. If N is saved in ordinary computer memory or something like a magnetic stripe card, then it has no pedigree. Once the number N is presented by the cardholder in a transaction, it looks like any other number. To better safeguard N in a chip device, it can be sealed into a digital certificate, as follows:

1. generate a fresh private-public key pair inside Smith’s chip
2. export the public key
3. create a digital certificate around the public key, with an attribute corresponding to N
4. have the certificate signed by (or on behalf of) Organisation A.
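
As an illustration of steps 1 to 4, here is a minimal sketch using the Python "cryptography" package. In a real deployment the key pair is generated inside Smith's chip and the private key never leaves it; in this sketch both keys live in ordinary memory purely for demonstration, and the field choices (such as carrying N in the certificate's SERIAL_NUMBER attribute) are hypothetical.

    # Sketch of steps 1-4 with the Python "cryptography" package.
    # In production the private key is generated on-chip and never exported.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Step 1: fresh key pair (in reality created inside Smith's chip)
    device_key = ec.generate_private_key(ec.SECP256R1())

    # Step 2: export only the public key
    device_pub = device_key.public_key()

    # Steps 3 and 4: Organisation A seals N into a certificate around the public key
    issuer_key = ec.generate_private_key(ec.SECP256R1())  # A's own signing key
    issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Organisation A")])
    subject = x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "Smith"),
        x509.NameAttribute(NameOID.SERIAL_NUMBER, "N-1234-5678"),  # the reference number N
    ])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(device_pub)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3 * 365))
        .sign(issuer_key, hashes.SHA256())
    )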

[Diagram: the pedigree triangle binding cardholder Smith, the reference number N and the personal device key]

The result of coordinating these processes and technologies is a logical triangle that inextricably binds cardholder Smith to their reference number N and to a specific personally controlled device. The certificate signed by organisation A attests to Smith’s ownership of both N and a particular key unique to the device. Keys generated inside the chip are retained internally, never divulged to outsiders. It is impossible to copy the private key to another device, so the triangle cannot be cloned, reproduced or counterfeited.

Note that this technique lies at the core of the EMV "Chip-and-PIN" system where the smart payment card digitally signs cardholder and transaction data, rendering it immune to replay, before sending it to the merchant terminal. See also my 2012 paper Calling for a uniform approach to card fraud, offline and on. Now we should generalise notarised personal data and digitally signed transactions beyond Card-Present payments into as much online business as possible.

Restoring privacy and consumer control

When Smith wants to present their personal number in an electronic transaction, instead of simply copying N out of memory (at which point it would lose its pedigree), Smith’s transaction software digitally signs the transaction using the certificate containing N. With standard security software, any third party can then verify that the transaction originated from a genuine chip holding the unique key certified by A as matching the number N.
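
Purely as a sketch of that flow, with a software-generated key standing in for the on-chip key and an invented transaction format:

    # Sketch: the device signs a transaction; any third party verifies it
    # against the public key certified by Organisation A. The key here is
    # software-generated only for illustration.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    device_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the chip's key

    transaction = b"ref:N-1234-5678;pay:10.00;payee:merchant-42;nonce:83f1c2"
    signature = device_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # A verifier would extract this public key from the certificate signed by A
    verifier = device_key.public_key()
    try:
        verifier.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
        print("Genuine: signed by the chip certified by A as holding N")
    except InvalidSignature:
        print("Reject: signature does not match; no pedigree")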

Note that N doesn’t have to be a customer number or numeric identifier; it could be any personal data, such as a biometric template or a package of medical information like an allergy alert.

The capability to manage multiple key pairs and certificates, and to sign transactions with a nominated private key, is increasingly built into smart devices today. By narrowing down what you need to know about someone to a precise customer reference number or similar personal data item, we will reduce identity theft and fraud while radically improving privacy. This sort of privacy enhancing technology is the key to a safe Internet of Things, and fortunately is now widely available.

Posted in Smartcards, Security, PKI, Payments, Identity, Fraud, Biometrics

Engaging engineers in privacy

Updated from original post January 2013.

I have come to believe that a systemic conceptual shortfall affects typical technologists’ thinking about privacy. It may be that engineers tend to take literally the well-meaning slogan that “privacy is not a technology issue”. And I say this in all seriousness. We are forever sugar-coating privacy, urging that "privacy is good for business". It's naive. There are plenty of extremes where - sadly - some businesses do very well ignoring privacy. In the mainstream, many organisations struggle to resolve privacy with other competing demands, like security, usability, cost and time to market.

I believe the best thing we can do for privacy systemically is to treat it like another one of the many often conflicting requirements faced by designers and engineers, and improve the tools they have to resolve the right balance. This is what engineers do.

Online, we’re talking about data privacy, or data protection, but systems designers bring to work a spectrum of personal outlooks about privacy in the human sphere. Yet what matters is the precise wording of data privacy law, like Australia’s Privacy Act. To illustrate the difference, here’s the sort of experience I’ve had time and time again.

During the course of conducting a PIA in 2011, I spent time with the development team working on a new government database. These were good, senior people, with sophisticated understanding of information architecture, and they’d received in-house privacy training. But they harboured restrictive views about privacy. An important clue was the way they habitually referred to “private” information rather than Personal Information (or equivalently, Personally Identifiable Information, PII). After explaining that Personal Information is the operative term in Australian legislation, and reviewing its definition as essentially any information about an identifiable person, we found that the team had not appreciated the extent of the PII in their system. They had overlooked that most of their audit logs collect PII, albeit indirectly and automatically, and that information about clients in their register provided by third parties was also PII (despite it being intuitively ‘less private’ by virtue of originating from others).

I attributed these blind spots to the developers’ loose framing of “private” information. Online and in privacy law alike, things are very crisp. The definition of PII as any data relating to an individual whose identity is readily apparent sets a low bar, embracing a great many data classes and, by extension, informatics processes. It might be counter-intuitive that PII originating from so many places (even the public domain) falls under privacy regulations, yet the definition of PII is clear cut and readily factored into systems analysis. After getting that, the team engaged in the PIA with fresh energy, and we found and rectified several privacy risks that had gone unnoticed.

Here are some more of the recurring misconceptions I’ve noticed over the past decade:


  • “Personal” Information is sometimes taken to mean especially delicate information such as payment card details, rather than any information pertaining to an identifiable individual; see also this exchange with US data breach analyst Jake Kouns over the Epsilon incident in 2011 in which tens of millions of user addresses were taken from a bulk email house;
  • the act of collecting PII is sometimes regarded only in relation to direct collection from the individual concerned; technologists can overlook that PII provided by a third party to a data custodian is nevertheless being collected by the custodian; likewise technologists may not appreciate that generating PII internally, through event logging for instance, also represents collection.

These instances and others show that many ICT practitioners suffer important gaps in their understanding. Security professionals in particular may be forgiven for thinking that most legislated Privacy Principles are legal technicalities irrelevant to them, for generally only one of the principles in any given set is overtly about security. Yet every one of the privacy principles in any data protection regime is impacted by information technology and security practices; see Mapping Privacy requirements onto the IT function, Privacy Law & Policy Reporter, v10.1 & 10.2, 2003. I believe the gaps in the privacy knowledge of ICT practitioners are not random but systemic, probably resulting from privacy training for non-privacy professionals not being properly integrated with their particular world views.

To properly deal with data privacy, ICT practitioners need to have privacy framed in a way that leads to objective design requirements. Luckily there already exist several unifying frameworks for systematising the work of development teams. One tool that resonates strongly with data privacy practice is the Threat & Risk Assessment (TRA).

A TRA is for analysing infosec requirements and is widely practiced in the public and private sectors in Australia. There are a number of standards that guide the conduct of TRAs, such as ISO 31000. A TRA is used to systematically catalogue all foreseeable adverse events that threaten an organisation’s information assets, identify candidate security controls to mitigate those threats, and prioritise the deployment of controls to bring all risks down to an acceptable level. The TRA process delivers real world management decisions, understanding that non zero risks are ever present, and that no organisation has an unlimited security budget.

The TRA exercise is readily extensible to help Privacy by Design. A TRA can expressly incorporate privacy as an aspect of information assets worth protecting, alongside the conventional security qualities of confidentiality, integrity and availability ("C.I.A.").

[Table: asset inventory extending C.I.A. with privacy dimensions, from Lockstep's AusCERT 2013 presentation "Designing Privacy by Design"]

A crucial subtlety here is that privacy is not the same as confidentiality, yet they are frequently conflated. A fuller understanding of privacy leads designers to consider the Collection, Use, Disclosure and Access & Correction principles, over and above confidentiality when they analyse information assets. The table above illustrates how privacy related factors can be accounted for alongside “C.I.A.”. In another blog post I discuss the selection of controls to mitigate privacy threats, within a unified TRA framework.
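
To make the idea concrete, here is a hypothetical sketch, in Python, of what a privacy-extended threat register might look like. The asset, the ratings and the candidate controls are all invented for illustration; a real TRA would use whatever methodology and scales the organisation has standardised on.

    # Hypothetical privacy-extended TRA register. Assets carry privacy
    # dimensions (Collection, Use, Disclosure) alongside C.I.A.; threats
    # are rated, and controls are prioritised by descending risk.
    from dataclasses import dataclass, field

    @dataclass
    class Asset:
        name: str
        cia: dict      # e.g. {"confidentiality": "medium", "integrity": "high", ...}
        privacy: dict  # e.g. {"collection": "indirect via logs", "use": "...", ...}

    @dataclass
    class Threat:
        asset: Asset
        event: str       # a foreseeable adverse event
        likelihood: int  # 1 (rare) .. 5 (almost certain)
        impact: int      # 1 (minor) .. 5 (severe)
        controls: list = field(default_factory=list)

        @property
        def risk(self) -> int:
            return self.likelihood * self.impact

    audit_logs = Asset(
        name="Web server audit logs",
        cia={"confidentiality": "medium", "integrity": "high", "availability": "low"},
        privacy={"collection": "indirect and automatic; IP addresses and user IDs are PII",
                 "use": "troubleshooting only", "disclosure": "none intended"},
    )

    register = [
        Threat(audit_logs, "Logs kept indefinitely, breaching Collection Limitation",
               likelihood=4, impact=3, controls=["90-day retention", "log minimisation"]),
        Threat(audit_logs, "Logs repurposed for marketing analytics without consent",
               likelihood=2, impact=4, controls=["use-limitation policy", "access controls"]),
    ]

    # Prioritise mitigation by descending risk, as a TRA would
    for t in sorted(register, key=lambda t: t.risk, reverse=True):
        print(f"risk={t.risk:2}  {t.event} -> {t.controls}")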

And in this post I look at how the definitional uncertainties in privacy and the unfolding identifiability of PII should not cause security professionals much anxiety - because they're trained to deal with uncertainties and likelihoods.

We continue to actively research the closer integration of security and privacy practices.

Posted in Security, Privacy

It's not too late for privacy

Have you heard the news? "Privacy is dead!"

The message is urgent. It's often shouted in prominent headlines, with an implied challenge. The new masters of the digital universe urge the masses: C'mon, get with the program! Innovate! Don't be so precious! Don't you grok that Information Wants To Be Free? Old fashioned privacy is holding us back!

The stark choice posited between privacy and digital liberation is rarely examined with much intellectual rigor. Often, "privacy is dead" is just a tired fatalistic response to the latest breach or eye-popping digital development, like facial recognition, or a smartphone's location monitoring. In fact, those who earnestly assert that privacy is over are almost always trying to sell us something, be it sneakers, or a political ideology, or a wanton digital business model.

Is it really too late for privacy? Is the "genie out of the bottle"? Even if we accepted the ridiculous premise that privacy is at odds with progress, no, it's not too late, for a couple of reasons. Firstly, the pessimism (or barely disguised commercial opportunism) generally confuses secrecy with privacy. And secondly, frankly, we ain't seen nothin' yet!

Conflating privacy and secrecy

Technology certainly has laid us bare. Behavioral modeling, facial recognition, Big Data mining, natural language processing and so on have given corporations X-Ray vision into our digital lives. While exhibitionism has been cultivated and normalised by the informopolists, even the most guarded social network users may be defiled by data prospectors who, without consent, upload their contact lists, pore over their photo albums, and mine their shopping histories.

So yes, a great deal about us has leaked out into what some see as an infinitely extended neo-public domain. And yet we can be public and retain our privacy at the same time. Just as we have for centuries of civilised life.

It's true that privacy is a slippery concept. The leading privacy scholar Daniel Solove once observed that "Privacy is a concept in disarray. Nobody can articulate what it means."

Some people seem defeated by privacy's definitional difficulties, yet information privacy is simply framed, and corresponding data protection laws are elegant and readily understood.

Information privacy is basically a state where those who know us are restrained in what they do with the knowledge they have about us. Privacy is about respect, and protecting individuals against exploitation. It is not about secrecy or even anonymity. There are few cases where ordinary people really want to be anonymous. We actually want businesses to know - within limits - who we are, where we are, what we've done and what we like ... but we want them to respect what they know, to not share it with others, and to not take advantage of it in unexpected ways. Privacy means that organisations behave as though it's a privilege to know us. Privacy can involve businesses and governments giving up a little bit of power.

Many have come to see privacy as literally a battleground. The grassroots Cryptoparty movement came together around the heady belief that privacy means hiding from the establishment. Cryptoparties teach participants how to use Tor and PGP, and they spread a message of resistance. They take inspiration from the Arab Spring where encryption has of course been vital for the security of protestors and organisers. One Cryptoparty I attended in Sydney opened with tributes from Anonymous, and a number of recorded talks by activists who ranged across a spectrum of political issues like censorship, copyright, national security and Occupy.

I appreciate where they're coming from, for the establishment has always overplayed its security hand, and run roughshod over privacy. Even traditionally moderate Western countries have governments charging like china shop bulls into web filtering and ISP data retention, all in the name of a poorly characterised terrorist threat. When governments show little sympathy for netizenship, and absolutely no understanding of how the web works, it's unsurprising that sections of society take up digital arms in response.

Yet going underground with encryption is a limited privacy stratagem, because do-it-yourself encryption is incompatible with the majority of our digital dealings. The most nefarious and least controlled privacy offences are committed not by government but by Internet companies, large and small. To engage fairly and squarely with businesses, consumers need privacy protections, comparable to the safeguards against unscrupulous merchants we enjoy, uncontroversially, in traditional commerce. There should be reasonable limitations on how our Personally Identifiable Information (PII) is used by all the services we deal with. We need department stores to refrain from extracting health information from our shopping habits, merchants to not use our credit card numbers as customer reference numbers, shopping malls to not track patrons by their mobile phones, and online social networks to not x-ray our photo albums by biometric face recognition.

Encrypting everything we do would only put it beyond reach of the companies we obviously want to deal with. Look for instance at how the cryptoparties are organised. Some cryptoparties manage their bookings via the US event organiser Eventbrite, to which attendees have to send a few personal details. So ironically, when registering for a cryptoparty, you cannot use encryption!

The central issue is this: going out in public does not neutralise privacy. It never did in the physical world and it shouldn't be the case in cyberspace either. Modern society has long rested on balanced consumer protection regulations to curb the occasional excesses of business and government. Therefore we ought not to respond to online privacy invasions as if the digital economy is a new Wild West. We should not have to hide away if privacy is agreed to mean respecting the PII of customers, users and citizens, and restraining what data custodians do with that precious resource.

Data Mining and Data Refining

We're still in the early days of the social web, and the information innovation has really only just begun. There is incredible value to be extracted from mining the underground rivers of data coursing unseen through cyberspace, and refining that raw material into Personal Information.

Look at what the data prospectors and processors have managed to do already.


  • Facial recognition transforms vast stores of anonymous photos into PII, without consent, and without limitation. Facebook's deployment of biometric technology was covert and especially clever. For years they encouraged users to tag people they knew in photos. It seemed innocent enough but through these fun and games, Facebook was crowd-sourcing the facial recognition templates and calibrating their constantly evolving algorithms, without ever mentioning biometrics in their privacy policy or help pages. Even now Facebook's Data Use Policy is entirely silent on biometric templates and what they allow themselves to do with them.

    It's difficult to overstate the value of facial recognition to businesses like Facebook when they have just one asset: knowledge about their members and users. Combined with image analysis and content addressable graphical memory, facial recognition lets social media companies work out what we're doing, when, where and with whom. I call it piracy. Billions of everyday images have been uploaded over many years by users for ostensibly personal purposes, without any clue that technology would emerge to convert those pictures into a commercial resource.

    Third party services like Facedeals are starting to emerge, using Facebook's photo resources for commercial facial recognition in public. And the most recent facial recognition entrepreneurs like Name Tag App boast of scraping images from any "public" photo databases they can find. But as we shall see below, in many parts of the world there are restrictions on leveraging public-facing databases, because there is a legal difference between anonymous data and identified information.

  • Some of the richest stores of raw customer data are aggregated in retailer databases. The UK supermarket giant Tesco for example is said to hold more data about British citizens than the government does. For years of course data analysts have combed through shopping histories for marketing insights, but their predictive powers are growing rapidly. An infamous example is Target's covert development of methods to identify customers who are pregnant based on their buying habits. Some Big Data practitioners seem so enamoured with their ability to extract secrets from apparently mundane data, they overlook that PII collected indirectly by algorithm is subject to privacy law just as if it was collected directly by questionnaire. Retailers need to remember this as they prepare to parlay their massive loyalty databases into new financial services ventures.
  • Natural Language Processing (NLP) is the secret sauce in Apple's Siri, allowing her to take commands and dictation. Every time you dictate an email or a text message to Siri, Apple gets hold of telecommunications content that is normally out of bounds to the phone companies. Siri is like a free PA that reports your daily activities back to the secretarial agency. There is no mention at all of Siri in Apple's Privacy Policy, despite the limitless collection of intimate personal information.
  • And looking ahead, Google Glass in the privacy stakes will probably surpass both Siri and facial recognition. If actions speak louder than words, imagine the value to Google of seeing through Glass exactly what we do in real time. Digital companies wanting to know our minds won't need us to expressly "like" anything anymore; they'll be able to tell our preferences from our unexpurgated behaviours.

The surprising power of data protection regulations

There's a widespread belief that technology has outstripped privacy law, yet it turns out technology neutral data privacy law copes well with most digital developments. OECD privacy principles (enacted in over 100 countries) and the US FIPPs (Fair Information Practice Principles) require that companies be transparent about what PII they collect and why, and limit the ways in which PII is used for unrelated purposes.

Privacy advocates can take heart from several cases where existing privacy regulations have proven effective against some of the informopolies' trespasses. And technologists and cynics who think privacy is hopeless should heed the lessons.


  • Google StreetView cars, while they drive up and down photographing the world, also collect Wi-Fi hub coordinates for use in geo-location services. In 2010 it was discovered that the StreetView software was also collecting unencrypted Wi-Fi network traffic, some of which contained Personal Information like user names and even passwords. Privacy Commissioners in Australia, Japan, Korea, the Netherlands and elsewhere found Google was in breach of their data protection laws. Google explained that the collection was inadvertent, apologised, and destroyed all the wireless traffic that had been gathered.

    The nature of this privacy offence has confused some commentators and technologists. Some argue that Wi-Fi data in the public domain is not private, and "by definition" (so they like to say) categorically could not be private. Accordingly some believed Google was within its rights to do whatever it liked with such found data. But that reasoning fails to grasp the technicality that Data Protection laws in Europe, Australia and elsewhere do not essentially distinguish “public” from "private". In fact the word “private” doesn’t even appear in Australia’s “Privacy Act”. If data is identifiable, then privacy rights generally attach to it irrespective of how it is collected.

  • Facebook photo tagging was ruled unlawful by European privacy regulators in mid 2012, on the grounds it represents a collection of PII (by the operation of the biometric matching algorithm) without consent. By late 2012 Facebook was forced to shut down facial recognition and tag suggestions in the EU. This was quite a show of force over one of the most powerful companies of the digital age. More recently Facebook has started to re-introduce photo tagging, prompting the German privacy regulator to reaffirm that this use of biometrics is counter to their privacy laws.

It's never too late

So, is it really too late for privacy? Outside the United States at least, established privacy doctrine and consumer protections have taken technocrats by surprise. They have found, perhaps counter intuitively, that they are not as free as they thought to exploit all personal data that comes their way.

Privacy is not threatened so much by technology as it is by sloppy thinking and, I'm afraid, by wishful thinking on the part of some vested interests. Privacy and anonymity, on close reflection, are not the same thing, and we shouldn't want them to be! It's clearly important to be known by others in a civilised society, and it's equally important that those who do know us, are reasonably restrained in how they use that knowledge.

Posted in Social Networking, Social Media, Privacy

Hacking over breakfast

The other morning, out of the blue, a sort of mini DEF CON came to a business breakfast in Sydney, with a public demonstration of how to crack the Australian government's logons for businesses.

Hardware infosec specialists ICT Security convened a breakfast meeting ostensibly to tell people about Bitcoin. The only clue they had a bigger agenda was buried in the low key byline "How could Bitcoin technology compromise your password database security?". I confess I missed the sub-plot altogether.

After a wide-ranging introduction to all things Bitcoin - including the theory of money, random numbers, Block Chains, ASICs and libertarianism - an ICT Security architect stepped up to talk about AusKEY, the Australian Government B2G Single Sign On system. And what was the Bitcoin connection? Well, it happens that the technology needed for Bitcoin mining - namely affordable, high-performance custom chips for number crunching - is exactly what's needed to mount brute-force attacks on hashed passwords. And so ICT Security went on to demonstrate that the typical AusKEY password can be easily cracked. Moreover, they also showed off security holes in the AusKEY Java code where 'master' key details can be found in the clear.
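
To see why mining hardware matters here, consider a minimal Python sketch of the underlying attack: brute-forcing a fast, unsalted hash. The four-character password and small character set below are deliberately toy-sized; Bitcoin-grade ASICs run the same SHA-256 primitive at billions or trillions of hashes per second, which is what brings far longer passwords within reach.

    # Toy demonstration of brute-forcing a fast, unsalted password hash.
    # Mining ASICs run this same SHA-256 primitive many orders of magnitude
    # faster than this loop, which is the crux of the attack.
    import hashlib
    import itertools
    import string

    stolen = hashlib.sha256(b"zq3x").hexdigest()  # as if lifted from a breached database

    charset = string.ascii_lowercase + string.digits

    def crack(target_hex, max_len=4):
        for length in range(1, max_len + 1):
            for combo in itertools.product(charset, repeat=length):
                guess = "".join(combo).encode()
                if hashlib.sha256(guess).hexdigest() == target_hex:
                    return guess.decode()
        return None

    print("cracked:", crack(stolen))
    # The standard defence is a salted, deliberately slow KDF (bcrypt, scrypt,
    # PBKDF2, Argon2), which turns each guess from nanoseconds into milliseconds.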

The company says it has brought these vulnerabilities to the government's attention.

They said that their technique could defeat passwords as long as 10 mixed characters, which is longer than conventional password safety advice recommends.

It's not entirely clear what ICT Security was seeking to achieve by now demonstrating the attack in public.

White hat exposés are a keen feature of the security ecosystem, and very problematic. In Australia, such exercises are often met with criminal investigation. For example, in 2011 First State Super reported a young man to police after he sent them evidence of how the fund's client logons could be guessed. Early this year, Public Transport Victoria called in the law after a self-professed "security researcher" reported (at first privately) a simple hack to expose travellers' confidential details. And merely being in possession of evidence of an alleged cyber break-in was enough to get journalist Ben Grubb arrested by Queensland Police in 2011. So alleged hacking can attract zealous policing, casting a wide net.

Government security managers will likely be smarting about the adverse AusKEY publicity. Just three months ago the hacker and writer Nik Cubrilovic published a raft of weaknesses in "MyGov", a Single Sign On for individuals in Australia's social security system. In classic style, Cubrilovic first raised his findings privately with the Department of Human Services, but when he got no satisfaction, he went public. At this stage, I don't know if the government has taken the MyGov matter further.

For mine, the main lesson of the morning's demonstration is that single factor government authentication is obsolete. It is not good enough for citizens to be brought into e-government systems using twenty year old password security. The world is moving on, and fast; see the advances being made by the FIDO Alliance to standardise Multi Factor Authentication.

In fact the AusKEY system actually offers an optional hardware USB key, but it hasn't been popular. That must change. E-government is way too important for single factor authentication. Which is probably the name of ICT Security's game.

Posted in Security

Getting the security-privacy balance wrong

National security analyst Dr Anthony Bergin of the Australian Strategic Policy Institute wrote of the government’s data retention proposals in the Sydney Morning Herald of August 14. I am a privacy advocate who in fact accepts that law enforcement needs new methods to deal with terrorism. I myself accept there is a case for greater data retention in order to weed out terrorist preparations, but I reject Bergin’s patronising call that “Privacy must take a back seat to security”. He speaks soothingly of balance yet he rejects privacy out of hand. As such his argument for balance is anything but balanced.

Suspicions are rightly raised by the murkiness of the Australian government’s half-baked data retention proposals and by our leaders’ excruciating inability to speak cogently even about the basics. They bandy about metaphors for metadata that are so bad, they smack of misdirection. Telecommunications metadata is vastly more complex than addresses on envelopes; for one thing, the dynamic IP addresses of cell phones mean that for police to tell who made a call requires far more data than ASIO and AFP are letting on (more on this by Internet expert Geoff Huston here).

The way authorities jettison privacy so casually is of grave concern. Either they do not understand privacy, or they’re paying lip service to it. In truth, data privacy is simply about restraint. Organisations must explain what personal data they collect, why they collect it, who else gets to access the data, and what they do with it. These principles are not at all at odds with national security. If our leaders are genuine in working with the public on a proper balance of privacy and security, then long-standing privacy principles about proportionality, transparency and restraint provide the perfect framework in which to hold the debate. Ed Snowden himself knows this; people should look beyond the trite hero-or-pariah characterisations and listen to his balanced analysis of national security and civil rights.

Cryptographers have a saying: There is no security in obscurity. Nothing is gained by governments keeping the existence of surveillance programs secret or unexplained, but the essential trust of the public is lost when their privacy is treated with contempt.

Posted in Trust, Security, Privacy

Revisiting software professionalism

The ongoing debate (or spat) on Twitter about the "No Estimates" movement had me reaching for the archives.

Some now say that being forced to provide estimates is somehow counter-productive for software developers. I've long thought about programming productivity, and the paradox that software is too soft.

Some programmers want special treatment. In effect, "No Estimates" proponents are claiming their particular work is not amenable to traditional metrics and management. Now in a way, they're right; there is as yet no such thing as software "engineering". There are none of the handbooks or standards that feature in chemical, mechanical and electrical engineering. But nevertheless, if a programmer knows what they're doing - if they know their subject matter and how their code behaves - then providing estimates is not all that difficult. Disclaiming one's ability to predict how long a task will take is a weird way to try and engage with the business.

Software is definitely a difficult medium. It's highly non-linear, and breeds amazing complexity. But a great many of today's problems, like the recent #gotofail and Heartbleed scandals, are manifestly due to chaotic development practices.

As such, programmers are part of the problem.

I once wrote a letter to the editor of ComputerWorld about this ...


IT Governance

Yes indeed, IT is made the scapegoat for a great many project disasters (ComputerWorld 28 September, 2005, page 1). But it may prove fruitless to force orthodox project management and corporate governance methodologies onto big IT projects. And at the same time, IT "professionals" are not entirely free of blame.

So the KPMG Global IT Project Management Survey found that the vast majority of technology projects run over budget. In the main, "technology" means software, whether we build or buy. The "software crisis" - the systemic inability to estimate software projects accurately and to deliver what's promised - is about 40 years old. And it's more subtle than KPMG suggests in blaming corporate governance. It is fashionable at the moment to look to governance to rectify business problems but in this case, it really is a technology issue.

Software project management truly is different from all other technical fields, for software does not obey the laws of nature. Building skyscrapers, tunnels, dams and bridges is relatively predictable. You start with site surveys and foundations, erect a sturdy framework, fill in the services, fit it out, and take away the scaffolding. Specifications don't change much over a multi-year project, and the tools don't change at all.

But with software, you can start a big project anywhere you like, and before the spec is signed off. Metaphorically speaking, the plumbing can go in before the framework. Hell, you don't even need a framework! Nothing physical holds a software system up.

And software coding is fast and furious. In a single day, a programmer can create a system more complex than an airport that might take 10,000 person-years to build. So software development is fun. Let's be honest: it's why the majority of programmers chose their craft in the first place.

Ironically it's the rapidity of programming that contributes the most to project overruns. We only use software in information systems because it's fast to write and easy to modify. So the temptation is irresistible to keep specs fluid and to change requirements at any time. Famously, the differences between prototype, "beta release" and product are marginal and arbitrary. Management and marketing take advantage of this fact, and unfortunately software engineers themselves yield too readily to the attraction of the last minute tweak.

The same dynamics of course afflict third party software components. They tend to change too often and fail to meet expectations, making life hell for IT systems integrators.

It won't be until software engineering develops the tools, standards and culture of a true profession that any of this will change. Then corporate governance will have something to govern in big technology projects. Meanwhile, programmers will remain more like playwrights than engineers, and just as manageable.

Posted in Software engineering, Management theory

BlackBerry Security Summit, 29 July 2014

Summary: BlackBerry is poised for a fresh and well differentiated play in the Internet of Things, with its combination of handset hardware security, its uniquely rated QNX operating system kernel, and its experience with the FIDO device authentication protocols.

To put it plainly, BlackBerry is not cool.

And neither is security.

But maybe two wrongs can make a right, in terms of a compelling story. BlackBerry's security story has always been strong, it's getting stronger, and it could save them.

Today I attended the BlackBerry Security Summit in New York City (Disclosure: my travel and accommodation were paid by BlackBerry). The event was announced very recently; none of my colleagues had heard of it. So what was the compelling need to put on a security show in New York? It turned out to be the 9:00am announcement that BlackBerry is acquiring the German voice security specialists Secusmart. BlackBerry and Secusmart have worked together for a long time; their stated aim is to put a real secure phone in the "hand of every President and every Chancellor".

Secusmart CEO Hans-Christoph Quelle is a forceful champion of voice security; in this age of evidently routine spying by states and competitors alike, there is enormous demand building for counter-surveillance in telephony and messaging. Secusmart is also responsible for the highly rated Micro SD cards that BlackBerry proudly use as removable security modules in their handsets. And this is where the Secusmart tie-up really resonates for me. It comes hot on the heels of last week's Cloud Security Summit, where there was so much support for personal Hardware Security Modules (HSMs), be they Micro SD cards, USB keys, NFC Secure Elements, the good old "Trusted Platform Module" (TPM) or any number of proprietary chip sets.

Today's event also showcased BlackBerry's QNX division (acquired in 2010) and its secure operating system. CEO John Chen reckons that the software in 50% of connected cars runs on the QNX OS (and in high reliability settings like power stations, wind turbines and even gaming machines, the penetration is even higher). And so he is positioning BlackBerry as a major player in the Internet of Things.

We heard from QNX founder Dan Dodge about the elegance of their system. At just 100,000 lines of code, Dodge stressed that his team knows the software inside-out. There is not a single line of code in their OS that QNX did not write themselves. In contrast, such mastery is utterly impossible in the 15,000,000 lines that make up Linux or the estimated 50-70 million lines in Windows. It happens that I've recently lamented the parlous state of software quality and the need to return to first principles security. So I am on Dan Dodge's wavelength.

BlackBerry's security people had a little bit to say about identity as well, and apparently more's to come. For now, they are flagging that with 250 million customers in their messaging system, BBM represents "one of the biggest identity systems in the world". And as such the company does plan to "federate" it somehow. They reminded us at the same time of the BlackBerry Cloud slated for launch in December.

Going forward, the importance of strong, physical Two Factor Authentication for accessing the cloud is almost a given now. And the smartphone is fast becoming the predominant access mechanism, so the combination of secure elements, handsets and high security infrastructure is potent.

There's a lot that BlackBerry is keeping close to its chest, but for me one piece of the IoT puzzle was conspicuously absent today: the role of the FIDO Alliance protocols. After all, BlackBerry has been a FIDO Board Member for a long time. It seems to me that FIDO's protocols for exchanging verified authentication signals and information about devices should be an important element of BlackBerry's play in both its software infrastructure and its devices.

In closing, I'll revisit the very first thing we heard at today's event. It was a video testimonial, telling us "If you need nuclear security, you need BlackBerry". As I said, security really isn't cool. Jazzing up the company's ability to deliver "nuclear" grade to demanding clients is actually not the right message. Security in the Internet of Things -- and therefore in everyday life -- may turn out to be just as important.

We basically know that nuclear power plants are inherently risky; we know that planes will occasionally fall out of the sky. Paradoxically, the community has a reasonable appetite for risk and failures in very complex systems like those. Individually and/or collectively we have decided we just can't live without electricity and travel and so we've come to settle on a roughly acceptable finite cost in terms of failures. But when the mundanities of life go digital, the tolerance of failure will drop. When our cars and thermostats and light switches are connected to the Internet, and when a bug or a script kiddie's stunt can soon send whole neighbourhoods into a spin, consumers won't stand for it.

So the very best security we can currently engineer is in fact going to be necessary at scale for smart appliances, wearables, connected homes, smart meters and networked cars. We need a different gauge for this type of security, and it's going to be very tough to engineer and deploy economically. But right now, with its deep understanding of dependable operating systems and its commitment to high quality device hardware, it seems to me BlackBerry has a head-start in the Internet of Things.

Posted in Software engineering, Security, Identity, Cloud