Measuring anonymity

As we head towards 2014, de-identification of personal data sets is going to be a hot issue. I saw several things at last week’s Constellation Connected Enterprise conference (CCE) that will make sure of this!

First, recall that in Australia a new definition of Personal Information (PI or “PII”) means that anonymous data that can potentially be re-identified in future may have to be classified as PII today. I recently discussed how security and risk practitioners can deal with the uncertainty in re-identifiability.

And there’s a barrage of new tracking, profiling and interior geo-location technologies (like Apple’s iBeacon) which typically come with a promise of anonymity. See for example Tesco’s announcement of face scanning for targeting adverts at their UK petrol stations.

The promise of anonymity is crucial, but it is increasingly hard to keep. Big Data techniques that join de-identified information to other data sets are able to find correlations and reverse the anonymisation process. The science of re-identification started with the work of Dr Latanya Sweeney, who famously identified a former governor and his medical records using zip codes and electoral roll data; more recently we’ve seen DNA “hackers” who can unmask anonymous DNA donors by joining genomic databases to public family tree information.
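The mechanics of such a linkage attack are simple, which is what makes it so potent. The following toy sketch (all data invented; the column names are my own illustration, echoing Sweeney's zip code / birth date / sex quasi-identifiers) joins a "de-identified" medical table to a public voter roll:

```python
# Hypothetical linkage attack: an "anonymised" medical table (names
# removed) is joined to a public voter roll on shared quasi-identifiers.
# All records here are invented for illustration.

medical = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1962-03-12", "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1962-03-12", "sex": "M"},
]

def link(medical, voter_roll):
    """Re-identify medical records by joining on quasi-identifiers."""
    index = {(v["zip"], v["dob"], v["sex"]): v["name"] for v in voter_roll}
    results = []
    for m in medical:
        key = (m["zip"], m["dob"], m["sex"])
        if key in index:
            results.append({"name": index[key], **m})
    return results

for row in link(medical, voter_roll):
    print(row["name"], "->", row["diagnosis"])
```

Neither data set alone names a patient, yet the join names them all. That is the whole trick.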

At CCE we saw many exciting Big Data developments, which I’ll explore in more detail in coming weeks. Business Intelligence as-a-service is expanding rapidly, and is being flipped by innovative vendors to align (whether consciously or not) with customer centric Vendor Relationship Management models of doing business. And there are amazing new tools for enriching unstructured data, like Paxata’s newly launched Adaptive Data Preparation Platform. More to come.

With the ability to re-identify data comes Big Responsibilities. I believe that to help businesses meet their privacy promises, we’re going to need new tools to measure de-identification and hence gauge the risk of re-identification. It seems that some new generation data analytics products will allow us to run what-if scenarios to help understand the risks.
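One candidate measure for such tools is k-anonymity: a data set is k-anonymous if every combination of quasi-identifier values is shared by at least k records. A minimal sketch of how a what-if scenario might score a release (my own illustration, not any vendor's product):

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the smallest equivalence-class size over the given
    quasi-identifier columns. Higher k means harder to single out
    any one individual; k = 1 means someone is uniquely exposed."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(counts.values())

# Invented sample: zip codes and ages already generalised into ranges.
rows = [
    {"zip": "021**", "age": "40-49", "diagnosis": "asthma"},
    {"zip": "021**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "021**", "age": "50-59", "diagnosis": "flu"},
]

print(k_anonymity(rows, ["zip", "age"]))  # the lone 50-59 record gives k = 1
```

Running the same check under different generalisation choices (coarser zip codes, wider age bands) is exactly the kind of what-if analysis that would let a business quantify the risk behind its anonymity promise.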

Just before CCE I also came across some excellent awareness raising materials from Voltage Security in Cupertino. Voltage CTO Terence Spies shared with me his “Deidentification Taxonomy” reproduced here with his kind permission. Voltage are leaders in Format Preserving Encryption and Tokenization — typically used to hide credit card numbers from thieves in payment systems — and they’re showing how the tools may be used more broadly for de-identifying databases. I like the way Terence has characterised the reversibility (or not) of de-identification approaches, and further broken out various tokenization technologies.
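The reversibility distinction is worth making concrete. A toy sketch (my own code, not Voltage's products): a token vault is reversible by design, because an authorised party can map the token back to the original value, while a keyed hash yields a stable pseudonym with no way back.

```python
import hashlib
import hmac
import secrets

class TokenVault:
    """Reversible de-identification: random tokens, with a protected
    vault mapping each token back to its original value."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, value):
        token = secrets.token_hex(8)   # token carries no information about value
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]      # reversal requires access to the vault

def irreversible_token(value, key):
    """Irreversible de-identification: an HMAC produces the same
    pseudonym for the same input, but cannot be mapped back."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
assert vault.detokenize(t) == "4111111111111111"      # reversible

p = irreversible_token("4111111111111111", b"demo-key")  # no path back to the PAN
```

The choice between the two is a policy decision as much as a technical one: reversible schemes support authorised re-identification (say, for billing), while irreversible ones give a stronger anonymity claim.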

Reference: Voltage Security. Reproduced with permission.

These are the foundations of the important new science of de-identification. Privacy engineers need to work hard at understanding re-identification, so that consumers do not lose faith in the important promises made that so much of the data collected from their daily movements through cyber space is indeed anonymous.