
The Constellation Research Disruption Checklist for 2015

The Constellation Research analyst team has assembled a "year end checklist", offering suggestions designed to enable you to take better control of your digital strategy in 2015. We offer these actions to help you dominate "digital disruption" in the new year.

1. Matrix Commerce: Scrub your data

Guy Courtin

When it comes to Matrix Commerce, companies need to focus on the basics first. What are the basics? Cleaning up and getting your data in order. Much is discussed about the evolution of supply chains and the surrounding technologies. However, these solutions are only as useful as the data that feeds them. Many CxOs we have spoken to have discussed the need to focus on cleaning up their data. First, work on a data audit to identify the most important sources of data for your Matrix Commerce efforts. Second, focus on the systems that can process and make sense of this data. Finally, determine the systems and business processes that will be optimized by these improvements. Matrix Commerce starts with the right data; the systems and business processes layered on top are only as useful as the data beneath them. CxOs must continue to organize and clean their data house.
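
As a hypothetical illustration of the data audit step, a sketch like the following (assuming pandas and a made-up order table with columns order_id, sku, customer_id and order_date) can surface basic hygiene problems such as duplicates, gaps and unparseable dates before any downstream system is blamed:

```python
import pandas as pd

def audit_orders(df: pd.DataFrame) -> dict:
    """Rough data-hygiene audit for a hypothetical order table.
    The column names used here are assumptions, not a standard schema."""
    return {
        "rows": len(df),
        "duplicate_order_ids": int(df["order_id"].duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "unparseable_dates": int(
            pd.to_datetime(df["order_date"], errors="coerce").isna().sum()
        ),
    }

if __name__ == "__main__":
    # Toy frame with one duplicate id, some missing values and a bad date.
    toy = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "sku": ["A1", "B2", None, "C3"],
        "customer_id": [10, None, 12, 13],
        "order_date": ["2014-12-01", "not a date", "2014-12-03", "2014-12-04"],
    })
    print(audit_orders(toy))
```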

2. Safety and Privacy - Create your Enterprise Information Asset Inventory

Steve Wilson

In 2015, get on top of your information assets. When information is the lifeblood of your business, make sure you understand what really makes it valuable. Create (or refresh) your Enterprise Information Asset Inventory, and then think beyond the standard security dimensions of Confidentiality, Integrity and Availability. What sets your information apart from your competitors'? Is it more complete, more up-to-date, more original or harder to acquire? To maximise the value of information, innovative organisations are gauging it in terms of utility, currency, jurisdictional certainty, privacy compliance and whatever other facets matter the most in their business environment. These innovative organisations structure their information technology and security functions not merely to protect the enterprise against threats, but to deliver the right data when and where it's needed most. Shifting from defensive security to strategic informatics is the key to success in the digital economy. Learn more about creating an information asset inventory.
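
To make the inventory idea concrete, here is a minimal sketch of what a single entry might record, extending the C-I-A dimensions with the additional value facets mentioned above. The field names and the 1-5 scores are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class InformationAsset:
    """One line of a hypothetical Enterprise Information Asset Inventory.
    All ratings are illustrative 1-5 scores agreed with the business owner."""
    name: str
    owner: str
    # Classic security dimensions
    confidentiality: int
    integrity: int
    availability: int
    # Additional value dimensions discussed above
    utility: int                    # can other systems actually use it?
    currency: int                   # how up to date is it?
    jurisdictional_certainty: int   # do we know where it is held and which laws apply?
    privacy_compliance: int         # is the intended use covered by consent and regulation?
    notes: str = ""

loyalty_db = InformationAsset(
    name="Customer loyalty database",
    owner="Head of Marketing",
    confidentiality=4, integrity=5, availability=3,
    utility=2, currency=4, jurisdictional_certainty=3, privacy_compliance=2,
    notes="Complete and current, but consent does not yet cover re-purposing.",
)
```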

3. Data to Decisions - Create your Big Data Plan of Action

Andy Mulholland

Big Data is arriving at the end of the hype cycle. In 2015, real-time decision support using ‘smart data’ extracted from Big Data will manifest as a requirement for competitiveness. Digital businesses, and even plain online sellers, are all reducing reaction and response times. Enterprises have huge business and technology investments in data that need to support their daily activities better, so it's time to pivot from using Big Data for analysis and start examining how to deliver Smart Data to users and automated online systems. What is Smart Data? Well, let's say creating your organization's definition of Smart Data is priority number one in your Big Data strategy. Transformation in digital markets requires a transformation in the competitive use of Big Data. Request a meeting with Constellation's CTO in residence, Andy Mulholland.

4. Next Gen CXP - Make Customer Experience Instinctual

Natalie Petouhoff

Stop thinking of Customer Experience as a functional or departmental initiative and start thinking about experience from the customer’s point of view.

Customers don’t distinguish between departments when they require service from your organization. Customer Experience is a responsibility shared amongst all employees. However, the division of companies into functional departments with separate goals means that customer experience is often fractured. Rid your company of this ethos in 2015 by using design thinking to create a culture of cohesive customer experience.

Ensure all employees live your company mythology, employ the right customer and internal-facing technologies, collect the right data, and make changes to your strategy and products as soon as possible. Read "Five Approaches to Drive Customer Loyalty in a Digital World".

5. Future of Work - Take Advantage of Collaboration

Alan Lepofsky

Over the last few years, there has been a growing movement in the way people communicate and collaborate with their colleagues and customers, shifting from closed systems like email and chat, to more transparent tools like social networks and communities. That trend will continue in 2015 as people become more comfortable with sharing and as collaboration tools become more integrated with the business software they use to get their jobs done. Employees should familiarize themselves with the tools available to them, and learn how to pick the right tool for each of the various scenarios that make up their work day. Read "Enterprise Collaboration: From Simple Sharing to Getting Work Done".

6. Future of Work - Prepare for Demographic Shifts

Holger Mueller

In the next ten years 10% to 20% of the North American and European workforce will retire. Leaders need to understand and prepare for this tremendous shift so performance remains steady as many of the workforce's highly skilled workers retire.

To ensure a smooth transition, make sure your HCM software systems can accommodate a massive number of retirements, successions and career path developments, and new hires from external recruiting.

Constellation fully expects employment to be a seller's market going forward. People leaders should ensure their HCM systems facilitate employee motivation, engagement and retention, lest they lose their best employees to competitors. Read "Globalization, HR, and Business Model Success". Additional cloud HR case studies here and here.

7. Digital Marketing Transformation - Brand Priorities Must Convey Authenticity

Ray Wang

Brand authenticity must dominate digital and analog channels in 2015. Digital personas must not only reflect the brand, but also expand upon the analog experience. Customers love the analog experience, so deliver the same experience digitally. Brand conscious leaders must invest in the digital experience with an eye towards mass personalization at scale. While advertising plays a key role in distributing the brand message, investment in the design of digital experiences presents itself as a key area of investment for 2015. Download free executive brief: Can Brands Keep Their Promise?

8. Consumerization of IT: Use Mobile as the Gateway to Digital Transformation Projects

Ray Wang

Constellation believes that mobile is more than just the device. While smartphones and other devices are key enablers of 'mobile', design in digital transformation should take into account how these technologies address the business value and business model transformation required to deliver on breakthrough innovation. If you have not yet started your digital transformation, or are considering using mobile as an additional digital transformation point, Constellation recommends that clients assess how a new generation of enterprise mobile apps can change the business by:

  • identifying a cross-functional business problem that cannot be solved with linear thinking
  • articulating the business problem and benefit
  • showing how the solution orchestrates new experiences
  • identifying how analytics and insights can fuel the business model shift
  • exploiting full native device features
  • seeking frictionless experiences.

You'll be digital before you know it. Read "Why the Third Generation of Enterprise Mobile is Designed for Digital Transformation".

9. Technology Optimization & Innovation - Prepare Your Public Cloud Strategy

Holger Mueller

In 2015 technology leaders will need to create, adjust and implement their public cloud strategy. Considering estimates pegging Amazon AWS at 15-20% of virtualized servers worldwide, CIOs and CTOs need to actively plan and execute their enterprise’s strategy vis-à-vis the public cloud. Reducing technical debt and establishing next generation best practices to leverage the new ‘on demand’ IT paradigm should be a top priority for CIOs and CTOs seeking organizational competitiveness, greater job security and fewer budget restrictions.

Posted in Social Media, Security, Privacy, Constellation Research, Cloud, Big Data

Too smart?

An Engadget report today, "Hangouts eavesdrops on your chats to offer 'smart suggestions'" describes a new "spy/valet" feature being added to Google's popular video chat tool.

  • "Google's Hangouts is gaining a handy, but slightly creepy new feature today. The popular chat app will now act as a digital spy-slash-valet by eavesdropping on your conversations to offer 'smart suggestions.' For instance, if a pal asks 'where are you?' it'll immediately prompt you to share your location, then open a map so you can pin it precisely."

It's sad that this sort of thing still gets meekly labeled as "creepy". The privacy implications are serious and pretty easy to see.

Google is evidently processing the text of Hangouts as they fly through their system, extracting linguistic cues, interpreting what's being said using Artificial Intelligence, extracting new meaning and insights, and offering suggestions.
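
For a sense of the mechanics, even a toy rule-based sketch like the one below illustrates how message content can be turned into a 'smart suggestion'. It is a stand-in for whatever Google actually does (which is surely far more sophisticated natural language processing); the patterns and action names are invented for illustration:

```python
import re
from typing import Optional

# Toy intent rules standing in for real NLP; patterns and actions are made up.
INTENT_RULES = [
    (re.compile(r"\bwhere are you\b", re.I), "offer_location_share"),
    (re.compile(r"\bwhat time\b.*\bmeet\b", re.I), "offer_calendar_slot"),
    (re.compile(r"\bsend me\b.*\b(photo|picture)s?\b", re.I), "offer_photo_picker"),
]

def smart_suggestion(message: str) -> Optional[str]:
    """Return a suggested action for the recipient, or None if nothing matches."""
    for pattern, action in INTENT_RULES:
        if pattern.search(message):
            return action
    return None

print(smart_suggestion("Hey, where are you?"))  # -> offer_location_share
print(smart_suggestion("See you tomorrow!"))    # -> None
```

The privacy point is that even this trivial matching only works because the service is reading the content of the conversation.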

We need some clarification about whether any covert tests of this technology have been undertaken during the R&D phase. A company obviously doesn't launch a new product like this without a lot of research, feasibility testing, prototyping and testing. Serious work on 'smart suggestions' would not start without first testing how it works in real life. So I wonder if any of this evaluation was done covertly on live data? Are Google researchers routinely eavesdropping on hangouts to develop the 'smart suggestions' technology?

If so, is such data usage covered by their Privacy Policy (you know, under the usual "in order to improve our service" justification)? And is usage sanctioned internationally in the stricter privacy regimes?

Many people have said to me I'm jumping the gun, and that Google would probably test the new Hangouts feature on its own employees. Perhaps, but given that scanning gmails is situation normal for Google, and they have a "privacy" culture that joins up all their business units so that data may be re-purposed almost without limit, I feel sure that running AI algorithms on text without telling people would be par for the course.

In development and in operation, we need to know what steps are taken to protect the privacy of hangout data. What personally identifiable data and metadata is retained for other purposes? Who inside Google is granted access to the data and especially the synthesised insights? How long does any secondary usage persist? Are particularly sensitive matters (like health data, financial details, corporate intellectual property etc.) filtered out?

This is well beyond "creepy". Hangouts and similar video chat are certainly wonderful technologies. We're using them routinely for teaching, education, video conferencing, collaboration and consultation. The tools may become entrenched in corporate meetings, telecommuting, healthcare and the professions. But if I am talking with my doctor, or discussing patents with my legal team, or having a clandestine chat with a lover, I clearly do not want any unsolicited contributions from the service provider. More fundamentally, I want assurance that no machine is ever tapping into these sorts of communications, running AI algorithms, and creating new insights. If I'm wrong about covert testing on live data, then Google could do what Apple did and publish an Open Letter clarifying their data usage practices and strategies.

Come to think of it, if Google is running natural language processing algorithms over the Hangouts stream, might they be augmenting their gmail scanning the same way? Their business model is to extract insights about users from any data they get their hands on. Until now it's been a crude business of picking out keywords and using them to profile users' interests and feed targeted advertising. But what if they could get deeper information about us through AI? Is there any sign from their historical business practices that Google would not do this? And what if they can extract sensitive information like mental health indications? Even with good intent and transparency, predicting health conditions from social media is highly problematic as shown by the "Samaritans Radar" experience.

Artificial Intelligence is one of the new frontiers. Hot on the heels of the successes of IBM Watson, we're seeing Natural Language Processing and analytics rapidly penetrate business and now consumer applications. Commentators are alternately telling us that AI will end humanity, and not to worry about it. For now, I call on people to simply think clearly through the implications, such as for privacy. If AI programs are clever enough to draw deep insights about us from what we say, then the "datapreneurs" in charge of those algorithms need to remember they are just as accountable for privacy as if they had asked us to reveal all by filling out a questionnaire.

Posted in Social Networking, Social Media, Privacy, Internet, Big Data

Thinking creatively about information assets in retail and hospitality

In my last blog, Improving the Position of the CISO, I introduced the new research I've done on extending the classic "Confidentiality-Integrity-Availability" (C-I-A) frame for security analysis to cover all the other qualities of enterprise information assets. The idea is to build a comprehensive account of what it is that makes information valuable in the context of the business, leveraging the traditional tools and skills of the CISO. After all, security professionals are particularly good at looking at context. Instead of restricting themselves to defending information assets against harm, CISOs can help enhance those assets by building up their other competitive attributes.

Let's look at some examples of how this would work, in some classic Big Data applications in retail and hospitality.

Companies in these industries have long been amassing detailed customer databases under the auspices of loyalty programs. Supermarkets have logged our every purchase for many years, so they can for instance put together new deals on our favorite items, from our preferred brands, or from competitors trying to get us to switch brands. Likewise, hotels track when we stay and what we do, so they can personalise our experience, tailor new packages for us, and try to cross-sell services they predict we'll find attractive. Behind the scenes, the data is also used for operations to plan demand, fine tune their logistics and so on.

Big Data techniques amplify the value of information assets enormously, but they can take us into complicated territory. Consider for example the potential for loyalty information to be parlayed into insurance and other financial services products. Supermarkets find they now have access to a range of significant indicators of health & lifestyle risk factors which are hugely valuable in insurance calculations ... if only the data is permitted to be re-purposed like that.

The question is, what is it about the customer database of a given store or hotel that gives it an edge over its competitors? There are many more attributes to think creatively about beyond C-I-A! A brief sketch of how some of the consent and compliance checks might be automated follows the list below.

  • Utility
    It's important to rigorously check that the raw data, the metadata and any derived analytics can actually be put to different business purposes.
    • Are data formats well-specified, and technically and semantically interoperable?
    • What would it cost to improve interoperability as needed?
    • Is the data physically available to your other business systems?
    • Does the rest of the business know what's in the data sets?
  • Completeness
    • Do you know more about your customers than your competitors do?
    • Do you supplement and enrich raw customer behaviours with questionnaires, or linked data?
    • How far back in time do the records go?
    • Do you understand the reasons for any major gaps? Do the gaps themselves tell you anything?
    • What sort of metadata do you have? For example, do you retain time & location, trend data, changes, origins and so on?
  • Currency & Accuracy
    • Is your data up to date? Remember that accuracy can diminish over time, so the sheer age of a long term database can have a downside.
    • What mechanisms are there to keep data up to date?
  • Permissions & Consent
    • Have customers consented to secondary usage of data?
    • Is the consent specific, blanket or bundled?
    • Might customers be surprised and possibly put off to learn how their loyalty data is utilised?
    • Do the terms & conditions of participation in a loyalty program cover what you wish to do with the data?
    • Do the Ts&Cs (which might have been agreed to in the past) still align with the latest plans for data usage?
    • Are there opportunities to refresh the Ts&Cs?
    • Are there opportunities for customers to negotiate the value you can offer for re-purposing the data?

  • Compliance
    When businesses derive new insights from data, it is possible that they are synthesising brand new Personal Information, and non-obvious privacy obligations can go along with that. The competitive advantage of Big Data can be squandered if regulations are overlooked, especially in international environments.
    • So where is the data held, and where does it flow?
    • Are applications for your data compliant with applicable regulations?
    • Is health information or similar sensitive Personal Information extracted or synthesised, and do you have specific consent for that?
    • Can you meet the Access & Correction obligations in many data protection regulations?
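
As flagged above, here is a small sketch of how a purpose-limitation check on loyalty data might be automated before any secondary use goes ahead. The purpose names and the consent model are illustrative assumptions, not a recommendation of any particular product:

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class ConsentRecord:
    """Consent attached to one customer's loyalty data; purposes are hypothetical."""
    customer_id: str
    consented_purposes: Set[str] = field(default_factory=set)
    sensitive_inferences_ok: bool = False  # e.g. health or financial inferences

def may_use(record: ConsentRecord, purpose: str, involves_sensitive_data: bool) -> bool:
    """Secondary use needs a matching consented purpose, and sensitive
    inferences need their own specific consent."""
    if purpose not in record.consented_purposes:
        return False
    if involves_sensitive_data and not record.sensitive_inferences_ok:
        return False
    return True

alice = ConsentRecord("C-001", consented_purposes={"loyalty_offers", "demand_planning"})
print(may_use(alice, "loyalty_offers", involves_sensitive_data=False))         # True
print(may_use(alice, "insurance_underwriting", involves_sensitive_data=True))  # False
```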

For more detail, my new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory, Privacy, Security

Improving the position of the CISO

Over the years, we security professionals have tried all sorts of things to make better connections with other parts of the business. We have broadened our qualifications, developed new Return on Security Investment tools, preached that security is a "business enabler", and strived to talk about solutions and not boring old technologies. But we've had mixed success.

Once when I worked as a principal consultant for a large security services provider, a new sales VP came in to the company with a fresh approach. She was convinced that the customer conversation had to switch from technical security to something more meaningful to the wider business: Risk Management. For several months after that I joined call after call with our sales teams, all to no avail. We weren't improving our lead conversions; in fact with banks we seemed to be going backwards. And then it dawned on me: there isn't much anyone can tell bankers about risk they don't already know.

Joining the worlds of security and business is easier said than done. So what is the best way for security line managers to engage with their peers? How can they truly contribute to new business instead of being limited to protecting old business? In a new investigation I've done at Constellation Research I've been looking at how classical security analysis skills and tools can be leveraged for strategic information management.

Remember that the classical frame for managing security is "Confidentiality-Integrity-Availability" or C-I-A. This is how we conventionally look at defending enterprise information assets; threats to security are seen in terms of how critical data may be exposed to unauthorised access, or lost, damaged or stolen, or otherwise made inaccessible to legitimate users. The stock-in-trade for the Chief Information Security Officer (CISO) is the enterprise information asset register and the continuous exercise of Threat & Risk Assessment around those assets.

I suggest that this way of looking at assets can be extended, shifting from a defensive mindset to a strategic, forward outlook. When the CISO has developed a bird's-eye view of their organisation's information assets, they are ideally positioned to map the value of the assets more completely. What is it that makes information valuable exactly? It depends on the business - and security professionals are very good at looking at context. For example, in financial services or technology, companies compete on the basis of their original research, so it's the lead time to discovery that sets them apart. On the other hand, in healthcare and retail, the completeness of customer records is a critical differentiator, for it allows better-quality relationships to be created. And when dealing with sensitive personal information, as in the travel and hospitality industry, the consent and permissions attached to data records determine how they may be leveraged for new business. These are the sorts of things that make different data valuable in different contexts.

CISOs are trained to look at data through different prisms and to assess data in different dimensions. I've found that CISOs are therefore ideally qualified to bring a fresh approach to building the value of enterprise information assets. They can take a more pro-active role in information management, and carve out a new strategic place for themselves in the C-suite.

My new report, "Strategic Opportunities for the CISO", is available now.

Posted in Big Data, Constellation Research, Management theory

The Prince of Data Mining

Facial recognition is digital alchemy. It's the prince of data mining.

Facial recognition takes previously anonymous images and conjures people's identities. It's an invaluable capability. Once they can pick out faces in crowds, trawling surreptitiously through anyone and everyone's photos, the social network businesses can work out what we're doing, when and where we're doing it, and who we're doing it with. The companies figure out what we like to do without us having to 'like' or favorite anything.

So Google, Facebook, Apple et al. have invested hundreds of megabucks in face recognition R&D and buying technology start-ups. And they spend billions of dollars buying images and especially faces, going back to Google's acquisition of Picasa in 2004, and most recently, Facebook's ill-fated $3 billion offer for Snapchat.

But if most people find face recognition rather too creepy, then there is cause for optimism. The technocrats have gone too far. What many of them still don't get is this: If you take anonymous data (in the form of photos) and attach names to that data (which is what Facebook photo tagging does - it guesses who the people in photos are, attaches putative names to records, and invites users to confirm them) then you Collect Personal Information. Around the world, existing pre-biometrics era black letter Privacy Law says you can't Collect PII even indirectly like that without an express reason and without consent.

When automatic facial recognition converts anonymous data into PII, it crosses a bright line in the law.

Posted in Social Networking, Privacy, Biometrics, Big Data

An unpublished letter on The Right To Be Forgotten

In response to "The Solace of Oblivion", Jeffrey Toobin, The New Yorker, September 29th, 2014.

The "Right to be Forgotten" is an unfortunate misnomer for a balanced data control measure decided by the European Court of Justice. The new rule doesn't seek to erase the past but rather to restore some of its natural distance. Privacy is not about secrecy but moderation. Yet restraint is toxic to today's information magnates, and the response of some to even the slightest throttling of their control of data has been shrill. Google doth protest too much when it complains that having to adjust its already very elaborate search filters makes it an unwilling censor.

The result of a multi-billion dollar R&D program, Google's search engine is a great deal more than a latter-day microfiche. Its artificial intelligence tries to predict what users are really looking for, and as a result, its insights are all the more valuable to Google's real clients -- the advertisers. No search result is a passive reproduction of data from a "public domain". Google makes the public domain public. So if search reveals Personal Information that was hitherto impossible to find, then Google should at least participate in helping to temper the unintended privacy consequences.

Stephen Wilson
October 1, 2014.

Posted in Big Data, Internet, Privacy

The worst privacy misconception of all

I was discussing definitions of Personally Identifiable Information (PII) with some lawyers today, one of whom took exception to the US General Services Administration definition: "information that can be used to distinguish or trace an individual's identity, either alone or when combined with other personal or identifying information that is linked or linkable to a specific individual". This lawyer concluded rather hysterically that under such a definition, "nobody can use the internet without a violation".

Similarly, I've seen engineers in Australia recoil at the possibility that IP and MAC Addresses might be treated as PII because it is increasingly easy to link them to the names of device owners. I was recently asked "Why are they stopping me collecting IP addresses?". The answer is, they're not.

There are a great many misconceptions about privacy, but the idea that 'if it's personal you can't use it' is by far the worst.

Nothing in any broad-based data privacy law I know of says personal information cannot be collected or used.

Rather, what data privacy laws actually say is: if you're collecting and using PII, be careful.

Privacy is about restraint. The general privacy laws of Australia, Europe and 100-odd countries say things like don't collect PII without consent, don't collect PII beyond what you demonstrably need, don't use PII collected for one purpose for other unrelated purposes, tell individuals if you can what PII you hold about them, give people access to the PII you have, and do not retain PII for longer than necessary.

Such rules are entirely reasonable, and impose marginal restrictions on the legitimate conduct of business. And they align very nicely with standard security practice which promotes the Need To Know principle and the Principle of Least Privilege.

Compliance with Privacy Principles does add some overhead to data management compared with anonymous data. If re-identification techniques and ubiquitous inter-connectedness means that hardly any data is going to stay anonymous anymore, then yes, privacy laws mean that data should be treated more cautiously than was previously the case. And what exactly is wrong with that?

If data is the new gold then it's time data custodians took more care.

Posted in Big Data, Privacy

Simply Secure is not simply private

Another week, another security collaboration launch!

"Simply Secure" calls itself “a small but growing organization [with] expertise in usability research, design, software development, and product management". Their mission has to do with improving the security functions that built-in so badly in most software today. Simply Secure is backed by Google and Dropbox, and supported by a diverse advisory board.

It's early days (actually early day, singular) so it might be churlish to point out that Simply Secure's strategic messaging is a little uneven ... except that the words being used to describe it shed light on the clarity of the thinking.

My first exposure to Simply Secure came last night, when I read an article in the Guardian by Cory Doctorow (who is one of their advisers). Doctorow places enormous emphasis on privacy; the word "privacy" outnumbers "security" 16 to three in the body of his column. Another admittedly shorter report about the launch by The Next Web doesn't mention privacy at all. And then there's the Simply Secure blog post, which cites privacy a great deal but every single time in conjunction with security, as in "security and privacy". That repeated phrasing conveys, to me at least, some discomfort. As I say, it's early days and the team is doubtless sorting out how to weigh and progress these closely related objectives.

But I hope they do it quickly. On the face of it, Simply Secure might only scratch the surface of privacy.

Doctorow's Guardian article is mostly concerned with encryption and the terrible implementations that have plagued us since the dawn of the Internet. It's definitely important that we improve here – and radically. If the Simply Secure initiative does nothing but make encryption easier to integrate into commodity software, that would be a great thing. I'm all for it. But it won't necessarily or even probably lead to better privacy, because privacy is about restraint not secrecy or anonymity.
As we go about our lives, we actually want to be known by others, but we want those who know us to be restrained in what they do with the knowledge they have about us. Privacy is the protection you need when your affairs are not secret.

I know Doctorow knows this – I've seen his terrific little speech on the steps at Comic-Con about PRISM. So I'm confused by his focus on cryptography.

How far does encryption get us? If we're using social networks, or if we're shopping and opting in to loyalty programs or selected targeted marketing, or if we're sharing our medical records with relatives, medicos, hospitals and researchers, then encryption becomes moot. We need mechanisms to restrain what the receivers of our personal information do with it. We all know the business model at work behind "free" online services; using encryption to protect privacy in social networking for instance would be like using an armoured van to deliver your valuables to Bernie Madoff.

Another limitation of user-centric or user-managed encryption has to do with Big Data. A great deal of personal information about us is created and collected unseen behind our backs, by sensors, and by analytics processes that manage to work out who we are by linking disparate data streams together. How could Simply Secure ameliorate those sorts of problems? If the Simply Secure vision includes encryption at rest as well as in transit, then how will the user control or even see all the secondary uses of their encrypted personal information?

There's a combativeness in Doctorow's explanation of Simply Secure and his tweets from yesterday on the topic. His aim is expressly to thwart the surveillance state, which in his view includes a symbiosis (if not conspiracy) between government and internet companies, where the former gets their dirty work done by the latter. I'm sure he and I both find that abhorrent in equal measure. But I argue the proper response to these egregious behaviours is political not technological (and political in the broad sense; I love that Snowden talks as much about accountability, legal processes, transparency and research as he does about encryption). If you think the government is exploiting the exploiters, then DIY encryption is a pretty narrow counter-measure. This is not the sort of society we want to live in, so let's work to change the establishment, rather than try to take it on in a crypto shoot-out.

Yes, security technology is important but it's not nearly as important for privacy as the Rule of Law. Data privacy regimes instil restraint. The majority of businesses come to know that they are not at liberty to over-collect personal information, nor to re-use personal information unexpectedly and without consent. A minority of organisations flout data privacy principles, for example by slyly refining raw data into valuable personal knowledge, exploiting the trust citizens and users put in them. Some of these outfits flourish in the United States – the Cayman Islands of privacy. Worldwide, the policing of privacy is patchy indeed, yet there have been spectacular legal victories in Europe and elsewhere against the excessive practices of really big companies like Facebook with their biometric data mining of photo albums, and Google's drift net-like harvesting of traffic from unencrypted Wi-Fi networks.

Pragmatically, I'm afraid encryption is such a fragile privacy measure. Once secrecy is penetrated, we need regulations to stem exploitation of our personal information.

By all means, let's improve cryptographic engineering and I wish the Simply Secure initiative all the best. So long as they don't call security privacy.

Posted in Security, Privacy, Language, Big Data

New Paper Coming: The collision between Big Data and privacy law

I have a new academic paper due to be published in October, in the Australian Journal of Telecommunications and the Digital Economy. Here is an extract.

Update: see Telecommunications Society members page.

The collision between Big Data and privacy law

Abstract

We live in an age where billionaires are self-made on the back of the most intangible of assets – the information they have about us. The digital economy is awash with data. It's a new and endlessly re-useable raw material, increasingly left behind by ordinary people going about their lives online. Many information businesses proceed on the basis that raw data is up for grabs; if an entrepreneur is clever enough to find a new vein of it, they can feel entitled to tap it in any way they like. However, some tacit assumptions underpinning today's digital business models are naive. Conventional data protection laws, older than the Internet, limit how Personal Information is allowed to flow. These laws turn out to be surprisingly powerful in the face of 'Big Data' and the 'Internet of Things'. On the other hand, orthodox privacy management was not framed for new Personal Information being synthesised tomorrow from raw data collected today. This paper seeks to bridge a conceptual gap between data analytics and privacy, and sets out extended Privacy Principles to better deal with Big Data.

Introduction

'Big Data' is a broad term capturing the extraction of knowledge and insights from unstructured data. While data processing and analysis is as old as computing, the term 'Big Data' has recently attained special meaning, thanks to the vast rivers of raw data that course unseen through the digital economy, and the propensity for entrepreneurs to tap that resource for their own profit, or to build new analytic tools for enterprises. Big Data represents one of the biggest challenges to privacy and data protection society has seen. Never before has so much Personal Information been available so freely to so many.

Big Data promises vast benefits for a great many stakeholders (Michael & Miller 2013: 22-24) but the benefits may be jeopardized by the excesses of a few overly zealous businesses. Some online business models are propelled by a naive assumption that data in the 'public domain' is up for grabs. Many think the law has not kept pace with technology, but technologists often underestimate the strength of conventional data protection laws and regulations. In particular, technology neutral privacy principles are largely blind to the methods of collection, and barely distinguish between directly and indirectly collected data. As a consequence, the extraction of Personal Information from raw data constitutes an act of collection and as such is subject to longstanding privacy statutes. Privacy laws such as that of Australia don't even use the words 'public' and 'private' to qualify the data flows concerned (Privacy Act 1988).

On the other hand, orthodox privacy policies and static data usage agreements do not cater for the way Personal Information can be synthesised tomorrow from raw data collected today. Privacy management must evolve to become more dynamic, instead of being preoccupied with unwieldy policy documents and simplistic technical notices about cookies.

Thus the fit between Big Data and data privacy standards is complex and sometimes surprising. While existing laws are not to be underestimated, there is a need for data privacy principles to be extended, to help individuals remain abreast of what's being done with information about them, and to foster transparency regarding the new ways for personal information to be generated.

Conclusion: Making Big Data privacy real

A Big Data dashboard like the one described could serve several parallel purposes in aid of progressive privacy principles. It could reveal dynamically to users what PII can be collected about them through Big Data; it could engage users in a fair and transparent value-for-PII exchange; and it could enable dynamic consent where users are able to opt in to Big Data processes, and opt out and in again, over time, as their understanding of the PII bargain evolves.
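
A minimal sketch of the dynamic consent element might look like the following; the event model and method names are assumptions for illustration rather than a design taken from the paper. The point is that the current opt-in state is derived from a history of user decisions, not from a one-off agreement:

```python
from datetime import datetime
from typing import List, Tuple

class DynamicConsent:
    """Tracks one user's opt-in/opt-out history for a single Big Data process."""

    def __init__(self, process: str):
        self.process = process
        self.events: List[Tuple[datetime, bool]] = []  # (when, opted_in)

    def opt_in(self, when: datetime) -> None:
        self.events.append((when, True))

    def opt_out(self, when: datetime) -> None:
        self.events.append((when, False))

    def is_opted_in(self, at: datetime) -> bool:
        """The state at time 'at' is the most recent decision made by then;
        the default, with no decision recorded, is opted out."""
        state = False
        for when, opted_in in sorted(self.events):
            if when <= at:
                state = opted_in
        return state

consent = DynamicConsent("smart-meter consumption analytics")
consent.opt_in(datetime(2015, 1, 1))
consent.opt_out(datetime(2015, 6, 1))
print(consent.is_opted_in(datetime(2015, 3, 1)))  # True
print(consent.is_opted_in(datetime(2015, 7, 1)))  # False
```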

Big Data holds big promises, for the benefit of many. There are grand plans for population-wide electronic health records, new personalised financial services that leverage massive retail databases, and electricity grid management systems that draw on real-time consumption data from smart meters in homes, to extend the life of aging 'poles and wires' while reducing greenhouse gas emissions. The value to individuals and operators alike of these programs is amplified as computing power grows, new algorithms are researched, and more and more data sets are joined together. Likewise, the privacy risks are compounded. The potential value of Personal Information in the modern Big Data landscape cannot be represented in a static business model, and neither can the privacy pros and cons be captured in a fixed policy document. New user interfaces and visualisations like a 'Big Data dashboard' are needed to bring dynamic extensions to traditional privacy principles, and help people appreciate and intelligently negotiate the insights that can be extracted about them from the raw material that is data.

Posted in Privacy, Big Data

Schrödinger's Privacy: A Master Class

Master Class: How to Protect Your Customer's Digital Identity and Personal Data

A Social Media Week Sydney event #SMWSydney
Law Lounge, Sydney University Law School
New Law School Building
Eastern Ave, Camperdown
Fri, Sep 26 - 10:00 AM - 11:30 AM

How can you navigate privacy fact and fiction, without the geeks and lawyers boring each other to death?

It's often said that technology has outpaced privacy law. Many digital businesses seem empowered by this brash belief. And so they proceed with apparent impunity to collect and monetise as much Personal Information as they can get their hands on.

But it's a myth!

Some of the biggest corporations in the world, including Google and Facebook, have been forcefully brought to book by privacy regulations. So, we have to ask ourselves:

  • what does privacy law really mean for social media in Australia?
  • is privacy "good for business"?
  • is privacy "not a technology issue"?
  • how can digital businesses navigate fact & fiction, without their geeks and lawyers boring each other to death?

In this Social Media Week Master Class I will:

  • unpack what's "creepy" about certain online practices
  • show how to rate data privacy issues objectively
  • analyse classic misadventures with geolocation, facial recognition, and predicting when shoppers are pregnant
  • critique photo tagging and crowd-sourced surveillance
  • explain why Snapchat is worth more than three billion dollars
  • analyse the regulatory implications of Big Data, Biometrics, Wearables and The Internet of Things.

We couldn't have timed this Master Class better, coming two weeks after the announcement of the Apple Watch, which will figure prominently in the class!

So please come along, for a fun and in-depth look at social media, digital technology, the law, and decency.

Register here.

About the presenter

Steve Wilson is a technologist who stumbled into privacy 12 years ago. He rejected those well-meaning slogans (like "Privacy Is Good For Business!") and instead dug into the relationships between information technology and information privacy. Now he researches and develops design patterns to help sort out privacy, alongside all the other competing requirements of security, cost, usability and revenue. His latest publications include:

  • "The collision between Big Data and privacy law" due out in October in the Australian Journal of Telecommunications and the Digital Economy.

Posted in Social Networking, Social Media, Privacy, Internet, Biometrics, Big Data