If the digital economy is really the economy, then it's high time we moved beyond hoping that we can simply train users to be safe online. Is the real economy only for heroes who can protect themselves in the jungle, writing their own code as if they're carrying their own guns? Or do we, as a community, build structures and standards, and insist on technologies that work for all?
For most people, the World Wide Web experience is still a lot like watching cartoons on TV. The human-machine interface is almost the same. The images and actions are just as synthetic; crucially, nothing on a web browser is real. Almost anything goes -- just as the Roadrunner defies gravity in besting the Coyote, there are no laws of physics that temper the way one bit of multimedia leads to the next. Yes, there is a modicum of user feedback in the way we direct some of the action when browsing and e-shopping, but it's quite illusory; for the most part all we're really doing is flicking channels across a billion pages.
It's the suspension of disbelief when browsing that lies at the heart of many of the safety problems we're now seeing. Inevitably we lose our bearings in the totally synthetic World Wide Web. Without even realising it, we're taken in by a virtual reality, and we become captive to social engineering.
But I don't think it's possible to tackle online safety by merely countering users' credulity. Education is not the silver bullet, because the Internet is really so technologically complex and abstract that it lies beyond the comprehension of most lay people.
Using the Internet 'safely' today requires deep technical skills, comparable to the level of expertise needed to operate an automobile circa 1900. Back then you needed to be able to do all your own mechanics [roughly akin to the mysteries of maintaining anti-virus software], look after the engine [i.e. configure the operating system and firewall], navigate the chaotic emerging road network [there's yet no trusted directory for the Internet, nor any road rules], and even figure out how to fuel the contraption [consumer IT supply chains are about as primitive as the gasoline industry was 100 years ago]. The analogy with the early car industry becomes especially sharp for me when I hear utopian open source proponents argue that writing one's own software is the best way to be safe online.
The Internet is so critical (I'd have thought this was needless to say) that we need ways of working online that don't require us to all be DIY experts.
I wrote a first draft of this blog six years ago, and at that time I called for patience in building digital literacy and sophistication. "It took decades for safe car and road technologies to evolve, and the Internet is still really in its infancy" I said in 2009. But I'm less relaxed about this now, on the brink of the Internet of Things. It's great that policy makers like the US FTC are calling on connected device makers to build in security and privacy, but I suspect the Internet of Things will require the same degree of activist oversight and regulation as does the auto industry, for the sake of public order and the economy. Do we have the appetite to temper breakneck innovation with safety rules?
This is a watershed in Internet security and privacy - never before has authentication been a headline consumer issue.
Sure we've all talked about the password problem for ten years or more, but now FIDO Alliance members are doing something about it, with easy-to-use solutions designed specifically for mass adoption.
The FIDO Alliance is designing the authentication plumbing for everything online. They are creating new standards and technical protocols allowing secure personal devices (phones, personal smart keys, wearables, and soon a range of regular appliances) to securely transmit authentication data to cloud services and other devices, in some cases eliminating passwords altogether.
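The core idea can be sketched in a few lines: the service sends a fresh random challenge, and the user's device answers by proving possession of a locally held key, so no password ever crosses the wire. Note this is a toy illustration, not the FIDO protocol itself -- real FIDO uses public-key signatures and device attestation, whereas here an HMAC key stands in simply to keep the sketch dependency-free, and all the function names are mine:

```python
import hashlib
import hmac
import os

def register_device():
    """Device-side: create a per-service key at enrolment time.
    (In FIDO this would be a freshly generated key pair, with only
    the public half sent to the server.)"""
    return os.urandom(32)

def issue_challenge():
    """Server-side: a fresh random nonce for each login attempt,
    so captured responses can't be replayed."""
    return os.urandom(16)

def sign_challenge(device_key, challenge):
    """Device-side: prove possession of the key without revealing it."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify(device_key, challenge, response):
    """Server-side: check the response against the expected value."""
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = register_device()
challenge = issue_challenge()
response = sign_challenge(key, challenge)
assert verify(key, challenge, response)            # fresh response accepted
assert not verify(key, issue_challenge(), response)  # replay against a new challenge fails
```

The crucial property is the last line: a response captured in transit is worthless against any other challenge, which is what makes this model so much stronger than a reusable password.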
See also my ongoing FIDO Alliance research at Constellation.
In electronic business, Relying Parties (RPs) need to understand their risks of dealing with the wrong person (say a fraudulent customer or a disgruntled ex employee), determine what they really need to know about those people in order to help manage risk, and then in many cases, design a registration process for bringing those people into the business fold. With federated identity, the aim is to offload the registration and other overheads onto an Identity Provider (IdP). But evaluating IdPs and forging identity management arrangements has proven to be enormously complex, and the federated identity movement has been looking for ways to streamline and standardize the process.
One approach is to categorise different classes of IdP, matched to different transaction types. "Levels of Assurance" (LOAs) have been loosely standardised by many governments and in some federated identity frameworks, like the Kantara Initiative. The US Electronic Authentication Guideline, NIST SP 800-63, is one of the preeminent de facto standards, adopted by the National Strategy for Trusted Identities in Cyberspace (NSTIC). But over the years, adoption of SP 800-63 in business has been disappointing, and now NIST has announced a review.
One of my problems with LOAs is simply stated: I don't believe it's possible to pigeon-hole risk.
With risk management, the devil is in the detail. Risk Management standards like ISO 31000 require organisations to start by analysing the threats that are peculiar to their environment. It's folly to take short cuts here, and it's also well recognised that you cannot "outsource" liability.
To my mind, the LOA philosophy goes against risk management fundamentals. Coming up with an LOA rating is an intermediate step: it takes an RP's risk analysis, squeezes it into a bin (losing lots of information in the process), uses the bin to shortlist candidate IdPs, and then proceeds to detailed due diligence, where all those risk details have to be put back on the table.
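The information loss is easy to demonstrate. In this hypothetical sketch (the scoring scale and thresholds are entirely illustrative), two RPs with quite different risk scores -- and, in reality, quite different threat profiles -- collapse into the same bin, so the LOA label tells an IdP almost nothing about what either really needs:

```python
def loa_bin(risk_score):
    """Map a 0-100 risk score into four coarse Levels of Assurance.
    Thresholds are illustrative only."""
    if risk_score < 25:
        return 1
    if risk_score < 50:
        return 2
    if risk_score < 75:
        return 3
    return 4

# One RP worried mainly about account takeover, another about insider
# fraud: different threats, different mitigations, different scores ...
rp_a, rp_b = 26, 49
assert loa_bin(rp_a) == loa_bin(rp_b) == 2  # ... yet both read as "LOA 2"
```

Everything that distinguished the two RPs' analyses is discarded at the binning step, which is exactly what has to be reconstructed later during due diligence.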
I think we all know by now of cases where RPs have looked at candidate IdPs at a given LOA, been less than satisfied with the available offerings, and have felt the need for an intermediate level, something like "LOA two and a half" (this problem was mentioned at CIS 2014 more than once, and I have seen it first hand in the UK IDAP).
Clearly what's going on here is that an RP's idea of "LOA 2" differs from a given IdP's idea of the same LOA 2. This is because everyone's risk appetite and threat profile is different. Moreover, the detailed prescription of "LOA 2" must differ from one identity provider to the next. When an RP thinks they need "LOA 2.5", what they're really asking for is a customised identification process. If an off-the-shelf "LOA 2" isn't what it seems, then there can't be any hope for an agreed intermediate LOA 2.5. Even if an IdP and an RP agree in one instance, soon enough we will get a fresh call for "LOA 2.75, please".
We cannot pigeonhole risk. Attaching chunky, one-dimensional Levels of Assurance is misleading. There is no getting away from the need to do a detailed analysis of the threats, and therefore of the authentication measures required.
This is an updated version of arguments made in Lockstep's submission to the 2009 Cyber Crime Inquiry by the Australian federal government.
In stark contrast to other fields, cyber safety policy is almost exclusively preoccupied with user education. It's really an obsession. Governments and industry groups churn out volumes of well-meaning and technically reasonable security advice, but for the average user, this material is overwhelming. There is a subtle implication that security is for experts, and that the Internet isn't safe unless you go to extremes. Moreover, even if consumers do their very best online, their personal details can still be taken over in massive criminal raids on databases that hardly anyone even knows exist.
Too much onus is put on regular users protecting themselves online, and this blinds us to potential answers to cybercrime. In other walks of life, we accept a balanced approach to safety, and governments are less reluctant to impose standards than they are on the Internet. Road safety for instance rests evenly on enforceable road rules, car technology innovation, certified automotive products, mandatory quality standards, traffic management systems, and driver training and licensing. Education alone would be nearly worthless.
Around cybercrime we have a bizarre allergy to technology. We often hear that 'preventing data breaches is not a technology issue', which may be politically correct, but it's faintly ridiculous. Nobody would ever say that preventing car crashes is 'not a technology issue'.
Credit card fraud and ID theft in general are in dire need of concerted technological responses. Consider that our Card Not Present (CNP) payments processing arrangements were developed many years ago for mail orders and telephone orders. It was perfectly natural to co-opt the same processes when the Internet arose, since it seemed simply to be just another communications medium. But the Internet turned out to be more than an extra channel: it connects everyone to everything, around the clock.
The Internet has given criminals x-ray vision into people's banking details, and perfect digital disguises with which to defraud online merchants. The opportunities for crime now are radically different, both quantitatively and qualitatively, from what went before. In particular, because identity data is available by the terabyte and digital systems cannot tell copies from originals, identity takeover is child's play.
You don't even need to have ever shopped online to fall foul of CNP fraud. Most stolen credit card numbers are obtained en masse by criminals breaking into obscure backend databases. These attacks go on behind the scenes, out of sight of even the most careful online customers.
So the standard cyber security advice misses the point. Consumers are told earnestly to look out for the "HTTPS" padlock that purportedly marks a site as secure, to have a firewall, to keep their PCs "patched" and their anti-virus up to date, to only shop online at reputable merchants, and to avoid suspicious looking sites (as if cyber criminals aren't sufficiently organised to replicate legitimate sites in their entirety). But none of this advice touches on the problem of coordinated massive heists of identity data.
Merchants are on the hook for unwieldy and increasingly futile security overheads. When a business wishes to accept credit card payments, it's straightforward in the real world to install a piece of bank-approved terminal equipment. But to process credit cards online, shopkeepers have to sign up to onerous PCI-DSS requirements that in effect require even small business owners to become IT security specialists. But to what end? No audit regime will ever stop organised crime. To stem identity theft, we need to make stolen IDs less valuable.
All this points to urgent public policy matters for governments and banks. It is not enough to put the onus on individuals to guard against ad hoc attacks on their credit cards. Systemic changes and technological innovation are needed to render stolen personal data useless to thieves. It's not that the whole payments processing system is broken; rather, it is vulnerable at just one point where stolen digital identities can be abused.
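One way to see that single point of vulnerability: a static card number is replayable forever, while a code cryptographically bound to one transaction is not. This hypothetical sketch (the scheme and names are illustrative, not any particular bank's mechanism) shows the difference:

```python
import hashlib

def static_payment(card_number):
    """A classic CNP payment: the long-lived secret itself crosses
    the wire, so anyone who captures it can replay it."""
    return card_number

def dynamic_payment(card_secret, transaction_id):
    """A one-time code bound to this specific transaction; the
    underlying secret never leaves the cardholder's device."""
    payload = f"{card_secret}:{transaction_id}".encode()
    return hashlib.sha256(payload).hexdigest()

# A thief who captures one static payment holds a reusable credential...
assert static_payment("4111111111111111") == static_payment("4111111111111111")

# ...but a captured dynamic code is useless for any other transaction.
assert dynamic_payment("secret", "txn-001") != dynamic_payment("secret", "txn-002")
```

If merchants only ever received transaction-bound codes, the terabytes of breached card numbers sitting in criminal databases would lose most of their resale value overnight.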
Digital identities are the keys to our personal kingdoms. As such they really need to be treated as seriously as car keys, which have become very high tech indeed. Modern car keys cannot be duplicated at a suburban locksmith. It's possible you've come across office and filing cabinet keys that carry government security certifications. And we never use the same keys for our homes and offices; we wouldn't even consider it (which points to the basic weirdness in Single Sign On and identity federation).
In stark contrast to car keys, almost no attention is paid to the pedigree of digital identities. Technology neutrality has bred a bewildering array of ad hoc authentication methods, including SMS messages, one time password generators, password calculators, grid cards and picture passwords; at the same time we've done nothing at all to inhibit the re-use of stolen IDs.
It's high time government and industry got working together on a uniform and universal set of smart identity tools to properly protect consumers online.
Stay tuned for more of my thoughts on identity safety, inspired by recent news that health identifiers may be back on the table in the gigantic U.S. e-health system. The security and privacy issues are large but the cyber safety technology is at hand!
Sorry followers, but I had an error in the HTML of a mid December blog post, and my RSS feed was probably broken. You might have missed these three posts:
- The consumerization of security January 2
- The State of the State of Privacy December 30
- The Constellation Research Disruption Checklist for 2015 December 19.
Increasingly, commentators are calling into question the state of information security. It's about time. We infosec professionals need to take action before our customers force us to.
Standard security is just not intellectually secure. Information Security Management Systems and security audits are based on discredited quality management frameworks like ISO 9000 and waterfall methodologies. The derivative PCI-DSS regime mitigates accidental losses and amateur attacks but is farcically inadequate in the face of organised crime. The economics of perimeter security are simply daft: many databases are now worth billions of dollars to identity thieves, but they're protected by meagre firewalls and administrators with superuser privileges on $40K salaries. Threat & Risk Assessments have their roots in Failure Modes, Effects and Criticality Analysis (FMECA), which is hopeless in the highly non-linear and unpredictable world of software, where a trivial mistake in one part of a program can have unlimited impact on the whole system; witness the #gotofail episode. Software is so easy to write and businesses are so obsessed with time to market that the world now rests on layer upon layer of bloated spaghetti code. The rapidity of software development has trumped quality and UI design. We have fragile home computers that are impossibly complex to operate safely, and increasingly, Internet-connected home appliances with the same characteristics.
We can't adequately protect credit card numbers, yet we're joy-riding like a 12-year-old on a stolen motorcycle into an Internet of Things.
We're going to have to fix complexity and quality before security stands a chance.
Maybe the market will come to the rescue. Consumers seem to tolerate crappy computer quality to some degree, doubtless weighing up the benefits of being online versus the hassle of the occasional blue screen or hard drive crash. But when things like cars, thermostats and swimming pool filters, which don't need to be computers, become computers, consumers may make a harsher judgement of technology reliability.
Twenty years ago when I worked in medical device software -- pre-Internet, let alone the Internet of Things -- I recall an article about quality which predicted the public would paradoxically put up with more bugs in flight control software than they would in a light switch. In a way, that analysis predicted one of the driving forces for technology today: consumerization.
Constellation Research analysts are wrapping up a very busy 2014 with a series of "State of the State" reports. For my part I've looked at the state of privacy, which I feel is entering its adolescent stage.
Here's a summary.
1. Consumers have not given up privacy - they've been tricked out of it.
The impression is easily formed that people just don’t care about privacy anymore, but in fact people are increasingly frustrated with privacy invasions. They’re tired of social networks mining users’ personal lives; they are dismayed that video game developers can raid a phone’s contact lists with impunity; they are shocked by the deviousness of Target analyzing women’s shopping histories to detect pregnant customers; and they are revolted by the way magnates help themselves to operational data like Uber’s passenger movements for fun or allegedly for harassment – just because they can.
2. Private sector surveillance is overshadowed by government intrusion, but is arguably just as bad.
Edward Snowden’s revelations of a massive military-industrial surveillance effort were of course shocking, but they should not steal all the privacy limelight. In parallel with and well ahead of government spy programs, the big OSNs and search engine companies have been gathering breathtaking amounts of data, all in the interests of targeted advertising. These data stores have come to the attention of the FBI and CIA who must be delighted that someone else has done so much of their spying for them. These businesses boast that they know us better than we know ourselves. That’s chilling. We need to break through into a post-Snowden world.
3. The U.S. is the Canary Islands of privacy.
The United States remains the only major economy without broad-based information privacy laws. There are almost no restraints on what American businesses may do with personal information they collect from their customers, or synthesize from their operations. In the rest of the world, most organizations must restrict their collection of data, limit the repurposing of data, and disclose their data handling practices in full. Individuals may want to move closer to European-style privacy protection, while many corporations prefer the freedom they have in America to hang on to any data they like while they figure out how to make money out of it. Digital companies like to call this “innovation” and grandiose claims are made about its criticality for the American economy, but many consumers would prefer the sort of innovation that respects their privacy while delivering value-for-data.
4. Privacy is more about politics than technology.
Privacy can be seen as a power play between individual rights and the interests of governments and businesses. Most of us actually want businesses to know quite a lot about us, but we expect them to respect what they know and to be restrained in how they use it. Privacy is less about what organizations do with information than what they choose not to do with it. Hence, privacy cannot be a technology issue. It is not about keeping things secret but rather, keeping them close. Privacy is actually the protection we need when things are not secret.
5. Land grab for “public” data accelerates.
6. Data literacy will be key to digital safety.
Computer literacy is one thing, but data literacy is different and less tangible. We have strong privacy intuitions that have evolved over centuries but in cyberspace we lose our bearings. We don’t have the familiar social cues when we go online, so now we need to develop new ones. And we need to build up a common understanding of how data flows in the digital economy. Today we train kids in financial literacy to engender a first-hand sense of how commerce works; data literacy may become even more important as a life skill. It's more than being able to work an operating system, a device and umpteen apps. It means having meaningful mental models of what goes on in computers. Without understanding this, we can’t construct effective privacy policies or privacy labeling.
7. Privacy will get worse before it gets better.
Privacy is messy, even where data protection rules are well entrenched. Consider the controversial Right To Be Forgotten in Europe, which requires search engine operators to provide a mechanism for individuals to request removal of old, inaccurate and harmful reports from results. The new rule has been derived from existing privacy principles, which treat the results of search algorithms as a form of synthesis rather than a purely objective account of history, and therefore hold the search companies partly responsible for the offense their processes might produce. Yet, there are plenty of unintended consequences, and collisions with other jurisprudence. The sometimes urgent development of new protections for old civil rights is never plain sailing.
My report "Privacy Enters Adolescence" can be downloaded here. It expands on the points above, and sets out recommendations for improving awareness of how Personal Data flows in the digital economy, negotiating better deals in the data-for-value bargain, the conduct of Privacy Impact Assessments, and developing a "Privacy Bill of Rights".
The Constellation Research analyst team has assembled a "year end checklist", offering suggestions designed to enable you to take better control of your digital strategy in 2015. We offer these actions to help you dominate "digital disruption" in the new year.
1. Matrix Commerce: Scrub your data
When it comes to Matrix Commerce, companies need to focus on the basics first. What are the basics? Cleaning up and getting your data in order. Much is discussed about the evolution of supply chains and the surrounding technologies. However these solutions are only as useful as the data that feeds them. Many CxOs that we have spoken to have discussed the need to focus on cleaning up their data. First work on a data audit to identify the most important sources of data for your efforts in Matrix Commerce. Second, focus on the systems that can process and make sense of this data. Finally, determine the systems and business processes that will be optimized with these improvements. Matrix Commerce starts with the right data. The systems and business processes that layer on top of this data are only as useful as the data. CxOs must continue to organize and clean their data house.
2. Safety and Privacy - Create your Enterprise Information Asset Inventory
In 2015, get on top of your information assets. When information is the lifeblood of your business, make sure you understand what really makes it valuable. Create (or refresh) your Enterprise Information Asset Inventory, and then think beyond the standard security dimensions of Confidentiality, Integrity and Availability. What sets your information apart from your competitors? Is it more complete, more up-to-date, more original or harder to acquire? To maximise the value of information, innovative organisations are gauging it in terms of utility, currency, jurisdictional certainty, privacy compliance and whatever other facets matter the most in their business environment. These innovative organizations structure their information technology and security functions to not merely protect the enterprise against threats, but to deliver the right data when and where it's needed most. Shifting from defensive security to strategic informatics is the key to success in the digital economy. Learn more about creating an information asset inventory.
3. Data to Decisions - Create your Big Data Plan of Action
Big Data is arriving at the end of the hype cycle. In 2015, real-time decision support using ‘smart data’ extracted from Big Data will manifest as a requirement for competitiveness. Digital businesses, and even plain online sellers, are all reducing reaction and response times. Enterprises have huge business and technology investments in data that need to support their daily activities better, so it's time to pivot from using Big Data for analysis and start examining how to deliver Smart Data to users and automated online systems. What is Smart Data? Well, let's say creating your organization's definition of Smart Data is priority number one in your Big Data strategy. Transformation in Digital markets requires a transformation in the competitive use of Big Data. Request a meeting with Constellation's CTO in residence, Andy Mulholland.
4. Next Gen CXP - Make Customer Experience Instinctual
Stop thinking of Customer Experience as a functional or departmental initiative and start thinking about experience from the customer’s point of view.
Customers don’t distinguish between departments when they require service from your organization. Customer Experience is a responsibility shared amongst all employees. However, the division of companies into functional departments with separate goals means that customer experience is often fractured. Rid your company of this ethos in 2015 by using design thinking to create a culture of cohesive customer experience.
Ensure all employees live your company mythology, employ the right customer and internal-facing technologies, collect the right data, and make changes to your strategy and products as soon as possible. Read "Five Approaches to Drive Customer Loyalty in a Digital World".
5. Future of Work - Take Advantage of Collaboration
Over the last few years, there has been a growing movement in the way people communicate and collaborate with their colleagues and customers, shifting from closed systems like email and chat, to more transparent tools like social networks and communities. That trend will continue in 2015 as people become more comfortable with sharing and as collaboration tools become more integrated with the business software they use to get their jobs done. Employees should familiarize themselves with the tools available to them, and learn how to pick the right tool for each of the various scenarios that make up their work day. Read "Enterprise Collaboration: From Simple Sharing to Getting Work Done".
6. Future of Work - Prepare for Demographic Shifts
In the next ten years 10% to 20% of the North American and European workforce will retire. Leaders need to understand and prepare for this tremendous shift so performance remains steady as many of the workforce's highly skilled workers retire.
To ensure a smooth transition, make sure your HCM software systems can accommodate a massive number of retirements, successions and career path developments, and new hires from external recruiting.
Constellation fully expects employment to be a sellers' market going forward. People leaders should ensure their HCM systems facilitate employee motivation, engagement and retention, lest they lose their best employees to competitors. Read "Globalization, HR, and Business Model Success". Additional cloud HR case studies here and here.
7. Digital Marketing Transformation - Brand Priorities Must Convey Authenticity
Brand authenticity must dominate digital and analog channels in 2015. Digital personas must not only reflect the brand, but also expand upon the analog experience. Customers love the analog experience, so deliver the same experience digitally. Brand conscious leaders must invest in the digital experience with an eye towards mass personalization at scale. While advertising plays a key role in distributing the brand message, investment in the design of digital experiences presents itself as a key area of investment for 2015. Download free executive brief: Can Brands Keep Their Promise?
8. Consumerization of IT: Use Mobile as the Gateway to Digital Transformation Projects
Constellation believes that mobile is more than just the device. While smartphones and other devices are key enablers of 'mobile', design in digital transformation should take into account how these technologies address the business value and business model transformation required to deliver on breakthrough innovation. If you have not yet started your digital transformation or are considering using mobile as an additional digital transformation point, Constellation recommends that clients assess how a new generation of enterprise mobile apps can change the business by:
- identifying a cross-functional business problem that cannot be solved with linear thinking;
- articulating the business problem and benefit;
- showing how the solution orchestrates new experiences;
- identifying how analytics and insights can fuel the business model shift;
- exploiting full native device features; and
- seeking frictionless experiences.
You'll be digital before you know it. Read "Why the Third Generation of Enterprise Mobile is Designed for Digital Transformation".
9. Technology Optimization & Innovation - Prepare Your Public Cloud Strategy
In 2015 technology leaders will need to create, adjust and implement their public cloud strategy. Considering estimates pegging Amazon AWS at 15-20% of virtualized servers worldwide, CIOs and CTOs need to actively plan and execute their enterprise’s strategy vis-a-vis the public cloud. Reducing technical debt and establishing next generation best practices to leverage the new ‘on demand’ IT paradigm should be a top priority for CIOs and CTOs seeking organizational competitiveness, greater job security and fewer budget restrictions.
An Engadget report today, "Hangouts eavesdrops on your chats to offer 'smart suggestions'" describes a new "spy/valet" feature being added to Google's popular video chat tool.
- "Google's Hangouts is gaining a handy, but slightly creepy new feature today. The popular chat app will now act as a digital spy-slash-valet by eavesdropping on your conversations to offer 'smart suggestions.' For instance, if a pal asks 'where are you?' it'll immediately prompt you to share your location, then open a map so you can pin it precisely."
It's sad that this sort of thing still gets meekly labeled as "creepy". The privacy implications are serious and pretty easy to see.
Google is evidently processing the text of Hangouts as they fly through their system, extracting linguistic cues, interpreting what's being said using Artificial Intelligence, extracting new meaning and insights, and offering suggestions.
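To make the concern concrete, here is a deliberately crude sketch of the kind of pattern matching that could drive such suggestions. Google's real system presumably uses far richer language models; the rules and action names below are purely illustrative. The point is that even this toy version must read every message to do its job:

```python
import re

# Illustrative trigger rules: each maps a linguistic cue in a chat
# message to a suggested action (names are hypothetical, not Google's).
SUGGESTION_RULES = [
    (re.compile(r"\bwhere are you\b", re.I), "offer_location_share"),
    (re.compile(r"\bwhat time\b", re.I), "offer_calendar_lookup"),
    (re.compile(r"\b(lunch|dinner)\b", re.I), "offer_restaurant_search"),
]

def suggest(message):
    """Return the suggested actions triggered by one chat message."""
    return [action for pattern, action in SUGGESTION_RULES
            if pattern.search(message)]

assert suggest("Hey, where are you?") == ["offer_location_share"]
assert suggest("Nice weather today") == []
```

Notice that nothing in this design distinguishes a casual chat from a consultation with your doctor or lawyer: the matcher processes every message alike, which is precisely why the privacy questions below need straight answers.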
We need some clarification about whether any covert tests of this technology have been undertaken during the R&D phase. A company obviously doesn't launch a new product like this without a lot of research, feasibility testing, prototyping and testing. Serious work on 'smart suggestions' would not start without first testing how it works in real life. So I wonder if any of this evaluation was done covertly on live data? Are Google researchers routinely eavesdropping on hangouts to develop the 'smart suggestions' technology?
Many people have said to me I'm jumping the gun, and that Google would probably test the new Hangouts feature on its own employees. Perhaps, but given that scanning gmails is situation normal for Google, and they have a "privacy" culture that joins up all their business units so that data may be re-purposed almost without limit, I feel sure that running AI algorithms on text without telling people would be par for the course.
In development and in operation, we need to know what steps are taken to protect the privacy of hangout data. What personally identifiable data and metadata is retained for other purposes? Who inside Google is granted access to the data, and especially to the synthesised insights? How long does any secondary usage persist? Are particularly sensitive matters (like health data, financial details, corporate intellectual property etc.) filtered out?
This is well beyond "creepy". Hangouts and similar video chat are certainly wonderful technologies. We're using them routinely for teaching, education, video conferencing, collaboration and consultation. The tools may become entrenched in corporate meetings, telecommuting, healthcare and the professions. But if I am talking with my doctor, or discussing patents with my legal team, or having a clandestine chat with a lover, I clearly do not want any unsolicited contributions from the service provider. More fundamentally, I want assurance that no machine is ever tapping into these sorts of communications, running AI algorithms, and creating new insights. If I'm wrong about covert testing on live data, then Google could do what Apple did and publish an Open Letter clarifying their data usage practices and strategies.
Come to think of it, if Google is running natural language processing algorithms over the Hangouts stream, might they be augmenting their gmail scanning the same way? Their business model is to extract insights about users from any data they get their hands on. Until now it's been a crude business of picking out keywords and using them to profile users' interests and feed targeted advertising. But what if they could get deeper information about us through AI? Is there any sign from their historical business practices that Google would not do this? And what if they can extract sensitive information like mental health indications? Even with good intent and transparency, predicting healthcare from social media is highly problematic, as shown by the "Samaritans Radar" experience.
Artificial Intelligence is one of the new frontiers. Hot on the heels of the successes of IBM Watson, we're seeing Natural Language Processing and analytics rapidly penetrate business and now consumer applications. Commentators alternately tell us that AI will end humanity, and that we shouldn't worry about it at all. For now, I simply call on people to think clearly through the implications, such as for privacy. If AI programs are clever enough to draw deep insights about us from what we say, then the "datapreneurs" in charge of those algorithms need to remember that they are just as accountable for privacy as if they had asked us to reveal all by filling out a questionnaire.
In my last blog, "Improving the Position of the CISO", I introduced new research I've done on extending the classic "Confidentiality-Integrity-Availability" (C-I-A) frame for security analysis to cover the other qualities of enterprise information assets. The idea is to build a comprehensive account of what makes information valuable in the context of the business, leveraging the traditional tools and skills of the CISO. After all, security professionals are particularly good at looking at context. Instead of restricting themselves to defending information assets against harm, CISOs can help enhance those assets by building up their other competitive attributes.
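As a loose illustration of the extended frame, an information asset could be profiled against qualities beyond C-I-A and the weakest qualities flagged for investment. This is a minimal sketch; the attribute names and the scoring scale are my own assumptions, not taken from the report.

```python
from dataclasses import dataclass, fields

# Hypothetical sketch: score an information asset on qualities beyond
# Confidentiality-Integrity-Availability. Attribute names are illustrative.
@dataclass
class AssetProfile:
    confidentiality: int  # classic C-I-A qualities, scored 0-5
    integrity: int
    availability: int
    utility: int          # can the data be re-purposed for new lines of business?
    completeness: int     # coverage, history, metadata
    currency: int         # freshness and accuracy over time
    consent: int          # strength of customer permissions for secondary use

def weakest_qualities(profile: AssetProfile, threshold: int = 3) -> list[str]:
    """Return the attributes scoring below threshold: candidates for investment."""
    return [f.name for f in fields(profile)
            if getattr(profile, f.name) < threshold]

# A loyalty database that is well secured but poorly consented and hard to re-use.
loyalty_db = AssetProfile(confidentiality=4, integrity=4, availability=5,
                          utility=2, completeness=4, currency=3, consent=1)
print(weakest_qualities(loyalty_db))  # utility and consent need attention
```

The point of the sketch is simply that the CISO's familiar risk-register discipline extends naturally to value attributes, not just threat attributes.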
Let's look at some examples of how this would work, in some classic Big Data applications in retail and hospitality.
Companies in these industries have long been amassing detailed customer databases under the auspices of loyalty programs. Supermarkets have logged our every purchase for many years, so they can, for instance, put together new deals on our favorite items, from our preferred brands, or from competitors trying to get us to switch brands. Likewise, hotels track when we stay and what we do, so they can personalise our experience, tailor new packages for us, and try to cross-sell services they predict we'll find attractive. Behind the scenes, the data is also used in operations to forecast demand, fine-tune logistics and so on.
Big Data techniques amplify the value of information assets enormously, but they can take us into complicated territory. Consider for example the potential for loyalty information to be parlayed into insurance and other financial services products. Supermarkets find they now have access to a range of significant indicators of health & lifestyle risk factors which are hugely valuable in insurance calculations ... if only the data were permitted to be re-purposed in that way.
The question is, what is it about the customer database of a given store or hotel that gives it an edge over its competitors? There are many more attributes to think creatively about beyond C-I-A!
- It's important to rigorously check that the raw data, the metadata and any derived analytics can actually be put to different business purposes.
- Are data formats well-specified, and technically and semantically interoperable?
- What would it cost to improve interoperability as needed?
- Is the data physically available to your other business systems?
- Does the rest of the business know what's in the data sets?
- Do you know more about your customers than your competitors do?
- Do you supplement and enrich raw customer behaviours with questionnaires or linked data?
- How far back in time do the records go?
- Do you understand the reasons for any major gaps? Do the gaps themselves tell you anything?
- What sort of metadata do you have? For example, do you retain time & location, trend data, changes, origins and so on?
Currency & Accuracy
- Is your data up to date? Remember that accuracy can diminish over time, so the sheer age of a long-term database can have a downside.
- What mechanisms are there to keep data up to date?
Permissions & Consent
- Have customers consented to secondary usage of data?
- Is the consent specific, blanket or bundled?
- Might customers be surprised and possibly put off to learn how their loyalty data is utilised?
- Do the terms & conditions of participation in a loyalty program cover what you wish to do with the data?
- Do the Ts&Cs (which might have been agreed to in the past) still align with the latest plans for data usage?
- Are there opportunities to refresh the Ts&Cs?
- Are there opportunities for customers to negotiate the value you can offer for re-purposing the data?
When businesses derive new insights from data, they may be synthesising brand new Personal Information, and non-obvious privacy obligations can go along with that. The competitive advantage of Big Data can be squandered if regulations are overlooked, especially in international environments.
- So where is the data held, and where does it flow?
- Are applications for your data compliant with applicable regulations?
- Is health information or similar sensitive Personal Information extracted or synthesised, and do you have specific consent for that?
- Can you meet the Access & Correction obligations in many data protection regulations?
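The permissions and compliance questions above can be sketched as a simple gate that decides whether a given record may be re-purposed. This is a hypothetical illustration only; the field names, consent categories and the sensitive-data rule are my own assumptions, not a statement of any regulation.

```python
# Hypothetical re-purposing gate for loyalty records.
# Consent categories ("specific", "blanket") and the rule that sensitive
# categories always need specific consent are illustrative assumptions.
SENSITIVE = {"health", "finance"}

def may_repurpose(record: dict, purpose: str) -> bool:
    """Allow re-use only with consent for the purpose; sensitive
    categories require their own specific consent."""
    consents = record.get("consents", {})
    if record.get("category") in SENSITIVE:
        return consents.get(purpose) == "specific"
    # Non-sensitive data: specific or blanket consent will do.
    return consents.get(purpose) in {"specific", "blanket"}

purchases = [
    {"item": "vitamins", "category": "health",
     "consents": {"insurance": "blanket"}},
    {"item": "groceries", "category": "general",
     "consents": {"insurance": "blanket"}},
]
usable = [r["item"] for r in purchases if may_repurpose(r, "insurance")]
print(usable)  # only the non-sensitive record passes the gate
```

Even this toy filter shows why bundled or historical consents are fragile assets: the value of the database for a new purpose collapses to the subset of records whose permissions actually cover that purpose.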
For more detail, my new report, "Strategic Opportunities for the CISO", is available now.