Lockstep

Free search.

Search engines are wondrous things. I myself use Google search umpteen times a day. I don't think I could work or play without it anymore. And yet I am a strong supporter of the contentious "Right to be Forgotten". The "RTBF" is hotly contested, and I am the first to admit it's a messy business. For one thing, it's not ideal that Google itself is required, for now, to adjudicate RTBF requests in Europe. But we have to accept that all of privacy is contestable. The balance between the right to privacy and the right to access information is tricky. RTBF has a long way to go, and I sense that European jurists and regulators are open and honest about this.

One of the starkest RTBF debating points is free speech. Does allowing individuals to have irrelevant, inaccurate and/or outdated search results blocked represent censorship? Is it an assault on free speech? There is surely a technical-legal question about whether the output of an algorithm represents "free speech", and as far as I can see, that question remains open. Am I the only commentator surprised by this legal blind spot? I have to say that such uncertainty destabilises a great deal of the RTBF dispute.

I am not a lawyer, but I have a strong sense that search outputs are not the sort of thing that constitutes speech. Let's bear in mind what web search is all about.

Google search is core to its multi-billion dollar advertising business. Search results are not unfiltered replicas of things found in the public domain, but rather the subtle outcome of complex Big Data processes. Google's proprietary search algorithm is famously secret, but we do know how sensitive it is to context. Most people will have noticed that search results change day by day and from place to place. But why is this?

When we enter search parameters, the result we get is actually Google's guess about what we are really looking for. Google in effect forms a hypothesis, drawing on much more than the express parameters, including our search history, browsing history, location and so on. And in all likelihood, search is influenced by the many other things Google gleans from the way we use its other properties -- gmail, maps, YouTube, hangouts and Google+ -- which are all linked now under one master data usage policy.

And here's the really clever thing about search. Google monitors how well it's predicting our real or underlying concerns. It uses a range of signals and metrics to assess what we do with search results, and it continuously refines those processes. This is what Google really gets out of search: deep understanding of what its users are interested in, and how they are likely to respond to targeted advertising. Each search result is a little test of Google's Artificial Intelligence, which, as some like to say, is getting to know us better than we know ourselves.
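
As a toy illustration only (Google's real ranking is proprietary and vastly more sophisticated than anything shown here), the feedback idea looks something like this:

```python
# Toy sketch of context-weighted ranking with click feedback.
# Entirely my own illustration; the signals, weights and data
# structures are invented, not Google's.

def score(doc, query_terms, profile):
    """Blend raw term relevance with personal interest signals."""
    relevance = sum(doc["text"].count(t) for t in query_terms)
    personal = sum(profile["interests"].get(topic, 0.0)
                   for topic in doc["topics"])
    return relevance + personal

def record_click(doc, profile):
    """The feedback loop: a click confirms the 'hypothesis' and
    strengthens the matching interests for next time."""
    for topic in doc["topics"]:
        profile["interests"][topic] = profile["interests"].get(topic, 0.0) + 0.5

docs = [
    {"text": "jaguar car review", "topics": ["cars"]},
    {"text": "jaguar habitat and diet", "topics": ["wildlife"]},
]
profile = {"interests": {"cars": 1.0}}  # gleaned from history, mail, maps...

results = sorted(docs, key=lambda d: score(d, ["jaguar"], profile), reverse=True)
record_click(results[0], profile)  # each result is a little test
```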

As important as they are, it seems to me that search results are really just a by-product of a gigantic information business. They are nothing like free speech.

Posted in Privacy, Internet, Big Data

The Creepy Test

I'm going to assume readers know what's meant by the "Creepy Test" in privacy. Here's a short appeal to use the Creepy Test sparingly and carefully.

The most obvious problem with the Creepy Test is its subjectivity. One person's "creepy" can be another person's "COOL!!". For example, a friend of mine thought it was cool when he used Google Maps to check out a hotel he was going to, and the software usefully reminded him of his check-in time (evidently, Google had scanned his gmail and mashed up the registration details the next time he searched for the property). I actually thought this was way beyond creepy; imagine if it wasn't a hotel but a mental health facility, and Google was watching your psychiatric appointments.

In fact, for some people, creepy might actually be cool, in the same way as horror movies or chilli peppers are cool. There's already an implicit dare in the "Nothing To Hide" argument. Some brave souls seem to brag that they haven't done anything they don't mind being made public.

Our sense of what's creepy changes over time. We can get used to intrusive technologies, and that suits the agendas of infomopolists who make fortunes from personal data, hoping that we won't notice. On the other hand, objective and technology-neutral data privacy principles have been with us for over thirty years, and by and large, they work well to address contemporary problems like facial recognition, the cloud, and augmented reality glasses.

Using folksy terms in privacy might make the topic more accessible to laypeople, but it tends to distract from the technicalities of data privacy regulations. These are not difficult matters in the scheme of things; data privacy is technically about objective and reasonable controls on the collection, use and disclosure of personally identifiable information. I encourage anyone with an interest in privacy to spend time familiarising themselves with common Privacy Principles and the definition of Personal Information. And then it's easy to see that activities like Facebook's automated face recognition and Tag Suggestions aren't merely creepy; they are objectively unlawful!

Finally, and most insidiously, when emotive terms like creepy are used in debating public policy, they actually disempower the critical voices. If "creepy" is the worst thing you can say about a given privacy concern, then you're marginalised.

We should avoid being subjective about privacy. By all means, let's use the Creepy Test to help spot potential privacy problems, and kick off a conversation. But as quickly as possible, we need to reduce privacy problems to objective criteria and, with cool heads, debate the appropriate responses.

See also A Theory of Creepy: Technology, Privacy and Shifting Social Norms by Omer Tene and Jules Polonetsky.

      • "Alas, intuitions and perceptions of 'creepiness' are highly subjective and difficult to generalize as social norms are being strained by new technologies and capabilities". Tene & Polonetsky.

Posted in Privacy, Biometrics, Big Data

On pure maths and innovation

An unpublished letter to the editor of The New Yorker, February 2015.

My letter

Alec Wilkinson says in his absorbing profile of the quiet genius Yitang Zhang ("The pursuit of beauty", February 2) that pure mathematics is done "with no practical purposes in mind". I do hope mathematicians will forever be guided by aesthetics more than economics, but nevertheless, pure maths has become a cornerstone of the Information Age, just as physics was of the Industrial Revolution. For centuries, prime numbers might have been intellectual curios, but in the 1970s they were beaten into modern cryptography. The security codes that scaffold almost all e-commerce are built from primes. Any advance in understanding these abstract materials impacts the Internet itself, for better or for worse. So when Zhang demurs that his result is "useless for industry", he's misspeaking.

The online version of the article is subtitled "Solving an Unsolvable Problem". The apparent oxymoron reveals a wondrous pattern in mathematical discovery. Conundrums widely accepted to be impossible are in fact solved quite often, and frenetic periods of innovation usually follow. The surprise breakthrough is typically inefficient (or, far worse in a mathematician's mind, ugly) but it can inspire fresh thinking and lead to polished methods. We are in one of these intense creative periods right now. Until 2008, it was widely thought that true electronic cash was impossible, but then the mystery figure Satoshi Nakamoto created Bitcoin. While it overturned the conventional wisdom, Bitcoin is slow and anarchic, and problematic as mainstream money. But it has triggered a remarkable explosion of digital currency innovation.

A published letter

Another letter writer made a similar point:

As Alec Wilkinson points out in his Profile of the math genius Yitang Zhang, results in pure mathematics can be sources of wonder and delight, regardless of their applications. Yet applications do crop up. Nineteenth-century mathematicians showed that there are geometries as logical and complete as Euclidean geometry, but which are utterly distinct from it. This seemed of no practical use at the time, but Albert Einstein used non-Euclidean geometry to make the most successful model that we have of the behavior of the universe on large scales of distance and time. Abstract results in number theory, Zhang’s field, underlie cryptography used to protect communication on devices that many of us use every day. Abstract mathematics, beautiful in itself, continually results in helpful applications, and that’s pretty wonderful and delightful, too.

David Lee
Sandy Spring, Md.

On innovation

My favorite example of mathematical innovation concerns public key cryptography (and I ignore here the credible reports that PKC was invented by British intelligence a few years earlier but kept secret for decades). For centuries, there was essentially one family of cryptographic algorithms, in which a secret key shared by sender and recipient is used both to encrypt and decrypt the protected communication. Key distribution is the central problem in so-called "Symmetric" Cryptography: how does the sender get the secret key to the recipient some time before sending the message? The dream was for the two parties to be able to establish a secret key without ever having to meet or use any secret channel. It was thought to be an unsolvable problem ... until it was solved by Ralph Merkle in 1974. His solution, dubbed "Merkle's Puzzles", was almost hypothetical; the details don't matter here, but they would have been awkward, to put it mildly, involving millions of small messages. Yet the impact on cryptography was near instantaneous. The fact that, in theory, two parties really could establish a shared secret via public messages triggered a burst of development of practical public key cryptography: first the Diffie-Hellman algorithm, and then RSA by Ron Rivest, Adi Shamir and Leonard Adleman. We probably wouldn't have e-commerce if it wasn't for Merkle's crazy curious maths.
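
For readers who like to see the trick made concrete, here is a minimal sketch of Diffie-Hellman key agreement, with toy-sized numbers of my own choosing; real systems use vetted, very large parameters through a proper cryptographic library:

```python
# Minimal Diffie-Hellman sketch: two parties derive a shared secret
# entirely from public messages. Toy-sized parameters for clarity.
import secrets

p = 0xFFFFFFFB  # public prime modulus (2**32 - 5; toy size)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends this in the clear
B = pow(g, b, p)   # Bob sends this in the clear

# Each party raises the other's public value to its own secret;
# both arrive at the same key, which was never transmitted.
assert pow(B, a, p) == pow(A, b, p)
```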

Posted in Security, Science

Cyber Security Summit - Part 2

This is Part 2 of my coverage of the White House #CyberSecuritySummit; see Part 1 here.

On Feb 13th, at President Obama's request, a good number of the upper echelon of Internet experts gathered at Stanford University in Silicon Valley to work out what to do next about cybersecurity and consumer protection online. The Cyber Security Summit was organised around Obama signing a new Executive Order, creating cyber threat information sharing hubs and standards to foster sharing while protecting privacy, and it was meant to maintain the momentum of his cybersecurity and privacy legislative program.

The main session of the summit traversed very few technical security issues. The dominant theme was intelligence sharing: how can business and government share what they know in real time about vulnerabilities and emerging cyber attacks? Just a couple of speakers made good points about preventative measures. Intel President Renee James highlighted the importance of a "baseline of computing security"; MasterCard CEO Ajay Banga was eloquent on how innovation can thrive in a safety-oriented regulated environment like road infrastructure and road rules. Apart from these few exceptions, the summit had a distinct military intelligence vibe, in keeping with the cyber warfare trope beloved by politicians.

On the one hand, it would be naive to expect such an event to make actual progress. And I don't mind a political showcase if it secures the commitment of influencers and builds awareness. But on the other hand, the root causes of our cybersecurity dilemma have been well known for years, and this esteemed gathering seemed oblivious to them.

Where's the serious talk of preventing cyber security problems? Where is the attention to making e-business platforms and digital economy infostructure more robust?

Personal Information today is like nitroglycerin - it has to be handled with the utmost care, lest it blow up in your face. So we have the elaborate and brittle measures of PCI-DSS or the HIPAA security rules, rendered useless by the slightest operational slip-up.

How about rendering personal information safer online, so it cannot be stolen, co-opted, modified and replayed? If stolen information couldn't be used by identity thieves with impunity, we would neutralise the bulk of today's cybercrime. This is how EMV Chip & PIN payment security works. Personal data and purchase details are combined in a secure chip and digitally signed under the customer's control, to prove to the merchant that the transaction was genuine. The signed transaction data cannot be easily hacked (thanks Jim Fenton for the comment; see below); stolen identity data is useless to a thief if they don't control the chip; a stolen chip is only good for single transactions (and only if the PIN is stolen as well) rather than the mass fraud perpetrated after raiding large databases.
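
To illustrate the principle only (the actual EMV protocol is considerably more involved), here is a sketch of a chip-style signed transaction using a generic elliptic curve signature:

```python
# Sketch of the principle behind chip-signed transactions, using the
# Python 'cryptography' package. A generic digital signature demo,
# not the EMV protocol itself; the transaction fields are invented.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The private key lives inside the chip and never leaves it.
chip_key = ec.generate_private_key(ec.SECP256R1())

transaction = b"pan=4111111111111111;merchant=ACME;amount=99.95"
signature = chip_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The merchant or issuer verifies with the cardholder's public key.
try:
    chip_key.public_key().verify(signature, transaction,
                                 ec.ECDSA(hashes.SHA256()))
    print("transaction genuine")
except InvalidSignature:
    print("transaction forged or altered")

# A thief with a stolen card number but no chip cannot produce a
# valid signature over a new transaction, so replay is useless.
```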

It's obvious (isn't it?) that we need to do something radically different before the Internet of Things turns into a digital cesspool. The good news for privacy and security in ubiquitous computing is that most smart devices can come with Secure Elements and built-in digital signature capability, so that all the data they broadcast can be given pedigree. We should be able to tell for sure that every piece of information flowing in the IoT has come from a genuine device, with definite attributes, operating with the consent of its owner.

The technical building blocks for a properly secure IoT are at hand. Machine-to-Machine (M2M) identity modules (MIMs) and Trusted Execution Environments (TEEs) provide safe key storage and cryptographic functionality. The FIDO Alliance protocols leverage this embedded hardware and enable personal attributes to be exchanged reliably. Only a couple of years ago, Vint Cerf in an RSA Conference keynote speculated that ubiquitous public key cryptography would play a critical role in the Internet of Things, but he didn't know how exactly.

In fact, we have known what to do with this technology for years.

At the close of the Cyber Security Summit, President Obama signed his Executive Order -- in ink. The irony of using a pen to sign a cybersecurity order seemed lost on all concerned. And it is truly tragic.

[Photo: Bill Clinton and Bertie Ahern, 1998]
In 1998, Bill Clinton and his Irish counterpart Bertie Ahern signed a US-Ireland communique on e-commerce. At that time, the two leaders used smartcards to digitally sign the agreement. Then in 2003 -- still early days -- Bill Gates espoused the importance of chip technology for authentication. Within weeks, several notebook computers were released with integrated smartcard readers, but service providers chose not to use the stronger security options. Instead of leading a new wave of decent security, banks and governments muddled through with passwords, and then password generators or text messages. Mainstream e-business missed a huge opportunity through the 2000s to embed smart authentication in the day-to-day user experience.

We probably wouldn't need a cybersecurity summit in 2015 if serious identity security had been built into the cyber infrastructure over a decade ago.

Posted in Smartcards, Security

Obama's Cybersecurity Summit

The White House Summit on Cybersecurity and Consumer Protection was hosted at Stanford University on Friday February 13. I followed the event from Sydney, via the live webcast.

It would be naive to have expected the White House Cybersecurity Summit to be anything but political. President Obama and his colleagues were in their comfort zone, talking up America's recent economic turnaround, and framing their recent wins squarely within Silicon Valley, where the summit took place. With a few exceptions, the first two hours were more about green energy, jobs and manufacturing than cyber security. It was a lot like a lost episode of The West Wing.

The exceptions were important. Some speakers really nailed some security issues. I especially liked the morning contributions from Intel President Renee James and MasterCard CEO Ajay Banga. James highlighted that Intel has worked for 10 years to improve "the baseline of computing security", making her one of the few speakers to get anywhere near the inherent insecurity of our cyber infrastructure. The truth is that cyberspace is built on weak foundations; the software development practices and operating systems that bear the economy today were not built for the job. For mine, the Summit was too much about military/intelligence themed information sharing, and not enough about why our systems are so precarious. I know it's a dry subject but if they're serious about security, policy makers really have to engage with software quality and reliability, instead of thrilling to kids learning to code. Software development practices are to blame for many of our problems; more on software failures here.

Ajay Banga was one of several speakers to urge the end of passwords. He summed up the authentication problem very nicely: "Stop making us remember things in order to prove who we are". He touched on MasterCard's exploration of continuous authentication bracelets and biometrics (more news of which coincidentally came out today). It's important however that policy makers' understanding of digital infrastructure resilience, cybercrime and cyber terrorism isn't skewed by everyone's favourite security topic - customer authentication. Yes, it's in need of repair, yet authentication is not to blame for the vast majority of breaches. Mom and Pop struggle with passwords and they deserve better, but the vast majority of stolen personal data is lifted by organised criminals en masse from poorly secured back-end databases. Replacing customer passwords or giving everyone biometrics is not going to solve the breach epidemic.

Banga also indicated that the Information Highway should be more like road infrastructure. He highlighted that national routes are regulated, drivers are licensed, there are rules of the road, standardised signs, and enforcement. All these infrastructure arrangements leave plenty of room for innovation in car design, but it's accepted that "all cars have four wheels".

Tim Cook was then the warm-up act before Obama. Many on Twitter unkindly branded Cook's speech as an ad for Apple, paid for by the White House, but I'll accentuate the positives. Cook continues to campaign against business models that monetize personal data. He repeated his promise made after the ApplePay launch that they will not exploit the data they have on their customers. He put privacy before security in everything he said.

Cook painted a vision where digital wallets hold your passport, driver license and other personal documents, under the user's sole control, and without trading security for convenience. I trust that he's got the mobile phone Secure Element in mind; until we can sort out cybersecurity at large, I can't support the counter trend towards cloud-based wallets. The world's strongest banks still can't guarantee to keep credit card numbers safe, so we're hardly ready to put our entire identities in the cloud.

In his speech, President Obama reiterated his recent legislative agenda for information sharing, uniform breach notification, student digital privacy, and a Consumer Privacy Bill of Rights. He stressed the need for private-public partnership and cybersecurity responsibility to be shared between government and business. He reiterated the new Cyber Threat Intelligence Integration Center. And as flagged just before the summit, the president signed an Executive Order that will establish cyber threat information sharing "hubs" and standards to foster sharing while protecting privacy.

Obama told the audience that cybersecurity "is not an ideological issue". Of course that message was actually for Congress which is deliberating over his cyber legislation. But let's take a moment to think about how ideology really does permeate this arena. Three quasi-religious disputes come to mind immediately:

  • Free speech trumps privacy. The ideals of free speech have been interpreted in the US in such a way that makes broad-based privacy law intractable. The US is one of only two major nations now without a general data protection statute (the other is China). It seems this impasse is rarely questioned anymore by either side of the privacy debate, but perhaps the scope of the First Amendment has been allowed to creep out too far, for now free speech rights are in effect being granted even to computers. Look at the controversy over the "Right to be Forgotten" (RTBF), where Google is being asked to remove certain personal search results if they are irrelevant, old and inaccurate. Jimmy Wales claims this requirement harms "our most fundamental rights of expression and privacy". But we're not talking about speech here, or even historical records, but rather the output of a computer algorithm, and a secret algorithm at that, operated in the service of an advertising business. The vociferous attacks on RTBF are very ideological indeed.
  • "Innovation" trumps privacy. It's become an unexamined mantra that digital businesses require unfettered access to information. I don't dispute that some of the world's richest ever men, and some of the world's most powerful ever corporations have relied upon the raw data that exudes from the Internet. It's just like the riches uncovered by the black gold rush on the 1800s. But it's an ideological jump to extrapolate that all cyber innovation or digital entrepreneurship must continue the same way. Rampant data mining is laying waste to consumer confidence and trust in the Internet. Some reasonable degree of consumer rights regulation seems inevitable, and just, if we are to avert a digital Tragedy of the Commons.
  • National Security trumps privacy. I am a rare privacy advocate who actually agrees that the privacy-security equilibrium needs to be adjusted. I believe the world has changed since some of our foundational values were codified, and civil liberties are just one desirable property of a very complicated social system. However, I call out one dimensional ideology when national security enthusiasts assert that privacy has to take a back seat. There are ways to explore a measured re-calibration of privacy, to maintain proportionality, respect and trust.

President Obama described the modern technological world as a "magnificent cathedral" and he made an appeal to "values embedded in the architecture of the system". We should look critically at whether the values of entrepreneurship, innovation and competitiveness embedded in the way digital business is done in America could be adjusted a little, to help restore the self-control and confidence that consumers keep telling us is evaporating online.

Posted in Trust, Software engineering, Security, Internet

The state of the state: Privacy enters Adolescence

Constellation Research recently launched the "State of Enterprise Technology" series of research reports. The series assesses the current state of the enterprise technologies Constellation considers crucial to digital transformation, and provides snapshots of the future usage and evolution of these technologies. Constellation will continue to publish reports in the State of Enterprise Technology series throughout Q1.

My first contribution to this series, "Privacy Enters Adolescence", focuses on Safety and Privacy. I've looked at the state of data privacy in 2015, and identified seven trends you should be aware of in order to protect your customers' information.

Here's an excerpt from the report:

Digital Safety and Privacy

Constellation's business theme of Digital Safety and Privacy is all about the art and science of maximizing the information assets of a business, including its most important assets – its people. Our research in this theme enables clients to capitalize on cloud, mobility, Big Data and the Internet of Things, without compromising the digital safety of the business, or the privacy and trust of its end users.

Seven Digital Safety and Privacy Trends for 2015


  • Consumers have not given up privacy - they've been tricked out of it. The impression is easily formed that people just don’t care about privacy anymore. Yet there is no proof that privacy is dead. In fact, a robust study of young adults has shown no major difference between them and older people on the importance of privacy.
  • Private sector surveillance is overshadowed by government intrusion, but is arguably just as bad. There is nothing inevitable about private sector surveillance. Consumers are waking up to the fact that digital business models are generating unprecedented fortunes on the back of the personal data they are giving away in loyalty programs, social networks, search, cloud email, and fitness trackers. Most people remain blissfully ignorant of what's being done with all that data, but we see budding signs of resentment from consumers whose every interaction is exploited without their consent.
  • The U.S. is the Canary Islands of privacy. The United States remains the only major economy without broad-based information privacy laws.
  • Privacy is more about politics than technology. Privacy can be seen as a power play between individual rights and the interests of governments and businesses.
  • The land grab for "public" data accelerates. Data is an immensely valuable raw material. More than data mining, Big Data is really about data refining. And unlike the stuff of traditional extraction industries, data seems inexhaustible, and the cost of extraction is near zero. Something akin to land rights for privacy may be the future.
  • Data literacy will be key to digital safety. Computer literacy is one thing, but data literacy is different and less well defined so far. When we go online, we don’t have the familiar social cues, so now we need to develop new ones. And we need to build up a common understanding of how data flows in the digital economy. Data literacy is more than being able to work an operating system, a device and umpteen apps: it means having meaningful mental models of what goes on in computers.
  • Privacy will get worse before it gets better. Privacy is messy, even in jurisdictions where data protection rules are well entrenched. Consider the controversial new Right to Be Forgotten ruling of the European Court of Justice, which resulted in plenty of unintended consequences, and collisions with other jurisprudence, namely the United States' protection of free speech.

My report "Privacy Enters Adolescence" can be downloaded here. It expands on the points above, and sets out recommendations for improving awareness of how personal data flows in the digital economy, negotiating better deals in the data-for-value bargain, and the conduct of Privacy Impact Assessments.

Posted in Social Media, Privacy, Cloud, Big Data

Suspension of Disbelief and digital safety

If the digital economy is really the economy, then it's high time we moved beyond hoping that we can simply train users to be safe online. Is the real economy only for heroes who can protect themselves in the jungle, writing their own code, as if they're carrying their own guns? Or do we as a community build structures and standards, and insist on technologies that work for all?

For most people, the World Wide Web experience is still a lot like watching cartoons on TV. The human-machine interface is almost the same. The images and actions are just as synthetic; crucially, nothing on a web browser is real. Almost anything goes -- just as the Roadrunner defies gravity in besting Coyote, there are no laws of physics that temper the way one bit of multimedia leads to the next. Yes, there is a modicum of user feedback in the way we direct some of the action when browsing and e-shopping, but it's quite illusory; for the most part, all we're really doing is flicking channels across a billion pages.

It's the suspension of disbelief when browsing that lies at the heart of many of the safety problems we're now seeing. Inevitably, we lose our bearings in the totally synthetic World Wide Web. We don't even realise it: we're taken in by a virtual reality, and we become captive to social engineering.

But I don't think it's possible to tackle online safety by merely countering users' credulity. Education is not the silver bullet, because the Internet is really so technologically complex and abstract that it lies beyond the comprehension of most lay people.

Using the Internet 'safely' today requires deep technical skills, comparable to the level of expertise needed to operate an automobile circa 1900. Back then you needed to be able to do all your own mechanics [roughly akin to the mysteries of maintaining anti-virus software], look after the engine [i.e. configure the operating system and firewall], navigate the chaotic emerging road network [there's yet no trusted directory for the Internet, nor any road rules], and even figure out how to fuel the contraption [consumer IT supply chains are about as primitive as the gasoline industry was 100 years ago]. The analogy with the early car industry becomes especially sharp for me when I hear utopian open source proponents argue that writing one's own software is the best way to be safe online.

The Internet is so critical (I'd have thought this was needless to say) that we need ways of working online that don't require us all to be DIY experts.

I wrote a first draft of this blog six years ago, and at that time I called for patience in building digital literacy and sophistication. "It took decades for safe car and road technologies to evolve, and the Internet is still really in its infancy," I said in 2009. But I'm less relaxed about this now, on the brink of the Internet of Things. It's great that policy makers like the US FTC are calling on connected device makers to build in security and privacy, but I suspect the Internet of Things will require the same degree of activist oversight and regulation as the auto industry does, for the sake of public order and the economy. Do we have the appetite to temper breakneck innovation with safety rules?

Posted in Culture, Internet, Security

Consumerization of Authentication

For the second year running, the FIDO Alliance hosted a consumer authentication showcase at CES, the gigantic Consumer Electronics Show in Las Vegas, this year featuring four FIDO Alliance members.

This is a watershed in Internet security and privacy - never before has authentication been a headline consumer issue.

Sure we've all talked about the password problem for ten years or more, but now FIDO Alliance members are doing something about it, with easy-to-use solutions designed specifically for mass adoption.

The FIDO Alliance is designing the authentication plumbing for everything online. They are creating new standards and technical protocols allowing secure personal devices (phones, personal smart keys, wearables, and soon a range of regular appliances) to securely transmit authentication data to cloud services and other devices, in some cases eliminating passwords altogether.
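
Underneath, the pattern is plain challenge-response with a device-held key. Here is a minimal sketch of that pattern (my own illustration, not the actual FIDO wire protocol):

```python
# Challenge-response with a device-held key: the pattern FIDO
# builds on. Not the actual FIDO UAF/U2F protocol.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device mints a key pair; the service stores
# only the public key, so there is no shared secret to breach.
device_key = Ed25519PrivateKey.generate()
service_record = device_key.public_key()

# Login: the service issues a fresh, single-use random challenge...
challenge = os.urandom(32)

# ...the device signs it once a local gesture (PIN, fingerprint)
# unlocks the key, which never leaves the device...
assertion = device_key.sign(challenge)

# ...and the service verifies (raises InvalidSignature on failure).
service_record.verify(assertion, challenge)
print("authenticated - no password involved")
```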

See also my ongoing FIDO Alliance research at Constellation.

Posted in Privacy, Identity, Constellation Research, Security

We cannot pigeon-hole risk

In electronic business, Relying Parties (RPs) need to understand the risks of dealing with the wrong person (say, a fraudulent customer or a disgruntled ex-employee), determine what they really need to know about those people in order to help manage risk, and then, in many cases, design a registration process for bringing those people into the business fold. With federated identity, the aim is to offload the registration and other overheads onto an Identity Provider (IdP). But evaluating IdPs and forging identity management arrangements has proven to be enormously complex, and the federated identity movement has been looking for ways to streamline and standardize the process.

One approach is to categorise different classes of IdP, matched to different transaction types. "Levels of Assurance" (LOAs) have been loosely standardised by many governments and in some federated identity frameworks, like the Kantara Initiative. The US Authentication Guideline NIST SP 800-63 is one of the preeminent de facto standards, adopted by the National Strategy for Trusted Identities in Cyberspace (NSTIC). But over the years, adoption of SP 800-63 in business has been disappointing, and now NIST has announced a review.

One of my problems with LOAs is simply stated: I don't believe it's possible to pigeon-hole risk.

With risk management, the devil is in the detail. Risk Management standards like ISO 31000 require organisations to start by analysing the threats that are peculiar to their environment. It's folly to take short cuts here, and it's also well recognised that you cannot "outsource" liability.

To my mind, the LOA philosophy goes against risk management fundamentals. Coming up with an LOA rating is an intermediate step that takes an RP's risk analysis and squeezes it into a bin (losing lots of information as a result); the bin is then used to shortlist candidate IdPs, before detailed due diligence puts all those risk details back on the table.
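
A contrived illustration of the information loss (the threat categories, weightings and thresholds below are invented for the example; they come from no standard):

```python
# Toy illustration of why binning a risk analysis loses information.
# All numbers are invented for the example.

def loa(risk_profile):
    """Collapse a multi-dimensional risk analysis into one level."""
    total = sum(risk_profile.values())
    if total < 3: return 1
    if total < 6: return 2
    if total < 9: return 3
    return 4

retailer      = {"payment_fraud": 5, "account_takeover": 0, "insider": 0}
health_portal = {"payment_fraud": 0, "account_takeover": 3, "insider": 2}

# Two utterly different threat profiles collapse to the same level,
# so "an LOA 2 IdP" tells a Relying Party very little.
assert loa(retailer) == loa(health_portal) == 2
```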

I think we all know by now of cases where RPs have looked at candidate IdPs at a given LOA, been less than satisfied with the available offerings, and have felt the need for an intermediate level, something like "LOA two and a half" (this problem was mentioned at CIS 2014 more than once, and I have seen it first hand in the UK IDAP).

Clearly what's going on here is that an RP's idea of "LOA 2" differs from a given IdP's idea of the same LOA 2. This is because everyone's risk appetite and threat profile is different. Moreover, the detailed prescription of "LOA 2" must differ from one identity provider to the next. When an RP thinks they need "LOA 2.5", what they're really asking for is a customised identification. If an off-the-shelf "LOA 2" isn't what it seems, then there can't be any hope for an agreed intermediate LOA 2.5. Even if an IdP and an RP agree in one instance, soon enough we will get a fresh call for "LOA 2.75 please".

We cannot pigeonhole risk. Attaching chunky, one-dimensional Levels of Assurance is misleading. There is no getting away from the need for detailed analysis of the threats, and therefore of the authentication required.

Posted in Security, Identity, Federated Identity

Making cyber safe like cars

This is an updated version of arguments made in Lockstep's submission to the 2009 Cyber Crime Inquiry by the Australian federal government.

In stark contrast to other fields, cyber safety policy is almost exclusively preoccupied with user education. It's really an obsession. Governments and industry groups churn out volumes of well-meaning and technically reasonable security advice, but for the average user, this material is overwhelming. There is a subtle implication that security is for experts, and that the Internet isn't safe unless you go to extremes. Moreover, even if consumers do their very best online, their personal details can still be stolen in massive criminal raids on databases that hardly anyone even knows exist.

Too much onus is put on regular users protecting themselves online, and this blinds us to potential answers to cybercrime. In other walks of life, we accept a balanced approach to safety, and governments are less reluctant to impose standards than they are on the Internet. Road safety for instance rests evenly on enforceable road rules, car technology innovation, certified automotive products, mandatory quality standards, traffic management systems, and driver training and licensing. Education alone would be nearly worthless.

Around cybercrime, we have a bizarre allergy to technology. We often hear that 'preventing data breaches is not a technology issue', which may be politically correct, but it's faintly ridiculous. Nobody would ever say that preventing car crashes is 'not a technology issue'.

Credit card fraud and ID theft in general are in dire need of concerted technological responses. Consider that our Card Not Present (CNP) payments processing arrangements were developed many years ago for mail orders and telephone orders. It was perfectly natural to co-opt the same processes when the Internet arose, since it seemed to be just another communications medium. But the Internet turned out to be more than an extra channel: it connects everyone to everything, around the clock.

The Internet has given criminals x-ray vision into peoples' banking details, and perfect digital disguises with which to defraud online merchants. There are opportunities for crime now that are both quantitatively and qualitatively radically different from what went before. In particular, because identity data is available by the terabyte and digital systems cannot tell copies from originals, identity takeover is child's play.

You don't even need to have ever shopped online to run foul of CNP fraud. Most stolen credit card numbers are obtained en masse by criminals breaking into obscure backend databases. These attacks go on behind the scenes, out of sight of even the most careful online customers.

So the standard cyber security advice misses the point. Consumers are told earnestly to look out for the "HTTPS" padlock that purportedly marks a site as secure, to have a firewall, to keep their PCs "patched" and their anti-virus up to date, to only shop online at reputable merchants, and to avoid suspicious looking sites (as if cyber criminals aren't sufficiently organised to replicate legitimate sites in their entirety). But none of this advice touches on the problem of coordinated massive heists of identity data.

Merchants are on the hook for unwieldy and increasingly futile security overheads. When a business wishes to accept credit card payments, it's straightforward in the real world to install a piece of bank-approved terminal equipment. But to process credit cards online, shopkeepers have to sign up to onerous PCI-DSS requirements that in effect require even small business owners to become IT security specialists. But to what end? No audit regime will ever stop organised crime. To stem identity theft, we need to make stolen IDs less valuable.

All this points to urgent public policy matters for governments and banks. It is not enough to put the onus on individuals to guard against ad hoc attacks on their credit cards. Systemic changes and technological innovation are needed to render stolen personal data useless to thieves. It's not that the whole payments processing system is broken; rather, it is vulnerable at just one point where stolen digital identities can be abused.

Digital identities are the keys to our personal kingdoms. As such they really need to be treated as seriously as car keys, which have become very high tech indeed. Modern car keys cannot be duplicated at a suburban locksmith. It's possible you've come across office and filing cabinet keys that carry government security certifications. And we never use the same keys for our homes and offices; we wouldn't even consider it (which points to the basic weirdness in Single Sign On and identity federation).

In stark contrast to car keys, almost no attention is paid to the pedigree of digital identities. Technology neutrality has bred a bewildering array of ad hoc authentication methods, including SMS messages, one time password generators, password calculators, grid cards and picture passwords; at the same time we've done nothing at all to inhibit the re-use of stolen IDs.
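
For concreteness, a typical one time password generator is little more than a clock and a keyed hash; here is a minimal sketch along the lines of RFC 6238 (TOTP), with an illustrative demo secret. Note that the shared secret also sits in a server database, where it can be stolen en masse, just like passwords:

```python
# Minimal TOTP (RFC 6238) sketch: the kind of one time password
# generator mentioned above. The secret here is a demo value.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # time-based counter
    msg = struct.pack(">Q", counter)              # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo shared secret
```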

It's high time government and industry got working together on a uniform and universal set of smart identity tools to properly protect consumers online.

Stay tuned for more of my thoughts on identity safety, inspired by recent news that health identifiers may be back on the table in the gigantic U.S. e-health system. The security and privacy issues are large but the cyber safety technology is at hand!

Posted in Fraud, Identity, Internet, Payments, Privacy, Security