Bank robber Willie Sutton, when asked why he robbed banks, answered "That's where the money is". It's the same with breaches. Large databases are the targets of people who want data. It's that simple.
Having said that, there are different sorts of breaches and corresponding causes. Most high profile breaches are obviously driven by financial crime, where attackers typically grab payment card details. Breaches are what power most card fraud. Organised crime gangs don't pilfer card numbers one at a time from people's computers or insecure websites, so the standard advice to consumers (change your passwords every month, look for the browser padlock) is nice, but don't expect it to do anything to stop mass card fraud.
Instead of blaming end user failings, we need to really turn up the heat on enterprise IT. The personal data held by big merchant organisations (including even mundane operations like car parking chains) is now worth many hundreds of millions of dollars. If this kind of value was in the form of cash or gold, you'd see Fort Knox-style security around it. Literally. But how much money does even the biggest enterprise invest in security? And what do they get for their money?
The grim reality is that no amount of conventional IT security today can prevent attacks on assets worth billions of dollars. The simple economics is against us. It's really more a matter of luck than good planning that some large organisations have yet to be breached (and that's only so far as we know).
Organised crime is truly organised. If it's card details they want, they go after the big data stores, at payments processors and large retailers. The sophistication of these attacks is amazing even to security pros. The attack on Target's Point of Sale terminals for instance was in the "can't happen" category.
The other types of criminal breach include mischief, as when the iCloud photos of celebrities were leaked last year, hacktivism, and political or cyber terrorist attacks, like the one on Sony.
There's some evidence that identity thieves are turning now to health data to power more complex forms of crime. Instead of stealing and replaying card numbers, identity thieves can use deeper, broader information like patient records to either commit fraud against health system payers, or to open bogus accounts and build them up into complex scams. The recent Anthem database breach involved extensive personal records on 80 million individuals; we have yet to see how these details will surface in the identity black markets.
The ready availability of stolen personal data is one factor we find to be driving Identity and Access Management (IDAM) innovation; see "The State of Identity Management in 2015". Next generation IDAM will eventually make stolen data less valuable, but for the foreseeable future, all enterprises holding large customer datasets will remain prime targets for identity thieves.
Now let's not forget simple accidents. The Australian government, for example, has had some clangers, though these can happen to any big organisation. A few months ago a staffer accidentally attached the wrong file to an email, and thus released the passport details of the G20 leaders. Before that, we saw a spreadsheet holding personal details of thousands of asylum seekers mistakenly pasted into the HTML of a government website.
A lesson I want to bring out here is the terrible complexity and fragility of our IT systems. It doesn't take much for human error to have catastrophic results. Who among us has not accidentally hit 'Reply All' or attached the wrong file to an email? If you did an honest Threat & Risk Assessment on these sorts of everyday office systems, you'd have to conclude they are not safe to handle sensitive data nor to be operated by most human beings. But of course we simply can't afford not to use office IT. We've created a monster.
Again, criminal elements know this. The expert cryptographer Bruce Schneier once said "amateurs hack systems, professionals hack people". Access control on today's sprawling complex computer systems is generally poor, leaving the way open for inside jobs. Just look at the Chelsea Manning case, one of the worst breaches of all time, made possible by granting too high access privileges to too many staffers.
Outside government, access control is worse, and so is access logging - so system administrators often can't tell there's even been a breach until circumstantial evidence emerges. I am sure the majority of breaches are occurring without anyone knowing. It's simply inevitable.
Look at hotels. There are occasional reports of hotel IT breaches, but they are surely happening continuously. The guest details held by hotels are staggering - payment card details, license plates, travel itineraries including airline flight details, even passport numbers at some establishments. And these days, with global hotel chains, the whole booking database is available to a rogue employee anywhere in the world, 24-7.
Please, don't anyone talk to me about PCI-DSS! The Payment Card Industry Data Security Standards for protecting cardholder details haven't had much effect at all. Some of the biggest breaches of all time have affected top tier merchants and payments processors which appear to have been PCI compliant. Yet the lawyers for the payments institutions will always argue that such-and-such a company wasn't "really" compliant. And the PCI auditors walk away from any liability for what happens in between audits. You can understand their position; they don't want to be accountable for wrongdoing or errors committed behind their backs. However, cardholders and merchants are caught in the middle. If a big department store passes its PCI audits, surely we can expect it to be reasonably secure year-round? No, it turns out that the day after a successful audit, an IT intern can mis-configure a firewall or forget a patch; all those defences become useless, and the audit is rendered meaningless.
Which reinforces my point about the fragility of IT: it's impossible to make lasting security promises anymore.
In any case, PCI is really just a set of data handling policies and promises. They improve IT security hygiene, and ward off amateur attacks. But they are useless against organised crime or inside jobs.
There is an increasingly good argument to outsource data management. Rather than maintain brittle databases in the face of so much risk, companies are instead turning to large reputable cloud services, where the providers have the scale, resources and attention to detail to protect data in their custody. I previously looked at what matters in choosing cloud services from a geographical perspective in my Constellation Research report "Why Cloud Geography Matters in a Post-Snowden/NSA Era". And in forthcoming research I'll examine a broader set of contract-related KPIs to help buyers make the right choice of cloud service provider.
If you asked me what to do about data breaches, I'd say the short-to-medium term solution is to get with the strength and look for managed security services from specialist providers. In the longer term, we will have to see grassroots re-engineering of our networks and platforms, to harden them against penetration, and to lessen the opportunity for identity theft.
In the meantime, you can hope for the best, if you plan for the worst.
Actually, no, you can't hope.
The Australian government is to revamp the troubled Personally Controlled Electronic Health Record (PCEHR). In line with the Royle Review from Dec 2013, it is reported that patient participation is to change from the current Opt-In model to Opt-Out; see "Govt to make e-health records opt-out" by Paris Cowan, IT News.
That is to say, patient data from hospitals, general practice, pathology and pharmacy will be added by default to a central longitudinal health record, unless patients take steps (yet to be specified) to disable sharing.
The main reason for switching the consent model is simply to increase the take-up rate. But it's a much bigger change than many seem to realise.
The government is asking the community to trust it to hold essentially all medical records. Are the PCEHR's security and privacy safeguards up to scratch to take on this grave responsibility? I argue the answer is no, on two grounds.
Firstly there is the practical matter of PCEHR's security performance to date. It's not good, based on publicly available information. On multiple occasions, prescription details have been uploaded from community pharmacy to the wrong patient's records. There have been a few excuses made for this error, with blame sheeted home to the pharmacy. But from a systems perspective -- and health care is all about systems -- you cannot pass the buck like that. Pharmacists are using a PCEHR system that was purportedly designed for them. And it was subject to system-wide threat & risk assessments that informed the architecture and design of not just the electronic records system but also the patient and healthcare provider identification modules. How can it be that the PCEHR allows such basic errors to occur?
Secondly and really fundamentally, you simply cannot invert the consent model as if it's a switch in the software. The privacy approach is deep in the DNA of the system. Not only must PCEHR security be demonstrably better than experience suggests, but it must be properly built in, not retrofitted.
Let me explain how the consent approach crops up deep in the architecture of something like PCEHR. During analysis and design, threat & risk assessments (TRAs) and privacy impact assessments (PIAs) are undertaken, to identify things that can go wrong, and to specify security and privacy controls. These controls generally comprise a mix of technology, policy and process mechanisms. For example, if there is a risk of patient data being sent to the wrong person or system, that risk can be mitigated a number of ways, including authentication, user interface design, encryption, contracts (that obligate receivers to act responsibly), and provider and patient information. The latter are important because, as we all should know, there is no such thing as perfect security. Mistakes are bound to happen.
One of the most fundamental privacy controls is participation. Individuals usually have the ultimate option of staying away from an information system if they (or their advocates) are not satisfied with the security and privacy arrangements. Now, these are complex matters to evaluate, and it's always best to assume that patients do not in fact have a complete understanding of the intricacies, the pros and cons, and the net risks. People need time and resources to come to grips with e-health records, so a default opt-in affords them that breathing space. And it errs on the side of caution, by requiring a conscious decision to participate. In stark contrast, a default opt-out policy embodies a position that the scheme operator believes it knows best, and is prepared to make the decision to participate on behalf of all individuals.
Such a position strikes many as beyond the pale, just on principle. But if opt-out is the adopted policy position, then clearly it has to be based on a risk assessment where the pros indisputably outweigh the cons. And this is where making a late switch to opt-out is unconscionable.
You see, in an opt-in system, during analysis and design, whenever a risk is identified that cannot be managed down to negligible levels by way of technology and process, the ultimate safety net is that people don't need to use the PCEHR. It is a formal risk management ploy (a part of the risk manager's toolkit) to sometimes fall back on the opt-in policy. In an opt-in system, patients sign an agreement in which they accept some risk. And the whole security design is predicated on that.
Look at the most recent PIA done on the PCEHR in 2011; section 9.1.6 "Proposed solutions - legislation" makes it clear that opt-in participation is core to the existing architecture. The PIA makes a "critical legislative recommendation" including:
- a number of measures to confirm and support the 'opt in' nature of the PCEHR for consumers (Recommendations 4.1 to 4.3) [and] preventing any extension of the scope of the system, or any change to the 'opt in' nature of the PCEHR.
The PIA at section 2.2 also stresses that a "key design feature of the PCEHR System ... is opt in – if a consumer or healthcare provider wants to participate, they need to register with the system." And that the PCEHR is "not compulsory – both consumers and healthcare providers choose whether or not to participate".
A PDF copy of the PIA report, which was publicly available at the Dept of Health website for a few years after 2011, is archived here.
The fact is that if the government changes the PCEHR from opt-in to opt-out, it will invalidate the security and privacy assessments done to date. The PIAs and TRAs will have to be repeated, and the project must be prepared for major redesign.
The Royle Review report (PDF) did in fact recommend "a technical assessment and change management plan for an opt-out model ..." (Recommendation 14) but I am not aware that such a review has taken place.
To look at the seriousness of this another way, think about "Privacy by Design", the philosophy that's being steadily adopted across government. In 2014 NEHTA wrote in a submission (PDF) to the Australian Privacy Commissioner:
- The principle that entities should employ “privacy by design” by building privacy into their processes, systems, products and initiatives at the design stage is strongly supported by NEHTA. The early consideration of privacy in any endeavour ensures that the end product is not only compliant but meets the expectations of stakeholders.
One of the tenets of Privacy by Design is that you cannot bolt on privacy after a design is done. Privacy must be designed into the fabric of any system from the outset. All the way along, PCEHR has assumed opt-in, and the last PIA enshrined that position.
If the government was to ignore its own Privacy by Design credo, and not revisit the PCEHR architecture, it would be an amazing breach of the public's trust in the healthcare system.
Every now and then, a large organisation in the media spotlight will experience the special pain of having a password accidentally revealed in the background of a photograph or TV spot. Security commentator Graham Cluley has recorded a lot of these misadventures, most recently at a British national rail control room, and before that, in the Super Bowl nerve centre and an emergency response agency.
Security folks love their schadenfreude but what are we to make of these SNAFUs? Of course, nobody is perfect. And some plumbers have leaky taps.
But these cases hold much deeper lessons. The organisations involved are often critical infrastructure providers (consider that on financial grounds, there may be more at stake in Super Bowl operations than in the railways). Outfits making kindergarten security mistakes will have been audited many times over. So how on earth do they pass?
Posting passwords on the wall is not a random error - it's systemic. Some administrators do it out of habit, or desperation. They know it's wrong, but they do it anyway, and they do it with such regularity it gets caught on TV.
Did none of the security auditors at any of these organisations ever notice the passwords in plain view? Or do the personnel do a quick clean-up on the morning of each audit, only to revert to reality in between? Either way, here's yet more proof that security audit, frankly, is a sick joke, and that security policies aren't worth the paper they're printed on.
Security orthodoxy holds that people and process are more fundamental than technology, and that people are the weakest link. That's why we have security management processes and security audits. It's why whole industries have been built around security process standards like ISO 27000. So it's unfathomable to me that companies with passwords caught on camera can have ever passed their audits.
Security isn't what people think it is. Instead of meticulous procedures and hawk-eyed inspections, too often it's just simple people going through the motions. Security isn't intellectually secure. The things we do in the name of "security" don't make us secure.
Let's not dismiss password flashing as a temporary embarrassment for some poor unfortunates. This should be humiliating for the whole information security industry. We need another way.
Picture credits: Graham Cluley.
The State Of Identity Management in 2015
Constellation Research recently launched the "State of Enterprise Technology" series of research reports. These assess the current enterprise innovations which Constellation considers most crucial to digital transformation, and provide snapshots of the future usage and evolution of these technologies.
My second contribution to the state-of-the-state series is "Identity Management Moves from Who to What". Here's an excerpt from the report:
In spite of all the fuss, personal identity is not usually important in routine business. Most transactions are authorized according to someone’s credentials, membership, role or other properties, rather than their personal details. Organizations actually deal with many people in a largely impersonal way. People don’t often care who someone really is before conducting business with them. So in digital Identity Management (IdM), one should care less about who a party is than what they are, with respect to attributes that matter in the context we’re in. This shift in focus is coming to dominate the identity landscape, for it simplifies a traditionally multi-disciplined problem set. Historically, the identity management community has made too much of identity!
Six Digital Identity Trends for 2015
1. Mobile becomes the center of gravity for identity. The mobile device brings convergence for a decade of progress in IdM. For two-factor authentication, the cell phone is its own second factor, protected against unauthorized use by PIN or biometric. Hardly anyone ever goes anywhere without their mobile - service providers can increasingly count on that without disenfranchising many customers. Best of all, the mobile device itself joins authentication to the app, intimately and seamlessly, in the transaction context of the moment. And today’s phones have powerful embedded cryptographic processors and key stores for accurate mutual authentication, and mobile digital wallets, as Apple’s Tim Cook highlighted at the recent White House Cyber Security Summit.
2. Hardware is the key – and holds the keys – to identity. Despite the lure of the cloud, hardware has re-emerged as pivotal in IdM. All really serious security and authentication takes place in secure dedicated hardware, such as SIM cards, ATMs, EMV cards, and the new Trusted Execution Environment mobile devices. Today’s leading authentication initiatives, like the FIDO Alliance, are intimately connected to standard cryptographic modules now embedded in most mobile devices. Hardware-based identity management has arrived just in the nick of time, on the eve of the Internet of Things.
3. The “Attributes Push” will shift how we think about identity. In the words of Andrew Nash, CEO of Confyrm Inc. (and previously the identity leader at PayPal and Google), “Attributes are at least as interesting as identities, if not more so.” Attributes are to identity as genes are to organisms – they are really what matters about you when you’re trying to access a service. By fractionating identity into attributes and focusing on what we really need to reveal about users, we can enhance privacy while automating more and more of our everyday transactions.
The Attributes Push may recast social logon. Until now, Facebook and Google have been widely tipped to become “Identity Providers”, but even these giants have found federated identity easier said than done. A dark horse in the identity stakes – LinkedIn – may take the lead with its superior holdings in verified business attributes.
4. The identity agenda is narrowing. For 20 years, brands and organizations have obsessed about who someone is online. And even before we’ve solved the basics, we over-reached. We've seen entrepreneurs trying to monetize identity, and identity engineers trying to convince conservative institutions like banks that “Identity Provider” is a compelling new role in the digital ecosystem. Now at last, the IdM industry agenda is narrowing toward more achievable and more important goals - precise authentication instead of general identification.
5. A digital identity stack is emerging. The FIDO Alliance and others face a challenge in shifting and improving the words people use in this space. Words, of course, matter, as do visualizations. IdM has suffered for too long under loose and misleading metaphors. One of the most powerful abstractions in IT was the OSI networking stack. A comparable sort of stack may be emerging in IdM.
6. Continuity will shape the identity experience. Continuity will make or break the user experience as the lines blur between real world and virtual, and between the Internet of Computers and the Internet of Things. But at the same time, we need to preserve clear boundaries between our digital personae, or else privacy catastrophes await. “Continuous” (also referred to as “Ambient”) Authentication is a hot new research area, striving to provide more useful and flexible signals about the instantaneous state of a user at any time. There is an explosion in devices now that can be tapped for Continuous Authentication signals, and by the same token, rich new apps in health, lifestyle and social domains, running on those very devices, that need seamless identity management.
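The shift in point 3 from "who" to "what" can be sketched as an attribute-based access check, where the service evaluates verified claims and never needs a personal identity. The attribute names and policy below are purely illustrative assumptions, not any particular vendor's schema:

```python
# A minimal attribute-based authorization check: the service
# evaluates verified attributes, never a personal identity.
# Attribute names and the policy are illustrative only.

def authorize(verified_attributes: dict, policy: dict) -> bool:
    """Grant access iff every attribute the policy demands is
    present and satisfied; who the user 'really is' never enters."""
    return all(
        verified_attributes.get(name) == required
        for name, required in policy.items()
    )

# A checkout needs proof of age and a valid card, nothing more.
policy = {"over_18": True, "card_valid": True}

print(authorize({"over_18": True, "card_valid": True, "name": "ignored"}, policy))  # True
print(authorize({"over_18": False, "card_valid": True}, policy))                    # False
```

Note that extra attributes (like a name) are simply ignored: revealing less than a full identity is what makes this pattern privacy-enhancing.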
A snapshot of my report "Identity Moves from Who to What" is available for download at Constellation Research. It expands on the points above, and sets out recommendations for enterprises to adopt the latest identity management thinking.
I have just updated my periodic series of research reports on the FIDO Alliance. The fourth report, "FIDO Alliance Update: On Track to a Standard", will be available at Constellation Research shortly.
The Identity Management industry leader publishes its protocol specifications at v1.0, launches a certification program, and attracts support in Microsoft Windows 10.
The FIDO Alliance is the fastest-growing Identity Management (IdM) consortium we have seen. Comprising technology vendors, solutions providers, consumer device companies, and e-commerce services, the FIDO Alliance is working on protocols and standards to strongly authenticate users and personal devices online. With a fresh focus and discipline in this traditionally complicated field, FIDO envisages simply “doing for authentication what Ethernet did for networking”.
Launched in early 2013, the FIDO Alliance has now grown to over 180 members. Included are technology heavyweights like Google, Lenovo and Microsoft; almost every SIM and smartcard supplier; payments giants Discover, MasterCard, PayPal and Visa; several banks; and e-commerce players like Alibaba and Netflix.
FIDO is radically different from any IdM consortium to date. We all know how important it is to fix passwords: They’re hard to use, inherently insecure, and lie at the heart of most breaches. The Federated Identity movement seeks to reduce the number of passwords by sharing credentials, but this invariably confounds the relationships we have with services and complicates liability when more parties rely on fewer identities.
In contrast, FIDO’s mission is refreshingly clear: Take the smartphones and devices most of us are intimately connected to, and use the built-in cryptography to authenticate users to services. A registered FIDO-compliant device, when activated by its user, can send verified details about the device and the user to service providers, via standardized protocols. FIDO leverages the ubiquity of sophisticated handsets and the tidal wave of smart things. The Alliance focuses on device level protocols without venturing to change the way user accounts are managed or shared.
The centerpieces of FIDO’s technical work are two protocols, called UAF and U2F, for exchanging verified authentication signals between devices and services. Several commercial applications have already been released under the UAF and U2F specifications, including fingerprint-based payments apps from Alibaba and PayPal, and Google’s Security Key from Yubico. After a rigorous review process, both protocols are published now at version 1.0, and the FIDO Certified Testing program was launched in April 2015. And Microsoft announced that FIDO support would be built into Windows 10.
With its focus, pragmatism and membership breadth, FIDO is today’s go-to authentication standards effort. In this report, I look at what the FIDO Alliance has to offer vendors and end user communities, and its critical success factors.
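FIDO's challenge-response pattern can be sketched with a textbook signature scheme. The toy RSA parameters below are for illustration only; real FIDO authenticators use proper key lengths (typically ECDSA over P-256) inside dedicated hardware, and the protocol details differ:

```python
# Toy challenge-response authentication in the FIDO style: the server
# sends a fresh challenge, the device signs it with a private key that
# never leaves the device, and the server verifies the signature with
# the public key registered at enrolment.
# Textbook-tiny RSA key pair (p=61, q=53): public (n, e), private d.
import hashlib
import secrets

n, e, d = 3233, 17, 2753

def sign(challenge: bytes) -> int:
    """Happens inside the device; d never leaves it."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Happens at the server, using only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)   # fresh per login, so replay fails
assert verify(challenge, sign(challenge))
```

Because the challenge is fresh every time, a captured signature is useless to an eavesdropper, unlike a captured password.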
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. Lockstep monitors these figures and plots the trend data. We got a bit too busy in 2014 and missed the last couple of APCA releases, so this blog is a catch-up, summarising and analysing stats from calendar year 2013 and AU financial year 2014 (July 2013 to June 2014).
In the 12 months to June 2014,
- Total card fraud rose by 22% to A$321 million
- Card Not Present (CNP) fraud rose 27% to A$256 million
- CNP fraud now represents 80% of all card fraud.
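A quick back-of-envelope check of these figures (a sketch using only the numbers quoted above; the implied prior-year totals follow from the stated growth rates):

```python
# Sanity-checking the APCA figures for the 12 months to June 2014.
total_fraud = 321   # A$ million, up 22% year on year
cnp_fraud = 256     # A$ million, up 27% year on year

cnp_share = cnp_fraud / total_fraud
print(f"CNP share of all card fraud: {cnp_share:.0%}")   # ~80%

# Implied totals for the prior 12 months, from the growth rates
prev_total = total_fraud / 1.22
prev_cnp = cnp_fraud / 1.27
print(f"Implied prior-year totals: A${prev_total:.0f}M total, A${prev_cnp:.0f}M CNP")
```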
APCA is one of the major payments systems regulators in Australia. It has only ever had two consistent things to say about Card Not Present fraud. First, it reassures the public that CNP fraud is only rising because online shopping is rising, implying that it's really not a big deal. Second, APCA produces advice for shoppers and merchants to help them stay safe online.
I suppose that in the 1950s and 60s, when the road toll started rising dramatically and car makers were called on to improve safety, the auto industry might have played down that situation like APCA does with CNP fraud. "Of course the road toll is high" they might have said; "it's because so many people love driving!". Fraud is not a necessary part of online shopping; at some point payments regulators will have to tell us, as a matter of policy, what level of fraud they think is actually reasonable, and start to press the industry to take action. In absolute terms, CNP fraud has ballooned by a factor of 10 in the past eight years. The way it's going, annual online fraud might overtake the cost of car theft (currently $680 million) before 2020.
As for APCA's advice for shoppers to stay safe online, most of it is nearly useless. In their Christmas 2014 media release (PDF), APCA suggested:
Consumers can take simple steps to help stay safe when shopping online including:
- Only providing their card details on secure websites – looking for the locked padlock.
- Always keeping their PC security software up-to-date and doing a full scan often.
The truth is very few payment card details are stolen from websites or people's computers. Organised crime targets the databases of payment processors and big merchants, where they steal the details of tens of millions of cardholders at once. Four of the biggest ever known credit card breaches occurred in the last 18 months (Ref: DataLossDB):
- 109,000,000 credit cards - Home Depot, September 2014
- 110,000,000 credit cards - Target, December 2013
- 145,000,000 credit cards - eBay, May 2014
- 152,000,000 credit cards - Adobe, Oct 2013.
In its latest Data Breach Investigations Report, Verizon states that "2013 may be remembered as ... a year of transition to large-scale attacks on payment card systems".
Verizon has plotted the trends in data breaches at different sources; it's very clear that servers (where the data is held) have always been the main target of cybercriminals, and are getting proportionally more attention year on year. Diagram at right from the Verizon Data Breach Investigations Report 2014.
So APCA's advice to look for website padlocks and keep anti-virus up-to-date - as important as that may be - won't do much at all to curb payment card theft or fraud. You might never have shopped online in your life, and still have your card details stolen, behind your back, at a department store breach.
Over the course of a dozen or more card fraud reports, APCA has had an on-again-off-again opinion of the card schemes' flagship CNP security measure, 3D Secure. In FY2011 (after CNP fraud went up 46%), APCA said "retailers should be looking at a 3D Secure solution for their online checkout". Then in their FY2012 media release, as losses kept increasing, they made no mention of 3D Secure at all.
Calendar year 2012 saw Australian CNP fraud fall for the first time ever, and APCA was back on the 3D Secure bandwagon, reporting that "The drop in CNP fraud can largely be attributed to an increase in the use of authentication tools such as MasterCard SecureCode and Verified by Visa, as well as dedicated fraud prevention tools."
Sadly, it seems 2012 was a blip. Online fraud for FY2014 (PDF) has returned to the long term trend. It's impossible to say what impact 3D Secure has really had in Australia, but penetration and consumer awareness of this technology remains low. It was surprising that APCA previously rushed to attribute a short-term drop in fraud to 3D Secure; that now seems overly optimistic, with CNP frauds continuing to mount after all.
In my view, it beggars belief that the payments industry has yet to treat CNP fraud as seriously as it did skimming and carding. Technologically, CNP fraud is not a hard problem. It's just the digital equivalent of analogue skimming and carding, and it could be stopped just as effectively by using chips to protect cardholder data, just as they do in Card Present payments, whether by EMV card or NFC mobile devices.
In 2012, I published a short paper on this: Calling for a Uniform Approach to Card Fraud Offline and On (PDF).
The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry is still yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.
This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.
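The dynamic-cryptogram idea behind chip payments can be sketched as follows. The HMAC construction here is a deliberate simplification, not the actual EMV ARQC algorithm (which derives 3DES/AES session keys), but it shows why a chip-held key defeats replay:

```python
# Sketch of why a chip defeats replay: each payment carries a
# one-time cryptogram computed from a key that never leaves the chip.
# Simplified illustration only - not the real EMV ARQC scheme.
import hashlib
import hmac

card_key = b"secret-key-inside-the-chip"   # never exposed to the merchant

def cryptogram(amount_cents: int, counter: int) -> str:
    """MAC over the transaction data, keyed by the chip's secret."""
    msg = f"{amount_cents}:{counter}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

# Two transactions, even for the same amount, yield different
# cryptograms, so a stolen transaction record cannot be replayed.
c1 = cryptogram(4999, counter=41)
c2 = cryptogram(4999, counter=42)
assert c1 != c2
```

Steal the static card number from a database and you have something replayable; steal one cryptogram and you have nothing, because the issuer will never accept that counter value again.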
An unpublished letter to the editor of The New Yorker, February 2015.
Alec Wilkinson says in his absorbing profile of the quiet genius Yitang Zhang ("The pursuit of beauty", February 2) that pure mathematics is done "with no practical purposes in mind". I do hope mathematicians will forever be guided by aesthetics more than economics, but nevertheless, pure maths has become a cornerstone of the Information Age, just as physics was of the Industrial Revolution. For centuries, prime numbers might have been intellectual curios but in the 1970s they were beaten into modern cryptography. The security codes that scaffold almost all e-commerce are built from primes. Any advance in understanding these abstract materials impacts the Internet itself, for better or for worse. So when Zhang demurs that his result is "useless for industry", he's misspeaking.
The online version of the article is subtitled "Solving an Unsolvable Problem". The apparent oxymoron belies a wondrous pattern we see in mathematical discovery. Conundrums widely accepted to be impossible are in fact solved quite often, and then frenetic periods of innovation usually follow. The surprise breakthrough is typically inefficient (or, far worse in a mathematician's mind, ugly) but it can inspire fresh thinking and lead to polished methods. We are in one of these intense creative periods right now. Until 2008, it was widely thought that true electronic cash was impossible, but then the mystery figure Satoshi Nakamoto created Bitcoin. While it overturned the conventional wisdom, Bitcoin is slow and anarchic, and problematic as mainstream money. But it has triggered a remarkable explosion of digital currency innovation.
A published letter
As Alec Wilkinson points out in his Profile of the math genius Yitang Zhang, results in pure mathematics can be sources of wonder and delight, regardless of their applications. Yet applications do crop up. Nineteenth-century mathematicians showed that there are geometries as logical and complete as Euclidean geometry, but which are utterly distinct from it. This seemed of no practical use at the time, but Albert Einstein used non-Euclidean geometry to make the most successful model that we have of the behavior of the universe on large scales of distance and time. Abstract results in number theory, Zhang’s field, underlie cryptography used to protect communication on devices that many of us use every day. Abstract mathematics, beautiful in itself, continually results in helpful applications, and that’s pretty wonderful and delightful, too.
Sandy Spring, Md.
My favorite example of mathematical innovation concerns public key cryptography (and I ignore here the credible reports that PKC was invented by the Brits decades before but kept secret). For centuries, there was essentially one family of cryptographic algorithms, in which a secret key shared by sender and recipient is used to both encrypt and decrypt the protected communication. Key distribution is the central problem in so-called "Symmetric" Cryptography: how does the sender get the secret key to the recipient some time before sending the message? The dream was for the two parties to be able to establish a secret key without ever having to meet or using any secret channel. It was thought to be an unsolvable problem ... until it was solved by Ralph Merkle in 1974. His solution, dubbed "Merkle's Puzzles", was almost hypothetical; the details don't matter here, but it would have been awkward to implement, to put it mildly, involving millions of small messages. Yet the impact on cryptography was near instantaneous. The fact that, in theory, two parties really could establish a shared secret via public messages triggered a burst of development of practical public key cryptography, first of the Diffie-Hellman algorithm, and then RSA by Ron Rivest, Adi Shamir and Leonard Adleman. We probably wouldn't have e-commerce if it wasn't for Merkle's crazy curious maths.
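The trick that Merkle showed was possible, and that Diffie-Hellman made practical, can be sketched in a few lines. The numbers below are toy values purely for illustration; real implementations use primes hundreds of digits long:

```python
# Toy Diffie-Hellman key agreement (insecurely small numbers, for illustration).
# Alice and Bob exchange only public values, yet arrive at the same secret.

p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private value, never transmitted
b = 15   # Bob's private value, never transmitted

A = pow(g, a, p)   # Alice sends A over the open channel
B = pow(g, b, p)   # Bob sends B over the open channel

alice_secret = pow(B, a, p)   # Alice combines Bob's public value with her secret
bob_secret = pow(A, b, p)     # Bob combines Alice's public value with his secret
assert alice_secret == bob_secret   # both arrive at the same shared key
```

An eavesdropper sees p, g, A and B, but recovering the shared secret from those alone requires solving the discrete logarithm problem, which is computationally infeasible at real key sizes.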
This is Part 2 of my coverage of the White House #CyberSecuritySummit; see Part 1 here.
On Feb 13th, at President Obama's request, a good number of the upper echelon of Internet experts gathered at Stanford University in Silicon Valley to work out what to do next about cybersecurity and consumer protection online. The Cyber Security Summit was organised around Obama's signing of a new Executive Order to create cyber threat information sharing hubs, and standards to foster sharing while protecting privacy, and it was meant to maintain the momentum of his cybersecurity and privacy legislative program.
The main session of the summit traversed very few technical security issues. The dominant theme was intelligence sharing: how can business and government share what they know in real time about vulnerabilities and emerging cyber attacks? Just a couple of speakers made good points about preventative measures. Intel President Renee James highlighted the importance of a "baseline of computing security"; MasterCard CEO Ajay Banga was eloquent on how innovation can thrive in a safety-oriented regulated environment like road infrastructure and road rules. So apart from these few deviations, the summit had a distinct military intelligence vibe, in keeping with the cyber warfare trope beloved by politicians.
On the one hand, it would be naive to expect such an event to make actual progress. And I don't mind a political showcase if it secures the commitment of influencers and builds awareness. But on the other hand, the root causes of our cybersecurity dilemma have been well known for years, and this esteemed gathering seemed oblivious to them.
Where's the serious talk of preventing cyber security problems? Where is the attention to making e-business platforms and digital economy infostructure more robust?
Personal Information today is like nitroglycerin - it has to be handled with the utmost care, lest it blow up in your face. So we have the elaborate and brittle measures of PCI-DSS or the HIPAA security rules, rendered useless by the slightest operational slip-up.
How about rendering personal information safer online, so it cannot be stolen, co-opted, modified and replayed? If stolen information couldn't be used by identity thieves with impunity, we would neutralise the bulk of today's cybercrime. This is how EMV Chip & PIN payment security works. Personal data and purchase details are combined in a secure chip and digitally signed under the customer's control, to prove to the merchant that the transaction was genuine. The signed transaction data cannot be easily hacked (thanks Jim Fenton for the comment; see below); stolen identity data is useless to a thief if they don't control the chip; a stolen chip is only good for single transactions (and only if the PIN is stolen as well) rather than the mass fraud perpetrated after raiding large databases.
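The principle can be sketched roughly as follows. EMV actually computes its cryptograms with DES/AES-based MACs inside the chip; the HMAC, key value and transaction fields below are illustrative stand-ins:

```python
import hmac, hashlib

# The card's secret key never leaves the chip; only the issuer holds a copy.
CARD_KEY = b"secret-key-inside-the-chip"   # illustrative value

def chip_cryptogram(pan, amount, nonce):
    """What the chip does: MAC the transaction details with its internal key."""
    data = f"{pan}|{amount}|{nonce}".encode()
    return hmac.new(CARD_KEY, data, hashlib.sha256).hexdigest()

# A genuine transaction: the terminal's fresh nonce makes each cryptogram unique.
mac1 = chip_cryptogram("4111111111111111", "25.00", nonce="n-0001")

# The same card details with a different nonce yield a different cryptogram,
# so replaying stolen transaction data fails, and without the chip's key
# a thief cannot compute a valid cryptogram for any new transaction.
mac2 = chip_cryptogram("4111111111111111", "25.00", nonce="n-0002")
assert mac1 != mac2
```

The point is that the card number alone becomes worthless to a thief: what the merchant and issuer check is a one-time cryptographic proof that only the chip can produce.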
It's obvious (isn't it?) that we need to do something radically different before the Internet of Things turns into a digital cesspool. The good news for privacy and security in ubiquitous computing is that most smart devices can come with Secure Elements and built-in digital signature capability, so that all the data they broadcast can be given pedigree. We should be able to tell for sure that every piece of information flowing in the IoT has come from a genuine device, with definite attributes, operating with the consent of its owner.
The technical building blocks for a properly secure IoT are at hand. Machine-to-Machine (M2M) identity modules (MIMs) and Trusted Execution Environments (TEEs) provide safe key storage and cryptographic functionality. The FIDO Alliance protocols leverage this embedded hardware and enable personal attributes to be exchanged reliably. Only a couple of years ago, Vint Cerf in an RSA Conference keynote speculated that ubiquitous public key cryptography would play a critical role in the Internet of Things, but he didn't know how exactly.
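The challenge-response pattern at the heart of these protocols can be sketched with textbook RSA. The tiny numbers here are insecure and purely illustrative; real deployments use proper key lengths, typically elliptic curves, with the private key locked inside the secure hardware:

```python
# Toy public-key challenge-response in the FIDO style (insecure textbook RSA).
# The device holds a private key in secure hardware; the server stores only
# the public key, so a server breach yields nothing a thief can replay.

p, q = 61, 53
n, e = p * q, 17                  # public key, registered with the server
d = pow(e, -1, (p - 1) * (q - 1)) # private key, kept inside the device

def device_sign(challenge):
    # Performed inside the secure element; d never leaves the hardware.
    return pow(challenge, d, n)

def server_verify(challenge, signature):
    # The server checks the response using only the public key.
    return pow(signature, e, n) == challenge

challenge = 1234                  # fresh random nonce from the server
assert server_verify(challenge, device_sign(challenge))
```

Because each authentication answers a fresh nonce, an intercepted response is useless for any later login, unlike a password, which is a static secret the server must store and the user must retype.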
In fact, we have known what to do with this technology for years.
At the close of the Cyber Security Summit, President Obama signed his Executive Order -- in ink. The irony of using a pen to sign a cybersecurity order seemed lost on all concerned. And it is truly tragic.
We probably wouldn't need a cybersecurity summit in 2015 if serious identity security had been built into the cyber infrastructure over a decade ago.
It would be naive to expect the White House Cybersecurity Summit to have been less political. President Obama and his colleagues were in their comfort zone, talking up America's recent economic turnaround, and framing their recent wins squarely within Silicon Valley where the summit took place. With a few exceptions, the first two hours were more about green energy, jobs and manufacturing than cyber security. It was a lot like a lost episode of The West Wing.
The exceptions were important. Some speakers really nailed some security issues. I especially liked the morning contributions from Intel President Renee James and MasterCard CEO Ajay Banga. James highlighted that Intel has worked for 10 years to improve "the baseline of computing security", making her one of the few speakers to get anywhere near the inherent insecurity of our cyber infrastructure. The truth is that cyberspace is built on weak foundations; the software development practices and operating systems that bear the economy today were not built for the job. For mine, the Summit was too much about military/intelligence themed information sharing, and not enough about why our systems are so precarious. I know it's a dry subject but if they're serious about security, policy makers really have to engage with software quality and reliability, instead of thrilling to kids learning to code. Software development practices are to blame for many of our problems; more on software failures here.
Ajay Banga was one of several speakers to urge the end of passwords. He summed up the authentication problem very nicely: "Stop making us remember things in order to prove who we are". He touched on MasterCard's exploration of continuous authentication bracelets and biometrics (more news of which coincidentally came out today). It's important however that policy makers' understanding of digital infrastructure resilience, cybercrime and cyber terrorism isn't skewed by everyone's favourite security topic - customer authentication. Yes, it's in need of repair, yet authentication is not to blame for the vast majority of breaches. Mom and Pop struggle with passwords and they deserve better, but the vast majority of stolen personal data is lifted by organised criminals en masse from poorly secured back-end databases. Replacing customer passwords or giving everyone biometrics is not going to solve the breach epidemic.
Banga also indicated that the Information Highway should be more like road infrastructure. He highlighted that national routes are regulated, drivers are licensed, there are rules of the road, standardised signs, and enforcement. All these infrastructure arrangements leave plenty of room for innovation in car design, but it's accepted that "all cars have four wheels".
Tim Cook was then the warm-up act before Obama. Many on Twitter unkindly branded Cook's speech as an ad for Apple, paid for by the White House, but I'll accentuate the positives. Cook continues to campaign against business models that monetize personal data. He repeated his promise made after the ApplePay launch that they will not exploit the data they have on their customers. He put privacy before security in everything he said.
Cook painted a vision where digital wallets hold your passport, driver license and other personal documents, under the user's sole control, and without trading security for convenience. I trust that he's got the mobile phone Secure Element in mind; until we can sort out cybersecurity at large, I can't support the counter trend towards cloud-based wallets. The world's strongest banks still can't guarantee to keep credit card numbers safe, so we're hardly ready to put our entire identities in the cloud.
In his speech, President Obama reiterated his recent legislative agenda for information sharing, uniform breach notification, student digital privacy, and a Consumer Privacy Bill of Rights. He stressed the need for private-public partnership and cybersecurity responsibility to be shared between government and business. He reiterated the new Cyber Threat Intelligence Integration Center. And as flagged just before the summit, the president signed an Executive Order that will establish cyber threat information sharing "hubs" and standards to foster sharing while protecting privacy.
Obama told the audience that cybersecurity "is not an ideological issue". Of course that message was actually for Congress which is deliberating over his cyber legislation. But let's take a moment to think about how ideology really does permeate this arena. Three quasi-religious disputes come to mind immediately:
- Free speech trumps privacy. The ideals of free speech have been interpreted in the US in such a way that makes broad-based privacy law intractable. The US is one of only two major nations now without a general data protection statute (the other is China). It seems this impasse is rarely questioned anymore by either side of the privacy debate, but perhaps the scope of the First Amendment has been allowed to creep out too far, for now free speech rights are in effect being granted even to computers. Look at the controversy over the "Right to be Forgotten" (RTBF), where Google is being asked to remove certain personal search results if they are irrelevant, old and inaccurate. Jimmy Wales claims this requirement harms "our most fundamental rights of expression and privacy". But we're not talking about speech here, or even historical records, but rather the output of a computer algorithm, and a secret algorithm at that, operated in the service of an advertising business. The vociferous attacks on RTBF are very ideological indeed.
- "Innovation" trumps privacy. It's become an unexamined mantra that digital businesses require unfettered access to information. I don't dispute that some of the world's richest ever men, and some of the world's most powerful ever corporations, have relied upon the raw data that exudes from the Internet. It's just like the riches uncovered by the black gold rush of the 1800s. But it's an ideological jump to extrapolate that all cyber innovation or digital entrepreneurship must continue the same way. Rampant data mining is laying waste to consumer confidence and trust in the Internet. Some reasonable degree of consumer rights regulation seems inevitable, and just, if we are to avert a digital Tragedy of the Commons.
- National Security trumps privacy. I am a rare privacy advocate who actually agrees that the privacy-security equilibrium needs to be adjusted. I believe the world has changed since some of our foundational values were codified, and civil liberties are just one desirable property of a very complicated social system. However, I call out one dimensional ideology when national security enthusiasts assert that privacy has to take a back seat. There are ways to explore a measured re-calibration of privacy, to maintain proportionality, respect and trust.
President Obama described the modern technological world as a "magnificent cathedral" and he made an appeal to "values embedded in the architecture of the system". We should look critically at whether the values of entrepreneurship, innovation and competitiveness embedded in the way digital business is done in America could be adjusted a little, to help restore the self-control and confidence that consumers keep telling us is evaporating online.
If the digital economy really is the economy, then it's high time we moved beyond hoping that we can simply train users to be safe online. Is the real economy only for heroes who can protect themselves in the jungle, writing their own code as if carrying their own guns? Or do we as a community build structures and standards, and insist on technologies that work for all?
For most people, the World Wide Web experience is still a lot like watching cartoons on TV. The human-machine interface is almost the same. The images and actions are just as synthetic; crucially, nothing on a web browser is real. Almost anything goes -- just as the Roadrunner defies gravity in besting Coyote, there are no laws of physics that temper the way one bit of multimedia leads to the next. Yes, there is a modicum of user feedback in the way we direct some of the action when browsing and e-shopping, but it's quite illusory; for the most part all we're really doing is flicking channels across a billion pages.
It's the suspension of disbelief when browsing that lies at the heart of many of the safety problems we're now seeing. Inevitably we lose our bearings in the totally synthetic World Wide Web. We don't even realise it, we're taken in by a virtual reality, and we become captive to social engineering.
But I don't think it's possible to tackle online safety by merely countering users' credulity. Education is not the silver bullet, because the Internet is really so technologically complex and abstract that it lies beyond the comprehension of most lay people.
Using the Internet 'safely' today requires deep technical skills, comparable to the level of expertise needed to operate an automobile circa 1900. Back then you needed to be able to do all your own mechanics [roughly akin to the mysteries of maintaining anti-virus software], look after the engine [i.e. configure the operating system and firewall], navigate the chaotic emerging road network [there's yet no trusted directory for the Internet, nor any road rules], and even figure out how to fuel the contraption [consumer IT supply chains are about as primitive as the gasoline industry was 100 years ago]. The analogy with the early car industry becomes especially sharp for me when I hear utopian open source proponents argue that writing one's own software is the best way to be safe online.
The Internet is so critical (I'd have thought this was needless to say) that we need ways of working online that don't require us to all be DIY experts.
I wrote a first draft of this blog six years ago, and at that time I called for patience in building digital literacy and sophistication. "It took decades for safe car and road technologies to evolve, and the Internet is still really in its infancy" I said in 2009. But I'm less relaxed about this now, on the brink of the Internet of Things. It's great that policy makers like the US FTC are calling on connected device makers to build in security and privacy, but I suspect the Internet of Things will require the same degree of activist oversight and regulation as does the auto industry, for the sake of public order and the economy. Do we have the appetite to temper breakneck innovation with safety rules?