For 35 years now, a body of data protection jurisprudence has been built on top of the original OECD Privacy Principles. The most elaborate and energetically enforced privacy regulations are in Europe (although well over 100 countries have privacy laws at last count). By and large, the European privacy regime is welcomed by the roughly 700 million citizens whose interests it protects.
Over the years, this legal machinery has produced results that occasionally surprise the rest of the world. Among these was the "Right To Be Forgotten", a ruling of the European Court of Justice (ECJ) which requires web search operators in some cases to block material that is inaccurate, irrelevant or excessive. And this week, the ECJ determined that the U.S. "Safe Harbor" arrangement (a set of pragmatic work-arounds that have permitted the import of personal information from Europe by American companies) is invalid.
These strike me as entirely logical outcomes of established technology-neutral privacy law. The Right To Be Forgotten simply treats search results as synthetic personal information, collected algorithmically, and applies regular privacy principles: if a business collects personal information, then lawful limits apply no matter how it's collected. And the self-regulated Safe Harbor was found to not provide the strength of safeguards that Europeans have come to expect. Its inadequacies are old news; action by the court has been a long time coming.
In parallel with steadily developing privacy law, an online business ecosystem has evolved, centred on the U.S. and based on the limitless resource that is information. Fabulous products, services and unprecedented economic success have flowed. But the digital rush (like gold and oil rushes before it) has brought calamity. A shaken American populace, subject to daily breaches, spying and exploitation, is left wondering who and what will ever keep them safe in cyberspace.
So it's honestly a mystery to me why every European privacy advance is met with such reflexive condemnation in America.
The OECD Privacy Principles safeguard individuals by controlling the flow of information about them. In the decades since the principles were framed, digital technologies and business models have radically expanded how information is created and how it moves. Personal information is now produced as if by magic (by wizards who make billions by their tricks). But the basic privacy principles are steadfastly the same, and are manifestly more important than ever. You know, that's what good laws are like.
A huge proportion of the American public would cheer for better data protection. We all know they deserve it. If American institutions had a better track record of respecting and protecting the data commons, then they'd be entitled to bluster about European privacy. But as things stand in Silicon Valley and Washington, moral outrage should be directed at the businesses and governments who sit on their hands over data breaches and surveillance, instead of those who do something about it.
Under new Prime Minister Malcolm Turnbull, innovation for once is the policy du jour in Australia. Innovation is associated with risk taking, but too often, government wants others to take the risk. It wants venture capitalists to take investment risk, and start-ups to take R&D risks. Is it time now for government to walk the talk?
State and federal agencies remain the most important buyers of IT in Australia. To stimulate domestic R&D and advance an innovation culture, governments should be taking some bold procurement risk, punting to some degree on new technology. Major projects like driver licence technology upgrades, the erstwhile Human Services Access Card, the national broadband roll-out, and national e-health systems, would be ideal environments in which to preferentially select next generation, home-grown products.
Obviously government must be prudent spending public money on new technology. Yet at the same time, there is a public interest argument for selecting newer solutions: in the rapidly changing online environment, citizens stand to benefit from the latest innovations, bred in response to current challenges.
What do entrepreneurs need most to help them innovate and prosper? It's metaphorical oxygen!
Too often, innovative entrepreneurs are met with the admonition "you're only trying to sell us something". Well, yes, we are, but that's because we believe we have something that meets real needs, and that customers actually need to buy something.
The identerati sometimes refer to the challenge of “binding carbon to silicon”. That’s a poetic way of describing how the field of Identity and Access Management (IDAM) is concerned with associating carbon-based life forms (as geeks fondly refer to people) with computers (or silicon chips).
To securely bind users’ identities or attributes to their computerised activities is indeed a technical challenge. In most conventional IDAM systems, there is only circumstantial evidence of who did what and when, in the form of access logs and audit trails, most of which can be tampered with or counterfeited by a sufficiently determined fraudster. To create a lasting, tamper-resistant impression of what people do online requires some sophisticated technology (in particular, digital signatures created using hardware-based cryptography).
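The tamper-resistance idea can be illustrated without special hardware. The sketch below is my own minimal construction, not any production scheme: real systems would create digital signatures with asymmetric keys held inside a secure element, but even a simple hash chain shows how a lasting impression of "who did what and when" resists after-the-fact editing, because each log entry embeds the hash of its predecessor.

```python
import hashlib
import json

def append_entry(log, actor, action):
    """Append an event to a hash-chained audit log.

    Each entry embeds the hash of its predecessor, so altering
    any past record invalidates every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every link; returns False if any entry was tampered with."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "alice", "viewed record 42")
append_entry(log, "bob", "edited record 42")
assert verify(log)

log[0]["action"] = "viewed record 99"   # a fraudster rewrites history...
assert not verify(log)                  # ...and the chain exposes it
```

A determined fraudster who edits an old entry would have to recompute the entire chain; with signing keys locked in hardware, even that becomes infeasible, which is precisely the point of hardware-based cryptography.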
On the other hand, working out looser associations between people and computers is the stock-in-trade of social networking operators and Big Data analysts. So many signals are emitted as a side effect of routine information processing today that even the shyest of users may be uncovered by third parties with sufficient analytics know-how and access to data.
So privacy is in peril. For the past two years, big data breaches have only got bigger: witness the losses at Target (110 million records), eBay (145 million), Home Depot (109 million) and JPMorgan Chase (83 million) to name a few. Breaches have got deeper, too. Most notably, in June 2015 the U.S. federal government’s Office of Personnel Management (OPM) revealed it had been hacked, with the loss of detailed background profiles on 15 million past and present employees.
I see a terrible systemic weakness in the standard practice of information security. Look at the OPM breach: what was going on that led to application forms for employees dating back 15 years remaining in a database accessible from the Internet? What was the real need for this availability? Instead of relying on firewalls and access policies to protect valuable data from attack, enterprises need to review which data needs to be online at all.
We urgently need to reduce the exposed attack surface of our information assets. But in the information age, the default has become to make data as available as possible. This liberality is driven both by the convenience of having all possible data on hand, just in case in it might be handy one day, and by the plummeting cost of mass storage. But it's also the result of a technocratic culture that knows "knowledge is power," and gorges on data.
In communications theory, Metcalfe’s Law states that the value of a network is proportional to the square of the number of devices that are connected. This is an objective mathematical reality, but technocrats have transformed it into a moral imperative. Many think it axiomatic that good things come automatically from inter-connection and information sharing; that is, the more connection the better. Openness is an unexamined rallying call for both technology and society. “Publicness” advocate Jeff Jarvis wrote (admittedly provocatively) that: “The more public society is, the safer it is”. And so a sort of forced promiscuity is shaping up as the norm on the Internet of Things. We can call it "superconnectivity", with a nod to the special state of matter where electrical resistance drops to zero.
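For what it's worth, the arithmetic behind Metcalfe's Law is simple: among n connected devices there are n(n-1)/2 possible pairwise links, so the count grows roughly with the square of the network's size. A two-line illustration:

```python
def potential_links(n):
    """Number of distinct pairwise connections among n devices: n(n-1)/2.

    This is the combinatorial fact behind Metcalfe's Law -- the link count
    (and, the argument goes, the value) grows with the square of n.
    """
    return n * (n - 1) // 2

# Doubling the devices roughly quadruples the connections:
assert potential_links(10) == 45
assert potential_links(20) == 190   # ~4x, not 2x
```

Whether each of those links actually creates value is, of course, exactly the moral leap the technocrats make.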
In thinking about privacy on the IoT, a key question is this: how much of the data emitted from Internet-enabled devices will actually be personal data? If great care is not taken in the design of these systems, the unfortunate answer will be most of it.
My latest investigation into IoT privacy uses the example of the Internet connected motor car. "Rationing Identity on the Internet of Things" will be released soon by Constellation Research.
And don't forget Constellation's annual innovation summit, Connected Enterprise at Half Moon Bay outside San Francisco, November 4th-6th. Early bird registration closes soon.
A letter to the editor, Sydney Morning Herald, January 14, 2011.
The ABC screened a nice documentary last night, "Getting Frank Gehry", about the new UTS Business School building. The only thing spoiling the show was Sydney's rusted-on architecture critic Elizabeth Farrelly having another self-conscious whinge. And I remembered that I wrote a letter to the Herald after she had a go in 2011 when the design was unveiled (see "Gehry has designed a building that is more about him than us").
Where would damp squib critics be without the 'starchitects' they love to hate?
Letter as published
Ironically, Elizabeth Farrelly's diatribe against Frank Gehry and his UTS design is really all about her. She spends 12 flabby paragraphs defending criticism (please! Aren't Australians OK by now with the idea of critics?) and bravely mocking Gehry as "starchitect".
Eventually Farrelly lets go her best shots: mild rhetorical questions about the proposal's still unseen interior, daft literalism about buildings being unable to move, and a "quibble" about harmony. I guess she likewise dismisses Gaudi and his famous fluid masonry.
Farrelly's contempt for the university's "boot-licking" engagement with this celebrated architect is simply myopic. The thing about geniuses like Gehry and Utzon is that brave clients can trust that the results will prevail.
Stephen Wilson, Five Dock
The Biometrics Institute has received Australian government assistance to fund the next stage of the development of a new privacy Trust Mark. And Lockstep Consulting is again working with the Institute to bring this privacy initiative to fruition.
A detailed feasibility study was undertaken by Lockstep in the first half of 2015, involving numerous privacy advocates, regulators and vendors in Europe, the US, New Zealand and Australia.
We found strong demand for a reputable, non-trivial B2C biometrics certification.
Privacy advocates are generally supportive of a new Trust Mark, however they stress that a Trust Mark can be counter-productive if it is too easy to obtain, biased by industry interests, and/or poorly policed. There is general agreement that a credible trust mark should be non-trivial, and consequently, that the criteria be reasonably prescriptive. The reality of a strong Trust Mark is that not all architectures and solution instances will be compatible with the certification criteria.
The next stage of the Biometrics Institute project will deliver technical criteria for the award of the Trust Mark, and a PIA (Privacy Impact Assessment) template. A condition of the Trust Mark will be that a PIA is undertaken.
Please contact Steve Wilson at Lockstep firstname.lastname@example.org or Isabelle Moeller (Biometrics Institute CEO) email@example.com, if you'd like to receive further details of the Stage 1 findings, or would like to contribute to the technical research in Stage 2.
An unpublished letter to New Yorker magazine, August 2015.
Kelefa Sanneh ("The Hell You Say", Aug 10 & 17) poses a question close to the heart of society’s analog-to-digital conversion: What is speech?
Internet policy makers worldwide are struggling with a recent European Court of Justice decision which grants some rights to individuals to have search engines like Google block results that are inaccurate, irrelevant or out of date. Colloquially known as the "Right To Be Forgotten" (RTBF), the ruling has raised the ire of many Americans in particular, who typically frame it as yet another attack on free speech. Better defined as a right to be de-listed, RTBF makes search providers consider the impact on individuals of search algorithms, alongside their commercial interests. For there should be no doubt – search is very big business. Google and its competitors use search to get to know people, so they can sell better advertising.
Search results are categorically not the sort of text which contributes to "democratic deliberation". Free speech may be many things but surely not the mechanical by-products of advertising processes. To protect search results as such mocks the First Amendment.
Some of my other RTBF thoughts:
- Search is not a passive reproduction; Google makes the public domain public.
- Google's deeply divided Advisory Council was strangely silent on the business nature of search.
- Search results are a special form of Big Data, and not the sort of thing that counts as speech.
In the latest course of a 15-month security feast, BlackBerry has announced it is acquiring mobile device management (MDM) provider Good Technology. The deal is said to be definitive, for US$425 million in cash.
As BlackBerry boldly re-positions itself as a managed service play in the Internet of Things, adding an established MDM capability to its portfolio will bolster its claim -- which still surprises many -- to be handset neutral. But the Good buy is much more than that. It has to be seen in the context of John Chen's drive for cross-sector security and privacy infrastructure for the IoT.
As I reported from the recent BlackBerry Security Summit in New York, the company has knitted together a comprehensive IoT security fabric. Look at how they paint their security platform:
And see how Good will slip neatly into the Platform Services column. It's the latest in what is now a $575 million investment in non-organic security growth (following purchases of Secusmart, WatchDox, Movirtu and AtHoc).
According to BlackBerry,
- Good will bring complementary capabilities and technologies to BlackBerry, including secure applications and containerization that protects end user privacy. With Good, BlackBerry will expand its ability to offer cross-platform EMM solutions that are critical in a world with varying deployment models such as bring-your-own-device (BYOD); corporate owned, personally enabled (COPE); as well as environments with multiple user interfaces and operating systems. Good has expertise in multi-OS management with 64 percent of activations from iOS devices, followed by a broad Android and Windows customer base.(1) This experience combined with BlackBerry’s strength in BlackBerry 10 and Android management – including Samsung KNOX-enabled devices – will provide customers with increased choice for securely deploying any leading operating system in their organization.
The strategic acquisition of Good Technology will also give the Identity-as-a-Service sector a big kick. IDaaS has become a crowded space with at least ten vendors (CA, Centrify, IBM, Microsoft, Okta, OneLogin, Ping, SailPoint, Salesforce, VMware) competing strongly around a pretty well settled set of features and functions. BlackBerry themselves launched an IDaaS a few months ago. At the Security Summit, I asked their COO Marty Beard what is going to distinguish their offering in such a tight market, and he said, simply, mobility. Presto!
But IDaaS is set to pivot. We all know that mobility is now the locus of security, and we've seen VMware parlay its AirWatch investment into a competitive new cloud identity service. This must be more than a catch-up play with so many entrenched IDaaS vendors.
Here's the thing. I foresee identity actually disappearing from the user experience, which more and more will just be about the apps. I discussed this development in a really fun "Identity Innovators" video interview recorded with Ping at the recent Cloud Identity Summit. For identity to become seamless with the mobile application UX, we need two things. Firstly, federation protocols so that different pieces of software can hand over attributes and authentication signals to one another, and these are all in place now. But secondly we also need fully automated mobile device management as a service, and that's where Good truly fits with the growing BlackBerry platform.
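To make the first ingredient concrete, here is a toy sketch of an attribute hand-off. It is my own minimal construction, not any vendor's API: real federation runs on OpenID Connect or SAML, typically with asymmetric keys rather than the shared secret used here. One piece of software mints a signed token carrying identity attributes; another verifies the signature and reads the claims, with the user none the wiser.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWT-style tokens."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict, key: bytes) -> str:
    """Mint a minimal JWT-style token (HS256) carrying identity attributes."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims, sort_keys=True).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def read_claims(token: str, key: bytes) -> dict:
    """Verify the signature, then hand the attributes to the relying app."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

shared_key = b"demo-only-secret"
token = make_token({"sub": "user123", "email_verified": True}, shared_key)
claims = read_claims(token, shared_key)
assert claims["sub"] == "user123"
```

The point is that once tokens like this flow between apps automatically, "logging in" dissolves into the application experience, which is exactly where mobile device management as a service comes in.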
Now stay tuned for new research coming soon via Constellation on the Internet of Things, identity, privacy and software reliability.
See also The State of Identity Management in 2015.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. For years, Lockstep has been monitoring these figures, plotting the trend data and analysing what the industry is and is not doing about it. A few weeks ago, statistics for calendar year 2014 came out.
As we reported last time, despite APCA's optimistic boosting of 3D Secure and education measures for many years, Card Not Present (CNP) online fraud was not falling as hoped. And what we see now in the latest numbers is the second biggest jump in CNP fraud ever! CY 2014 online card fraud losses were very nearly AU$300M, up 42% in 12 months.
Again, APCA steadfastly rationalises in its press release (PDF) that high losses simply reflect the popularity of online shopping. That's cold comfort to the card holders and merchants who are affected.
APCA has a love-ignore relationship with 3D Secure. This is one of the years when 3D Secure goes unmentioned. Instead the APCA presser talks up tokenization, I think for the first time. Yet the payments industry has had tokenization for about a decade. It's just another band-aid over the one fundamental crack in the payment card system: nothing stops stolen card numbers being replayed.
A proper fix for replay attack is easily within reach: re-use the same cryptography that solves skimming and carding, and restore a seamless payment experience for card holders. See my 2012 paper "Calling for a Uniform Approach to Card Fraud Offline and On" (PDF).
The credit card payments system is a paragon of standardisation. No other industry has such a strong history of driving and adopting uniform technologies, infrastructure and business processes. No matter where you keep a bank account, you can use a globally branded credit card to go shopping in almost every corner of the world. The universal Four Party settlement model, and a long-standing card standard that works the same with ATMs and merchant terminals everywhere underpin seamless convenience. So with this determination to facilitate trustworthy and supremely convenient spending in every corner of the earth, it’s astonishing that the industry is still yet to standardise Internet payments. We settled on the EMV standard for in-store transactions, but online we use a wide range of confusing and largely ineffective security measures. As a result, Card Not Present (CNP) fraud is growing unchecked.
This article argues that all card payments should be properly secured using standardised hardware. In particular, CNP transactions should use the very same EMV chip and cryptography as do card present payments.
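To illustrate why per-transaction cryptography defeats replay, here is a loose sketch. It is my own toy construction, not the actual EMV algorithm (real cards derive session keys and format cryptograms per the EMV specifications, inside the chip): the card's transaction counter is folded into a one-time authorization code, so a code captured by a fraudster is useless for a second purchase.

```python
import hashlib
import hmac

def cryptogram(card_key: bytes, counter: int, amount_cents: int, merchant: str) -> str:
    """One-time authorization code over the transaction details.

    Loosely mimics an EMV-style application cryptogram: because the card's
    ever-incrementing transaction counter is mixed in, replaying a captured
    code for a new transaction fails verification at the issuer.
    """
    msg = f"{counter}|{amount_cents}|{merchant}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"provisioned-into-secure-element"   # never leaves the chip in real EMV
first  = cryptogram(key, counter=1, amount_cents=4250, merchant="acme.example")
replay = cryptogram(key, counter=2, amount_cents=4250, merchant="acme.example")
assert first != replay   # identical details yield a different code each time
```

A stolen static card number, by contrast, authorizes every subsequent transaction equally well, which is the one fundamental crack that tokenization band-aids over.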
With all the innovation in payments leveraging cryptographic Secure Elements in mobile phones - the exemplar being Apple Pay for Card Present business - it beggars belief that we have yet to modernise CNP payments for web and mobile shopping.
On July 23, BlackBerry hosted its second annual Security Summit, once again in New York City. As with last year’s event, this was a relatively intimate gathering of analysts and IT journalists, brought together for the lowdown on BlackBerry’s security and privacy vision.
By his own account, CEO John Chen has met plenty of scepticism over his diverse and, some say, chaotic product and services portfolio. And yet it’s beginning to make sense. There is a strong credible thread running through Chen’s initiatives. It all has to do with the Internet of Things.
Disclosure: I traveled to the Blackberry Security Summit as a guest of Blackberry, which covered my transport and accommodation.
The Growth Continues
In 2014, John Chen opened the show with the announcement he was buying the German voice encryption firm Secusmart. That acquisition appears to have gone well for all concerned; they say nobody has left the new organisation in the 12 months since. News of BlackBerry’s latest purchase - of crisis communications platform AtHoc - broke a few days before this year’s Summit, and it was only the most recent addition to the family. In the past 12 months, BlackBerry has been busy spending $150M on inorganic growth, picking up Secusmart, WatchDox, Movirtu and AtHoc.
Chen has also overseen an additional $100M expenditure in the same timeframe on organic security expansion (over and above baseline product development). Amongst other things BlackBerry has:
The Growth Explained - Secure Mobile Communications
Executives from different business units and different technology horizontals all organised their presentations around what is now a comprehensive security product and services matrix. It looks like this (before adding AtHoc):
BlackBerry is striving to lead in Secure Mobile Communications. In that context the highlights of the Security Summit for mine were as follows.
The Internet of Things
BlackBerry’s special play is in the Internet of Things. It’s the consistent theme that runs through all their security investments, because as COO Marty Beard says, IoT involves a lot more than machine-to-machine communications. It’s more about how to extract meaningful data from unbelievable numbers of devices, with security and privacy. That is, IoT for BlackBerry is really a security-as-a-service play.
Chief Security Officer David Kleidermacher repeatedly stressed the looming challenge of “how to patch and upgrade devices at scale”.
- MyPOV: Functional upgrades for smart devices will of course be part and parcel of IoT, but at the same time, we need to work much harder to significantly reduce the need for reactive security patches. I foresee an angry consumer revolt if things that never were computers start to behave and fail like computers. A radically higher standard of quality and reliability is required. Just look at the Jeep Uconnect debacle, where it appears Chrysler eventually thought better of foisting a patch on car owners and instead opted for a much more expensive vehicle recall. It was BlackBerry’s commitment to ultra high reliability software that really caught my attention at the 2014 Security Summit, and it convinces me they grasp what’s going to be required to make ubiquitous computing properly seamless.
Refreshingly, COO Beard preferred to talk about economic value of the IoT, rather than the bazillions of devices we are all getting a little jaded about. He said the IoT would bring about $4 trillion of required technology within a decade, and that the global economic impact could be $11 trillion.
BlackBerry’s real time operating system QNX is in 50 million cars today.
AtHoc is a secure crisis communications service, with its roots in the first responder environment. It’s used by three million U.S. government workers today, and the company is now pushing into healthcare.
Founder and CEO Guy Miasnik explained that emergency communications involves more than just outbound alerts to people dealing with disasters. Critical to crisis management is the secure inbound collection of info from remote users. AtHoc is also not just about data transmission (as important as that is) but it works also at the application layer, enabling sophisticated workflow management. This allows procedures for example to be defined for certain events, guiding sets of users and devices through expected responses, escalating issues if things don’t get done as expected.
We heard more about BlackBerry’s collaboration with Oxford University on the Centre for High Assurance Computing Excellence, first announced in April at the RSA Conference. CHACE is concerned with a range of fundamental topics, including formal methods for verifying program correctness (an objective that resonates with BlackBerry’s secure operating system division QNX) and new security certification methodologies, with technical approaches based on the Common Criteria of ISO 15408 but with more agile administration to reduce that standard’s overhead and infamous rigidity.
CSO Kleidermacher announced that CHACE will work with the Diabetes Technology Society on a new healthcare security standards initiative. The need for improved medical device security was brought home vividly by an enthralling live demonstration of hacking a hospital drug infusion pump. These vulnerabilities have been exposed before at hacker conferences but BlackBerry’s demo was especially clear and informative, and crafted for a non-technical executive audience.
- MyPOV: The message needs to be broadcast loud and clear: there are life-critical machines in widespread use, built on commercial computing platforms, without any careful thought for security. It’s a shameful and intolerable situation.
I was impressed by BlackBerry’s privacy line. It's broader and more sophisticated than most security companies, going way beyond the obvious matters of encryption and VPNs. In particular, the firm champions identity plurality. For instance, WorkLife by BlackBerry, powered by Movirtu technology, realizes multiple identities on a single phone. BlackBerry is promoting this capability in the health sector especially, where there is rarely a clean separation of work and life for professionals. Chen said he wants to “separate work and private life”.
The health sector in general is one of the company’s two biggest business development priorities (the other being automotive). In addition to sophisticated telephony like virtual SIMs, they plan to extend AtHoc into healthcare messaging, and have tasked the CHACE think-tank with medical device security. These actions complement BlackBerry’s fine words about privacy.
So BlackBerry’s acquisition plan has gelled. It now has perhaps the best secure real time OS for smart devices, a hardened device-independent Mobile Device Management backbone, new data-centric privacy and rights management technology, remote certificate management, and multi-layered emergency communications services that can be diffused into mission-critical rules-based e-health settings and, eventually, automated M2M messaging. It’s a powerful portfolio that makes strong sense in the Internet of Things.
BlackBerry says IoT is 'much more than device-to-device'. It’s more important to be able to manage the secure data emitted by ubiquitous devices in enormous volumes, and to service those things – and their users – seamlessly. For BlackBerry, the Internet of Things is really all about the service.
In 2002, a couple of Japanese visitors to Australia swapped passports with each other before walking through an automatic biometric border control gate being tested at Sydney airport. The facial recognition algorithm falsely matched each of them to the other's passport photo. These gentlemen were in fact part of an international aviation industry study group and were in the habit of trying to fool biometric systems then being trialed round the world.
When I heard about this successful prank, I quipped that the algorithms were probably written by white people - because we think all Asians look the same. Colleagues thought I was making a typical sick joke, but actually I was half-serious. It did seem to me that the choice of facial features thought to be most distinguishing in a facial recognition model could be culturally biased.
Since that time, border control face recognition has come a long way, and I have not heard of such errors for many years. Until today.
The San Francisco Chronicle of July 21 carries a front page story about the cloud storage services of Google and Flickr mislabeling some black people as gorillas (see updated story, online). It's a quite incredible episode. Google has apologized. Its Chief Architect of social, Yonatan Zunger, seems mortified judging by his tweets as reported, and is investigating.
The newspaper report quotes machine learning experts who suggest programmers with limited experience of diversity may be to blame. That seems plausible to me, although I wonder where exactly the algorithm R&D gets done, and how much control is to be had over the biometric models and their parameters along the path from basic research to application development.
So man has literally made software in his own image.
The public is now being exposed to Self Driving Cars, which are heavily reliant on machine vision, object recognition and artificial intelligence. If this sort of software can't tell people from apes in static photos given lots of processing time, how does it perform in real time, with fleeting images, subject to noise, and with much greater complexity? It's easy to imagine any number of real life scenarios where an autonomous car will have to make a split-second decision between two pretty similar looking objects appearing unexpectedly in its path.
The general expectation is that Self Driving Cars (SDCs) will be tested to exhaustion. And so they should. But if cultural partiality is affecting the work of programmers, it's possible that testers suffer the same blind spots without knowing it. Maybe the offending photo labeling programs were never verified with black people. So how are the test cases for SDCs being selected? What might happen when an SDC ventures into environments and neighborhoods where its programmers have never been?
Everybody in image processing and artificial intelligence should be humbled by the racist photo labeling. With the world being eaten by software, we need to reflect really deeply on how such design howlers arise. And frankly double check if we're ready to let computer programs behind the wheel.