This morning Microsoft's CEO Satya Nadella gave a global speech about enterprise security. He announced a new Cyber Defense Operations Center, a should-not-be-new Microsoft Enterprise Cybersecurity Group and a not-at-all-new-sounding Enterprise Mobility Suite (EMS). The webcast can be replayed here but don't expect to be blown away. It's all just table stakes for a global cloud provider.
Security is being standardised all over the place now. Ordinary people are getting savvier about security best practice; they know, for example, that biometric templates need to be handled carefully in client devices, and that secure storage is critical for assets like identities and Bitcoin. "Secure Element" is almost a lay person's term now (Apple tried to give the iPhone security chip the fancy name "Enclave" but seems now to regard it as so standard it doesn't need branding).
All this awareness is great, but it's fast becoming hygiene. Like airplane safety. It's a bit strange for corporations to seek to compete on security, or to have the CEO announce what are really textbook security services. At the end of the speech, I couldn't tell if anything sets Microsoft apart from its arch competitors Google or Amazon.
Most of today's CISOs operate at a higher, more strategic level than malware screening, anti-virus and encryption. Nadella's subject matter was really deep in the plumbing. Not that there's anything wrong with that. But it just didn't seem to me like the subject matter for a CEO's global webcast.
The Microsoft "operational security posture" is very orthodox, resting on "Platform, Intelligence and Partners". I didn't see anything new here, just a big strong cloud provider doing exactly what they should: leveraging the hell out of a massive operation, with massive resources, and massive influence.
A big part of my research agenda in the Digital Safety theme at Constellation is privacy. And what a vexed topic it is! It's hard to even know how to talk about privacy. For many years, folks have covered privacy in more or less academic terms, drawing on sociology, politics and pop psychology, joining privacy to human rights, and crafting various new legal models.
Meanwhile the data breaches get worse, and most businesses have just bumped along.
When you think about it, it’s obvious really: there’s no such thing as perfect privacy. The real question is not about ‘fundamental human rights’ versus business, but rather, how can we optimise a swarm of competing interests around the value of information?
Privacy is emerging as one of the most critical and strategic of our information assets. If we treat privacy as an asset, instead of a burden, businesses can start to cut through this tough topic.
But here’s an urgent issue. A recent regulatory development means privacy may just stop a lot of business getting done. It's the European Court of Justice decision to shut down the US-EU Safe Harbor arrangement.
The privacy Safe Harbor was a work-around negotiated by the Federal Trade Commission, allowing companies to send personal data from Europe into the US.
But the Safe Harbor is no more. It's been ruled unlawful. So it’s a big, big problem for European operations, many multinationals, and especially US cloud service providers.
At Constellation we've researched cloud geography and previously identified competitive opportunities for service providers to differentiate and compete on privacy. But now this is an urgent issue.
It's time American businesses stopped getting caught out by global privacy rulings. There shouldn't be too many surprises here, if you understand what data protection means internationally. Even the infamous "Right To Be Forgotten" ruling on Google’s search engine – which strikes so many technologists as counter-intuitive – was a rational and even predictable outcome of decades-old data privacy law.
The leading edge of privacy is all about Big Data. And we ain't seen nothin' yet!
Look at artificial intelligence, Watson Health, intelligent personal assistants, hackable cars, and the Internet of Everything where everything is instrumented, and you see information assets multiplying exponentially. Privacy is actually just one part of this. It’s another dimension of information, one that can add value, but not in a neat linear way. The interplay of privacy, utility, usability, efficiency, efficacy, security, scalability and so on is incredibly complex.
The broader issue is Digital Safety: safety for your customers, and safety for your business.
You’ll have to forgive the deliberate inaccuracy in the title, but I just couldn’t resist the wordplay. The topic of this blog is the use of the blockchain for identity, which is not exactly Bitcoin. By my facetiousness, and by my analysis, you’ll see I don’t yet take the identity use case seriously.
Bitcoin was launched in 2009. A person or persons going by the nom de plume Satoshi Nakamoto self-published a paper, “Bitcoin: A Peer-to-Peer Electronic Cash System”, and soon after an open source software base appeared at http://www.bitcoin.org. Bitcoin offered a novel solution to the core problem in electronic cash: how to prevent double spending without reverting to a central authority. Nakamoto’s conception is strongly anti-authoritarian, almost anarchic, with an absolute rejection of fiat currency, reserve banks and other central institutions. Bitcoin and its kin aim to change the world, and by loosening the monopolies in traditional finance, they may well do that.
Separate to that, the core cryptographic technology in Bitcoin is novel, and so surprising, it's almost magical. Add to that spell the promise of security and anonymity, and we have a powerful mix that some people see excitedly as stretching far beyond mere money, and into identity. So is that a reasonable step?
Bitcoin’s secret sauce
A decentralised digital currency scheme requires some sort of community-wide agreement on when someone spends a virtual coin, so she cannot spend it again. Bitcoin’s trick is to register every single transaction on one public tamper-proof ledger called the blockchain, which is refreshed in such a way that the whole community in effect votes on the order in which transactions are added or, equivalently, the time when each coin is spent.
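The tamper-evidence of a hash-chained ledger is easy to sketch. The following toy is nothing like Bitcoin's real data structures (which involve block headers, Merkle trees and proof-of-work), but it shows the essential property: editing an early entry changes its hash and breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block whose hash covers both its transactions and its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev": prev, "transactions": transactions})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any tampering shows up as a mismatch."""
    for i, block in enumerate(chain):
        expect_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expect_prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"],
                                        "transactions": block["transactions"]}):
            return False
    return True

chain = []
append_block(chain, ["Alice pays Bob 1 BTC"])
append_block(chain, ["Bob pays Carol 0.5 BTC"])
assert verify(chain)

chain[0]["transactions"][0] = "Alice pays Mallory 1 BTC"   # tamper with history
assert not verify(chain)                                    # ...and the chain fails
```

Note that this toy only makes tampering *detectable*; it is the community-wide voting described above that decides whose version of the chain prevails.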
Within each block, transactions are hashed together (via a Merkle tree) to keep the data manageable, but all transactions are visible, archived in effect for all time. No proof of identity or KYC check is needed to register a Bitcoin account, and currency – denominated "BTC" – may be transferred freely to any other account. Hence Bitcoin may be called anonymous (but the unique account identifiers are set in stone, providing a rock solid money trail that has been the undoing of many criminal Bitcoin users).
The continuous arbitration of blockchain entries is effected by a peer-to-peer network of servers that race each other to find a special hash value for the refreshed chain. The particular server that wins each race is rewarded for its effort with newly minted Bitcoin. The ongoing background computation that keeps a network like this honest is referred to technically as "Proof of Work"; with Bitcoin, since there is a monetary reward, it’s called mining.
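The "race" can be conveyed with a toy proof-of-work. This sketch brute-forces a nonce until a hash meets a difficulty target (real Bitcoin mining double-SHA-256 hashes an 80-byte block header against a 256-bit target; the difficulty here is a made-up stand-in):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(data + nonce) starts with `difficulty` hex zeros.

    Finding the nonce takes many attempts on average; checking a claimed
    nonce takes just one hash -- the asymmetry that makes Proof of Work work.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("block 42", difficulty=4)
# Cheap verification: a single hash confirms the expensive search was done.
assert hashlib.sha256(f"block 42{nonce}".encode()).hexdigest().startswith("0000")
```

Raising `difficulty` by one hex digit multiplies the expected search effort by sixteen, which is roughly how Bitcoin tunes its ten-minute block interval as mining power grows.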
Whether or not Bitcoin lasts as a form of electronic cash, there is a groundswell of enthusiasm for the blockchain as a new type of public ledger for a much broader range of transactions, including “identity”. The scare quotes are deliberate on my part, reflecting that the blockchain-for-identity speculations have not been clear about what part of the identity puzzle they might solve.
For identity applications, the reality of Bitcoin mining creates some particular challenges which I will return to. But first let’s look at the positive influence of Bitcoin and then review some of its cryptographic building blocks.
People will argue about its true originality, but we can regard Bitcoin and the blockchain as providing an innovative and practical solution to the unsolved double-spend problem. I like Bitcoin as the latest example of a wondrous pattern in applied mathematics. Conundrums widely accepted as impossible are, in fact, solved quite often, after which frenetic periods of innovation can follow. The first surprise or prototype solution is typically inefficient but it can inspire fresh thinking and lead to more polished methods.
One of the greatest examples is Merkle’s Puzzles, a theoretical method invented by Ralph Merkle in 1974 for establishing a shared secret number between two parties who need only exchange public pieces of data. This was the holy grail for cryptography, for it meant that a secret key could be set up without having to carry the secret from one correspondent to the other (after all, if you can securely transfer a key across a long distance, you can do the same with your secret message and thus avoid the hassle of encryption altogether). Without going into detail, Merkle’s solution could not be used in the real world, but it solved what was thought to be an unsolvable problem. In quick succession, practical algorithms followed from Diffie & Hellman, and Rivest, Shamir & Adleman (the names behind “RSA”) and thus was born public key cryptography.
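For the curious, the Diffie-Hellman key agreement that followed Merkle's Puzzles fits in a few lines. The prime below is deliberately tiny for illustration; real deployments use groups of 2048 bits or more, or elliptic curves.

```python
import secrets

# Public parameters, known to everyone (toy-sized; NOT secure).
p = 4294967291          # a prime modulus
g = 5                   # a generator

a = secrets.randbelow(p - 2) + 2    # Alice's private exponent, kept secret
b = secrets.randbelow(p - 2) + 2    # Bob's private exponent, kept secret

A = pow(g, a, p)        # Alice publishes A = g^a mod p
B = pow(g, b, p)        # Bob publishes B = g^b mod p

# Each side combines the other's public value with its own secret.
key_alice = pow(B, a, p)            # (g^b)^a mod p
key_bob = pow(A, b, p)              # (g^a)^b mod p
assert key_alice == key_bob         # the same shared secret, never transmitted
```

An eavesdropper sees only `p`, `g`, `A` and `B`; recovering the shared secret from those is the discrete logarithm problem, believed intractable at real-world key sizes.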
Bitcoin likewise has spurred dozens of new digital currencies, with different approaches to ledgers and arbitration, and different ambitions too (including Ripple, Ethereum, Litecoin, Dogecoin, and Colored Coins). They all promise to break the monopoly that banks have on payments, radically cut costs and settlement delays, and make electronic money more accessible to the unbanked of the world. These are what we might call liquidity advantages of digital currencies. These objectives (plus the more political promises of ending fiat currency and rendering electronic cash transactions anonymous or untraceable) are certainly all important but they are not my concern in this blog.
Bitcoin’s public sauce
Before looking at identity, let’s review some of the security features of the blockchain. We will see that safekeeping of each account holder’s private keys is paramount – as it is with all Internet payments systems and PKIs.
While the blockchain is novel, many elements of Bitcoin come from standard public key cryptography and will be familiar to anyone in security. What’s called a Bitcoin “address” (the identifier of someone you will send currency to) is actually derived from a public key – a hash of it, in fact. To send any Bitcoin money from your own address, you use the matching private key to sign a data object, which is sent into the network to be processed and ultimately added to the blockchain.
The only authoritative record of anyone’s Bitcoin balance is held on the blockchain. Account holders typically operate a wallet application which shows their balance and lets them spend it, but, counter-intuitively, the wallet holds no money. All it does is control a private key (and provide a user experience of the definitive blockchain). The only way you have to spend your balance (that is, transfer part of it to another account address) is to use your private key. What follows from this is an unforgiving reality of Bitcoin: your private key is everything. If a private key is lost or destroyed, then the balance associated with that key is frozen forever and cannot be spent. And thus there has been a string of notorious mishaps where computers or disk drives holding Bitcoin wallets have been lost, together with millions of dollars of value they controlled. Furthermore, numerous pieces of malware have – predictably – been developed to steal Bitcoin private keys from regular storage devices (and law enforcement agencies have intercepted suspects’ private keys in the battle against criminal use of Bitcoin).
You would expect the importance of Bitcoin private key storage to have been obvious from the start, to ward off malware and destruction, and to allow for reliable backup. But it was surprisingly late in the piece that “hardware wallets” emerged, the best known of which is probably now the Trezor, which first appeared in 2013. The use of hardware security modules for private key management in soft wallets or hybrid wallets has been notably ad hoc. It appears crypto currency proponents pay more attention to the algorithms and the theory than to practical cryptographic engineering.
Identifying with the blockchain
The enthusiasm for crypto currency innovation has proven infectious, and many commentators have promoted the blockchain in particular as something special for identity management. A number of start-ups are “providing” identity on the blockchain – including OneName, and ShoCard – although on closer inspection what this usually means is nothing more than reserving a unique blockchain identifier with a self-claimed pseudonym.
Prominent financial services blogger Chris Skinner says "the blockchain will radically alter our futures" and envisages an Internet of Things where your appliances are “recorded [on the blockchain] as being yours using your digital identity token (probably a biometric or something similar)”. And the government of Honduras has hired American Bitcoin technology firm Factom to build a blockchain-based land title registry, which they claim will be “immutable”, resistant to insider fraud, and extensible to “more secure mortgages, contracts, and mineral rights”.
While blockchain aficionados have been quick to make a leap to identity, the opposite is not the case. The identerati haven’t had much to say about blockchain at all. Ping Identity CTO Patrick Harding mentioned it in his keynote address at the 2015 Cloud Identity Summit, and got a meek response from the audience when he asked who knew what blockchain is (I was there). Harding’s suggestions were modest, exploratory and cautious. And only now has blockchain figured prominently in the twice-yearly freeform Internet Identity Workshop unconference in Silicon Valley. I'm afraid it's telling that all the initial enthusiasm for blockchain "solving" identity has come from non identity professionals.
What identity management problem would be solved by using the blockchain? The most prominent challenges in digital identity include the following:
What does the blockchain have to offer?
Certainly, pseudonymity is important in some settings, but is rare in economically important personal business, and in any case is not unique to the blockchain. The secure recording of transactions is very important, but that’s well-solved by regular digital signatures (which remain cryptographically verifiable essentially for all time, given the digital certificate chain). Most important identity transactions are pretty private, so recording them all in a single public register instead of separate business-specific databases is not an obvious thing to do.
The special thing about the blockchain and the proof-of-work is that they prevent double-spending. I’ve yet to see a blockchain-for-identity proposal that explains what the equivalent “double identify” problem really is and how it needs solving. And if there is such a thing, the price to fix it is to record all identity transactions in public forever.
The central user action in all blockchain applications is to “send” something to another address on the blockchain. This action is precisely a digital (asymmetric cryptographic) signature, essentially the same as any conventional digital signature, created by hashing a data object and encrypting it with one’s private key. The integrity and permanence of the action comes from the signature itself; it is immaterial where the signature is stored.
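Bitcoin in fact uses ECDSA over the secp256k1 curve, but the hash-then-sign pattern is easiest to see with textbook RSA. The key below uses toy numbers purely for illustration; real keys are thousands of bits and use padding schemes.

```python
import hashlib

# Toy RSA key: n = 61 * 53 = 3233, with e * d = 17 * 413 ≡ 1 (mod lcm(60, 52) = 780).
n, e, d = 3233, 17, 413

def sign(message: bytes) -> int:
    # Hash the data object, then "encrypt" the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key can recompute the digest and compare.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

tx = b"send 1 BTC to Bob"
sig = sign(tx)
assert verify(tx, sig)   # the signature itself proves integrity,
                         # wherever it happens to be stored
```

The point of the sketch is the last line: the signature verifies against the message and public key alone, with no reference to any ledger.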
What the blockchain does is prevent a user from performing the same action more than once, by using the network to arbitrate the order in which digital signatures are created. In regular identity matters, this objective simply doesn’t arise. The primitive actions in authentication are to leave one’s unique identifying mark (or signature) on a persistent transaction, or to present one’s identity in real time to a service. Apart from peer-to-peer arbitration of order, the blockchain is just a public ledger - and a rather slow one at that. Many accounts of blockchain uses beyond payments simply speak of its inviolability or perpetuity. In truth, any old system of digitally signed database entries is reasonably inviolable. Tamper resistance and integrity come from the digital signatures, not the blockchain. And as mentioned, the blockchain itself doesn't provide any assurance of who really did what - for that we need separate safeguards on users' private keys, plus reliable registration of users and their relevant attributes (which incidentally cannot be done without some authority, unless self-attestation is good enough).
In addition to not offering much advantage in identity management, there are at least two practical downsides to recording non Bitcoin activity on the blockchain, both related to the proof-of-work. The peer-to-peer resolution of the order of transactions takes time. With Bitcoin, the delay is 10 minutes; that’s the time taken for an agreed new version of the blockchain to be distilled after each transaction. Clearly, in real time access control use cases, when you need to know who someone is right away, such delay is unacceptable. The other issue is cost. Proof-of-work, as the name is meant to imply, consumes real resources, and elicits a real reward.
So for arbitrary identity transactions, what are the economics of using the blockchain? Who would pay, who would be paid, and what market forces would price identity, in this utopia where all accounts are equal?
On one of the IDAM industry mail lists recently, a contributor noted in passing that:
- "I replaced ‘identity’ throughout the document with ‘attribute’ and barring a few grammar issues everything still works."
We're getting warm.
Seriously, when will identity engineers come round and do just that: dispense with the word "identity"? We don't need to change our job descriptions or re-badge the whole "identity management" sector but I do believe we need to stop saying things like "federate identity" or "provide identity".
The writing has been on the wall for some time.
"Identity" is actually a macro for how a Relying Party (RP) knows each of its Subjects. Identification is the process by which an RP is satisfied it knows enough about a Subject -- a customer, a trading partner, an employee and so on -- that it can deal with that Subject with acceptable residual risk. Identification is just the surface of the relationship between Subject and RP. The risks of misidentification are ultimately borne by the RP -- even if they can be mitigated to some extent through contracts with third parties that have helped the RP establish identity.
The most interesting work in IDAM (especially the "Vectors of Trust" or VoT, initiated by Justin Richer) is now about better management of the diverse and context-dependent signals, claims and/or attributes that go into a multivariate authentication decision. And that reminds me of the good old APEC definition of authentication -- "the means by which a receiver of an electronic transaction or message makes a decision to accept or reject that transaction or message" -- which notably made no mention of identity at all!
We really should now go the whole way and replace "identity" with "attributes". In particular, we should realise there are no "Identity Providers" -- they're all just Attribute Providers. No third party ever actually "provides" a Subject with their identity; that was a naive industrial sort of metaphor that reduces identity to a commodity, able to be bought and sold. It is always the Relying Party that "identifies" a Subject for their (the RP's) purposes. And therefore it is the Relying Party that bestows identity.
The mangled notion of "Identity Provider" seems to me to have contaminated IDAM models for a decade. Just think how much easier it would be to get banks, DMVs, social networks, professional associations, employers and the rest to set up modest Attribute Providers instead of grandiose and monopolistic Identity Providers!
As Yubico CEO Stina Ehrensvard says, "any organization that has tried to own and control online identity has failed".
There's a simple reason for that: identity is not what we thought it was. As we are beginning to see, if we did a global replace of "identity" with "attribute", all our technical works would still make sense. The name change is not mere word-smithing, for the semantics matter. By using the proper name for what we are federating, we will come a lot closer to the practical truth of the identity management problem, and after reframing the way we talk about the problems, we will solve them.
A new effort dubbed Project Enigma "guarantees" us privacy, by way of a certain technology. Never mind that Enigma's "magic" (their words) comes from the blockchain and that it's riddled with assumptions; the very idea of technology-based perfection in privacy is profoundly misguided.
Enigma is not alone; the vast majority of 'Privacy Enhancing Technologies' (PETs) are in fact secrecy or anonymity solutions. Anonymity is a blunt and fragile tool for privacy; if, for instance, encryption is broken, you still need the rule of law to stem abuse. I wonder why people still conflate privacy and anonymity. Plainly, privacy is the protection you need when your affairs are not secret.
In any event, few people need or want to live underground. We actually want merchants and institutions and employers and doctors to know us in reasonable detail, but we insist they exercise restraint in what they do with that knowledge.
Consider a utopian architecture where things could be made totally secret between you and a correspondent. How would you choose to share something with more than one party, like a health record, or a party invitation? How would you delegate someone to share something with others on your behalf? How would you withdraw permissions? How would it work in a heterogeneous IT environment? And above all, how would you control all the personal information created about you behind your back, unseen, beyond your reach?
Privacy is about restraint. It's less about what we do with someone’s personal information than what we don’t do. So it’s more political than technological. Privacy can only really be managed through rules. Of course rules and enforcement are imperfect, but let’s not be utopian about privacy. Just as there is no such thing as absolute security, there is no perfect privacy either.
Posted in Privacy
For 35 years now, a body of data protection jurisprudence has been built on top of the original OECD Privacy Principles. The most elaborate and energetically enforced privacy regulations are in Europe (although well over 100 countries have privacy laws at last count). By and large, the European privacy regime is welcomed by the roughly 700 million citizens whose interests it protects.
Over the years, this legal machinery has produced results that occasionally surprise the rest of the world. Among these was the "Right To Be Forgotten", a ruling of the European Court of Justice (ECJ) which requires web search operators in some cases to block material that is inaccurate, irrelevant or excessive. And this week, the ECJ determined that the U.S. "Safe Harbor" arrangement (a set of pragmatic work-arounds that have permitted the import of personal information from Europe by American companies) is invalid.
These strike me as entirely logical outcomes of established technology-neutral privacy law. The Right To Be Forgotten simply treats search results as synthetic personal information, collected algorithmically, and applies regular privacy principles: if a business collects personal information, then lawful limits apply no matter how it's collected. And the self-regulated Safe Harbor was found to not provide the strength of safeguards that Europeans have come to expect. Its inadequacies are old news; action by the court has been a long time coming.
In parallel with steadily developing privacy law, an online business ecosystem has evolved, centred on the U.S. and based on the limitless resource that is information. Fabulous products, services and unprecedented economic success have flowed. But the digital rush (like gold and oil rushes before it) has brought calamity. A shaken American populace, subject to daily breaches, spying and exploitation, is left wondering who and what will ever keep them safe in cyberspace.
So it's honestly a mystery to me why every European privacy advance is met with such reflexive condemnation in America.
The OECD Privacy Principles safeguard individuals by controlling the flow of information about them. In the decades since the principles were framed, digital technologies and business models have radically expanded how information is created and how it moves. Personal information is now produced as if by magic (by wizards who make billions by their tricks). But the basic privacy principles are steadfastly the same, and are manifestly more important than ever. You know, that's what good laws are like.
A huge proportion of the American public would cheer for better data protection. We all know they deserve it. If American institutions had a better track record of respecting and protecting the data commons, then they'd be entitled to bluster about European privacy. But as things stand in Silicon Valley and Washington, moral outrage should be directed at the businesses and governments who sit on their hands over data breaches and surveillance, instead of those who do something about it.
Posted in Privacy
Under new Prime Minister Malcolm Turnbull, innovation for once is the policy du jour in Australia. Innovation is associated with risk taking, but too often, government wants others to take the risk. It wants venture capitalists to take investment risk, and start-ups to take R&D risks. Is it time now for government to walk the talk?
State and federal agencies remain the most important buyers of IT in Australia. To stimulate domestic R&D and advance an innovation culture, governments should be taking some bold procurement risk, punting to some degree on new technology. Major projects like driver licence technology upgrades, the erstwhile Human Services Access Card, the national broadband roll-out, and national e-health systems, would be ideal environments in which to preferentially select next generation, home-grown products.
Obviously government must be prudent spending public money on new technology. Yet at the same time, there is a public interest argument for selecting newer solutions: in the rapidly changing online environment, citizens stand to benefit from the latest innovations, bred in response to current challenges.
What do entrepreneurs need most to help them innovate and prosper? It's metaphorical oxygen!
Too often, innovative entrepreneurs are met with the admonition “you’re only trying to sell us something”. Well yes we are, but it's because we believe we have something to meet real needs, and that customers actually need to buy something.
The identerati sometimes refer to the challenge of “binding carbon to silicon”. That’s a poetic way of describing how the field of Identity and Access Management (IDAM) is concerned with associating carbon-based life forms (as geeks fondly refer to people) with computers (or silicon chips).
To securely bind users’ identities or attributes to their computerised activities is indeed a technical challenge. In most conventional IDAM systems, there is only circumstantial evidence of who did what and when, in the form of access logs and audit trails, most of which can be tampered with or counterfeited by a sufficiently determined fraudster. To create a lasting, tamper-resistant impression of what people do online requires some sophisticated technology (in particular, digital signatures created using hardware-based cryptography).
On the other hand, working out looser associations between people and computers is the stock-in-trade of social networking operators and Big Data analysts. So many signals are emitted as a side effect of routine information processing today that even the shyest of users may be uncovered by third parties with sufficient analytics know-how and access to data.
So privacy is in peril. For the past two years, big data breaches have only got bigger: witness the losses at Target (110 million records), eBay (145 million), Home Depot (109 million) and JPMorgan Chase (83 million) to name a few. Breaches have got deeper, too. Most notably, in June 2015 the U.S. federal government’s Office of Personnel Management (OPM) revealed it had been hacked, with the loss of detailed background profiles on 15 million past and present employees.
I see a terrible systemic weakness in the standard practice of information security. Look at the OPM breach: what was going on that led to application forms for employees dating back 15 years remaining in a database accessible from the Internet? What was the real need for this availability? Instead of relying on firewalls and access policies to protect valuable data from attack, enterprises need to review which data needs to be online at all.
We urgently need to reduce the exposed attack surface of our information assets. But in the information age, the default has become to make data as available as possible. This liberality is driven both by the convenience of having all possible data on hand, just in case in it might be handy one day, and by the plummeting cost of mass storage. But it's also the result of a technocratic culture that knows "knowledge is power," and gorges on data.
In communications theory, Metcalfe’s Law states that the value of a network is proportional to the square of the number of devices that are connected. This is an objective mathematical reality, but technocrats have transformed it into a moral imperative. Many think it axiomatic that good things come automatically from inter-connection and information sharing; that is, the more connection the better. Openness is an unexamined rallying call for both technology and society. “Publicness” advocate Jeff Jarvis wrote (admittedly provocatively) that: “The more public society is, the safer it is”. And so a sort of forced promiscuity is shaping up as the norm on the Internet of Things. We can call it "superconnectivity", with a nod to the special state of matter where electrical resistance drops to zero.
In thinking about privacy on the IoT, a key question is this: how much of the data emitted from Internet-enabled devices will actually be personal data? If great care is not taken in the design of these systems, the unfortunate answer will be most of it.
My latest investigation into IoT privacy uses the example of the Internet connected motor car. "Rationing Identity on the Internet of Things" will be released soon by Constellation Research.
And don't forget Constellation's annual innovation summit, Connected Enterprise at Half Moon Bay outside San Francisco, November 4th-6th. Early bird registration closes soon.
A letter to the editor, Sydney Morning Herald, January 14, 2011.
The ABC screened a nice documentary last night, "Getting Frank Gehry", about the new UTS Business School building. The only thing spoiling the show was Sydney's rusted-on architecture critic Elizabeth Farrelly having another self-conscious whinge. And I remembered that I wrote a letter to the Herald after she had a go in 2011 when the design was unveiled (see "Gehry has designed a building that is more about him than us").
Where would damp squib critics be without the 'starchitects' they love to hate?
Letter as published
Ironically, Elizabeth Farrelly's diatribe against Frank Gehry and his UTS design is really all about her. She spends 12 flabby paragraphs defending criticism (please! Aren't Australians OK by now with the idea of critics?) and bravely mocking Gehry as "starchitect".
Eventually Farrelly lets go her best shots: mild rhetorical questions about the proposal's still unseen interior, daft literalism about buildings being unable to move, and a "quibble" about harmony. I guess she likewise dismisses Gaudi and his famous fluid masonry.
Farrelly's contempt for the university's ''boot licking'' engagement with this celebrated architect is simply myopic. The thing about geniuses like Gehry and Utzon is that brave clients can trust that the results will prevail.
Stephen Wilson, Five Dock
Posted in Culture
The Biometrics Institute has received Australian government assistance to fund the next stage of the development of a new privacy Trust Mark. And Lockstep Consulting is again working with the Institute to bring this privacy initiative to fruition.
A detailed feasibility study was undertaken by Lockstep in the first half of 2015, involving numerous privacy advocates, regulators and vendors in Europe, the US, New Zealand and Australia.
We found strong demand for a reputable, non-trivial B2C biometrics certification.
Privacy advocates are generally supportive of a new Trust Mark; however, they stress that a Trust Mark can be counter-productive if it is too easy to obtain, biased by industry interests, or poorly policed. There is general agreement that a credible Trust Mark should be non-trivial, and consequently, that the criteria be reasonably prescriptive. The reality of a strong Trust Mark is that not all architectures and solution instances will be compatible with the certification criteria.
The next stage of the Biometrics Institute project will deliver technical criteria for the award of the Trust Mark, and a PIA (Privacy Impact Assessment) template. A condition of the Trust Mark will be that a PIA is undertaken.
Please contact Steve Wilson at Lockstep firstname.lastname@example.org or Isabelle Moeller (Biometrics Institute CEO) email@example.com, if you'd like to receive further details of the Stage 1 findings, or would like to contribute to the technical research in Stage 2.