Lockstep


Blockchain plain and simple

Blockchain is an algorithm and distributed data structure designed to manage electronic cash without any central administrator. The original blockchain was invented in 2008 by the pseudonymous Satoshi Nakamoto to support Bitcoin, the first large-scale peer-to-peer crypto-currency, completely free of government and institutions.

Blockchain is a Distributed Ledger Technology (DLT). Most DLTs have emerged in Bitcoin's wake. Some seek to improve blockchain's efficiency, speed or throughput; others address different use cases, such as more complex financial services, identity management, and "Smart Contracts".

The central problem in electronic cash is Double Spend. If electronic money is just data, nothing physically stops a currency holder trying to spend it twice. It was long thought that a digital reserve was needed to oversee and catch double-spends, but Nakamoto rejected all financial regulation, and designed an electronic cash without any umpire.

The Bitcoin (BTC) blockchain crowd-sources the oversight. Each and every attempted spend is broadcast to a community, which in effect votes on the order in which transactions occur. Once a majority agrees all transactions seen in the recent past are unique, they are cryptographically sealed into a block. A chain thereby grows, each new block linked to the previously accepted history, preserving every spend ever made.
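To make the chaining concrete, here is a minimal Python sketch of the general idea (illustrative only; the block and transaction fields are simplified assumptions, not Bitcoin's actual data format):

    import hashlib
    import json

    def block_hash(block: dict) -> str:
        # Hash a canonical serialisation of the block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def make_block(transactions: list, previous_hash: str) -> dict:
        # Each new block records the hash of the previously accepted block,
        # so the growing chain preserves the agreed history of every spend.
        return {"transactions": transactions, "previous_hash": previous_hash}

    genesis = make_block(["coinbase: 50 BTC to Alice"], previous_hash="0" * 64)
    block_1 = make_block(["Alice pays Bob 10 BTC"], previous_hash=block_hash(genesis))
    block_2 = make_block(["Bob pays Carol 3 BTC"], previous_hash=block_hash(block_1))
    print(block_hash(block_2))  # this value commits to the entire history before it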

A Bitcoin balance is managed with an electronic wallet which protects the account holder's private key. Blockchain uses conventional public key cryptography to digitally sign each transaction with the sender's private key and direct it to a recipient's public key. The only way to move Bitcoin is via the private key: lose or destroy your wallet, and your balance will remain frozen in the ledger, never to be spent again.
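As a rough illustration of the wallet mechanics, here is a sketch assuming the third-party Python cryptography package; the transaction string is a simplified stand-in, not Bitcoin's real transaction format:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # The wallet's essential job is to generate and guard this private key.
    private_key = ec.generate_private_key(ec.SECP256K1())
    public_key = private_key.public_key()

    # A simplified stand-in for a transaction directed at a recipient's public key.
    transaction = b"pay 0.5 BTC to <recipient public key>"

    # Only the holder of the private key can produce the signature...
    signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # ...but anyone holding the public key can verify it.
    try:
        public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

Nothing in the ledger identifies the key holder; control of the private key is the only credential, which is why a lost wallet means a permanently frozen balance.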

The blockchain's network of thousands of nodes is needed to reach consensus on the order of ledger entries, free of bias, and resistant to attack. The order of entries is the only thing agreed upon by the blockchain protocol, for that is enough to rule out double spends.

The integrity of the blockchain requires a great many participants (and consequently the notorious power consumption). One of the cleverest parts of the BTC blockchain is its incentive for participating in the expensive consensus-building process. Every time a new block is accepted, the system randomly rewards one participant with a bounty (currently 12.5 BTC). This is how new Bitcoins are minted or "mined".
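The expense comes from proof of work: nodes race to find a nonce that drives the block's hash below a difficulty target, and the winner of that race collects the bounty. A toy sketch (the difficulty, block fields and reward string here are illustrative, far easier than Bitcoin's real parameters):

    import hashlib
    import json
    from itertools import count

    DIFFICULTY = 4  # require this many leading zero hex digits in the block hash

    def mine(block: dict) -> dict:
        # Brute-force search for a nonce that satisfies the difficulty target.
        # This repeated hashing, multiplied across the network, is where the
        # notorious power consumption goes.
        for nonce in count():
            candidate = dict(block, nonce=nonce)
            digest = hashlib.sha256(json.dumps(candidate, sort_keys=True).encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return candidate

    block = {
        "transactions": ["coinbase: 12.5 BTC to the winning miner", "Alice pays Bob 1 BTC"],
        "previous_hash": "0" * 64,
    }
    print("found nonce:", mine(block)["nonce"])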

Blockchain has security qualities geared towards incorruptible cryptocurrency. The ledger is immutable so long as a majority of nodes remain independent, for a fraudster would require infeasible computing power to forge a block and recalculate the chain to be consistent. With so many nodes calculating each new block, redundant copies of the settled chain are always globally available.
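Continuing the toy chain idea from the earlier sketch, a small self-contained illustration of why forging is so hard: altering any settled entry breaks every later link, so a forger would have to re-mine the whole chain faster than the honest majority (the helper names here are hypothetical, for illustration only):

    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    # A tiny settled chain, each block committing to its predecessor's hash.
    genesis = {"transactions": ["coinbase: 50 BTC to Alice"], "previous_hash": "0" * 64}
    block_1 = {"transactions": ["Alice pays Bob 10 BTC"], "previous_hash": block_hash(genesis)}
    block_2 = {"transactions": ["Bob pays Carol 3 BTC"], "previous_hash": block_hash(block_1)}
    chain = [genesis, block_1, block_2]

    def verify_chain(chain: list) -> bool:
        # Every block must reference the exact hash of the block before it.
        return all(cur["previous_hash"] == block_hash(prev)
                   for prev, cur in zip(chain, chain[1:]))

    print(verify_chain(chain))   # True: the settled history is internally consistent

    genesis["transactions"][0] = "coinbase: 50 BTC to Mallory"   # attempt to rewrite history
    print(verify_chain(chain))   # False: every subsequent link is now broken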

Contrary to popular belief, blockchain is not a general purpose database or "trust machine". It only reaches consensus about one specific technicality – the order of entries in the ledger – and it requires a massive distributed network to do so only because its designer-operators choose to reject central administration. For regular business systems, blockchain's consensus is of questionable benefit.

Posted in Blockchain

Proof of life or what?

A few days ago, it was reported that Julian Assange "read out a bitcoin block hash to prove he was alive". This was in response to rumours that he had died. It was a neat demonstration not only that he was not dead, but also of a couple of limits to the blockchain that are still not widely appreciated. It showed that blockchain on its own provides little value beyond cryptocurrency; in particular, it doesn’t ‘prove existence’. And further, we can see that when blockchain is hybridised with other security processes, it is no longer so special.

What Assange did was broadcast himself reading out the hexadecimal letters and numbers of the most recent block hash at the time, namely January 10th. Because the hash value is unique to the transaction history of the blockchain and cannot be predicted, quoting the hash value on January 10th proves that the broadcast was not made earlier than that day. It’s equivalent to holding up a copy of a newspaper to show that a video has to be contemporary.
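The logic of that "not earlier than" proof is simple to state in code. A hedged sketch: fetch_latest_block_hash_on is a hypothetical stand-in for querying the public Bitcoin chain (say, via a block explorer), and the hash value is a placeholder, not the real figure Assange read out:

    import datetime

    def fetch_latest_block_hash_on(date: datetime.date) -> str:
        # Hypothetical: a real verifier would query the public Bitcoin chain here.
        return "placeholder-block-hash"

    def recorded_no_earlier_than(quoted_hash: str, date: datetime.date) -> bool:
        # A block hash cannot be predicted before the block exists, so a recording
        # that quotes it cannot have been made before that block was mined.
        return quoted_hash == fetch_latest_block_hash_on(date)

    broadcast_date = datetime.date.today()   # stand-in for the 10 January broadcast date
    print(recorded_no_earlier_than("placeholder-block-hash", broadcast_date))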

With regards to proof of existence, the evidence on the blockchain comes from the digital signatures created by account holders’ private keys. A blockchain entry certainly proves that a certain private key existed at the time of the entry, but on its own, blockchain doesn’t prove who controls the key. A major objective of blockchain as a crypto-currency engine was indeed to remove any central oversight of keys and account holders.

Quoting the blockchain hash value from January 10th doesn’t prove Assange was alive that day. It is the combination of the broadcast and the blockchain that tells us he was alive.

If this is an example of blockchain providing proof-of-existence (or “proof of life” according to some reports) then the video is like a key management layer: it augments the blockchain by binding the physical person to the data structure. Yet the combination of a video and the blockchain doesn’t provide any unique advantages over, for example, a video plus the day’s newspaper, or a video plus a snapshot of the day’s stock market ticker tape or lotto numbers.

The pure blockchain was designed to manage decentralised electronic cash and it does that with great distinction. But blockchain needs to be combined with other processes to achieve the many other non-cryptocurrency use cases, and those combinations erode its value. If you need to wrap blockchain with other security mechanisms to achieve some outcome, you will find that the consensus algorithm becomes redundant, and that simpler systems can get the job done.

Posted in Blockchain

Blockchain visionaries and blockchain awareness

In a Huffington Post blog "Why the Blockchain Still Lacks Mass Understanding" William Mougayar describes the blockchain as "philosophically inclined technology". It's one of his rare instances of understatement. Like most blockchain visionaries, Mougayar massively exaggerates what this thing does, overlooking what it was designed for, and stretching it to irrelevance. If "99% of people still don’t understand the blockchain" it's because Mougayar and his kind are part of the problem, not part of the solution.

Let's review. This technology is more than philosophically "inclined". Blockchain was invented by someone who flatly rejected fiat currency, government regulation and financial institutions. Satoshi Nakamoto wanted an electronic cash devoid of central oversight or 'digital reserve banks'. And he solved what was thought to be an unsolvable problem, with an elaborate and brilliant algorithm that has a network of thousands of computers vote on the order in which transactions appear in a pool. The problem is Double Spend; the solution is to have a crowd watch every spend to see that no Bitcoin is spent twice.

But that's all blockchain does. It creates consensus about the order of entries in the ledger. It does not and cannot reach consensus about anything else, not without additional off-chain processes like user registration, permissions management, access control and encryption. Yet these all require the sort of central administration that Nakamoto railed against. Nakamoto designed an amazing solution to the Double Spend problem, but nothing else. Nakamoto him/herself said that if you still need third parties in your ledger, then the blockchain loses its benefits.

THAT is what most people misunderstand about blockchain. Appreciate what blockchain was actually for, and you will see that most applications of this philosophically single-minded technology beyond its original anarchic scope simply don't add up.

Posted in Blockchain

A critique of Privacy by Design

Or Reorientating how engineers think about privacy.

From my chapter Blending the practices of Privacy and Information Security to navigate Contemporary Data Protection Challenges in “Trans-Atlantic Data Privacy Relations as a Challenge for Democracy”, Kloza & Svantesson (editors), in press.

One of the leading efforts to inculcate privacy into engineering practice has been the “Privacy by Design” movement. Commonly abbreviated "PbD", it is a set of guidelines developed in the 1990s by the then privacy commissioner of Ontario, Ann Cavoukian. The movement seeks to embed privacy “into the design specifications of technologies, business practices, and physical infrastructures”. PbD is basically the same good idea as building in security, or building in quality, because retrofitting these things too late in the design lifecycle leads to higher costs* and compromised, sub-optimal outcomes.

Privacy by Design attempts to orientate technologists to privacy with a set of simple maxims:

    1. Proactive not Reactive; Preventative not Remedial
    2. Privacy as the Default Setting
    3. Privacy Embedded into Design
    4. Full Functionality – Positive-Sum, not Zero-Sum
    5. End-to-End Security – Full Lifecycle Protection
    6. Visibility and Transparency – Keep it Open
    7. Respect for User Privacy – Keep it User-Centric.

PbD is a well-meaning effort, and yet its language comes from a culture quite different from engineering. PbD’s maxims rework classic privacy principles without providing much that’s tangible to working systems designers.

The most problematic aspect of Privacy by Design is its idealism. Politically, PbD is partly a response to the cynicism of national security zealots and the like who tend to see privacy as quaint or threatening. Infamously, NSA security consultant Ed Giorgio was quoted in “The New Yorker” of 21 January 2008 as saying “privacy and security are a zero-sum game”. Of course most privacy advocates (including me) find that proposition truly chilling. And yet PbD’s response is frankly just too cute with its slogan that privacy is a “positive sum game”.

The truth is privacy is full of contradictions and competing interests, and we ought not sugar coat it. For starters, the Collection Limitation principle – which I take to be the cornerstone of privacy – can contradict the security or legal instinct to always retain as much data as possible, in case it proves useful one day. Disclosure Limitation can conflict with usability, because Personal Information may become siloed for privacy’s sake and less freely available to other applications. And above all, Use Limitation can restrict the revenue opportunities that digital entrepreneurs might otherwise see in all the raw material they are privileged to have gathered.

Now, by highlighting these tensions, I do not for a moment suggest that arbitrary interests should override privacy. But I do say it is naive to flatly assert that privacy can be maximised along with any other system objective. It is better that IT designers be made aware of the many trade-offs that privacy can entail, and that they be equipped to deal with real world compromises implied by privacy just as they do with other design requirements. For this is what engineering is all about: resolving conflicting requirements in real world systems.

So a more sophisticated approach than “Privacy by Design” is privacy engineering, in which privacy can take its place within information systems design alongside all the other practical considerations that IT professionals weigh up every day, including usability, security, efficiency, profitability, and cost.

See also my "Getting Started Guide: Privacy Engineering" from Constellation Research.

*Footnote: Not unrelatedly, I wonder if we should re-examine the claim that retrofitting privacy, security and/or quality after a system has been designed and realised leads to greater cost! Cold hard experience might suggest otherwise. Clearly, a great many organisations persist with bolting on these sorts of features late in the day -- or else advocates wouldn't have to keep telling them not to. And the Minimum Viable Product movement is almost a license to defer quality and other non-essential considerations. All businesses are cost conscious, right? So averaged across a great many projects over the long term, could it be that businesses have in fact settled on the most cost effective timing of security engineering, and it's not as politically correct as we'd like?!

Posted in Software engineering, Privacy, Innovation