Government is the source — and should be the provider — of unambiguous, verifiable facts such as birth dates, health system numbers, and driver licences. Their durability may vary, but there is no such thing as “alternative facts” here.
This post proposes a shift in how government provisions verified credentials into the wider economy. The result will be a more resilient data ecosystem across multiple use cases.
We all know there’s a problem
Data is regularly being breached. A large proportion of that data is issued by government agencies. These facts are the stuff on which identification processes depend. Government is the authoritative origin of these atomic truths, these credentials, and when they are stolen citizens are exposed to financial risk and massive inconvenience. Getting new passport or driver licence numbers is no trivial task. This core data should never have to change.
Government agencies at multiple levels generate and manage data over which they are authoritative, including driver licences, birth records, name changes, public education degrees, trade licences, immigration status, work permits, vaccination status, and many more. This data’s utility is grounded in the record-keeping needed to support transactions between the issuing government agency and the individual or business entity to which the data was assigned.
This bilateral relationship between government agency and the data subject has worked well when the data’s use has been confined to the target context. But this data’s subsequent use in other contexts, and particularly its risky storage by third parties, is now a huge source of pain.
As hundreds of data breaches have demonstrated, once the data is stolen it can be replayed for fraudulent purposes, creating financial and emotional hardship at worst or, at the very least, painful inconvenience when numbers must be reissued.
Government has little control over the downstream use of the data it generates. Clearly, governments cannot ban the use of the credentials they issue.
What if we could use government data in ways that remove, and actively discourage, the need for risk owners to store it?
Expanding government’s role as data provider
Government-managed data can be put to work to benefit the economy and secure our digital lives.
What would be possible if:
- Risk owners — individuals sending money, financial institutions lending it, homeowners hiring tradesmen — could make decisions based on certain verifiable facts?
- Risk assessments were based not only on a specific number but also on the story behind it: the metadata that describes its behavioural history, in the full knowledge that the data is up to date and securely delivered?
We would improve confidence and trust. And make life for fraudsters much harder.
Taking the next step
The next evolution for government data-sharing is to deliver and distribute data in verifiable credential form. Think of this as a new use case for the data the government already produces.
Taking this step requires a broadening of government’s role in data sharing. Government is already in the data collection and distribution business. We propose a natural expansion.
Governments at most levels gather data, often highly personal information, to form the basis of statistical analyses and, on a regular basis, produce reports to the public on, for example, economy-influencing trends such as GDP and population.
To be clear, this will require government to perform all these actions:
- Issue verifiable credentials that wrap the existing official data cryptographically in digital wallets that manage user authentication, just as is done with credit cards.
- Enhance the core data with metadata that conveys the story behind the data, including its provenance, terms and conditions.
- Pass any necessary enabling legislation.
- Enable a data distribution ecosystem of commercial providers who, through a data verification platform based on rules, technology, and brand, deliver quality data to risk owners willing to pay for that quality.
- Integrate with a consent management facility, provided by the data verification platform, that prompts data subjects, via the digital wallet, to approve or deny government data-sharing with risk owners.
The result will be a scheme that allows official government data to be ingested automatically — read by machine, parsed, verified, and accepted as fit-for-purpose — for use across the economy.
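To make that concrete, here is a minimal sketch, in Python with the `cryptography` package, of how an issuing agency might sign a fact-plus-metadata bundle and how a risk owner might verify it by machine. The field names, key handling, and bundle layout are illustrative assumptions, not a prescribed credential format.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# Illustrative issuer key pair. In practice the agency's private key would live in a
# hardware security module and its public key would be published for verifiers.
issuer_private_key = ed25519.Ed25519PrivateKey.generate()
issuer_public_key = issuer_private_key.public_key()

# The core fact plus the metadata that tells its story (all fields are assumptions).
credential = {
    "fact": {"type": "driver_licence_number", "value": "DL-1234567"},
    "metadata": {
        "issuer": "gov.example/transport-agency",
        "issued_at": "2024-05-01",
        "provenance": "original in-person issuance",
        "terms": "verification only; retention by relying parties not permitted",
    },
}

# The agency signs a canonical serialisation of the bundle.
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_private_key.sign(payload)

# A risk owner, or the data verification platform acting for it, checks the signature
# against the agency's published key before accepting the fact as fit-for-purpose.
issuer_public_key.verify(signature, payload)  # raises InvalidSignature if tampered with
print("credential verified as coming from the issuing agency")
```

The essential property is that anyone downstream can confirm the fact came from the agency without having to trust whoever delivered it.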
Government itself does not need to develop all of these components. Far from it.
The production of verifiable credentials can be co-sourced or outsourced in partnership with bureau services, just as today government IDs are produced by plastic card companies. Government would prepare and sign its data and applicable metadata, and release these bundles to verifiable credential partners who would then mint and load the credentials into citizens’ wallets. Risk owners consuming that data need assurance that it originates from the source, not from the third party that packaged it.
We also anticipate the emergence of a value chain of data providers and processors, focused on the needs of the risk owners. In particular, to help data origins issue and manage their data, the model envisions the role of the data origin service provider. Likewise, a data distributor may focus on a particular market sector, such as healthcare or the auto industry. Channel partners might provide services to specific categories of risk owners, tuned to their specific fact requirements.
The risk owner knows what it’s doing
It is the job of every risk owner to assess the risks it faces and to employ processes — from technology to insurance — that lower its exposure to economically acceptable levels. Risk owners don’t always get it right — the breach plague proves that — and they should be sanctioned as appropriate or punished in the marketplace. But those risks are known, and it’s right to expect risk owners to manage them.
We are not advocating that government define what is sufficient for a risk owner. Far from it. We believe the risk owner knows what to use when making a risk decision. An approach which is, essentially, “We know what’s good for you” is a non-starter, as the failure of multiple federated identity initiatives has proven.
The risk owner is the primary beneficiary, and customer, of risk mitigation services, tools, and processes. This would include government-issued data. The risk owner serves as the economic engine of the risk mitigation industry in its many forms. In our facts-delivery model, the Data Verification Platform, the risk owner is the party that pays.
We will return to the “risk owner knows best” point again and again.
Government isn’t the only issuer of facts
We’ve identified government as a critical source of core facts and verifiable credentials. Our belief is that those facts can and should play a much larger role in society’s use of digital technology.
But there are other entities in the economy that issue enduring facts about data subjects. Mobile network operators assign phone numbers. Banks and internet service providers generate account numbers. Electric utilities assign account numbers to individuals at specific addresses and GPS locations.
Facts produced by utilities and commercial entities in accordance with a data verification process will also be useful. The risk owner, based on its risk management needs, could combine government and commercially issued credentials to harden its onboarding and transaction evaluation processes.
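As a rough sketch of that combination, using hypothetical issuer names and a policy structure invented for illustration, an onboarding rule might simply require that each fact the risk owner cares about was verified against an issuer it accepts:

```python
# Hypothetical onboarding policy: which facts are required, and which issuers the
# risk owner is prepared to accept for each of them.
POLICY = {
    "driver_licence_number": {"gov.example/transport-agency"},
    "mobile_number": {"telco.example", "gov.example/telecom-regulator"},
}

def onboarding_passes(verified_facts: dict[str, str]) -> bool:
    """verified_facts maps each fact type to the issuer that verified it."""
    return all(
        fact in verified_facts and verified_facts[fact] in acceptable_issuers
        for fact, acceptable_issuers in POLICY.items()
    )

# A candidate presenting a government-verified licence and a telco-verified number.
print(onboarding_passes({
    "driver_licence_number": "gov.example/transport-agency",
    "mobile_number": "telco.example",
}))  # True
```

The point of the sketch is that the risk owner, not the issuer, decides which mix of facts is sufficient for its own decision.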
Why is this better?
Verified data is demonstrably better than black-box generated results. This is as true for consumer data as it is in B2B contexts.
Government-originated data is foundational and essentially inarguable. The same cannot be said for black-box approaches or the coming avalanche of generative AI BS.
We are proponents of AI’s power to correlate data. Imagine the resolving power of AI analyses grounded in facts rather than prior black-box approximations. Synthetic identities would be far harder to pass through onboarding screening defences.
Data minimisation is another benefit. What would happen if enterprises, like mobile network operators, stored tokens pointing to a record of the results and recency of their fact checks rather than storing the facts themselves? There’d be nothing to steal.
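A minimal sketch of that idea, with a record layout and token scheme invented for illustration, shows the shape of what would be stored: an opaque token, the outcome of the check, and its recency, never the underlying fact.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VerificationReceipt:
    """What the enterprise keeps: no fact, just the outcome of having checked it."""
    token: str        # opaque reference, meaningless outside this system
    fact_type: str    # e.g. "driver_licence_number"
    verified: bool    # result of the check against the issuer's credential
    checked_at: str   # recency of the check

def record_check(fact_type: str, verified: bool) -> VerificationReceipt:
    # The raw fact is used transiently during verification and then discarded;
    # only this receipt is persisted.
    return VerificationReceipt(
        token=secrets.token_urlsafe(16),
        fact_type=fact_type,
        verified=verified,
        checked_at=datetime.now(timezone.utc).isoformat(),
    )

receipt = record_check("driver_licence_number", verified=True)
print(receipt)  # nothing in the stored record is worth stealing
```

A database of such receipts is useless to a thief, yet still tells the enterprise whether, when, and how a fact was verified.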
Questions to ask
- What will it take for government policymakers to treat the verifiable data economy, and government’s role as fact provider, as part of their remit and as a public good?
- How well have sanctions on the breached worked? Not well, of course. While expedient and good “security theatre”, the threat of higher-cost sanctions does not change how risk owners fundamentally manage high-value data. We need systemic change to remove driver licence and passport numbers from commercial databases.
- Will generative AI become a major tool for fraudsters? How will we tell synthetic data and deep fakes from the real thing?
- How long can we afford to continue mishandling data and working with suboptimal data?
Lockstep’s Data Verification Platform is a proposed design to rationalise and organise data flows between data originators such as government and the risk owners who rely on accurate data to guide decisions. Join us in conversation.
If you’d like to follow the development of the Data Verification Platform model, please subscribe for email updates.