
The ultimate opt-out

Multidisciplinary healthcare is standard practice today. Yet an important legal precedent concerning information sharing shows how important it is that practitioners do not presuppose how patients weigh health outcomes against privacy. As debate continues over opt-in and opt-out models for Patient Controlled Electronic Health Records, the lessons of this case should be revisited, because it was sympathetic to a patient's right to withhold certain information from their carers for privacy reasons.

In 2004, an oncology patient, KJ, was being treated at a hospital west of Sydney by a multidisciplinary care team. At one point she consulted a psychiatrist. Some time later, notes of her psychiatric sessions were shared with others in the oncology team. KJ objected and complained to the NSW Administrative Decisions Tribunal that her privacy had been violated. Hospital management defended the sharing on the basis that it was normal in modern multidisciplinary healthcare and therefore represented reasonable Use of personal information under privacy legislation. However, the tribunal agreed with KJ that she should have been informed in advance that her psychiatric file would be shared with others. That is, the tribunal found that sharing patient information even with other professionals in the same facility constituted Disclosure of Personal Information, not just Use.

In broad terms, under Australian privacy laws, the Disclosure of Sensitive Personal Information generally requires the consent of the individual concerned, whereas Use does not, because it is related to the primary purpose for collection and would be regarded as reasonable by the individual concerned.

There is no argument that the exchange of health information with colleagues caring for the same patient is inherent to most good medical practice. Sharing information would probably be universally regarded by healthcare providers, in the context of privacy legislation, as a reasonable use, closely related to the primary purpose of collecting that information. And yet KJ v Wentworth Area Health Service recognises that the attitudes of patients as to what is reasonable may differ from those of doctors. If there is a significant risk that a given patient would not think it reasonable for information to be shared, then privacy legislation in Australia (as typified by NSW law) requires that their express consent be sought beforehand.
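To make the distinction concrete, the rule described above can be sketched as a simple access check. This is an illustrative model only, not legal advice: the class and field names are hypothetical, and it simplifies by treating any cross-unit sharing of sensitive information as Disclosure requiring consent, per the tribunal's reasoning.

```python
# A minimal, hypothetical sketch of the Use vs Disclosure consent rule.
from dataclasses import dataclass, field

@dataclass
class Record:
    patient: str
    collecting_unit: str                 # unit that collected the information
    sensitive: bool                      # e.g. psychiatric notes
    consented_units: set = field(default_factory=set)  # units the patient has approved

def may_share(record: Record, requesting_unit: str) -> bool:
    """Return True if the record may be shared with requesting_unit."""
    if requesting_unit == record.collecting_unit:
        return True          # Use within the collecting unit, for the primary purpose
    if not record.sensitive:
        return True          # simplification: non-sensitive sharing treated as reasonable Use
    # Cross-unit sharing of sensitive information is Disclosure: needs express consent
    return requesting_unit in record.consented_units

notes = Record("KJ", "psychiatry", sensitive=True)
print(may_share(notes, "oncology"))   # False until the patient consents
notes.consented_units.add("oncology")
print(may_share(notes, "oncology"))   # True
```

The point of the sketch is that consent is a property of the patient's record, not something the requesting clinician can infer from professional norms.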

Many healthcare facilities in NSW responded to this case by improving their Privacy Notices. At the time of admission (and hopefully also at other times during their treatment journey) patients should be informed that their Personal Information may be disclosed to other healthcare professionals in the facility. This gives the patient the opportunity to withhold details they do not want disclosed more widely.

The tribunal noted in KJ v Wentworth Area Health Service that "while generally speaking the expression 'disclosure' refers to making personal information available to people outside an agency, in the case of large public sector agencies consisting of specialised units, the exchange of personal information between units may constitute disclosure".

In other words, lay people may perceive there to be greater "distance" between different units in the health system, even within the one hospital, than do healthcare professionals. Legally, it appears that the understandable interests of healthcare professionals to work closely together do not trump a patient's wishes to sometimes keep their Personal Information compartmentalised.

This precedent is important to the design of EHR systems, for it reminds us that the entirety of the record should not be automatically accessible by all providers. But more subtly, it also re-balances the argument often advanced by doctors that opt-in may be injurious because patients might not make the best decisions if they pick and choose what parts of their story to include in the EHR. Even if that clinical risk is real, the ruling in KJ v Wentworth Area Health Service would appear to empower patients to do just that.
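The design implication can be sketched in a few lines: an EHR in which each section carries a patient-set withheld flag, so that providers see only what the patient has not held back. The structure and field names here are hypothetical, chosen only to illustrate the compartmentalisation the ruling points to.

```python
# Hypothetical sketch: an EHR where the patient can withhold sections.
ehr = {
    "oncology":   {"withheld": False, "notes": "chemo cycle 3"},
    "psychiatry": {"withheld": True,  "notes": "session notes"},
}

def visible_sections(record: dict) -> dict:
    """Return only the sections the patient has not withheld."""
    return {name: s["notes"] for name, s in record.items() if not s["withheld"]}

print(visible_sections(ehr))   # {'oncology': 'chemo cycle 3'}
```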

In my view, the resolution of this tension lies in better communication, and good faith. What matters above all in electronic health is trust and participation. We know that patients who fear for their privacy will actually decline treatment if they do not trust that their Personal Information will be safe. Whether an EHR is technically opt-in or opt-out doesn't matter in the long run if patients exercise their ultimate right to just stay away. Privacy anxieties may be especially acute around mental health, sexual assault, drug and alcohol abuse and so on. It is imperative for the public health benefits expected from e-health that patients with these sorts of conditions have faith in EHRs and do not simply drop out.

Reference: Case Note: KJ v Wentworth Area Health Service [2004] NSWADT 84, Privacy NSW; date of decision: 3 May 2004


Other thoughts on Real Names

I'm going to follow my own advice and not accept the premise of Google's and Facebook's Real Names policy that it is somehow good for quality. My main rebuttal of Real Names is that it's a commercial tactic, not a well-grounded social policy.

But here are a few other points I would make if I did want to argue the merits of anonymity - a quality and basic right I honestly thought was unimpeachable!

Nothing to hide? Puhlease!

Much of the case for Real Names riffs on the tired old 'nothing to hide' argument. This tough-love kind of view that respectable people should not be precious about privacy tends to be the preserve of middle class, middle aged white men who through accident of birth have never personally experienced persecution, or had grounds to fear it.

I wish more of the privileged captains of the Internet could imagine that expressing one's political or religious views (for example) brings personal risks to many of the dispossessed or disadvantaged in the world. And as Identity Woman points out, we're not just talking about resistance fighters in the Middle East but also women in 21st century America who are pilloried for challenging the sexist status quo!

Some have argued that people who fear for their own safety should take their networking offline. That's an awfully harsh perpetuation of the digital divide. I don't deny that there are other ways for evil states to track us down online, and that using pseudonyms is no guarantee of safety. The Internet is indeed a risky place for conducting resistance for those who have mortal fears of surveillance. But ask the people who recently rose up on the back of social media if the risks were worth it, and the answer will be yes. Now ask them if the balance changes under a Real Names policy. And who benefits?

Some of the Internet metaphors are so bad they’re not even wrong

Some continue to compare the Internet with a "public square" and suggest there should be no expectation of privacy. In response, I note first of all that the public-private dichotomy is a red herring. Information privacy law is about controlling the flow of Personally Identifiable Information. Most privacy law doesn't care whether PII has come from the public domain or not: corporations and governments are not allowed to exploit PII harvested without consent.

Let's remember the standard set piece of spy movies where agents retreat to busy squares to have their most secret conversations. One's everyday activities in "public" are actually protected in many ways by the nature of the traditional social medium. Our voices don't carry far, and we can see who we're talking to. Our disclosures are limited to the people in our vicinity, we can whisper or use body language to obfuscate our messages, there is no retention of our PII, and so on. These protections are shattered by information technologies.

If Google's and Facebook's call for the end of anonymity were to extend to public squares, we'd be talking about installing CCTVs, tattooing people's names on their foreheads, recording everyone's comings and goings, and providing those records to any old private company to make whatever commercial use they see fit.

Medical OSN apartheid

What about medical social networking, one of the next frontiers for patient-centric care, especially in mental health? Are patients supposed to use their real names for "transparency" and "integrity"? Of course not, because studies show that participation in healthcare in general depends on privacy, and many patients decline to seek treatment if they fear they will be exposed.

Now, Real Names advocates would no doubt seek to make medical OSN a special case, but that would imply an expectation that all healthcare discussions be kept out of regular social circles. That's just not how real-life socialising occurs.

Anonymity != criminality

There's a recurring angle that anonymity is somehow unlawful or unscrupulous. This attitude is based more on guesswork than criminology. If there were serious statistics on crime being aided and abetted by anonymity then we could debate this point, but there aren't. All we have are wild pronouncements like Eugene Kaspersky's call for an Internet Passport. It seems to me that a great deal of crime is enabled by having too much identity online. It's ludicrous that I should hand over so much Personal Information to establish my bona fides in silly little transactions, when we all know that data is being hoovered up and used behind our backs by identity thieves.

And the idea that OSNs have crime prevention at heart when they force us to use "real names" is a little disingenuous when their response to bullying, child pornography, paedophilia and so on has for so long been characterised by keeping themselves at a cool distance.

What’s real anyway?

What’s so real about "real names" anyway? It's not as if Google or Facebook can check them (in fact, when it suited their purposes, the OSNs previously disclaimed any ability to verify names).

But more to the point, given names are arbitrary. It's perfectly normal for people growing up not to "identify with" the names their parents picked for them (or indeed not to identify with their parents at all). We all put some distance between our adult selves and our childhoods. A given family name is no more real in any social sense than any other handle we choose for ourselves.


Diagnosing Google Health

The demise of Google Health might be a tactical retreat, but we need to understand what’s going on here and what it means for programs like Australia’s Patient-Controlled Electronic Health Record (PCEHR) and other commercial Personal EHRs like Microsoft’s HealthVault. On its face, it's sobering that the might and talent of Google hasn't been able to serve up a good solution.

There's no simple recipe for electronic health records; healthcare overall is an intractable system. Here are just a few things to think about, based on my time in e-health and working with medical devices:

1. Presentation of health information is hugely challenging. And healthcare providers and patients have totally different perspectives. Much more work needs to be done on the interfaces, and Google may feel that it’s better not to put off too many users at this stage with sub-optimal GUIs (especially if they need overhauling).

2. Clinical data on its own is near useless; it needs to go hand-in-glove with human expertise and also clinical applications (which feed data into the record, and extract data into decision support systems). The utility of PHRs used in isolation from healthcare experts still seems to be a wide open research field. What will patients be able to make of their own health data? Are PEHRs really only of interest to the "worried well"? Will it help or hinder when patients come to run their personal records through artificial intelligence services on the web?

3. Google is battered and bruised by a string of privacy controversies. While it bravely recovers its position and credibility after the Buzz and Street View wifi misadventures, it is exhibiting fresh caution; for instance, it has put facial recognition on ice, with Eric Schmidt showing his soft side and calling it ‘too creepy’. [Maybe Google is going to tackle privacy in the same way that Microsoft utterly revamped its security posture?] In any event, the last thing it needs right now is a health-related privacy stoush. At the end of the day, Google must make money out of e-health (and that’s entirely legitimate), but the business model may need a lot more careful work.

Points one and two apply to all PEHR/PCEHRs.

Designing an EHR dashboard that presents just the right information for the patient at hand, according to their current condition and the viewer’s particular interest, is a stupendous and fascinating task. Every clinical condition is different, and what a physician or patient really needs to see varies dramatically and deeply from one case to the next. It may require carefully characterising the everyman patient (actually, chronic patient, ambulatory out-patient, well person, parent ...) as well as the everyman healthcare professional (actually, nurse, GP, emergency intensivist, cardiologist ...).

We've all seen the literally fantastic videos of the hospital of the future, with physicians waltzing from bed to bed, bringing up multi-media charts on their tablet computers, and whizzing through test results, real time ECGs, decision support and so on. It looks great -- but aren’t they all just mock-ups?

STOP PRESS: The build-up to the launch of Google+ might also have helped push Google Health back onto the drawing board. Google may have sought to clear the decks for its privacy and governance teams!
