“Designing Privacy By Design”

A recent column written for the International Association of Privacy Professionals (IAPP) on how to make Privacy by Design real.

IAPP Australia New Zealand Chapter Bulletin #40, February 2013

The Privacy by Design concept, developed by Ontario Information and Privacy Commissioner Ann Cavoukian, has become a permanent fixture of privacy discourse. PbD is essentially the same good idea as “designing in quality” or “designing in security”. The trouble is that designers and security professionals often can’t tell what it means in practice.

What do engineers make of privacy?

Privacy continues to be a bit of a jungle for security practitioners. It’s not that they’re uninterested in privacy; rather, privacy objectives are rarely expressed in ways they can relate to. The only thing the IPPs, NPPs or APPs have to say about security is that it must be “reasonable” given the sensitivity of the Personal Information concerned.

Framed in legalistic language, privacy is somewhat opaque to the objective engineering mind. Security professionals naturally see it as meaning encryption and maybe some access control (after all, if you only have a hammer, everything looks like nails). To make privacy compliance more accessible, the Office of the Australian Privacy Commissioner has been developing guidelines for information security practitioners. It remains to be seen how the new guidelines will be taken up by technologists.

I have come to believe that a systemic conceptual shortfall affects how most technologists think about privacy. It may be that engineers tend to take literally the well-meaning slogan that “privacy is not a technology issue”. I say this in all seriousness.

Online, we’re talking about data privacy, or data protection, but systems designers tend to bring to work a broader spectrum of personal intuitions about privacy. This is despite the nice precise wording of the Privacy Act. To illustrate the difference between gut feel and regulation, here’s the sort of experience I’ve had time and time again.

A case study

During the course of a PIA in 2011, I spent time with the development team working on a new government database. These were good, senior people with a sophisticated understanding of information architecture, but they harboured restrictive views about privacy. An important clue was the way they referred to “private” information rather than Personal Information. After explaining that Personal Information is the operative term in Australian legislation, and reviewing its definition in the Privacy Act, we found that the team had failed to appreciate the extent of the PI in their system. They had overlooked that most of their audit logs collect PI, albeit indirectly and automatically.

Further, they had not appreciated that information about clients provided to their new database by third parties was also PI, despite it seeming “less private” by virtue of originating from others. I attribute these blind spots to the developers’ weak, informal frame of “private information”.

In data privacy law, by contrast, things are crisp. The definition of Personal Information — namely any data relating to an individual whose identity is readily apparent — sets a low bar, embracing a great many data classes and, by extension, informatics processes. It’s a neat analytical definition that is readily factored into systems analysis. Once the team grasped this, the PIA proceeded apace and we found and rectified several privacy risks that had previously gone unnoticed.
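To illustrate how the legal definition can be factored into systems analysis, here is a minimal sketch, in Python, of flagging data classes as PI whenever identity is readily apparent, directly or indirectly. The field names and categories are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch: flagging data classes as Personal Information (PI).
# Field names are illustrative only, not drawn from any standard or system.

# Fields that make an individual's identity readily apparent,
# directly or indirectly (e.g. a user ID in an audit log).
IDENTIFYING_FIELDS = {"name", "email", "date_of_birth", "user_id", "ip_address"}

def contains_pi(fields):
    """Return True if any field in the data class can identify an individual."""
    return any(f in IDENTIFYING_FIELDS for f in fields)

# An audit log record identifies the acting user, so it is PI,
# even though it is generated automatically and collected indirectly.
audit_log_record = ["timestamp", "user_id", "action", "target_record"]
print(contains_pi(audit_log_record))   # True
```

The point of the sketch is that a system inventory, not intuition about “privateness”, determines what counts as PI.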

Closing the conceptual gap

Here are some more of the many recurring misconceptions I’ve noticed over the past decade:

  • “Personal” Information is sometimes taken to mean especially delicate information, such as payment card details, rather than any information pertaining to an identifiable individual, which in many cases includes email addresses;
  • the act of collecting PI is sometimes considered only in terms of direct collection from the individual concerned; technologists can overlook that PI provided by a third party to a data custodian is nevertheless being collected by the custodian, and can fail to appreciate that generating PI internally, through event logging for instance, also represents collection;
  • even when aware of points such as the Access and Correction Principle, database administrators can be unaware that individuals requesting a copy of the information held about them should, technically, also be provided with pertinent event logs. A non-trivial case in which individuals have a genuine interest in reviewing event logs is when they want to know whether an organisation’s staff have been accessing their records.
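The last point can be pictured as a simple filter: responding to an access request would mean returning not just the individual’s primary record but also the event log entries pertaining to it. A sketch, with an invented log format:

```python
# Hypothetical sketch: surfacing pertinent event logs for an access request.
# The log format, IDs and field names are invented for illustration.

event_log = [
    {"timestamp": "2013-02-01T09:15", "staff_id": "s042",
     "action": "view", "subject_id": "c1001"},
    {"timestamp": "2013-02-01T10:02", "staff_id": "s017",
     "action": "view", "subject_id": "c2002"},
]

def logs_about(subject_id, log):
    """Event log entries pertaining to one individual's records."""
    return [e for e in log if e["subject_id"] == subject_id]

# A response to c1001's access request would include the first entry,
# letting the individual see which staff have accessed their record.
print(logs_about("c1001", event_log))
```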

These instances, among many others spanning information security and privacy, show that ICT practitioners have significant gaps in their understanding. As mentioned, security professionals in particular may be forgiven for thinking that most legislated Privacy Principles are legal niceties, irrelevant to them. Yet every privacy principle is impacted by information technology and security practices [1].

I believe the gaps in ICT practitioners’ privacy knowledge are not random but systemic, probably because privacy training for non-privacy professionals is ad hoc and not properly integrated with their particular world views.

Making privacy real

To deal properly with data privacy, ICT practitioners need to have privacy framed in a way that leads to objective design requirements. Luckily, several unifying frameworks already exist for systematising the work of development teams. One that resonates strongly with data privacy practice is the Threat & Risk Assessment (TRA).

The TRA is a security analysis tool, widely practised in the public and private sectors. A number of standards guide the conduct of TRAs, such as ISO 31000 and the Commonwealth’s Information Security Manual (ISM). As such, TRAs are rather more mature and uniform than Privacy Impact Assessments.

A TRA systematically catalogues all foreseeable adverse events that threaten an organisation’s information assets. It goes on to identify candidate security controls (considering technologies, processes and personnel) to mitigate those threats, and most importantly, determines how much should be invested in each control to bring all risks down to an acceptable level.

The TRA exercise is readily extensible as an aid to Privacy by Design. A TRA can expressly incorporate privacy as an attribute of information assets worth protecting, alongside the conventional security qualities of confidentiality, integrity and availability (“C.I.A.”). A crucial subtlety here is that privacy is not the same as confidentiality, yet the two are frequently conflated.

By merging privacy considerations with well-practised Threat & Risk Assessment methods, ICT designers acquire a fuller understanding of privacy. They are able to properly consider the Collection, Use, Disclosure, and Access & Correction principles, over and above confidentiality, when accounting for their information assets. This approach gives real meaning to Privacy by Design.
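By way of illustration, a risk register extended in this spirit might record privacy alongside C.I.A. as a fourth attribute of each information asset. The following is a deliberately simplified sketch with invented asset names, ratings and scoring, not a substitute for a standards-based TRA:

```python
# Simplified, hypothetical risk-register sketch: privacy recorded as an
# asset attribute alongside Confidentiality, Integrity and Availability.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    # Impact ratings 1 (low) to 5 (high) for each protected quality.
    confidentiality: int
    integrity: int
    availability: int
    privacy: int          # distinct from confidentiality
    threats: list = field(default_factory=list)  # (threat, likelihood 1-5)

def risk_scores(asset):
    """Risk = impact x likelihood per threat, taking the highest-rated quality."""
    impact = max(asset.confidentiality, asset.integrity,
                 asset.availability, asset.privacy)
    return {threat: impact * likelihood for threat, likelihood in asset.threats}

# Audit logs may be only modestly confidential yet highly privacy-sensitive,
# so rating privacy separately changes which controls are worth investing in.
logs = Asset("audit logs", confidentiality=2, integrity=4,
             availability=2, privacy=5,
             threats=[("staff browsing client records", 3)])
print(risk_scores(logs))   # {'staff browsing client records': 15}
```

The design point is that without the separate privacy rating, the audit logs’ low confidentiality score would understate the risk of staff browsing client records.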

[1] Stephen Wilson, “Mapping privacy requirements onto the IT function”, Privacy Law & Policy Reporter, Vol. 10, Nos. 1 & 2, 2003.