Lockstep


CNP fraud is just online carding

I recently posted the latest Card Not Present fraud figures for Australia. Technologically, CNP fraud is not a novel problem. We already have the tools and the cardholder habits to solve the CNP problem. We should look at the experience of skimming and carding, which was another tech problem that demanded a smart tech solution.

Card Not Present fraud is simply online carding.

A magnetic stripe card keeps the cardholder's details as a string of ones and zeroes, stored in the clear, and presents that string to a POS terminal or ATM. It's easy for a criminal to scan the ones and zeroes and copy them to a blank card.

In general terms, EMV or Chip-and-PIN cards work by encrypting those ones and zeroes in the chip so they can only be correctly decoded by the terminal equipment. In reality the explanation is somewhat more complex, involving asymmetric cryptography, but for the purposes of explaining the parallel between skimming/carding and CNP fraud, we can skip the details. The salient point is that EMV cards prevent carding by performing their cryptography inside the secure chip, with keys that cannot be tampered with or substituted by an attacker.

As with mag stripe cards, conventional Card Not Present transactions transmit cleartext cardholder data, this time to a merchant server. On its own, a server cannot tell the difference between the original data and a copy, just as a POS terminal cannot tell an original bank-issued card from a criminal's copy.

Lockstep Technologies was the first to see the parallel between skimming/carding and CNP fraud. Our solution "Stepwise" uses the same chip-card cryptography that prevents carding to digitally sign transactions created at a browser or mobile device. Stepwise signatures can be verified at any merchant server, using standard built-in software libraries and a widely distributed "master key".
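
The internals of Stepwise aren't reproduced here, but the general pattern -- a private key held inside the chip signs each transaction, and the merchant verifies the signature with an off-the-shelf library -- is easy to sketch. The following minimal Python illustration uses the third-party cryptography package; the transaction fields, nonce and key handling are invented for the example and should not be read as Stepwise's actual protocol (in a real deployment the private key never leaves the secure chip).

    # Illustrative sketch only: the signing key is generated in software here,
    # whereas in a chip card it is created and kept inside the chip itself.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # 1. Personalisation: create the card's signing key pair.
    card_private_key = ec.generate_private_key(ec.SECP256R1())
    card_public_key = card_private_key.public_key()   # shared with merchants for verification

    # 2. At the browser or mobile device: the chip signs this unique transaction.
    transaction = b"merchant=example-store;amount=59.95;nonce=8f3a2c"
    signature = card_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # 3. At the merchant server: verify with standard libraries.
    try:
        card_public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
        print("Signature verified: created by the genuine card, not replayed static data")
    except InvalidSignature:
        print("Reject: data was tampered with or was not signed by the genuine card")

Because each transaction carries its own details and a fresh nonce, a copied signature is useless for any other payment -- the same property that makes a skimmed chip card worthless to a counterfeiter.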



I presented the Stepwise solution to the Payments Innovation stream at Cards & Payments Australia 2012 last week. The presentation is available here.

See also technical details here and a live demo on the ABC TV "New Inventors" program.

Posted in Smartcards, Payments, Fraud

Card Not Present now three quarters of all fraud

The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. Lockstep monitors these figures, condenses them and plots the trend data.

Here's the latest picture of Australian payment card fraud in three major categories over the past six financial years.

[Chart: Australian CNP fraud trends to FY 2011]

Card fraud by skimming and counterfeiting is holding steady, thanks to the security of EMV chip-and-PIN cards. Card Not Present (CNP) fraud is the preferred modus operandi of organised crime, and continues to grow unabated. CNP fraud increased 46% over the last financial year; CNP now represents 71% -- or nearly three quarters -- of total annual card fraud.

What's to be done about this never ending problem?

  • The credit card associations' flagship online payment protocol "3D Secure", rolled out selectively and tentatively overseas, is loathed by customers and merchants alike. 3D Secure is virtually unknown in Australia.
  • There have been various attempts to stem the tide of stolen cardholder details that fuels CNP fraud. Examples include 'big iron' software changes like "Tokenization" (sketched below) and the PCI-DSS security audit regime, which has proven expensive and largely futile. Arguments raged over whether Heartland Payment Systems (which suffered the world's biggest card data theft in 2009) was "really" PCI-DSS compliant. It's become so arbitrary that by the time the Sony PSN was breached last year, with the loss of up to 70 million credit cards (nobody really knows how many), the question of whether Sony was PCI compliant never even came up.
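
Tokenization, in its broadest sense, is simple to picture: the merchant stores a meaningless surrogate value while the real card number lives only in a guarded vault at the payment processor. The toy Python sketch below, with an in-memory dictionary standing in for the vault, is purely illustrative and does not depict any particular vendor's scheme.

    import secrets

    # Toy token vault: maps random surrogate tokens back to real card numbers (PANs).
    # In practice the vault sits in a hardened, PCI-audited environment at the processor.
    _vault = {}

    def tokenize(pan: str) -> str:
        """Swap a real card number for a random token that the merchant may store."""
        token = secrets.token_hex(8)
        _vault[token] = pan
        return token

    def detokenize(token: str) -> str:
        """Only the processor, never the merchant, can map a token back to the PAN."""
        return _vault[token]

    token = tokenize("4111111111111111")   # a well-known test card number
    print(token)                # worthless to a thief who steals the merchant's database
    print(detokenize(token))    # the real PAN, recoverable only inside the vault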

Posted in Security, Payments, Fraud

Ski runs and LOAs

In Identity Management, Levels of Assurance are an attempt to standardise the riskiness of online transactions and the commensurate authentication strength needed to secure them. Quaternary LOAs (levels 1/2/3/4) have been instituted by governments in the USA, Australia and elsewhere, and they're a cornerstone of federated identity programs like NSTIC.

All LOA formulations are based on risk management methodologies like the international standard ISO 31000. The common approach is for organisations to assess both the impact and the expected likelihood of all important adverse events (threats), using metrics customised to local business conditions and objectives. The severity of security threats can be calculated in all sorts of ways. Some organisations can put a dollar price on the impact of a threat; others look at qualitative or political effects. And the capacity to absorb the downside means that the same sort of incident might be thought "minor" at a big pharmaceutical company but "catastrophic" at a small Clinical Research Organisation.
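
To make the locality of these metrics concrete, here is a hypothetical Python sketch of one and the same incident being scored under two organisations' home-grown ISO 31000-style matrices. The dollar bands, likelihood scale and risk appetites are all invented; the point is simply that identical facts can rate as "minor" in one organisation and "catastrophic" in another.

    # Hypothetical illustration only: scales and thresholds are invented.
    def risk_rating(likelihood, impact_dollars, impact_scale, appetite):
        """likelihood: 1 (rare) to 5 (almost certain); impact_scale maps $ loss to a 1-5 level."""
        impact = next(level for ceiling, level in impact_scale if impact_dollars <= ceiling)
        score = likelihood * impact
        return "catastrophic" if score >= appetite else "minor"

    incident = {"likelihood": 3, "impact_dollars": 2_000_000}

    # A big pharmaceutical company barely notices a $2m loss...
    big_pharma = dict(impact_scale=[(1e6, 1), (1e7, 2), (1e8, 3), (1e9, 4), (float("inf"), 5)],
                      appetite=12)
    # ...while the same loss would sink a small Clinical Research Organisation.
    small_cro = dict(impact_scale=[(1e4, 1), (1e5, 2), (1e6, 3), (1e7, 4), (float("inf"), 5)],
                     appetite=9)

    print(risk_rating(incident["likelihood"], incident["impact_dollars"], **big_pharma))  # minor
    print(risk_rating(incident["likelihood"], incident["impact_dollars"], **small_cro))   # catastrophic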

I've blogged before that one problem with LOAs is that risk ratings aren't transferable. Risk management standards like ISO 31000 are formulated for internal, customised use, so their results are not inherently meaningful between organisations.

Just look at another type of risk rating: the colours of ski runs.

All ski resorts around the world badge the degree of difficulty of their runs the same way: Green, Blue, Black and sometimes Double Black. But do these labels mean anything between resorts? Is a Blue run at Aspen the same as a Blue at Thredbo? No. These colours are not like currency, so skiers are free to boast "that Black isn't nearly as tough as the Black I did last week".

LOAs are just like this. They're local. They're based on risk metrics (and risk appetites) that are not uniform across organisations. They cannot interoperate.

As far as I am aware, there are as yet no examples of LOA 3 or 4 credentials issued by one IdP being relied on by external Service Providers. When there's a lot at stake, organisations prefer to use their own identities and risk management processes. And it's the same with skiing. A risk averse skier at the top of a Black run needs more than the pat assurance of others; they will make up their own mind about the risk of going down the hill.

Posted in Language, Federated Identity

A penny for your marketable thoughts?

Many people think Apple's Siri is the coolest thing they've ever seen on a smart phone. It is certainly a milestone in practical human-machine interfaces, and will be widely copied. The combination of deep search plus natural language processing (NLP) plus voice recognition is dynamite.

If you haven't had the pleasure ... Siri is a wondrous function built into the Apple iPhone. It’s the state-of-the-art in Artificial Intelligence and NLP. You speak directly to Siri, ask her questions (yes, she's female) and tell her what to do with many of your other apps. Siri integrates with mail, text messaging, maps, search, weather, calendar and so on. Ask her "Will I need an umbrella in the morning?" and she'll look up the weather for you – after checking your calendar to see what city you’ll be in tomorrow. It's amazing.

Natural Language Processing is a fabulous idea of course. It radically improves the usability of smart phones, and even their safety with much improved hands-free operation.

An important technical detail is that NLP is very demanding on computing power. In fact it's beyond the capability of today's smart phones, even though each of them alone is more powerful than all of NASA's computers in 1969. So all Siri's hard work is actually done on Apple's servers scattered around the planet. That is, all your interactions with Siri are sent into the cloud.

Imagine Siri was a human personal assistant. Imagine she's looking after your diary, placing calls for you, booking meetings, planning your travel, taking dictation, sending emails and text messages for you, reminding you of your appointments, even your significant other’s birthday. She's getting to know you all the while, learning your habits, your preferences, your personal and work-a-day networks.

And she's free!

Now, wouldn't the offer of a free human PA strike you as too good to be true?

Indeed it would. So understand this about Siri: she's continuously reporting back to base. If Apple were a secretarial placement agency, what they get in return for the free services is a transcript of what you've said, the people you've been in touch with, where you've been, and what you plan to do. Apple won't say what they plan to do with all this data, how long they'll keep it, or who they'll share it with. Apple's Privacy Policy (dated October 2011, accessed 12 March 2012) doesn't even mention Siri or the collection of voice-to-text data.

When you dictate your mails and text messages to Siri, you're providing Apple with content that's usually off limits to carriers, phone companies and ISPs. Siri is an end run around telecommunications intercept laws.

Of course there are many, many examples of where free social media apps mask a commercial bargain. Face recognition is the classic case. It was first made available on photo sharing sites as a neat way to organise one's albums, but then Facebook went further, inviting photo tags from users and then automatically identifying people in other photos on others' pages. What's happening behind the scenes is that Facebook is running its face recognition templates over the billions of photos in its databases (which were originally uploaded for personal use long before face recognition was deployed). Given their business model and their track record, we can be certain that Facebook is using face recognition to identify everyone they possibly can, and thence work out fresh associations between countless people and situations accidentally caught on camera. Combine this with image processing and visual search technology (like Google "Goggles") and the big social media companies have an incredible new eye in the sky. They can work out what we're doing, when, where and with whom. Nobody will need to expressly "like" anything anymore when OSNs can literally see what cars we're driving, what brands we're wearing, where we spend our vacations, what we're eating, what makes us laugh, who makes us laugh. Apple, Facebook and others have understandably invested hundreds of millions of dollars in image recognition start-ups and intellectual property; with these tools they convert hitherto anonymous images into content-addressable PII gold mines. It's the next frontier of Big Data.

Now, there wouldn't be much wrong with these sorts of arrangements if the social media corporations were up-front about them, and exercised some restraint. In their Privacy Policies they should detail what Personal Information they are extracting and collecting from all the voice and image data; they should explain why they collect this information, what they plan to do with it, how long they will retain it, and how they promise to limit secondary usage. They should explain that biometrics technology allows them to generate brand new PII out of members' snapshots and utterances. And they should acknowledge that by rendering data identifiable, they become accountable in many places under privacy and data protection laws for its safekeeping as PII. It's just not good enough to vaguely reserve their rights to "use personal information to help us develop, deliver, and improve our products, services, content, and advertising". They should treat their customers -- and all those innocents about whom they collect PII indirectly -- with proper respect, and stop blandly pretending that 'service improvement' is what they're up to.

Siri and face recognition together herald a radical new type of privatised surveillance, on a breathtaking scale. While Facebook stealthily "x-rays" photo albums without consent, Apple now has even more intimate access to our daily routines and personal habits. And they don't even pay as much as a penny for our thoughts.

As cool as Siri may be, I myself will decline to use any natural language processing while the software runs in the cloud, and while the service providers refuse to restrain their use of my voice data. I'll wait for NLP to be done on my device with my data kept private.

And I'd happily pay cold hard cash for that kind of app, instead of having an infomopoly embed itself in my personal affairs.

Posted in Social Networking, Social Media, Privacy, Language, Biometrics, Big Data

A software engineer's memoir (work in progress)

I'm an ex software "engineer" [I have reservations about that term] with some life experience of ultra-high-reliability development practices. It's fascinating how much of what I learned about software quality in the 1980s and 90s is relevant to infosec today.

I've had a trip down memory lane triggered by Karen Sandler's presentation at LinuxConf12 in Ballarat http://t.co/xvUkkaGl and her paper "Killed by code".

The software in implantable defibrillators

I'm still working my way through Karen Sandler's materials. So this post is a work in progress.

What's really stark on first viewing of Karen's talk is the culture she experienced and how it differs from the implantable defib industry I knew in its beginnings 25 years ago.

Karen had an incredibly hard and very off-putting time getting the company that made her defib to explain its software. But when we started in this field, every single person in the company -- and many of our doctors -- would have been able to answer the question "What software does this defib run on?" The answer was "ours". Moreover, the FDA was highly aware of software quality issues. The whole medical device industry was still on edge from the notorious Therac-25 episode, a watershed in software verification.

A personal story

I was part of the team that wrote the code for the world's first software-controlled implantable cardioverter-defibrillator (ICD).

In 1990 Telectronics (a tragic legend of Australian technology) released the model 4210, which was just the fourth or fifth ICD on the market (the first few being hard-wired devices from CPI Inc. and Telectronics). The computing technology was severely restricted by several design factors, most especially ultra-low power consumption, and a very limited number of microprocessor vendors that would warrant their chips for use in medical devices. The 4210 defib used a semi-customised 8-bit micro-controller based on the 6502, and a 32 KB byte-organised SRAM chip that held the entire executable. The micro clocked at 128 kHz, fully eight times slower than the near-identical micro in the Apple II a decade earlier. The software had to be efficient, not only to ensure it could make some very tough real-time rendezvous, but to keep the power consumption down; the micro consumed about 30% of the device's power over its nominal five-year lifetime.

Software development

We wrote mostly in C, with some assembly coding for the kernel and some performance-sensitive routines. The kernel was of our own design, multi-tasking, with hard real-time performance requirements (in particular, for obvious reasons the system had to respond within tight specs to heartbeat interrupts, and we had to show we were never going to miss an interrupt!). We also wrote the C compiler.

The 4210's software was 40,000 lines of C, developed by a team of 5-6 over several years; the total effort was 25 person-years. Some of the testing and pre-release validation is described in my blog post about coding being like playwriting. The final code inspection involved a team of five working five-to-six-hour days for two months, reading aloud and understanding every single line. When occasion called for checking assembly instructions, sometimes we took turns with pencil and paper pretending to be the accumulator, the index registers, the program counter and so on. No stone was left unturned.

The final walk-through was quite a personnel management challenge. One of the senior engineers (a genius who also wrote our kernel and compiler) lobbied for inspecting the whole executable because he didn't want to rely on the correctness of the compiler -- but that would have taken six months. So we compromised by walking through only the assembly code for the critical modules, like the tachycardia detector and the interrupt handlers.

I mentioned that the kernel and compiler were home-grown. So this meant that the company controlled literally every single bit of code running in its defibs. And I reiterate we had several individuals who knew the source code end to end.

By the way, these days I come across developers in the smartcard industry who find it hard to work within applets of just 5 or 10 kilobytes. Compare, say, a digital signing applet with an ICD, with its cardiac monitoring algorithms, treatment algorithms, telemetry controller, data logging and operating system all squeezed into 32 KB.

Reliability

We amassed several thousand implant-years of experience with the 4210 before it was superseded. After release, we found two or three minor bugs, which we fixed with software upgrades. None would have caused a misfire, whether a false positive or a false negative.

Yes, for the upgrade we could write into the RAM over a proprietary telemetry protocol. In fact the main reason for the one major software upgrade in the field was to add error correction, because after hundreds of device-years we noticed higher than expected bit flips from natural background radiation. That's a helluva story in itself. Had the code been in ROM, we couldn't have changed it -- but then we wouldn't have had to change it for bit flips either.
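
To illustrate the principle (though not the 4210's actual implementation), a single-error-correcting code such as Hamming(7,4) adds three parity bits to every four data bits so that any one flipped bit can be located and repaired. A minimal Python sketch, offered purely as a generic illustration:

    # Generic Hamming(7,4) illustration -- not the scheme used in the 4210.
    def encode(d):                          # d = four data bits [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p4 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # codeword positions 1..7

    def correct(c):                         # c = received 7-bit codeword
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]      # parity over positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]      # parity over positions 2, 3, 6, 7
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]      # parity over positions 4, 5, 6, 7
        error_pos = s1 + 2 * s2 + 4 * s4    # non-zero syndrome locates the flipped bit
        if error_pos:
            c[error_pos - 1] ^= 1           # repair it in place
        return [c[2], c[4], c[5], c[6]]     # recover the original data bits

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                            # simulate a single bit flip from radiation
    assert correct(word) == [1, 0, 1, 1]    # the data survives intact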

Morals of the story

Anyway, some of the morals of the story so far:


  • Software then was cool and topical, and the whole company knew how to talk about it. The real experts -- the dozen or so people in Sydney directly involved in the development -- were all well known worldwide by the executives, the sales reps, the field clinical engineers, and regulatory affairs. And we got lots of questions (in contrast to Karen Sandler's experience, where all the cardiologists and company people said nobody ever asked about the code).

  • Everything about the software was controlled by the company: the operating system, the chip platform, the compiler, the telemetry protocol.

  • We had a team of people that knew the code like the backs of their hands. Better in fact. It was reliable and, in hindsight, impregnable. Not that we worried about malware back in 1987-1990.

Where has software development gone?

So the sorts of issues that Karen Sandler is raising now, over two decades on, are astonishing to me on so many levels.


  • Why would anyone decide to write life support software on someone else's platform?

  • Why would they use wifi or Bluetooth for telemetry?

  • And if the medical device companies cut corners in software development, one wonders what the defence industry is doing with its drone flight controllers and other "smart" weaponry, with their countless millions of lines of opaque software.

[TO BE CONTINUED]

Posted in Software engineering, Management theory

What stops Target telling you're pregnant?

Question: What stops Target from telling you're pregnant?
Answer: In many parts of the world, the law!

The recent New York Times feature How Companies Learn Your Secrets caused a stir. Investigative reporter Charles Duhigg details conversations he had with data analysts and statisticians about what marketing gold they can divine from shoppers' buying habits ... and how one department store then seemed to shut down the dialogue.

The case in point was the ability to statistically predict pregnancy. Duhigg and his contacts looked into the enormous business potential for retailers if they could work out, from what individual customers are buying, that those customers are in the early stages of pregnancy. One analyst said "We knew that if we could identify them in their second trimester, there's a good chance we could capture them for years". Insiders admitted to developing and testing a "pregnancy prediction" score, but it remains unclear to what extent such tools are used in practice with real data.

This is pretty heady stuff, on the leading edge of Big Data analytics.
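
To see how little machinery such a score needs, here is a deliberately crude Python sketch. The products, weights and threshold are invented for illustration and bear no relation to Target's real model; the point is that a handful of weighted purchase signals is enough to manufacture brand new, highly sensitive information about a named customer.

    # Invented products, weights and threshold -- purely illustrative.
    PREGNANCY_SIGNALS = {
        "unscented lotion": 1.5,
        "calcium supplement": 1.0,
        "zinc supplement": 1.0,
        "large bag of cotton balls": 0.8,
        "washcloths": 0.5,
    }

    def pregnancy_score(purchase_history):
        """Sum the signal weights over a shopper's recent purchases."""
        return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchase_history)

    shopper = ["unscented lotion", "zinc supplement", "calcium supplement", "bread"]
    if pregnancy_score(shopper) >= 3.0:     # arbitrary cut-off
        print("Flag shopper for baby-related offers")   # arguably a 'collection' of health information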

What kind of problem is this?

Charles Duhigg's NYT feature ends on a note of resignation, and I get the impression from scanning blog posts on this matter that many people -- especially in the largely unregulated United States -- are feeling powerless to do anything about this. Yet they should take heart from existing privacy law. At least in places like Australia with OECD-based data protection legislation, it's pretty clear to anyone who actually reads the rules that for a department store to work out and record that someone is pregnant is likely to be unlawful.

A look at how Australia regulates privacy

At state and federal level, Australia has several privacy acts and health records acts. For our purposes here, they're all much the same. And I repeat that the following analysis is likely to have parallels in many other countries. I will use the Victorian Health Records Act 2001 (the "Act") as a model; emphasis in the quoted passages is mine.

Personal Information is defined in the Act as:

information or an opinion (including information or an opinion forming part
of a database), whether true or not, and whether recorded in a material form
or not, about an individual whose identity is apparent, or can reasonably be
ascertained, from the information or opinion

At this point, note that the definition is broad and unqualified by such abstract matters as data ownership. In the Australian legal system, privacy rights attach to any information whatsoever pertaining to an identifiable individual, whether that information is explicitly collected from the person, or generated automatically by Big Data processing.

Health Information is defined as, amongst other things:

[Personal Information] about -

(i) the physical, mental or psychological health (at any time) of an individual; or
(ii) a disability (at any time) of an individual; or
(iii) an individual's expressed wishes about the future provision of health services to him or her

The cornerstones of privacy in OECD-style data protection systems are Collection Limitation and Use Limitation. Here are the opening clauses of Victoria's Health Privacy Principle HPP 1 - Collection:


1.1 When health information may be collected
An organisation must not collect health information about an
individual unless the information is necessary for one or more
of its functions or activities and at least one of the following
applies -
(a) the individual has consented;
(b) the collection is required, authorised or permitted,
whether expressly or impliedly, by or under law;
(c) the information is necessary to provide a health service ...

Note that consent is required in advance of collecting health information, whereas in the case of regular Personal Information, organisations have more latitude to give notice of collection reasonably after the fact.

And here are the opening clauses of Health Privacy Principle HPP 2 - Use & Disclosure:


2.1 An organisation may use or disclose health information about
an individual for the primary purpose for which the information was
collected in accordance with HPP 1.1.

2.2 An organisation must not use or disclose health information about
an individual for a purpose (the secondary purpose) other than the
primary purpose for which the information was collected unless
at least one of the following paragraphs applies -
(a) both of the following apply -
(i) the secondary purpose is directly related to the primary purpose; and
(ii) the individual would reasonably expect the organisation to use or
disclose the information for the secondary purpose; or
(b) the individual has consented to the use or disclosure ...

HPP 1 goes on to prescribe how individuals should be kept informed about the collection of health information about them:

How health information is to be collected
1.4 At or before the time (or, if that is not practicable,
as soon as practicable thereafter) an organisation collects
health information about an individual from the individual,
the organisation must take steps that are reasonable in the
circumstances to ensure that the individual is generally aware of -
(a) the identity of the organisation and how to contact it; and
(b) the fact that he or she is able to gain access to the
information; and
(c) the purposes for which the information is collected; and
(d) to whom (or the types of individuals or organisations to which)
the organisation usually discloses information of that kind; and
(e) any law that requires the particular information to be
collected; and
(f) the main consequences (if any) for the individual if all or
part of the information is not provided.

1.5 If an organisation collects health information about an
individual from someone else, it must take any steps that are
reasonable in the circumstances to ensure that the individual
is or has been made aware of the matters listed in HPP 1.4 except
to the extent that making the individual aware of the matters
would pose a serious threat to the life or health of any
individual or would involve the disclosure of information
given in confidence.

Conclusion: Don't give up on privacy!

On my reading of the Act, we can be sure of the following:


  • If a department store mines its data on shopping habits, determines that a named woman is likely to be pregnant, and records that prediction in a database, then the store will have collected health information about her and is subject to health privacy legislation in several states (as well as the Sensitive Personal Information clauses of Australia's federal privacy law).
  • If the department store has not obtained the customer's consent to the state of her pregnancy being determined, then the store will have breached HPP 1.1.
  • If the store uses information originally collected from customers to monitor their shopping habits to generate new information predicting their pregnancies, then it will have breached HPP 2.2.
  • If the store has not informed the woman that they have predicted she is pregnant, then it will have breached HPP 1.5.

Many commentators fear that the march of technology outpaces the law, but I for one am more optimistic. For the most part, it seems our current information privacy law actually copes well with the sorts of business activities we find intuitively problematic. I am not a lawyer, but it looks clearly unlawful to me if a department store in Australia were to purposefully work out that its customers are pregnant. Technically, just recording that prediction, even without acting upon it, probably counts as a Collection of health information, and as such it needs the consent of the customer.

The same legal principles apply -- with even more force -- in Europe. It remains to be seen whether information privacy can be better regulated in the US through the FIPPs or other mechanisms.

Posted in Privacy, Big Data

Information companies and the Use Limitation Principle

Google has copped a lot of flak over its move to join up all its services under the cover story that it's simply rationalising its privacy policies. Amongst those defending Google is another information company, Bloomberg. In this post, I want to draw attention to details of Australian privacy law that Bloomberg seems oblivious to. Other jurisdictions with OECD-based data protection legislation (and that's a lot of the world) may present the same challenge to Google's and Bloomberg's simplistic view of privacy. Let's take a closer look.

In an editorial on March 1, Bloomberg positively thrilled to an alleged over-reaction of privacy advocates:

You’d think Google had announced it would start collecting terabytes of data about you, your neighbor and your dog, if he’s ever online.

Then Bloomberg's editors asserted:

You’d be wrong: Google already does that. Google is not collecting any new information; rather, it is sharing (with itself) more of the information it already has [emphasis added].

But it is Bloomberg that's wrong.

The Use Limitation principle holds that custodians of Personal Information should not put that PI to secondary uses unrelated to the primary purpose for which it was collected. Nobody using Blogger or YouTube over the years, for instance, could have foreseen that one day their posts and videos would be mashed up with Google's boundless data mines and put to any old commercial purpose Google sees fit.

Use Limitation is really basic. One cannot really believe Google doesn't get it; their ambit claim that what they're doing is good for privacy because now there's a single simple privacy policy just doesn't pass muster.

But in Australia, the situation for the big infomopolies is potentially even more restrictive, with recent legally enforceable interpretations of the Use Limitation principle expressly nullifying the presumption that 'sharing information with itself' is ok for heterogeneous organisations.

The Privacy Commissioner for the State of Victoria has advised that "entities within the Victorian public sector should not assume that, because one part of the organisation collected some personal information, this can be disclosed to any other part of the organisation without regard for [the Use & Disclosure Principle]". Ref: Guidelines to the Information Privacy Principles, Office of the Victorian Privacy Commissioner, Edition 3, November 2011.

This advice derives from a tribunal ruling elsewhere in Australia, which I discussed at length in another blog post: http://lockstep.com.au/blog/2011/09/04/the-ultimate-opt-out. In that case, patient information collected by a counsellor in a hospital was shared without the patient's consent with another specialist, and the patient's rights were ruled to have been violated.

The relevance of these matters in the current discussion about Google amalgamating services is that the Australian legal system has taken a conservative view of what it means to share personal information within large organisations. Technically, the ruling is that individuals have the right to be informed about internal disclosures, and they may have the right to withdraw their consent.

Let's remember that Australian law is not as strict as that of European states like Germany, and is not enforced as energetically. With OECD principles forming the basis for all these sorts of data protection regulations, I suspect that European regulators will reach the same conclusion: that Google is not in fact entirely free to share information 'with itself'.

Case law around OECD Privacy Principles is clearly fluid. Big infomopolies need to take more care not to presume what the law actually says.

But let's be less legalistic about this, and instead make this appeal to Google: If you truly have the interests of customers at heart, then please heed civil rights, reconsider how people expect their treasured private information to be handled, and try not to take their online permissiveness for granted.

Posted in Social Media, Privacy