I am speaking at next week's AusCERT security conference, on how to make privacy real for technologists. This is an edited version of my conference abstract.
Privacy by Design is a concept developed by the Ontario Privacy Commissioner Dr. Ann Cavoukian. Dubbed "PbD", it's basically the same good idea as designing in quality, or designing in security. It has caught on nicely as a mantra for privacy advocates worldwide. The trouble is, few designers or security professionals can say what it actually requires of them.
Privacy continues to be a bit of a jungle for security practitioners. It's not that they're uninterested in privacy; rather, it's rare for privacy objectives to be expressed in ways they can relate to. Of the ten or more privacy principles in Australia's various regimes, only one is ever labelled "security", and even then all it says is that security must be "reasonable" given the sensitivity of the Personal Information concerned. Couched in such legalistic language, privacy is somewhat opaque to the engineering mind; security professionals naturally see it as meaning little more than encryption and maybe some access control.
To elevate privacy practice from the personal plane to the professional, we need to frame privacy objectives in a way that generates achievable design requirements. This presentation will showcase a new methodology to do this, by extending the familiar standardised Threat & Risk Assessment (TRA). A hybrid Privacy & Security TRA adds extra dimensions to the information asset inventory. Classically an information asset inventory accounts for the confidentiality, integrity and availability (C.I.A.) of each asset; the extended methodology goes further, to identify which assets represent Personal Information, and for those assets, lists privacy related attributes like consent status, accessibility and transparency. The methodology also broadens the customary set of threats to include over-collection, unconsented disclosure, incomplete responses to access requests, over-retention and so on.
The extended TRA methodology brings security and privacy practices closer together, giving real meaning to the goal of Privacy by Design. Privacy and security are sometimes thought to be in conflict, and indeed they often are. We should not sugar-coat this; systems designers are, after all, well accustomed to tensions between competing design objectives. To do a better job at privacy, security practitioners need new tools like the Security & Privacy TRA to surface the requirements in an actionable way.
The hybrid Threat & Risk Assessment
TRAs are widely practiced during the requirements analysis stages of large information systems projects. There are a number of standards that guide the conduct of TRAs, such as ISO 31000. A TRA first catalogues all information assets controlled by the system, and then systematically explores all foreseeable adverse events that threaten those assets. Relative risk is then gauged, usually as the product of threat likelihood and severity, and the threats are prioritised accordingly. Threat mitigations are then considered and the expected residual risks calculated. An especially good thing about a formal TRA is that it presents management with the risk profile to be expected after the security program is implemented, and fosters consciousness of the reality that finite risks always remain.
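By way of illustration, the core TRA arithmetic is simple enough to sketch in a few lines of Python; the threats, ratings and mitigation factors below are invented for the example.

```python
# Minimal sketch of TRA risk ranking: risk = likelihood x severity.
# Threats, ratings and mitigation factors are hypothetical.
threats = [
    # (threat, likelihood 1-5, severity 1-5, mitigation factor 0-1)
    ("Database breach via SQL injection",    4, 5, 0.25),
    ("Insider browsing of customer records", 3, 4, 0.50),
    ("Backup tapes lost in transit",         2, 5, 0.40),
]

for name, likelihood, severity, mitigation in sorted(
        threats, key=lambda t: t[1] * t[2], reverse=True):
    inherent = likelihood * severity
    residual = inherent * mitigation   # expected risk after controls
    print(f"{name}: inherent risk {inherent}, residual risk {residual:.1f}")
```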
The diagram below illustrates a conventional TRA workflow (yellow), plus the extensions to cover privacy design (red). The important privacy qualities of Personal Information assets include Accessibility, Permissibility (to disclose), Sensitivity (of e.g. health information), Transparency (of the reasons for collection) and Quality. Typical threats to privacy include over-collection (which can be an adverse consequence of excessive event logging or diagnostics), over-disclosure, incompleteness of records furnished in response to access requests, and over-retention of PI beyond the prima facie business requirement. When it comes to mitigating privacy threats, security practitioners may be pleasantly surprised to find that most of their building blocks are applicable.
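To make the extended inventory concrete, here is one possible shape for an asset record, sketched in Python. The field names and ratings are my own illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class InfoAsset:
    """An inventory entry: classic C.I.A. ratings plus privacy dimensions.
    Field names are illustrative only, not a standard schema."""
    name: str
    confidentiality: int                 # 1 (low) .. 5 (high)
    integrity: int
    availability: int
    is_personal_information: bool = False
    consent_status: str = "n/a"          # e.g. "express", "implied", "none"
    transparency: str = "n/a"            # were the reasons for collection disclosed?
    accessible_to_subject: bool = False  # can the individual obtain a copy?

# The privacy extension also broadens the threat catalogue, e.g.:
PRIVACY_THREATS = [
    "over-collection (e.g. via excessive event logging)",
    "unconsented disclosure",
    "incomplete response to an access request",
    "over-retention beyond the business requirement",
]

audit_log = InfoAsset("web server audit log", 3, 4, 2,
                      is_personal_information=True,
                      consent_status="none",
                      transparency="undisclosed")
```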
The hybrid Security-Privacy Threat & Risk Assessment will help ICT practitioners put Privacy by Design into practice. It helps reduce privacy principles to information systems engineering requirements, and surfaces potential tensions between security practices and privacy. ICT design frequently deals with competing requirements. When engineers have the right tools, they can deal properly with privacy.
Biometrics seems to be going gangbusters in the developing world. I fear we're seeing a new wave of technological imperialism. In this post I will examine whether the biometrics field is mature enough for the lofty social goal of empowering the world's poor and disadvantaged with "identity".
The independent Center for Global Development has released a report "Identification for Development: The Biometrics Revolution" which looks at 160 different identity programs using biometric technologies. By and large, it's a study of the vital social benefits to poor and disadvantaged peoples when they gain an official identity and are able to participate more fully in their countries and their markets.
The CGD report covers some of the kinks in how biometrics work in the real world, such as the fact that a minority of people are unable to enrol at all, and must subsequently be treated carefully and fairly. But I feel the report takes biometric technology for granted. In contrast, independent experts have shown there is insufficient science for biometric performance to be predicted in the field. I conclude biometrics are not ready to support such major public policy initiatives as ID systems.
The state of the science of biometrics
I recently came across a weighty assessment of the science of biometrics presented by one of the gurus, Jim Wayman, and his colleagues to the NIST IBPC 2010 biometric testing conference. The paper entitled "Fundamental issues in biometric performance testing: A modern statistical and philosophical framework for uncertainty assessment" should be required reading for all biometrics planners and pundits.
Here are some important extracts:
[Technology] testing on artificial or simulated databases tells us only about the performance of a software package on that data. There is nothing in a technology test that can validate the simulated data as a proxy for the “real world”, beyond a comparison to the real world data actually available. In other words, technology testing on simulated data cannot logically serve as a proxy for software performance over large, unseen, operational datasets. [p15, emphasis added].
In a scenario test, [False Non Match Rate and False Match Rate] are given as rates averaged over total transactions. The transactions often involve multiple data samples taken of multiple persons at multiple times. So influence quantities extend to sampling conditions, persons sampled and time of sampling. These quantities are not repeatable across tests in the same lab or across labs, so measurands will be neither repeatable nor reproducible. We lack metrics for assessing the expected variability of these quantities between tests and models for converting that variability to uncertainty in measurands. [p17]
To explain, a biometric "technology test" is when a software package is exercised on a standardised data set, usually in a bake-off such as NIST's own biometric performance tests over the years. And a "scenario test" is when the biometric system is tested in the lab using actual test subjects. The meaning of the two dense sentences underlined by me in the extracts is: technology test results from one data set do not predict performance on any other data set or scenario, and biometrics practitioners still have no way to predict the accuracy of their solutions in the real world.
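For readers new to the metrics, here is a minimal sketch of how FNMR and FMR fall out of a scenario test; the match scores and threshold are invented.

```python
# Minimal sketch of scenario-test metrics; scores and threshold invented.
# Higher score = stronger claimed match between a sample and a template.
genuine_scores  = [0.91, 0.85, 0.62, 0.88, 0.43]   # same-person comparisons
impostor_scores = [0.12, 0.55, 0.08, 0.71, 0.20]   # different-person comparisons
THRESHOLD = 0.60

fnmr = sum(s < THRESHOLD for s in genuine_scores) / len(genuine_scores)
fmr  = sum(s >= THRESHOLD for s in impostor_scores) / len(impostor_scores)
print(f"FNMR = {fnmr:.0%}, FMR = {fmr:.0%}")
# Wayman et al.'s point: these numbers depend on who was sampled, when and
# how, so they do not transfer to a different population or setting.
```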
The authors go on:
[To] report false match and false non-match performance metrics for [iris and face recognition] without reporting on the percentage of data subjects wearing contact lenses, the period of time between collection of the compared image sets, the commercial systems used in the collection process, pupil dilation, and lighting direction is to report "nothing at all". [pp17-18].
And they conclude, amongst other things:
[False positive and false negative] measurements have historically proved to be neither reproducible nor repeatable except in very limited cases of repeated execution of the same software package against a static database on the same equipment. Accordingly, "technology" test metrics have not aligned well with "scenario" test metrics, which have in turn failed to adequately predict field performance. [p22].
The limitations of biometric testing have repeatedly been stressed by no less an authority than the US FBI. In its State-of-the-Art Biometric Excellence Roadmap (SABER) report, the FBI cautions that:
For all biometric technologies, error rates are highly dependent upon the population and application environment. The technologies do not have known error rates outside of a controlled test environment. Therefore, any reference to error rates applies only to the test in question and should not be used to predict performance in a different application. [p4.10]
The SABER report also highlighted a widespread weakness in biometric testing, namely that accuracy measurements usually only look at accidental errors:
The intentional spoofing or manipulation of biometrics invalidates the “zero effort imposter” assumption commonly used in performance evaluations. When a dedicated effort is applied toward fooling biometrics systems, the resulting performance can be dramatically different. [p1.4]
A few years ago, the Future of Identity in the Information Society Consortium ("FIDIS", a research network funded by the European Community’s Sixth Framework Program) wrote a major report on forensics and identity systems. FIDIS looked at the spoofability of many biometrics modalities in great detail (pp 28-69). These experts concluded:
Concluding, it is evident that the current state of the art of biometric devices leaves much to be desired. A major deficit in the security that the devices offer is the absence of effective liveness detection. At this time, the devices tested require human supervision to be sure that no fake biometric is used to pass the system. This, however, negates some of the benefits these technologies potentially offer, such as high-throughput automated access control and remote authentication. [p69]
Biometrics in public policy
To me, the state of affairs in biometrics is appalling and astounding. The prevailing public understanding of how these technologies work is utopian, based probably on nothing more than science fiction movies and the myth of biometric uniqueness. In stark contrast, scientists warn there is no telling how biometrics will work in the field, and the FBI warns that bench testing doesn't predict resistance to attack. It's very much like the manufacturer of a safe confessing to a bank manager that they don't know how it will stand up in an actual burglary.
This situation has bedeviled enterprise and financial services security for years. Without anyone admitting it, it's possible that the slow uptake of biometrics in retail and banking (save for Japan and its odd hand-vein ATMs) is a result of hard-headed security officers backing off when they look deep into the tech. But biometrics is going gangbusters in the developing world, with vendors thrilling to this much bigger and faster moving market.
The stakes are so very high in national ID systems, especially in the developing world, where resistance to their introduction is relatively low, for various reasons. I'm afraid there is great potential for technological imperialism, given the historical opacity of this industry and its reluctance to engage with the issues.
To be sure vendors are not taking unfair advantage of the developing world ID market, they need to answer some questions:
- Firstly, how do they respond to Jim Wayman, the FIDIS Consortium and the FBI? Is it possible to predict how fingerprint readers, face recognition and iris scanners are going to operate, over years and years, in remote and rural areas?
- In particular, how good is liveness detection? Can these solutions be trusted in unattended operation for such critical missions as e-voting?
- What contingency plans are in place for biometric ID theft? Can the biometric be cancelled and reissued if compromised? Wouldn't it be catastrophic for the newly empowered identity holder to find themselves cut out of the system if their biometric can no longer be trusted?
I have come to believe that a systemic conceptual shortfall affects typical technologists' thinking about privacy. It may be that engineers tend to take literally the well-meaning slogan that "privacy is not a technology issue". I say this in all seriousness.
Online, we're talking about data privacy, or data protection, but systems designers tend to bring to work a spectrum of personal outlooks about privacy in the human sphere. Yet what matters is the precise wording of data privacy law, like Australia's Privacy Act. To illustrate the difference, here's the sort of experience I've had time and time again.
During the course of conducting a PIA in 2011, I spent time with the development team working on a new government database. These were good, senior people, with a sophisticated understanding of information architecture. But they harboured restrictive views about privacy. An important clue was the way they referred to "private" information rather than Personal Information (or equivalently, Personally Identifiable Information, PII). After explaining that Personal Information is the operable term in Australian legislation, and reviewing its definition from the Privacy Act, we found that the team had failed to appreciate the extent of the PI in their system. They had overlooked that most of their audit logs collect PI, albeit indirectly and automatically. Further, they had not appreciated that information about clients in their register provided by third parties was also PI (despite it being intuitively "less private" by virtue of originating from others).

I attributed these blind spots to the developers' weak and informal frame of "private" information. Online and in data privacy law alike, things are very crisp. The definition of Personal Information -- namely any data relating to an individual whose identity is readily apparent -- sets a low bar, embracing a great many data classes and, by extension, informatics processes. It's a nice analytical definition that is readily factored into systems analysis. Once the team grasped this, the PIA proceeded apace and we found and rectified several privacy risks that had gone unnoticed.
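The audit log blind spot is easy to demonstrate. A toy scan like the following (the log lines and patterns are invented for the example) shows how routinely logs capture identifiers:

```python
import re

# Invented log lines; real formats vary.
log_lines = [
    '203.0.113.7 - alice.smith GET /records/12345 "Mozilla/5.0"',
    'password reset requested for bob@example.com',
]

# Each pattern marks a data class that is prima facie Personal Information,
# because an individual's identity is readily apparent from it.
PI_PATTERNS = {
    "IP address": r"\b\d{1,3}(?:\.\d{1,3}){3}\b",
    "email":      r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "username":   r"\b[a-z]+\.[a-z]+\b",
}

for line in log_lines:
    hits = [label for label, pat in PI_PATTERNS.items() if re.search(pat, line)]
    if hits:
        print(f"PI collected ({', '.join(hits)}): {line}")
```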
Here are some more of the many recurring misconceptions I've noticed over the past decade:
- "Personal" Information is sometimes taken to mean especially delicate information such as payment card details, rather than any information pertaining to an identifiable individual such as email addresses in many cases; an exchange between US data breach analyst Jake Kouns and me over the Epsilon incident in 2011 is revealing of a technologists' systemically narrow idea of PII;
- the act of collecting PI is sometimes regarded only as direct collection from the individual concerned; technologists can overlook that PI provided by a third party to a data custodian is nevertheless collected by the custodian, and they can fail to appreciate that generating PI internally, through event logging for instance, can also represent collection;
- even if they are aware of points such as Australia's Access and Correction Principle, database administrators can be unaware that, technically, individuals requesting a copy of information held about them should also be provided with pertinent event logs; a non-trivial case where individuals can have a genuine interest in reviewing event logs is when they want to know if an organisation's staff have been accessing their records.
These instances, among many others in my experience working across both information security and privacy, show that ICT practitioners suffer important gaps in their understanding. Security professionals in particular may be forgiven for thinking that most legislated Privacy Principles are legal niceties irrelevant to them, for generally only one of the principles in any given set is overtly about security; see:
- no. 5 of the eight OECD Privacy Principles
- no. 4 of the five Fair Information Practice Principles in the US
- no. 8 of the ten Generally Accepted Privacy Principles of the US and Canadian accounting bodies,
- no. 4 of the ten old National Privacy Principles of Australia, and
- no. 11 of the 13 new Australian Privacy Principles (APPs).
Yet every one of the privacy principles is impacted by information technology and security practices; see Mapping Privacy requirements onto the IT function, Privacy Law & Policy Reporter, Vol. 10, Nos. 1 & 2, 2003. I believe the gaps in the privacy knowledge of ICT practitioners are not random but systemic, probably resulting from privacy training for non-privacy professionals being ad hoc and not properly integrated with their particular world views.
To properly deal with data privacy, ICT practitioners need to have privacy framed in a way that leads to objective design requirements. Luckily there already exist several unifying frameworks for systematising the work of dev teams. One example that resonates strongly with data privacy practice is the Threat & Risk Assessment (TRA).
The TRA is an infosec requirements analysis tool, widely practiced in the public and private sectors. There are a number of standards that guide the conduct of TRAs, such as ISO 31000. A TRA is used to systematically catalogue all foreseeable adverse events that threaten an organisation's information assets, identify candidate security controls (spanning technologies, processes and personnel) to mitigate those threats, and most importantly, determine how much should be invested in each control to bring all risks down to an acceptable level. The TRA process delivers real-world management decisions, in the understanding that non-zero risks are ever present, and that no organisation has an unlimited security budget.
I have found that in practice, the TRA exercise is readily extensible as an aid to Privacy by Design. A TRA can expressly incorporate privacy as an attribute of information assets worth protecting, alongside the conventional security qualities of confidentiality, integrity and availability ("C.I.A."). A crucial subtlety here is that privacy is not the same as confidentiality, although the two are frequently conflated. A fuller understanding of privacy leads designers to consider the Collection, Use, Disclosure and Access & Correction principles, over and above confidentiality, when they analyse information assets.
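One way to operationalise this in a design review is to pair each principle with the engineering questions it raises. The mapping below is my own informal illustration, not drawn from the Privacy Act or any standard:

```python
# Informal mapping of privacy principles to design review questions;
# the wording is mine, not drawn from the Privacy Act or any standard.
PRINCIPLE_CHECKS = {
    "Collection": [
        "Is every field in this schema actually needed?",
        "Do event logs and diagnostics quietly collect PI?",
    ],
    "Use & Disclosure": [
        "Is PI reused for purposes beyond the original collection?",
        "Which third parties receive PI, and under what consent?",
    ],
    "Access & Correction": [
        "Can we export everything held about one individual, logs included?",
    ],
    "Security": [
        "Are C.I.A. controls proportionate to the sensitivity of the PI?",
    ],
}

for principle, checks in PRINCIPLE_CHECKS.items():
    print(principle)
    for check in checks:
        print("  -", check)
```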
Lockstep continues to actively research the closer integration of security and privacy practices.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. Lockstep monitors these figures and plots the trend data. The latest stats were released this week, for FY 2012.
Here's the latest picture of Australian payment card fraud growth over the past seven financial years FY2006-12.
Compared with FY2011:
- Total card fraud is up 25%
- CNP fraud is up 27%
- CNP fraud as a proportion of all card fraud remains at just under three quarters (72%).
As with the CY2011 stats we discussed last July, card fraud has again grown in all categories at once, not just Card Not Present, and this is unusual. The explanation may be a burst of skimming and counterfeiting in late 2011 which would be reflected in both the FY2012 and CY2011 numbers.
APCA's press release this week notes that card fraud has dropped in the past six months, contrasting financial 2012 ($189M) with calendar 2011 ($198M). This may not be a statistically valid comparison. We should expect seasonal buying habits will cause asymmetries within 12 months, making FY against CY a case of apples and oranges. Indeed, this looks like the first time APCA themselves have plotted CY and FY stats together. It certainly makes the latest figures look better.
Time will tell whether the pattern is changing. The long term trend is that CNP fraud has grown at 38% p.a. on average, from $27M in FY2006 to $189M in FY2012. A 5% drop in the past six months may not mean much. The $189M loss most recently reported is probably close to the true trend.
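For what it's worth, the 38% figure is simply the compound annual growth implied by the two endpoints:

```python
# Quick check of the long-term growth rate implied by APCA's endpoints.
start, end, years = 27e6, 189e6, 6      # $27M in FY2006 -> $189M in FY2012
cagr = (end / start) ** (1 / years) - 1
print(f"Compound annual growth: {cagr:.0%}")   # ~38% p.a.
```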
APCA says "Broadly, the value of CNP fraud reflects growing retail activity in the online space, with many more businesses ... moving online". That's true, but the question is: what are we going to do about it? Bank robbers rob banks because that's where the money is. Think of the road toll: it reflects the popularity of driving, but we don't simply put up with it!
In any case, a cardholder's exposure to CNP fraud has nothing to do with whether they themselves shop online! Stolen card data are replayed online by criminals because they can. The online boom provides more places to use stolen cards but it's not where the criminals get most of their cards. Instead, it appears that account numbers are mostly obtained from massive database breaches at processors and large bricks-and-mortar retailers, like Heartland Payments, Global Payments, and Hannaford. So it's not fair to play down CNP fraud as relating to the cost of going digital, because it hurts people who haven't gone digital.
I'm afraid payments regulators seem light on ideas for actually rectifying CNP fraud.
Until recently, APCA actively promoted 3D Secure (Verified by Visa or Mastercard SecureCode) as a response to CNP fraud. In June 2011, APCA went so far as to say "retailers should be looking at a 3D Secure solution for their online checkout". But their most recent press release makes no mention of 3D Secure at all.
It looks to me that 3D Secure, after many years of disappointing performance and terrible take-up, is now too contentious to rate a mention from Australia’s regulators.
In my view, the industry needs to treat CNP fraud as seriously as it did skimming and carding. The industry should not resign itself to increasing rates of fraud just because online shopping is on the rise.
CNP fraud is not a technologically tough problem. It's just the digital equivalent of analogue skimming and carding, and it could be stopped just as effectively by using chips to protect cardholder data online.
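By way of illustration only, the essence of the idea is that the card's chip digitally signs each transaction, so replayed account data is worthless without the chip. Here is a hedged sketch of the principle, not of any deployed scheme; the message format and key handling are invented:

```python
# Sketch only: what "using chips online" amounts to in principle.
# Not a deployed scheme; key handling and message format are invented.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

chip_private_key = ec.generate_private_key(ec.SECP256R1())  # lives in the chip

transaction = b"PAN=5123...;amount=99.95;merchant=example;nonce=91c4"
signature = chip_private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The issuer verifies with the chip's public key; a criminal replaying
# stolen account numbers cannot produce this signature.
chip_private_key.public_key().verify(
    signature, transaction, ec.ECDSA(hashes.SHA256()))
```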
As mentioned last month, the security-convenience trade-off in computer security is radically different from that of traditional locks and keys. Regular users are so habituated to door keys that they don't even think of the trade-offs! Keys are so easy to use that nobody bothers to make them "easier" with the equivalent of Single Sign On (just imagine asking your boss to re-key the office door and all the filing cabinets just so you could use one key for work, home and car - it would be preposterous).
The cyber security-convenience trade-off could be radically re-jigged if we adopted serious physical keys for our computing devices. The usability dilemma online is really all about human factors engineering.
It's instructive to look at the evolution of door locks. For centuries we've used the same basic form factor: as the Oxford dictionary puts it, "a small piece of shaped metal with incisions cut to fit the wards of a particular lock, which is inserted into a lock and turned to open or close it".
The UX is universal, while under the covers, security R&D has spawned long and steady improvement.
And the most recent smart car keys still have a mechanical emergency key for when the electronics fails!
To a great extent, many of the challenges in information security boil down to human factors engineering. We have got the security-convenience trade-off in infosec badly wrong. The computer password is a relic of the 1960s, devised by technicians, for technicians. Look at traditional security and you see that people are universally habituated to good practices with keys and locks.
The terrible experience of Wired writer Mat Honan being hacked created one of those classic overnight infosec sensations. He's become the poster boy for the movement to 'kill the password'. His follow up post of that name was tweeted over two thousand times in two days.
Why are we so late to this realisation? Why haven't we had proper belts-and-braces access security for our computers ever since the dawn of e-commerce? We all saw this coming -- the digital economy would become the economy; the information superhighway would become more important than the asphalt one; our computing devices would become absolutely central to all we do.
It's conspicuous to me that we have always secured our serious real world assets with proper keys. Our cars, houses, offices and sheds all have keys. Many of us would have been issued with special high security keys in the workplace. Cars these days have very serious keys indeed, with mechanical and electronic anti-copying design features. It's all bog standard.
But for well over a decade now, cyber security advocates have spoken earnestly about Two Factor Authentication as if it were something new and profound.
For a few extra bucks we could build proper physically keyed security into all our computers and networked devices. The ubiquity of contactless interfaces like wifi and NFC opens the way for a variety of radio frequency keys, in different form factors, for log on.
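To sketch what such a key might do at log on: the host issues a fresh challenge and the key computes a keyed response, so nothing replayable ever crosses the radio link. The protocol details below are invented for illustration, not any particular product's:

```python
# Sketch of a challenge-response logon with a radio-frequency key.
# Protocol details are invented for illustration, not any product's.
import hashlib
import hmac
import os

device_secret = os.urandom(32)   # burned into the key at manufacture
server_copy   = device_secret    # enrolled with the service

challenge = os.urandom(16)       # fresh nonce from the host per logon
response  = hmac.new(device_secret, challenge, hashlib.sha256).digest()

expected = hmac.new(server_copy, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)   # key is genuine
# A sniffed response is useless next time, because the challenge changes.
```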
There's something weird about the computing UX that has long created different standards for looking at the cyber world and the real world. A personal story illustrates the point. About nine years ago, I met with a big e-commerce platform provider that was experiencing a boom in fraud against the online merchants it was hosting. They wanted to offer their merchant tenants better security against hijackers. I suggested including a USB key for mutual authentication and strong digital signatures, but the notion of any physical token was rejected out of hand. They could not stomach the idea that a merchant might be inconvenienced by misplacing their key. What an astonishing double standard! I asked them to imagine being a small business owner who one day drives to the office only to find they've left their door key behind. What do you want: some magic protocol that opens the door for you anyway, or the reality of having to turn around and fetch your keys?
We are universally habituated to physical keys and key rings. They offer a brilliant combination of usability and security. If we had comparably easy-to-use physical keys for accessing virtual assets, we could easily manage a suite of 10 or 15 or more distinct digital identities, just as we manage that many real world keys. Serious access security for our computers would be simple, if we just had the will to engineer our hardware properly.
Quantum computing continues to make strides. Researchers have now made a chip to execute Shor's quantum factorisation algorithm. Until now, quantum computers were built from bench-loads of apparatus, and had yet to be fabricated in solid state. So this is pretty cool, taking QC from science into engineering.
The promise of quantum computing is that it will eventually render today's core cryptography obsolete, by making it possible to factorise large numbers very quickly. The RSA algorithm for now is effectively unbreakable because its keys are the products of prime numbers hundreds of digits long. The product of two primes can be computed in split seconds; but to find the factors by brute force - and thus crack the code - takes billions of computer-years.
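The asymmetry is easy to see even at toy scale. The primes below are tiny stand-ins for the hundreds-of-digits primes in real keys:

```python
# Toy illustration of the RSA asymmetry: multiplying two primes is
# instant, recovering them by brute force is not. Real keys use primes
# hundreds of digits long, putting trial division utterly out of reach.
p, q = 1000003, 1000033        # tiny stand-ins for real RSA primes
n = p * q                      # computed in a split second

def factor(n):
    """Brute-force trial division (n odd): cost grows with sqrt(n)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return None

print(factor(n))   # trivial at this size, hopeless at real key sizes
```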
I'm curious about one thing. Current prototype quantum computers are built with just a few qubits because of the 'coherence' problem (so they can only factorise little numbers like 15 = 3 x 5). The machinery has to hold all the qubits in a state of quantum uncertainty for long enough to complete the computation. The more qubits there are, the harder it is to maintain coherence. The task ahead is to scale up past the proof-of-concept stage to manage a few thousand qubits, and thus be able to crack 2048-bit RSA keys for instance.
Evidently it's hard to build, say, a 1000-qubit quantum computer right now. So my question is: what is the relationship between the difficulty of maintaining coherence and the number of qubits concerned? Is it exponentially difficult?
Because if it is, then the way to stay ahead of quantum computing attack might be to simply go out to RSA keys tens of thousands of digits long.
The reverse engineering of biometric iris templates reported at Blackhat this month has attracted deserved attention. Iris now joins face and fingerprint as modalities that have been reverse engineered; that is, it has proved possible to synthesise an image that when processed by the algorithm in question, produces a match against a target template.
The biometrics industry reacts to these sorts of results in a way unbefitting of serious security practitioners.
Take for instance Securlinx CEO Barry Hodge's comment on the iris attack: "All of these articles obsessing over how to spoof a biometric are intellectually interesting but in the practical application irrelevant".
But nobody should belittle the significance of these sorts of results - especially when no practical biometric can be revoked and reissued after compromise.
Mr Hodge, security is an intellectually challenging field. Let's compare the biometrics industry's complacency with the way serious security professionals responded to the problems discovered in the SHA-1 hash algorithm.
Ideal hash algorithms are supposed to produce digest values that are effectively random under any variation to the input data. Any ability to predict how a digest varies could conceivably lead to a number of attack scenarios, including ones where digitally signed data might be tampered with without affecting the signature. The only way to attack an ideal hash algorithm is by brute force: if attackers wish to synthesise a piece of data that produces a target hash value (a so-called "collision"), they have to work their way through all possible permutations. For a 160 bit hash value, this brute force task takes on the order of 2 to the power of 159 trials, which would be beyond the power of all the world's computers running for millions of years.
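Both properties -- the avalanche effect and the hopelessness of brute force -- are easy to exhibit with a few lines of Python:

```python
import hashlib

# Avalanche: a one-character change yields an unrelated 160-bit digest.
a = hashlib.sha1(b"transfer $100 to Alice").hexdigest()
b = hashlib.sha1(b"transfer $900 to Alice").hexdigest()
print(a)
print(b)

# Brute force: to hit a chosen digest you can do no better than guessing,
# an expected ~2**159 trials. A million guesses gets nowhere:
hits = sum(hashlib.sha1(str(i).encode()).hexdigest() == a
           for i in range(1_000_000))
print(hits, "matches found")   # invariably 0
```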
In 2005, Chinese academic cryptologists discovered a weakness in the SHA-1 algorithm that under some circumstances allows a reduction in the number of trials needed for brute force discovery of collisions. The researchers did not reduce the number of trials by very much, and they did not demonstrate any actual attack. Nobody suggested the work would lead to a practical exploit any time soon, and in the eight years since there has been no report of an attack on SHA-1.
However, cryptographers, security strategists and policy makers worldwide were shaken by the SHA-1 research. They were deeply worried, intellectually, that a digest algorithm could have a structural weakness that compromises its randomness. It meant that the cryptographic community did not understand SHA-1 as well as it should. And the policy response was swift: the US government promoted migration to the stronger SHA-2 family and sponsored a competition for a new digest algorithm, which yielded SHA-3, now being promulgated globally.
This is good security practice at work. Academics continuously work away at stressing existing techniques and uncovering weaknesses. All verified academic weaknesses are taken seriously, and where critical security infrastructure is involved -- even when no practical attack has yet been seen -- the security solutions are reviewed and upgraded, to stay ahead of adversaries.
In stark contrast, biometrics advocates seem to fall back on a variation of the Bart Simpson Defence, namely, "I didn't do it; nobody saw me do it; you can't prove a thing".
Over the past few years I've watched biometrics proponents claim, first, that templates cannot be reverse engineered. They went on to qualify their position, saying that certain types of biometrics are "practically impossible" to reverse. And now Mr Hodge is saying it doesn't really matter if they are reversed.
There is no disaster recovery plan in biometrics; they cannot be cancelled and reissued, so of course their advocates cling to the idea they cannot be compromised. And with that attitude they further distinguish themselves in infosec, for no one else ever acts as though their technology is perfect.
Seasoned security analysts know the card fraud trends, but the latest stats in Australia are surprisingly bad.
The Australian Payments Clearing Association (APCA) releases card fraud statistics every six months for the preceding 12-month period. Lockstep monitors these figures, crunches them and plots the trend data.
Here's the latest picture of Australian payment card fraud growth over the past six calendar years CY2006-11.
For the first time in many years, card fraud has grown in all categories at once. The ratio of Card Not Present fraud to all fraud remained steady at just under three quarters. Any up-turn in skimming and counterfeiting is surprising given the strong penetration of chip-and-PIN cards in Australia, although most ATMs here still use the stripe and remain vulnerable to carding. Still, CNP fraud remains the preferred MO of organised crime, and its cost grew by 61% from 2010 to 2011.
"Innovation" is a topical notion in Australian payments systems circles, but for the most part innovation is confined to back end systemic improvements to interbank settlements. Regulators take a light touch on the user side. The market is fostering innovative payments applications in mobile devices, but so far, security still proves to be too hard. APCA's only position on security is to wait and see what happens when 3D Secure comes to Australia. Given that nothing has stood in its way, and CNP fraud is doubling every two years, the very absence of 3D Secure here should be worrying to the regulators.
Once again, in relation to charges levelled against their own, politicians have claimed that like everyone else, they deserve the presumption of innocence. But the old saw "innocent until proven guilty" is no universal human right. It is merely a corollary of Blackstone's Formulation, from the 18th century: "Better that ten guilty persons escape than that one innocent suffer".
For persons in positions of trust -- politicians, police officers, customs officers, judges and so on -- different calculations apply. The community cuts public officers less slack, because the consequences of their misconduct are far reaching. When only one bad apple can spoil the barrel, Blackstone's Formulation patently does not apply. It is probably better that 10 innocent politicians (or police officers or airport baggage handlers) lose their jobs than for one wrongdoer to stay in place.
If politicians agree to be held to higher standards than members of the public, then as part of the bargain, they cede the presumption of innocence.