Lockstep

The limits of artificial ethics

The premise that machines can behave ethically

Logicians have long known that some surprisingly simple tasks have no algorithmic solutions. That is, not everything is computable. This fact seems lost on artificial intelligence pundits who happily imagine a world of robots without limits. The assumption that self-driving cars for instance will be able to deal automatically and ethically with life-and-death situations goes unexamined in the public discourse. If algorithms have critical limits, then the tacit assumption that ethics will be automated is itself unethical. Machine thinking is already failing in unnerving ways. The public and our institutions need to be better informed about the limits of algorithms.
Is it ethical to presume ethics will automate?

Discussions of AI and ethics bring up a familiar suite of recurring questions. How do the biases of programmers infect the algorithms they create? What duty of care is owed to human workers displaced by robots? Should a self-driving car prioritise the safety of its occupants over that of other drivers and pedestrians? How should a battlefield robot kill?

These posers seem to stake out the philosophical ground for various debates which policy makers presume will eventually yield morally “balanced” positions. But what if the ground is shaky? There are grave risks in accepting the premise that ethics may be computerised.

The limits of computing

To recap, an algorithm is a repeatable set of instructions, like a recipe or a computer program. Algorithms run like clockwork; for the same set of inputs (including history), an algorithmic procedure always produces the same result. Crucially, every algorithm has a fixed set of inputs, all of which are known at the time it is designed. Novel inputs cannot be accommodated without a (human) programmer leaving the program and going back to the drawing board. No program can be programmed to reprogram itself.
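
To make this concrete, here is a toy sketch in Python (the braking table and its values are invented purely for illustration). The point is only that an algorithm's repertoire of inputs is fixed when it is written, and the same input always produces the same output:

# A toy "recipe": its valid inputs are enumerated at design time.
BRAKING_DISTANCE_M = {"dry": 30, "wet": 45, "icy": 90}

def braking_distance(road_condition: str) -> int:
    # Same input, same output, every time.
    if road_condition not in BRAKING_DISTANCE_M:
        # The program cannot invent an answer for an input its designers
        # never listed; a human has to go back to the drawing board.
        raise ValueError("unforeseen input: " + repr(road_condition))
    return BRAKING_DISTANCE_M[road_condition]

print(braking_distance("wet"))  # 45
print(braking_distance("wet"))  # 45 again, like clockwork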

No algorithm is ever taken by surprise, in the way a human can recognise the unexpected. A computer can be programmed to fail safe (hopefully) if some input value exceeds a design limit, but logically, it cannot know what to do in circumstances the designers did not foresee. Algorithms cannot think (or indeed do anything at all) “outside the dots”.
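
A hedged sketch of the fail-safe point (the controller, its sensor and its limits are hypothetical): the program can be told to stop when a reading breaches a design limit, but a reading that stays inside the envelope is processed by the same fixed rule, no matter how strange the situation that produced it:

MAX_SPEED_KMH = 130.0  # design limit, chosen by the engineers

def throttle_command(speed_kmh: float, target_kmh: float) -> float:
    # Fail safe if the reading falls outside the design envelope.
    if not (0.0 <= speed_kmh <= MAX_SPEED_KMH):
        return 0.0  # cut the throttle
    # Inside the envelope the same rule always applies, even if the
    # in-range reading comes from a scenario nobody foresaw (a sensor
    # fooled by glare, a road that is no longer there).
    return max(0.0, min(1.0, (target_kmh - speed_kmh) / MAX_SPEED_KMH))

print(throttle_command(60.0, 100.0))   # normal operation
print(throttle_command(999.0, 100.0))  # out of range: fail safe, 0.0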

Worse still, some problems are simply not computable. For instance, logicians know for certain that the Halting Problem – that is, working out if a given computer program is ever going to stop – has no general algorithmic solution.
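
The reasoning behind this is compact enough to sketch in code. Suppose, hypothetically, that someone handed us a universal halts(program, data) routine that always answered correctly; the classic diagonal construction (the names here are illustrative) shows why it cannot exist:

def halts(program, data):
    # Hypothetical oracle: would return True exactly when program(data)
    # eventually stops. No general algorithm can do this.
    raise NotImplementedError("assumed only for the sake of argument")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about a program
    # run on its own source.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt at once

# Feed contrary to itself: if halts(contrary, contrary) says True,
# then contrary(contrary) loops forever; if it says False, it halts.
# Either way the oracle is wrong, so no such oracle can exist.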

This is not to say some problems can’t be solved at all; rather, some problems cannot be treated with a single routine. In other words, problem solving often entails unanticipated twists and turns. And we know this of course from everyday life. Human affairs are characteristically unpredictable. Take the law. Court cases are not deterministic; in the more “interesting” trials, the very best legal minds are unable to pick the outcomes. If it were otherwise, we wouldn’t need courts (and we would be denied the inexhaustible cultural staple of the courtroom drama). The main thing that makes the law non-algorithmic is unexpected inputs (that is, legal precedents).

How will we cope when AI goes wrong?

It is chilling when a computer goes wrong (and hence we have that other rich genre, the robot horror story). Lay people and technologists alike are ill-prepared for the novel failure modes that will come with AI. At least one death has already resulted from an algorithm mishandling an unexpected scenario (in the case of a Tesla on Autopilot that failed to distinguish a white truck against a brightly lit sky, and did not brake). Less dramatic but equally unnerving anomalies have been seen in “racist” object classification and face recognition systems.

Even if robots bring great net benefits to human well-being – for instance through the road toll improvements predicted for self-driving cars – society may not be ready for the exceptions. Historically, product liability cases have turned on forensic evidence of what went wrong, and on finding points in the R&D or manufacturing processes where people should have known better. But algorithms are already so complicated that individuals can’t fairly be called “negligent” for missing a flaw (unless the negligence is to place too much faith in algorithms in general). At the same time, almost by design, AIs are becoming inscrutable even as they grow more unpredictable. Neural networks, for example, don’t generally keep the diagnostic execution traces that conventional software does.
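
A rough illustration of the contrast (the rule, the toy network and its hand-picked weights are all made up): a conventional program can narrate every decision it takes, while a trained network leaves behind nothing but arithmetic:

import logging, math
logging.basicConfig(level=logging.INFO)

def rule_based_decision(speed_kmh, obstacle_m):
    # Conventional software can log a human-readable trace of itself.
    if obstacle_m < 50.0:
        logging.info("BRAKE: obstacle %.0f m ahead at %.0f km/h", obstacle_m, speed_kmh)
        return "brake"
    logging.info("CRUISE: nearest obstacle %.0f m away", obstacle_m)
    return "cruise"

def network_decision(speed_kmh, obstacle_m, weights=(0.7, -1.3, 1.0)):
    # A single toy neural unit: the "reasoning" is an opaque weighted
    # sum. After a failure, this score is all there is to replay.
    w1, w2, b = weights
    score = 1.0 / (1.0 + math.exp(-(w1 * speed_kmh / 100.0 + w2 * obstacle_m / 50.0 + b)))
    return "brake" if score > 0.5 else "cruise"

print(rule_based_decision(60, 10), network_decision(60, 10))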

We bring this uncertainty on ourselves. The promise of neural networks is that they replicate some of our own powers of intuition, but artificial brains have none of the self-awareness we take for granted in human decision making. After a failure, we may not be able to replay what a machine vision system saw, much less ask: What were you thinking?

Towards a more modest AI

If we know that no computer on its own can solve something as simply stated as the Halting Problem, there should be grave doubts that robots can take on hard tasks like driving. Algorithms will always, at some point, be caught short in the real world. So how are we to compensate for their inadequacies? And how are community expectations to be better matched to the technical reality?

Amongst other things, we need to catalogue the circumstances in which robots may need to hand control back to a human (noting that any algorithm for detecting algorithmic failure will itself fail at some point!). The ability to interrogate anomalous neural networks deserves special attention, so that adaptive networks do not erase precious evidence. And just as early neurology proceeded largely by studying rare cases of brain damage, we should systematise the study of AI failures, and perhaps forge a pathology of artificial intelligence.
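
One possible shape for such a handoff, sketched here under obvious assumptions (a scalar confidence score, and a threshold chosen by people rather than learned): note that the wrapper is itself just another algorithm, so it too will misjudge some situations.

CONFIDENCE_FLOOR = 0.85  # hypothetical threshold, set by humans

def act_or_hand_over(perceive, control, alert_driver):
    # Run the automated controller only while its self-reported
    # confidence stays above the floor; otherwise hand control back.
    # This mitigates the problem rather than solving it: the check is
    # itself an algorithm with its own blind spots.
    scene, confidence = perceive()
    if confidence < CONFIDENCE_FLOOR:
        alert_driver("automation uncertain: please take control")
        return None
    return control(scene)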

Posted in Software engineering, AI

Comments

Peter Golinski, Wed 21 Mar 2018, 1:08pm

You do realise that people are typically not very good at solving the halting problem for a given program, don't you? In fact, in my experience, computers are usually better at solving an instance of it than humans.

Take as an example the following program, which encodes the well-known McCarthy 91 function in ATS:

fun f91 {i:int} .&lt;max(101-i,0)&gt;. (x: int i) : [j:int | (i < 101 && j==91) || (i >= 101 && j==i-10)] int (j) =
if x >= 101 then x-10 else f91 (f91 (x+11))
// end of [f91]

When my computer compiles this it also proves that the program terminates, and it takes mere milliseconds to do so. Can you prove that it terminates? How long did it take you?

By the way, here is the function's Wikipedia entry, explaining its history and why it is regularly used as a test case for automated program verification methods:
https://en.wikipedia.org/wiki/McCarthy_91_function
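
For anyone who does not read ATS, the same function transcribed into plain Python (with none of the compile-time termination checking that ATS performs) looks like this:

def f91(x):
    # McCarthy 91: x - 10 for x >= 101, and 91 for every smaller x.
    if x >= 101:
        return x - 10
    return f91(f91(x + 11))

assert f91(50) == 91
assert f91(200) == 190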

Stephen Wilson, Wed 21 Mar 2018, 4:19pm

Thanks Peter.

The fact that one program can, in one set of cases, determine the fact of halting faster than I can doesn't affect the general lesson, which is that no single algorithm can determine whether an arbitrary program will halt. I use the Halting Problem as an example of the limitation of algorithms in a very simple case, far simpler than anything humans do naturally day to day.

Think about what's going on as you and I undertake this debate right here, right now. Peter, you made a contribution, citing work I was unaware of and appreciate. I thought about your response, found it to be serious (if flawed), approved it for my blog, and formulated this response. And I expect you will counter.

No computer programs today could do what you and I are currently doing.

Your move.

I actually hope I am wrong and you're a bot.
