An interesting question bubbling in the media is that of engineering morality into autonomous devices, such as self-driving cars. I hope, and believe, that the manufacturers of the software controlling these cars will include something to minimise suffering. I'm fairly sure they will: the question has been bubbling in the technical press for a while now.


I understand the primary goal, in the case of self-driving cars, is to keep suffering to a minimum when, for example, an accident is unavoidable. Suppose that if the car does nothing, five people die, whereas if it intervenes, only one person dies, someone who would otherwise survive. Should it act? If it does, it has actively killed a person who would otherwise have lived. It's the kind of question belovéd of those who like to pour cold water on hot parties.

Apparently the European Parliament are considering the question of the moral behaviour of autonomous machinery of war. I presume this means more than drones, the armed remote-control aircraft that many air forces are acquiring. I suspect it means these new devices of war will soon be able to make life-and-death decisions themselves, autonomously, without instructions from base. What kind of moral behaviour should be programmed into them?

There are, of course, sources of general answers to these kinds of questions, such as moral philosophy. I don't think religious sources should be ignored, because religions represent the evolved moral rules of a culture. Whatever you think of the pixie stuff (and I think it's somewhat psychotic), it is undoubtedly the case that religions advise people how to behave, and their rules are derived from long experience in, and influence on, societies, cultures and civilisations as they developed. Of course, philosophers remain king: any religious interpretation that can be properly demolished by philosophy is thereby proven wrong (which is why believers in false interpretations condemn philosophy).

Another source of morality that really must be understood and considered is, in my opinion, the morality that evolved in our own and other social species, as studied in the biology of morality. The morality of other animals is not, I believe, well understood, but there is enough knowledge to inform any decisions we might make when creating new moral systems, such as those in our new machines.


After all this, there is one fundamental advantage engineers have over philosophers and theologians, and even, to some extent, the social biologists: engineers can test what they have designed. Social biologists might be able to experiment to work out the moral rules of particular social animals, but I don't believe they can change the animals' rules to see what then happens. Engineers can build simulations, try the different rulesets out, and observe the consequences. Engineers can, to some extent, empirically derive the best moral rules for a given set of goals. Of course, that's limited by the quality of the simulation, and by the goals of the ruleset. But engineers can test, and that makes a huge difference to these questions.
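
To give a flavour of what I mean, here is a minimal sketch of that kind of empirical testing. Everything in it is invented for illustration: the dilemmas, the two rulesets and all of the numbers are hypothetical, not anything a manufacturer actually uses.

    import random

    # A dilemma: the harm expected if the machine does nothing,
    # versus the harm expected if it intervenes.
    # All figures are invented purely for illustration.
    def random_dilemma(rng):
        kills_bystander = rng.random() < 0.3
        return {
            # expected deaths if the machine does nothing
            "harm_if_inaction": rng.randint(0, 5),
            # expected deaths if it intervenes, including any bystander it kills
            "harm_if_intervene": rng.randint(1, 5) if kills_bystander else rng.randint(0, 5),
            # does intervening actively kill someone who would otherwise survive?
            "kills_bystander": kills_bystander,
        }

    # Ruleset A: never intervene, whatever happens.
    def never_intervene(dilemma):
        return False

    # Ruleset B: intervene only when it strictly reduces harm, and never
    # at the certain cost of an otherwise-safe bystander.
    def cautious_utilitarian(dilemma):
        if dilemma["kills_bystander"]:
            return False
        return dilemma["harm_if_intervene"] < dilemma["harm_if_inaction"]

    # Total harm accumulated by a ruleset over a batch of dilemmas.
    def total_harm(ruleset, dilemmas):
        harm = 0
        for d in dilemmas:
            harm += d["harm_if_intervene"] if ruleset(d) else d["harm_if_inaction"]
        return harm

    rng = random.Random(42)
    dilemmas = [random_dilemma(rng) for _ in range(10000)]
    for rules in (never_intervene, cautious_utilitarian):
        print(rules.__name__, total_harm(rules, dilemmas))

Swap in a different ruleset, or a better model of the world, and the same harness tells you how the rules compare under the goals you have chosen. That is the kind of testing no philosopher or theologian can do.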

Engineering can also take into account the uncertainty of the real world. What should a car do if doing nothing will probably cause five deaths, but there is one action it could take that would remove that risk while certainly killing someone else who would otherwise survive?
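
One simple way an engineer might frame that, and it is only a sketch, with every probability and weight plucked from the air, is as a comparison of expected harm, perhaps with an extra penalty on deaths the machine actively causes:

    # A toy expected-harm comparison for the dilemma above.
    # Every number here is invented purely for illustration.

    P_FIVE_DIE_IF_INACTION = 0.8   # "will probably cause five deaths"
    ACTIVE_KILL_WEIGHT = 1.5       # extra moral weight on a death the machine itself causes

    expected_harm_inaction = P_FIVE_DIE_IF_INACTION * 5      # 4.0 expected deaths
    expected_harm_intervention = 1.0 * ACTIVE_KILL_WEIGHT    # one certain, actively caused death

    decision = "intervene" if expected_harm_intervention < expected_harm_inaction else "do nothing"
    print(decision, expected_harm_inaction, expected_harm_intervention)

Whether a weighted expected-harm comparison is even an acceptable framing, and who chooses the weights, is precisely the kind of question the rest of this piece is about.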

Given all this, it is perhaps ironic that many engineers have a simplistic understanding of morality. Too many of us don't understand the difference between a machine that follows physical rules and the messy, sophisticated complexity of social structure. It's more than the difference between physics and biology; it's more than the difference between a hammer and a stanza. That's why engineers should be involved in, and inform, any decisions made, but they should not be the ones making them. Similarly, their corporate overlords should not make the fundamental decisions: if nothing else, they have an unavoidable conflict of interest.

No, the decisions, in my opinion, should be left to the philosophers, the theologians, and the politicians. The first two because they have deep experience in considering these questions, and the politicians because they ultimately represent the people affected by the decisions, the rest of us.

It’s going to be an interesting set of debates, and I don’t think it’ll reach a final conclusion, because I don’t think there is one.