Friday 26 October 2018

Driverless cars, the trolley problem, and tyranny of the majority

Today BBC News ran an article entitled 'Driverless cars: who should die in a crash?'. The news story has come about because researchers at MIT have conducted a study analysing more than 40 million responses to an online, trolley-problem-style ethical survey.

The trolley problem is a classic philosophy problem (which, by chance, I taught today to a class of Year 5 pupils). The problem is this:
A train is hurtling along a track, and on the track ahead are five innocent people who will die if you do nothing. You can't stop the train (its brakes have failed), but you can pull a lever to switch it onto another track - where it will kill just one innocent person. Should you pull the lever or not?
My 'Trolley Problem' PowerPoint for the Year 5 pupils: several of them said it would matter if some of the people were criminals; no one said the people's skin colour or gender mattered.
Most people (and indeed the pupils I taught today) say that one should pull the lever, on purely utilitarian grounds - this can be phrased in the words of Spock from Star Trek: "the needs of the many outweigh the needs of the few". In other words, although it's bad for one person to die, it's even worse for five people to die, so we should pull the lever and kill the one.

It's OK for us to hypothesise about what we'd do in this very unlikely scenario, because it's just a thought experiment: it's not real. But driverless cars are forcing us to reconsider this problem not as a hypothetical, but as a very real possibility. If a driverless car is in a no-win situation where its brakes have failed and someone will die, who should it be? The passengers, the toddler on the zebra crossing, or the old ladies on the pavement?

Well, the people at MIT designed a survey/experiment ("The Moral Machine") to find out people's responses to these very sorts of questions. So far so good, you might think - but here comes the troubling part. BBC News quotes the MIT researchers as saying:
"Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them."
Now although this falls short of suggesting outright that we use the results of the survey to inform the moral programming of driverless cars, it certainly seems to be in that ballpark. But this, I maintain, is a dangerous and morally troubling step towards eugenics-by-driverless-car. Why? Because in the Moral Machine survey, people's choices about who should die and who should be saved are based on judgements about factors such as age, gender, class, weight, and how law-abiding the person is.

Screenshot from the Moral Machine survey
An optional section of the survey involves moving sliders according to how important you think each factor is. For example, you can say that saving the higher class over the lower class is really important, and that saving the young over the old is fairly important. One factor which didn't feature in the survey was race/ethnicity/religion. The people at MIT probably thought it was just too controversial to test whether people would save a black man rather than a white man, or a Muslim woman rather than a Christian woman - factors which would probably yield interesting and distasteful results, but no less interesting and no more distasteful than the actual results of the survey, which show that people choose to save:

  • Women more than men
  • The young more than the old
  • Fit people more than fat people
  • Middle class more than lower class
So why am I troubled by this? Well, driverless cars are on their way, whether we like it or not. And they will be faced with genuine moral decisions, whether we like it or not. And they will need some moral guidance or 'rules' to follow in order to make those split-second decisions about whether the fat young person or the thin old person should die. Yes, a moral system programmed into the cars will be absolutely essential - however...

Basing the moral system of a driverless car on the results of a survey - however large - is hugely problematic. Even if we ignore the fact that there is no barrier to the same person taking the survey more than once (I took it twice), and even if we ignore the fact that details of the pictures in the survey aren't immediately clear (on my first time around, I didn't notice that some people were crossing the road on the red man, nor that some of the stick men were supposed to represent homeless people), there is still a massive problem in the form of the tyranny of the majority.

John Stuart Mill (I love that guy!) wrote in On Liberty that we must take steps to guard against the tyranny of the majority - that is, when a large group of people get their way simply because they are greater in number than a small group. Now if you ask me, the majority getting their way is unproblematic if we've taken a vote about whether we should have chocolate ice cream or strawberry ice cream, or about whether we should visit a castle or the beach tomorrow... but the majority verdict really does become tyrannical when the issues at stake are the welfare, lives and rights of people. And these are the very things at stake in the trolley-problem-style dilemmas facing driverless cars.

The results of the Moral Machine survey show that the majority prioritise the young over the old, and the rich over the poor - and suppose a similar survey also showed that the majority prioritise able-bodied people over disabled people, and white people over brown people. This alone is disturbing enough, but if we then proceed to program the moral system of the majority into driverless cars, and set them free on our public highways - well, that is the tyranny of the majority at its most foul, and a recipe for eugenics by car crash.

We simply cannot allow the vulgar prejudices of the majority to inform the moral systems of driverless cars. Driverless cars need a way to determine which action to take, of course, but this should be based on non-prejudicial factors such as the likelihood of surviving the crash, the number of people involved, and the location of the impact: the age, gender, class (etc.) of people should not - ever - be a factor in deciding who lives and who dies.
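To make that contrast concrete, here is a minimal sketch of what a non-prejudicial decision rule might look like. It is purely my own illustration in Python - the Outcome class, the choose_action function, and the numbers are all hypothetical, and nothing here reflects how MIT or any manufacturer actually programs cars. Each possible action is scored by the expected number of deaths it would cause, and the car picks the action with the lowest score; who the people are never enters the computation.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        # One possible action the car could take in a no-win situation.
        label: str
        # Estimated probability that each affected person survives the impact
        # (one entry per person). Deliberately carries no age, gender, class,
        # or other personal attributes.
        survival_probabilities: list[float]

    def expected_deaths(outcome: Outcome) -> float:
        # Expected number of deaths: sum over people of (1 - survival probability).
        return sum(1.0 - p for p in outcome.survival_probabilities)

    def choose_action(outcomes: list[Outcome]) -> Outcome:
        # Pick the action that minimises the expected number of deaths.
        return min(outcomes, key=expected_deaths)

    # A stylised trolley-style scenario: staying in lane endangers five
    # pedestrians; swerving endangers one.
    stay = Outcome("stay in lane", [0.1, 0.1, 0.1, 0.1, 0.1])
    swerve = Outcome("swerve", [0.2])

    print(choose_action([stay, swerve]).label)  # prints "swerve"

The point of the sketch is structural: refining the survival estimates using the location of the impact would be legitimate; adding weightings for age, gender or class would not.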
