
Technology, Automation, and the Trolley Problem


by Sarah Charrouf

You may be familiar with the classic trolley problem posed to ethics students: should you divert a trolley that is on course to run over five people onto a track where it will run over one person instead? The question asks whether we should interfere with a course of events if doing so reduces overall harm, even though some harm is caused in the process. Philosophers still have no definitive answer, and in recent months the problem has become a practical concern for the developer community.


Self-driving cars were at the forefront of many panels and conversations at Mapbox’s developer conference, Locate, this past week. The topic has also gained traction in mainstream media following a fatal accident involving a self-driving Uber earlier this year.


Inevitably, these cars will face situations in which an accident cannot be prevented and they must choose which people or property to harm. A team of German scientists has introduced elements of morality and ethics into self-driving cars, publishing their research and findings in Frontiers in Behavioral Neuroscience in a study co-authored by Gordon Pipa, Peter König, and Richard Gast.


The authors created a set of rules intended to produce the best possible outcomes in these rare but troublesome scenarios. The proposed rules state that self-driving cars should always attempt to minimize human death and should not discriminate between individuals based on age, gender, or any other factor. Human lives should also always be given priority over animals or property.
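To make that priority ordering concrete, here is a minimal, purely illustrative sketch in Python of how such a rule set might rank two unavoidable outcomes. The Outcome class, its fields, and the lexicographic scoring are assumptions made for this example; they are not the model proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical description of one unavoidable collision outcome."""
    human_casualties: int   # people harmed (no weighting by age, gender, etc.)
    animal_casualties: int  # animals harmed
    property_damage: float  # estimated cost of property damage

def preferred_outcome(a: Outcome, b: Outcome) -> Outcome:
    """Pick the less harmful of two outcomes under the proposed priorities:
    humans first, then animals, then property."""
    # Compare lexicographically so any reduction in human harm outweighs
    # every animal or property consideration.
    return min(a, b, key=lambda o: (o.human_casualties,
                                    o.animal_casualties,
                                    o.property_damage))

# Example: staying on course harms one person; swerving harms only property.
stay = Outcome(human_casualties=1, animal_casualties=0, property_damage=0.0)
swerve = Outcome(human_casualties=0, animal_casualties=0, property_damage=20_000.0)
print(preferred_outcome(stay, swerve))  # the swerve outcome is preferred
```

Real systems would of course weigh far more factors and uncertainty than this toy comparison, which is exactly why the guidelines the researchers propose matter.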


In the Stack Overflow 2018 Developer Survey, the majority of developers took these moral questions seriously. When asked whether they would write code for an unethical purpose, 58.5% said “No” and 36.6% said “Depends on what it is.” An overwhelming majority also believe they have an obligation to consider the ethical implications of their code: 79.6% of respondents said they have that responsibility, and 14.3% said they were unsure or didn’t know, which leaves only about 6% of developers who don’t think they have a moral obligation for the code they are writing.


Though we don’t have conclusive answers to the trolley problem, the majority of developers believe they have an obligation to write code that is ethically positive or neutral, and the code that developers write for autonomous vehicles has a real impact on our world. It is therefore imperative that we have clear guidelines on how self-driving vehicles should approach tough decisions when accidents are inevitable, especially as research suggests it is still safer to ride with an “alert, experienced, middle-aged driver” than in “a self-driving car.”


If you’re interested in learning more about this topic, we’ve included additional resources below. Continue the conversation with us on Facebook or Twitter.


Resources:

“Are autonomous cars really safer than human drivers?” Scientific American

“To save the most lives, deploy (imperfect) self-driving cars ASAP” Wired

Stack Overflow Developer Survey Results 2018

“Moral Machine: a platform for gathering a human perspective on moral decisions made by machine intelligence” MIT

“What moral code should your self-driving car follow?” Popular Science

“The ethical dilemmas of self-driving cars” The Globe and Mail


Image source: Photo by Esther Tuttle on Unsplash