The Math of Sisyphus

"There is but one truly serious question in philosophy, and that is suicide," wrote Albert Camus in The Myth of Sisyphus. This is equally true for a human navigating an absurd existence and for an artificial intelligence navigating a morally insoluble situation.
As AI-powered vehicles take the road, questions about their behavior are inevitable, and the escalation to matters of life or death equally so. This curiosity often takes the form of asking whom the car should steer toward should it have no choice but to hit one of a variety of innocent bystanders. Men? Women? Old people? Young people? Criminals? People with bad credit?
There are a number of reasons this question is a silly one, yet at the same time a deeply important one. But as far as I'm concerned, there is only one real solution that makes sense: when presented with the possibility of taking a life, the car must always first attempt to take its own.

The trolley non-problem

First, let's get a few things straight about the question we're attempting to answer.
There is unequivocally an air of contrivance to the situations under discussion. That's because they're not plausible real-world situations but mutations of a venerable thought experiment often called the Trolley Problem. The most familiar version dates to the 1960s, but versions of it can be found going back to discussions of utilitarianism, and before that in classical philosophy.
The problem goes: A train car is out of control, and it's going to hit a family of five who are trapped on the tracks. Fortunately, you happen to be standing next to a lever that will divert the car to another track where there's only one person. Do you pull the switch? Okay, but what if there are ten people on the first track? What if the person on the second one is your sister? What if they're terminally ill? If you choose not to act, is that in itself an act, leaving you responsible for those deaths? The possibilities multiply when it's a car on a street: for example, what if one of the people is crossing against the light? Does that make it all their fault? But what if they're blind?
And so on. It's a revealing and flexible exercise that makes people (frequently undergrads taking Intro to Philosophy) examine the many questions involved in how we value the lives of others, how we view our own responsibility, and so on.
But it isn't a good way to create an actionable rule for real-life use.
After all, you don't see convoluted moral logic on signs at railroad switches instructing operators on an elaborate hierarchy of the values of various lives. This is because the actions and outcomes are a red herring; the point of the exercise is to illustrate the fluidity of our ethical system. There's no trick to the setup, no secret correct answer to calculate. The goal is not even to find an answer, but to generate discussion and insight. So while it's an interesting question, it's fundamentally a question for humans, and consequently not really one our cars can or should be expected to answer, even with strict rules from their human engineers.

A self-driving car can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain.