The Math of Sisyphus

"There is but one truly serious question in philosophy, and that is suicide," wrote Albert Camus in The Myth of Sisyphus. This is equally true for a human navigating an absurd existence and an artificial intelligence navigating a morally insoluble situation. As AI-powered vehicles take the road, questions about their behavior are inevitable — and the escalation to matters of life or death equally so.

This curiosity often takes the form of asking whom the car should steer toward should it have no choice but to hit one of a variety of innocent bystanders. Men? Women? Old people? Young people? Criminals? People with bad credit? There are a number of reasons this question is a silly one, yet at the same time a deeply important one.
But as far as I'm concerned, there is only one real solution that makes sense: when presented with the possibility of taking a life, the car must always first attempt to take its own.

The trolley non-problem

First, let's get a few things straight about the question we're attempting to answer. There is unequivocally an air of contrivance to the situations under discussion.
That's because they're not plausible real-world situations but mutations of a venerable thought experiment often called the "Trolley Problem." The most familiar version dates to the '60s, but versions of it can be found going back to discussions of utilitarianism, and before that in classical philosophy. The problem goes: a train car is out of control, and it's going to hit a family of five who are trapped on the tracks. Fortunately, you happen to be standing next to a lever that will divert the car to another track… where there's only one person.
Do you pull the switch? Okay, but what if there are ten people on the first track? What if the person on the second one is your sister? What if they're terminally ill? If you choose not to act, is that in itself an act, leaving you responsible for those deaths? The possibilities multiply when it's a car on a street: for example, what if one of the people is crossing against the light — does that make it all their fault? But what if they're blind? And so on.

It's a revealing and flexible exercise that makes people (frequently undergrads taking Intro to Philosophy) examine the many questions involved in how we value the lives of others, how we view our own responsibility, and so on. But it isn't a good way to create an actionable rule for real-life use. After all, you don't see convoluted moral logic on signs at railroad switches instructing operators on an elaborate hierarchy of the values of various lives.
This is because the actions and outcomes are a red herring; the point of the exercise is to illustrate the fluidity of our ethical system. There's no trick to the setup, no secret "correct" answer to calculate. The goal is not even to find an answer, but to generate discussion and insight. So while it's an interesting question, it's fundamentally a question for humans, and consequently not really one our cars can or should be expected to answer, even with strict rules from their human engineers. A self-driving car can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain. And it must also be acknowledged that these situations are going to be vanishingly rare.
Most of the canonical versions of this thought experiment — five people versus one, or a kid and an old person — are so astronomically unlikely to occur that even if we did find a best method that a car should always choose, it'll only be relevant once every trillion miles driven or so. And who's to say whether that solution will be the right one in another country, among people with different values, or in 10 or 20 years? No matter how many senses and compute units a car has, it can no more calculate its way out of an ethical conundrum than Sisyphus could have calculated a better path by which to push his boulder up the mountain. The idea is, so to speak, absurd. We can't have our cars attempting to solve a moral question that we ourselves can't.

Yet somehow that doesn't stop us from thinking about it, from wanting an answer. We want to somehow be prepared for the situation even though it may never arise.
What's to be done?

Implicit and explicit trust

The entire self-driving car ecosystem has to be built on trust. That trust will grow over time, but there are two aspects to be considered. The first is implicit trust.
This is the kind of trust we have in the cars we drive today: that despite being one-ton metal missiles propelled by a series of explosions and filled with high-octane fuel, they won't blow up, fail to stop when we hit the brakes, spin out when we turn the wheel, and so on. That we trust the vehicle to do all this is the result of years and years of success on the part of car manufacturers. Considering their complexity, cars are among the most reliable machines ever made. That's been proven in practice, and most of the time we don't even think of the possibility of the brakes not catching when the pedal is depressed. You trust your personal missile to work the way you trust a fridge to stay cold. Let's take a moment to appreciate how amazing that is.

Self-driving cars, however, introduce new factors, unproven ones.
Their proponents are correct when they say that autonomous vehicles will revolutionize the road, reduce traffic deaths, shorten commutes, and so on. Computers are going to be much better drivers than us in countless ways. They have superior reflexes, can see in all directions simultaneously (not to mention in the dark, and around or through obstacles), communicate and collaborate instantly with nearby vehicles, immediately sense and potentially fix technical problems… the list goes on. But until these amazing abilities lose their luster and become just more pieces of the transportation tech infrastructure that we trust, they'll be suspect.
That part we can't really accelerate except, paradoxically, by taking it slow and making sure no highly visible outlier events (like that fatal Uber crash) arrest the zeitgeist and set back that trust by years. Make haste slowly, as they say. Few people remember anti-lock brakes saving their lives, though it probably happened to several people reading this right now — it just quietly reinforced our implicit trust in the vehicle. And no one will remember when their car improved their commute by five minutes with a hundred tiny improvements. But they sure do remember that Toyotas killed dozens with bad software that locked the cars' accelerators.

The second part of that trust is explicit: something that has to be communicated, learned, something of which we are consciously aware. For cars there aren't many of these.
The rules of the road differ widely and are flexible — some places more than others — and on ordinary highways and city streets we operate our vehicles almost instinctively. When we are in the role of pedestrian, we behave as a self-aware part of the ecosystem — we walk, we cross, we step in front of moving cars because we assume the driver will see us, avoid us, stop before they hit us. This is because we assume that behind the wheel of every car is an attentive human who will behave according to the rules we have all internalized.

Nevertheless, we have signals, even if we don't realize we're sending or receiving them; how else can you explain how you know that truck up there is going to change lanes five seconds before it turns its blinker on? How else can you be so sure a car isn't going to stop, and hold a friend back from stepping into the crosswalk? Just because we don't quite understand it doesn't mean we don't exert it or assess it all the time.
Making eye contact, standing in a place implying the need to cross, waving, making space for a merge, short honks and long honks… it's a learned skill, and a culture- or even city-specific one at that.

Cold blooded

With self-driving cars there is no humanity in which to place our trust. We trust other people because they're like us; computers are not like us. In time, autonomous vehicles of all kinds will become as much a part of the accepted ecosystem as automated lights and bridges, metered freeway entrances, parking monitoring systems, and so on.
Until that time we will have to learn the rules by which autonomous vehicles operate, both through observation and straightforward instruction. Some of these habits will be easily understood; for instance, maybe autonomous vehicles will never, ever try to make a U-turn by crossing a double yellow line. I try not to myself, but you know how it is; I'd rather do that than go an extra three blocks to do it legally. But an AV will perhaps scrupulously adhere to traffic laws like that. So there's one possible rule. Others might not be quite so hard and fast.
Merging and lane changes can be messy, but perhaps it will be the established pattern that AVs will always brake and join the line further back rather than try to move up a spot. This requires a little more context and the behavior is more adaptive, but it's still a relatively simple pattern that you can perceive and react to, or even exploit to get ahead a bit (please don't). It's important to note that, like the trolley problem "solutions," there's no huge list of car behaviors that says always drop back when merging, always give the right of way, never this, this if that, and so on. Just as our decision to switch or not switch tracks proceeds from a higher-order process of morality in our minds, these autonomous behaviors will be the natural result of a large set of complicated evaluations and decision-making processes that weigh hundreds of factors like positions of nearby cars, speed, lane width, etc.
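To make that slightly more concrete, here is a deliberately toy sketch of what "weighing factors" can mean in code. Everything in it (the maneuver names, the factors, the weights and the numbers) is invented purely for illustration; no real autonomous-driving stack is this simple or uses these names.

```python
# Toy illustration only: score candidate maneuvers by a weighted sum of
# made-up factors and pick the lowest-cost one. All names and numbers here
# are invented for this example.

CANDIDATES = {
    "merge_now":      {"gap_to_next_car_m": 4.0,  "speed_delta_mps": 3.0, "lane_margin_m": 0.3},
    "brake_and_wait": {"gap_to_next_car_m": 12.0, "speed_delta_mps": 1.0, "lane_margin_m": 0.6},
}

# Negative weights reward a factor (a bigger gap is better); positive weights
# penalize it (large speed differences are uncomfortable).
WEIGHTS = {"gap_to_next_car_m": -2.0, "speed_delta_mps": 1.5, "lane_margin_m": -4.0}

def cost(factors):
    """Weighted sum of factors; lower is better."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

best = min(CANDIDATES, key=lambda name: cost(CANDIDATES[name]))
print(best)  # -> "brake_and_wait": the cautious option scores lower here
```

In this toy setup the cautious maneuver wins whenever the numbers look like this, which is exactly the kind of consistent, observable "style" described above.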
But I think they'll be reliable enough in some ways and in some behaviors that there will definitely be a self-driving "style" that doesn't deviate too much. Although few if any of these behaviors are likely to be dangerous in and of themselves, it will be helpful to understand them if you are going to be sharing the road with them. Imperfect knowledge is how we get accidents to begin with.
Establishing an explicit trust relationship with self-driving vehicles is part of the process of accepting them into our everyday lives. But people naturally want to take things to their logical ends, even if those ends aren't really logical. And as you consider the many ways AVs will drive and how they will navigate certain situations, the "but what if…" scenarios naturally get more and more dire and specific as variables approach limits, and ultimately you arrive at the AV equivalent of the trolley problem that we started with.
What happens when the car has to make a choice between people? It's not that anyone even thinks it will happen to them. What they want to know, as a prerequisite to trust, is that the system is not unprepared, and that the prepared response is not one that puts them in danger. People don't want to be the victim of the self-driving car's logic, even theoretically — that would be an impassable barrier to trust. Because whatever the scenario, whoever it "chooses" between, one of those parties is undeniably the victim.
The car got on the road and, following its ill logic to the bitter end, homed in on and struck this person rather than that one. If neither of the people in this AV-trolley problem can by any reasonable measure be determined to be the "correct" one to choose, especially from their perspective (which must after all be considered), what else is there to do? Well, we have to remember that there's one other "person" involved here: the car itself.

Is it self-destruction if you don't have a self?

My suggestion is simply that it be made a universal policy that should a self-driving car be put in a situation where it is at serious risk of striking a person, it must take whatever means it can to avoid it — up to and including destroying itself, with no consideration for its own "life." Essentially, when presented with the possibility of murder, an autonomous vehicle must always prefer suicide. It doesn't have to detonate itself or anything.
It just needs to take itself out of the action, and a robust improvisational engine can be produced to that end just as well as for avoiding swerving trucks, changing lanes suddenly and any other behavior. There are telephone poles, parked cars, trees — take your pick; any of these things will do as long as they stop the car.
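As a purely hypothetical sketch of what that policy amounts to (none of the names, classes or numbers below come from any real system), the rule is simply a hard ordering on outcomes: any maneuver that strikes a person outside the car is treated as infinitely worse than any maneuver that merely damages the car.

```python
# Hypothetical sketch of the "prefer self-destruction" rule as a hard ordering
# on outcomes. Every name and number here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    hits_person_outside: bool
    damage_to_vehicle: float  # 0.0 (untouched) to 1.0 (totaled)

def outcome_cost(o: Outcome) -> float:
    # Striking a person outside the car dominates every other consideration;
    # vehicle damage only breaks ties among the remaining options.
    if o.hits_person_outside:
        return float("inf")
    return o.damage_to_vehicle

options = [
    Outcome("continue straight into pedestrian", True, 0.1),
    Outcome("swerve into telephone pole", False, 0.9),
    Outcome("hard brake into parked car", False, 0.5),
]

print(min(options, key=outcome_cost).description)  # -> hard brake into parked car
```

The only point of the ordering is that no amount of preserved sheet metal can outweigh a person outside the car; which pole or parked car the vehicle actually picks is left to the same avoidance machinery it uses every day.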
The objection, of course, is that there is likely to be a person inside the self-driving car. Yes — but this person has consented to the inherent risk involved, while the people on the street haven't. While much of the moral calculus of the trolley problem is academic, this bit actually makes a difference. Consenting to the risks of using a self-driving system means the occupant acknowledges that, should such a situation arise, however remote the possibility, they may be its victim. They are the ones who will explicitly consent to trust their lives to the logic of the self-driving system.
Furthermore, as a practical consideration, the occupant is, so to speak, on the soft side of the car. As we've already established, it's unlikely a car will ever have to do this. But what it does is provide a substantial and easily understood answer when someone asks the perfectly natural question of what an autonomous vehicle will do when it is careening toward a pedestrian. Simple: it will do its level best to destroy itself first.

There are extremely specific and dire situations to which there will never be a solution as long as there are moving cars and moving people, and self-driving vehicles are no exception to that. You'll never run out of imaginary scenarios in which any system, human or automated, fails.
But it is in order to reduce the number of such scenarios and help establish trust, not to render tragedy impossible, that every self-driving car should robustly and provably prefer its own destruction to that of a person outside itself. We are not aiming for a complete solution, just an intuitive one. Self-driving cars will, say, always brake to merge, never cross a double yellow in normal traffic, and so on and so forth — and will crash themselves rather than hit a pedestrian. Regardless of the specifics and limitations of the model, that's a behavior anyone can understand, including those who must consent to it.

Although even the most hard-bitten existentialist would be unlikely to support a systematic framework for suicide, it makes a difference when "suicide" is more likely to mean a fender bender and damage to one's pocket rather than the death or injury of another.
To destroy oneself is different when there is no self to destroy, and, practically speaking, the risk to passengers, equipped with airbags and seat belts, is far less than the risk to pedestrians.

How exactly would this all be accomplished in practice? Well, it could of course be required by transportation authorities, like seat belts and other safety measures. But unlike seat belts, the proprietary and complex inner workings of an autonomous system aren't easily verifiable by non-experts. There are ways, but we should be wary of putting ourselves in a position where we have to trust not a technology but the company that administers it.
Either can fail us, but only one can betray us. Perhaps there will be no need to rely on regulators, though: no brand of car wants to have its vehicles associated with running down a pedestrian.
Today there are probably more accidents in Civics and Camrys than anything else, but no one thinks that makes them dangerous to drive — it just means more people drive them, and their drivers make mistakes like anyone else. On the other hand, if an automaker's brand of self-driving vehicle hits someone, it's obvious (and right) that the company will bear the blame. And consumers will see that — for one thing, it will be widely reported, and for another, there will probably be highly robust tracking of this kind of thing, including footage and logs from these accidents. If automakers want to avoid pedestrian strikes and fatalities, they will incorporate something like this self-destruction protocol in their cars as a last line of defense, even if it leads to a net increase in autonomous collisions. It would be much better to be known for having a cautious AI than a killer one.
So I think that, like other safety mechanisms, this or something like it will be included and, I hope, publicized on every car not because it's required, but because it makes sense. People deserve to know how things like self-driving cars work, even if few people on the planet can truly understand the complex computations and algorithms that govern them. They should, like regular cars, be understandable at a surface level. This case of understanding them at an extreme end of their behavior is not one that will be relevant every day, but it is a crucial one, because it is something that matters to us at a gut level: knowing that these cars aren't evaluating us as targets via mysterious and fundamentally inadequate algorithms.

To repurpose Camus: "These are facts the heart can feel; yet they call for careful study before they become clear to the intellect." Start with a simple solution we feel to be just and work backward from there. And soon — because this is no longer a thought experiment.