The Trolley Problem & Driverless Cars

We’re back with our Edge series, “What scientific concept ought to be better known?” Today’s entry is from Daniel Rockmore on the “Trolley Problem.”

This is an over-worked philosophical or ethical “dilemma,” but here it is given fresh life by thinking about how to program automated vehicles to “solve” trolley problems.

“The Trolley Problem” is another thought experiment, one that arose in moral philosophy. There are many versions, but here is one: A trolley is rolling down the tracks and reaches a branchpoint. To the left, one person is trapped on the tracks, and to the right, five people. You can throw a switch that diverts the trolley from the track with the five to the track with the one. Do you? The trolley can’t brake. What if we know more about the people on the tracks? Maybe the one is a child and the five are elderly? Maybe the one is a parent and the others are single? How do all these different scenarios change things? What matters? What are you valuing and why?

It might be guessed that most people wouldn’t figure out how to use the switch in time to implement whatever decision they’d make, if indeed a person could come to a rational decision in so short a time. Who could consider the matter and believe with certainty that the brakes wouldn’t work? Is there enough time, or would people believe there is enough time, to issue some kind of warning? Perhaps that red button over there. So the dilemma in philosophy is a weak one, especially since articulating all the premises one is working with in such a panicked situation will be next to impossible.

But what if you had to write code so that if the trolley’s brakes failed it had to respond in a certain way?

As we increasingly offload our decisions to machines and the software that manages them, developers and engineers will be confronted with having to encode—and thus directly code—important and potentially life-and-death decision making into machines. Decision making always comes with a value system, a “utility function,” whereby we do one thing rather than another because one pathway reflects a greater value for the outcome than the other.
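To make “utility function” concrete, here is a toy sketch. Everything in it is an assumption for illustration: the outcomes, the scoring rule, and the function names are invented, not anyone’s production code.

```python
# A toy "utility function" for the trolley branch point. The scoring
# rule below (count only expected deaths) is the programmer's value
# system, smuggled in as code; it is invented for illustration.

def utility(outcome):
    """Score an outcome; higher is better."""
    return -outcome["expected_deaths"]

def choose_track(outcomes):
    """Pick the branch whose predicted outcome scores highest."""
    return max(outcomes, key=utility)["track"]

outcomes = [
    {"track": "left",  "expected_deaths": 1},
    {"track": "right", "expected_deaths": 5},
]
print(choose_track(outcomes))  # -> left
```

Change the scoring rule (weight children more? weight passengers at all?) and the same machinery calmly picks a different victim. That, and not the arithmetic, is where the ethics lives.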

The question is: how different is a driverless car from a drivered car? There is already plenty going on in cars with drivers, and there was, too, before the electronic invasion. If your brakes go out and you have the choice to plow into a sea of (soft) civilians or into the deadly sea itself, what would you do? In that split second, you’d regret having to kill the odd stranger, but maybe you figure people will jump out of the way in time?

And the situations with which drivers are confronted are many. Think of the multitude of decisions you make in fast-slow highway traffic, with maniacs jumping lanes to get head and idiots who drive slow in the left lane, and those nuts who just have to merge in front of you. Wait! Traffic is stopped! No. It only slowed. Wait! It stopped for real! Now it’s going fast; too fast. Can’t these people see the speed limit signs?

Finally you’re off the highway! The off ramp is slick in spots since it snowed a few hours ago. Black ice! Now snow drifts. Now a puddle. That damn truck splashed half of it on the windscreen and the computer driver’s sensors! At least there is time to use the wipers to scrape off the gunk now that we’re stopped at this light.

Say, what’s wrong with this damned light? It hasn’t changed. It’s busted! Wait! That guy thinks it’s his turn! It’s clearly mine; I’ve been waiting here behind this old lady. Now this other clown is pulling out, too!

We will build driverless cars and they will come with a moral compass—literally. The same will be true of our robot companions. They’ll have values and will necessarily be moral machines and ethical automata, whose morals and ethics are engineered by us. “The Trolley Problem” is a gedankenexperiment for our age, shining a bright light on the complexities of engineering our new world of humans and machines.

I’m not sure how shiny this light is, nor how bright. Engineers have some job ahead of them.

16 Comments

  1. Gary

    Driverless cars will be just another opportunity for the lawyers to get even richer sorting it out.

  2. Michael Dowd

    Now if all the cars were driver-less and all had the same moral structure, wouldn’t this guarantee safety for pedestrians and all the folks riding in these vehicles?

  3. Sheri

    Gary: Absolutely.

    Michael: No, it would ensure equality. Equality is not safety.
    (Suppose Hitler wrote the code for the “morals” of the driver-less cars. Now suppose Gandhi wrote it. Both would write code that is “equal,” but I doubt both would have the same outcome. One protects, one destroys.)

    The moral dilemma has indeed been around for centuries. My husband once took out a fire hydrant (which, sadly, does not erupt in a waterfall like you see on TV) to avoid hitting a woman driving a Subaru. He was driving a 1968 International Travelall, which would have crushed the Subaru. Not much of a choice, really, but it does illustrate that one chooses all the time. The only difference with machines is this is not an individual choice, but one forced on society by the machine programmers. If it’s wrong, it’s wrong everywhere. There’s no one to make the “right” choice and there may be fewer opportunities to learn better choices. Guilt and pain motivate people to change their behaviors and make better choices (mostly). A programmer who never directly sees the results of his program has no such incentive. The lawyers Gary mentions may or may not help—they only want a new boat and a new mansion in the tropics. They care nothing about fixing problems. If they help, it will be accidental.

  4. Michael Dowd

    Thanks Sheri. I think most folks are not going to like this world of robots; they will feel redundant. This will, of course, require an anti-redundancy drug for either temporary or permanent relief. Scary times ahead.

  5. Senghendrake

    I believe people jump lanes to get *ahead*, not to get head.

  6. BrianH

    Self-driving cars are 100% utilitarian, meaning they will take you from your starting point to your destination, a great idea if you’re going to be on the interstate for hundreds of miles. However, we love our cars for more than utility. We go for a Sunday drive around town, take the family up and down neighborhood blocks to look at Christmas lights, drive around to see if we can find something good to eat, etc. The car is an extension of our natural freedom of mobility. They “respond” to our brain much the same way as our hands or feet do, which is to say, subconsciously. How would you program the following scenario? It’s lunchtime, I’m hungry, there’s a Wendy’s up ahead, sounds good, nah, line is too long, there’s a Burger King in another block, turn in to the drive-thru, on second thought I have to pee, I’ll park instead. Not this spot, it’s too far away and it’s raining, I’ll circle and see if I can find a closer spot.

    The industry push seems to be for 100% full-time self-driving. I don’t believe they will ever be accepted by the public unless there is a way to switch it off. This means the government will have to mandate them, for our “safety” of course, just like the massive surveillance state is for our safety.

  7. DAV

    It’s called a dilemma because there is no answer. On the surface it would seem that running over fewer people should be the choice, but is it? What if it were a choice between running over a little kid and five gangbangers?

    We have had a form of driverless cars for some time. The DC Metro has an ‘operator’ whose sole job is to push the STOP button. Airplanes can be landed automatically. It’s called a CAT III landing. Human pilots are required but would likely be unable to respond after breaking out at minimum visibility.

    I suspect that drive-throughs will need modifications for driverless cars. When the line spills into the street (happens around here a lot) how would the end of the line be detected? Could such a line be distinguished from a line of parked cars?

  8. Sander van der Wal

    The real ‘Trolley Problem’ is that the trolley can’t brake. An engineer therefore would put a proper brake on a trolley.

    And tie a philosopher or two at the front, to act as shock absorbers. And some others at the back for the same reason. So these philosophers are potentially going to do something useful, for a change.

  9. Anon

    My problem with “self-driving” anything is that it means that humans will have less developed motor skills (sorry) and diminished depth perception. GPS has already made people who were okay drivers into unthinking, dependent drones. As for the cars outfitted with sensors—this, too, makes for poorer drivers, as they expect the machine will save them from careening into the car in the next lane as they go over the lines. Technology is already eroding basic driving skills, which seems like it is an argument FOR self-driving vehicles, when it really is an argument for less technology.

    To be a licensed driver, one has to show some basic smarts regarding operation of the vehicle and rules of the road. I wonder if the declining effectiveness of public education has anything to do with this agenda.

    Self-driving cars are just another nail in the coffin of self-sufficiency and self-determination.

  10. JMJ

    We drive far too much. It’s getting ridiculous. Whatever happened to working from home in this new “information age”? It reminds me of the “paperless office.” If you wanted a ton of new paper, that’s what it accomplished. Making driving more convenient and easier and lazier seems to me to be about as worthy a goal as Trump’s silly coal campaign. What the hell good comes of this?

  11. MattS

    RE: Trolley Problem.

    How many people are on the Trolley?

    Can it go straight instead of making either turn?

    How about getting my vehicle in front of the trolley and using my brakes to stop the trolley? I’ve read about a couple of cases where truck drivers stopped runaway cars that way.

    Can I jam the switch so the trolley derails instead of making either turn?

  12. Bob

    Designing a driverless car is a complex thing, so you break it down into manageable pieces. I don’t agree that the choice for the driving software is which person or persons to hit. Designing the trolley’s safety system is similar to designing safety features in a nuclear plant, assembly line, or any other mechanical system.

    The system is designed for all known problems. You will never know all the possible problems, so you have to plan on an absolute worst-case scenario as a last option. As with a nuclear plant like Fukushima, everything went wrong, but if there had to be a meltdown, the foundations were designed for as much containment as possible. We can say that something in the Fukushima safety design worked. That was their last-ditch stand (an engineering term).

    In the case of a trolley, the absolute worst situation would be all other safety levels failing; for that case, the car must have a mechanical stopping system independent of the normal braking system. This may be making the wheels fall off automatically, or something similar that prohibits movement of the vehicle.

    I do not envy the people who design these systems and their sensors. Nothing is perfect. We just might have to accept as a society that we are going to kill some people in the process of automating our transportation systems. When philosophy runs into reality we have to do these things.
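    To make the layering concrete, here is a toy sketch; the level names and their pass/fail results are invented for illustration, not a real control system:

    ```python
    # Toy sketch of layered safety: try each level in order and fall back
    # to the last-ditch mechanical stop when everything above it fails.
    # The levels and their outcomes here are invented for illustration.

    def service_brake():   return False  # pretend the normal brakes failed
    def backup_brake():    return False  # pretend the backup failed too
    def mechanical_stop(): return True   # last-ditch stand: always available

    SAFETY_LEVELS = [service_brake, backup_brake, mechanical_stop]

    def stop_vehicle():
        for level in SAFETY_LEVELS:
            if level():                # first level that works handles it
                return level.__name__
        raise RuntimeError("all safety levels failed")

    print(stop_vehicle())  # -> mechanical_stop
    ```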

  13. PEHarvey

    You need to have brakes ON by default on these heavy, rail-guided machines. It used to be the case with steam locomotives: steam pressure was needed to counteract the spring-loaded brakes.
    With such brakes the Lac-Mégantic accident would not have occurred.
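    In software terms the same fail-safe idea looks something like this toy illustration (the signal names are made up):

    ```python
    # Toy fail-safe (normally-engaged) brake: the brake stays ON unless a
    # healthy release signal actively holds it off, so losing the signal
    # or the pressure applies the brake by default. Names are invented.

    def brake_engaged(release_signal_present: bool, pressure_ok: bool) -> bool:
        # Any failure (no signal, no pressure) leaves the brake applied.
        return not (release_signal_present and pressure_ok)

    print(brake_engaged(True, True))   # False: healthy system, brake released
    print(brake_engaged(True, False))  # True: pressure lost, brake applies
    print(brake_engaged(False, True))  # True: signal lost, brake applies
    ```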

  14. acricketchirps

    It’s not called a dilemma because there’s no answer; it’s called a dilemma because there are two (bad, if that’s not beside the point) answers.

  15. Milton Hathaway

    “Engineers have some job ahead of them.”

    Not really – we just do what the lawyers tell us to do. Or adhere to an agency standards document. Or fall back on “best practices” for the less earth-shattering decisions. Engineering isn’t quite the highly respected profession it once was. A lot of design work is outsourced overseas, or foreign engineers are brought here on various work permits, at 1/2 or 1/3 the cost. The quality isn’t quite there yet, and employers understand this, which I know to be true because I still have a job. I was hoping to be telecommuting by now (which would make JMJ happy), from a beach, but any engineering work that can be done remotely is outsourced to a low-cost region.

    There’s a concept commonly used in engineering that users may not be aware of. Many products are designed to be safe with a “single fault”. A single fault might be a failed component, a user-defeated interlock, or a user doing something stupid with a product. High volume consumer products might go beyond this single-fault criterion, and of course designs like aircraft and dams and nuclear power plants must. But I marvel when I see people continuing to use a potentially dangerous product with an obvious fault – there may be nothing in the design left to protect them if (or when) something else goes wrong.
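    A minimal sketch of the single-fault idea, with every name and check invented for illustration:

    ```python
    # Toy single-fault sketch: two independent checks must both pass before
    # the hazardous action runs, so any ONE failed check (a single fault)
    # still degrades to a safe state. The checks are invented examples.

    def interlock_closed() -> bool: return True
    def sensor_in_range()  -> bool: return False  # the single fault

    def run_hazardous_action():
        if interlock_closed() and sensor_in_range():
            return "running"
        return "safe shutdown"  # one fault alone cannot cause harm

    print(run_hazardous_action())  # -> safe shutdown
    ```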

    Self-driving cars? In our litigious society? No way, it’s just a fad that will pass. That’s not to say that the ‘problem’ won’t be solved another way. Maybe something seemingly bizarre, like a lane on the interstate with an embedded track with cars that can follow the track at 70 mph traveling bumper-to-bumper.

  16. Joe

    The real problem is that when a human is driving the trolley, either choice can be accepted/forgiven because we know that it would be a split-second decision made in the heat of the moment with no good options. Even if the human makes the choice we (later) decide to have been “worse”, we don’t necessarily hold them to be guilty.

    BUT a computer doesn’t make “mistakes”. A computer is programmed by someone sitting in an office with time to think about things and make a reasoned decision. A hundred human trolley drivers might go left 50 times and right 50 times, but the self-driving trolley will be programmed to do the same thing (probably go left) every time. Thus a programmer will have coded a machine to purposefully choose to kill the individual on the tracks. *That’s* the moral danger zone.
