
How Do You Solve a Problem Like the Trolley?

[Image: Google Car. Photo: smoothgroover22 / CC 2.0]

Imagine you are driving down a two-lane road at about 45 miles per hour, cruising home. You see a group of kids walking home from school about 100 yards ahead. Just as you're about to pass by them, an oncoming 18-wheeler swerves out of its lane and is about to hit you head-on. You have seconds, tops, to decide: Sacrifice yourself, or hit the children so you can avoid the truck.

I like to think that, if asked in advance, most people would choose not to plough into the kids. As the automation of driving advances, there's a way to "hard-code" that decision into vehicles. Many cars already detect whether a toddler in a driveway is about to be run over by a driver with a blind spot. They even beep when other vehicles are in danger of being bumped. Transitioning from an alert system to a hard-wired hard stop is technically possible. And if that's possible, so is an automatic brake that would prevent a driver from swerving to save herself at the expense of many others.
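
To make the alert-to-override transition concrete, here is a minimal sketch in Python. Everything in it, the SensorReading fields, the thresholds, and the action names, is an illustrative assumption, not any manufacturer's actual system.

```python
# Illustrative sketch only: escalating a pedestrian-detection alert into a
# hard-wired braking override. All names and thresholds are assumptions,
# not any carmaker's real API.

from dataclasses import dataclass

@dataclass
class SensorReading:
    pedestrian_detected: bool
    time_to_collision_s: float  # estimated seconds until impact

ALERT_THRESHOLD_S = 3.0     # close enough to warn the driver
OVERRIDE_THRESHOLD_S = 1.0  # close enough to take control from the driver

def respond(reading: SensorReading) -> str:
    """Map a sensor reading to an action. Today's cars stop at 'alert';
    a hard-wired hard stop adds the 'brake' branch."""
    if not reading.pedestrian_detected:
        return "cruise"
    if reading.time_to_collision_s <= OVERRIDE_THRESHOLD_S:
        return "brake"  # automatic stop the driver cannot override
    if reading.time_to_collision_s <= ALERT_THRESHOLD_S:
        return "alert"  # beep, as many cars already do
    return "cruise"
```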

But the decision can also be coded the other way: to put the car occupants' interests above all others. Christoph von Hugo, Mercedes' manager of driver assistance systems, active safety, and ratings, appeared to push this vision of the future of more fully autonomous vehicles in a recent interview in Car and Driver. "You could sacrifice the car, but then the people you've saved, you don't know what happens to them after that in situations that are often very complex, so you save the ones you know you can save," he said. "If you know you can save at least one person, at least save that one. Save the one in the car." (Mercedes later said that Hugo was "quoted incorrectly" and that "[F]or Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives. Our development work focuses on completely avoiding dilemma situations by, for example, implementing a risk-avoiding operating strategy in our vehicles.")
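
As a thought experiment, an occupant-first rule can be written down as a ranking over candidate maneuvers. The sketch below is hypothetical: the Maneuver fields and the example numbers are invented to show why the choice of comparator is itself an ethical decision, and it makes no claim about Mercedes' software.

```python
# Hypothetical: an occupant-first rule ("save the one in the car") expressed
# as a ranking over candidate maneuvers. Fields and numbers are invented.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_survival: float  # estimated probability the occupants survive
    others_at_risk: int       # bystanders this maneuver endangers

def occupant_first(options: list[Maneuver]) -> Maneuver:
    # Maximize occupant survival; spare bystanders only as a tiebreaker.
    return max(options, key=lambda m: (m.occupant_survival, -m.others_at_risk))

swerve = Maneuver("swerve toward pedestrians", occupant_survival=0.95, others_at_risk=5)
stay = Maneuver("brake and take the impact", occupant_survival=0.40, others_at_risk=0)
print(occupant_first([swerve, stay]).name)  # -> swerve toward pedestrians
```

Reordering one line of that comparator, say, minimizing others_at_risk first, encodes the opposite ethics. That is the whole dispute, compressed into a sort key.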

Some ethicists classify decisions like von Hugo's as a solution to a "trolley problem," after the famous series of thought experiments presented by Judith Jarvis Thomson to challenge simple utilitarianism. Jarvis Thomson, a professor of philosophy, stylized ethical dilemmas in a series of hypotheticals. Would you divert an oncoming trolley away from hitting five schoolchildren if your decision meant it killed one person instead? Would you push a very large man over a bridge onto the tracks in front of the trolley to slow it down and keep it from hitting another person? The trolley problem was a classic example of an "intuition pump," capable of eliciting responses ranging from the judicious to the zany. It's even satirized in internet memes.

So how do you solve a trolley problem? Some believe the answer is to give car owners ever more granular control. Enlightened drivers might choose a general rule of "save me first" but soften it to include more self-sacrificial options in case of mass casualties. Or they might not. Mere awareness that others are not willing to sacrifice for the common good could push drivers toward selfishness, or worse. The same individualism that has depressed U.S. organ donation rates would probably be even more influential in driver decision-making here.
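
What "granular control" might look like in practice is an owner-editable ethics profile. The sketch below is purely speculative; the field names, defaults, and rule strings are assumptions meant to show how a "save me first" default could be softened for mass-casualty cases.

```python
# Speculative sketch of an owner-set ethics profile. Field names, defaults,
# and rule strings are invented for illustration.

from dataclasses import dataclass

@dataclass
class EthicsProfile:
    default_rule: str = "save_me_first"
    allow_self_sacrifice: bool = False  # the "or they might not" case
    mass_casualty_threshold: int = 3    # soften the rule above this many lives

def choose_rule(profile: EthicsProfile, bystanders_at_risk: int) -> str:
    """Return the rule the car should apply in an unavoidable crash."""
    if (profile.allow_self_sacrifice
            and bystanders_at_risk >= profile.mass_casualty_threshold):
        return "minimize_total_harm"
    return profile.default_rule

# An "enlightened" owner opts in; the default owner does not.
print(choose_rule(EthicsProfile(allow_self_sacrifice=True), bystanders_at_risk=5))
print(choose_rule(EthicsProfile(), bystanders_at_risk=5))
```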

So perhaps increasingly autonomous cars should abide by common rules, setting the same terms of safety and danger for all. The Moral Machine project at the Massachusetts Institute of Technology is soliciting feedback on user responses to simulated crash dilemmas. With a large enough data set on how research subjects respond to simulated crashes, programmers might try to ensure that car code of the future reflects our current judgments (or at least those of the people who participate in the Moral Machine). For example, if 80 percent of subjects chose self-sacrifice in the "hit the truck or the children" scenario at the beginning of this article, that could become the coded rule for such tragic choices. Programmers might also tilt the code in a more utilitarian direction, nudging automation toward better societal outcomes.
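
The aggregation step this paragraph imagines, survey responses in, coded rule out, could be as simple as a majority vote. In the toy sketch below, the response labels and the quorum are illustrative assumptions; only the 80 percent figure comes from the article's example.

```python
# Toy sketch: deriving a coded crash rule from crowd responses by majority
# vote. Response labels and the quorum are illustrative assumptions.

from collections import Counter

def majority_rule(responses: list[str], quorum: float = 0.5) -> str:
    """Adopt whichever choice a clear majority of subjects made."""
    choice, count = Counter(responses).most_common(1)[0]
    return choice if count / len(responses) >= quorum else "undecided"

# e.g., 80 of 100 subjects chose self-sacrifice in the truck-or-children case
responses = ["self_sacrifice"] * 80 + ["hit_children"] * 20
print(majority_rule(responses))  # -> self_sacrifice
```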

Noodling about variations on the trolley problem could occupy car-makers, programmers, and research subjects for years. What if only one child were sacrificed by a decision to avoid the truck? Do elderly persons deserve more, less, or the same consideration as children? But a better question might be: Why are automobiles traveling so close to pedestrians in the first place? The nonprofit safety advocacy organization Transportation for America has documented the enormous (and troubling) variation among pedestrian death rates in major American cities. The worst places, such as suburbs and exurbs, feature urban design that makes it all too easy for drivers of any stripe, man or machine, to crash into pedestrians. Safety is not just a problem of code; physical infrastructure matters, too. And the disastrous scenario with the 18-wheeler and the group of kids might never happen if proper dividers separated oncoming lanes of traffic.

Even if those stronger barriers don't come to pass, though, worry over trolley problems should not freeze autonomous car initiatives. Human error is the root cause of thousands of traffic deaths each year. The Department of Transportation has rightly encouraged self-driving cars' development, and local authorities could do more to advance their adoption. But the question of who is sacrificed in tragic scenarios is not one that can be submerged in the general utilitarian calculus of lives saved via robot cars. Both law and software code have an expressive dimension as well, favoring some of our values over others.

To preserve those values, we need to avoid uncoordinated, individualized programming choices made by each individual automaker. Libertarians might call the "driver-first" approach an inevitable, market-based "solution" to trolley problems. But the market here wouldn't be complete without giving potential victims of the car a chance to pay its programmers not to hit them. It's not hard to imagine who would win that bidding war. For self-driving cars, a "devil take the hindmost" option of self-protection above all else would further erode already fraying social solidarity.

It's important to remember, though, that this isn't the only moral problem that comes with increasing highway automation. As Kate Crawford and Ryan Calo have written,

The trolley problem offers little guidance on the wider social issues at hand: the value of a massive investment in autonomous cars rather than in public transport; how safe a driverless car should be before it is allowed to navigate the world (and what tools should be used to determine this); and the potential effects of autonomous vehicles on congestion, the environment or employment.

There is already concern that the firms most likely to control fleets of self-driving cars aim to replace (rather than complement) existing public infrastructure. We could call this the "no trolley, bus, or subway" problem: increasing carbon footprints, congestion, and marginalization of underserved communities thanks to bad transport policy.

There will always be conflicts among cars, pedestrians, robots, drones, and bikers over the proper share of space and respect each deserves. We need individualistic, technical solutions to some of the problems that will result as new technologies arrive and robot delivery services share streets and sidewalks with people. But we also need holistic, big-picture thinking. As policymakers write the rules of the road for 21st-century mobility, they should listen to the urban planners, social scientists, and advocates who've spent decades thinking about how to build better, more livable communities. Transport isn't just a technical problem: It's a human and social one, with political implications far beyond arid intellectual models of utilitarian markets.

This was originally published in Future Tense, a collaboration among Arizona State University, New America, and Slate.
