Why Doesn’t the AV Industry Care About the “Trolley Problem”?


In the recent reportage on autonomous vehicles (AVs), certain ethical dilemmas are pervasive to the point of cliché. 

You know the sort. Suppose it's 2041, and Tesla has finally made good on its promise to deliver a true self-driving car, with "no action required by the person in the driver's seat." One of the latest models is cruising through town when three schoolchildren dash into the road ahead. What should the Tesla's programming tell it to do: hit the children, or swerve into one of the parked cars on the roadside, likely killing its sole passenger?

Part of the allure of such dilemmas is their philosophical pedigree, which traces back to the venerable “Trolley Problem”. In the classic version of this thought experiment, you must choose whether to let a runaway trolley crush five people in its path, or flick a switch to divert it onto a track with just one person on it. Do you let the trolley take its course, or save five lives by actively ending one?

The possible variations on this case are endless. What if you could only stop the trolley hitting the five people by pushing a large man onto the track? What if the choice is between killing five pensioners or your twelve-year-old daughter? What if we forget the trolley altogether, and instead imagine ourselves sipping an iced tea in Silicon Valley, wondering how to program an AV to handle matters of life and death? 

Bored of them though you may be, it is easy to see why trolley cases fascinate AV pundits and academics. The cases lend a certain vividness and emotional weight to some of our deepest worries about AVs, and about the consequences of handing serious moral decisions to lines of code written in Silicon Valley.

Those in the industry, however, seem largely unconcerned. Some simply remain silent. But a growing number are actively dismissing these dilemmas as irrelevant to AV design. 

The most common ploy is what I’ll call the “Won’t Happen” gambit, which says that pondering AV trolley cases is a waste of time, since they’re so vanishingly unlikely to actually occur on the roads.

As Google AV engineer Andrew Chatham puts it, “the main thing to keep in mind is that we have yet to encounter one of these problems. In all of our journeys, we’ve never been in a situation where you have to pick between the baby stroller or the grandmother.” 

Robotics entrepreneur Rodney Brooks makes the same point rather less diplomatically: "Just as these questions never come up for human drivers they won't come up for self-driving cars. It is pure mental masturbation dressed up as moral philosophy."  

This is, apparently, a common view in the industry, even among those who don't speak to the media. And it isn't confined to the industry: certain pundits, including AV-sceptic Christian Wolmar, have started to parrot the “Won’t Happen” rhetoric. Trolley cases, Wolmar writes, are “purely theoretical situation[s] that will occur very rarely”, and there are “far bigger and more relevant issues that need to be considered” (Driverless Cars: On a Road to Nowhere, p. 67).

It is true that, contrary to certain claims, fully autonomous cars remain a distant glint on the technological horizon, and that there are serious challenges ahead other than ethical ones. Still, the ethical obstacles should not be dismissed, and certainly not on a basis as flimsy as the “Won’t Happen” gambit.

One obvious issue is that even very unlikely events can and do happen; at a large enough scale, they can happen quite frequently.

Samuel Schwartz points out that, on some estimates, 12 trillion miles are driven every year worldwide. So even if a trolley case arose as rarely as, say, once per 12 billion miles, we should still expect about 1,000 of them a year (12 trillion divided by 12 billion). Truly autonomous vehicles, ones that never require human input, would need programming that can cope with such incidents, just as they'd need to be able to handle unusual obstacles and freak weather conditions.
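For the sceptics, the arithmetic can be made concrete with a back-of-the-envelope sketch in Python. The 12-trillion-mile figure is Schwartz's estimate quoted above; the once-per-12-billion-miles incident rate is purely an illustrative assumption, not a measured probability:

```python
# Back-of-the-envelope: how often would a "vanishingly rare" driving event
# occur across a year of worldwide driving?
# Mileage figure: Schwartz's estimate. Incident rate: illustrative assumption.

ANNUAL_MILES = 12e12        # ~12 trillion miles driven per year worldwide
MILES_PER_INCIDENT = 12e9   # assume one trolley-style dilemma per 12 billion miles

expected_per_year = ANNUAL_MILES / MILES_PER_INCIDENT
print(f"Expected trolley-style incidents per year: {expected_per_year:,.0f}")
# -> Expected trolley-style incidents per year: 1,000
```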

Another issue is that, once we actually understand what they are, trolley cases really don't seem all that unlikely. The cases—while infinitely variable in detail—have a basic general shape: an agent must make an ethically relevant choice between several actions, all of which foreseeably lead to harm. And some situations with this shape aren’t so farfetched.


I grew up in a part of rural North East England where the winters are bleak, and the country roads serpentine and precipitous. One icy night, I was in the car with my father when he was suddenly confronted with a choice: brake hard on the treacherous surface or run over a rabbit that had dashed into the road ahead. He chose the latter, presumably judging that, sad though it was, killing the rabbit was less bad than risking a skid that might have killed us both.

The veteran drivers among you will probably not find this situation farfetched; some of you may even have experienced something similar. Yet it was, for me, a real-life trolley case: a choice between two harmful actions that tests certain moral assumptions (for example, my dad’s assumptions about the relative value of human and animal life).

Sure, the stakes seem lower and the answer more clear-cut than in the more extreme trolley cases. But this is a matter of degree. What if the rabbit had been a wayward pedestrian? Or three wayward pedestrians? Or three wayward pedestrians pushing prams? With gradual adjustments, we could easily transform the relatively prosaic case of my dad running over the rabbit into something truly morally vexing—like the trolley cases we're used to hearing about.  

My point is that trolley cases involving road vehicles are not inherently outlandish—they are simply driving situations that force an ethically relevant choice between harms. And these situations can lie anywhere on a long continuum between the totally farfetched and the utterly mundane. Once this is appreciated, pretending that AV trolley cases will be hyper-rare or non-existent is frankly disingenuous.

There are further holes to poke in the “Won’t Happen” gambit (see, for instance, this excellent recent paper by Geoff Keeling). But having poked some already, let me now turn to a further thought. 

The cynic in me suspects that there’s more going on behind the “Won’t Happen” gambit than mere bad reasoning. 

AV entrepreneurs and engineers may not have done much philosophy, but they’re far from stupid. They know that most people still don’t trust the idea of fully automated vehicles, and that, sooner or later, some transparency on AV ethics will be necessary. But they also know that public moral opinion is both messy and volatile, and that too much ethical transparency would be a PR nightmare. 

Mercedes-Benz learned this the hard way in 2016, when an executive took the bold step of announcing that, in cases like our earlier Tesla example, their algorithms would prioritize the lives of passengers over those of other road users. The backlash was swift, and the company retracted the statement just a week later. 

As moral psychologist and philosopher Joshua Greene explains, this shouldn’t have come as a surprise:  

Life-and-death trade-offs are unpleasant, and no matter which ethical principles autonomous vehicles adopt, they will be open to compelling criticisms, giving manufacturers little incentive to publicize their operating principles. Manufacturers of utilitarian cars will be criticized for their willingness to kill their own passengers. Manufacturers of cars that privilege their own passengers will be criticized for devaluing the lives of others and their willingness to cause additional deaths. 

There are some fascinating and complex issues in the background here, including the relationship between public opinion on AV ethics and future policy, and the question of how exactly ethical principles can (and/or should) be realized in AI systems— I hope to return to these topics in future posts. 

But here’s my closing thought for now. From the perspective of AV companies, trolley cases represent an even knottier problem than the ethical questions we generally associate with them: how to manage the delicate balance between consumer trust, ethical opinion, and public relations. 

Perhaps this is why, at least for the time being, some in the AV industry would rather pretend that trolley problems aren’t problems at all. 

What do you think? Are AV firms being disingenuous, or is the problem not really a problem? Tell me your views on LinkedIn.
