The Autonomist

Ban human drivers when self-driving cars become common

To maximise the benefits of autonomous technology, we must take the bold step of banning human drivers, argues Cardiff University Professor of Philosophy Jonathan Webber

Self-driving cars could revolutionise people’s lives. By the end of the next decade, they could radically transform public spaces and liberate us from the many problems of mass car ownership. They’ll also be much better behaved than human drivers.

Robot drivers won’t break the speed limit, jump the lights, or park where they shouldn’t. They won’t drive under the influence of drink or drugs. They’ll never get tired or behave aggressively. They won’t be distracted by changing the music or sending a text, and they’ll never be trying to impress their mates.

Driverless cars could also change the face of public spaces. Private cars are very expensive items that do absolutely nothing 95% of the time. They are economically viable only because paying a taxi driver for all your car journeys would be even more expensive. Once cars don’t need human drivers, this cost balance should tip the other way.

Imagine what your town or city could look like with driverless taxis instead of private cars. Most of the space taken up by car parks could be used for homes, offices, cafes, bars, cinemas, hotels, and swimming pools. An end to parked cars lining every street like urban cholesterol. Quicker bus journeys. Wider pavements.

With more space and safer roads, active transport would be more attractive. More people would travel around on bikes, skateboards, roller blades, and scooters. Driverless taxis could easily be electric, returning to depots to recharge.

The benefits to public health would be enormous. Our towns and cities would be vastly more pleasant places to live and breathe. Transport’s contribution to climate change would be dramatically reduced. But ensuring all these benefits presents an important ethical challenge.


Ethical concern about autonomous vehicles has so far focused on emergencies. Should a car save its passengers at the cost of killing or injuring other people? Should it swerve to avoid someone in the road if this means hitting someone on the pavement? How many people need to be saved to outweigh a bystander’s life or limb? Are children more important than adults? And so on.



The problem resembles philosopher Philippa Foot’s most famous ethical thought experiment: the trolley problem. Imagine you are driving a trolley. Its brakes have failed and it’s hurtling towards five people who will certainly be killed if it hits them. You can swerve it onto a side track, killing one person who otherwise would not have been affected. The question is whether you should.


An illustration of the trolley problem. Pic: McGeddon via WikiMedia Commons

Philosophers debating this question have produced a dazzling array of variations. What if you are standing by the track next to someone wearing a very large backpack? Should you push that person under the trolley, saving five people’s lives? If you could stop the trolley only at the cost of your own life, should you do that? And so on and so on.

Intuitive responses to these variations tend to seem contradictory. But we learn more about our moral thinking by exploring how they might in fact be consistent. And we learn more about moral cognition by scanning people’s brains while they consider these problems.

Self-driving cars have given this debate a new purpose. We have to teach these vehicles how to handle emergencies – the trolley problem just got real. At least, this is what many philosophers think. But in focusing on an existing thought experiment, they have missed the bigger picture.


Engineers working on driverless cars tell us that the safest response in any emergency is to stop. This will be even safer if the nearby cars all have robot drivers.

And robot drivers would be better behaved than human ones, reducing the number of emergencies on the roads.

Given all the potential benefits to public health and quality of life, we should be much better off once robots take over the driving, whatever the authorities decide about emergency situations.

This is what gives rise to the real ethical challenge of self-driving cars. Once robot drivers are safe enough to allow onto the roads in large numbers, it seems that we should maximise their benefits by banning their dangerous human counterparts from public roads.

There would be resistance to this, of course. Many people enjoy driving. But many people enjoy smoking, too, and this is banned in public places for the protection of non-smokers. There could be designated safe spaces for drivers to indulge their hobby without risk to other people.

Rights of access pose a more difficult question. There is a strong case that essential transport infrastructure should be publicly owned. And if private cars are not an option, perhaps the cost of using autonomous taxis should be proportionate to ability to pay.

But regardless of how we resolve these practical issues, it seems that the enormous benefits of safe, driverless taxis should lead us to remove any other kind of car from our roads.


The Conversation