Ethics of Machines

Philosopher David Edmonds on the relationship between moral philosophy and robotics, artificial intelligence, and the ethics that we should build into machines

videos | September 6, 2016

How can we define a robot today? When will we start using driverless cars? What kind of ethics should we build into machines? Philosopher David Edmonds answers these controversial questions.

This is quite an important topic, because it’s no longer just science fiction. We’ve read about these kinds of machines for decades, if not longer. Writers have imagined robots acting like human beings. And it’s increasingly becoming clear that intelligent machines are not that far away. Of course, we have to define what we mean by intelligence.


If a driver gets drunk, crashes a car, and causes an injury, we know who to hold responsible. Whom do we hold responsible when it comes to robots? Do we hold the person who owns the car responsible? Do we hold responsible the person who manufactured the car? Do we hold responsible the insurance company? Do we hold responsible the person who built the software that decided how the car was going to navigate the streets?

There is another famous example in the same field of moral philosophy, where a train is running down the track and five people are going to die. You are standing on a footbridge next to a very fat man, and you can push him off; he will tumble onto the track and stop the train from killing the five people. What’s interesting is that almost nobody thinks that you should push the fat man.

Consultant Researcher and Senior Research Associate, Oxford Uehiro Centre for Practical Ethics, University of Oxford