Defining a code of ethics in the digital age brings endless shades of gray to the surface. Humans make myriad decisions based on instinct, experience, and a societal understanding of ethics, but coding that potent force into machines is a whole different story. Chief among the questions: who actually gets to make the moral choices that shape a machine's decision making?
A thought-provoking new study out of Stanford considers the ethical issues related to autonomous vehicles, and exactly what choices get coded into their electronic brains. Consider the dilemmas facing drivers every day – is it better to swerve out of a lane and thereby break the law in order to avoid hitting an animal in the road? Is it better to hit a pedestrian on a sidewalk than a car full of people that’s suddenly pulled in front of you?
These are untenable choices, played out daily in an unpredictable environment. They force humans to make decisions instantaneously based upon experience and insight. Mastering the human part of the human-machine-environment equation doesn't come from observing a few drivers. It requires years of monitoring and analyzing driver behavior across thousands of situations and billions of miles driven. The machine and the environment are equally critical variables that influence safety, and they carry ethical implications of their own. For AVs, this is a critical area of study, given the liability carried by each vehicle on the road.
As any professional driver will tell you, the laws of the road alone are not adequate to govern safe driving. Autonomous vehicles will have to operate with an awareness of the vehicle's limits, and an understanding of when it's appropriate to break the law.
Fortunately, history can help inform some of these decisions; others will require regulators to generate new policies and laws. Still others may be left up to the car owners themselves. Where the line gets drawn makes for a fascinating read.