Autonomous Cars and “Ethical Crashing Algorithms”

In a recent article titled “The Problem with Self-Driving Cars: They Don’t Cry” (click HERE), the author investigates how an artificial intelligence would make ethical choices in crash scenarios.

Citing a study by Noah Goodall (“Ethical Decision Making During Automated Vehicle Crashes”) (click HERE), the author asks “can robot cars be taught to make empathetic, moral decisions when an accident is imminent and unavoidable?”  In the news article, the scenario presented to illustrate the concern is as follows:

Consider a bus swerving into oncoming traffic. A human driver may react differently than a sentient car, for example, if she noticed the vehicle was full of school kids. Another person may swerve differently than a robot driver to prioritize the safety of a spouse in the passenger seat.

In my mind, it’s a much simpler scenario.  A self-driving car is taking me home at twilight.  I may not see well during low-light transition hours such as dusk or dawn, but the super car has radar and special sensors that detect movement on the road ahead — is it a small child crossing the road?  Is it a roving raccoon or a dancing deer?  I would want the car to avoid the crash, but…

  • Swerving to avoid a child puts me at great risk of rolling the car and suffering extensive injuries.
  • Avoiding a raccoon, deer, or antelope may mean hitting the brakes and hoping we don’t crush the animal, but I come away unhurt and without extensive damage to my very expensive robot car (rolling the car is much worse than a deer strike in most cases).

Will a robot car be able to distinguish between a child and a wild animal?  If it can, will it react differently?  Would most people react differently?
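
To make the question concrete, here is a minimal sketch in Python of how a rule-based planner might map a detected obstacle to an evasive maneuver.  The class labels, the confidence threshold, and the maneuvers are all hypothetical; no production system works this simply.

    # Illustrative only: a naive rule table mapping a detected obstacle to
    # an evasive maneuver. The class labels, confidence threshold, and
    # maneuvers are invented for this example.
    def choose_maneuver(obstacle_class: str, confidence: float) -> str:
        if confidence < 0.6:
            # Uncertain detection: brake rather than swerve, since rolling
            # the car is usually worse than a low-speed strike.
            return "brake"
        if obstacle_class == "pedestrian":
            # A person on the road justifies the riskier swerve.
            return "swerve"
        # Wildlife: brake hard and accept a possible animal strike.
        return "brake"

    print(choose_maneuver("pedestrian", 0.9))  # swerve
    print(choose_maneuver("deer", 0.8))        # brake

Even this toy version shows where the ethics hide: the threshold and the rule table encode a moral judgment, and someone has to choose them.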

It is my hope that the onboard sensors enable a much earlier alert and a greater chance of avoiding either collision.  Still, the author makes an interesting point in stating “There is no obvious way to effectively encode complex human morals in software.”  Further, the analysis offers:

According to Goodall, the best options for car builders are “deontology,” an ethical approach in which the car is programmed to adhere to a fixed set of rules, or “consequentialism,” where it is set to maximize some benefit—say, driver safety over vehicle damage. But those approaches are problematic, too. A car operating in those frameworks may choose a collision path based on how much the vehicles around it are worth or how high their safety ratings are—which hardly seems fair. And should cars be programmed to save their own passengers at the expense of greater damage to those in other vehicles?
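
To see why the two frameworks can diverge, here is a hedged sketch contrasting them; the crash options, harm estimates, and weights below are all invented for illustration, not taken from Goodall’s paper.

    # Illustrative contrast between a deontological (fixed-rule) policy and
    # a consequentialist (cost-minimizing) policy. All numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class CrashOption:
        name: str
        passenger_injury: float  # expected harm to our passengers, 0..1
        other_injury: float      # expected harm to people outside, 0..1
        property_damage: float   # expected damage, arbitrary units

    def deontological(options):
        # Fixed rule: never pick a path that seriously harms anyone outside
        # the car; among the allowed paths, protect the passengers.
        allowed = [o for o in options if o.other_injury <= 0.1]
        return min(allowed or options, key=lambda o: o.passenger_injury)

    def consequentialist(options, w_pass=1.0, w_other=1.0, w_prop=0.01):
        # Minimize a weighted sum of harms; the weights *are* the ethics.
        return min(options, key=lambda o: w_pass * o.passenger_injury
                                        + w_other * o.other_injury
                                        + w_prop * o.property_damage)

    options = [
        CrashOption("hit the guardrail", 0.3, 0.0, 50.0),
        CrashOption("swerve into the next lane", 0.1, 0.4, 20.0),
    ]
    print(deontological(options).name)     # hit the guardrail
    print(consequentialist(options).name)  # swerve into the next lane

Both policies are a few lines of code; neither is a few lines of ethics.  The rule set and the weights smuggle in exactly the moral choices the article says we have no obvious way to encode.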

One interesting suggestion is that the human driver be able to override the AI’s decision making, but most people’s reaction times during a crash are far slower than a computer’s ability to reach a logical conclusion; it’s doubtful that a person could realistically intervene in most cases.

The path forward is complex, but not so difficult that we should slow or stop the development of self-driving cars.  We can certainly hope that engineers, regulators, scientists, and other affected professionals in the transportation world are thinking about how to reconcile these concerns.

Our brave new future of hands-free driving sounds great when we consider that the vast majority of crashes could be avoided, but it’s troubling to think about the possible crash scenarios that have ethical implications.
