The Parr Center and PPE Present “The Ethics of Self-Driving Cars” with Justin Erlich
On Monday, October 15, 2018, in coordination with the PPE program, UNC’s Parr Center for Ethics presented guest speaker Justin Erlich, who gave a lecture titled “The Ethics of Self-Driving Cars.” Erlich currently serves as Vice President of Strategy, Policy, and Legal at Voyage, an autonomous vehicle start-up based in San Francisco. He previously held posts as the head of Policy, Autonomous Vehicles, and Urban Aviation at Uber and as a Special Assistant Attorney General in the tech field under former California A.G. Kamala Harris.
Erlich outlined the basics of autonomous vehicle technology, highlighting its benefits before turning to some of the most important ethical conundrums of self-driving cars. He addressed a number of concerns regarding self-driving vehicles, the most important being liability, safety, and privacy.
If a semi-autonomous vehicle were to get into a crash, who is at fault? Is it the manufacturer, who installed technology knowing that many people would use it incorrectly? Is it the driver, who fell asleep at the wheel because they thought the car would take care of itself, or who turned on the automated mode in the wrong circumstances? Erlich pointed out that even when car companies engineer the car to encourage consumer participation and attention, many people will try to outsmart the system with workarounds that “trick autopilot” into thinking they are engaged. Is such a person more at fault than one who fell asleep at the wheel of a car that was not equipped with those safety features? Many companies are even skipping Level 3 (L3) automation – a stage where cars are mostly autonomous, needing only basic monitoring and intervention in extreme cases – because they cannot ensure that people will comply with the monitoring tasks. Because research shows that switching between tasks takes too long for L3 monitoring to make the car any safer, many consider L3 an unnecessary, and perhaps unsafe, step. With this information, would a manufacturer be responsible for a slow response by the “driver” in an L3 autonomous vehicle?
How safe would a “fully autonomous” car have to be in order to secure a place on the streets, and how would we measure that safety? Does a robot driver have to be just barely better than a human? Twice as good? Near perfect? Erlich demonstrated the interesting ways in which we hold robots and humans to different standards. If a particular individual crashes a car, we don’t take it as a reflection on all human drivers, but rather as a mistake by that driver under particular conditions. However, we are far more likely to extrapolate when it comes to autonomous vehicles. The question is whether that is justified. Human drivers are also less likely to follow the law: many, if not most, drivers do not strictly obey posted road signs, especially speed limits. So, are autonomous vehicles responsible for the problems that result from interacting with human drivers who aren’t following the law? The second part of this problem is how to test these safety concerns. Should companies self-assess, be required to pass a third-party safety test, or something in between? How will we know whether those assessments are the best ways to test for future safety? How much do we value the quick roll-out of autonomous vehicles and their benefits versus proof of consumer safety?
Would you rather have a safer experience or a more private one? A cheaper ride or one free of advertisements? Erlich called attention to the fact that autonomous cars are giant roving sensors that store massive amounts of data. As with current controversies involving Facebook, how and with whom that data is shared is an important consideration. With autonomous vehicles, there would also be no driver to monitor activity inside the car, so protecting riders might require a camera inside the vehicle. Would that be an invasion of privacy? There is also high demand for advertising. Is it ethical to force advertising content during an autonomous ride in the form of posted signs or videos? Would it be more ethical if that stream of income could offset the price of a ride and increase access?
Justin Erlich mostly posed questions on these topics, as many still lack answers given the constant emergence of new technology. Many of the topics were completely new to me, and I am still considering their ethical implications. The lecture definitely ingrained the complexity of automation in my head, illuminating considerations on which people spend entire careers but that I had never contemplated. If these types of questions interest you, I would highly recommend attending a Parr Center event in the near future. Our next major event is on November 13 from 5:00 to 6:30 pm, when the Parr Center presents Jennifer Morton from CUNY, who will lecture on “Elite Education and the Developing World.”