By Jill Alexander
This op-ed is part of a series from E295: Communications for Engineering Leaders. In this course, Master of Engineering students were challenged to communicate a topic they found interesting to a broad audience of technical and non-technical readers. As an opinion piece, the views shared here are neither an expression of nor endorsed by UC Berkeley or the Fung Institute.
“An important fact to remember about self-driving cars is that there are no self-driving cars.”

When autonomous vehicles (AVs) fail, sometimes fatally, there is a scramble to assign and avoid blame. Car companies like Tesla are quick to point to the warnings on their websites, placing responsibility squarely on human operators. Legal responsibility almost always falls on the human driver, which draws attention away from the flaws in the autonomous systems themselves. So-called “self-driving” cars have many limitations, and those limitations are often left unexamined. As self-driving cars become more common, it is vital to understand these limitations and their real-world ethical and technical implications.

The inner workings of a self-driving car can be simplified down to three steps: sense, plan, and act. The first step, sensing, is handled by a collection of sensors, usually including cameras and light detection and ranging, or LIDAR, a radar-like technology that uses laser pulses to build a 3D representation of the environment. Different sensors provide better information about distance or speed than others. There is another reason for relying on an army of sensors, however: no single sensor is infallible. Cameras can be blocked by dirt or fog and lose effectiveness on poorly marked roads, while LIDAR performs poorly in rain or snow. Combining multiple sensors closes these blind spots and adds extra layers of safety. Yet even with those layers, sensors can behave unexpectedly or still fail to detect an object.

Planning and acting are no less difficult problems to solve. Driving means operating in complex situations that can be hard to predict for even the most experienced driver. Self-driving cars arguably perform better than humans at many tasks, using high-performance computing that allows them to react far faster than human drivers. However, the underlying algorithms remain vulnerable to bias, misclassification, and faulty predictions. For example, in the first recorded pedestrian fatality involving a self-driving car, an Uber test vehicle struck and killed a pedestrian walking a bicycle across a road, even though the vehicle’s sensors detected her. She was detected 5.6 seconds before impact, but an emergency alert was not sent to the safety driver until 1.2 seconds before impact. According to reports, the driving system classified her first as a vehicle, then as a bicycle, and ultimately failed to predict her path in time to prevent the crash.
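
For technically inclined readers, the sense-plan-act loop can be sketched in a few lines of code. The Python below is a deliberately simplified illustration, not any real vendor’s software: every name in it (Detection, sense, plan, act), the confidence threshold, and the assumed speed of 17 m/s are hypothetical stand-ins, meant only to show how fused sensor readings feed a planner and how an uncertain, flickering classification can weaken a decision.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by the perception system (hypothetical structure)."""
    label: str         # e.g. "vehicle", "bicycle", "pedestrian", "unknown"
    distance_m: float  # estimated distance to the object, in meters
    confidence: float  # classifier confidence between 0.0 and 1.0

def sense() -> list[Detection]:
    """Stand-in for the sensing step: fuse camera and LIDAR streams into
    detections. Here it just returns a hand-made example: an object 40 m
    ahead whose classification keeps flickering, so confidence is low."""
    return [Detection(label="unknown", distance_m=40.0, confidence=0.3)]

def plan(detections: list[Detection], speed_mps: float) -> str:
    """Stand-in for the planning step: pick an action based on detections.
    A label that keeps changing from frame to frame (vehicle, then bicycle,
    then unknown) makes an object's path hard to predict, which is the
    failure mode reported in the Uber crash."""
    for obj in detections:
        seconds_until_reached = obj.distance_m / max(speed_mps, 0.1)
        if seconds_until_reached < 2.0:
            return "emergency_brake"   # object is dangerously close
        if obj.confidence < 0.5:
            return "slow_down"         # uncertain what the object is
    return "continue"

def act(decision: str) -> None:
    """Stand-in for the acting step: send the command to the vehicle controls."""
    print(f"executing: {decision}")

# One pass through the loop; a real system repeats this many times per second.
# 17 m/s is roughly 38 mph, chosen purely for illustration.
act(plan(sense(), speed_mps=17.0))  # prints "executing: slow_down"
```

Even in this toy version, the planner’s decision hinges entirely on what the sensing step reports; a detection that arrives late, or never settles on a confident label, leaves very little room to act.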
