By Jill Alexander

This op-ed is part of a series from E295: Communications for Engineering Leaders. In this course, Master of Engineering students were challenged to communicate a topic they found interesting to a broad audience of technical and non-technical readers. As an opinion piece, the views shared here are neither an expression of nor endorsed by UC Berkeley or the Fung Institute.

An important fact to remember about self-driving cars is that there are no self-driving cars. At least not yet, not really. Current technology uses some degree of autonomy to improve driving performance through features such as automatic emergency braking, adaptive cruise control, and lane detection. However, the technology has not yet caught up to the sci-fi dream that would put humans in the back seat, comfortably reliant on automation. In the meantime, many cars are still marketed as “self-driving.” Tesla, known for its Autopilot feature, advertises “Full Self-Driving Capability” on its website. A paragraph later comes a warning: “Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment.” There are no self-driving cars.
“An important fact to remember about self-driving cars is that there are no self-driving cars.”

When autonomous vehicles fail, sometimes fatally, there is a scramble to assign and avoid blame. Car companies like Tesla are quick to point to the warnings on their websites, placing the blame squarely on the human operator. Legal responsibility almost always falls on the human driver, which draws attention away from the flaws in the autonomous systems that caused them to fail. So-called “self-driving” cars have many limitations, and those limitations often go unexamined. As self-driving cars become more and more common, it is vital to understand these limitations and their real-world ethical and technical implications.

The inner workings of a self-driving car can be simplified down to three steps: sense, plan, and act. Sensing is handled by a collection of sensors, usually cameras together with light detection and ranging, or LIDAR, which uses pulsed lasers (rather than the radio waves of radar) to build a 3D representation of the environment. Some sensors provide better information about distance or speed than others. There is another reason for relying on an army of sensors, however: no sensor is infallible. Cameras can be blocked by dirt or fog and lose effectiveness on poorly marked roads, while LIDAR performs poorly in rain or snow. Multiple sensors close these blind spots and provide extra layers of safety. Even with those extra layers, though, sensors can behave unexpectedly or still fail to detect an object.

The planning and acting parts of self-driving are no less difficult a problem to solve. Driving requires operating in very complex situations that can be difficult to predict for even the most experienced driver. Self-driving cars arguably perform better than humans at many tasks, using high-performance computing that lets them react much faster than human drivers.
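The sense-plan-act loop and the fall-back role of redundant sensors can be sketched in a few lines of Python. This is a toy illustration, not any production system: the function names, distances, and braking threshold are all hypothetical.

```python
# Toy sense-plan-act sketch. All values and thresholds are illustrative;
# real autonomous-vehicle stacks fuse many more signals than this.

def sense(camera_m, lidar_m):
    """Fuse two range estimates. If one sensor fails (None), fall back
    to the other instead of driving blind."""
    readings = [r for r in (camera_m, lidar_m) if r is not None]
    if not readings:
        return None           # total sensing failure
    return min(readings)      # conservative: trust the nearest estimate

def plan(distance_m, brake_threshold_m=20.0):
    """Choose an action from the fused distance estimate."""
    if distance_m is None:
        return "alert_driver"  # sensing failed: hand control back to the human
    return "brake" if distance_m < brake_threshold_m else "cruise"

def act(action):
    return f"executing: {action}"

# Camera blinded by fog (None), but LIDAR still sees an obstacle at 15 m.
print(act(plan(sense(camera_m=None, lidar_m=15.0))))  # prints: executing: brake
```

Note the last branch of `plan`: when every sensor fails, the only remaining safety system in this sketch is the human driver, which is exactly the gap the article describes.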
However, algorithms are still vulnerable to biases and failures, and can fail to predict events accurately. In the first recorded pedestrian fatality involving a self-driving car, an Uber test vehicle struck and killed a pedestrian who was walking a bicycle across the road, even though the vehicle’s sensors had detected her. The pedestrian was detected 5.6 seconds before impact, but an emergency alert was not sent to the safety driver until 1.2 seconds before impact. According to the NTSB report, the driving system classified the pedestrian first as a vehicle, then as a bicycle, and never predicted her movements accurately enough to prevent the crash.

Self-driving cars do perform better than human drivers in many scenarios: they can help change lanes, avoid collisions, and brake in an emergency. Over 90% of traffic accidents are estimated to be caused by human error, and self-driving cars could help reduce these accidents. However, “the elimination of human error does not imply the elimination of machine error.” These autonomous systems still rely on humans to fill in the gaps, and self-driving cars become dangerous when those gaps are ignored.

Researcher Michael Nees performed a study suggesting that acceptance of self-driving cars increased when people were given an overly optimistic portrayal of them. Other studies have shown that people are more accepting of self-driving cars in scenarios where the driver is impaired by drugs, alcohol, or medication and could not be relied on to stop the car in an emergency. In other words, people are most likely to trust self-driving cars when they are at their least attentive, yet an attentive human driver is currently the most important safety system a self-driving car has. Misplaced faith in autonomous driving may also reduce seatbelt use and make pedestrians less cautious.
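The cost of the latency in the Uber case above can be made concrete with a quick back-of-the-envelope calculation. The 5.6- and 1.2-second figures come from the NTSB report; the 40 mph speed is a hypothetical figure chosen purely for illustration.

```python
# Distance covered between first detection (5.6 s before impact) and the
# driver alert (1.2 s before impact). The speed is a hypothetical example.
MPH_TO_MPS = 0.44704

speed_mps = 40 * MPH_TO_MPS   # ~17.9 m/s, illustrative only
latency_s = 5.6 - 1.2         # 4.4 s between detection and alert

distance_m = speed_mps * latency_s
print(f"{distance_m:.0f} m traveled before the driver was alerted")
```

At that assumed speed, the car covers roughly 79 meters, most of a football field, between the moment the system first sees the obstacle and the moment it asks the human for help.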
Further evidence of misplaced trust in self-driving cars can be found in fatal accidents involving them, where investigations often find that the driver failed to prevent the crash because they were occupied with another task. In these few but fatal accidents, both the driver and the autonomous systems failed to prevent a dangerous situation; the ethical and legal responsibility, however, never left the driver. The companies that produce these cars do not expect them to stop every accident, which would be nearly impossible, but the cars are often marketed as if they can replace a human driver. This sets a dangerous expectation for the general public.

Self-driving cars depend on human drivers to fill in the gaps where sensors and algorithms can fail. To close these gaps, we must be aware of them in the first place. A realistic attitude about the shortcomings of self-driving cars will allow us all to make informed decisions about them, and make the roads safer for everyone. We can continue to be optimistic about the future of autonomous vehicles as long as we keep both hands on the wheel and a reminder on our dashboard: there are no self-driving cars.

References:

“Autopilot and Full Self-Driving Capability.” Tesla, 16 June 2021, https://www.tesla.com/support/autopilot.

Taeihagh, Araz, and Hazel Si Min Lim. “Governing Autonomous Vehicles: Emerging Responses for Safety, Liability, Privacy, Cybersecurity, and Industry Risks.” Transport Reviews, vol. 39, no. 1, 2019, pp. 103–128. DOI: 10.1080/01441647.2018.1494640.

United States. National Transportation Safety Board. Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018. Highway Accident Report NTSB/HAR-19/03. Washington, DC, 2019.

Nees, Michael A. “Acceptance of Self-Driving Cars: An Examination of Idealized versus Realistic Portrayals with a Self-Driving Car Acceptance Scale.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 60, no. 1, 2016, pp. 1449–1453.
Op-Ed: A Warning about Self-Driving Cars was originally published in Berkeley Master of Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.