The most skilled human drivers show us how a safe robot car should not drive
The Uber crash was a miserable failure of technology—and a paradoxical reminder that safe robot drivers may be within closer reach than they appear
Robot drivers are supposed to be safer than humans. We would expect no less from machines equipped with sensors that afford expansive, acute, and unwavering perception of their surroundings, processors that meticulously analyze the possible paths ahead, and actuators that quickly and precisely execute the planned maneuvers. Sure, a robot might crash on occasion: say in a situation where the sensors are confused by snow and the processors are taxed by a mix of vehicles, cyclists, and pedestrians moving unpredictably, when suddenly a dog darts out onto the slippery street just as an unfamiliar, hard-to-identify object falls onto the pavement, finally overwhelming the robot’s capabilities. But we would hope that such all-but-unavoidable crashes would be few and far between.
The fatal crash of a self-driving test vehicle on the night of March 18 did not fit that description. The sky in Tempe, Arizona was clear, the road was wide and free of traffic, streetlights illuminated the road; yet somehow, the robot driver did not manage to avoid a woman walking with her bicycle across Mill Avenue. The vehicle, from the test fleet of ride-hailing company Uber, made no attempt to brake, according to police. The human backup “safety driver” failed to correct the vehicle’s trajectory; it struck Elaine Herzberg at a speed of 38 mph. She died of her injuries later in hospital.
Speaking to the San Francisco Chronicle, Tempe Chief of Police Sylvia Moir emphasized how Ms. Herzberg “came from the shadows right into the roadway”; this narrative was bolstered three days after the crash when the police released a low-quality dashcam video that gave a misleading impression of oppressively dark conditions. But even had the road been obscured in total darkness, the vehicle was outfitted with lidar, which can “see in the dark”—the sensor emits its own infrared light.
By all appearances—though, to be sure, the National Transportation Safety Board is still investigating—Ms. Herzberg would still be alive today if the automated vehicle had been a merely competent driver. The Uber crash, then, was a miserable failure of technology. But paradoxically, it can also serve as a reminder that safe robot drivers are within closer reach than they may appear—as long as developers of the technology choose safety as the overriding priority. Admittedly, it may take a long run of technological advances before robots possess prodigious driving skills, but that isn’t the only path to safe robot drivers. There is another route, perhaps less spectacular but more direct.
If you were designing a driver, you’d probably endow them with laser-sharp attention, lightning-quick reaction time, and virtuosic vehicle-handling prowess. Humans like these exist: they are race car drivers. But hold on a second—don’t the numbers show that they’re actually terrible at driving? Consider the NASCAR Cup, where there were about six crashes per race between 2001 and 2006. Even calculated under the absurdly generous assumption that all drivers finished every race, their per-mile crash rate was over 140 times that of the average American on public roads (using 2006 statistics). And in 2016, one driver, Brian Scott, had a rate of almost two crashes for every 500 miles raced—over 2000 times the crash rate of the general public.
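The per-mile comparison can be sketched in a few lines. The racing figure comes from the article; the general-public numbers are rough, order-of-magnitude assumptions (several million police-reported crashes over roughly three trillion vehicle-miles traveled in a year), not official statistics:

```python
def crashes_per_mile(crashes: float, miles: float) -> float:
    """Simple per-mile crash rate."""
    return crashes / miles

# One driver's racing season: about 2 crashes per 500 miles raced (from the text).
racer_rate = crashes_per_mile(2, 500)       # 0.004 crashes per mile

# U.S. general public (assumed, illustrative): ~6 million police-reported
# crashes over ~3 trillion vehicle-miles traveled in a year.
public_rate = crashes_per_mile(6e6, 3e12)   # ~2e-6 crashes per mile

print(f"racer:  {racer_rate:.4f} crashes/mile")
print(f"public: {public_rate:.2e} crashes/mile")
print(f"ratio:  {racer_rate / public_rate:.0f}x")
```

Under these assumed public figures, the ratio works out to roughly 2000, consistent with the comparison in the text.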
Of course, there’s nothing mysterious here. The only way to control a car furiously hurtling around a track at speeds in excess of 200 mph while surrounded by other cars just inches away, aggressively jostling for position, is to be a tremendous driver. Still, NASCAR drivers crash, a lot—because they have a very risky “driving style”, as road safety researchers would say. As the drivers are in a great hurry to get around the racecourse, they have no time for caution, instead pushing the edges of their vehicles’ performance and their own capabilities. Even when race car drivers leave the mayhem of the track, their skill doesn’t outweigh their appetite for risk. A study from the 1970s found that racing drivers from the Sports Car Club of America had a higher crash rate on public roads than other drivers from the same state of the same age and sex.
At the opposite end of the spectrum are those who use cautious driving styles to make up for their weak skills. Some elderly drivers who score poorly on a driving test nevertheless manage to drive crash-free by actively compensating for their deteriorating abilities, according to a Belgian study from 2000. They drive more slowly and avoid tailgating, leaving long safety gaps behind vehicles they’re following; they also plan their trips to avoid complex traffic or other challenging situations. (Still, it’s probably best not to encourage one’s grandparents to push their luck.)
The main reason people crash isn’t that they lack a stunt driver’s level of mastery. Steering inaccurately, overcompensating, or otherwise mishandling a car is the key fault behind just 11 percent of crashes, according to the National Highway Traffic Safety Administration. Far more often, the problem is a simple matter of not paying attention: when researchers used multiple onboard video cameras and sensors to observe more than 3500 drivers over a three-year period, they saw that in 68 percent of crashes, drivers were texting, adjusting the climate control, interacting with a passenger, or otherwise distracted.
Though it was not at all evident in the crash of the Uber test vehicle, most developers of self-driving technology have embraced a cautious driving style. On the other hand—though this discussion seems to have gone quiet since the Uber crash—many observers have worried that robots will be too cautious. If a robot car conscientiously leaves an ample gap between it and the vehicle in front, for example, a more assertive human might rush in to take the space, leaving the hapless, timid robot, and its annoyed passengers, in the dust. Self-driving tech leader Waymo (a subsidiary of Google’s parent company, Alphabet Inc.) now programs its cars to be pushier to fit into the dog-eat-dog world of human drivers: they inch forward at intersections to stake their claim to the right of way, and they break the speed limit to keep up with the flow of traffic.
There’s also concern that tentative robots are a safety hazard: coming to a complete stop at stop signs and braking when there is little apparent danger can catch humans by surprise, causing them to rear-end skittish automated test vehicles. But excess caution might be a misdiagnosis. For example, a 2015 report from the BBC recounted how a jogger running along the opposite side of the road resulted in Google’s test vehicle abruptly jamming on the brakes. It’s questionable whether such a high-strung response was really the most cautious option. Perhaps a more cautious robot would simply decelerate gently, steer well clear, and prepare a set of evasive routes for use in the event that the jogger does something unpredictable.
Beyond saving lives, one of the great hopes for self-driving cars is that they could wring more intensive use out of the roads. Driving in close formation—like a less frantic version of NASCAR—they would squeeze many more cars onto a given road. Some of the more dramatic estimates have imagined quintupling the volume of traffic flowing down a road. But a short “headway”—the gap between one vehicle and the one just ahead—brings a higher crash risk than a long headway. That’s why we’re taught the “two-second rule”—though most people ignore it, following more closely than they can dependably handle. Even with comparatively competent, reliable robot drivers, the fundamental tradeoff between safety and traffic capacity will remain.
Mandating caution in robot drivers would boost safety while sacrificing traffic flow. But for those uncomfortable with losing precious, precious speed in exchange for saving measly human lives, there’s good news: we have other ways to eke more human mobility out of the roads. The key is to focus on moving more people rather than more vehicles, getting people to share rides in vehicles with high passenger capacities. An order of magnitude more travelers can move down a lane if they’re in buses rather than cars. And for shorter urban trips, nothing can beat the space efficiency of walking or biking.
Cautious robot drivers would also make the streets friendlier places for humans rather than machines. Adam Millard-Ball, a professor of environmental studies at the University of California, Santa Cruz, argues that when automated vehicles are ubiquitous and everyone knows that robot drivers are carefully scanning for other road users, people on foot will move freely through the streets while the vehicles courteously yield.
Developing automated vehicles with super-human capabilities will certainly help reduce crash risk, but robot drivers can enhance safety simply by adhering to an uncompromisingly cautious driving style. Rather than bringing robot drivers over to the dark side, making them more aggressive to conform to the existing dysfunctional norms of human driving, the technology presents a singular opportunity to establish a new norm of cautious driving that values human life. And in the meantime, there’s no need to wait for the widespread adoption of self-driving technology before we can enjoy safer roads. Vehicles can come with technologies like speed limiters and ignition interlocks that prevent drunk driving; careful road design can guide drivers to reduce speed and pay attention; dedicated infrastructure can protect people on foot and on bikes; crash risk can be diminished by reducing the total miles motor vehicles drive. Making safe roads isn’t rocket science—it doesn’t even have to be advanced robotics.
This piece is also posted at Medium.com.