After the fatal Uber crash in Tempe, a leading AV researcher warns that big questions about testing and public safety are looming for the industry.
PITTSBURGH, PENNSYLVANIA—The question posed to Phil Koopman, a robotics professor at Carnegie Mellon University, was stark. “Where were you when you heard about Elaine Herzberg?”
He had a fitting answer: He was teaching a software safety class when one of his students raised a hand as the news showed up on their phone that a self-driving Uber vehicle had struck and killed a pedestrian. Elaine Herzberg, 49, was crossing a seven-lane road in Tempe, Arizona, when the Volvo SUV, which was operating in autonomous mode at the time, struck her.
At the Pennsylvania Automated Vehicle Summit in Pittsburgh last week, Koopman gave a talk on how to make AV testing safer for other road users. This is a question he’s been pondering for many years: He’s a self-driving technology pioneer whose research centers on the safety of autonomous vehicles. Even so, the crash was shocking. “We all knew this day would come,” he told CityLab after his talk. “But it seems sooner than I had expected.”
Thanks in part to the work of local researchers like Koopman, Pittsburgh is now one of the capitals of the nascent AV industry, and the city has a big stake in what happens next. In a new oversight action plan unveiled last week, PennDOT tasked Koopman’s safety team at the National Robotics Engineering Center, an operating unit within Carnegie Mellon University, with advising an autonomous vehicle task force. The plan will convene AV companies and regulators to craft legislation; it also asks testers to volunteer information to the state about their driverless car operations.
While PennDOT says the plan was in the works before the tragedy in Tempe, one detail betrays some fresh urgency for safe testing: The plan calls for companies to “immediately halt” the testing of any driverless vehicle that “knowingly shares hardware or software with a vehicle that is part of a National Transportation Safety Board investigation.” In other words, this is a timeout for Uber and Tesla, which is under scrutiny over its Autopilot feature after a fatal crash in March.
The operative word here is “testing.” Koopman stresses that the Uber crash doesn’t cast doubt on the eventual promise of autonomous vehicles—just the need for safer testing criteria as the technology develops. “We know the autonomy doesn’t work as well as we want yet. That’s why there is a safety driver,” Koopman said. “In testing, we can throw resources at the problem to limit risk. That won’t be the case when we’re actually deploying autonomous vehicles.”
Koopman wants companies to show how they plan to implement three key safety goals: maintaining driver attentiveness, allowing effective time for driver reaction, and having an autonomous vehicle disengagement mechanism that follows safety engineering practices.
His suggestions include adding safety support such as camera monitoring for drivers, requiring more rigorous driver training, and adding an override button independent of the self-driving system. He says companies need to be upfront about what they’re doing to ensure safety—and that the public has a right to know these specific details.
“At some point you have to put the vehicle on the road, and there’s no way it’s going to be perfect. That’s why we test,” Koopman said. “But you should be doing some upfront engineering. Once you believe the car is engineered extremely well, then you should be on the road to test whether you’ve got it right. That’s different from saying, ‘Let’s see what happens.’ That’s not testing to ensure safety—it’s debugging.”
Gathering real-world data with human drivers to then run computer simulations can reduce the need to run driverless cars endlessly around a city. Koopman also says autonomous vehicle engineers should be humble when considering how good human driving actually is.
“Let’s say that 94 percent of all car crashes are bad human choices, including impaired and distracted drivers. Even with the six percent that’s left, do the math of 100 million miles,” Koopman said. (In the U.S., traffic deaths happen at the rate of 1.18 per 100 million vehicle-miles traveled.) “That’s still impressive. Should self-driving cars be that good? It’s a much more intricate question than just driving 100 million miles.”
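Koopman’s back-of-envelope math can be made concrete. The sketch below uses only the two figures cited above (the 94 percent human-error share and the 1.18 deaths per 100 million vehicle-miles); the interpretation—that even the residual non-human-error fatality rate implies over a billion miles between deaths—is an illustrative calculation, not a figure from the article.

```python
# Figures from the article:
deaths_per_100m_miles = 1.18   # U.S. traffic deaths per 100M vehicle-miles traveled
human_error_share = 0.94       # share of crashes attributed to bad human choices

# Even counting only the ~6% of crashes NOT caused by driver error,
# human driving is remarkably safe per mile driven.
non_human_error_rate = deaths_per_100m_miles * (1 - human_error_share)
miles_per_death_overall = 100_000_000 / deaths_per_100m_miles
miles_per_death_residual = 100_000_000 / non_human_error_rate

print(f"{non_human_error_rate:.3f} deaths per 100M miles, excluding human error")
print(f"{miles_per_death_overall:,.0f} miles per fatality overall")
print(f"{miles_per_death_residual:,.0f} miles per fatality from non-human-error crashes")
```

On these numbers, a self-driving car would have to log roughly 85 million miles per fatality just to match human drivers overall—which is Koopman’s point about why “as good as a human” is a high bar.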
The challenge is even more daunting since the computational demands of autonomy introduce so much complexity. “If there’s one thing that happens every hundred million miles, you can probably just live without worrying,” Koopman said. “But at some point, there are things that are [individually] extremely rare, but there are a lot of them. What if there are a hundred things that happen every hundred million hours? Every million hours, one of them will bite you. But if you fix that one, it doesn’t do you any good, because there are 99 more waiting for the next million.”
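The rare-event arithmetic in that quote can be sketched in a few lines. The counts below are Koopman’s hypothetical numbers (100 failure modes, each occurring once per 100 million hours), not measured data:

```python
# Koopman's hypothetical: many individually-rare failure modes add up.
n_failure_modes = 100           # distinct rare failure modes
rate_each = 1 / 100_000_000     # each occurs once per 100M operating hours

# Aggregate failure rate is the sum over all modes, so mean time
# between failures shrinks from 100M hours to 1M hours.
aggregate_rate = n_failure_modes * rate_each
hours_between_failures = 1 / aggregate_rate
print(f"{hours_between_failures:,.0f} hours between failures")

# Fixing one mode barely moves the needle: 99 modes remain.
after_one_fix = 1 / ((n_failure_modes - 1) * rate_each)
print(f"{after_one_fix:,.0f} hours after fixing one mode")
```

Eliminating one of the hundred modes improves the interval by only about one percent, which is why fixing failures one at a time, as they surface on the road, cannot by itself reach the required safety level.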
Koopman says it’s only through rigorous testing that autonomy will reach “maturity,” a term that echoes the notion that self-driving cars are like “student drivers,” as Uber’s CEO Dara Khosrowshahi recently said.
“When you take a driver’s test, part of it is proficiency and part of it is being 16—you’ve had a lot of chance to practice seeing what a normal person looks like and to be smart enough to know what’s going on,” Koopman said. “Autonomy is usually pretty bad at knowing when it doesn’t know. It just rounds it to the nearest bin and says ‘let's go.’ That’s why we do things like adversarial testing, fault injection, and edge case testing. We know how easily it can be fooled.”
Then there are the many ethical questions that AV testing is now raising—questions that are beyond the capacity of engineering alone to address. “We should be asking questions like how good is good enough? Is it as good as a human on an average day, or should it be better?”
Koopman raises another knotty-but-plausible speculation: What if AVs end up being much safer than human-driven vehicles overall—but most of the victims turn out to be pedestrians rather than drivers? “I didn’t say that will happen,” he said. “But if deaths go down by a factor of two, you could say we cut deaths in half. The fatalities would shift in demographics. Or what if it’s only kids, because they’re harder to detect? When people say ‘safer,’ it’s an important question: What do you mean by ‘safer’?”
Ultimately, cities now have to decide how they want to handle autonomous vehicle testing to keep their residents safe. If there’s one potentially positive thing to come out of this early failure in Tempe, it’s that it should encourage municipalities to take this responsibility very seriously, since industry players are now racing to bring partial-AV technologies to market. This, Koopman notes, is a good time to pause and re-evaluate that process.
“If there’s tremendous market pressure, it’s human nature to take shortcuts on safety.”