Linda Poon is a staff writer at CityLab covering science and urban technology, including smart cities and climate change. She previously covered global health and development for NPR’s Goats and Soda blog.
Can a “clinical trial” approach to autonomous vehicle safety work? That’s what two AI professors in Pittsburgh suggest.
Now that self-driving cars have moved beyond mere speculation and are roaming the streets of Pittsburgh, among other places, federal and local officials are busily trying to figure out how to regulate them.
During a recent conference in Washington, D.C., Paul Lewis, vice-president of policy and finance at the Eno Center for Transportation, talked about how local and regional governments can lead the way in this mobility revolution. What’s needed, he said, are policies “that both protect public safety and bring some accountability to this rapidly changing environment while still enabling the technology to bring the benefits.” But, as December’s spat between San Francisco authorities and Uber’s self-driving fleet indicates, the regulatory road ahead could be a rocky one.
The National Highway Traffic Safety Administration has published a 15-point safety assessment that laid out some early autonomous vehicle (AV) guidelines. But many more questions await. For example, who is liable when a self-driving vehicle encounters a scenario it doesn’t know how to handle: the driver, the manufacturer, or the operator of the other vehicle?
Fully automated vehicles may not be common for another 10, 20, or even 50 years, but cities need to prepare now.
“In the past, where technology has been pushed too fast, too far, and negative reaction has happened, either the Congress or public vote ends up putting that technology in the box for a very long time,” said panelist Nat Beuse, an NHTSA administrator. That means, above all, figuring out the kind of collaboration needed between governments and the private sector. He also emphasized the need to prove that AVs are safe not on a test track but out on public roads.
Two philosophy professors at Carnegie Mellon University over in Pittsburgh have a novel idea about how to do that: In an op-ed published Thursday in IEEE Intelligent Systems, they suggested that AV regulations should mirror the U.S. drug approval process.
Alex J. London and David Danks, who also study machine learning, say that the problem with applying current auto safety guidelines to AVs is that those guidelines take a binary view: Either the product works and gets rolled out en masse or it doesn’t. It’s a relatively straightforward process to test whether a vehicle’s braking system, for example, performs well enough to meet safety standards. But for self-driving cars, the important factor is whether the vehicle knows that it should hit the brakes. What’s lacking is a set of dynamic standards to assure the public that the vehicle can appropriately distinguish what to do in different scenarios.
“The more you're relying on [the vehicle’s] decision-making in an unconstrained environment, where context can change rapidly and where there may be noisy signals, validating that may have to be more like the way we validate pharmaceuticals—by relying more on trial and error,” says London, who studies the ethics of technology and medicine.
So they recommend testing AVs in phases, starting with simulated environments that present a series of “unforeseen” situations—the “pre-clinical trials,” if you will. When the vehicle’s decision-making is deemed sufficient in a range of contexts, it can be slowly rolled out in selective real-world settings, closely watched by specially trained drivers and monitors. As each “trial” is declared successful, regulators can gradually grant companies permission to expand the testing onto different roads and eventually into different cities.
Companies like Google and Uber have been doing a limited amount of testing on public roads, but London calls for more coordination with the federal government. “Some groups would have a van that would just deliver packages in Pittsburgh, but consumers aren't going to want to buy a car that they can only drive in [one city],” says London. “So what are the places that are sufficiently similar to Pittsburgh that the car would also do well in, and what are the places that are sufficiently different that we need to validate the car's ability to drive there? Those are questions that should be sharpened.”
So rather than having individual companies vet the process, federal regulators would use the data shared with them to establish what “acceptable performance” entails before moving on to the next phase—how many hours of city driving are required, for example, or how many accidents are tolerable. Over time, market restrictions can be relaxed as the system is refined.
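To make the idea concrete, here is a minimal sketch of how such phase gates might work. Everything in it—the phase names, hour counts, and incident thresholds—is hypothetical and purely illustrative; these are not real NHTSA criteria or anything the professors propose in numeric form.

```python
# Hypothetical sketch of a phased "clinical trial" approval model for AVs.
# All names and thresholds are illustrative assumptions, not real criteria.

from dataclasses import dataclass


@dataclass
class TrialPhase:
    name: str                          # e.g. "simulation", "monitored city routes"
    min_hours: float                   # driving hours required in this phase
    max_incidents_per_1k_hours: float  # tolerable incident rate to advance


def phase_passed(phase: TrialPhase, hours: float, incidents: int) -> bool:
    """Regulators would advance an AV program only if it has logged enough
    hours in the current phase at an acceptably low incident rate."""
    if hours < phase.min_hours:
        return False
    rate = incidents / (hours / 1000)
    return rate <= phase.max_incidents_per_1k_hours


# Illustrative gates, loosely mirroring pre-clinical -> limited -> expanded.
phases = [
    TrialPhase("simulation", 10_000, 5.0),
    TrialPhase("monitored city routes", 50_000, 1.0),
    TrialPhase("expanded to similar cities", 200_000, 0.5),
]

# Example: a program with 60,000 monitored hours and 30 incidents
# has a rate of 0.5 incidents per 1,000 hours, under the 1.0 gate.
print(phase_passed(phases[1], 60_000, 30))  # -> True
```

The point of the sketch is the shape of the regulation, not the numbers: each phase is a gate that requires both sufficient exposure (hours) and acceptable performance (incident rate) before the deployment context is allowed to widen.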
As with introducing potentially dangerous new medications, such a pathway to the marketplace would require a heavy regulatory hand. This could be a problem. We appear to be entering an era of radical federal deregulation: the new presidential administration has pledged to roll back standards governing drug safety, the environment, financial misbehavior, and, well, everything. A methodical trial-and-error approach also takes a lot of time. But London says there’s another advantage to their vision: By rolling out AVs in phases rather than en masse, companies and policymakers can see how pedestrian behavior changes over time and adjust regulations accordingly.
“You try to diagnose what the failures were, and you extend the environment in which you are operating the vehicle,” he says. “Largely, this is experimental learning—you learn from failure.”