Or, how to stop fearing imperfection and start saving lives.
No technological advance in the works will change how we travel more than self-driving cars. The safety benefits alone are enough to welcome our new computerized chauffeurs with open arms. Add in improved mobility for people who can’t currently drive and the possibility of reimagining cities—just think about what you can do with all that unnecessary parking space!—and they can’t get here soon enough.
Social changes of this magnitude naturally fan fears of the unknown and spark loyalty to a status quo that no one’s happy with anyway. For sure, there are some legitimate reasons to urge caution when it comes to fully autonomous cars: chief among them, whether they’re ready to mix with human-driven cars in very complex urban environments. But there’s an equally compelling case that it’s irresponsible to delay the technology until it’s flawless.
Don Norman, director of the Design Lab at University of California, San Diego, took that view in an excellent recent essay on whether we should push for self-driving cars sooner rather than later:
I have long argued that we need to go slow with automation in the automobile (e.g., see my paper, The Human Side of Automation). There were still too many unsolved problems. I have now changed my mind. Why? Because there are far more problems with the increasing number of distractions for drivers, too many new devices, too many new temptations. Imperfect driving is potentially more dangerous than imperfect automation.
To that end, CityLab has highlighted three autonomous car imperfections that tend to raise flags—but that in the grand scheme of things, really shouldn’t.
1. They’ll cause car sickness
Earlier this year the University of Michigan Transportation Research Institute released a report suggesting that riding in self-driving cars might cause some riders to suffer motion sickness. News outlets commenced blowing the findings way out of proportion (Jalopnik: “Self-Driving Cars Will Make You Throw Up”). A more recent study, published in Applied Ergonomics, seconds the idea that autonomous travel will lead to an “increased risk” of car sickness:
In short, self-driving cars cannot be thought of as living rooms, offices, or entertainment venues on wheels and require careful consideration of the impact of a moving environment.
While car sickness remains largely a mystery to scientists, there are some evidence-backed reasons to think self-driving cars might make some riders queasy. Driverless cars will render everyone a passenger, and it’s this passive role in travel that can lead to motion sickness. One of the main culprits is “visual-vestibular conflict”—basically the mismatch between what your eyes see (a stationary book) and what your inner ear senses (a moving car).
So yes, some people have trouble riding in cars, and pretty soon we’ll all be riders. But the studies and most news articles did a poor job putting this shift into the proper context. The UMTRI study, for instance, reports that 6 to 10 percent of riders might experience sickness in a self-driving vehicle. But it doesn’t note how much of the general population already suffers from motion sickness, nor does it explain how (or whether) the situation will differ from riding in taxis—and yet you don’t see articles about the rise of Uber leading to a nausea epidemic.
Then there’s the bigger safety picture. If looking at your phone in the car were so physically repellent, distracted driving wouldn’t be the problem it is. So even conceding a potential rise in car sickness during the driverless age, we can all agree it’s far better to lose your lunch than your life.
2. They’ll face a moral dilemma
Speaking of feeling ill, by now anyone even vaguely interested in driverless cars is sick of reading about their moral dilemma. The scenarios vary but the basic concern is the same: In the face of an inevitable collision, how should an autonomous car respond? Should it be programmed to swerve into a crowd of people and spare its own occupant, or should it be told to sacrifice this passenger to minimize the greater damage?
The dilemma is back in the news with a new study from psychologist Jean-François Bonnefon and some collaborators. The work finds that people agree the car should kill as few people as possible—but that many are only comfortable with that trade-off if they’re not the ones in the car. Via Bonnefon and company:
People mostly agree on what should be done for the greater good of everyone, but it is in everybody’s self-interest not to do it themselves.
If consumers prefer to buy autonomous cars that protect their own occupants above all, then makers of self-driving cars will build exactly those vehicles—which means regulators will have to step in and mandate certain actions for the public good. But while there’s no perfect outcome to the moral dilemma, arguably the least ethical decision is to delay making one. Here’s autonomous car expert Bryant Walker-Smith of the University of South Carolina speaking to Tech Review:
Walker-Smith adds that, given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly. “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”
3. They’ll still get into some collisions
The immense safety hype surrounding driverless cars has led to some unrealistic expectations. Take another new study by UMTRI. This one evaluated collisions by Delphi, Google, and VW self-driving cars and found—among other things—a “higher crash rate per million miles traveled than conventional vehicles.” Some news outlets ran with this finding alone; here’s USA Today: “Study: Self-driving cars have higher accident rate.”
The truth is, the study reached three other conclusions that cast this single result in a much more positive light. For starters, the researchers couldn’t statistically rule out the opposite finding—that crash rates for self-driving cars were lower than for conventional cars. Second, none of the autonomous vehicles were at fault. Last but not least, the UMTRI study found that the crash severity for self-driving cars, as measured by injuries, was lower than for conventional cars.
Far from a disturbing sign that autonomous cars are fallible, Google’s Chris Urmson sees these events as evidence that they’re doing exactly what developers want:
Other drivers have hit us 14 times since the start of our project in 2009 (including 11 rear-enders), and not once has the self-driving car been the cause of the collision. Instead, the clear theme is human error and inattention. We’ll take all this as a signal that we’re starting to compare favorably with human drivers.
And that’s the real goal here—not a technology that never fails, but one that fails far less than human drivers do, with all the social benefits to boot. As Emilio Frazzoli of MIT’s driverless car program recently told me, the fear that driverless cars will never work and the belief that they’ll never err are both “clearly nonsense.” The reality is somewhere in between: much better mobility for all, at a much lower cost of human lives.
“I believe that automation will reduce accidents, but not completely eliminate them,” he says. “This is definitely a good thing.”