Laura Bliss is CityLab’s west coast bureau chief. She also authors MapLab, a biweekly newsletter about maps (subscribe here). Her work has appeared in the New York Times, The Atlantic, Los Angeles magazine, and beyond.
In Tempe, Arizona, an autonomous Uber struck and killed a woman crossing a street at night. The incident is likely to test the public’s tolerance of AVs on real-world roads.
On Sunday night, a self-driving Uber vehicle struck and killed a woman in Tempe, Arizona. The incident represents a grim milestone in the age of automotive autonomy: It appears to be the “first known death of a pedestrian struck by an autonomous vehicle on public roads,” according to the New York Times.
Police reports state that the vehicle was in self-driving mode with a back-up driver present behind the wheel when it crashed into the woman around 10 p.m. on Mill Avenue just south of Curry Road. While initial media coverage suggested that the victim, identified by the Tempe Police Department as 49-year-old Elaine Herzberg, was riding a bicycle, later police reports say that she was “walking just outside of the crosswalk.”
Update 9 p.m. EST: According to a report in the San Francisco Chronicle, Herzberg was “[p]ushing a bicycle laden with plastic shopping bags,” and “may have been homeless.” After reviewing video footage of the impact taken by the Uber vehicle, Tempe police chief Sylvia Moir said that “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway.”
Uber has suspended all testing of its self-driving vehicles in Tempe, Pittsburgh, San Francisco and Toronto. “Our hearts go out to the victim’s family. We are fully cooperating with local authorities in their investigation of this incident,” an Uber representative said in a press statement.
As more details emerge, this incident is likely to serve as a litmus test for the public’s tolerance of AV testing, particularly in the absence of robust regulations. About 40,000 Americans died in car crashes in 2016. Developers of autonomous vehicle technology have promised to dramatically reduce those fatalities by removing human error from the road; indeed, that is the prime argument AV makers have presented for their product. “This technology was developed only with that in mind,” Tekedra Mawakana, the vice president and global head of policy and government affairs for Waymo, said on a self-driving car panel hosted by Arizona State University last week.
As it is, though, auto and tech companies haven’t fully sold the public on the technology, with more Americans worried than enthused about AVs. Transportation Secretary Elaine Chao has called on the industry to do more to raise awareness about AVs and their benefits. Self-driving Domino’s pizza deliveries, courtesy of Ford, can be seen as one example of the industry’s campaign for greater acclimatization. This incident may set those efforts back.
For its part, the federal government has so far taken a decidedly relaxed approach to regulating AV makers. In the absence of overarching federal rules, Uber, Lyft, Waymo, Tesla, General Motors, and other companies vying to bring self-driving cars to market have pushed states to keep their regulations loose, too. While dozens of state legislatures have gone on to establish rules around safety, taxes, and insurance for AVs, Arizona has cleared a more or less regulation-free path for testing, in an effort to fashion itself as a research hub for the driverless future. Uber, Lyft, Waymo, General Motors, Intel, and other companies have set up shop there, with semi-robotic vehicles cruising the roads of Tempe, Phoenix, and other Arizona communities.
The state’s anything-goes approach has been good for business, but traffic safety advocates have been sounding alarm bells about the lack of safety restrictions and oversight. “It’s open season on other Arizona drivers and pedestrians,” Rosemary Shahan, president of Consumers for Auto Reliability and Safety, told the New York Times in a November 2017 article about the state’s approach to AV testing. “There is a complete and utter vacuum on safety.”
That gap was revealed, to some extent, by how the state handled another AV crash in Tempe—this one non-fatal—that occurred between a self-driving Uber and a Honda CR-V in March 2017. The Honda driver, Alexandra Cole, was turning left at a busy intersection as an Uber vehicle coming from the opposite direction zipped through the yellow light. The cars collided and the Uber flipped on its side. With no injuries, the incident was swiftly resolved; Cole was found at fault for failing to yield. But, the Times reported,
[W]hat happened after the accident revealed a system that was unprepared for computer-operated vehicles. Mr. Ducey, Tempe officials and state transportation regulators did not get briefed on the collision. The self-driving task force set up by the governor, which has met twice in two years, also did not review the incident.
More intensive official scrutiny of Sunday night’s fatality can be expected, given that the National Transportation Safety Board, which also investigated the fatal Tesla Autopilot accident in 2016 and found Tesla partly to blame, has launched an investigation. Now the questions are: How extensive will this process be? What types of data will local, state, or federal investigators—as far as they exist—request? What will and won’t they be able to conclude? Will these early incidents and investigations establish norms for the day AVs are in wider use and more regularly involved in collisions, however advanced the technology becomes?
How this investigation plays out may be telling on these fronts. As Henry Grabar points out at Slate, “You can expect that Uber, local regulators, and tech evangelists will make much of the Tempe police report that the woman was outside a crosswalk”—even though the avenue where the victim was killed is an eight-lane road with few crosswalks. In Arizona, where just 1.6 percent of the population commutes by foot, many communities are shaped by such car-oriented street design, which cultivates high rates of pedestrian and cyclist mortality. According to new numbers released just last week by the Governor’s Highway Safety Association, Arizona has the highest rate of pedestrian deaths in the nation.
For members of the public who are already leery of self-driving vehicles, the Tempe incident could feed further misgivings about this emerging technology. But if history is any indication, it probably won’t halt its momentum. In September 1899, Henry Hale Bliss (no relation to the author) was struck by an electric taxi while stepping off a streetcar in New York City; he is now recognized as the first pedestrian in the country to be killed by a car. Initial media coverage was intense, as Smithsonian magazine reports, but charges of manslaughter against the taxi driver were dropped after Bliss’ death was determined to be an “accident.” In 1999, a plaque was installed at the site of the crash, West 74th Street and Central Park West, to mark its 100th anniversary.
Since the death of the unlucky Mr. Bliss, the toll from automobiles in the U.S. has soared: A pedestrian was killed by a car every 1.6 hours on average in 2015, CDC data shows. Self-driving cars do have the potential to slow that rate. So far, the safety track records of test vehicles by most industry players—especially from Waymo and GM—are impressive. Companies insist that, to improve the technology, testing in real-world situations must continue.
But, as Adrienne LaFrance wrote in the Atlantic in 2015, the safety benefits of these vehicles depend on the technology’s widespread adoption, which in turn depends on their cultural acceptance. With or without strengthened safety regulations, that may require the public’s consent to play a role as subjects involved in experimentation with a still-dangerous technology. Statistically, on a per mile basis, computer drivers do appear already to be less dangerous than human ones. That may not be enough. LaFrance quoted Andrew Moore, the dean of computer science at Carnegie Mellon: “No one is going to want to realize autonomous driving into the world until there’s proof that it’s much safer, like a factor of 100 safer, than having a human drive.”