An interior view of operator Rafaela Vasquez moments before an Uber SUV hit a woman in Tempe, Arizona, in March 2018. Tempe Police Department/AP

The National Transportation Safety Board’s preliminary findings on a fatal crash in Tempe highlight the serious “handoff problem” in vehicle automation.

The first rule of safe flying: Pay attention, even when you think you don’t need to. According to a 1994 review by the National Transportation Safety Board, 31 of the 37 serious accidents that occurred on U.S. air carriers between 1978 and 1990 involved “inadequate monitoring.” Pilots, officers, and other crew members neglected to crosscheck instruments, confirm inputs, or speak up when they caught an error.

Over the period of that study, aviation had moved into the automation era, as Maria Konnikova reported for The New Yorker in 2014. Cockpit controls that once required constant vigilance now maintained themselves, asking for human intervention only on an as-needed basis. The idea was to reduce the margin of error via the precision of machines, and in some respects that was the effect. But as planes increasingly flew themselves, pilots became more complacent. The computers had introduced a new problem: the hazardous expectation that a human operator can take control of an automated machine in the moments before disaster, when their attention isn’t otherwise much required.

Decades later, a new NTSB report is fingering the same “handoff problem”—this time in the context of a self-driving Uber car.

On Thursday, the NTSB released its preliminary findings from the federal investigation into a fatal crash by a self-driving Uber vehicle in Tempe, Arizona, on the night of March 18. The report found that sensors on the Volvo XC90 SUV had detected 49-year-old Elaine Herzberg about six seconds before the vehicle hit her as she crossed an otherwise empty seven-lane road. But the vehicle, which was driving in autonomous mode with a backup operator behind the wheel, did not stop. Its factory-equipped automatic emergency braking system had been disabled, investigators found. Uber had also turned off its own emergency braking function while the self-driving system was engaged, in order “to reduce the potential for erratic behavior,” according to the report. Video footage showed the backup driver looking down immediately before the car hit Herzberg. In an interview with the NTSB, the operator, Rafaela Vasquez, said that she had been monitoring the “self-driving interface,” not her smartphone, as earlier reports had speculated.

In the absence of either automated emergency braking system, the company expected the backup driver to intervene at a moment’s notice to prevent a crash. But in this case, the human operator braked only after the collision. Herzberg was killed.

In my March investigation into Uber’s autonomous vehicle testing program, three former employees who worked as backup operators described an arduous work environment that led to exhaustion, boredom, and a false sense of security in the self-driving system. Prior to the Tempe crash, Uber drivers in Tempe, Phoenix, Pittsburgh, and San Francisco worked 8- to 10-hour shifts driving repetitive “loops” with few breaks. They weren’t driving—the car was, while the operators were expected to keep their eyes on the road and hands hovering over the wheel. There was a strict no-cellphone policy.

Towards the end of 2017, as Uber ramped up its ambition to accumulate testing miles, the AV development unit switched from a policy of having two backup operators in the car at all times to only one. Solo operators weren’t supposed to touch the computer interface, which showed the car’s LiDAR view and allowed them to make notes, without stopping the car first. But sometimes it was hard not to, said Ryan Kelley, a former operator who worked in Pittsburgh from 2017 to 2018. “It was nice to look at so you could see if the car was seeing what you were and if it was going to stop,” he told me via text.

Moreover, without a second person to stay alert, and without their additional eyes on the road, “it was easy to get complacent with the system when you’re there for so long,” said Ian Bennett, a former Uber backup operator who also worked in Pittsburgh from 2016 to 2017. Especially as the car’s performance improved: “When nothing crazy happens with the car for months, it’s hard not to get used to it and to stay 100-percent vigilant.”

In March, I spoke with Bennett, Kelley, and one other anonymous former backup operator based in Tempe. They all agreed that the fatality could have been avoided had greater consideration been given to these human factors.

Missy Cummings, the director of Duke University’s Humans and Autonomy Laboratory and a former U.S. Navy fighter pilot, has devoted her career to understanding this very dynamic. The NTSB’s findings point to a stark lack of car-to-human communication inside the vehicle, Cummings said.

“The system knew that a braking maneuver needed to happen, but the engineers never decided that this was a good piece of information for the driver to have,” she said. “Why have a driver at all if you’re not willing to share that? They need better cognitive support for the driver.”

That could be as simple as a big red light flashing BRAKE on the dashboard, or, perhaps more appropriately, a series of both diagnostic and emergency alerts, said Raj Rajkumar, the head of autonomous vehicle research at Carnegie Mellon University. Six seconds might have been enough to stop the car, swerve, or reduce the severity of the impact; statistically, a pedestrian’s odds of surviving a collision at 25 MPH are significantly higher than at 43 MPH, the speed at which the car was traveling.
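For rough context, a back-of-the-envelope sketch shows how much margin six seconds provides at that speed. The deceleration and reaction-time figures below are common textbook assumptions, not values from the NTSB report.

```python
# Back-of-the-envelope stopping math for the six-second window described above.
# All figures below are illustrative assumptions, not values from the NTSB report.

MPH_TO_MS = 0.44704      # miles per hour -> meters per second
SPEED_MPH = 43           # vehicle speed cited in the report
DECEL = 7.0              # assumed hard-braking deceleration, m/s^2 (dry pavement)
REACTION_TIME = 1.0      # assumed alert-driver reaction time, seconds
WARNING_TIME = 6.0       # seconds between detection and impact, per the report

v = SPEED_MPH * MPH_TO_MS                               # ~19.2 m/s
braking_time = v / DECEL                                # ~2.7 s to reach a full stop
braking_distance = v ** 2 / (2 * DECEL)                 # ~26 m covered while braking
total_distance = REACTION_TIME * v + braking_distance   # ~46 m including reaction
distance_available = WARNING_TIME * v                   # ~115 m covered in six seconds

print(f"Time needed to stop: {REACTION_TIME + braking_time:.1f} s of {WARNING_TIME:.0f} s available")
print(f"Distance needed:     {total_distance:.0f} m of {distance_available:.0f} m available")
```

Under these assumptions, the car had more than twice the distance it needed to come to a complete stop.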

More systematically, Uber and every company developing self-driving vehicles—Waymo, Tesla, Apple, GM Cruise, Aurora, and others—need people who specialize in human factors on their teams so that these considerations are made, Cummings said. My own reporting and a subsequent investigation by the Financial Times have revealed a lack of rigor and consistency in operator training programs across the companies developing AV technology.

According to Rajkumar, the NTSB’s findings point to a plethora of problems with Uber’s AV program, ranging from flaws in the technology to concerns about the testing methodology to a lack of support in the operator/vehicle interface. “Yes, the operator was not paying attention,” he said. “But humans are notorious for being distracted.”

One potential fix, Rajkumar said: an eye-tracking operator monitoring system in the car, versions of which are already on the market, could have detected that the operator was distracted and slowed the car down.
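As a rough illustration of the concept (a hypothetical sketch, not Uber’s system or any particular vendor’s product), such a monitor amounts to a watchdog: if the operator’s gaze stays off the road longer than some threshold, the vehicle issues an alert, and if the inattention continues, it begins to slow.

```python
# Minimal sketch of a gaze-based driver-monitoring watchdog.
# Hypothetical logic for illustration only; real systems use calibrated eye
# trackers and integrate with the vehicle's planning and control stack.

from dataclasses import dataclass

@dataclass
class MonitorConfig:
    alert_after_s: float = 1.5   # warn if gaze is off-road this long (assumed threshold)
    slow_after_s: float = 3.0    # begin slowing the vehicle after this long (assumed)

class GazeWatchdog:
    def __init__(self, config: MonitorConfig):
        self.config = config
        self.off_road_time = 0.0

    def update(self, gaze_on_road: bool, dt: float) -> str:
        """Return the action for this control tick: 'ok', 'alert', or 'slow'."""
        if gaze_on_road:
            self.off_road_time = 0.0
            return "ok"
        self.off_road_time += dt
        if self.off_road_time >= self.config.slow_after_s:
            return "slow"    # command a gradual speed reduction
        if self.off_road_time >= self.config.alert_after_s:
            return "alert"   # flash a dashboard warning, sound a chime
        return "ok"

# Example: a 20 Hz control loop would call update(gaze_on_road, dt=0.05) each
# tick and translate "alert"/"slow" into dashboard warnings or speed commands.
```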

Prior to the release of the preliminary investigation findings on Thursday, Uber had announced that it would cease its self-driving vehicle testing program in Arizona. Already, it has shuttered operations in California. But it plans to resume testing in Pittsburgh in the coming months. Asked about possible changes to the testing program in light of the findings, an Uber spokesperson provided this statement to CityLab:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

In commercial aviation, the federal government took note of the emerging understanding of autopilot’s hazards in the 1990s, and airlines changed their training and protocols as a result. Thanks both to the automation itself and to those human considerations, safety has vastly improved.

As with aviation, automated vehicles come with a promise to save lives—in this case, tens of thousands per year—by reducing human error. But as companies rush to bring vehicles to market, there remains a lack of clear federal regulations around testing and deployment.

And within the industry, some researchers say that there hasn’t been as strong a focus on human safety as there needs to be. Bryan Reimer, a research scientist and associate director of the New England University Transportation Center at MIT, said that self-driving automakers must treat their laudable safety goals as the means, as well as the end. “A heavy focus on technology development, in absence of investment in human-centered engineering, may slow the speed at which the life-saving potential of the technology is realized,” said Reimer. “Hopefully the failure in Uber to establish a strong safety culture is a lesson that the industry recognizes.”
