It’s called a Ghost Driver experiment, and it’s not a gag.
Washington reporters are used to getting stonewalled, so NBC4 Washington transportation reporter Adam Tuss was persistent yesterday when he tried to ask a person hidden inside what looked like a car seat what he was doing driving a “driverless” van around Arlington, Virginia.
Armed with an unforgettable opening line (“I’m with the news, dude”), a determined Tuss failed to get the unknown upholstered driver to respond, but the reporter eventually got an answer about the stunt. It turns out the vehicle is part of a study being conducted by the Virginia Tech Transportation Institute. This is, improbably enough, real science.
The research project aims to study how human beings will respond to driverless cars in the real world. According to the Institute’s press release, they’re determining how best to design autonomous vehicles (AVs) and investigating the need for additional exterior signals by rolling around both low-density and urban areas in Arlington in a Ford van that, as Tuss sussed out, is not actually being driven by a robot. (The researchers say they’re not giving interviews about the project while the experiment is ongoing.)
While most autonomous vehicle research focuses on how well technology can react to the road, this experiment flips the script—it’s a cognitive experiment testing how humans handle AVs.
The project builds on a research protocol for imitating a driverless car by hiding in a seat suit, designed by a team at Stanford University for a 2015 study. Their car seat costume was inspired by a YouTube prank and built from wire mesh, papier-mâché, and a regular seat cover, with black see-through fabric at the headrest.
“We called it Ghost Driver,” says Wendy Ju, the executive director for Interaction Design Research at the university’s Center for Design Research. “We were interested in understanding how people naturally react if they can’t see a driver to interact with, and so we devised this as an experimental protocol, having a person disguised in a car seat costume.”
The fake driverless car experiment is a version of what’s known as a Wizard of Oz experiment, in which subjects interact with a computer system they believe to be autonomous but that is actually operated, at least in part, by an unseen human being. Another name for it is an “engineer-in-the-loop” experiment. “It's a technique where they use a person to substitute for what in real life would be a machine,” says Ju. “You can do things to get fidelity to what the future behavior is going to be.”
Indeed, by eliminating the bother of getting an actual driverless car, the study could focus on its true subjects: the human beings who would dare to share the streets with these unmanned vehicles on the loose. “We wanted to understand how pedestrians would respond to autonomous vehicles,” Ju says. “Pedestrians won't have training the way you would if you bought an autonomous vehicle. They didn't choose to have an autonomous vehicle, but all of the sudden they're on the road.”
The Stanford team ran the Ghost Driver study on Stanford’s campus in the Bay Area, in a parking lot and at an intersection. They’ve also replicated the study in the Netherlands and Mexico. What they’ve found so far is that people mostly treat driverless cars as if they understand the rules of the road. “When there’s a decent amount of traffic, people really stick to norms,” Ju says. “They don’t spend a lot of time understanding what is going on, they just feel they have the right-of-way and they should cross. They’ll actually wait until they get to the other side of the road to really interrogate the situation further.”
Ju says the researchers would interview their unsuspecting pedestrian subjects after filming their interactions with the car. Some did not even notice the car had no driver, even though it was decked out in (non-functional) radar sensors and vinyl stickers that read “Stanford Autonomous Vehicle.” Others noticed but navigated around the car as if it were any other vehicle. “We would ask them after the fact and they very strongly felt that if the car didn’t understand the rules of the road, it wouldn't be allowed on the road at all. That's the kind of thing we think is important for manufacturers to know,” Ju says.
During the second day of the initial experiment, the researchers started to conduct what’s known as a breaching experiment, in which the researchers deliberately violate norms: in this case, easing forward at the intersection to force an interaction with a pedestrian. You can see the fake robot car messing with people in a video Ju uploaded from the experiment.
“People look up and try to negotiate with the driver, and that’s when they see there’s no driver to negotiate with,” Ju says. “You could see the hesitation, but they’d keep going. Maybe if they had a lot of time to reflect they would have made a different decision.”
Ju says she spoke with the VTTI researchers about a month ago regarding their plans for a longitudinal study. She told them it would be difficult to keep their Ghost Driver experiment under wraps once social media got wind of it. “We were getting pictures on Twitter within an hour of running our experiment,” she says.
As for whether the Virginia Tech project has lost anything by getting its cover blown, that’s a matter of debate within the scientific community. “There is some debate about whether it’s important to have deception as part of Wizard of Oz studies,” Ju says. “My belief is that people act ‘as if’—a lot of the time you don’t know how a machine works, but you just act as if it’s working the way it seems like it’s working. People can act that way even if they understand that it’s a person behind the seat.”