Car seat guy: a mystery no longer. Last month, northern Virginia residents were startled to see a grey Ford Transit Connect van motoring around their neighborhood—without a driver inside. OK, not quite: Further inspection revealed that there was a driver inside, but that he was concealed inside a costume that made him look like an empty car seat. The Virginia Tech Transportation Institute took credit, nay, responsibility for car seat guy, but wouldn’t reveal more. Why? When? How?
Today, we know the truth. Car seat man was part of a Ford-funded study by researchers at the Virginia Tech Transportation Institute into how autonomous vehicles will interact with humans on the road. Today, subtle facial and body cues help drivers and walkers alike understand what moves cars will make next. A wave says, I see you—hit that crosswalk. A nod says, I see you—make that turn. A middle finger says, well—you know what it says. But how do you translate those communications once the driver is a robot?
It’s hard to know, because when driverless cars are tested on public roads today, they’re not really driverless. Companies place well-trained engineers behind the wheel to ensure the vehicles don’t malfunction. But that sends mixed signals to the pedestrians, cyclists, and drivers around them, who may read the non-driving engineers’ movements as faithful indications of what the vehicle will do next.
Which is why Ford is taking a whole new car language out for a test drive. In this experiment, an illuminated bar above the van’s windshield broadcast what the vehicle would do next. Slow blinking meant, I’m stopping. Fast blinking meant, I’m starting. And a solid white light meant, I’m driving around all on my own.
“In virtual reality studies that we’ve run, people don’t understand at first exposure,” says John Shutko, a Ford engineer who specializes in human factors for self-driving vehicles. But once people see the new signals a couple times, those VR studies showed, they’ll start to cotton on. Now Ford and the Virginia Tech Transportation Institute have 150 hours and 1,800 miles of new data to determine how their autonomous vehicle language works in the real world.
Ford is in talks with 11 other automakers to forge an industry standard for self-driving car signals, and it’s hoping to come up with a kind of Esperanto, accessible to all people of all cultures. (That red, green, yellow thing doesn’t apply everywhere.) “If other equipment manufacturers have better signal ideas, we’re willing to change them,” says Shutko.
(Virginia Tech had to pull the car off the road for a few days after it hit the local news. It wanted people to react to the van as if it were a self-driving car—and not because they knew there was a car seat dude inside. “It was not an ideal situation,” says Andy Schaudt, a psychologist who oversees projects at the Institute’s Center for Automated Vehicle Systems. Whoops.)
The car seat costume was adjustable, so the six drivers who spent most of August driving around Northern Virginia could all fit inside comfortably. And yes, they could see. The research team covered two plastic visors—a tinted, sunglass-y one for day, and a clear one for night—with a thin, reflective fabric, so the driver could watch the road, but still surprise Virginians who definitely couldn’t see in. The design was inspired by work by Stanford researchers, but also YouTube pranksters who dressed as seats to scare unsuspecting drive-thru workers. Great for fun, great for Halloween, great for science.