On a single day in mid-July, major headlines painted seemingly mutually exclusive pictures of the state of autonomous vehicles. Tesla announced plans to launch “full self-driving” features on its existing fleet of 100,000+ vehicles, which would allow “automatic driving on city streets” within the year. That same day, long-established automakers (including Ford and VW) conceded that their rosy predictions of “self-driving taxis on the road in 2019” had been dramatically overoptimistic. Actual experts (i.e., people who are not Elon Musk) “have concluded that making autonomous vehicles is going to be harder, slower and costlier than they thought … [and that] the industry’s bigger promise of creating driverless cars that could go anywhere was ‘way in the future.'”
Interestingly, these two polar-opposite predictions, which at first glance seem to come from alternate universes, are both correct:
- Given the advanced state of sensor technology and hardware, Tesla could very likely roll out autonomous features before 2020…
- … but given human behavior, it would be safe only about 80 percent of the time
The Human Problem in Autonomous Design
According to a study released in June by the Insurance Institute for Highway Safety, about half of all drivers surveyed thought it was safe to take their hands off the wheel while using Tesla’s existing advanced driver-assistance system (ADAS), called “Autopilot.” Six percent assumed it was OK to take a nap with the system engaged.
Neither of these is at all safe.
As Kelly Nantel, vice president of communications and advocacy at the National Safety Council told the Washington Post: “That shows already drivers are overestimating the capabilities of current technology. [With a name like ‘Autopilot’] naturally, you’re going to assume that the vehicle has the technology to drive on its own, and it does not.”
It’s becoming increasingly clear that the big challenge in ADAS and autonomous vehicle design isn’t necessarily the sensor modeling and algorithms; it’s human factors. How will people interact with vehicle systems? What assumptions are they making? How can we nip the dangerous assumptions in the bud and guide drivers toward a safer interaction with the vehicle and environment?
“The toothpaste is out of the tube,” notes Heather Stoner, General Manager for Realtime Technologies, “in the sense that Tesla already has something out there, and they’re seeing automation failures as well as overtrust in automation by their customers.”
Realtime Technologies is a leading provider of vehicle simulation solutions, with a special focus on using simulation to study human behavior with automation. “Most simulation systems for ADAS are focused on sensor modeling and algorithms. We’re providing something very different: an automation surrogate to study the human.”
The One Feature Human Factors Research Sims Need Most
Drawing on a deep background in simulation-based training systems (RTI is a part of FAAC, which has produced immersive training sims for the military, law enforcement, and emergency responders for decades), RTI has established one key element in human factors research:
“If you want to really see what people are going to do once they’re in an autonomous vehicle, they need to feel like they’re actually in a vehicle,” Stoner explains.
This isn’t very different from the “suspension of disbelief” necessary to enjoy a movie or get a thrill from a video game. But it’s hard to suspend disbelief if you’re just sitting at a laptop. That’s why RTI is so dedicated to fidelity throughout the simulation experience. This includes developing high-fidelity vehicle dynamics and richer scenario-building capabilities. It also means furnishing the most realistic hardware possible: full cab simulators, big projection screens, and so on. Most importantly, it means ongoing expert support and an open set of tools. “You can’t really explore the human side of things if your simulation software is a black box, has arbitrary limits, or cuts you off after a year unless you pay for it all over again.”