To err is human—especially behind the wheel. According to government estimates and existing driving research, roughly 94% of all auto accidents result from driver error. Autonomous vehicles offer an important opportunity to reduce human error behind the wheel.
But as we’ve seen over the last several years, the rise of autonomous driving systems has not consistently reduced driver error. Instead, we find ourselves in new territory: as humans trained to operate “dumb” vehicles, we must adapt to working with smart ones.
What if we could sidestep the possibility of human error by automatically detecting when a driver is distracted—much as an airbag system automatically deploys when it detects a collision, with no action from the vehicle occupant?
Driving research into this new terrain requires a new set of tools: driving simulators that create an immersive experience while allowing us to look past a participant’s subjective report of an experience and into their actual neurobiological reactions to the situation.
Simulators that Support Unimagined New Directions in Driving Research
Stanford acquired their first simulator from Realtime Technologies specifically to pursue driving research into how people transition from automated to manual driving. This required a simulator that both created a customizable immersive experience and could serve as a flexible platform for experimentation and data collection.
As they explained at the time, “the simulator is one of few that allow a driver to switch from full or partial autonomy to manual driving and back at any time. It is also the first simulator to automatically synchronize EEG, EKG, respiration, and skin conductance with driving behavior, allowing new answers to questions about distraction and the ability of cars to take over based on the driver’s mental and physical state.”
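The key capability described above is alignment: every physiological sample must be matched to the driving-behavior sample recorded at the same moment. As an illustrative sketch only—this is not the simulator’s actual pipeline, and the function and data names are hypothetical—timestamp-based pairing of two recorded streams might look like this:

```python
def synchronize(driving, physio, tolerance=0.01):
    """Pair each driving sample with the nearest physiological sample in time.

    driving, physio: lists of (timestamp_seconds, value), sorted by timestamp.
    tolerance: maximum allowed timestamp gap, in seconds.
    Returns a list of (timestamp, driving_value, physio_value) tuples.
    """
    paired = []
    j = 0
    for t, d in driving:
        # Advance j to the physio sample whose timestamp is closest to t.
        while j + 1 < len(physio) and abs(physio[j + 1][0] - t) <= abs(physio[j][0] - t):
            j += 1
        # Only pair samples recorded within the tolerance window.
        if abs(physio[j][0] - t) <= tolerance:
            paired.append((t, d, physio[j][1]))
    return paired

# Hypothetical example: steering-wheel angle stream and an EEG-feature stream.
driving = [(0.00, 0.1), (0.02, 0.2), (0.04, 0.3)]
physio = [(0.001, 5.0), (0.021, 5.5), (0.041, 6.0)]
print(synchronize(driving, physio))
```

Pairing by nearest timestamp (rather than by sample index) matters because physiological sensors and the vehicle model typically run at different sampling rates.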
Using this simulator, Stanford researchers have completed several studies exploring the neurobiological response to stimuli while driving.
For example, researchers wanted to assess “the neural basis of drivers’ responses to changes in vehicle handling.” What happens in a driver’s brain when they must handle abrupt changes in vehicle handling and dynamics? Specifically, how does a driver cope with a random, complete reversal of expected steering behavior?
Such a scenario would be mechanically challenging and incredibly dangerous to explore in real vehicles, even at low speeds on a closed course.
But with the right simulation platform, it’s a relatively basic coding exercise.
Unexpected New Directions in Driving Research
Heather Stoner—General Manager for Realtime Technologies—recalls talking to the researchers early in their work.
“They had this ‘crazy idea,’” she recalls, “that when you steer to the left, the car goes to the right. How do we do that? It was an odd idea, and we scratched our heads for a second, and then said ‘Oh, well, you could do that by putting this component into the model here and basically reversing the signs.’ We gave them that little hint of information. They implemented it, and away they went. It was really just that easy, just over email.”
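The trick Stoner describes—inserting a component that reverses the sign of the steering input—can be sketched in a few lines. This is a hypothetical illustration, not SimCreator DX code; the class and method names are invented for the example:

```python
class SteeringReversal:
    """Pass-through component between the steering wheel input and the
    vehicle dynamics model, with an optional sign reversal (hypothetical)."""

    def __init__(self):
        self.reversed = False

    def trigger(self):
        """Activate the reversal, e.g. at a random moment mid-trial."""
        self.reversed = True

    def process(self, steering_angle):
        # Negating the sign makes a leftward steer turn the car right.
        return -steering_angle if self.reversed else steering_angle


component = SteeringReversal()
normal = component.process(0.3)    # passes through unchanged: 0.3
component.trigger()
flipped = component.process(0.3)   # sign reversed: -0.3
```

The point of the anecdote holds up in the sketch: the vehicle model itself is untouched, and the entire manipulation lives in one small component on the signal path.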
This programming ease and flexibility is consistently advantageous in driving research, where teams often include individuals with extensive experience in psychology or neurobiology, but limited training in computer programming. As Stanford wrote soon after acquiring their sim, “Although the simulator is extremely sophisticated, … it can be programmed by a Stanford undergraduate without specialized training.”
“That’s the big advantage of the SimCreator DX software,” Stoner explains. “It can do a whole lot very quickly with drag-and-drop. But for people that want to do all this other cool stuff—things no one would have thought to design into the system—they can get under the hood and make something totally new really quickly.”