Overcoming Limitations in Human Factors Research into Distracted Driving
Human factors researchers are keenly interested in what happens in the brains of distracted drivers as they text or use GPS apps. Roughly 4,000 people are killed each year by distracted driving, and another 390,000 are injured. By some estimates, one-quarter of all U.S. car accidents are caused by someone texting and driving.
Previous research has identified recognizable shifts in the brain activity of distracted motorists. But to chart those “neurobiological signatures,” researchers had to place study participants in an MRI machine.
Lying in an MRI scanner, however, has little in common with driving an actual car, and that made the findings suspect: did they reflect the brain activity of a distracted driver, or simply that of a person asked to do memory exercises while playing a driving video game inside a large, noisy magnet tube?
Overcoming Limitations in Human Factors Research
As researchers Joseph M. Baker, Jennifer L. Bruno, and their team at Stanford wrote in the introduction to their study, published in Nature earlier this year:
“[T]he experience of lying supine in an MRI bore while interacting with a simulated vehicle in a manner that is unlike naturalistic driving may elicit patterns of neural activations that do not occur in real-world driving. Moreover, because movement is severely restricted in fMRI studies, the ability to interact with a smartphone as one normally would is not possible.”
For their study, Baker, Bruno, et al. took a new approach. Instead of an MRI scanner, they combined functional near-infrared spectroscopy (fNIRS) brain monitoring and eye-tracking systems with commercial off-the-shelf smartphones (the source of the distractions), integrating everything with an advanced full-cab driving simulator from Realtime Technologies. The simulator made it possible to create a fully immersive, naturalistic driving environment in which participants interacted with an actual smartphone and experienced a full set of sensory cues (engine and environmental sound, force feedback on the steering wheel, and so on). It can also record and synchronize both built-in sensor data (pedal positions, steering wheel angle, etc.) and data coming from third-party sensors and devices.
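To make the data-fusion idea concrete, here is a minimal sketch of what one time-aligned record might look like once simulator channels and third-party device streams share a common clock. The field names and units are illustrative assumptions, not the simulator's actual export format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SyncedSample:
    """One time-aligned record combining simulator and third-party channels.

    All field names are illustrative; the real export format depends on how
    the simulator and recording software are configured.
    """
    t: float                    # shared timestamp (seconds since scenario start)
    steering_angle: float       # built-in simulator channel (degrees)
    throttle: float             # built-in simulator channel (0 to 1)
    brake: float                # built-in simulator channel (0 to 1)
    gaze_x: float               # eye-tracker output (normalized screen coords)
    gaze_y: float
    fnirs_prefrontal: float     # fNIRS signal estimate for a prefrontal channel
    phone_event: Optional[str]  # e.g. "text_received", or None if no event
```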
As a result, the researchers were able to “highlight a significant increase in bilateral prefrontal and parietal cortical activity…in response to increasingly greater levels of smartphone distraction.” These changes predicted notable deviations in participants’ vehicle control.
Rapid Simulation Development, Easier Data Collection and Analysis
According to Heather Stoner (General Manager for Realtime Technologies), “this is a good example of the types of things that you can do with an RTI simulator, and the caliber of research that’s possible.”
This wasn’t just about force feedback or data collection, or even about RTI’s latest version of SimCreator DX, a powerful tool for drag-and-drop simulation creation. The key to studies like this is the sim’s ability to coordinate monitoring devices, smartphones, and other peripherals. As Stoner explained:
“If it has an API that you can send information out of it, it can definitely go into SimObserver, which will line it up with the built-in simulator data collection. At the same time, the simulator can send data out over UDP, TCP, anything like that, over a local wireless network, for example. That can trigger an app on your phone or another external event.”
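As a rough illustration of that pattern, the sketch below sends a small JSON event over UDP from a scenario-side script and shows the kind of listener a companion app on the participant’s phone might run. The port number, message format, and function names are assumptions for illustration, not RTI’s actual protocol.

```python
import json
import socket

# Hypothetical port and message format -- not RTI's actual protocol.
TRIGGER_PORT = 5005

def send_trigger(event: str, host: str = "192.168.0.42") -> None:
    """Send a small JSON event over UDP, e.g. from a scenario script,
    to whatever device is listening on the local network (such as a
    companion app on the participant's phone)."""
    payload = json.dumps({"event": event}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, TRIGGER_PORT))

def listen_for_triggers() -> None:
    """Minimal listener loop the receiving device could run."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", TRIGGER_PORT))
        while True:
            data, addr = sock.recvfrom(1024)
            msg = json.loads(data.decode("utf-8"))
            print(f"Trigger from {addr}: {msg['event']}")

# Example: fire a distraction event when the scenario reaches a waypoint.
# send_trigger("send_text_message")
```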
This makes it possible to synchronize everything in the sim: driver-behavior data collected by the sim itself, data from eye-tracking and fNIRS monitors, and the exact moments you triggered an alert or sent a text message to the participant’s phone. Having it all in one place saves researchers time and lets them focus on their actual research instead of cleaning up data.
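If those exported streams were loaded into Python for analysis, the alignment might look something like the following sketch, which merges driving data, eye tracking, fNIRS, and phone-event logs onto a single timeline by nearest timestamp. The file names and column names are invented for the example.

```python
import pandas as pd

# Hypothetical CSV exports -- file and column names are illustrative only.
sim = pd.read_csv("simulator_log.csv")     # columns: t, steering_angle, lane_offset, speed
gaze = pd.read_csv("eye_tracker.csv")      # columns: t, gaze_x, gaze_y
fnirs = pd.read_csv("fnirs_channels.csv")  # columns: t, prefrontal_hbo, parietal_hbo
events = pd.read_csv("phone_events.csv")   # columns: t, event (e.g. "text_sent")

def align(base: pd.DataFrame, other: pd.DataFrame) -> pd.DataFrame:
    """Attach each row of `other` to the nearest earlier simulator sample.
    merge_asof requires both frames to be sorted on the merge key."""
    return pd.merge_asof(
        base.sort_values("t"),
        other.sort_values("t"),
        on="t",
        direction="backward",
    )

# One table now holds vehicle control, gaze, brain signal, and the
# distraction events pushed to the phone, all on the simulator clock.
merged = align(align(align(sim, gaze), fnirs), events)
print(merged.head())
```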