Rear-end collisions account for approximately 28% of all collisions. Multiple studies have demonstrated that forward roadway collision systems show considerable promise in reducing such crashes. For example, with experienced and older drivers, even relatively unobtrusive SAE Level 1 ADAS forward roadway collision systems have been shown to produce shorter reaction times, lower impact speeds when a collision does occur, and reduced crash rates overall.
But young drivers pose a special case in collision avoidance. Multiple studies have shown that novice drivers regularly fail to perceive hazardous locations in the roadway because they have not developed appropriate scanning behaviors. In the literature this is called “latent hazard anticipation”: a driver’s ability to appropriately scan areas of the roadway that are likely to contain hazards (e.g., checking tree lines that may conceal wildlife, tracking a wobbly cyclist on the soft shoulder, recognizing that a parked truck could easily conceal a pedestrian attempting to cross, etc.). Can Level 1 ADAS compensate for the novice driver’s inherent inability to efficiently scan for threats?
And if so, how critical is appropriate timing of these automated alerts? We already know that when a forward roadway collision system has poorly timed alerts, it can undermine safety rather than enhance it. Early alerts get ignored. Late alerts can distract, causing drivers to abandon or botch appropriate evasive maneuvers they’d already begun without the alert. In either case, mis-timed alerts undercut driver confidence in the assistive technology.
Is this more pronounced in novice drivers? How finely tuned will a forward-collision detection ADAS need to be to improve their latent hazard anticipation ability?
Visual Warnings Can Impact Latent Hazard Detection—For Good or Ill
In 2017, a team at the University of Massachusetts, Amherst dug into this issue. At that time, UMass Amherst was using a Realtime Technologies RDS-2000 Full Cab Simulator built around a fixed-base Saturn sedan with three screens and a 150-degree horizontal by 30-degree vertical field of view. This unit was optimized for full immersion—not just visually, but also equipped with complete OEM vehicle controls and a surround sound system that generated environmental, vehicle, and roadway noise (including Doppler effects). The simulator was networked with an Applied Science Laboratories (ASL) Mobile Eye head-mounted eye tracking system. The simulator ran custom simulation scenarios the team had designed to emulate the ADAS experience and simulate a series of latent and active hazard conditions. (Their findings ultimately saw publication as “Effectiveness of Visual Collision Warning Alert on Young Drivers’ Latent Hazard Anticipation.”)
Led by Foroogh Hajiseyedjavadi, these researchers determined that visual warnings from a forward-collision detection ADAS were highly effective at improving novice drivers’ latent hazard detection. This was the case even when the alert came just 2 seconds in advance of a potential collision!
And these were not small improvements: while the control group of novice drivers only anticipated hazards around 70% of the time, those receiving support from ADAS did so more than 90% of the time. “In other words,” the authors wrote, “the visual warning alerts when presented 2s in advance of a potential threat significantly improves the anticipation ability of young drivers.”
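To make the 2-second lead time concrete, here is a minimal, purely illustrative sketch of how a forward collision warning might be triggered using a time-to-collision (TTC) threshold. This is not the logic used in the UMass study or in any particular production ADAS—the function names and the constant-closing-speed assumption are ours—but the 2-second trigger matches the alert timing the researchers tested.

```python
# Illustrative sketch only: a constant-closing-speed time-to-collision (TTC)
# trigger for a forward collision warning. Function names and assumptions are
# hypothetical; the 2 s default matches the lead time tested in the study.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact, assuming the closing speed stays constant.

    Returns infinity when the gap is holding steady or opening.
    """
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps


def should_warn(gap_m: float, closing_speed_mps: float, lead_s: float = 2.0) -> bool:
    """Fire the visual alert once TTC drops to the chosen lead time."""
    return time_to_collision(gap_m, closing_speed_mps) <= lead_s


# Example: a 30 m gap closing at 20 m/s (~72 km/h) gives TTC = 1.5 s,
# inside the 2 s window, so the alert fires.
print(should_warn(30.0, 20.0))   # True
print(should_warn(60.0, 20.0))   # False: TTC = 3.0 s, too early to warn
```

Even this toy version shows why tuning is delicate: shrink `lead_s` and late alerts interrupt maneuvers already underway; grow it and early alerts train drivers to ignore the system.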
UMass Research-Backed Sims, Without Coding
Shortly after this study was completed, the University of Massachusetts Amherst upgraded their simulator to an RDS-2000 built around a Ford Fusion with a 300-degree field of view. More recently, they upgraded their lab once more. They are now using RTI’s premier graphical simulation scenario rapid-authoring suite, SimCreator DX, as well as SimDriver and SimADAS (two tools for completely simulating autonomous and assistive systems).
Coupled with SimObserver Pro HD, UMass Amherst can now quickly develop new driving research studies (including those delving into more complex ADAS and AV scenarios) with automated data collection and synchronization without writing a line of code or spending months in tedious software development.