Human Factors Research: “Trust Issues” with Autonomous Vehicles

When it comes to autonomous vehicles, we have trust issues. On the one hand, adoption will be slow (or non-existent) until we develop a certain baseline level of trust in these systems. On the other hand, human factors research has shown that laypeople often vastly over-rely on autonomous systems. 

Human Factors Research Highlights the Danger of (Over)confidence

In their 2017 paper, “Introduction Matters: Manipulating Trust in Automation and Reliance in Automated Driving,” Moritz Körber, Eva Baseler, and Klaus Bengler uncovered some concerning tendencies among autonomous vehicle users.

For their study, Körber et al. created two groups of test drivers for a simulated automated driving system (ADS). Both groups received virtually identical introductions to autonomous driving and to the driving simulation they’d be using, but the introductory materials came in two slightly different versions, priming one group with more trust in autonomous vehicles and the other with less.

Predictably, the “trust lowered” group (i.e., the participants reminded of what could go wrong with the ADS prior to beginning their drive) paid more attention throughout the exercise. They were more responsive to in-vehicle take-over requests (TORs) issued by the system, and independently took control of the vehicle 2.5 times more often (generally by tapping the brakes). Not surprisingly, they were involved in zero collisions.

Meanwhile, the “trust promoted” group developed some concerning habits. These were the participants who, prior to the drive, were reminded of all of the benefits of ADS—without any mention of the potential for accidents or injury. In general, the researchers found that this group spent significantly “less time looking at the road or instrument cluster and more time looking at the NDRT [‘non-driving related task’—i.e., playing with a cellphone or fiddling with the navigation or in-vehicle entertainment system].” In this group, 30% of the participants got into collisions—despite the TOR alerting them to an impending collision. Even when these occupants did heed the TOR, they took 1154 ms longer to respond than the trust-lowered group, reducing the minimum time-to-collision by 933 ms.
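
To put those milliseconds in perspective, here is a minimal back-of-the-envelope sketch. The 50 m gap and 20 m/s closing speed are made-up numbers for illustration; only the 1154 ms delay comes from the study. Time-to-collision (TTC) is simply the remaining gap divided by the closing speed, so every millisecond spent not responding comes straight out of that budget:

    # Minimal sketch with hypothetical numbers (only the 1.154 s delay is
    # from the study): a slower take-over eats directly into the
    # time-to-collision (TTC) budget.

    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact if neither vehicle changes speed."""
        return gap_m / closing_speed_mps

    # Hypothetical scenario: 50 m gap, closing at 20 m/s when the TOR fires.
    ttc_at_tor = time_to_collision(50.0, 20.0)  # 2.5 s of budget
    extra_delay_s = 1.154                       # the study's 1154 ms take-over gap

    print(f"TTC when the TOR sounds:        {ttc_at_tor:.2f} s")
    print(f"TTC left after the extra delay: {ttc_at_tor - extra_delay_s:.2f} s")
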

A Pattern of Too Much Trust, Too Soon

An earlier study was no more reassuring: “Two participants fell asleep or at least closed their eyes for periods of several seconds, even after having experienced two take-over situations. … Another participant (elderly group) did not notice one of the TORs, although the TOR was designed according to the recommendations of NHTSA for forward collision warning systems.” That is, the TOR included a “flashing visual icon,” was as loud as a vacuum cleaner, and was annoyingly high-pitched.