SANTA MONICA, California — Virtual drivers that resemble humans may increase consumers' trust in autonomous vehicles, according to a new study published in Human Factors: The Journal of the Human Factors and Ergonomics Society.
The National Safety Council has found that more than 90 percent of vehicle crashes are caused by human error, and proponents of self-driving cars tout increased safety as one of their primary benefits. The thinking is that eliminating the human factor will reduce the error rate.
But, as previously reported by Edmunds, consumers aren't completely sold on the idea. According to a recent Harris Poll, 52 percent of those surveyed think autonomous cars will be very or somewhat dangerous for those inside; 57 percent echo that feeling for other drivers in the area; and 61 percent think they'd be a danger to pedestrians.
Now this latest study, Trusting a Virtual Driver That Looks, Acts, and Thinks Like You, finds that the public's concerns may be allayed if a humanoid robot is handling the controls, rather than simply a computer-controlled steering wheel and brakes.
For the study, researchers introduced more than 100 participants to a virtual driver named "Bob" to assess their level of trust in him. The virtual driver's face, head movements, and driving style were programmed to be either similar or dissimilar to those of each participant.
The subjects planned their own driving routes and rode along as Bob took the wheel in a simulator, then indicated whether they felt comfortable with him driving. The results showed that the more Bob looked, acted, and drove like them, the more the participants trusted his ability to keep them safe in the vehicle.
"We think that the most prominent 'bump' in the road to successful implementation of smart cars is not the technology itself but, rather, the acceptance of that technology by the public," said Frank Verberne, a behavioral scientist who was the lead author of the study, in a statement. "Representing such complex automation technology with something that humans are familiar with — namely, a human behind the wheel — may cause it to become less of a 'black box.'"
It makes sense. After all, we often treat our electronic devices like people, yelling at them when they frustrate us and yelling louder when they frustrate us more.
As far back as the year 2000, scientists from Stanford and Harvard Universities concluded in a study called Machines and Mindlessness: Social Responses to Computers that we respond to machines in many of the same ways in which we respond to other humans.
According to the researchers, even though study participants consciously rejected the notion that electronic devices have personalities or "should be treated like a person," their behavior in the study showed "clear evidence" that they "apply social rules and expectations to computers."
According to NPR.org, the late sociologist Clifford Nass, a specialist in human-machine interaction, said: "The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human — people will respond with an enormous array of social responses."
That may explain why Siri has a human voice and even apologizes politely when she doesn't understand what we want. Apple wants us to be comfortable with its devices so we'll use them and buy more of them.
But whether this kind of interaction with a humanoid driver will entice consumers to new levels of trust in self-driving cars remains to be seen.
Even though numerous automakers, including Audi, BMW, Ford, GM, Mercedes-Benz, Nissan, Toyota, Volkswagen and Volvo — not to mention tech companies like Google — are hard at work on autonomous vehicle technology, to date none of them has put a robot in the driver's seat.
Edmunds says: We'll need a ruling on whether robot drivers like Bob would allow us to use the car-pool lane.