In emergencies, people place too much trust in robots

Imagine that you're in an unfamiliar building, participating in a research experiment. A robot escorts you from room to room so you can complete a survey about robots and then read an unrelated magazine article. The robot you are following is a bit unreliable, though: it guides you to the wrong room a few times and at one point breaks down entirely. (The robot is secretly being controlled by one of the experimenters.)

Suddenly, the fire alarm goes off and smoke fills the hallway. The robot, labeled "Emergency Guide Robot," lights up its red LEDs and uses its "arms" to point people toward a route at the back of the building instead of toward the doorway you entered through, which is clearly marked with exit signs.

Do you trust the faulty bot?

In the study, all 24 participants did. They were unknowingly being tested on how much trust they would place in the robot, even after it had demonstrated repeated failures.

"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn't follow it during the simulated emergency," Paul Robinette, a research engineer at Georgia Tech Research Institute (GTRI) who conducted the study as part of his doctoral dissertation, said in a press release. "Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."

The researchers believe that participants viewed the robot as an authority figure and were more likely to trust it in a situation as stressful as a fire. Test subjects were less likely to trust a faulty robot in simulations that did not involve an emergency.

"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," explained Alan Wagner, a senior research engineer at GTRI. "In our studies, test subjects followed the robot's directions even to the point where it might have put them in danger had this been a real emergency."

When the robot made obvious errors during the emergency evacuation, participants did begin to question its instructions. Even so, some subjects still followed its orders.

This was the first study to examine the trust humans place in robots during an emergency, an important issue as robots and intelligent systems such as self-driving cars take on larger roles in our lives. Future research at Georgia Tech will look into why test subjects trusted the robot, whether that response varies with factors such as education level or demographics, and what makes a robot seem more or less trustworthy.

"These are just the type of human-robot experiments that we as roboticists should be investigating," Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering, said in a press release. "We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human."
