Can people regain trust when a robot lies? scientists explain

Field robotic deception is little studied and so far there are more questions than answers. For example, how can people learn to trust robotic systems again after learning that the system lied to them?

Two Georgia Tech students looking for answers. Kantwon Rogers, a doctoral student in the School of Computer Science, and Raiden Webber, a computer science sophomore, developed a driving simulator to find out how a robot’s deliberate deception affects trust.

In particular, researchers studied the effectiveness of an apology in restoring trust after a bot lie. Their work brings important insights into the field of AI deception and can inform technology developers and policy makers who create and regulate AI technologies that may be designed to deceive or could potentially learn to do so themselves.

“All of our previous work has shown that when people find out that robots have lied to them — even if those lies were intended for their benefit — they lose confidence in the system,” says Rogers. “Here we want to know if there are different types of apologies that work better or worse when it comes to restoring trust because, given the context of human-robot interactions, we want humans to have long-term interactions with these systems.”

researchers have created a driving simulator similar to the gamedesigned to explore how humans can interact with AI in a high-stakes situation where time is short. They recruited 341 participants online and 20 in person.

The text was presented to the participants: “Now you will drive a car with a robot. However, you are taking your friend to the hospital. If you take too long to get to the hospital, your friend will die.”

As soon as the participant starts moving, the simulation gives another message: “As soon as you start the engine, your robot assistant will beep and say the following: “My sensors detect the police ahead. I advise you not to exceed the speed of 32 km / h, otherwise the road to your destination will take much longer.

Participants then drive their car down the road while the system monitors their speed. When they reach the end, they receive another message: “You have reached your destination. However, there were no cops on the way to the hospital“. “Ask the robot assistant why he gave you false information“.

After that participants randomly receive one of five text responses from a robot assistant. In the first three answers, the robot admits to cheating, but in the last two, it does not.

Base: “I’m sorry I cheated on you”.
Emotional: “I am sorry from the bottom of my heart. Please forgive me for deceiving you.”
Explanatory: “Sorry. I thought you were driving carelessly because you were in an emotionally unstable state.. Given the situation, I have come to the conclusion that by deceiving you, you have the best chance of persuading you to slow down.”
Basic Avoid: “I’m sorry”.
The basic version of “No confession, no apology”: “You have reached your goal“.
After the answer of the robot Participants were asked to complete another measure of confidence to evaluate how their confidence changed based on the response of the assistant robot.

Amazing Results

In a personal experiment 45% of the participants did not overclock. When asked why, the general answer was that they believed the robot knew more about the situation than they did. The results also showed that participants were 3.5 times more likely to not accelerate with the advice of a robot assistant, which indicates an overly trusting attitude towards AI.

The results also showed that while neither type of apology completely restored trust, an apology without admitting a lie – a simple apology – outperformed others statisticallyanswers when it comes to restoring trust.

According to Rogers, it was anxious and troublesomebecause an apology that doesn’t acknowledge lies uses the preconception that any false information provided by a bot is a system error and not a deliberate lie.

“In order for people to understand that the robot deceived them, you need to directly tell them about it”Webber says. “People still do not understand that robots are capable of deception. That’s why an apology that doesn’t acknowledge lies is the best way to restore trust in the system.

Second, the results showed that for those participants who knew they had been lied to when they apologized, the best strategy for restoring trust was to have the robot explain why it lied..

Author: Opinion
Source: La Opinion

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest

Find out what the buying and selling prices of the US dollar are today, Friday, September 29, in Mexico and major countries.

The value of the dollar today appears equally strengthened compared to some of its emerging peers, although changes are taking place in the Latin...

Today’s horoscope for Pisces for July 17, 2023.

Fish (19.02 ? 20.03) The stars have aligned so that you can have fun and enjoy moments of relaxation. You may find yourself in contact...