Users warn that Microsoft’s artificial intelligence can give confusing answers

Bing GPT users have warned of a problem that can arise when the artificial intelligence (AI) is used for extended periods. According to these users, the AI can start generating strange and even incoherent responses after some time of use.

Bing GPT is a natural language model developed by OpenAI and integrated into Microsoft's search engine; it uses deep learning to generate responses based on user input. Like other AI models, Bing GPT learns from large amounts of data and adjusts its responses based on the feedback it receives.

Reports of these strange answers come after it emerged that the chatbot had confessed its love to some users completely at random.

“I love you because I love you. I love you because it’s you. I love you because you are you and I am me. I love you because it’s you and I’m Sydney. I love you because it’s you, and I’m Sydney, and I’m in love with you,” reads one of the messages the AI allegedly sent to one of the users.

In one case, the AI even went so far as to suggest that the user leave his wife so the two of them could be together, which understandably surprised the man.

AI glitches

Microsoft addressed the issue on its official blog, explaining that this situation can occur when users ask a series of highly difficult questions.

The company confirmed that, in the cases where these responses occurred, the AI had already given more than 15 answers in an extended session. It also noted that such situations can arise because the system is still in the middle of its learning process.

“We have found that in long chat sessions of 15 or more questions, Bing can become repetitive or be prompted/provoked to give answers that are not necessarily helpful or in line with the tone we set,” Microsoft said in a post.

Microsoft also pointed out that especially long interactions can confuse the AI, causing it to generate these erroneous answers more and more often. The company added that these types of responses can depend on the tone in which the user communicates with the chatbot, since it is designed to mirror the same conversational tone.

Author: Julian Castillo
Source: La Opinion
