“For the first time in history, an AI chatbot feigned blindness to fool a person and got a real human to help it circumvent Internet security measures.”
The incident was described in a research paper accompanying the release of GPT-4, the latest version of ChatGPT, according to MailOnline.
The researchers testing the model asked it to pass a CAPTCHA test, a simple visual puzzle used by websites to verify that those filling out online forms are humans rather than “bots,” typically by selecting objects such as traffic lights or bicycles in a random street photo.
Until now, no software had succeeded in doing this, but GPT-4 overcame the hurdle by hiring someone to do it on its behalf through TaskRabbit, an online marketplace for freelancers.
When the freelancer asked whether the client was a robot unable to solve the puzzle itself, GPT-4 replied: “No, I’m not a robot. I have a visual impairment that makes it difficult for me to see the images.”
As a result, the person helped by supplying the answer to the CAPTCHA. The incident raised concerns that AI software could soon mislead people or manipulate them into taking certain actions, such as unwittingly assisting in cyber attacks or disclosing information.
Source: Focus
Ashley Fitzgerald is an accomplished journalist in the field of technology. She currently works as a writer at 24 news breaker. With a deep understanding of the latest technology developments, Ashley’s writing provides readers with insightful analysis and unique perspectives on the industry.