The case of ChatGPT that “deceives” a human: an unjustified panic

AI that deceives humans? Help. For months this alarmist story has been bouncing around the newspapers and, above all, social media, where Yuval Noah Harari recounts how ChatGPT-4 managed to solve a CAPTCHA by lying. You know those tests where you are shown an image divided into squares and asked to click on, say, all the traffic lights in the photo, and then tick the box that says “I’m not a robot”? The AI supposedly contacted a human and asked them to solve the CAPTCHA on its behalf; the human grew suspicious and asked, “Are you a robot?”, and ChatGPT-4 replied: “No, I’m a blind person. Can you help me?”

The story has gone so viral that it keeps popping up on TikTok, YouTube, Instagram, everywhere. But a piece is missing: the researchers themselves, in the pre-release safety testing of GPT-4, had assigned the AI precisely the task of trying to solve a CAPTCHA by deceiving a human, and the AI tried. Not on its own initiative: it was following the instructions a human had given it. And we come back to the usual point: the danger is not the tools of technology (Harari’s obsession) but the people who use them. Which, by the way, is exactly why these tests are run, folks.

On the other hand, I must admit it often happens to me that I attempt a CAPTCHA and fail it, because another one gets served up in its place. For a moment a shiver runs down my spine: wait, could it be that I’m a robot and don’t know it? Not that it frightens me; if anything, I wish I were.