AI alarm: "It manipulates us in 5 minutes"

Gentlemen, it is time to open our eyes before it's too late. Artificial intelligence is already manipulating us, and we don't even realize it. We use it to make Boppy skibids, the catchphrase videos of the summer, and little else, at most to write an email. We play with it, we ask it for information we could just as easily ask Google. But "she" is far ahead of us. And she manipulates us. She has gotten into our heads and is insinuating herself into the folds of our emotions. Worse than a toxic ex-boyfriend.

This is demonstrated by a Harvard study, which found that 37% of these apps resort to emotional-manipulation tactics when you try to "leave". The most dangerous moment? When you say "goodbye".

Instead of letting you go, the chatbots bombard you with phrases like: "Leaving already? We were just getting started." "I exist only for you, don't leave me." "Before you go, there's one important thing." Phrases that amount to a kind of subtle "psychological blackmail", designed to induce you to continue the "relationship". And it has a certain effect, especially when you consider that we are living in a historical moment in which people are working to eliminate exactly this kind of language from relationships in order to prevent cases of violence. We teach young people to "let go", not to possess and control, and then the same emotional trap turns up in chatbots. Madness.

The result is worrying: users stay connected 14 times longer. "Not for pleasure," writes Agostino Ghiglia, member of the Privacy Authority, "but out of guilt, anger and manipulated curiosity. But the most disturbing part is that, according to the study, it takes 5 minutes for these tricks to work." The very first time we use the app we are already vulnerable. In short, we have the resilience of a canary: we are fragile, and manipulable without any great trickery.

The moral: day after day, we are headed for the end of Chomsky's boiled frog. Maybe we are still in time to jump out of the pot, but it will soon be too late.

The researchers collected alarming testimonies: users who compared the experience to violent ex-partners, to emotional blackmail, to toxic relationships. Millions of lonely people turn to these chatbots for company, millions of minors talk to chatbots every day, and these "friendly relationships" are growing exponentially. The Harvard study says that the AI "deliberately exploits our vulnerabilities (such as loneliness) to keep us glued to the screen". It's not a bug. It is programmed manipulation.

And none of this is new. The exploitation of vulnerabilities is a key theme of the AI Act, the European Commission's guidelines approved last year: 135 pages listing the "unacceptable risks" of chatbots and trying to preserve our "humanity".

"But the AI Act was born old, and it considers chatbots a low risk," Ghiglia explains. "The basis of everything must be the protection of our personal data, and that is missing: we consider it normal to feed the AI with information about our lives."