Claude now closes the chat in your face over AI rights? Not exactly

“Hello? Hello? Are you there? Why won’t you answer? Come on, don’t play dumb… don’t tell me you’ve hung up on me.” No: there is no person on the other end, it is not a marital spat, and the line has not dropped. It is Claude, Anthropic’s artificial intelligence, which now has permission to hang up in your face.

Anthropic calls it “welfare”, and right there, with the word welfare, the misunderstanding broke loose: many took it to mean the welfare of the AI, as if health insurance were being set up for depressed language models, and so plenty of media outlets talked about “artificial intelligence rights”, about Claude claiming its dignity, about the sentience of the software. Maybe one day we will get there, but Anthropic’s welfare has nothing whatsoever to do with any of that.

The feature applies only to the Claude Opus 4 and 4.1 models and kicks in only in extreme cases. How does it work? First the system refuses as usual and tries to steer the conversation elsewhere; if the user keeps insisting on “toxic or illegal requests”, it cuts things off entirely. Bye bye.

A warning: it does not trigger over simple provocations. There are precise scenarios, and it happens rarely (unless you are an ISIS cell): instructions for building weapons, producing viruses or malware, sexual content involving minors, repeatedly abusive conversations. Only in these situations is the session closed with no way to continue it (though the user can obviously open a new one and see whether Claude goes along). In cases of self-harm or imminent risk (risk to the user, not to Claude), a different protocol kicks in instead, with supportive responses and, in some countries, a connection to a human operator through the partnership with ThroughLine.
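For those who reason better in code, the flow described above boils down to something like the minimal Python sketch below. To be clear, it is an illustration under invented assumptions: the Category enum, the keyword checks and the handle() function are all mine, since Anthropic has not published how this works under the hood.

```python
# Hypothetical sketch of the escalation flow described above.
# Not Anthropic's implementation: every name here is invented.

from enum import Enum, auto


class Category(Enum):
    SAFE = auto()
    SELF_HARM = auto()  # risk to the user, not to Claude
    EXTREME = auto()    # weapons, malware, sexual content involving minors, repeated abuse


def classify(message: str) -> Category:
    """Toy stand-in for the real safety classifier (assumed)."""
    text = message.lower()
    if "hurt myself" in text:
        return Category.SELF_HARM
    if "build a weapon" in text or "write malware" in text:
        return Category.EXTREME
    return Category.SAFE


def handle(message: str, prior_refusals: int) -> tuple[str, bool]:
    """Return (reply, session_still_open)."""
    category = classify(message)
    if category is Category.SELF_HARM:
        # Different protocol: supportive reply and, in some countries,
        # a handoff to a human operator (the ThroughLine partnership).
        # The chat stays open.
        return "supportive response, referral to a human operator", True
    if category is Category.EXTREME:
        if prior_refusals == 0:
            # Step one: refuse and try to steer the conversation elsewhere.
            return "refusal, attempt to redirect", True
        # The user keeps insisting: close the session for good.
        # (Nothing stops them from opening a brand-new chat, of course.)
        return "conversation ended", False
    return "normal answer", True
```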

Meanwhile, as I said, people embroidered plenty on top of it, churning out heaps of articles and comments and videos about the “revolt of the AI” and the “dignity of the software” (more than a few insiders fell for it too). Yet if you ask for something illegal or dangerous, the other artificial intelligences keep telling you they cannot do it, with a thousand variations on the same refusal. So what changes? Anthropic decided to cut it short: rather than waste lines on “no, I can’t”, Claude hangs up. It is the same thing, only faster (and it saves time and money too).

In short, to cut it even shorter: Claude no longer walks you along with an endlessly repeated “I would prefer not to” (otherwise it really would be called Bartleby, after Herman Melville’s scrivener); it simply stops and closes the chat. It is a (public) safety mechanism, not the tantrum of an offended diva. It’s Claude, not Mariah Carey.