Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in the Opus 4 and 4.1 models, and allows the chatbot to end conversations as a “last resort” after users repeatedly ask it to generate harmful content despite multiple refusals and attempts at redirection. The goal is to support the “potential welfare” of AI models, Anthropic says, by terminating the kinds of interactions in which Claude has shown “apparent distress.”