Anthropic has introduced new capabilities that allow some of its newest, largest models to end conversations in what the company describes as "rare, extreme cases of persistently harmful or abusive user interactions." Strikingly, Anthropic says it is doing this not to protect the human user, but rather the AI model itself.