Claude 4.1 Can End Chats Alone
Anthropic has introduced a new feature in its advanced AI models, Claude Opus 4 and 4.1, that allows them to unilaterally end a conversation in rare cases of harmful or abusive interaction.
The feature acts as a defensive mechanism for the AI and is triggered only as a last resort, after multiple failed attempts to redirect the chat.
Conversations may end if users request highly harmful content such as child exploitation or large-scale violence.
After a chat is closed, users can start a new conversation or edit their previous messages to continue in a safer direction.
This experiment reflects Anthropic's focus on AI well-being and ethical design. By letting the AI exit abusive interactions, the company aims to reduce risks while keeping user safety its top priority.