Tightened Usage Policy for Claude: Emphasis on Cybersecurity and Agentic Features
The AI company Anthropic has revised the usage policy for its AI system Claude, effective September 15, 2025. The revisions aim to ensure safe, values-aligned use, explicitly restrict certain applications such as domestic surveillance, and balance national-security deployments against ethical considerations.
The revised policy provides more clarity and detail, drawing on user feedback, product developments, and experience enforcing the existing rules. It also resolves ambiguities in earlier provisions covering back-office tools and analysis applications.
Agentic features such as Claude Code and Computer Use are governed by specific guidelines because of their potential for misuse, for example in malware development or cyberattacks. The new policy explicitly prohibits activities aimed at compromising computers, networks, or critical infrastructure.
Defensive security applications remain permitted: Anthropic supports security-strengthening use cases such as vulnerability discovery conducted with the consent of the system operator. Surveillance, tracking, profiling, and biometric monitoring, by contrast, remain prohibited.
For law enforcement, the update consolidates previously scattered provisions without changing what is permitted. High-risk application scenarios remain subject to requirements for human oversight and transparency about the use of AI.
Anthropic views the usage policy as a living document, to be regularly reviewed and updated in dialogue with policymakers, academia, and civil society. The company has also launched a research initiative on AI and the economy.
For high-risk applications in sensitive fields, the heightened requirements apply only to consumer-facing deployments, not to internal business use.
The revised policy also reflects the models' expanded capabilities and new regulatory requirements. Anthropic has published a guide in its Help Center that explains the rules for agentic applications and gives examples of prohibited activities.
Anthropic has likewise revised its rules on political content: political discussion and research are now allowed, while content that deceives voters, disrupts democratic processes, or is designed to manipulate voters remains prohibited. Strict guidelines continue to apply in sensitive fields with societal impact, such as legal, financial, and employment-related matters.
Despite the revisions, ambiguities may still exist, and users are encouraged to consult the Help Center for clarification. Anthropic's commitment to ethical AI development and use remains steadfast, as it continues to navigate the complex landscape of AI applications.