
Tightened Usage Policies for Claude: Emphasis on Cybersecurity and Agentic Features

The AI organisation Anthropic has revised the usage policy for its AI system Claude, effective September 15, 2025. The revisions aim to ensure safe, principle-aligned behaviour and to clearly restrict certain uses, such as domestic surveillance, while balancing national-security deployments against ethical considerations.

The revised policies offer more clarity and detail, drawing on user feedback, product developments, and experience enforcing the existing rules. They also resolve ambiguities in earlier provisions on back-office tools and analysis applications.

Agent-based features such as Claude Code and Computer Use are subject to specific guidelines because of the risk of misuse, for example in developing malware or carrying out cyberattacks. The new policies explicitly prohibit activities aimed at compromising computers, networks, or critical infrastructure.

Certain defensive applications are still allowed: the organisation supports security-promoting use cases such as vulnerability discovery conducted with the system operator's consent. Surveillance, tracking, profiling, and biometric monitoring remain prohibited.

In high-risk application scenarios, human oversight and transparency about the use of AI continue to be required.

Anthropic views the usage policies as an evolving set of rules to be regularly reviewed and updated in collaboration with policymakers, academia, and civil society. The organisation has also launched a research initiative on AI and the economy.

The heightened requirements for high-risk applications apply only to offerings with direct consumer contact, not to internal corporate use.

The revised policies accommodate the model's expanded capabilities and new regulatory requirements. Anthropic has published a guide in the Help Center explaining the rules for agent-based applications and providing examples of prohibited activities.

Anthropic has also revised its policies on political content, permitting political discussion and research while continuing to prohibit content that deceives or disrupts democratic processes or is aimed at manipulating voters. The organisation continues to uphold strict guidelines for sensitive fields with societal impact, such as legal, financial, and employment-related areas.

Despite the revisions, ambiguities may still exist, and users are encouraged to consult the Help Center for clarification. Anthropic's commitment to ethical AI development and use remains steadfast, as it continues to navigate the complex landscape of AI applications.
