The questions your security team, procurement, and legal department will ask before approving Claude. Answered directly โ no marketing language, no vague commitments. This is what Anthropic's security documentation actually says.
By default on the API and with Claude Enterprise, your prompts and completions are not used to train Claude. On free claude.ai, training may occur; users can opt out. This is the most common procurement question โ and the answer is no.
All data is encrypted in transit via TLS 1.2+ and at rest via AES-256. Anthropic manages encryption keys; customer-managed keys (CMK) are available for Enterprise customers under specific arrangements.
Anthropic holds SOC 2 Type II certification covering Security, Availability, and Confidentiality trust service criteria. Reports are available to enterprise customers under NDA via the Anthropic Trust Portal.
Anthropic offers a Business Associate Agreement (BAA) for qualifying healthcare customers on Claude Enterprise. This makes Claude deployable in HIPAA-regulated contexts โ clinical documentation, research, patient services.
Anthropic acts as a Data Processor for API and Enterprise customers. Standard Contractual Clauses (SCCs) are available for EU data transfers. The claude.ai user consent flows are designed to meet GDPR Article 13/14 requirements.
API conversations are retained for up to 30 days by default for abuse detection. Enterprise customers can negotiate reduced retention windows or immediate deletion. No persistent user memory without explicit opt-in.
Anthropic's compliance posture as documented in their Trust Portal and publicly available security documentation. Contact Anthropic directly for the latest certification reports.
| Certification / Framework | Status | Scope | Notes |
|---|---|---|---|
| SOC 2 Type II | โ CERTIFIED | Security, Availability, Confidentiality | Annual audit. Report available via Trust Portal under NDA. |
| ISO 27001 | โ CERTIFIED | Information Security Management | Covers Anthropic's corporate systems and cloud infrastructure. |
| HIPAA | โ BAA AVAILABLE | Claude Enterprise customers | BAA required. Covered entity must request separately during contracting. |
| GDPR | โ COMPLIANT | EU/EEA customers | DPA available. SCCs for international data transfers. Anthropic is Data Processor for API/Enterprise. |
| CCPA | โ COMPLIANT | California residents | Privacy policy addresses CCPA rights. DSR processes in place. |
| CSA STAR | โ PUBLISHED | Cloud security self-assessment | CAIQ available on Cloud Security Alliance registry. |
| FedRAMP | โ IN PROGRESS | US Federal Government | Not yet FedRAMP authorized as of March 2026. AWS GovCloud deployments available for some use cases. See FedRAMP guide. |
| PCI DSS | โ NOT CERTIFIED | Payment card data | Claude is not designed to handle PAN data. Architectural controls required if payment data could be in context. |
| IL4 / IL5 (US DoD) | โ NOT AVAILABLE | US Department of Defense | Controlled Unclassified Information environments not currently supported. Monitor Anthropic announcements. |
The most critical question in any enterprise procurement is: "What does Anthropic do with our data?" The answer differs by product tier.
Claude.ai Free and Pro: Conversation data may be used to improve Claude's models, unless you opt out in account settings under "Privacy." Anthropic uses human reviewers to evaluate model outputs for safety research, and this may include free-tier conversations. Opt-out is available but non-trivial to configure.
Claude API (pay-as-you-go and via SDK): By default, prompts and completions are NOT used for training. Anthropic retains conversation data for up to 30 days for safety monitoring and abuse detection, then deletes it. No human review of content except in cases of suspected policy violations.
Claude Enterprise: Zero training on customer data, contractually guaranteed. Custom retention windows available. Audit logging enabled. The enterprise DPA explicitly classifies Anthropic as a data processor, not a data controller โ meaning you remain the data controller for your organisation's data.
Claude Team: Similar to Enterprise but without the custom retention negotiation. By default, content is not used for training. Check your Team plan agreement for exact terms.
For regulated industries: If your organisation handles PHI, PII, financial data, or legally privileged information, only Claude Enterprise with an executed DPA (and HIPAA BAA where applicable) should be used. Do not route sensitive data through free or Pro tiers.
Claude is hosted on Amazon Web Services (AWS) with primary infrastructure in the US. Anthropic does not currently offer EU-resident data processing or in-region deployment for most customers. For EU organisations with data residency requirements, this is the primary compliance gap to address.
Claude is also available via AWS Bedrock and Google Cloud Vertex AI for organisations that need their AI inference to run within their existing cloud environment โ this changes the data controller/processor relationship. See our guides for AWS Bedrock and Vertex AI deployments.
All Claude API traffic is encrypted via TLS 1.2 or higher. API keys authenticate requests โ there is no IP allowlisting as standard (though enterprise networking configurations can restrict access). Anthropic employs WAF protections, DDoS mitigation, and rate limiting at the network layer.
Anthropic's internal access controls follow least-privilege principles. Only authorised personnel with business need can access production systems. Access is audited, and privileged access requires multi-factor authentication.
Anthropic conducts regular third-party penetration testing. Results are incorporated into their remediation programme. The SOC 2 Type II audit covers their vulnerability management process as part of the Security trust service criteria. Details of individual findings are not publicly disclosed but are covered in the SOC 2 report available to Enterprise customers.
One of the security questions unique to AI deployments is model-level attack surface โ specifically, prompt injection. Claude has Constitutional AI baked into its training, which provides a meaningful baseline of resistance to jailbreaking and instruction hijacking.
However, Constitutional AI is not a complete security control. Enterprise deployments should implement architectural defences including: input sanitisation, system prompt hardening, output validation, and rate limiting at the application layer. Anthropic provides guidance on these controls in their developer documentation but does not implement them on your behalf.
Security note for developers: Claude's system prompt can be configured to restrict behaviour, but it cannot prevent a determined adversary from extracting context via indirect prompt injection if your application passes untrusted content directly into Claude without sanitisation. Treat Claude's context window like an execution environment โ sanitise inputs. Our Security & Governance service includes a threat model review for AI applications.
Claude Enterprise includes comprehensive audit logging โ who sent what, when, to which workspace, and what the response was (metadata level). Enterprise admins can configure log export to their SIEM (Splunk, Microsoft Sentinel, etc.) for centralised security monitoring.
Enterprise also provides SSO (SAML 2.0), SCIM provisioning and deprovisioning, seat-level access management, and workspace isolation between departments. Granular permission controls allow you to limit which models users can access, which integrations are permitted, and which data sources Claude Cowork can connect to.
Not directly โ Claude is a hosted API service. However, Anthropic's model weights are available through Amazon Bedrock and Google Vertex AI, and in some cases can be deployed within a VPC. For true air-gap requirements, Anthropic offers on-premises deployment discussions for large enterprise customers, typically requiring a custom commercial agreement. Contact our team to discuss options.
As of March 2026, Anthropic manages encryption keys for all Claude deployments. Customer-managed key support is on the roadmap for enterprise customers but not yet generally available. If CMK is a hard requirement, deploying Claude via AWS Bedrock within your own AWS account provides more key management control.
Claude Enterprise supports workspace-based isolation โ different departments or product lines can have separate workspaces with distinct access controls, integrations, and audit logs. Conversations in one workspace are not accessible from another. For multi-tenant SaaS applications built on the Claude API, tenant isolation is the application developer's responsibility; Anthropic provides guidance but does not enforce it at the model layer.
Anthropic's incident response process includes customer notification obligations under GDPR (72 hours) and relevant US state laws. Enterprise DPAs specify notification timelines and obligations. Anthropic carries cyber liability insurance and engages external incident response specialists for significant security events. Historical security incidents are disclosed on Anthropic's Trust Portal.
The EU AI Act is in phased implementation through 2027. Anthropic is actively monitoring compliance requirements. Claude is classified as a general-purpose AI system under the Act. For enterprise deployments using Claude for high-risk AI applications (as defined by Annex III of the Act), additional compliance obligations fall on the deployer, not just the provider. Our Security & Governance service includes EU AI Act readiness assessment.
We've supported over 30 enterprise security reviews for Claude deployments. We know what the questionnaires ask and how to answer them. Our Security & Governance service includes Anthropic's documentation, architecture threat models, and direct support for procurement approvals.