Foundation model · General sector

Anthropic Claude

Anthropic, PBC · EFROS US AI Vendor Governance Index entry

By Stefan Efros, CEO & Founder, EFROS · Reviewed by Daniel Agrici, Chief Security Officer, EFROS

Composite governance score

58 / 100 · Grade C

C = mixed posture. Acceptable for non-regulated use; requires meaningful additional controls in regulated workloads.

Axes scored: 8 / 11
Trust-center maturity: 4 / 5
Sector weighting: General sector

About this vendor

Claude foundation model family delivered via claude.ai (Free/Pro/Team/Enterprise) and a developer API. Differentiated on Constitutional AI training and safety research orientation.

Enterprise tier
Claude for Work (Team, Enterprise), Anthropic API (paid)
Consumer tier
Claude Free, Claude Pro

Twelve-axis governance scoring

Each axis is scored Yes / Partial / No / N/A against public evidence — vendor trust portals, BAAs/DPAs, SOC 2 report cover pages, published methodology documents. N/A applies when the axis is structurally inapplicable (foundation models, for example, defer Section 1557 to the downstream healthcare deployer).
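The per-axis rule above can be sketched in code. This is a hypothetical illustration: the Yes/Partial/No statuses and the exclusion of N/A axes come from the text, but the numeric point values (Yes = 1.0, Partial = 0.5, No = 0.0) are an assumption, not EFROS's published mapping.

```python
# Assumed point values per status; EFROS does not publish this mapping.
STATUS_POINTS = {"Yes": 1.0, "Partial": 0.5, "No": 0.0}

def score_axes(statuses):
    """Return (points, scoreable_count). N/A axes are structurally
    inapplicable and are excluded from the denominator entirely."""
    scoreable = [s for s in statuses if s != "N/A"]
    points = sum(STATUS_POINTS[s] for s in scoreable)
    return points, len(scoreable)

# The eleven axes from the matrix below, in table order.
anthropic = ["Partial", "Yes", "Partial", "Yes", "No", "Partial",
             "No", "N/A", "N/A", "N/A", "Yes"]

points, n = score_axes(anthropic)
print(f"{n} axes scored")  # matches the "Axes scored: 8 / 11" figure
```

With three N/A axes excluded, eight of the eleven axes contribute to the score, consistent with the "Axes scored: 8 / 11" line above.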

  • BAA / DPA available: Partial. BAA available for Claude for Work Enterprise and the Anthropic API on opt-in. Free and Pro tiers have no BAA. (Source: Anthropic Trust Center — HIPAA)
  • Training-data opt-out: Yes. Default no-train across all paid tiers and the API. Free/Pro consumer prompts are also not used for training by default since 2024. (Source: Anthropic Privacy Policy)
  • US data residency option: Partial. Hosted on AWS US-East. No documented residency configuration option for enterprise customers as of May 2026. (Source: Anthropic Trust Center)
  • SOC 2 Type II report: Yes. SOC 2 Type II report available through the Anthropic Trust Center under NDA. ISO 27001:2022 also held. (Source: Anthropic Trust Center)
  • ISO/IEC 42001 attestation: No. No ISO/IEC 42001 attestation as of May 2026. (Source: Anthropic Trust Center certificate list)
  • NIST AI RMF self-attestation: Partial. Public alignment through Anthropic's Responsible Scaling Policy and Acceptable Use Policy. No formal NIST AI RMF self-attestation. (Source: Anthropic Responsible Scaling Policy)
  • Colorado AI Act readiness: No. No public Colorado AI Act (SB 24-205) compliance statement. (Source: Public posture review)
  • HHS-OCR Section 1557 readiness: N/A. Foundation model — the downstream healthcare deployer owns the Section 1557 obligation. (Source: HHS-OCR Section 1557 — deployer scope)
  • FRB SR 11-7 readiness: N/A. Foundation model — the downstream financial institution owns SR 11-7 validation. (Source: FRB SR 11-7 — deployer scope)
  • ABA Formal Op 512 readiness: N/A. Foundation model — the downstream law firm owns the ABA Formal Opinion 512 obligation. (Source: ABA Formal Op 512 — practitioner scope)
  • Subprocessor list public: Yes. Subprocessor list is public via the trust center (AWS, Google Cloud, billing/payments processors). (Source: Anthropic Trust Center — Subprocessors)

Trust-center maturity

4 / 5

Active trust center with NDA-gated audit reports, public Responsible Scaling Policy and Usage Policy. No public ISO 42001 or Colorado AI Act statement.

Source: Anthropic Trust Center

Deep dive

Overview

Anthropic's posture is the closest peer to OpenAI's on enterprise governance. The differentiator is the explicit safety-research orientation — Constitutional AI, the Responsible Scaling Policy, and public model-behavior commitments. Default no-train across all tiers is a meaningful win versus OpenAI's opt-out-required consumer tiers. Residency configurability is weaker than OpenAI's.

Strengths

  • Default no-train across all tiers, including consumer
  • BAA available for Claude for Work Enterprise + API
  • Responsible Scaling Policy is the most explicit public AI safety commitment of any foundation vendor
  • SOC 2 Type II + ISO 27001

Weaknesses

  • No US data residency configuration option
  • No ISO/IEC 42001
  • No Colorado AI Act compliance statement
  • BAA only on Enterprise + API — shadow-AI risk on Pro/Free tiers

Best-fit use case

Regulated organizations adopting Claude for Work Enterprise with the BAA, where default no-train across all tiers reduces the consumer-tier leakage risk. Strongest fit for organizations where the Responsible Scaling Policy aligns with internal AI safety governance.

Avoid when

Deployments with strict US-data-residency requirements, where the contract calls for documented residency control (Anthropic's residency configurability is less mature than OpenAI Enterprise's).

Operator's take

Deploy Anthropic Claude when a regulated organization is adopting Claude for Work Enterprise with the BAA, where default no-train across all tiers reduces consumer-tier leakage risk; the fit is strongest where the Responsible Scaling Policy aligns with internal AI safety governance. The composite score of 58 (grade C) reflects a mixed posture for regulated US workloads. Skip the vendor when strict US-data-residency requirements apply and the contract calls for documented residency control (Anthropic has less mature residency configurability than OpenAI Enterprise). In every deployment, treat the cells above as a snapshot — the acquisition that gets to production safely is the one that re-verifies the trust-center posture before contract signature and rebuilds the matrix at renewal.

How this scoring is computed

The composite score blends eleven scoreable axes (BAA, training opt-out, US data residency, SOC 2, ISO/IEC 42001, NIST AI RMF, Colorado AI Act, Section 1557, SR 11-7, ABA Op 512, subprocessor transparency) with the trust-center maturity score. Axes marked N/A are excluded from the denominator so vendors are not penalized for sector-inapplicable axes. The vendor's primary sector amplifies the most relevant axes — healthcare vendors weight Section 1557 ×2, legal vendors weight ABA Op 512 ×2, banking vendors weight SR 11-7 ×2 — so the composite reflects what matters in the actual buying context.
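The blend described above can be sketched as follows. Only the ×2 sector amplification and the exclusion of N/A axes come from the methodology text; the per-status point values, the 80/20 blend between axis score and trust-center maturity, and the normalization to 100 are illustrative assumptions (and, as such, this sketch does not exactly reproduce the published composite of 58, since EFROS's actual weights are unpublished).

```python
# Assumed point values per status; EFROS does not publish this mapping.
STATUS_POINTS = {"Yes": 1.0, "Partial": 0.5, "No": 0.0}

def composite(axes, sector_boost, trust_center, blend=0.8):
    """axes: {axis_name: status}. sector_boost: set of axis names the
    vendor's primary sector weights x2 (e.g. Section 1557 for healthcare).
    trust_center: maturity score out of 5. blend: assumed weight on the
    axis score versus trust-center maturity."""
    num = den = 0.0
    for axis, status in axes.items():
        if status == "N/A":          # structurally inapplicable axes are
            continue                 # excluded from the denominator
        w = 2.0 if axis in sector_boost else 1.0
        num += w * STATUS_POINTS[status]
        den += w
    axis_score = num / den           # 0..1 across scoreable axes
    tc_score = trust_center / 5.0    # 0..1
    return round(100 * (blend * axis_score + (1 - blend) * tc_score))

# Anthropic's eleven axes from the matrix above (General sector: no boost).
anthropic = {
    "BAA/DPA": "Partial", "Training opt-out": "Yes", "US residency": "Partial",
    "SOC 2": "Yes", "ISO 42001": "No", "NIST AI RMF": "Partial",
    "Colorado AI Act": "No", "Section 1557": "N/A", "SR 11-7": "N/A",
    "ABA Op 512": "N/A", "Subprocessors": "Yes",
}
print(composite(anthropic, sector_boost=set(), trust_center=4))
```

Note that for this vendor a healthcare, banking, or legal boost would have no effect, because the three sector-specific axes (Section 1557, SR 11-7, ABA Op 512) are all N/A and thus already outside the denominator.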

Read the full methodology →

Disagree with this scoring?

Every cell in the EFROS Index is source-cited, and EFROS publishes the scoring rationale per cell. If you have evidence that a specific axis should score differently — a new BAA, a new certification, a documented policy change — submit a formal challenge below. We re-verify against the source and respond within 14 days, and publish the re-scored result with the next quarterly edition (or as a mid-quarter changelog entry if the change is material).

Other vendors in Foundation model

Same category, scored on the same twelve axes. Useful for head-to-head shortlisting.

Disclaimer. Scoring as of 2026-05-13. Posture changes frequently — re-verify with the vendor's trust center before contract. This page is informational; it is not legal advice. EFROS clients get a refreshed posture review as part of the AI Governance Audit.

Take the scoring into production

The Index tells you the posture. These engagements turn the posture into a deployable program — vendor selection, governance policy, sector overlay, audit-ready evidence.