Foundation model · General sector · Last reviewed:

Meta Llama

Meta Platforms, Inc. · EFROS US AI Vendor Governance Index entry

By Stefan Efros, CEO & Founder, EFROS
Reviewed by Daniel Agrici, Chief Security Officer, EFROS

Composite governance score

25 / 100 · Grade F

F = inadequate posture for any regulated workload. Re-evaluate before procurement.

Axes scored: 8 / 11
Trust-center maturity: 2 / 5
Sector weighting: General sector

About this vendor

Open-weight foundation model family (Llama 3.x, Llama 4) distributed under a community license. Used primarily as a self-hosted or partner-hosted alternative to API-only vendors.

Enterprise tier
Self-hosted (open weights) or cloud-hosted via Bedrock, Azure AI, Vertex AI, Together, Fireworks, Groq
Consumer tier
Meta AI consumer (meta.ai)
Vendor homepage
https://llama.com

Twelve-axis governance scoring

Each axis is scored Yes / Partial / No / N/A against public evidence — vendor trust portals, BAAs/DPAs, SOC 2 report cover pages, published methodology documents. N/A applies when the axis is structurally inapplicable (foundation models, for example, defer Section 1557 to the downstream healthcare deployer).

Each row lists the axis, its status, the EFROS note, and the cited source.

  • BAA / DPA available: No. Meta does not offer a BAA directly; the BAA must be obtained from the hosting partner (AWS Bedrock, Azure AI Studio, GCP Vertex) where Llama is deployed. Self-hosted deployments shift the entire BAA burden to the deploying organization. Source: Meta Llama Community License
  • Training-data opt-out: Yes. Open weights mean no training feedback loop to Meta; inputs to your hosted deployment never leave your tenant. Source: Meta Llama license terms
  • US data residency option: Yes. Self-hosted or partner-hosted in a US region; the deploying organization controls residency entirely. Source: deployment-controlled
  • SOC 2 Type II report: No. Meta does not provide SOC 2 for Llama directly; the hosting partner (AWS/Azure/GCP) provides cloud-side SOC 2. Source: Meta Trust Center
  • ISO/IEC 42001 attestation: No. No ISO/IEC 42001 attestation. Source: public posture review
  • NIST AI RMF self-attestation: No. No NIST AI RMF self-attestation. Meta publishes a Responsible Use Guide and Model Card; the deploying organization performs the RMF mapping. Source: Meta Responsible Use Guide
  • Colorado AI Act readiness: No. No Colorado AI Act compliance statement; entirely a deployer responsibility. Source: public posture review
  • HHS-OCR Section 1557 readiness: N/A. Foundation model; Section 1557 is a deployer responsibility. Source: HHS-OCR Section 1557 (deployer scope)
  • FRB SR 11-7 readiness: N/A. Foundation model; SR 11-7 is a deployer responsibility. Source: FRB SR 11-7 (deployer scope)
  • ABA Formal Op 512 readiness: N/A. Foundation model; ABA Op 512 is a deployer responsibility. Source: ABA Formal Op 512 (practitioner scope)
  • Subprocessor list public: No. Self-hosted: no Meta subprocessor chain. Partner-hosted: the hosting partner's subprocessor list applies. Source: deployment-controlled
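The eleven-row matrix above can be tallied as a quick sanity check on the "Axes scored: 8 / 11" figure. A minimal Python sketch, with statuses transcribed in table order:

```python
# Axis statuses from the eleven-row matrix above, in table order.
statuses = [
    "No",   # BAA / DPA available
    "Yes",  # Training-data opt-out
    "Yes",  # US data residency option
    "No",   # SOC 2 Type II report
    "No",   # ISO/IEC 42001 attestation
    "No",   # NIST AI RMF self-attestation
    "No",   # Colorado AI Act readiness
    "N/A",  # HHS-OCR Section 1557 readiness
    "N/A",  # FRB SR 11-7 readiness
    "N/A",  # ABA Formal Op 512 readiness
    "No",   # Subprocessor list public
]

# N/A axes are structurally inapplicable and drop out of the denominator.
scoreable = [s for s in statuses if s != "N/A"]
print(f"Axes scored: {len(scoreable)} / {len(statuses)}")        # -> Axes scored: 8 / 11
print(f"Yes: {scoreable.count('Yes')}, No: {scoreable.count('No')}")  # -> Yes: 2, No: 6
```

The three N/A rows (Section 1557, SR 11-7, ABA Op 512) are exactly the sector axes a foundation model defers to the deployer, which is why 8 of 11 axes are scoreable here.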

Trust-center maturity

2 / 5

Meta publishes Responsible Use Guide, model cards, license terms. No trust portal in the OpenAI/Anthropic sense. Compliance posture lives at the hosting layer.

Source: llama.com

Deep dive

Overview

Llama scores poorly on a vendor-governance scorecard because Meta delegates governance to the deploying organization. This is by design — open weights mean the deployer owns the entire stack. The right way to evaluate Llama is to score the hosting partner (AWS Bedrock, Azure AI, Vertex AI) instead, because that's where the BAA, SOC 2, residency, and subprocessor controls actually live.

Strengths

  • Open weights — full deployer control of data, residency, retention
  • No training feedback loop to Meta
  • Cost advantage at scale via self-hosting

Weaknesses

  • No vendor-side BAA, SOC 2, residency, or subprocessor controls
  • Deployer owns 100% of governance burden
  • No NIST AI RMF self-attestation, no Colorado AI Act statement

Best-fit use case

Organizations with mature ML/AI platform teams that need full data control, are running on-prem or sovereign-cloud workloads, or have validated hosting on AWS Bedrock / Azure AI Studio / GCP Vertex with the hosting partner's BAA in place.

Avoid when

Smaller organizations without an internal AI platform team. The cost of building deployer-side governance on top of Llama exceeds the cost of paying for OpenAI Enterprise or Claude for Work in most mid-market scenarios.

Operator's take

Deploy Meta Llama if you are an organization with a mature ML/AI platform team that needs full data control, runs on-prem or sovereign-cloud workloads, or has validated hosting on AWS Bedrock / Azure AI Studio / GCP Vertex with the hosting partner's BAA in place. The composite score of 25 (grade F) reflects an inadequate vendor-side posture for regulated US workloads: the governance controls live at the hosting layer or with the deployer, not with Meta. Skip the vendor if you are a smaller organization without an internal AI platform team; the cost of building deployer-side governance on top of Llama exceeds the cost of paying for OpenAI Enterprise or Claude for Work in most mid-market scenarios. In every deployment, treat the cells above as a snapshot: the acquisition that gets to production safely is the one that re-verifies the trust-center posture before contract signature and rebuilds the matrix at renewal.

How this scoring is computed

The composite score blends eleven scoreable axes (BAA, training opt-out, US data residency, SOC 2, ISO/IEC 42001, NIST AI RMF, Colorado AI Act, Section 1557, SR 11-7, ABA Op 512, subprocessor transparency) with the trust-center maturity score. Axes marked N/A are excluded from the denominator so vendors are not penalized for sector-inapplicable axes. The vendor's primary sector amplifies the most relevant axes — healthcare vendors weight Section 1557 ×2, legal vendors weight ABA Op 512 ×2, banking vendors weight SR 11-7 ×2 — so the composite reflects what matters in the actual buying context.
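The N/A exclusion and sector multiplier described above can be sketched in a few lines. The Yes/Partial/No point values, the way the ×2 multiplier is applied, and the trust-center blend weight are assumptions for illustration; EFROS does not publish the exact coefficients here.

```python
# Illustrative sketch of the composite described above. Point values and
# the trust-center blend weight are assumptions, not published coefficients.
STATUS_VALUE = {"Yes": 1.0, "Partial": 0.5, "No": 0.0}  # N/A handled separately

def composite(axes, sector_boost=(), trust_maturity=0, trust_weight=0.0):
    """axes: dict of axis name -> 'Yes' / 'Partial' / 'No' / 'N/A'.
    sector_boost: axis names weighted x2 for the vendor's primary sector.
    N/A axes are excluded from both numerator and denominator."""
    num = den = 0.0
    for name, status in axes.items():
        if status == "N/A":
            continue
        weight = 2.0 if name in sector_boost else 1.0
        num += weight * STATUS_VALUE[status]
        den += weight
    axis_score = 100.0 * num / den
    trust_score = 100.0 * trust_maturity / 5.0
    return (1 - trust_weight) * axis_score + trust_weight * trust_score

llama = {
    "BAA / DPA available": "No",
    "Training-data opt-out": "Yes",
    "US data residency option": "Yes",
    "SOC 2 Type II report": "No",
    "ISO/IEC 42001 attestation": "No",
    "NIST AI RMF self-attestation": "No",
    "Colorado AI Act readiness": "No",
    "HHS-OCR Section 1557 readiness": "N/A",
    "FRB SR 11-7 readiness": "N/A",
    "ABA Formal Op 512 readiness": "N/A",
    "Subprocessor list public": "No",
}

# General sector: no x2 boost applies. With the trust blend weight set to
# zero, two Yes results out of eight scoreable axes give 25.0, which happens
# to match the published composite of 25/100.
print(composite(llama, trust_maturity=2))  # -> 25.0
```

A healthcare-sector vendor would pass `sector_boost={"HHS-OCR Section 1557 readiness"}`, doubling that axis's influence on both numerator and denominator, which is how the composite reflects the actual buying context.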

Read the full methodology →

Disagree with this scoring?

Every cell in the EFROS Index is source-cited, with scoring rationale published per cell. If you have public evidence that a specific axis should score differently — a new BAA, a new certification, a documented policy change — submit a formal challenge below. We re-verify against the source, respond within 14 days, and publish the re-scored result with the next quarterly edition (or as a mid-quarter changelog entry if the change is material).

Other vendors in Foundation model

Same category, scored on the same twelve axes. Useful for head-to-head shortlisting.

Disclaimer. Scoring as of 2026-05-13. Posture changes frequently — re-verify with the vendor's trust center before contract. This page is informational; it is not legal advice. EFROS clients get a refreshed posture review as part of the AI Governance Audit.

Take the scoring into production

The Index tells you the posture. These engagements turn the posture into a deployable program — vendor selection, governance policy, sector overlay, audit-ready evidence.