Resource · HIPAA MSSP for Clinics Using AI
HIPAA-aligned MSSP for small clinics using AI scribes, coding tools, and GenAI.
US clinics of 1 to 50 providers are adopting AI scribes, coding assist, productivity copilots, and GenAI faster than their generic MSP can keep up. The HIPAA exposure that creates — vendor BAA gaps, audit-log shortfalls, consumer-AI shadow use, Section 1557 algorithmic non-discrimination obligations — is not what a default-tier MSP package covers. This page is the operator playbook: the AI vendor BAA matrix, HHS 405(d) HICP mapping, HHS-OCR Section 1557 overlay, and the 90-day clinic AI governance runbook EFROS runs at real clinic engagements.
Generic MSPs route alerts. The MSSP that signs the BAA, runs the 405(d) program, and produces the Section 1557 oversight artifact is the difference between passing an OCR investigation and writing the breach-notification letter. EFROS operates the clinic AI governance stack — inventory, vendor BAA chain, audit logging, human oversight, quarterly compliance review, board-grade reporting — under one accountable SLA.
What this is for
Why a generic MSSP doesn't cover the AI exposure your clinic now carries
A small clinic — primary care, behavioral health, ortho, derm, GI, women's health, urgent care, addiction medicine, fertility — adopting AI scribes, coding assist, and GenAI is materially different from a small clinic running a default EHR plus a basic firewall. The AI vendor stack introduces a chain of business associates and subcontractors that did not exist five years ago. The HIPAA Security Rule, HHS-OCR Section 1557 Final Rule (May 2024), HHS 405(d) HICP, and the CMS Conditions of Participation all apply to that stack — but none were written for a 6-provider primary care practice trying to use an AI scribe.
A generic MSSP package covers endpoint detection, email security, backup, and identity — a necessary baseline. It does not produce a BAA matrix that names the AI inference subcontractor, a Section 1557 oversight artifact per high-risk AI tool, or a prompt-and-output audit log at clinician-attributable granularity. Those are the artifacts OCR asks for first when an investigation opens — and they cannot be manufactured after the breach.
EFROS operates the stack at real clinic engagements: inventory the AI tools, chase the BAAs, configure the identity-layer block on consumer AI, deploy the DLP rule that catches PHI in prompts, document the human-oversight checkpoint per tool, run the quarterly vendor review, produce the board-grade governance report. That is the artifact set that makes an OCR investigation short.
The audience is the practice manager or owner-clinician at a clinic of 1 to 50 providers. The framing is US federal and state law only — HIPAA, HHS-OCR Section 1557, CMS, HHS 405(d) HICP, state medical board AI guidance (California, Texas, Florida lead), and the NIST AI Risk Management Framework with its Generative AI Profile (NIST AI 600-1). Internal links route to the EFROS AI Governance service, the free AI Risk Score tool, the Healthcare industry page, and the companion Colorado AI Act for healthcare deployers resource. Defined terms are in the glossary.
Where the HIPAA risk shows up with AI
Ten failure modes a generic MSSP package does not catch
Each is a real exposure surface seen at clinic engagements in 2024-2026. None are theoretical. Each maps to a specific control on the 90-day runbook below.
AI scribe vendor BAA that excludes the inference pipeline
Many scribe vendors sign a BAA but carve out the model-inference subprocessor (a foundation-model API or hyperscaler GPU pool). If the BAA does not name that subprocessor or warrant subcontractor flow-down, PHI in prompts is processed by an unaccountable party. Verify the BAA names the inference provider in writing.
ChatGPT / Claude consumer accounts for differential diagnosis
Consumer-tier ChatGPT and Claude are not BAA-eligible. Pasting in case details that merely look de-identified is still a HIPAA disclosure if any of the 18 HIPAA identifiers — or a reasonable re-identification path — remains. Block consumer-tier AI at the identity layer and provide a sanctioned Enterprise alternative.
PHI in vendor training data — opt-out enforcement gaps
Some AI BAAs allow training on customer data unless the customer opts out, and the toggle lives in an admin console most clinics never configure. Default to opt out, document the opt-out artifact, and re-verify quarterly — vendor defaults shift with platform updates.
Notion AI and M365 Copilot pulling from notes that contain PHI
M365 Copilot is BAA-covered but inherits SharePoint, OneDrive, and Graph permissions. An over-shared OneDrive folder containing a chart note will surface in unrelated prompts. Notion AI has no HIPAA BAA — block it entirely for any workflow that could touch PHI.
Audit trail gaps — 45 CFR § 164.312(b)
HIPAA Security Rule § 164.312(b) requires mechanisms to record and examine activity in systems containing ePHI. Most AI vendors do not expose prompt-and-output audit logs at clinician-attributable granularity by default. Without that log you cannot prove which clinician issued which prompt against which record — OCR treats the gap as a finding.
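For clinics closing this gap themselves, the logging requirement can be met with a thin wrapper around every AI call. A minimal Python sketch (function and field names are illustrative, not any vendor's API; whether raw prompt text or only hashes are retained is a clinic policy decision):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(log_store, clinician_id, patient_mrn, tool, prompt, output):
    """Append one clinician-attributable AI interaction record.

    Stores a hash of the patient MRN (not the MRN itself) so the log can be
    reviewed and exported without re-disclosing the identifier, and hashes of
    prompt/output so integrity can be checked against vendor-side logs.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "clinician_id": clinician_id,  # who issued the prompt
        "patient_ref": hashlib.sha256(patient_mrn.encode()).hexdigest()[:16],
        "tool": tool,  # which AI system processed the record
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    log_store.append(json.dumps(record))
    return record

audit_log = []
rec = log_ai_interaction(audit_log, "dr.lee", "MRN-004217", "scribe-vendor",
                         "Summarize today's visit note.", "SOAP note draft ...")
```

The point is attributability: given a date range, the log answers which clinician issued which prompt against which record, without the log itself becoming a second copy of the PHI.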
Breach notification — 60-day rule plus immediate OCR notice for unknown scope
HIPAA Breach Notification Rule (45 CFR §§ 164.400-414) requires notice to affected individuals without unreasonable delay and no later than 60 days after discovery. Breaches affecting 500+ individuals also require contemporaneous notice to HHS-OCR and notice to prominent media serving the affected state. If a clinician pasted PHI into a consumer AI and scope is unknown, OCR expects notification to proceed while the investigation continues. The clock starts at discovery, not confirmation.
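The timing rules are mechanical enough to encode into the IR runbook. A sketch (helper names are ours, not regulatory terms; treating unknown scope conservatively follows the guidance above):

```python
from datetime import date, timedelta

# 45 CFR § 164.404: notice without unreasonable delay, no later than 60 days
INDIVIDUAL_NOTICE_DAYS = 60

def notification_deadline(discovery: date) -> date:
    """Latest permissible individual-notice date; the clock runs from discovery."""
    return discovery + timedelta(days=INDIVIDUAL_NOTICE_DAYS)

def requires_immediate_hhs_notice(affected_count: int, scope_unknown: bool = False) -> bool:
    """500+ affected individuals, or an unknown scope treated conservatively,
    triggers the large-breach HHS-OCR notice path rather than the annual log."""
    return scope_unknown or affected_count >= 500

# Discovery on March 3 gives a May 2 individual-notice deadline
deadline = notification_deadline(date(2025, 3, 3))
```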
Subcontractor BAA chain — the AI inference sub-processor problem
HIPAA (45 CFR § 164.308(b)) requires business associates to obtain satisfactory assurances from subcontractors. For an AI vendor using a foundation-model API, that obligation flows through. If the vendor cannot produce a subcontractor BAA chain on demand, treat the BAA as incomplete.
Section 1557 algorithmic non-discrimination — HHS-OCR May 2024 Final Rule
Section 1557 Final Rule (effective July 2024) prohibits discrimination in patient-care decision support tools, including AI. Clinics receiving federal financial assistance — Medicare, Medicaid, CHIP — must identify clinical decision support that uses protected-class variables or proxies and document mitigation. The deployer obligation cannot be delegated to the AI vendor.
Section 1557 + Section 504 + Title VI bias monitoring
Section 1557 incorporates Title VI (race, color, national origin), Title IX (sex), the Age Discrimination Act (age), and Section 504 (disability). For clinical AI decision support that means documented bias testing across each protected class with remediation triggers — not a one-time procurement check.
Vendor change of ownership and training-data carve-outs
AI vendor terms routinely include change-of-control clauses that transfer data — sometimes fine-tunes derived from clinic PHI — to an acquirer. Negotiate explicit data-portability and deletion rights on termination and change of control, plus 30-day notice of any sub-processor change.
AI vendor BAA matrix
Which AI vendors a clinic can actually use under HIPAA
Curated matrix of AI vendors most commonly deployed at small US clinics, with BAA availability, primary use case, and the operational caveat that determines whether the vendor is safe for clinical workflows. BAA terms change — verify with the vendor before procurement.
DAX Copilot (Nuance / Microsoft)
Clinical AI scribe — ambient note generation
BAA: Yes — Microsoft Online Services BAA
Standard for ambulatory clinics on Microsoft 365. Verify M365 tenant BAA executed and DAX Copilot listed as in-scope. Clinician review-and-sign is the human oversight checkpoint.
Abridge
Clinical AI scribe — multi-specialty SOAP + billing assist
BAA: Yes — default tier
HIPAA BAA out of the box. Confirm coverage of the model-hosting subcontractor (currently a hyperscaler GPU pool). Common in primary care and specialty without M365 dependency.
Suki AI
Voice-assistant scribe + ICD-10/CPT coding assist
BAA: Yes — default tier
BAA standard. Coding-assist output materially informs billing — keep a documented coder or clinician review checkpoint before submission to avoid OIG False Claims Act exposure from AI-driven upcoding.
Heidi Health
Multi-specialty AI scribe with template library
BAA: Yes — default tier (US tenant)
BAA available on US-tenant offering. Verify contract specifies US-tenant routing for inference; mixed-region inference complicates 45 CFR § 164.314 subcontractor flow-down.
Ambience Healthcare
AI scribe + clinician coaching + utilization review
BAA: Yes — default tier
BAA standard with subcontractor flow-down. Used in primary care, urgent care, and behavioral health. Disable training-on-customer-data in the admin console and document the opt-out artifact.
Microsoft 365 Copilot
Email drafting, document summarization, meeting notes
BAA: Yes — under M365 E3/E5 BAA
Inherits SharePoint, OneDrive, Outlook, and Graph permissions. Before rollout: permissions audit, Restricted SharePoint Search, Copilot DLP, sensitivity labels on PHI folders. Copilot respects labels but does not create them.
ChatGPT Enterprise (OpenAI)
General GenAI — research, drafting, summarization
BAA: Yes — Enterprise tier only with executed BAA
Enterprise with Zero Data Retention is BAA-eligible. ChatGPT Team is conditionally BAA-eligible — verify execution. Block ChatGPT consumer/Plus at the identity layer or clinicians will reach for the lower-friction tool.
ChatGPT consumer / Plus (OpenAI)
Personal accounts brought to work by clinicians
BAA: No — not BAA-eligible
Treat as third-party disclosure if PHI is pasted in. Block at the identity layer (Entra ID Conditional Access, EDR application control, DNS filtering). Most common 2025 clinic breach pattern: clinician pastes a chart for a quick differential.
Claude Enterprise (Anthropic)
Large-context document review, policy drafting, analysis
BAA: Yes — Enterprise tier with executed BAA
BAA-eligible with Zero Data Retention. Claude on AWS Bedrock and GCP Vertex is an alternative path under those hyperscalers' BAAs.
Claude consumer (Anthropic)
Personal accounts
BAA: No — not BAA-eligible
Block at the identity layer for clinical staff. Same breach pattern as consumer ChatGPT. Provide a sanctioned Enterprise alternative so clinicians have a compliant path.
Notion AI
Note-taking and knowledge management AI
BAA: No — not BAA-eligible
Block for any PHI-touching workflow. If the clinic uses Notion for non-clinical operations (HR, vendor management), enforce that no PHI enters Notion via DLP and clinician training.
Otter.ai
Meeting transcription — telehealth, case conferences
BAA: Only on HIPAA Compliance Plan
Free/Pro transcripts flow into Otter's training pipeline — block for clinical use. HIPAA tier required. Verify state two-party consent rules (California, Florida, Washington) before recording any patient interaction.
BAA availability and subcontractor chains change with product releases, acquisitions, and platform pivots — verify current contract terms with each vendor before procurement. EFROS maintains an internal live vendor matrix updated quarterly as part of the AI Governance retainer.
HHS 405(d) HICP practical mapping
How HHS 405(d) HICP small-practice volume maps to a clinic with AI tools
The HHS 405(d) Health Industry Cybersecurity Practices small-practice volume defines ten practice domains. Each maps to AI-tool considerations for a small clinic — and each is what HHS-OCR will look for as evidence of a documented cybersecurity program if your clinic becomes the subject of an investigation.
Email protection
DMARC at p=reject, anti-impersonation for clinic leadership and referring providers, attachment sandboxing. AI scribe vendors and EHRs on enforced-TLS connectors.
Endpoint protection
EDR on every clinician laptop and reception workstation. Tie endpoint posture to conditional access for any AI tool that processes PHI.
Access management
MFA on every system that touches PHI — including AI vendor admin consoles. Quarterly access review with documented attestation, including AI consoles that fall outside the EHR access-review scope.
Data protection and loss prevention
DLP rules flagging the 18 HIPAA identifiers in outbound channels including AI prompts. Sensitivity labels that AI copilots inherit. Block PHI in non-BAA AI tools at identity or DNS layer.
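A stripped-down illustration of the pattern-matching core of such a DLP rule, in Python. It covers only a few of the 18 identifiers and is not a substitute for a production policy engine such as Microsoft Purview; the patterns and sample prompt are invented:

```python
import re

# Illustrative patterns for a handful of the 18 HIPAA identifiers; a real
# DLP policy covers all 18 plus contextual and checksum-based matching.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[-:\s]{0,2}\d{5,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "dob": re.compile(r"\b(?:DOB|date of birth)[:\s]+\d{1,2}/\d{1,2}/\d{2,4}\b",
                      re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list:
    """Return the identifier types detected in an outbound AI prompt."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(prompt)]

hits = scan_prompt("Pt MRN: 0042173, DOB: 4/12/1958, summarize last three visits")
```

A hit should block or quarantine the prompt at the egress point, not merely log it; the sanctioned, BAA-covered tool is the compliant path.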
Asset management
Inventory every AI tool with PHI access, including shadow AI via CASB or DNS logs. Owner, BAA status, classification, last-reviewed date per asset.
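Keeping the per-asset record as structured data rather than a spreadsheet row makes the highest-risk query trivial. A Python sketch (field names and sample entries are our convention, not a 405(d) requirement):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAsset:
    """One row of the clinic AI asset inventory."""
    name: str
    owner: str            # accountable clinic-side owner
    phi_access: bool      # does the tool create/receive/maintain/transmit PHI?
    baa_status: str       # "executed" | "negotiating" | "none"
    classification: str   # e.g. "clinical-documentation", "productivity"
    last_reviewed: date

inventory = [
    AIAsset("AI scribe (hypothetical vendor)", "practice manager", True,
            "executed", "clinical-documentation", date(2025, 1, 15)),
    AIAsset("Consumer chatbot (shadow use)", "unassigned", True,
            "none", "unsanctioned", date(2025, 1, 15)),
]

# Assets touching PHI without an executed BAA are the ones to block or remediate first
phi_without_baa = [a.name for a in inventory
                   if a.phi_access and a.baa_status != "executed"]
```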
Network management
Segment AI vendor traffic. Restrict admin consoles to clinic-managed networks or ZTNA. DNS logging captures AI vendor egress for forensic reconstruction.
Vulnerability management
Patch on a documented cadence. Track AI vendor security advisories and incident disclosures — AI infrastructure incidents are now in scope for clinic VM programs.
Incident response
IR plan with AI-specific scenarios: prompt injection that exfiltrates PHI, model output containing other patients' data, vendor incident disclosure. Tabletop annually with AI scenarios baked in.
Medical device security
Inventory connected medical devices and their AI-embedded features. Apply manufacturer security guidance. For AI-enabled diagnostic recommendations, document the Section 1557 oversight checkpoint.
Cybersecurity oversight and governance
Designated security official per 45 CFR § 164.308(a)(2). For AI: designate an AI governance owner who signs off on each deployment against inventory, BAA matrix, and Section 1557 audit.
The 90-day clinic AI governance runbook
From AI inventory to board-grade governance report in 90 days
Twelve specific tasks across three 30-day phases — Discover, Contract & Configure, Operate & Monitor. Each task names the owner and the evidence artifact. Designed to integrate with the clinic's existing HIPAA Security Rule risk analysis cycle, not run alongside it.
AI inventory and shadow-AI discovery
Survey clinicians, MAs, scribes, billers, and front-desk for sanctioned and personal AI tools. Pull DNS or CASB output for unsanctioned vendor domains. Inventory EHR-embedded AI (Epic Cosmos, Athena Voice, eClinicalWorks Sunoh.ai, NextGen ambient, AdvancedMD AI). Output is the authoritative inventory.
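The DNS-log pass needs no tooling beyond a short script. A sketch (the log format and domain list are illustrative; a CASB app catalog maintains the authoritative list, and your resolver's export format will differ):

```python
# Consumer-AI domains to flag in DNS or CASB egress logs (illustrative and
# deliberately incomplete; maintain the real list in the CASB catalog).
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "notion.so", "perplexity.ai",
}

def flag_shadow_ai(dns_log_lines):
    """Return (domain, client) pairs where a clinic device resolved a consumer-AI domain.

    Assumes each log line is 'client_host queried_domain'; adapt the parse to
    your resolver's actual export format.
    """
    hits = []
    for line in dns_log_lines:
        client, domain = line.split()
        if domain in UNSANCTIONED_AI_DOMAINS:
            hits.append((domain, client))
    return hits

sample_log = [
    "frontdesk-pc chatgpt.com",
    "dr-lee-laptop ehr.example.com",
    "ma-station-2 claude.ai",
]
shadow = flag_shadow_ai(sample_log)
```

Each flagged pair is an inventory entry and a candidate for the identity-layer block, not a disciplinary artifact; the goal is routing clinicians to the sanctioned tool.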
BAA gap analysis against the inventory
Locate the executed BAA for every vendor. Classify gaps: Tier 1 (block immediately — consumer ChatGPT, Notion AI), Tier 2 (negotiate before next renewal), Tier 3 (low-PHI-touch, evaluate after Tier 1/2). Document the close date.
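The three-tier scheme reduces to a small classification rule. A sketch (the coarse "phi_touch" labels are our shorthand for how directly the tool handles PHI):

```python
TIER_ACTION = {
    1: "block immediately",
    2: "negotiate before next renewal",
    3: "evaluate after Tier 1/2",
}

def baa_gap_tier(phi_access: bool, baa_executed: bool, phi_touch: str) -> int:
    """Classify a vendor's BAA gap; returns 0 when there is no gap to remediate.

    phi_touch is a coarse exposure label: "high", "medium", or "low".
    """
    if baa_executed or not phi_access:
        return 0
    return {"high": 1, "medium": 2, "low": 3}[phi_touch]

# Consumer chatbot with PHI pasted in and no BAA: Tier 1, block now
consumer_tier = baa_gap_tier(phi_access=True, baa_executed=False, phi_touch="high")
```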
Section 1557 clinical AI scope mapping
Identify every AI tool that materially informs a clinical decision: scribes, coding assist, sepsis prediction, imaging, triage, prior-auth. Document protected-class variables or proxies involved and the mitigation evidence the vendor provides.
HIPAA Security Rule risk analysis update
Update the risk analysis to include AI threats: prompt injection, training-data exfiltration, model-output leakage of other patients' data, vendor subcontractor compromise. Risk analysis is the foundational document OCR asks for first.
Execute or remediate BAAs
Sign or renegotiate Tier 1/2 gaps. Require subcontractor BAA chain disclosure for AI inference providers. Capture effective date, scope, subcontractor list, and carve-outs in the BAA matrix.
Identity-layer block of non-BAA AI
Block consumer ChatGPT, consumer Claude, Notion AI, consumer Perplexity for clinical-staff identities via Entra ID Conditional Access + Microsoft Defender for Cloud Apps or equivalent CASB. Pair with clinician-comms cycle explaining the sanctioned alternative.
DLP, sensitivity labels, and audit logging
DLP rules flagging the 18 HIPAA identifiers in outbound channels including AI prompts. Sensitivity labels on PHI folders. Prompt-and-output audit logging at clinician-attributable granularity per 45 CFR § 164.312(b).
Section 1557 bias-testing intake
Obtain vendor bias-testing methodology, demographic performance results, and remediation protocol for each in-scope tool. Document clinic-side mitigation (e.g., mandatory clinician override on flagged-cohort outputs).
Human-oversight runbooks per AI tool
Document the human-in-the-loop checkpoint: scribe output clinician-signed before EHR commit, coding output reviewed by certified coder, clinical decision support overridden when contradicted by clinical judgment. Train staff and capture sign-off.
Quarterly AI vendor compliance review cadence
Quarterly review covers vendor incident disclosures, BAA in force, training-opt-out set, subcontractor chain unchanged, audit log retention healthy, Section 1557 bias-testing artifact on file. Schedule the first review on the runbook calendar.
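The review itself reduces to a checklist plus a signed attestation, and tracking it as data makes the failing items unambiguous. A sketch (check names paraphrase the cadence above):

```python
QUARTERLY_CHECKS = [
    "vendor incident disclosures reviewed",
    "BAA in force",
    "training-data opt-out still set",
    "subcontractor chain unchanged or disclosed",
    "audit-log retention healthy",
    "Section 1557 bias-testing artifact on file",
]

def open_items(results: dict) -> list:
    """Return the checks that failed this quarter; empty means the review passes."""
    return [check for check in QUARTERLY_CHECKS if not results.get(check, False)]

q1 = {check: True for check in QUARTERLY_CHECKS}
q1["training-data opt-out still set"] = False  # e.g. vendor default flipped in an update
failures = open_items(q1)
```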
Tabletop exercise — AI-specific breach scenario
90-minute tabletop: clinician pastes a chart into consumer ChatGPT and reports it; AI scribe vendor discloses a subcontractor incident; a Section 1557 bias complaint arrives. Validate breach-notification timing (60 days, immediate OCR for 500+).
Board-grade quarterly AI governance report
4-6 page report for clinic leadership: AI inventory delta, BAA status changes, incidents or near-misses, Section 1557 oversight attestation, next-quarter actions. This is the artifact that closes the OCR investigation when the year-three audit comes.
Honest pricing
What an MSSP package for a small clinic actually costs
For a 1-15 provider US clinic running an EHR plus an AI scribe plus a productivity copilot, the realistic MSSP retainer band is $600 to $2,400 per month depending on provider count, EHR complexity, and whether the engagement is mandate-driven (post-incident, OCR open, cyber-insurance renewal) or proactive. That range covers the AI governance scope on this page in addition to the baseline MSSP service.
In scope at the low end: AI inventory and shadow-AI discovery, quarterly BAA matrix review, identity-layer block of non-BAA AI for clinical staff, DLP for the 18 HIPAA identifiers in AI prompts, prompt-and-output audit logging per 45 CFR § 164.312(b), human-oversight runbooks per AI tool, Section 1557 bias-testing intake, annual tabletop, and the board-grade quarterly governance report. Plus baseline MSSP — EDR, email security, MFA enforcement, backup, vulnerability management, incident response.
Add-on at the higher end: a dedicated AI Governance retainer for clinics with more than 15 providers or multiple specialty service lines, custom integrations with EHR-embedded AI features the EHR vendor does not instrument for HIPAA audit, expert-witness-grade Section 1557 audit documentation for clinics in active OCR investigation, and 24/7 SOC coverage for telehealth or after-hours patient-facing AI tools.
Not in scope at any tier: the actual practice of medicine, clinical AI model selection (that belongs to clinical leadership), and legal interpretation of OCR findings (that belongs to your healthcare attorney). EFROS operates the program; we do not replace clinical judgment or counsel.
Frameworks anchored
The US-only regulatory anchor set
HIPAA Security Rule (45 CFR Part 164 Subpart C) — administrative, physical, and technical safeguards for ePHI. § 164.308 risk analysis, § 164.312(b) audit controls, and § 164.308(b) subcontractor flow-down are the AI-relevant clauses.
HIPAA Privacy Rule (45 CFR Part 164 Subpart E) — permitted uses and disclosures of PHI, including the minimum-necessary standard that constrains how much PHI can go into an AI prompt.
HHS-OCR Section 1557 Final Rule (May 2024) — algorithmic non-discrimination in patient-care decision support tools. Non-delegable deployer obligation for clinics receiving federal financial assistance.
HHS 405(d) HICP (Health Industry Cybersecurity Practices) — small-practice volume defines the ten practice domains. Voluntary framework but HHS-OCR treats adoption as evidence of a documented program under HITECH Act Section 13412.
CMS Conditions of Participation and the CMS-0057-F prior-authorization final rule — apply to clinics participating in Medicare and Medicaid. AI use in payer determinations and clinical decision support is in scope.
State medical board AI guidance — California, Texas, and Florida have issued specific clinical AI guidance. Other states reference HHS-OCR Section 1557 and ABMS / specialty-board standards. State guidance is binding on the licensee.
NIST AI Risk Management Framework and ONC health-IT certification — NIST AI 100-1 and the Generative AI Profile (NIST AI 600-1) are the voluntary US-federal AI risk vocabulary. ONC publishes certified-EHR criteria including algorithmic transparency requirements for predictive decision support.
FAQ
Common questions from small-clinic practice managers
Does my clinic need a BAA with every AI vendor we use?
Yes for any AI vendor that creates, receives, maintains, or transmits PHI on the clinic's behalf — including prompts and outputs. A scribe processes PHI on every visit. A productivity copilot with access to a SharePoint folder containing PHI also processes PHI. If the vendor cannot or will not sign a HIPAA BAA, that vendor is not viable for any clinical workflow. The exception is genuinely de-identified analytics where you can prove the 45 CFR § 164.514 safe-harbor or expert-determination standard was met — and clinic-side de-identification is harder than it sounds.
What if our EHR (Epic, Athenahealth, eClinicalWorks, Cerner, NextGen, AdvancedMD) already has AI features?
EHR-embedded AI is covered under the EHR vendor's BAA — but you still owe a Section 1557 algorithmic non-discrimination review, an audit-log retention check, and a documented human-oversight checkpoint per feature. Epic Cosmos, Athena Voice, eClinicalWorks Sunoh.ai, NextGen ambient, and AdvancedMD AI are all in scope. Most EHRs default audit logging to lower retention than HIPAA Security Rule § 164.312(b) requires for forensic reconstruction — verify your retention setting before assuming the vendor has it covered.
Is using ChatGPT for clinical notes a HIPAA violation?
Consumer ChatGPT — yes, when PHI is in the prompt. Consumer/Plus tiers are not BAA-eligible, and pasting any of the 18 HIPAA identifiers into the prompt is an impermissible disclosure under 45 CFR § 164.502. ChatGPT Enterprise with an executed BAA and Zero Data Retention is acceptable. ChatGPT Team is conditionally BAA-eligible — verify the BAA was actually executed for your tenant before clinical use. If a clinician has already done this, treat it as a potential breach, start the 60-day notification clock at discovery, and document the investigation.
What's the difference between Section 1557 and HIPAA when we use clinical AI?
HIPAA governs the confidentiality, integrity, and availability of PHI — who can access the data, how it is protected, when it can be disclosed. Section 1557 governs whether the clinical decision the AI informs discriminates against patients in protected classes (race, color, national origin, sex, age, disability). A clinic can be perfectly HIPAA-compliant — every BAA signed, every log retained — and still violate Section 1557 if an AI tool routes triage decisions in a way that disadvantages a protected class. Both obligations apply; neither subsumes the other.
We're a Medicare Advantage practice — are there extra rules?
Yes. Medicare Advantage participation adds CMS Conditions of Participation expectations, MIPS/MACRA reporting where AI-driven coding can create OIG False Claims Act exposure, and the CMS prior-authorization final rule (CMS-0057-F), which constrains AI use in payer determinations and obligates plans to disclose AI use. On the provider side, your prior-auth workflow with the payer's AI tool needs documented human review when an adverse determination is appealed. Document the workflow and keep the artifact for the federal audit cycle.
How do we audit AI vendor compliance ongoing, not just at signing?
Quarterly minimum. BAA, subcontractor chain, training-data opt-out, and audit-log retention are not signed once and forgotten — they change with product updates, acquisitions, and platform pivots. Define a quarterly review: BAA in force, subcontractor list unchanged or appropriately disclosed, training-opt-out still set, audit-log retention configured per HIPAA Security Rule, vendor advisories triaged. MSSP runs the review, clinic leadership signs the attestation, the artifact lives in the compliance binder for the federal audit cycle.
Three ways forward
Self-assess your clinic's AI exposure in 5 minutes, reserve the fixed-fee $5K AI Governance audit with the deliverables described on this page, or read the full AI Governance service brief.