Resource · Colorado AI Act for Healthcare
Colorado AI Act for healthcare deployers — what compliance actually requires.
The Colorado AI Act (SB 24-205, effective February 2026) imposes obligations on developers and deployers of high-risk AI systems across eight consequential-decision categories — and healthcare is squarely in scope. This page maps the act to the AI tools US health systems, clinics, ACOs, payers, and digital-health firms actually run today, layers in the HIPAA Security Rule and HHS-OCR Section 1557 algorithmic non-discrimination rule, and walks through the 90-day deployer compliance roadmap.
Law firms can write the memo. The MSSP runs the controls. EFROS operates the AI Governance program — inventory, classification, vendor BAA verification, audit logging, human oversight, impact assessment, board-grade reporting — under one accountable SLA.
Jurisdictional trigger
Who must comply
The Colorado AI Act applies based on where the consumer (patient, employee, or AI-decision subject) resides — not where the deployer is headquartered. A Massachusetts hospital that runs automated decisioning AI on a Colorado resident's claim, chart, or hiring application triggers Colorado AI Act obligations for that interaction.
The act imposes obligations on two roles: developers (entities that build or substantially modify a high-risk AI system) and deployers (entities that use a high-risk AI system to make a consequential decision). Most US healthcare organizations are deployers; some larger systems with internal data science teams act as both.
Small-deployer carve-out: organizations under 50 employees that meet specific narrow criteria are exempt from some documentation obligations. The carve-out does not remove the underlying duty of reasonable care to avoid algorithmic discrimination — and most clinical AI use intersects with high-risk categories regardless of deployer size.
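The two gating questions above reduce to a small triage check. A minimal sketch in Python, with illustrative names (AIInteraction, colorado_act_applies) that are ours, not the statute's; treat it as a first-pass filter, not a legal determination.

```python
from dataclasses import dataclass

@dataclass
class AIInteraction:
    """One AI-assisted consequential decision about one person."""
    consumer_state: str     # state of residence of the patient, employee, or subject
    employee_count: int     # deployer headcount, for the small-deployer carve-out
    is_consequential: bool  # does the AI output materially inform the decision?

def colorado_act_applies(ix: AIInteraction) -> bool:
    # The trigger is the consumer's residence, not the deployer's headquarters.
    return ix.consumer_state == "CO" and ix.is_consequential

def small_deployer_documentation_relief(ix: AIInteraction) -> bool:
    # Headcount relief narrows some documentation duties only; it does not
    # remove the duty of reasonable care, and other statutory criteria apply.
    return ix.employee_count < 50

interaction = AIInteraction(consumer_state="CO", employee_count=30, is_consequential=True)
print(colorado_act_applies(interaction))                 # True
print(small_deployer_documentation_relief(interaction))  # True, pending the other criteria
```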
Colorado AI Act §6-1-1701 high-risk in healthcare
Nine high-risk use cases healthcare deployers face
The Colorado AI Act defines eight consequential-decision categories; the nine use cases below show where healthcare operations intersect with them. Each requires an impact assessment, consumer notice, appeal rights with human review where feasible, and a NIST AI RMF-aligned risk-management policy.
AI-driven hiring and credentialing
Automated employment decision tools used to screen physician candidates, clinical staff, or contract nurses fall under the Colorado AI Act §6-1-1701 high-risk definition, plus the NYC LL144 bias-audit requirement when NYC-resident candidates are screened. Common surfaces: HireVue, Pymetrics, Modern Hire, Eightfold, and AI-embedded ATS scoring inside Greenhouse, Workday, iCIMS.
Clinical decision support and diagnostic AI
AI tools that materially inform clinical decisions about a patient — diagnostic imaging interpretation, sepsis prediction, readmission risk scoring, triage routing — meet the Colorado high-risk threshold AND the HHS-OCR Section 1557 algorithmic non-discrimination requirement. Examples: Aidoc, Viz.ai, Epic Sepsis Model, AWS HealthScribe-derived recommendations.
Clinical AI scribes and documentation
AI scribes that draft clinical notes — Abridge, Suki, Microsoft DAX Copilot, Heidi, Augmedix, Nuance DAX — are HIPAA business associates that require executed BAAs and fall under the high-risk category when their output substantially informs treatment, coding, or billing decisions.
Insurance and prior authorization AI
Payer-side AI for prior authorization, claims denial routing, medical necessity determination, or coverage adjudication is high-risk under Colorado AI Act §6-1-1701 insurance category. Notable: UnitedHealthcare's nH Predict (subject to 2023 class action), Cigna's PXDX, Humana algorithms.
Financial services for healthcare
Patient-financing eligibility scoring, medical debt collection AI, and revenue cycle management algorithms that determine financial eligibility for care fall under the Colorado AI Act financial services category plus FTC Section 5 enforcement on unfair practices.
Education for clinical training
AI tools used in medical school admissions, residency match algorithms, continuing medical education assessment, or fellowship selection meet Colorado AI Act education category.
Housing-adjacent: senior living and skilled nursing placement
AI tools that determine eligibility for skilled nursing, assisted living, or supportive housing placement intersect Colorado AI Act housing + healthcare categories — particularly relevant for ACOs and care coordination platforms.
Legal services for healthcare ops
AI used in malpractice risk scoring, peer review automation, or credentialing legal review meets the Colorado AI Act legal services category. Common in large hospital systems with embedded legal ops.
Government services in public health
Public hospitals, FQHCs, and state Medicaid agencies deploying AI for benefit determination, fraud detection, or program eligibility face Colorado AI Act government services category plus federal procurement AI rules (OMB M-24-10).
Clinical AI vendor BAA matrix
What your AI vendors will sign — and what they won't
Curated matrix of the AI vendors most commonly deployed in US healthcare workflows, with BAA availability tier, Colorado AI Act risk class, and the operational caveat that determines whether the vendor is safe for clinical use.
Abridge
BAA: Yes — default tier · Risk class: High-risk (Colorado AI Act + Section 1557) · Caveat: HIPAA-aligned BAA available. Output materially informs documentation and billing; documented human-oversight checkpoints plus a Section 1557 non-discrimination audit are required.
Suki AI
BAA: Yes — default tier · Risk class: High-risk (Colorado AI Act healthcare) · Caveat: HIPAA BAA standard. Treat as a deployer-side high-risk system; impact assessment and consumer notice are required for patient-facing use.
Microsoft DAX Copilot / Dragon Medical
BAA: Yes — Microsoft Online Services BAA · Risk class: High-risk (Colorado AI Act + Section 1557) · Caveat: Covered under the Microsoft BAA. Maintain technical documentation, Section 1557 bias testing, and audit-log retention via Purview.
Heidi Health
BAA: Yes — default tier · Risk class: High-risk (Colorado AI Act healthcare) · Caveat: BAA available. Require human-in-the-loop review of output per Section 1557; verify state-by-state operational coverage.
Nuance DAX (legacy, pre-Copilot)
BAA: Yes — via Nuance BAA addendum · Risk class: High-risk (Colorado AI Act + Section 1557) · Caveat: New deployments are consolidated under Microsoft DAX Copilot; legacy DAX continues under Nuance terms.
ChatGPT Enterprise / Team
BAA: Yes — enterprise tier only · Risk class: Limited-risk (CA SB 1001 / AB 2013) · Caveat: Consumer ChatGPT is NOT BAA-eligible. The enterprise tier requires explicit BAA execution plus Zero Data Retention. Block the consumer tier at the identity layer for clinical staff.
Microsoft 365 Copilot (general productivity)
BAA: Yes — under the M365 E3/E5 BAA · Risk class: Limited-risk (transparency required) · Caveat: Inherits SharePoint and Graph permissions. Run a permission audit, Restricted SharePoint Search, and Copilot DLP before clinical staff use.
Otter.ai (meetings AI)
BAA: Only on the HIPAA compliance plan · Risk class: Sector-specific (two-party consent) · Caveat: Free/Pro-tier transcripts feed Otter's training pipeline — block for clinical meetings. The HIPAA tier is required for telehealth consult transcription.
Notion AI
BAA: No · Risk class: Not BAA-eligible · Caveat: Block for any PHI-touching workflow; use Microsoft 365 Copilot or Google Workspace Gemini under BAA instead.
Perplexity, consumer Claude, consumer ChatGPT
BAA: No · Risk class: Not BAA-eligible · Caveat: Block at the identity layer for all clinical staff. Pasting PHI in is a third-party disclosure — likely a HIPAA breach plus a Section 1557 issue.
BAA availability changes — verify current contract terms with each vendor before relying on this matrix for procurement decisions. EFROS maintains an internal live vendor matrix updated quarterly as part of the AI Governance retainer.
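One way to keep the matrix operational rather than decorative is to encode it as data and gate procurement on it. A sketch under the same assumptions as the matrix above; field names and records are illustrative, and the authoritative source should be your executed contracts.

```python
from dataclasses import dataclass
from enum import Enum

class BAATier(Enum):
    DEFAULT = "BAA available by default"
    GATED = "BAA on enterprise or HIPAA tier only"
    NONE = "not BAA-eligible"

@dataclass
class VendorRecord:
    name: str
    baa: BAATier
    high_risk: bool  # Colorado AI Act high-risk classification
    caveat: str

MATRIX = [
    VendorRecord("Abridge", BAATier.DEFAULT, True, "human oversight + Section 1557 audit"),
    VendorRecord("ChatGPT Enterprise", BAATier.GATED, False, "execute BAA + Zero Data Retention"),
    VendorRecord("Notion AI", BAATier.NONE, False, "block for PHI workflows"),
]

def procurement_gate(v: VendorRecord, touches_phi: bool, baa_executed: bool) -> str:
    """Conservative default: no executed BAA means no PHI, full stop."""
    if touches_phi and (v.baa is BAATier.NONE or not baa_executed):
        return "BLOCK"
    if v.high_risk:
        return "ALLOW with impact assessment, notice, and oversight controls"
    return "ALLOW with standard review"

for v in MATRIX:
    print(v.name, "->", procurement_gate(v, touches_phi=True,
                                         baa_executed=(v.baa is BAATier.DEFAULT)))
```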
90-day deployer compliance roadmap
From inventory to impact assessment in 90 days
The phased plan EFROS runs for healthcare deployers. Six two-week phases, each producing a defined evidence artifact. Designed to integrate with existing HIPAA Security Rule risk analysis cycles rather than running as a parallel program.
AI inventory + shadow-AI discovery
Map every AI tool touching clinical workflows: EHR-embedded AI features (Epic, Cerner Oracle Health, athenahealth), standalone clinical AI (scribes, imaging, sepsis), copilots (M365, ChatGPT, Claude), and AI-embedded vendor tools (Salesforce Health Cloud Einstein, HubSpot, Intercom Fin). Survey clinical staff for personal-account use of AI.
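Before dedicated CASB tooling is in place, a first discovery pass can run against an identity-provider sign-in export. A sketch assuming a CSV export with user_principal_name and app_domain columns; adjust both the column names and the domain list to your environment.

```python
import csv
from collections import Counter

# Partial list for illustration; maintain the real one from your web-filter categories.
CONSUMER_AI_DOMAINS = {"chat.openai.com", "claude.ai", "perplexity.ai", "otter.ai", "notion.so"}

def shadow_ai_report(signin_csv_path: str) -> Counter:
    """Count sign-ins to known consumer AI endpoints, per (user, domain) pair."""
    hits: Counter = Counter()
    with open(signin_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = (row.get("app_domain") or "").lower()
            if any(domain == d or domain.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[(row.get("user_principal_name"), domain)] += 1
    return hits
```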
Tier classification + Colorado AI Act §6-1-1701 high-risk mapping
Classify each inventoried AI system against the Colorado high-risk categories. Document the classification rationale per system, with signoff. Flag systems requiring an impact assessment under §6-1-1703(3). Identify state-of-residence exposure for consumer-notice obligation triggers.
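The classification record itself can be a simple signed artifact. A sketch; the category set follows the eight consequential-decision areas in C.R.S. §6-1-1701(3), and the field names are ours.

```python
from dataclasses import dataclass, field
from datetime import date

HIGH_RISK_CATEGORIES = {
    "education", "employment", "financial_or_lending", "essential_government",
    "healthcare_services", "housing", "insurance", "legal_services",
}  # the eight consequential-decision areas in C.R.S. 6-1-1701(3)

@dataclass
class SystemClassification:
    system_name: str
    categories: set          # which high-risk categories the system touches
    rationale: str           # plain-language reasoning, with statute citations
    classified_by: str       # named owner whose signoff is on record
    consumer_states: set = field(default_factory=set)
    classified_on: date = field(default_factory=date.today)

    @property
    def needs_impact_assessment(self) -> bool:
        return bool(self.categories & HIGH_RISK_CATEGORIES)

scribe = SystemClassification(
    system_name="AI scribe (clinic-wide)",
    categories={"healthcare_services"},
    rationale="Output substantially informs coding and billing decisions.",
    classified_by="CISO",
    consumer_states={"CO", "NY"},
)
print(scribe.needs_impact_assessment)  # True
```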
Vendor BAA + DPA verification
Execute or verify BAA with every AI vendor processing PHI. Block consumer-tier AI (Perplexity, Notion AI, consumer ChatGPT/Claude) at the identity layer. Document training-data lineage for any vendor whose model was fine-tuned on customer data.
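The identity-layer block can be generated from the same BAA register rather than maintained by hand. A sketch with hypothetical records; feed the output into your IdP's conditional-access or web-filter policy.

```python
# Hypothetical register; the system of record should be your contract repository.
VENDOR_BAA = {
    "abridge.com":     {"baa_executed": True,  "tier": "default"},
    "chat.openai.com": {"baa_executed": False, "tier": "consumer"},  # never BAA-eligible
    "notion.so":       {"baa_executed": False, "tier": "none"},
}

def clinical_staff_blocklist(register: dict) -> list:
    """Domains to deny at the identity/SSO layer for PHI-handling staff:
    anything without an executed BAA."""
    return sorted(domain for domain, rec in register.items() if not rec["baa_executed"])

print(clinical_staff_blocklist(VENDOR_BAA))  # ['chat.openai.com', 'notion.so']
```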
Section 1557 algorithmic non-discrimination audit
For each high-risk system, document bias-testing methodology, demographic performance analysis, and remediation triggers per HHS-OCR Section 1557 final rule (effective July 2024). Establish escalation protocol for performance disparities by race, ethnicity, sex, disability, age, or national origin.
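A minimal shape for the demographic performance analysis: compute a metric per group, then flag gaps past an escalation threshold. The true-positive-rate metric and the 0.10 gap below are illustrative internal choices, not numbers from the Section 1557 rule.

```python
from collections import defaultdict

def tpr_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels.
    Returns the true-positive rate per demographic group."""
    tp, pos = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            tp[group] += y_pred
    return {g: tp[g] / pos[g] for g in pos}

def disparity_flags(rates, max_gap=0.10):
    """Flag groups whose rate trails the best-performing group by more than max_gap."""
    if not rates:
        return []
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if best - r > max_gap)

sample = [("A", 1, 1), ("A", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 1, 0)]
rates = tpr_by_group(sample)   # {'A': 1.0, 'B': 0.333...}
print(disparity_flags(rates))  # ['B'] -> escalate per the remediation protocol
```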
Human oversight + audit logging
Implement human-in-the-loop controls on every high-risk system output: documented review checkpoints, mandatory clinician sign-off on diagnostic suggestions, and audit-log capture of override decisions. Configure Microsoft Purview AI Hub or equivalent for prompt + output logging.
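Whatever the logging platform, each review checkpoint should yield one append-only record. A sketch of the record shape; hashing the AI output rather than storing it keeps raw PHI out of log platforms that sit outside the BAA boundary.

```python
import hashlib
import json
from datetime import datetime, timezone

def oversight_log_entry(system: str, patient_ref: str, ai_output: str,
                        clinician: str, action: str, note: str = "") -> str:
    """One record per human-review checkpoint. `action` is 'accepted',
    'modified', or 'overridden'; override reasons go in `note`."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "patient_ref": patient_ref,  # internal identifier, never a name
        "output_sha256": hashlib.sha256(ai_output.encode()).hexdigest(),
        "clinician": clinician,
        "action": action,
        "note": note,
    })

print(oversight_log_entry("sepsis-model", "MRN-0000", "risk=0.91",
                          "dr.example", "overridden", "clinical picture inconsistent"))
```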
Impact assessment + consumer notice
Produce the Colorado AI Act §6-1-1703(3) impact assessment artifact: purpose, training data summary, evaluation methodology, known limitations, foreseeable risks. Update patient-facing notices and consent forms to disclose AI use where it materially informs care decisions.
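Starting every assessment from a fixed template keeps the statutory elements from drifting between systems. A sketch; the field names are ours, mapped to the elements listed above.

```python
IMPACT_ASSESSMENT_TEMPLATE = {
    "system": "",
    "purpose": "",
    "training_data_summary": "",
    "evaluation_methodology": "",
    "known_limitations": [],
    "foreseeable_risks": [],
    "mitigations": [],
    "consumer_notice_updated": False,
    "reviewed_by": "",
    "review_date": "",
}

def new_assessment(system_name: str) -> dict:
    """Start a per-system record; every field must be completed (and the
    consumer notice updated) before deployment signoff."""
    doc = {k: (list(v) if isinstance(v, list) else v)
           for k, v in IMPACT_ASSESSMENT_TEMPLATE.items()}
    doc["system"] = system_name
    return doc

assessment = new_assessment("prior-auth routing model")
```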
FAQ
Common questions from healthcare deployers
Does the Colorado AI Act apply to a healthcare organization headquartered outside Colorado?
It applies based on where the consumer (patient, employee, or AI-decision subject) resides — not where the deployer is headquartered. A New York health system that uses an automated decisioning AI on a Colorado resident's chart triggers Colorado AI Act obligations for that interaction. Effective February 2026 with phased enforcement.
Is there a small-organization exemption?
The Colorado AI Act includes a small-deployer carve-out for organizations under 50 employees that meet specific narrow criteria. Most clinics still fall in scope: clinical AI use typically intersects the high-risk categories (clinical decision support, employment screening), and the small-deployer threshold exempts only some deployer-side documentation obligations, not the duty of reasonable care for the use case itself.
How does HHS-OCR Section 1557 interact with Colorado AI Act?
Section 1557 final rule (effective July 2024) prohibits algorithmic discrimination in covered health programs receiving federal financial assistance. Colorado AI Act adds documentation, impact assessment, and consumer notice on top of the Section 1557 non-discrimination baseline. Compliance with one does not satisfy the other — both apply when both jurisdictional triggers are met.
What is a 'consequential decision' under Colorado AI Act for healthcare?
Colorado AI Act §6-1-1701(3) defines a consequential decision as one with a material legal or similarly significant effect on the provision or denial of healthcare services to a consumer, or on their cost or terms. In practice this captures diagnostic decision support, treatment routing, prior authorization, eligibility determination, credentialing, and any AI output that substantially informs the clinician's care decision.
Do clinical AI scribes (Abridge, Suki, DAX, Heidi) trigger Colorado AI Act high-risk obligations?
Yes when their output materially informs documentation, coding, billing, or downstream clinical decisions. A scribe that auto-generates an A&P section that the clinician signs off on without substantive review meets the substantial-factor threshold. The remedy is procedural — documented human-review checkpoints — not technical removal of the scribe.
What documentation does EFROS produce as part of the AI Governance program?
AI inventory, per-system Colorado AI Act tier classification with rationale, vendor BAA verification matrix, Section 1557 non-discrimination audit methodology + results, human-oversight runbooks, audit-log retention configuration, impact assessment artifact per high-risk system, and a board-grade quarterly executive summary. Fixed-fee 10-day audit converts to managed retainer with audit fee credited toward first quarter.
Three ways forward
Self-assess your AI exposure in 5 minutes, book a 20-minute scoping call, or reserve the fixed-fee 10-day AI Governance audit with the deliverables described on this page.