
Tool · Free AI Risk Score

Where does your AI usage sit on the US AI risk landscape?

Five minutes. Fifteen questions. You get a US AI risk classification (Colorado AI Act + state law overlay), a governance maturity score (NIST AI RMF), sector-specific compliance overlays (HIPAA, SR 11-7, CMMC), and a citation-anchored 90-day execution roadmap. No M365 admin access required — just your answers and a work email.

Free · 5 minutes · No commitment
Branded report + executive briefing
Mapped to Colorado AI Act + NIST AI RMF + ISO/IEC 42001

AI Risk Classification

Five questions to classify your AI usage against US state and federal frameworks. The classification determines which obligations apply — including state-law-restricted uses that need legal review.

Q1 · NYC LL144 · TN ELVIS Act · IL HB 3773

Does your organization use AI for automated employment decisions without a bias audit (NYC LL144), facial recognition by government or law enforcement, deepfake or AI voice cloning without consent (TN ELVIS Act), social scoring, or predictive policing?

State-law-restricted uses. NYC LL144 requires annual bias audits with public posting; TN ELVIS Act creates civil liability for unauthorized voice cloning; CA, IL, and others restrict AI in hiring.

Q2 · Colorado AI Act SB 24-205

Does your organization use AI to make — or substantially inform — 'consequential decisions' in any of: employment, healthcare, financial services, education, government services, housing, insurance, legal services, or criminal justice?

Colorado AI Act SB 24-205 (effective Feb 2026) defines these as 'high-risk AI system' categories. Triggers impact assessment, consumer notice, opt-out rights, and risk-management documentation.

Q3 · CA SB 1001 · CA AB 2013 · FTC Section 5

Do you deploy AI that directly interacts with natural persons (chatbots, AI agents) or generates AI content (text, image, audio, video) shown to customers or the public?

Transparency obligations: CA SB 1001 bot disclosure in commerce/elections; CA AB 2013 (Jan 2026) gen-AI training data summary; FTC Section 5 covers AI-washing and undisclosed AI-generated marketing.

Q4 · NIST AI RMF GPAI Profile (2024)

Do you use General-Purpose AI Models or Foundation Models in business operations — Microsoft 365 Copilot, ChatGPT/Claude/Gemini Enterprise, custom GPTs, or fine-tuned LLMs?

Triggers NIST AI RMF GPAI Profile (2024) obligations: technical documentation, evaluation results, copyright policy, model card collection, continuous monitoring.

Q5 · NIST AI RMF · GOVERN 1.4

Is your AI usage strictly limited to internal automation (note-taking, summaries, scheduling), spam filtering, search ranking, or video game AI — and none of the higher-risk categories above?

Minimal-risk uses are not specifically regulated but benefit from an Acceptable Use Policy and inventory.
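The five questions above imply a precedence order: a state-law-restricted use outranks a Colorado high-risk classification, which outranks a transparency-only obligation, with minimal-risk as the default. A minimal sketch of that decision logic is below; the question keys, tier labels, and precedence are our assumptions for illustration, not EFROS's actual scoring implementation (and note that Q4's GPAI obligations layer on top of any tier rather than replacing it).

```python
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical tier labels; ordered lowest to highest severity."""
    MINIMAL = 0
    TRANSPARENCY = 1
    HIGH_RISK = 2
    STATE_RESTRICTED = 3

def classify(answers: dict[str, bool]) -> Tier:
    """Map yes/no intake answers to the highest applicable tier.

    Key names (q1..q3) are illustrative assumptions. Q4 (GPAI use) is
    handled separately as an overlay, and Q5 (internal-only automation)
    corresponds to the MINIMAL default when nothing higher applies.
    """
    if answers.get("q1_restricted_use"):   # NYC LL144 / TN ELVIS Act / IL HB 3773
        return Tier.STATE_RESTRICTED
    if answers.get("q2_consequential"):    # Colorado AI Act SB 24-205 high-risk
        return Tier.HIGH_RISK
    if answers.get("q3_public_facing"):    # CA SB 1001 / CA AB 2013 / FTC Section 5
        return Tier.TRANSPARENCY
    return Tier.MINIMAL
```

Because the tiers are ordered, a single "yes" to an earlier question short-circuits the rest, which mirrors how the report assigns one tier even when several questions are answered affirmatively.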

What the report tells you

Four classifications. One report.

US AI risk tier

Your AI use cases mapped against four classifications: state-law restricted (NYC LL144, TN ELVIS Act, IL HB 3773), Colorado AI Act high-risk consequential decisions (employment, healthcare, financial, education, housing, insurance, legal, criminal justice, government), transparency-required (CA SB 1001 bot disclosure, CA AB 2013 gen-AI training data, FTC Section 5), and minimal-risk. Tier drives the obligation set you're accountable for.

Foundation Model obligations

Whether you use General-Purpose AI Models — Microsoft 365 Copilot, ChatGPT Enterprise, Claude for Work, custom GPTs, or fine-tuned LLMs — and which NIST AI RMF GPAI Profile (2024) obligations apply: technical documentation, evaluation results, model card collection, continuous monitoring.

NIST AI RMF governance maturity

Five Govern-function controls scored 0-100%: AI inventory, Acceptable Use Policy, vendor diligence, logging/monitoring, and human-in-the-loop oversight. Surfaces the foundational gaps.
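A simple reading of that scoring is a percentage of controls in place across the five Govern-function areas. The sketch below assumes equal weighting and binary (in place / not in place) controls; the actual EFROS rubric may weight or grade controls differently.

```python
# The five control names mirror the report's Govern-function areas.
GOVERN_CONTROLS = [
    "ai_inventory",
    "acceptable_use_policy",
    "vendor_diligence",
    "logging_monitoring",
    "human_in_the_loop",
]

def maturity_score(controls: dict[str, bool]) -> int:
    """Percentage (0-100) of Govern controls in place, equally weighted.

    Missing keys count as not in place. Equal weighting is an assumption
    for illustration, not the published scoring rubric.
    """
    in_place = sum(controls.get(c, False) for c in GOVERN_CONTROLS)
    return round(100 * in_place / len(GOVERN_CONTROLS))
```

Under this assumption, each missing control costs 20 points, so an organization with an AI inventory and an Acceptable Use Policy but nothing else would score 40%.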

Sector + state-law compliance overlays

We layer HIPAA / HHS-OCR Section 1557 (clinical AI + BAA), SR 11-7 / NYDFS Part 500 (financial model risk + AI washing), CMMC 2.0 / NIST SP 800-171 R2 (CUI in AI tools), and state AI laws (Colorado, NYC, California, Illinois, Utah, Tennessee) onto the recommendations.

Who runs this

Decision-makers who need a defensible AI risk picture.

CISO / Compliance Officer

You need a defensible AI risk classification you can show the board, the auditor, and the GC — and an execution roadmap with named NIST AI RMF, Colorado AI Act, and sector-specific framework citations.

General Counsel

Colorado AI Act high-risk classification (effective Feb 2026), state-by-state restricted use exposure (NYC LL144, TN ELVIS Act, IL HB 3773), and FTC AI-washing risk determine which AI deployments need legal review before the next quarter. This assessment surfaces what to triage first.

Founder / COO

Your team is running Copilot, ChatGPT, and embedded SaaS AI faster than governance can keep up. The assessment shows what's exposed and what to fix in the next 90 days.

CFO / CIO

A budget and prioritization tool: the report's 90-day roadmap calibrates how much governance to fund this quarter versus defer, with justifications anchored in named US frameworks.

FAQ

Questions about the assessment.

How long does it take?

Five minutes. Fifteen questions across three sections — US AI risk classification, governance maturity, and compliance context. No M365 admin access required for the self-assessment version.

Is it actually free?

Yes. You receive the same tier classification, maturity score, and citation-anchored recommendations regardless of whether you engage EFROS afterwards. The assessment is the entry point to the EFROS AI Governance audit ($5k fixed-fee, 10-day delivery), but there is no obligation to continue.

What frameworks does the classification reference?

NIST AI Risk Management Framework 1.0 (GOVERN, MAP, MEASURE, MANAGE) + NIST AI RMF GPAI Profile (2024); ISO/IEC 42001:2023; Colorado AI Act SB 24-205; NYC Local Law 144; California AB 2013 / SB 1001; Illinois HB 3773; Tennessee ELVIS Act; Utah SB 149; HIPAA Security Rule (45 CFR 164.308-312); HHS-OCR Section 1557; HICP 405(d); SR 11-7 / OCC 2011-12 for financial services; NYDFS Part 500; CMMC 2.0 / NIST SP 800-171 R2; FTC Section 5; SEC AI-washing guidance.

Is this US-focused or international?

US-focused. EFROS serves only US clients, so the classification anchors in US state and federal frameworks (Colorado AI Act, NYC LL144, CA AB 2013, NIST AI RMF, HIPAA, SR 11-7, CMMC). If your US-based organization has EU customers or operations, additional jurisdictional analysis is recommended outside this assessment.

Does this replace legal counsel?

No. The classification is grounded in current US framework guidance, but a definitive Colorado AI Act high-risk classification or state-law-restricted use determination requires legal review of the specific use case and jurisdiction. The report is decision-support for your counsel — not a substitute.

What's the difference between this and a paid AI Governance audit?

The free assessment is self-reported and produces a tier + maturity + recommendations. The paid audit adds: M365 Graph deep-scan (Copilot configuration, agent inventory, audit log retention), AI vendor BAA verification, training-data lineage review, executive-ready compliance binder mapped to NIST AI RMF + ISO/IEC 42001 + Colorado AI Act, and a counsel-reviewed gap remediation roadmap. Ten-day delivery.

What happens to my data?

Your intake responses and lead-capture fields (name, email, company) are stored in EFROS-controlled D1 storage on Cloudflare. They are used only to generate and deliver this report and to follow up if you request a discovery call, are not shared with third parties, and are subject to the EFROS privacy policy.

Already know you need the paid audit?

The $5k AI Governance audit extends this self-assessment with M365 Graph deep-scan, AI vendor BAA verification, training-data lineage review, and an executive-ready compliance binder. Ten-day delivery.