Skip to main content

The 4th peer discipline · NIST AI RMF · ISO/IEC 42001 · EU AI Act

AI Governance — a regulated risk surface, not a feature.

EFROS operates an accountable AI governance program for SMBs and mid-market organizations running Microsoft 365 Copilot, ChatGPT, Claude, Gemini, AI-embedded vendor tools, and custom LLM deployments — mapped to NIST AI RMF 1.0, ISO/IEC 42001:2023 and the EU AI Act.

Why AI needs governance

AI is not just a productivity feature

The AI surface in a typical organization is broader than people think. Shadow AI use lives in employee browsers and personal accounts. Embedded vendor AI ships inside Microsoft 365 Copilot, Salesforce Einstein, Zoom AI Companion, Slack AI, Notion AI, and the long tail of SaaS. Custom deployments run on AWS Bedrock, Azure OpenAI, and Google Vertex AI. RAG pipelines pull corporate documents into LLM context windows. Agents make API calls on behalf of users.

Each AI interaction is a potential data-exfiltration event, a potential hallucination liability, a potential prompt-injection vector, and a potential regulatory exposure under the EU AI Act, HIPAA, SEC, FTC, or sector-specific guidance. Treating AI as a feature toggle in some product page is what gets organizations on the wrong end of a regulator press release.

We treat AI as a regulated risk surface that deserves the same accountability model as cybersecurity: an inventory, risk classification, policy, monitoring, and evidence produced on a quarterly cadence.

The program

Five pillars, one accountable program

The program is built around five pillars that map cleanly to NIST AI RMF functions and ISO/IEC 42001 controls. Each pillar produces evidence; the evidence is the deliverable.

Pillar 01

AI Inventory & Discovery

Find every LLM, Copilot, agent, RAG pipeline, and AI-embedded vendor running across Microsoft 365, Google Workspace, SaaS, and custom deployments.

  • Shadow AI discovery across browser, network, and identity logs
  • M365 Copilot, ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace inventory
  • AI-embedded vendor mapping (Notion AI, Salesforce Einstein, Zoom AI Companion, Slack AI, and the long tail)
  • Custom LLM and RAG pipeline cataloguing across cloud workloads
  • Data-source mapping per AI system (what training data, what context, what output destinations)
Pillar 02

Risk Classification (EU AI Act tiers)

Every AI system in the inventory gets classified against the EU AI Act risk tiers and against NIST AI RMF use-case categories.

  • Unacceptable risk — systems requiring immediate decommissioning
  • High risk — clinical decision support, employment screening, credit decisioning, biometric identification
  • Limited risk — chatbots, deepfakes, emotion recognition (transparency obligations apply)
  • Minimal risk — productivity assistants and the bulk of enterprise AI
  • Documented determination per system with reviewer sign-off and revision history
Pillar 03

Policy & Control Framework

NIST AI RMF 1.0 functions mapped to ISO/IEC 42001:2023 controls, with an organizational acceptable-use policy your staff can actually read.

  • Govern, Map, Measure, Manage — NIST AI RMF function implementation
  • ISO/IEC 42001:2023 Annex A controls mapped to operational evidence
  • Acceptable use policy (AUP) covering Copilot, ChatGPT, Claude, Gemini and the long tail of embedded AI
  • Data handling rules (what classes of data go into which AI systems, with technical enforcement where available)
  • Vendor onboarding checklist for new AI tools, including BAA and DPA requirements
Pillar 04

Monitoring & Drift Detection

Continuous monitoring of AI usage, prompt-injection patterns, output anomalies, and data-leakage indicators.

  • Centralized AI usage logging across M365, Google Workspace, and integrated SaaS
  • Prompt-injection pattern detection and alerting
  • Output audit sampling for data-leakage and policy-violation indicators
  • Model drift monitoring for custom-deployed models (performance and bias)
  • Quarterly red-team review of monitoring coverage and blind spots
Pillar 05

Compliance Reporting

Quarterly evidence pack ready for regulators, customers, auditors, and the board. EU AI Act-aligned for organizations in scope.

  • EU AI Act Article 9 (risk management) and Article 12 (logging) evidence
  • ISO/IEC 42001 audit-ready control evidence
  • NIST AI RMF profile and progress reporting
  • Board-grade executive summary with material risks and remediation status
  • Customer-ready evidence pack (the AI governance equivalent of a SOC 2 report)

Add-on engagement

AI Pen-Test — adversarial testing for the AI you already run

Pen-testing for AI is a different discipline from network or application pen-testing. We run adversarial testing across the attack categories that matter for production LLM and agent deployments. The deliverable is a written report with reproduction steps, severity ratings, and prioritized remediation.

Prompt injection (direct and indirect)
Jailbreak resistance testing
Training-data and context exfiltration
Model theft and inversion
Output integrity and hallucination probing
Agent guardrail and tool-use bypass

Bookable as a fixed-fee engagement, or included annually in the highest-tier managed AI governance retainer. Methodology aligns to the OWASP Top 10 for Large Language Model Applications.

Frameworks referenced

Built on the standards your auditors expect

NIST AI RMF 1.0
NIST AI Risk Management Framework (January 2023). The U.S. baseline for trustworthy AI, organized around Govern, Map, Measure, and Manage functions.
ISO/IEC 42001:2023
International standard for AI management systems. The AI counterpart to ISO/IEC 27001 for information security.
EU AI Act
Regulation (EU) 2024/1689 of the European Parliament and Council. Risk-based regulatory framework with extraterritorial reach for AI systems placed on the EU market.
OWASP LLM Top 10
OWASP Top 10 for Large Language Model Applications. The technical reference for prompt injection, training data poisoning, and supply-chain risks.

AI governance FAQ

How does AI governance differ from cybersecurity?

Cybersecurity protects systems and data from unauthorized access, exfiltration, and disruption. AI governance addresses a different risk surface: what happens when authorized users interact with AI systems that have probabilistic outputs, opaque training, and unpredictable behavior. The two functions overlap on data-leakage prevention and vendor risk, but AI governance also covers model bias, hallucination liability, intellectual-property exposure in training and inference, and regulatory obligations like the EU AI Act. A mature program operates them as separate disciplines that share evidence and controls where it makes sense.

Is AI governance required for an SMB?

Required is a legal question that depends on jurisdiction, industry, and use case. EU AI Act obligations apply to organizations of any size that place AI systems on the EU market or whose AI outputs are used in the EU. NIST AI RMF is voluntary in the United States but is rapidly becoming the baseline for procurement, insurance, and customer-facing assurance. For SMBs operating in regulated industries (healthcare, financial, legal), the practical answer is yes, because customers and regulators expect documented AI risk management whether or not a specific statute names you. We build programs sized to the organization rather than enterprise-scale frameworks shoehorned in.

Do you handle EU AI Act compliance?

Yes. We classify systems against the EU AI Act risk tiers, implement the obligations associated with each tier, and produce the documentation the regulation expects (Article 9 risk management, Article 12 logging, Article 13 transparency, conformity assessment readiness for high-risk systems). EU AI Act enforcement is phased; we map the obligations to the dates they apply so you do not over-implement before it is necessary or under-prepare for milestones that are already in force.

What about Microsoft 365 Copilot governance?

Copilot is the highest-volume AI surface in most organizations and the one with the broadest data exposure. We configure Copilot at the tenant level (data-loss prevention, sensitivity labels, restricted SharePoint access, audit-log retention), define and enforce an acceptable use policy, and run quarterly reviews of usage patterns and exposure. Customers running Microsoft Purview AI Hub get our help operationalizing the signal it produces; customers without Purview get equivalent monitoring through other tooling. The governance pattern is the same; the tools vary with your stack.

Can you do this for a healthcare organization?

Yes, and healthcare is one of the verticals where we have the deepest pattern library. Clinical AI scribes (Abridge, Suki, DAX, Heidi and the rest), billing copilots, and AI-embedded EHR features all sit in scope under HIPAA, state PII laws, and increasingly under the EU AI Act for European operations. We negotiate AI-vendor BAAs, document data flows for ePHI exposure, and produce evidence packs that satisfy both HIPAA OCR audits and AI-specific regulatory questions. See our healthcare industry page for the bundled offering.

Is the AI Pen-Test included or a separate engagement?

AI Pen-Test is a separate engagement, billed as a fixed-fee add-on per testing window. We run adversarial testing covering prompt injection, jailbreak resistance, training-data exfiltration, model theft, output integrity, and agent guardrail bypass. The deliverable is a written report with reproduction steps, severity ratings, and remediation recommendations. Annual AI Pen-Tests are included as part of the highest-tier managed AI governance retainer.

What kinds of AI vendors are you familiar with?

We have operational experience across the major LLM providers (OpenAI, Anthropic, Google, Microsoft, Meta), the enterprise AI assistants (M365 Copilot, ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace), the AI-embedded productivity layer (Notion AI, Salesforce Einstein, Zoom AI Companion, Slack AI), and the vertical AI ecosystem (clinical scribes, contract analytics, sales intelligence, fraud detection). For custom-deployed models we operate the standard stack: AWS Bedrock, Azure OpenAI, Google Vertex AI, and self-hosted inference.

How long does the initial AI Risk Audit take?

Two to three weeks for the typical mid-market environment. The deliverable is a written report covering the full AI inventory, vendor risk assessment, policy gap analysis, NIST AI RMF and ISO 42001 mapping, an EU AI Act risk-tier classification, the top twenty prioritized risks, and an executive briefing. Larger or more complex environments take four to six weeks. The audit is fixed-fee and converts to a managed retainer with the audit fee credited toward the first quarter for customers who continue.

Do you handle EU AI Act readiness for an organization with no EU operations?

Yes, when there is a reason to. Many U.S. organizations face EU AI Act exposure indirectly: they use AI systems whose outputs flow to EU customers, they procure from vendors who are themselves in scope, or they anticipate U.S. regulation that follows the EU AI Act pattern. We build readiness programs sized to the actual exposure rather than applying the full regulatory framework where it does not yet apply.

What if we already use Microsoft Purview AI Hub or another AI-governance tool?

Tooling is the easy part. The hard part is the operational discipline that turns tool signal into evidence, decisions, and remediation. We layer our governance program on top of whatever tooling you already have, including Purview AI Hub, Google AI Hub, Cisco AI Defense, Wiz AI-SPM, and the rest of the emerging market. Customers without dedicated tooling get equivalent coverage through logs and audits in their existing security stack.

Govern your AI before it governs you

Start with a free AI Risk Score, book a 20-minute call to scope a fixed-fee audit, or request a managed AI governance engagement.