How does AI governance differ from cybersecurity?+
Cybersecurity protects systems and data from unauthorized access, exfiltration, and disruption. AI governance addresses a different risk surface: what happens when authorized users interact with AI systems that have probabilistic outputs, opaque training, and unpredictable behavior. The two functions overlap on data-leakage prevention and vendor risk, but AI governance also covers model bias, hallucination liability, intellectual-property exposure in training and inference, and regulatory obligations like the EU AI Act. A mature program operates them as separate disciplines that share evidence and controls where it makes sense.
Is AI governance required for an SMB?+
Required is a legal question that depends on jurisdiction, industry, and use case. EU AI Act obligations apply to organizations of any size that place AI systems on the EU market or whose AI outputs are used in the EU. NIST AI RMF is voluntary in the United States but is rapidly becoming the baseline for procurement, insurance, and customer-facing assurance. For SMBs operating in regulated industries (healthcare, financial, legal), the practical answer is yes, because customers and regulators expect documented AI risk management whether or not a specific statute names you. We build programs sized to the organization rather than enterprise-scale frameworks shoehorned in.
Do you handle EU AI Act compliance?+
Yes. We classify systems against the EU AI Act risk tiers, implement the obligations associated with each tier, and produce the documentation the regulation expects (Article 9 risk management, Article 12 logging, Article 13 transparency, conformity assessment readiness for high-risk systems). EU AI Act enforcement is phased; we map the obligations to the dates they apply so you do not over-implement before it is necessary or under-prepare for milestones that are already in force.
What about Microsoft 365 Copilot governance?+
Copilot is the highest-volume AI surface in most organizations and the one with the broadest data exposure. We configure Copilot at the tenant level (data-loss prevention, sensitivity labels, restricted SharePoint access, audit-log retention), define and enforce an acceptable use policy, and run quarterly reviews of usage patterns and exposure. Customers running Microsoft Purview AI Hub get our help operationalizing the signal it produces; customers without Purview get equivalent monitoring through other tooling. The governance pattern is the same; the tools vary with your stack.
Can you do this for a healthcare organization?+
Yes, and healthcare is one of the verticals where we have the deepest pattern library. Clinical AI scribes (Abridge, Suki, DAX, Heidi and the rest), billing copilots, and AI-embedded EHR features all sit in scope under HIPAA, state PII laws, and increasingly under the EU AI Act for European operations. We negotiate AI-vendor BAAs, document data flows for ePHI exposure, and produce evidence packs that satisfy both HIPAA OCR audits and AI-specific regulatory questions. See our healthcare industry page for the bundled offering.
Is the AI Pen-Test included or a separate engagement?+
AI Pen-Test is a separate engagement, billed as a fixed-fee add-on per testing window. We run adversarial testing covering prompt injection, jailbreak resistance, training-data exfiltration, model theft, output integrity, and agent guardrail bypass. The deliverable is a written report with reproduction steps, severity ratings, and remediation recommendations. Annual AI Pen-Tests are included as part of the highest-tier managed AI governance retainer.
What kinds of AI vendors are you familiar with?+
We have operational experience across the major LLM providers (OpenAI, Anthropic, Google, Microsoft, Meta), the enterprise AI assistants (M365 Copilot, ChatGPT Enterprise, Claude Enterprise, Gemini for Workspace), the AI-embedded productivity layer (Notion AI, Salesforce Einstein, Zoom AI Companion, Slack AI), and the vertical AI ecosystem (clinical scribes, contract analytics, sales intelligence, fraud detection). For custom-deployed models we operate the standard stack: AWS Bedrock, Azure OpenAI, Google Vertex AI, and self-hosted inference.
How long does the initial AI Risk Audit take?+
Two to three weeks for the typical mid-market environment. The deliverable is a written report covering the full AI inventory, vendor risk assessment, policy gap analysis, NIST AI RMF and ISO 42001 mapping, an EU AI Act risk-tier classification, the top twenty prioritized risks, and an executive briefing. Larger or more complex environments take four to six weeks. The audit is fixed-fee and converts to a managed retainer with the audit fee credited toward the first quarter for customers who continue.
Do you handle EU AI Act readiness for an organization with no EU operations?+
Yes, when there is a reason to. Many U.S. organizations face EU AI Act exposure indirectly: they use AI systems whose outputs flow to EU customers, they procure from vendors who are themselves in scope, or they anticipate U.S. regulation that follows the EU AI Act pattern. We build readiness programs sized to the actual exposure rather than applying the full regulatory framework where it does not yet apply.
What if we already use Microsoft Purview AI Hub or another AI-governance tool?+
Tooling is the easy part. The hard part is the operational discipline that turns tool signal into evidence, decisions, and remediation. We layer our governance program on top of whatever tooling you already have, including Purview AI Hub, Google AI Hub, Cisco AI Defense, Wiz AI-SPM, and the rest of the emerging market. Customers without dedicated tooling get equivalent coverage through logs and audits in their existing security stack.