
Article 2 · Policy language

AI clause decoder — what your carrier is actually excluding.

Every 2026 US cyber policy now ships with an AI exclusion endorsement. The language varies, the scope varies, and the implications vary — but the default position has flipped: AI is excluded unless you negotiate coverage back in. This piece decodes the four exclusion patterns in plain English so you can spot them on the renewal binder before signing.

For business owners reading the policy line items and brokers advising on placement strategy. No legal advice — read the specific endorsement language with your coverage counsel — but enough operator detail to ask the right questions.

By Stefan Efros, CEO & Founder, EFROS · Reviewed by Daniel Agrici, Chief Security Officer, EFROS

Anatomy of the AI exclusion

What's actually in a 2026 cyber AI clause

The 2026 AI clause typically sits as a standalone endorsement titled “Artificial Intelligence Exclusion” or “AI System Exclusion” attached to the base cyber form. The structure is consistent: a definition section (what counts as AI), a scope section (what types of losses are excluded), and sometimes a carve-back section (what is explicitly still covered).

The definition of “artificial intelligence system” varies widely. The narrowest definitions limit it to generative AI models (LLMs, image generators, code assistants). The broadest definitions sweep in any algorithmic decision system — fraud detection, recommendation engines, automated triage, spam filters, even search ranking. The breadth of the definition determines the breadth of the exclusion.

The scope section then lists the categories of loss the exclusion captures: outputs, vendor-supplied AI, automated decisions, hallucinations, and so on. Most policies use one or two of the four patterns below; some sophisticated buyers will see all four stacked in a single endorsement.

The four patterns

4 types of AI exclusions you'll see in 2026

Most carriers use one or two of these patterns. The output and vendor exclusions are the most common; the hallucination exclusion is specific to professional services policies; the decision exclusion shows up most often in industries with existing automated decision regulatory exposure (banking, hiring, insurance, healthcare).

Type 1

Output exclusion

Plain English: Any loss caused by content that an AI tool generated — text, image, audio, video, code, recommendations.

Sample policy language

"The insurer shall not be liable for any Loss arising out of, based upon, or attributable to the use of any artificial intelligence system, including but not limited to losses arising from output, content, or recommendations generated in whole or in part by such system."

Who it hits: Marketing agencies (AI-drafted ad copy that triggers FTC complaint), e-commerce shops (AI product descriptions that mislead and trigger refunds), media companies (AI-generated article that defames a subject).

What to watch for: Broadest of the four patterns. The phrase "in whole or in part" is the dangerous part — even output where a human heavily edited the AI-generated draft can be argued back into the exclusion. Push your broker to either narrow the definition or carve out specific use cases ("productivity tooling not customer-facing").

Type 2

Vendor exclusion

Plain English: Any loss caused by a third-party AI tool you used — even if you didn't build the AI and didn't control its behavior.

Sample policy language

"This policy does not cover Loss arising from any third-party artificial intelligence product, service, model, or API used by the Insured, including but not limited to outputs, decisions, or actions taken by such third-party system."

Who it hits: Anyone using OpenAI, Anthropic, Google, Microsoft, or Meta AI APIs — which is roughly everyone now. A SaaS product that embeds AI features inherits the AI exclusion of the underlying provider.

What to watch for: If your business uses ChatGPT Enterprise, Microsoft 365 Copilot, Google Workspace Gemini, Claude for Work, or any AI tool from an external vendor, this exclusion arguably nullifies coverage for any loss involving that tool. Push for a carve-out for enterprise-licensed AI products under a BAA or DPA (business associate agreement or data processing agreement) — the vendor-managed governance reduces carrier risk and is a defensible distinction.

Type 3

Decision exclusion

Plain English: Any loss caused by a business decision driven by an AI system — pricing, hiring, lending, claims adjudication, recommendation.

Sample policy language

"The insurer shall not be liable for any Loss arising from any decision, determination, or recommendation made by, or substantially informed by, an automated decision-making system or artificial intelligence model."

Who it hits: Anyone deploying AI in operational decisions — automated price adjustments triggering FTC inquiry, AI-driven hiring screening triggering EEOC complaint, AI-assisted credit decisions triggering fair lending audit, AI claims adjudication triggering state insurance department review.

What to watch for: The "substantially informed by" language is the dangerous part. Carriers can argue that any decision where AI provided input is excluded — even if a human made the final call. Push to replace "substantially informed by" with language limiting the exclusion to decisions made solely by the AI system. The narrower language preserves coverage for human-in-the-loop workflows.

Type 4

Hallucination exclusion (legal/professional services)

Plain English: Any loss caused by AI generating false, fabricated, or misleading content — specifically in legal, accounting, medical, financial advisory, and other professional services contexts.

Sample policy language

"This policy does not cover Loss arising from any inaccurate, fabricated, false, or hallucinated content generated by an artificial intelligence system, including but not limited to false citations, fabricated case law, inaccurate medical recommendations, or erroneous financial advice."

Who it hits: Law firms (the Mata v. Avianca fact pattern — AI fabricates case citations and the brief gets filed), accounting firms (AI generates incorrect tax position memos), medical practices (AI scribe records a diagnosis the clinician didn't make), financial advisors (AI summary misrepresents portfolio risk).

What to watch for: Often appears as a separate endorsement on professional liability (E&O) policies as well as cyber. The exclusion is typically absolute — no carve-out for human review — which means the only mitigation is procedural (mandatory human verification before publication). Document the verification workflow and make it auditable; some carriers will narrow the exclusion if you can demonstrate a controls program.
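
That procedural mitigation can be made concrete. Below is a minimal, hypothetical Python sketch of the kind of tamper-evident sign-off log a verification workflow might produce; the file name, record fields, and release gate are illustrative assumptions, not anything a carrier or regulator prescribes.

```python
# Hypothetical sketch: an append-only human-verification log (JSON lines).
# Everything here is illustrative, not a carrier requirement.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "verification_log.jsonl"  # assumed location

def record_verification(ai_draft: str, final_text: str,
                        reviewer: str, approved: bool) -> dict:
    """Append one review record. Hashes tie the record to the exact AI
    draft and the exact text a named human approved for release."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "ai_draft_sha256": hashlib.sha256(ai_draft.encode()).hexdigest(),
        "final_text_sha256": hashlib.sha256(final_text.encode()).hexdigest(),
        "approved": approved,
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

def publish(final_text: str, entry: dict) -> None:
    """Release gate: refuse anything without a matching approved record."""
    current_hash = hashlib.sha256(final_text.encode()).hexdigest()
    if not entry["approved"] or entry["final_text_sha256"] != current_hash:
        raise RuntimeError("No matching human sign-off; do not publish")
    print("Published with verification record:", entry["timestamp"])

# Example: a lawyer signs off on an AI-drafted paragraph before filing.
draft = "AI-drafted summary of case law ..."
reviewed = "Human-corrected summary of case law ..."
rec = record_verification(draft, reviewed,
                          reviewer="jdoe@firm.example", approved=True)
publish(reviewed, rec)
```

The point is the evidence trail: hashed records tie a named reviewer to the exact text that went out the door, which is the kind of artifact an underwriter can actually audit.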

The carve-back

What's typically NOT excluded

The AI exclusion is broad in language but the carrier intent is usually narrower. Most carriers do not want to exclude standard cyber events that happen to involve AI tangentially. The following loss types are typically still covered even with an AI exclusion in place — though specific policy language always wins over general principle, so read the endorsement.

  • Cyber breach where AI was used as productivity tooling but not the breach vector itself (e.g., the developer used Copilot to write code, the breach came from a misconfigured S3 bucket — covered as a standard cyber loss).
  • Phishing, credential theft, ransomware deployment, business email compromise, wire fraud — these are standard cyber perils regardless of whether AI was involved in the threat actor's tradecraft.
  • Loss of customer data through a vendor breach where AI was incidentally present in the vendor's stack but not the proximate cause of the breach.
  • Insider threat, employee misconduct, accidental data exposure — these are standard cyber perils regardless of AI involvement.
  • AI used in internal security operations (the EDR or SOC tool runs on AI) — carriers generally view defensive AI as posture-enhancing, not loss-causing.

Negotiation playbook

How to negotiate AI clause language with your broker

Four moves that work in 2026. The market is competitive enough that clean accounts (low prior-incident history, mature controls) have real leverage on endorsement language — especially with mid-market cyber specialists like Coalition, Resilience, At-Bay, and Cowbell, which compete partly on endorsement flexibility.

Move 1

Ask for the AI exclusion definition section in writing

Many policies cross-reference "artificial intelligence system" without defining it. Get the definition. "Artificial intelligence" can mean an LLM chatbot, a recommendation algorithm, a fraud detection model, a spam filter, or a search ranking system depending on interpretation. That definition is the difference between a narrow exclusion and a sweeping one.

Move 2

Push to narrow "in whole or in part" to "primarily caused by"

The standard exclusion language often reads "in whole or in part by," which captures every loss where AI was anywhere in the workflow. Pushing the language to "primarily caused by" or "materially caused by" narrows the exclusion to losses where AI was the actual proximate cause.

Move 3

Negotiate a carve-back for human-in-the-loop workflows

If you can document that a human reviews AI output before customer-facing use, ask for a specific carve-back to the AI exclusion for these workflows. Carriers will often agree because the human-review step materially reduces the loss exposure.

Move 4

Ask whether sub-limited AI coverage is available as an endorsement

Some carriers (Coalition, Resilience, At-Bay) now offer affirmative AI coverage as a paid endorsement with a sub-limit — typically $100k-$500k for AI-specific losses. If your business has material AI exposure, the sub-limited affirmative coverage is worth the additional premium versus walking in unprotected.

Before signing

3 questions to ask the carrier in writing before binding

These three questions, asked in writing through your broker, produce documentation that materially helps if the AI exclusion is ever invoked in a claim. The carrier's written answer becomes part of the binder file.

Question 1

How does this policy define 'artificial intelligence system' — is the definition limited to generative AI, or does it include any algorithmic decision system?

Why it matters: The definition is often broader than owners assume. A spam filter, a fraud detection model, or a recommendation engine can fall under a broadly written AI definition. Get the definition section explicitly cited in the binder.

Question 2

If a third-party SaaS product (M365 Copilot, Salesforce Einstein, etc.) embeds an AI feature we don't actively use, does the AI exclusion still apply to losses involving that product?

Why it matters: Critical for businesses where AI is embedded but not the primary use case. The answer determines whether the exclusion captures every SaaS product in your stack (most of them ship AI features now) or only products where AI is the load-bearing feature.

Question 3

If our staff uses an AI tool against policy — shadow AI — and that causes a loss, does the AI exclusion apply, or is unauthorized use considered a separate insured event?

Why it matters: Shadow AI is the most common real-world AI loss scenario. If the answer is "AI exclusion applies regardless of authorization," the policy is materially weaker than one where unauthorized AI use falls back into general cyber coverage.

FAQ

AI exclusion — the questions owners ask

Does the AI exclusion apply if we don't actively use AI in our business?

It can — depending on how broadly "artificial intelligence system" is defined. If you use Microsoft 365 (which now bundles Copilot in many SKUs), Google Workspace (which includes Gemini), Slack (which embeds Slack AI), or any modern SaaS product, you almost certainly have AI in your stack whether you actively use it or not. Get the carrier to define the term in writing.

Can I just remove AI from our business to avoid the exclusion?

Practically no. AI is now embedded in your productivity stack, your SaaS vendors, your search tools, your email filters, and your security tooling. The honest answer is to assume the AI exclusion applies somewhere and negotiate its scope rather than pretend you can avoid it.

Our broker says the AI exclusion is 'standard' and not negotiable. Is that true?

In 2024 the AI exclusion was new and largely non-negotiable. In 2026 it's standard, but the language varies significantly across carriers, and many of the more thoughtful underwriters will narrow it for a clean account that asks. If your broker says the language is firm, ask which markets they shopped — you may get a different answer from a different carrier on the same submission.

If we have affirmative AI coverage as a sub-limited endorsement, what does that actually cover?

Typically a sub-limited bucket ($100k-$500k) for AI-specific losses that would otherwise be excluded — output errors, AI hallucination claims, AI-driven decision losses. The sub-limit sits inside the overall policy limit but is the only money available for AI claims. Useful for a business with real AI exposure; less relevant for a business where AI is incidental.
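
To make the sub-limit mechanics concrete, here is a small Python sketch with illustrative numbers; the $2M limit, $250k sub-limit, and single-claim simplification (no erosion across multiple claims, no retention, no defense-cost rules) are assumptions, not terms from any real policy.

```python
# Illustrative sketch of how a sub-limit caps recovery on a single claim.
def recoverable(loss: float, is_ai_claim: bool,
                policy_limit: float = 2_000_000,
                ai_sublimit: float = 250_000) -> float:
    """AI-specific claims can only draw on the sub-limit; all other
    covered claims draw on the full policy limit."""
    cap = ai_sublimit if is_ai_claim else policy_limit
    return min(loss, cap)

# A $400k hallucination claim recovers only the $250k sub-limit...
print(recoverable(400_000, is_ai_claim=True))   # 250000.0 ceiling applies
# ...while the same $400k loss as a standard cyber claim recovers in full.
print(recoverable(400_000, is_ai_claim=False))  # 400000
```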

Does the AI exclusion affect Errors & Omissions (E&O) coverage too, or just cyber?

Increasingly both. The hallucination exclusion specifically is now common on professional liability policies for legal, accounting, medical, and financial advisory practices. The cyber AI exclusion and the E&O AI exclusion are usually written differently and need to be reviewed separately — don't assume the cyber broker's answer covers the E&O policy.

Three ways to get ready before renewal

Quantify exposure, benchmark your AI governance posture in five minutes, or book a 20-minute call to review the AI clause language on the renewal binder in front of you.