
Resource · AI Governance for Law Firms

AI Governance for law firms — privilege-preserving adoption of Harvey, CoCounsel, Lexis+AI, and Copilot.

US law firms face a structural AI problem that healthcare and banking do not: a single prompt sent to a vendor model that retains data, or trains on it, can arguably waive attorney-client privilege for that matter — and once waived, the work product doctrine does not always close the gap. This page maps ABA Formal Opinion 512, the seven state bar opinions that actually bind your jurisdiction, the Mata v. Avianca sanctions wave, the legal AI vendor matrix (15 vendors), and the 90-day governance runbook EFROS operates for law firms.

Law firms can write a memo on AI risk. The MSSP runs the controls. EFROS operates the AI governance program — inventory, policy, vendor verification, identity-layer enforcement, M365 Copilot matter-wall hardening, prompt logging, citation verification, court-order tracking, and partnership-grade audit reporting — under one accountable SLA.

By Stefan Efros, CEO & Founder, EFROS · Reviewed by Daniel Agrici, Chief Security Officer, EFROS

Why law firms are structurally different

Why law firms need a different AI governance posture than healthcare or banking

Healthcare AI risk is governed by HIPAA and HHS-OCR Section 1557. Banking AI risk is governed by GLBA, FFIEC interagency guidance, and Federal Reserve SR 11-7 model risk management guidance (adopted by the OCC as Bulletin 2011-12). In both sectors, the regulatory framework is procedural — document the controls, run the audits, retain the artifacts.

Law firm AI risk is different because the underlying protection — attorney-client privilege — is self-executing and brittle. Privilege attaches automatically to confidential communications between attorney and client for the purpose of legal advice; it can be waived just as automatically by an inadvertent third-party disclosure. A single attorney pasting a draft brief into the consumer ChatGPT site can arguably constitute disclosure to a third party (OpenAI), and a court could later find that the firm waived privilege as to that matter's content.

Work product doctrine under Federal Rule of Civil Procedure 26(b)(3) provides a secondary protection for materials prepared in anticipation of litigation. But work product protection is qualified, not absolute — it yields to a showing of substantial need. Where attorney-client privilege has been waived as to underlying content, work product alone is a thinner defense than most firms assume.

Federal Rule of Evidence 502(b) provides a path to limit the scope of inadvertent disclosure — but only if the holder took reasonable steps to prevent and rectify the disclosure. A firm with no AI policy, no identity-layer block list, and no attorney training will struggle to argue its steps were reasonable. The 90-day runbook on this page is, in part, the documentary record that supports a Rule 502(b) argument if it is ever needed.

The structural point: AI governance in a law firm is not just compliance theater. The governance program is the only thing standing between the firm and a privilege waiver argument from opposing counsel. That is why this is partnership-level work, not an IT project.

ABA Formal Opinion 512 (July 2024)

ABA Formal Opinion 512 explained — five duties, in plain English

The American Bar Association issued Formal Opinion 512 in July 2024 — the headline ABA opinion on lawyer use of generative AI. The opinion maps generative AI use against five core Model Rules. Each is summarized below in operational terms.

Rule 1.1 — Competence

Lawyers have a duty to understand the benefits and risks of the technology they use, including generative AI. ABA Op 512 explicitly extends the technology-competence comment to AI tools: it is not enough to use Harvey or CoCounsel — the supervising lawyer must understand how the model produces output, what data is retained, and where hallucinations are likely.

Rule 1.6 — Confidentiality

Inputting client information into a self-learning generative AI tool that retains prompts or uses them for training risks unauthorized disclosure. ABA Op 512 requires that lawyers obtain client informed consent before inputting client information into a generative AI tool that does not adequately protect confidentiality — including consumer-tier ChatGPT and consumer-tier Claude.

Rules 1.4 and 1.5 — Communication and Fees

Material use of AI on a client's matter may need to be disclosed under Rule 1.4 (the duty to keep the client reasonably informed). Under Rule 1.5, a lawyer cannot bill the client for time the AI did instead of the lawyer — billing two hours of associate time for a brief that Harvey drafted in eight minutes is a fee-rule violation. Fees must be reasonable and reflect what was actually performed.

Rules 5.1 and 5.3 — Supervision

Partners and supervising lawyers must put in place reasonable measures to ensure that subordinate lawyers (5.1) and non-lawyer assistants (5.3) use AI tools in a manner consistent with the firm's professional obligations. ABA Op 512 treats AI like any other non-lawyer assistant — the supervising lawyer is on the hook for hallucinations and confidentiality breaches by associates and paralegals.

Rule 3.3 and Rule 8.4(c) — Candor and Dishonesty

Filing a brief containing fabricated AI-generated case citations violates Rule 3.3 (candor to tribunal) and may violate Rule 8.4(c) (conduct involving dishonesty). This is the Mata v. Avianca problem. Op 512 makes verification mandatory — every citation, every quote, every factual assertion produced by an AI tool must be independently confirmed before filing.

State bar AI guidance

State bar AI guidance that's actually binding in your jurisdiction

ABA Op 512 is influential but is not binding by itself — each state bar enforces its own rules. Seven jurisdictions have issued material AI guidance to date. The table below summarizes each. Track the rule in every state where the firm has attorneys admitted; the supervising lawyer's home jurisdiction is not the only one that matters.

New York

NYSBA Op #1224 (April 2024) + NYSBA Task Force on AI Report

Lawyers may use generative AI on client matters subject to Rules 1.1, 1.6, 5.1, 5.3, and 3.3. Confidentiality opinion: a lawyer must not input client confidential information into a self-learning AI tool without informed consent. Verification of AI-generated citations and assertions is required prior to use.

California

State Bar of California Practical Guidance for Use of Generative AI in the Practice of Law (Nov 2023)

Comprehensive guidance covering confidentiality, competence, supervision, fees, and candor. Distinguishes between AI tools that retain and learn from inputs (higher risk) and those that do not. Specifically flags that consumer-tier tools without enterprise data-handling guarantees are generally not appropriate for client-confidential inputs.

Florida

Florida Bar Ethics Op 24-1 (January 2024)

Florida lawyers may ethically use generative AI subject to confidentiality, oversight, billing, and lawyer-advertising rules. Op 24-1 specifically addresses generative AI in client-facing chatbots and document automation — informed consent and AI-output verification are required.

District of Columbia

DC Bar Ethics Op 388 (April 2024)

DC lawyers using generative AI must understand the technology's risks, maintain client confidentiality, supervise AI use across the firm, and verify AI output. Op 388 specifically addresses cross-matter and conflict-of-interest exposure when AI is trained or fine-tuned on internal firm data.

Texas

State Bar of Texas Task Force for Responsible AI in the Law (2024 Report)

Task force guidance on competence, confidentiality, and supervision of AI use. Texas applies its own Rules 1.01 (competence) and 1.05 (confidentiality of client information) to AI inputs and outputs — note that Texas has its own confidentiality rule wording that is broader than ABA Model Rule 1.6 in important respects.

Illinois

Illinois ARDC and Illinois State Bar Association AI Guidance (2024)

Illinois practitioners must comply with IRPC 1.1, 1.6, 5.1, 5.3, and 3.3 when using generative AI. Particular emphasis on verification of citations and on supervisory responsibility — Illinois has been an active sanctions jurisdiction for hallucinated-citation filings.

New Jersey

NJ Supreme Court Notice to the Bar — Preliminary Guidelines on the Use of AI by NJ Lawyers (January 2024)

The NJ Supreme Court issued preliminary guidelines applying RPC 1.1, 1.6, 5.1, 5.3, 3.3, and 8.4 to lawyer AI use. NJ requires that lawyers verify AI output, maintain client confidentiality, supervise non-lawyer and AI-tool use, and disclose AI use to courts and clients where material.

Source links: nysba.org, calbar.ca.gov, floridabar.org, dcbar.org. State bar opinions are updated periodically — verify current guidance for your jurisdiction before relying on the summaries above.

The hallucinated-citation sanctions wave

Mata v. Avianca and what came after

The June 2023 Mata v. Avianca order is the foundational US lawyer-AI accountability document. Since Mata, courts have extended the principle across tools and jurisdictions. The throughline is simple: verification failure is sanctionable regardless of the tool used.

Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y., June 22, 2023)

The original ChatGPT-hallucinated-citation case. Two attorneys at Levidow, Levidow & Oberman submitted a personal-injury opposition brief containing six fabricated case citations generated by ChatGPT. Judge Castel imposed $5,000 in sanctions, required notification of judges in the fabricated cases, and the attorneys faced state bar discipline. The opinion is the foundational document on lawyer AI accountability.

Park v. Kim, 91 F.4th 610 (2d Cir. 2024)

The Second Circuit referred an attorney to its Grievance Panel for citing a non-existent case generated by ChatGPT in an appellate brief. The court held that attorneys remain responsible for verifying every authority cited, regardless of how it was generated.

United States v. Cohen (S.D.N.Y. 2023-2024)

Michael Cohen's attorney filed a motion containing fake cases generated by Google Bard — citations Cohen himself had produced and passed to counsel. The court ordered the attorney to show cause why he should not be sanctioned; the judge ultimately declined sanctions, finding carelessness rather than bad faith. The case extends the Mata problem to non-ChatGPT tools — every general-purpose generative AI presents the same hallucination risk.

Damon v. Coinbase (and the wave of post-Mata sanctions, 2023-2025)

Dozens of federal-court orders since Mata have imposed sanctions, attorney-fees awards, and disciplinary referrals for AI-hallucinated filings. The throughline: courts treat verification failure as a candor-to-tribunal violation regardless of whether the attorney intended the misstatement.

Standing orders requiring AI disclosure

Multiple federal judges (notably in N.D. Tex., E.D. Pa., and the D.C. Circuit) have entered standing orders requiring counsel to certify whether AI was used to prepare any portion of a filing, and to verify any AI-generated content. Firms must track these orders by judge — not just by district — and update their filing checklists accordingly.

Privilege-preserving AI governance is one workstream inside the broader EFROS legal IT and cybersecurity service — 24/7 SOC, matter-data segregation, e-discovery readiness, wire-transfer-fraud defense, cyber-insurance-aligned controls.

See our full Legal IT & Cybersecurity service →

Legal AI vendor BAA-equivalent matrix

The legal AI vendor matrix — what's safe under enterprise terms, what's never safe

Curated matrix of the AI tools US law firms most commonly evaluate or deploy. The "BAA-equivalent" framing is borrowed from healthcare — what matters is whether the vendor's enterprise contract provides confidentiality protections strong enough to preserve the firm's Rule 1.6 obligation. Consumer-tier products do not.

Harvey

Legal research, drafting, due diligence
Data retention
Enterprise-only. No training on customer data per Harvey's terms.
Privilege-safe
Yes — under enterprise contract with explicit no-training clause and SOC 2 documentation.

Verify the data-processing addendum and confirm that customer prompts are not used to fine-tune the underlying model. Harvey runs on top of OpenAI infrastructure — the OpenAI enterprise data-handling terms flow through.

Thomson Reuters CoCounsel (Casetext)

Legal research, document review, drafting, deposition prep
Data retention
Enterprise. Documented no-training-on-customer-data posture.
Privilege-safe
Yes — under enterprise contract with Thomson Reuters legal-vendor terms.

CoCounsel runs Skills (Review Documents, Prepare for Deposition, Draft Correspondence) on top of GPT-4-class models. Confirm vendor-of-vendor flow-through for OpenAI infrastructure and that audit logs are retained per the firm's records-retention policy.

Lexis+AI

Legal research, drafting, summarization
Data retention
Enterprise. RELX/LexisNexis legal-vendor data-handling terms.
Privilege-safe
Yes — under enterprise contract.

Lexis+AI is grounded in LexisNexis's editorial content. Verify the configuration that limits the model to Lexis-indexed sources for case-citation tasks — that grounding is the primary defense against hallucinations on this platform.

Westlaw Precision AI / CoCounsel on Westlaw

Legal research, KeyCite-grounded drafting
Data retention
Enterprise. Thomson Reuters legal-vendor terms.
Privilege-safe
Yes — under enterprise contract.

Same underlying data-handling posture as CoCounsel. Confirm that KeyCite grounding is enabled for any task that touches case-citation accuracy.

Spellbook

Contract drafting and markup in Microsoft Word
Data retention
Enterprise. Documented no-training posture.
Privilege-safe
Yes — under enterprise contract.

Spellbook plugs into Word and sees contract content. Confirm that any clause libraries, redlines, and prompts the firm uploads are kept within the firm's tenant and are not used to improve the vendor model.

Ironclad AI Assist

CLM workflow automation, clause extraction, redlining
Data retention
Enterprise. Ironclad data-handling terms.
Privilege-safe
Yes — under enterprise contract for in-house legal departments.

Used heavily on the in-house side. Confirm that AI Assist outputs are reviewed by counsel before contract execution — Ironclad's AI is positioned as drafting assistance, not autonomous execution.

Diligen / Kira / eBrevia / Litera Foundation AI

Contract review, due diligence, M&A document analysis
Data retention
Enterprise. Vendor-specific terms — verify case-by-case.
Privilege-safe
Yes — under enterprise contract with no-training clause confirmed.

Long-established due-diligence AI tools. Confirm data-residency posture (on-premise vs. vendor cloud) and that training-data clauses do not allow vendor improvement on customer documents.

ChatGPT Enterprise / ChatGPT Team

General drafting, summarization, research
Data retention
Enterprise / Team. No training on customer data per OpenAI terms.
Privilege-safe
Yes — Enterprise tier only, with admin-enforced data controls.

ChatGPT Enterprise is the only OpenAI tier suitable for client work. Configure SCIM / SSO, data-retention controls, and disable Memory for legal users. Document the IT-deployment posture as part of ABA Op 512 supervision evidence.

ChatGPT consumer (Plus / Free)

Personal use only
Data retention
Inputs may be used by OpenAI to improve models unless opted out.
Privilege-safe
NEVER for client-confidential content.

Block at the identity layer. Inputs to consumer ChatGPT can become part of OpenAI's training data, and the operator cannot guarantee that the conversation log will not surface to OpenAI staff during abuse review. Treat any associate use as a Rule 1.6 incident.

Claude Enterprise (Anthropic)

Drafting, summarization, research, code-assist
Data retention
Enterprise. No training on customer data per Anthropic enterprise terms.
Privilege-safe
Yes — Enterprise tier only.

Anthropic's enterprise tier provides administrative controls, SSO, and documented data-handling. As with ChatGPT Enterprise, the consumer Pro and Free tiers are not appropriate for client-confidential work.

Claude consumer (Pro / Free)

Personal use only
Data retention
Inputs may be used to improve products under consumer terms.
Privilege-safe
NEVER for client-confidential content.

Block at the identity layer. Same reasoning as consumer ChatGPT — without the enterprise data-handling addendum the firm cannot meet Rule 1.6's confidentiality obligation.

Microsoft 365 Copilot

Drafting in Word, Outlook summarization, Teams recap, Excel analysis
Data retention
Enterprise. Microsoft tenant boundary; no cross-tenant training.
Privilege-safe
Yes — under Microsoft 365 E3/E5 with Copilot license — IF SharePoint and matter-wall permissions are configured correctly.

The risk is not vendor data-handling — it is internal over-permissive SharePoint that lets Copilot index across matter walls. Run a Restricted SharePoint Search rollout, Copilot DLP, and a per-matter access audit before broad enablement. This is where most firms fail.

Google Gemini for Workspace

Drafting in Docs, Gmail summarization, Sheets analysis
Data retention
Enterprise. Google Workspace data boundary; no model-training on customer data.
Privilege-safe
Yes — under Google Workspace Enterprise with Gemini license.

Same internal-permissions concern as Copilot. Workspace's Shared Drive permission model has different failure modes than SharePoint — audit drive-sharing posture before enabling Gemini broadly.

Notion AI

Notes summarization, document drafting in Notion
Data retention
Notion enterprise terms; verify case-by-case.
Privilege-safe
Only with executed enterprise terms AND no client-confidential content in Notion workspaces.

Notion is not where client matter content should live for most firms. If Notion AI is enabled, scope it to non-privileged internal content (firm operations, marketing, recruiting) and block it for any workspace containing matter notes.

Otter.ai / Fireflies / Read.ai (voice transcription)

Meeting transcription, client-call recording
Data retention
Free/Pro tiers send transcripts to vendor training. Business tiers vary.
Privilege-safe
Only on enterprise tier with confirmed no-training clause AND two-party consent under state law.

Two-party-consent states (CA, FL, IL, MD, MA, MT, NV, NH, PA, WA) require all participants' consent to record. Recording a client call without that consent — and routing the audio to a vendor that trains on it — is both a confidentiality and a state criminal-law exposure.

Vendor contract terms change — verify current data-handling clauses with each vendor before relying on this matrix for procurement decisions. EFROS maintains an internal live vendor matrix updated quarterly as part of the law firm AI Governance retainer.

Eight ways AI breaks privilege in a typical mid-size firm

Where AI breaks privilege in a typical mid-size firm

The failure modes EFROS sees most often in law firm AI governance audits. Most are below the surface — they are not caught by IT-asset inventory alone because the consumer-tier tools live in attorneys' personal accounts.

Associate runs draft brief through consumer ChatGPT at 11 PM

The most common failure mode. Junior attorney pastes a draft motion or memo into the consumer ChatGPT site to clean it up. The brief contains client facts, opposing-party identities, and theory of the case. Once submitted, the content can be used to train OpenAI's models and is reviewable by OpenAI staff during abuse-review processes. Likely Rule 1.6 violation; arguably privilege waiver as to that content.

Paralegal uses Notion AI on case notes

A paralegal organizing matter notes turns on Notion AI to summarize a witness-interview file. If the Notion workspace is on consumer terms, the witness-interview content is processed under Notion's consumer data-handling, not the firm's BAA-equivalent enterprise contract. The supervising lawyer is on the hook under Rule 5.3.

Microsoft 365 Copilot indexes across matter walls

Copilot inherits the user's SharePoint and OneDrive permissions. If the partner running the M&A practice has been granted broad firm-wide read access to the document management system, Copilot will surface content from a matter the partner is not staffed on — including a matter where representing the new prospect would create a conflict. Copilot's prompt-completion becomes the conflict-discovery vector.

Voice transcription on client call without enterprise terms

An attorney uses Otter, Fireflies, or Read.ai on the consumer / free tier to transcribe a client call. The vendor uses the transcript to train its model. The opposing party later subpoenas the vendor in a discovery dispute. The vendor's terms are now central to whether the call retains privilege protection.

AI drafting assistant retains prompts in vendor training pipeline

A non-enterprise legal-tech tool the firm onboarded without IT review accepts attorney prompts containing client facts and routes them into the vendor's model-training data. Even after the firm terminates the contract, the model retains those prompts. This is the vendor-of-vendor problem — the firm has to track flow-through terms to every sub-processor.

Cross-matter prompt-context leakage

An attorney working on Matter A asks an AI assistant a question that pulls retrieval context from Matter B (a different client). The model's response includes content from Matter B in the answer to Matter A's question. This breaks both matter walls and creates an arguable Rule 1.9 conflict-of-interest exposure.

Conflict-of-interest exposure via firm-wide AI search

A firm-wide AI search tool (built on top of iManage, NetDocuments, or Relativity) indexes all matters and lets any attorney search semantically. An associate searching 'representation of [Company X] in tax matter' surfaces fragments from an unrelated litigation matter where the firm represents Company X's adversary. The search itself can be a conflict trigger.

Departing associate exports AI prompts containing privileged content

When an associate departs, the firm typically reviews exported email and document copies. Few firms review the associate's AI chat history. A departing associate with personal-tier access to ChatGPT or Claude may have months of prompt history containing client matter content — content that now travels to the new firm or to a competing role.

90-day law firm AI governance runbook

The 90-day law firm AI governance runbook

Twelve tasks across three phases — Inventory & Policy, Vendor & Tooling, Operate & Audit. Each task names the owner, the work, and the evidence artifact. This is the runbook EFROS operates for law firm clients on the AI Governance retainer.

Task 1 · Phase 1 — Inventory & Policy · Weeks 1-4

AI inventory across every platform attorneys touch

Owner: GC + IT, sponsored by Managing Partner

Map every AI tool in the firm: practice-specific (Harvey, CoCounsel, Lexis+AI, Westlaw Precision AI, Spellbook, Diligen, Kira), general productivity (M365 Copilot, Gemini for Workspace, Notion AI, ChatGPT Enterprise, Claude Enterprise), transcription (Otter, Fireflies), and CLM (Ironclad). Survey attorneys for personal-account use of consumer ChatGPT or Claude. Output: AI Inventory Register, classification of in-scope tools, list of consumer-tier tools to block at the identity layer.

Evidence artifact: AI Inventory Register + identity-layer block list
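
For firms that want the register in machine-readable form, the sketch below shows one way the inventory and its derived block list could be structured. The schema, tier labels, and example rows are illustrative assumptions, not a prescribed EFROS format.

```python
# Hypothetical shape for an AI Inventory Register entry. Field names, tier
# labels, and the example rows are illustrative, not a prescribed schema.
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    ENTERPRISE = "enterprise"  # contracted, no-training terms verified
    CONSUMER = "consumer"      # block at the identity layer
    UNKNOWN = "unknown"        # surfaced by the attorney survey, pending review

@dataclass
class InventoryEntry:
    tool: str
    category: str              # e.g. "legal research", "transcription"
    tier: Tier
    dpa_on_file: bool = False  # data-processing agreement executed?

register = [
    InventoryEntry("Harvey", "legal research / drafting", Tier.ENTERPRISE, dpa_on_file=True),
    InventoryEntry("ChatGPT (personal account)", "general drafting", Tier.CONSUMER),
    InventoryEntry("Otter.ai (free)", "transcription", Tier.UNKNOWN),
]

# Anything not on enterprise terms feeds the identity-layer block list.
block_list = [e.tool for e in register if e.tier is not Tier.ENTERPRISE]
print(block_list)
```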

Task 2 · Phase 1 — Inventory & Policy · Weeks 1-4

Firm AI Use Policy ratified by partnership

Owner: Managing Partner + GC

Draft, circulate, and ratify a Firm AI Use Policy that covers: approved tools by tier, prohibited tools, prompt-content rules (what may and may not be inputted), client-disclosure obligations, billing rules for AI-assisted work, mandatory verification of citations and factual assertions, and disciplinary consequences for policy violations. The partnership has to actually adopt it — not just IT.

Evidence artifact: Ratified Firm AI Use Policy + signed acknowledgement from every attorney and staff member

Task 3 · Phase 1 — Inventory & Policy · Weeks 1-4

Practice-group AI use addenda

Owner: Practice Group Leaders (Litigation, Corporate, IP, Tax)

Each practice group adds a practice-specific addendum to the firm policy: litigation addresses court standing orders on AI disclosure; corporate addresses due-diligence AI in M&A; IP addresses prior-art search AI and patent-drafting tools; tax addresses tax-research AI. Practice group leader owns the addendum and refreshes it quarterly.

Evidence artifact: Per-practice-group AI addenda, signed by group leaders

Task 4 · Phase 1 — Inventory & Policy · Weeks 1-4

Engagement-letter language for AI use disclosure

Owner: GC + Risk Management Committee

Update engagement letter templates to disclose AI use where material. The disclosure should describe the categories of AI tools used (drafting, research, document review), confirm that confidentiality is preserved under enterprise-tier vendor terms, and confirm that AI output is reviewed by an attorney before delivery. Coordinate with the firm's malpractice carrier — the carrier may have specific wording requirements.

Evidence artifact: Updated engagement letter templates + malpractice carrier sign-off

Task 5 · Phase 2 — Vendor & Tooling · Weeks 5-8

Vendor BAA-equivalent verification

Owner: IT + GC

Execute or verify a written data-processing agreement with every enterprise AI vendor covering: no training on firm prompts/outputs, data residency (US-only or specified jurisdictions), sub-processor disclosure, breach-notification obligations, retention and deletion on contract termination, and audit rights. For Microsoft 365 Copilot and Google Gemini for Workspace, the existing enterprise agreement generally suffices — for Harvey, CoCounsel, Lexis+AI, Spellbook, and Diligen, the AI-specific addendum is separate.

Evidence artifact: Signed AI-vendor DPA matrix + vendor data-handling summary per tool
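
A simple way to keep the DPA matrix honest is to treat the clause categories above as a checklist and diff each vendor contract against it. The sketch below is a minimal illustration; the vendor names are placeholders and the clause keys simply mirror the list in the task description.

```python
# Illustrative DPA-clause checklist. Clause keys mirror the task description;
# the vendor entries are placeholders, not statements about real contracts.
REQUIRED_CLAUSES = {
    "no_training_on_customer_data",
    "data_residency_specified",
    "subprocessor_disclosure",
    "breach_notification",
    "deletion_on_termination",
    "audit_rights",
}

vendor_dpas = {
    "ExampleVendorA": {"no_training_on_customer_data", "breach_notification"},
    "ExampleVendorB": set(REQUIRED_CLAUSES),  # fully covered
}

for vendor, clauses in sorted(vendor_dpas.items()):
    missing = REQUIRED_CLAUSES - clauses
    print(f"{vendor}: {'OK' if not missing else 'GAPS: ' + ', '.join(sorted(missing))}")
```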

Task 6 · Phase 2 — Vendor & Tooling · Weeks 5-8

Identity-layer block list and enterprise-tier enforcement

Owner: IT, supervised by GC

At the identity provider (Entra ID, Okta, or equivalent) block consumer ChatGPT, consumer Claude, consumer Gemini, free Otter, free Fireflies, and any AI tool not on the approved list. Enforce SSO into the enterprise tiers of all approved tools. Block install of unapproved browser extensions that route prompts to vendor services.

Evidence artifact: Identity-layer block list configuration + monthly compliance report
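
As an illustration of the block list as a reviewable artifact, the sketch below renders consumer-endpoint domains as generic deny rules. The domains and rule syntax are assumptions; real enforcement is specific to the identity provider or secure web gateway, and enterprise tiers that share a domain with the consumer product (ChatGPT Enterprise, for example) are typically carved out via SSO and tenant restrictions rather than domain blocks.

```python
# Illustrative consumer-endpoint block list rendered as generic deny rules.
# Domains and rule syntax are assumptions; real enforcement is specific to
# the IdP or secure web gateway, and enterprise tiers that share a domain
# with the consumer product need SSO/tenant carve-outs instead of DNS blocks.
BLOCKED_DOMAINS = [
    "claude.ai",          # consumer Claude
    "gemini.google.com",  # consumer Gemini
    "otter.ai",           # free-tier transcription
    "fireflies.ai",       # free-tier transcription
]

def to_deny_rules(domains: list[str]) -> str:
    """Render one deny rule per domain in a generic gateway syntax."""
    return "\n".join(f"deny .{d}" for d in sorted(domains))

print(to_deny_rules(BLOCKED_DOMAINS))
```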

Task 7 · Phase 2 — Vendor & Tooling · Weeks 5-8

Microsoft 365 Copilot / Google Gemini matter-wall hardening

Owner: IT + Practice Group Leaders

Run a Restricted SharePoint Search rollout (Microsoft) or Shared Drive permissions audit (Google) before enabling Copilot or Gemini broadly. Confirm matter walls hold: a partner staffed on Matter A cannot surface content from Matter B through Copilot. Configure Copilot DLP (Microsoft Purview) or Workspace DLP to prevent client-identifier patterns from being exfiltrated through AI prompts.

Evidence artifact: Matter-wall integrity test results + DLP configuration
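
The matter-wall integrity test reduces to a set of access assertions: for each user, the reachable matters must match the staffing record. A toy version, with a stand-in access check and hypothetical users and matters, looks like this:

```python
# Toy matter-wall integrity test. `can_access` stands in for whatever check
# the firm's DMS or permissions audit actually exposes; users, matters, and
# the ACL are hypothetical.
ACL = {
    "matter_A": {"partner_1", "associate_3"},
    "matter_B": {"partner_2"},
}

def can_access(user: str, matter: str) -> bool:
    return user in ACL.get(matter, set())

def test_matter_walls() -> None:
    # A user staffed only on Matter A must not reach Matter B content --
    # the same boundary Copilot inherits when answering that user's prompts.
    assert can_access("partner_1", "matter_A")
    assert not can_access("partner_1", "matter_B"), "matter wall breached"

test_matter_walls()
print("matter walls hold")
```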

Task 8 · Phase 2 — Vendor & Tooling · Weeks 5-8

Prompt-logging and audit-trail configuration

Owner: IT + GC

Configure prompt-and-output logging for every enterprise AI tool. For Microsoft 365 Copilot, use Microsoft Purview AI Hub; for ChatGPT Enterprise and Claude Enterprise, use vendor-side admin logs piped to the firm's SIEM; for Harvey, CoCounsel, and Lexis+AI, use the vendor-provided audit logs. Retention should align with the firm's records-retention schedule for matter-related work product.

Evidence artifact: Prompt/output audit-log configuration across all approved AI tools
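
Because each vendor emits a different log format, most firms normalize records into one schema before SIEM ingestion. The sketch below shows the shape of that normalization step; the vendor-side field names are assumptions, since each product's export format differs.

```python
# Sketch of normalizing per-vendor prompt/output audit records into one
# schema before SIEM ingestion. Vendor-side field names vary by product;
# the ones read here are illustrative assumptions.
import json
from datetime import datetime, timezone

def normalize(vendor: str, record: dict) -> dict:
    """Map a vendor-specific audit record onto the firm's common schema."""
    return {
        "ts": record.get("timestamp") or datetime.now(timezone.utc).isoformat(),
        "vendor": vendor,
        "actor": record.get("user") or record.get("actor_email"),
        "matter": record.get("matter_id"),  # populated where the tool supports it
        "event": record.get("event", "prompt"),
        "chars_in": len(record.get("prompt", "")),
        "chars_out": len(record.get("completion", "")),
    }

event = normalize("claude-enterprise", {
    "timestamp": "2025-01-15T23:04:00Z",
    "user": "associate@firm.example",
    "prompt": "Summarize the attached deposition transcript.",
    "completion": "...",
})
print(json.dumps(event, indent=2))  # this record is what ships to the SIEM
```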

Task 9 · Phase 3 — Operate & Audit · Weeks 9-12

Verification-of-citations workflow embedded in every matter

Owner: Practice Group Leaders + Associates

Every brief, memo, opinion letter, or filing that uses AI-generated content must run through a citation-verification step before it leaves the firm. Cite-checking software (Westlaw KeyCite, Lexis Shepard's, Cite-Check) confirms cases exist and are good law; a human attorney confirms quotations are accurate; the verification step is documented in the matter file. This is the Mata v. Avianca prevention layer.

Evidence artifact: Per-matter verification log demonstrating citation-check completion
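
The intake step of that workflow can be partially automated: extract candidate citations from the draft and open a verification row for each, with a human closing every row. A deliberately rough sketch follows (the regex catches only simple single-token reporter forms such as "F.3d" and decides nothing on its own):

```python
# Sketch of the intake step: pull candidate case citations out of a draft so
# each can be KeyCited or Shepardized and closed out in the matter's
# verification log. Deliberately rough: the pattern only catches simple
# "volume Reporter page" forms and flags candidates for a human; it never
# decides whether a case is real.
import re

CITE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,4}\b")

draft = (
    "See Park v. Kim, 91 F.4th 610 (2d Cir. 2024); compare "
    "Varghese v. China S. Airlines, 925 F.3d 1339 (11th Cir. 2019)."
)

# The second citation is one of the fabricated cases from Mata v. Avianca --
# exactly the kind of entry the human verification step exists to catch.
checklist = [
    {"citation": m.group(0), "verified": False, "verifier": None}
    for m in CITE.finditer(draft)
]
for row in checklist:
    print(row)
```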

Task 10 · Phase 3 — Operate & Audit · Weeks 9-12

Court-standing-order tracking and judge-specific AI disclosure

Owner: Litigation Practice Group Leader + Docket

Maintain a per-judge tracker of AI-related standing orders. Multiple federal judges now require AI-use certification or AI-output verification at the filing level. Before any filing, the filing attorney confirms the judge's current AI-disclosure rule and includes the required certification if applicable.

Evidence artifact: Judge-specific AI standing-order tracker, updated quarterly
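
One way to make the tracker enforceable rather than informational is a pre-filing lookup keyed by district and judge. In the sketch below the judges, requirements, and wording are placeholders; the live tracker is populated from the actual standing orders.

```python
# Sketch of a per-judge standing-order tracker with a pre-filing lookup.
# Judges, requirements, and wording below are placeholders; the live
# tracker is populated from each judge's actual standing orders.
TRACKER = {
    ("N.D. Tex.", "Judge Example A"): {
        "certification_required": True,
        "text": "Certify whether generative AI prepared any portion of this "
                "filing and that a human verified all citations.",
        "last_reviewed": "2025-01-02",
    },
    ("E.D. Pa.", "Judge Example B"): {
        "certification_required": False,
        "text": None,
        "last_reviewed": "2025-01-02",
    },
}

def prefiling_check(district: str, judge: str) -> str:
    order = TRACKER.get((district, judge))
    if order is None:
        return "No tracker entry: confirm the judge's current rules before filing."
    if order["certification_required"]:
        return f"Attach AI certification: {order['text']}"
    return "No AI certification required as of last review."

print(prefiling_check("N.D. Tex.", "Judge Example A"))
```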

Task 11 · Phase 3 — Operate & Audit · Weeks 9-12

Quarterly AI-governance audit and board / partnership reporting

Owner: GC + Managing Partner, reported to Executive Committee

Quarterly review of: AI-tool inventory changes, new vendor onboarding, identity-layer block list integrity, prompt-log volume by attorney and matter, verification-step compliance, court-order tracker accuracy, any AI-related incidents or complaints. Output is a partnership-level summary fit for malpractice-carrier and risk-management review.

Evidence artifact: Quarterly AI Governance Audit Report — partnership-grade

Task 12 · Phase 3 — Operate & Audit · Weeks 9-12

Attorney and staff AI training, refreshed annually

Owner: GC + Professional Development

Mandatory annual AI use training for every attorney and non-attorney staff member. Content: ABA Op 512 obligations, firm policy, approved vs. prohibited tools, the privilege-safe prompt protocol, verification procedures, departure protocol for AI prompt history. Track completion at the individual level for malpractice and state-bar audit purposes.

Evidence artifact: Annual AI-use training completion roster

The privilege-safe AI prompt protocol

The privilege-safe AI prompt protocol — operational rules for attorneys

The Firm AI Use Policy sets the rules. The prompt protocol below is what an attorney actually does at the keyboard. EFROS provides this as a one-page attorney reference card as part of the AI Governance retainer.

  • Approved tools only. Use only the tools on the firm's approved list (Harvey, CoCounsel, Lexis+AI, Westlaw Precision AI, Spellbook, ChatGPT Enterprise, Claude Enterprise, M365 Copilot, or as updated). Never use a consumer-tier tool for any client work.
  • No client-identifying content in general-purpose AI prompts. When using ChatGPT Enterprise, Claude Enterprise, or M365 Copilot for general drafting, do not include client names, opposing-party names, deal codenames, or other client-identifying details in the prompt. Use placeholder names ("Client A," "Opposing Party") and reintegrate identifiers after the model output is reviewed (a minimal sketch of this workflow appears after this list).
  • Practice-specific AI is contracted for matter content. Harvey, CoCounsel, Lexis+AI, Westlaw Precision AI, and Spellbook are contracted to receive matter content under enterprise terms. Full client facts are appropriate inputs for these tools. The placeholder-name rule applies only to general-purpose models.
  • Verify every citation, quote, and factual assertion. Before any AI-assisted content leaves the firm, run case citations through KeyCite or Shepard's, verify quotations against the original sources, and confirm factual assertions independently. Document the verification step in the matter file. This is the Mata v. Avianca prevention layer.
  • Log the prompt in the matter file when material. Where AI output materially contributes to a deliverable, the supervising attorney saves the prompt-and-output transcript to the matter file. This serves two purposes: it supports the supervising attorney's Rule 5.1 / 5.3 oversight evidence, and it provides a record if the AI output is later challenged.
  • Court orders before filing. Before filing any AI-assisted document, the filing attorney confirms the judge's current AI-disclosure rule and includes the required certification if applicable. Track per-judge orders, not just per-district.
  • Departure protocol. When an attorney departs, the firm reviews the attorney's prompt history across approved tools as part of the standard exit review. Personal-account AI use outside approved tools is surfaced and addressed before separation.
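
The placeholder-name rule in the second bullet is mechanical enough to sketch. The example below swaps client identifiers for neutral tokens before a prompt leaves the firm and reverses the mapping after attorney review; the identifiers are invented, and a production version would need to handle name variants and fuzzy matches that plain string replacement misses.

```python
# Minimal sketch of the placeholder-name rule: swap client-identifying
# strings for neutral tokens before a prompt goes to a general-purpose
# model, reverse the mapping after attorney review. Identifiers are made
# up; production use needs to handle name variants that plain string
# replacement misses.
IDENTIFIERS = {
    "Acme Holdings LLC": "Client A",
    "Jane Roe": "Opposing Party",
    "Project Bluebird": "the Transaction",
}

def redact(text: str) -> str:
    for real, placeholder in IDENTIFIERS.items():
        text = text.replace(real, placeholder)
    return text

def reintegrate(text: str) -> str:
    for real, placeholder in IDENTIFIERS.items():
        text = text.replace(placeholder, real)
    return text

prompt = "Draft a tolling agreement between Acme Holdings LLC and Jane Roe."
safe_prompt = redact(prompt)  # this is what actually leaves the firm
assert "Acme" not in safe_prompt

# Model output comes back, the attorney reviews it, then:
print(reintegrate("This tolling agreement is between Client A and Opposing Party."))
```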

Engagement letter AI disclosure

AI use disclosure language for engagement letters — sample

Sample paragraph law firms can adapt for engagement letters. This is provided as a starting point, not as legal advice — adapt with the firm's general counsel and the malpractice carrier. The malpractice carrier may have specific wording requirements that supersede this sample.

"In performing the legal services contemplated by this engagement, the Firm may use generative artificial intelligence tools — including legal-research and drafting assistants such as [Harvey / CoCounsel / Lexis+AI / Westlaw Precision AI / Spellbook] and general productivity tools such as Microsoft 365 Copilot — operated under enterprise contracts that prohibit the vendor from training its models on Firm or Client data and that preserve the Firm's obligation of confidentiality. All AI-assisted work product is reviewed by an attorney before delivery to the Client. The Firm verifies all citations, quotations, and factual assertions in AI-assisted documents prior to filing or delivery. The Client may instruct the Firm to limit or exclude AI use on this matter by notifying the engagement partner in writing."

Adapt the bracketed vendor list to the actual tools the firm has approved and contracted. Coordinate the final wording with the firm's general counsel and malpractice carrier. Not legal advice; not a substitute for jurisdiction-specific review.

FAQ

Common questions from managing partners and general counsel

Does using Harvey, CoCounsel, or Lexis+AI waive attorney-client privilege?

Not by itself, when used on an enterprise tier with a vendor contract that prohibits training on customer data and contractually preserves confidentiality. Enterprise contracts with Harvey, Thomson Reuters CoCounsel, Lexis+AI, and Westlaw Precision AI are written to preserve the firm's confidentiality obligation. Privilege exposure comes from using the consumer-tier free or personal versions of any AI tool, from inputting content that the firm has no contractual confidentiality cover for, or from court-ordered discovery of AI prompts in matters where the vendor's data-handling becomes central.

Do we have to disclose AI use to clients?

Disclosure is required where the AI use is material to the representation — and ABA Op 512 and several state bar opinions treat 'material' broadly. The clean answer is to update engagement letter templates with a standard AI-use disclosure paragraph, and to expressly inform clients of AI tool use in matters where AI substantively contributes to a deliverable. The malpractice carrier may have specific wording. Most firms now include a standard paragraph in engagement letters rather than handling disclosure ad hoc.

Can we bill clients for time spent reviewing AI output?

Yes — attorney time spent verifying citations, reviewing AI-generated draft language, and exercising legal judgment on the AI output is billable. What is not billable is the time the AI did instead of the attorney — billing two hours of associate time for a brief that took Harvey eight minutes plus thirty minutes of attorney review is a Rule 1.5 fee-reasonableness violation. The defensible billing entry describes the attorney's review and judgment work, not the AI's drafting time.

What does ABA Formal Opinion 512 actually require, in practice?

Op 512 requires the firm to do five things: (1) understand the AI tools attorneys use, at the technology level (Rule 1.1); (2) protect client confidentiality, which means enterprise-tier vendor terms or informed consent before inputting client content (Rule 1.6); (3) supervise AI use across attorneys and non-lawyer staff (Rules 5.1, 5.3); (4) bill ethically — disclose AI use where material and do not bill for AI-replaced time (Rules 1.4, 1.5); (5) verify every AI-generated citation, quote, and factual assertion before filing (Rules 3.3, 8.4). The artifacts in our 90-day runbook map directly to these five duties.

Our IT team set up Microsoft 365 Copilot — is that compliant for legal work?

Microsoft 365 Copilot under an E3 or E5 license is a vendor-side compliant deployment — Microsoft does not train models on customer prompts and the data stays within the firm's tenant. But compliance depends on the firm's internal configuration, which is where most firms fail. Copilot inherits SharePoint and OneDrive permissions: if matter walls are not properly enforced in the document management system, Copilot will surface content across matters that should be separated. Before broad enablement: run a Restricted SharePoint Search rollout, configure Copilot DLP via Microsoft Purview, run a matter-wall integrity test, and audit per-user permissions. Done well, Copilot is fine for legal work. Done as a default rollout, it creates conflict-of-interest and privilege exposure.

What does a law firm AI governance audit produce?

EFROS's fixed-fee law firm AI Governance Audit produces: an AI Inventory Register, identity-layer block-list configuration, ABA Op 512 mapping per duty, ratified Firm AI Use Policy and per-practice-group addenda, AI-vendor DPA matrix, M365 Copilot / Google Gemini matter-wall integrity test results, prompt-and-output audit-log configuration, citation-verification workflow embedded in matter procedures, judge-specific standing-order tracker, attorney and staff training program, and a partnership-grade quarterly audit report. The audit converts to a managed AI Governance retainer with the audit fee credited toward the first quarter.

Related EFROS resources

Related work

  • Legal IT & Cybersecurity for Law Firms — the broader EFROS service: 24/7 SOC, matter-data segregation, e-discovery readiness, wire-transfer-fraud defense, cyber-insurance-aligned controls.
  • AI Governance service — how EFROS runs AI governance as a managed retainer across sectors.
  • NIST AI RMF Implementation Guide — US-anchored AI risk-management framework that underpins the runbook on this page.
  • Free AI Risk Score — five-minute self-assessment of the firm's AI exposure.
  • EFROS Glossary — definitions for ABA Op 512, Rule 1.6, Mata v. Avianca, work product doctrine, and the legal AI vendor categories cited on this page.

Three ways forward

Run the free AI Risk Score to self-assess the firm's exposure in five minutes, reserve the fixed-fee $5K AI Governance Audit, or take the broader Legal IT & Cybersecurity service tour.