
SOC 2 Type II Readiness Checklist

Sixty-five controls, mapped to the 2017 Trust Services Criteria. This is the checklist we actually use on engagements to tell a client whether they are six months or twelve months from a clean Type II opinion. No gating, no email capture. Work through it, print to PDF if you want a saved copy, and put the gaps on a quarterly plan.

By Maria Popescu, VP of Engineering
Reviewed by Daniel Agrici, Chief Security Officer, EFROS

What SOC 2 Type II actually requires

SOC 2 is an attestation, not a certification. A licensed CPA firm issues a report describing the system your organization operates and opining on whether the controls you have asserted were both suitably designed and operating effectively over a defined observation period. Type I captures design at a point in time. Type II captures operating effectiveness over a window, typically six or twelve months. Buyers want Type II because only Type II tells them your controls did work, not merely that they existed on paper.

The work breaks into two parts. First, you document the system description: what the product does, who it serves, where data flows, which subservice organizations you rely on, and which criteria apply to your service commitments. Second, you assert a set of controls, test them yourself, and then hand auditors the evidence they need to re-test a sample. The auditor does not invent controls for you. They evaluate the ones you have selected against the applicable 2017 Trust Services Criteria published by the AICPA (SOC suite of services).

The Security criterion (also called the Common Criteria) is mandatory. The other four (Availability, Processing Integrity, Confidentiality, Privacy) are elective and should be selected based on the service commitments you make to customers. Most early SaaS Type II reports cover Security and Availability. If you handle regulated data or make privacy promises on a marketing page, add Confidentiality or Privacy as needed. If you process transactions or produce reporting that customers rely on, add Processing Integrity.

Type II timelines are real constraints. The observation period cannot start until your controls are operating. If you promise a prospect a Type II report in March and you do not have controls operating until January, the earliest plausible report covers a short window and most sophisticated buyers will ask for a longer one on the next cycle.

The 2017 Trust Services Criteria at a glance

The 2017 TSC replaced the earlier criteria structure and introduced the common criteria model. Nine common-criteria categories (CC1 through CC9) cover governance, communication, risk, monitoring, control activities, access, operations, change, and risk mitigation. Those nine are what Security actually is. Add one or more of the elective categories (Availability, Processing Integrity, Confidentiality, Privacy) if your service commitments require it.

Most teams new to SOC 2 underestimate how integrated the nine common-criteria sections are. CC1 (control environment) and CC2 (communication) set the governance and training baseline without which CC6 (access) and CC8 (change) cannot be demonstrated. CC3 (risk assessment) and CC4 (monitoring) are the glue that tells auditors you have a functioning program rather than a one-time scramble. Skipping the governance layers produces a report with exceptions that buyers notice.

See the AICPA SOC 2 examinations page for authoritative guidance. The list that follows is the operational interpretation we use on engagements. It is not a substitute for the AICPA TSC document, but it is what the document looks like when it meets a real environment.

Security (Common Criteria): CC1 through CC9

The nine common-criteria groups below are mandatory for every SOC 2 Type II examination. Treat them as the floor. For each group the summary describes the intent and the list describes the controls a Type II auditor will expect to test.

CC1. Control Environment

Tone at the top. Board oversight, code of conduct, HR practices, and organizational structure that make the rest of the program possible.

  • CC1.1 Board oversight of the information security program, with documented meeting minutes showing security topics reviewed at least quarterly.
  • CC1.2 Code of conduct signed at hire and re-attested annually, with disciplinary escalation path for violations.
  • CC1.3 Organizational chart identifying security, engineering, and compliance roles with reporting lines.
  • CC1.4 Job descriptions for each role in scope, including security responsibilities.
  • CC1.5 Background checks on all hires before access is granted, documented per jurisdiction.
  • CC1.6 Annual performance reviews that reference security-relevant expectations.

CC2. Communication and Information

How security information flows internally and externally. Policies, training, customer communication, and incident disclosure.

  • CC2.1 Written information security policy, reviewed and approved annually.
  • CC2.2 Acceptable use policy acknowledged at hire.
  • CC2.3 Security awareness training completed within 30 days of hire and annually thereafter.
  • CC2.4 Role-based training for engineers (secure coding) and for privileged operators.
  • CC2.5 Customer-facing communication channel for security questions, with documented response SLAs.
  • CC2.6 Process for notifying customers of security incidents that affect their data.

CC3. Risk Assessment

Documented methodology for identifying, scoring, and treating risks. Updated at least annually or when the environment changes materially.

  • CC3.1 Annual enterprise risk assessment covering threats, vulnerabilities, and control gaps.
  • CC3.2 Risk register with ownership, likelihood, impact, and treatment decision per risk.
  • CC3.3 Fraud risk assessment referenced in the enterprise risk program.
  • CC3.4 Change management triggers that force a re-assessment (major architecture changes, new product lines, new geographies).

CC4. Monitoring Activities

Internal checks that verify controls are actually working, plus remediation tracking when they are not.

  • CC4.1 Internal control testing schedule covering every in-scope control at least annually.
  • CC4.2 Findings log with owner, severity, remediation plan, and due date.
  • CC4.3 Management review of testing results at least quarterly.

CC5. Control Activities

The operating controls themselves. Preventive, detective, and corrective activities that reduce risk.

  • CC5.1 Control matrix mapping each risk to one or more operating controls.
  • CC5.2 Segregation of duties across financially and security-sensitive workflows.
  • CC5.3 Technology-based controls (automated rules, policy-as-code) documented alongside manual ones.

CC6. Logical and Physical Access Controls

Who can reach what. Identity lifecycle, authentication, authorization, least privilege, physical entry to facilities.

  • CC6.1 Access provisioning request with documented manager approval.
  • CC6.2 Quarterly user access reviews for production systems and customer data stores.
  • CC6.3 MFA enforced for all employee, contractor, and administrative access.
  • CC6.4 Privileged access separated from daily-use accounts, with session logging.
  • CC6.5 Termination access removal within 24 hours (same day for high-risk roles).
  • CC6.6 Password policy aligned to NIST SP 800-63B, or SSO with phishing-resistant factors.
  • CC6.7 Remote access limited to authenticated, encrypted channels (ZTNA or VPN with MFA).
  • CC6.8 Physical access to offices and data centers controlled via badge, with logs retained.
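The quarterly access review in CC6.2 is the CC6 control most likely to fail on evidence rather than substance, and the fix is to generate the review worksheet from the identity provider instead of assembling it by hand. The sketch below shows the shape of that automation, assuming a JSON user export with hypothetical `email`, `role`, and `last_login` fields; your IdP's export format will differ.

```python
import csv
import json
from datetime import date, datetime, timedelta

REVIEW_WINDOW_DAYS = 90  # quarterly cadence per CC6.2

def build_review_worksheet(idp_export: str, out_path: str) -> int:
    """Turn an IdP user export (JSON string) into a dated access-review
    worksheet (CSV). Returns the count of accounts flagged as stale so
    the reviewer knows where to start."""
    users = json.loads(idp_export)
    cutoff = datetime.now() - timedelta(days=REVIEW_WINDOW_DAYS)
    stale = 0
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["review_date", "user", "role", "last_login",
                    "stale", "reviewer_decision"])
        for u in users:
            is_stale = datetime.fromisoformat(u["last_login"]) < cutoff
            stale += is_stale
            w.writerow([date.today().isoformat(), u["email"], u["role"],
                        u["last_login"], "yes" if is_stale else "no", ""])
    return stale
```

The empty `reviewer_decision` column is deliberate: the auditor wants evidence that a named human made a decision per account, so the extract produces the population and the reviewer fills the verdicts.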

CC7. System Operations

Detection, logging, incident response, backup, and recovery. The evidence auditors scrutinize most closely.

  • CC7.1 Vulnerability scanning on infrastructure and applications at least monthly.
  • CC7.2 Penetration test annually by a qualified third party.
  • CC7.3 SIEM or equivalent logging pipeline covering auth, admin, and data-access events.
  • CC7.4 Alerting rules tied to the incident response playbook.
  • CC7.5 Documented incident response plan reviewed annually and tested via tabletop at least annually.
  • CC7.6 Backups tested at least quarterly with documented restore times.
  • CC7.7 Disaster recovery plan covering RPO and RTO targets per critical system.

CC8. Change Management

Every production change is tracked, tested, approved, and reversible. The single highest-evidence-volume control area in most audits.

  • CC8.1 Code review required for every change merging to a protected branch.
  • CC8.2 Automated tests that gate deploys, with evidence retained in CI logs.
  • CC8.3 Separation between developer commit rights and production deploy rights.
  • CC8.4 Change ticket (or PR link) referenced on every production deploy.
  • CC8.5 Rollback procedure tested and documented for critical services.
  • CC8.6 Infrastructure-as-code with peer-reviewed changes for environment modifications.
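Because CC8 carries the largest evidence volume, it pays to self-test it the same way the auditor will: pull the merged-change list and flag anything missing an independent review (CC8.1) or a change ticket (CC8.4). A sketch, assuming a hypothetical export of PR records from your VCS with `id`, `author`, `approved_by`, and `ticket` fields:

```python
def change_exceptions(prs: list[dict]) -> list[tuple]:
    """Flag merged changes that would surface as Type II exceptions:
    no independent review (CC8.1) or no change ticket (CC8.4).
    Self-approval counts as a missing review."""
    findings = []
    for pr in prs:
        if not pr.get("approved_by") or pr["approved_by"] == pr["author"]:
            findings.append((pr["id"], "missing independent review"))
        if not pr.get("ticket"):
            findings.append((pr["id"], "missing change ticket"))
    return findings
```

Run this over the whole observation window before fieldwork starts; finding your own exceptions in month two is cheap, finding them during sampling is not.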

CC9. Risk Mitigation

How the organization addresses residual risk from vendors, business disruption, and changes in the threat environment.

  • CC9.1 Vendor risk management program, with tiered assessment based on data sensitivity.
  • CC9.2 Business continuity plan tested annually.
  • CC9.3 Cyber insurance policy reviewed against the organization's risk profile.

Availability, Processing Integrity, Confidentiality, Privacy

The four elective categories apply based on the commitments you make in contracts and marketing. Availability is the most commonly added because SaaS providers publish uptime targets. Confidentiality follows when regulated or sensitive data is handled. Privacy comes in when the service collects personal data in ways that go beyond internal employee records. Processing Integrity applies when customers rely on the accuracy of computed outputs.

Adding a category is not symbolic. It extends the set of controls in scope, the evidence required, and the observation period testing. Scope creep is a real cost, so align the elective categories with the service commitments already in your contracts rather than asserting everything at once.

Availability (A-series)

  • A1.1 Capacity planning and monitoring for CPU, memory, storage, and bandwidth on all production systems.
  • A1.2 Availability SLOs defined per service, with error budgets and alerting.
  • A1.3 Redundancy (HA, multi-AZ, multi-region where warranted) documented per critical service.
  • A1.4 DR failover tested at least annually with documented results.
  • A1.5 Environmental controls at physical sites (power, cooling) monitored and tested.
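The SLO and error-budget arithmetic behind A1.2 is simple enough to state exactly: the budget is the downtime the target permits over the window. A one-liner makes the numbers concrete (a 99.9% target over 30 days allows about 43 minutes of downtime):

```python
def error_budget_minutes(slo_pct: float, window_days: int = 30) -> float:
    """Downtime permitted by an availability SLO over a rolling window.
    E.g. 99.9% over 30 days -> 43.2 minutes."""
    return (1 - slo_pct / 100) * window_days * 24 * 60
```

Publishing this number next to each SLO makes the alerting thresholds in A1.2 defensible to an auditor: the page fires when a meaningful fraction of the budget is consumed, not at an arbitrary level.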

Processing Integrity (PI-series)

  • PI1.1 Input validation at API boundaries and at user interfaces.
  • PI1.2 Data reconciliation between systems of record (billing to GL, events to warehouse) with exception handling.
  • PI1.3 Batch job monitoring with failure alerts and retry policies.
  • PI1.4 Data-quality checks on analytics and reporting pipelines.

Confidentiality (C-series)

  • C1.1 Data classification policy with at least three tiers (public, internal, confidential).
  • C1.2 Encryption at rest for all confidential data stores (including backups).
  • C1.3 Encryption in transit (TLS 1.2 minimum, 1.3 preferred) for all data flows.
  • C1.4 Key management via HSM-backed KMS with rotation schedule.
  • C1.5 Data retention and secure destruction procedures, including for departed employees and terminated vendors.

Privacy (P-series)

  • P1.1 Privacy notice published and reviewed annually.
  • P1.2 Data subject request workflow with response within regulatory deadlines.
  • P1.3 Data processing inventory (record of processing activities).
  • P1.4 Vendor data processing agreements in place for all sub-processors.
  • P1.5 Consent capture for applicable processing, with evidence retained.

Evidence expectations: what auditors actually look for

Auditors test controls by sampling. They do not read every change ticket. They pick a handful and verify the evidence matches the asserted control. That means evidence has to be findable, date-stamped, tied to the system of record, and tied to a person. Screenshots are acceptable for discrete items but are the wrong default for anything that happens often. Automated extracts from the source system are the right default.

The most common evidence categories we see tested are user access reviews (CC6.2), change management tickets and PR approvals (CC8), backup restore tests (CC7.6), and incident response tabletop results (CC7.5). Each needs a population list (how many were in scope during the observation window) and a sample (what the auditor picked and what you handed over). Missing a population list is the single most common reason an engagement slips.
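The population-and-sample mechanics can themselves be automated so the list is ready before the auditor asks. A sketch: given the full population of in-scope items (tickets, reviews, restore tests), record its size and draw a reproducible sample. The seeded generator is the point; sharing the seed lets the selection be re-derived later.

```python
import random

def auditor_sample(population: list, k: int, seed: int) -> dict:
    """Record the population size and draw a reproducible sample of k
    items for auditor testing. Seeding makes the draw repeatable."""
    rng = random.Random(seed)
    sample = rng.sample(population, min(k, len(population)))
    return {"population_size": len(population), "sample": sorted(sample)}
```

Note that auditors often draw their own sample; this pattern is for your internal testing under CC4, so your self-test covers at least what their sampling will.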

Tie evidence to the zero trust architecture decisions you have already made. A ZTNA rollout produces authentication logs that answer CC6 questions without additional instrumentation. A consolidated identity provider collapses dozens of per-application user access reviews into one. Use the audit as a forcing function for architecture improvements rather than treating it as a checkbox exercise.

The 6-month readiness timeline

A six-month path is reasonable if the organization already runs with sound engineering hygiene: code review, branch protection, MFA on SaaS, centralized logging, an identity provider, and a documented on-call rotation. Months one and two are gap assessment and policy drafting. Month three closes high-priority gaps (access review cadence, vendor inventory, incident response plan). Months four through six are the operating window during which controls run and evidence accumulates. The Type II fieldwork happens at the end of month six or early in month seven.

The single most important decision in the six-month path is the observation window. A six-month window is the shortest duration most audit firms will issue a Type II opinion on. Do not start the window before all required controls are demonstrably in operation, because exceptions in the first month of the window are visible in the report.

See our SOC 2 Type II compliance guide for the operational detail on each step. For regulated industries where the audit is consumed by a bank or insurer, see our financial services guide for the additional supervisory expectations layered on top.

The 12-month readiness timeline for less mature organizations

A twelve-month path is the realistic plan when starting points include a flat production environment, no identity provider, manual deploys, no vulnerability management, or no formal policies. In that scenario the first three months are architecture work, not compliance work. Stand up SSO, enforce MFA, consolidate logging, adopt branch protection, and write the policies that describe what the team is now doing. Months four through six close remaining gaps and finish the vendor inventory. Months seven through twelve are the operating window.

The trap in a twelve-month timeline is treating the first three months as documentation rather than engineering. Writing a policy that says the team uses SSO does nothing if SSO is not actually enforced. Write policies that describe the system as implemented, and implement the system before you write the policy. Everything else is a paper exercise.

The second trap is ignoring the incident response maturity required at CC7.5. A Type II auditor will ask for a tabletop exercise that happened inside the observation window. If you schedule the first tabletop in month eleven of a twelve-month window, you have no population to sample and the control is untested.

Common gaps that delay the audit

The same gaps appear on almost every first engagement. User access reviews that are not reliably completed each quarter (and for which the reviewer attestation is not retained). Change management where the tooling supports code review but branch protection is not enforced on production branches. Backups that run but have never been restored under realistic conditions. Vendor inventories that list names but not data sensitivity, not contract owners, and not assessment dates. Incident response plans that were written once and never exercised.

The other recurring gap is service-organization descriptions that do not match reality. The system description in the report is what customers read. If it says you run on AWS in a specific region and you have just migrated to multi-region, the description is stale and the audit has to be reopened. Keep the description aligned with the architecture as it is, not as it was six months ago.

None of these gaps are exotic. All of them are fixable with a few quarters of disciplined execution. The checklist above is the instrument we use to make the execution visible to the team, the auditor, and the buyer who will read the report.