Cybersecurity · 13 min read · Last reviewed Apr 2026

Ransomware Response Playbook: The First 24 Hours

Stefan Efros, CEO & Founder | Reviewed by Daniel Agrici, Chief Security Officer

Ransomware is the single event most likely to put a mid-market business at existential risk in 2026. Average demand now exceeds $2M for mid-sized businesses. Total cost including downtime and recovery usually runs 5-10x the ransom itself. 60% of businesses hit by ransomware without proper backups never fully recover. I've led incident response on more of these than I want to count, and I've watched organizations either handle them well and survive, or handle them poorly and fold. The first 24 hours is what determines which outcome you get. This is the playbook I actually run. The federal reference I point clients to is CISA's #StopRansomware Guide and the free decryption library at No More Ransom.

Hour 0: Detection

Modern ransomware doesn't announce itself politely. You detect it through anomalies: a file server showing unusually high write activity, a domain controller logging strange authentication patterns, an EDR alert firing on encryption behavior. Worst case, someone reports they can't open files and there's a ransom note on their desktop. If you have 24/7 SOC monitoring, detection happens within the first anomalous minutes. If you rely on business-hours support or user reports, detection usually lags 6-12 hours, by which time the ransomware has finished encrypting everything it could reach. The difference between those two detection speeds is the difference between a contained incident and a company-wide disaster.
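
The encryption-behavior trigger described above can be sketched as a simple rate heuristic. This is a hypothetical illustration, not any vendor's actual detection logic; the extensions, window, and threshold are example values only.

```python
from collections import deque

# Example ransomware-style extensions; real tooling tracks hundreds.
SUSPICIOUS_EXTS = (".locked", ".enc", ".crypt")

def mass_encryption_suspected(events, window_s=10, threshold=50):
    """events: iterable of (epoch_seconds, new_filename) rename events.

    Flags a burst of renames to suspicious extensions inside a
    sliding time window, the signature of bulk encryption.
    """
    window = deque()
    for ts, name in sorted(events):
        if not name.endswith(SUSPICIOUS_EXTS):
            continue
        window.append(ts)
        # Drop events that have aged out of the window.
        while ts - window[0] > window_s:
            window.popleft()
        if len(window) >= threshold:
            return True
    return False
```

The same idea generalizes to write-rate spikes on file shares; the point is that bulk encryption is loud if anything is listening.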

The first minute after detection is the most important minute of the entire incident. What you do in that minute determines almost everything that follows. The single highest-leverage action is isolating affected systems from the network. Disconnect Ethernet. Disable Wi-Fi. Power down if you have to. This prevents lateral spread to file servers, backup systems, and other critical infrastructure. If you have EDR with pre-authorized host isolation through an MDR service with containment authority, this happens in seconds without waiting for human approval. If you don't, you're relying on whoever's awake to make the call fast. Fast matters more than elegant at hour zero.

Hour 0-1: Incident Command

The moment ransomware is suspected, not confirmed, activate your incident response team. Name an incident commander with decision authority. Open a dedicated communication channel on Slack, Teams, or a phone bridge, separate from email and other channels that may be compromised. Notify IT leadership, security leadership, legal counsel, and the executive team. If you have cyber insurance, notify your carrier in the first hour. Most policies have notification deadlines that matter for coverage, and the carrier will likely engage their own incident response firm from their approved network.

The critical decision in the first hour is whether to shut down the entire network or attempt surgical containment. For small blast-radius incidents, like a single compromised endpoint caught before lateral movement, surgical containment works. For confirmed lateral movement to multiple systems, full network isolation is often the right call even though it takes production offline. The math is straightforward: shutting down $50K of hourly revenue is better than losing $5M to full encryption. I've watched people hesitate on this call and lose the company. Don't hesitate. If you can't prove the attacker is contained, assume they aren't.
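The shutdown math above can be made explicit. This is a back-of-envelope model using the rough numbers an incident commander actually has at hour one; the inputs are executive estimates, not forensic measurements.

```python
def full_shutdown_justified(hourly_revenue, shutdown_hours,
                            full_encryption_loss, p_spread):
    """True when expected shutdown cost is below expected encryption loss.

    hourly_revenue:        revenue lost per hour of downtime
    shutdown_hours:        estimated hours offline for a full shutdown
    full_encryption_loss:  estimated total loss if encryption completes
    p_spread:              estimated probability of continued spread
    """
    shutdown_cost = hourly_revenue * shutdown_hours
    expected_loss_if_open = full_encryption_loss * p_spread
    return shutdown_cost < expected_loss_if_open

# The example from the text: $50K/hour revenue, ~8 hours offline,
# $5M full-encryption loss, spread judged more likely than not.
print(full_shutdown_justified(50_000, 8, 5_000_000, 0.6))  # prints True
```

Notice how lopsided the numbers have to be before staying online wins; that asymmetry is why hesitation is the expensive choice.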

Hour 1-4: Scope Assessment

Determine the actual scope of the attack. Which systems are encrypted? Which are compromised but not yet encrypted? Which are still clean? Traditional forensics tools like Volatility, KAPE, and the EZ Tools suite help here, and EDR platforms give the same visibility faster. The outputs you need: a list of compromised hosts, the attacker's entry point (phishing email, compromised VPN, exposed RDP, supply chain), and the attacker's current position. Are they still active in your environment, or did they deploy and leave? The answer changes your response.

Preserve evidence while you're assessing. Before wiping or restoring anything, capture disk images of critical compromised systems and memory dumps where possible. Preserve log data from SIEM, EDR, firewall, VPN, and identity provider for at least the attack window plus 30 days on either side. If this becomes a law enforcement matter or an insurance dispute, evidence preservation determines whether you can prove what happened. The temptation during a crisis is to prioritize restoration over preservation. Resist that. Evidence is free if captured now, impossible to reconstruct later.
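A minimal sketch of the capture step: image the source in a single pass while computing a chain-of-custody hash. On Linux the source would be a block device such as /dev/sdX (read as root); the function works identically on an ordinary file, which is what makes it testable.

```python
import hashlib

def image_with_hash(source, dest, bs=4 * 1024 * 1024):
    """Copy source to dest in fixed-size blocks, returning the
    SHA-256 hex digest of the data as it was read."""
    digest = hashlib.sha256()
    with open(source, "rb") as src, open(dest, "wb") as out:
        for block in iter(lambda: src.read(bs), b""):
            digest.update(block)
            out.write(block)
    return digest.hexdigest()
```

Record the returned digest, who captured it, and when, alongside the image file; re-hashing the image later proves it hasn't been altered since capture.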

Hour 4-8: Stakeholder Communication

Internal communication comes first. The board and executive team need a situation summary and expected business impact within the first 2-4 hours. All-hands communication has to be deliberate. Tell employees what's happening, what's expected of them (don't reconnect to the network, don't access work systems until cleared, don't post on social media), and who to contact with questions. Avoid triggering panic, but don't sugarcoat. External communication is riskier. Customers, partners, regulators, and press may all need notification eventually, but premature disclosure can worsen the situation. Follow legal counsel's guidance on timing.

The regulatory obligations that drive external communication matter, and the clock starts earlier than most people realize. HIPAA requires notification within 60 days if protected health data was exposed. GDPR requires notification to the supervisory authority within 72 hours of becoming aware. State breach notification laws vary from 30 to 90 days, with California, New York, Illinois, and Texas on the strict end. The SEC's cybersecurity disclosure rule requires public companies to file a Form 8-K (Item 1.05) within 4 business days of determining the incident is material. NYDFS Part 500 requires 72 hours. Start the legal analysis the moment you have reasonable belief that protected data was accessed. Not when you're certain. "Reasonable belief" is when the clock starts.
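
The deadlines above can be tracked mechanically from the moment of reasonable belief. This is a planning aid seeded with the figures in the text, not legal advice; the SEC's 4 business days is approximated here as 96 hours, and actual obligations depend on counsel's analysis.

```python
from datetime import datetime, timedelta

# Hours from "reasonable belief" to notification deadline, per the
# figures discussed in the text. Illustrative only; confirm with counsel.
DEADLINES_HOURS = {
    "GDPR supervisory authority": 72,
    "NYDFS Part 500": 72,
    "SEC Form 8-K Item 1.05 (approx.)": 96,   # 4 business days, roughly
    "HIPAA breach notification": 60 * 24,
}

def notification_deadlines(aware_at):
    """Map each regime to its due datetime, counted from awareness."""
    return {name: aware_at + timedelta(hours=hours)
            for name, hours in DEADLINES_HOURS.items()}
```

Pinning these to a wall-clock due date on day one keeps the 72-hour regimes from getting lost in the operational noise.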

Hour 8-16: The Payment Question

This is when the payment question surfaces, and it is the most contested decision in ransomware response. Arguments against paying: it funds criminal enterprises, there's no guarantee of working decryption, paid victims are often re-targeted within 12 months, and paying a group on OFAC's sanctions list may violate federal law. Arguments for paying: if backups are compromised and the business cannot operate without encrypted data, payment may be the only path to continuity. The decision is business-critical and should involve the CEO, CFO, legal counsel, the incident response firm, and your cyber insurance carrier. Don't let any single person decide this alone.

If you're considering payment, verify several things before making it. The ransomware group's decryption track record matters: some groups have reliable decryptors, others take payment and disappear. OFAC sanctions status matters: paying a sanctioned group exposes you to federal penalties that can exceed the ransom. Your insurance coverage matters: not all policies cover ransom payments, and some require explicit pre-approval from the carrier. Tax and legal implications matter: ransom payments have specific reporting requirements. Cryptocurrency payment requires a broker who specializes in compliant ransom payments, not just any crypto exchange. Document the payment decision with all considerations. This decision will be scrutinized by auditors, regulators, and potentially plaintiffs' attorneys for years afterward.

Hour 12-24: Recovery Path

You have three parallel paths, and you'll probably use a hybrid. Restoring from backups is the preferred path, but it requires backups that weren't encrypted by the attacker. That means immutable or air-gapped backups, tested quarterly. If your backups live on the same network the attacker reached, they're probably encrypted too. Decryption requires attacker cooperation after payment, and the decryptors typically run at 1-5% of normal file I/O speed, meaning full recovery takes weeks. Rebuilding from scratch is the nuclear option. Expensive but clean. Most organizations end up doing some version of all three, restoring critical systems from verified clean backups to production infrastructure that's been rebuilt on clean hardware or fresh cloud accounts.
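
"Verified clean backups" implies a verification step. A minimal sketch, assuming a pre-incident manifest mapping relative paths to SHA-256 digests (the manifest format here is an illustrative assumption, not a standard):

```python
import hashlib
from pathlib import Path

def failed_restores(restore_root, manifest):
    """Return relative paths under restore_root that are missing
    or whose contents don't match the manifest digest."""
    root = Path(restore_root)
    failures = []
    for rel, expected in manifest.items():
        p = root / rel
        if not p.is_file() or hashlib.sha256(p.read_bytes()).hexdigest() != expected:
            failures.append(rel)
    return failures
```

Running a check like this before promoting restored data to production is what separates "tested quarterly" backups from backups that merely exist.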

When you're bringing operations online during recovery, priority order matters. Identity systems first, because authentication has to work before anything else can. Email and core business applications second. Customer-facing systems third. Internal tools last. Do not reconnect compromised systems to the clean network until they've been fully forensically cleared. The fastest way to turn a recovering incident into a second incident is to bring a still-compromised system back online because you're in a hurry to resume operations.
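
The gating above reduces to two rules: tier order decides sequence, and forensic clearance decides eligibility. A sketch with an illustrative tier map (the categories and inventory are examples, not a standard):

```python
# Lower tier = earlier bring-up, per the priority order in the text.
TIER = {"identity": 0, "email": 1, "core-app": 1,
        "customer-facing": 2, "internal-tool": 3}

def bring_up_order(systems):
    """systems: list of (name, kind, forensically_cleared) tuples.
    Uncleared systems are excluded entirely, not deferred."""
    cleared = [(name, kind) for name, kind, ok in systems if ok]
    return [name for name, kind in sorted(cleared, key=lambda s: TIER[s[1]])]
```

The deliberate design choice: an uncleared system never appears in the output at all, so "we'll bring it up last" can't quietly become "we brought it up anyway."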

Hour 24+ is where the first 24 hours ends but the incident doesn't. The next 30-90 days involve full remediation (patching the entry point, rotating all credentials, rebuilding compromised systems), customer and regulator communications, insurance claims processing, law enforcement coordination through the FBI Cyber Division for US-based incidents, a lessons-learned review, and updating your incident response plan based on what actually happened versus what your plan assumed. Most organizations find that their pre-incident IR plan didn't match reality in specific and important ways. Capture those gaps and fix them. The next incident is coming whether you like it or not.

A pattern I want to call out, because it's preventable and consistent. Most ransomware incidents I investigate have three things in common. Email security was insufficient, so the original phishing wasn't caught. MFA wasn't enforced on administrative or VPN access, so credential stuffing or phishing led directly to privileged access. Backups were compromised along with production because the backup infrastructure wasn't isolated or immutable. Fixing those three controls cuts ransomware risk by an order of magnitude. None of them are expensive. The organizations that get hit hardest are the ones that didn't do the boring fundamentals because the fundamentals didn't feel urgent until the day they were urgent.

Pre-Incident Preparation

Pre-incident preparation pays off roughly 100 to 1. Organizations that prepare for ransomware rarely pay ransom. Organizations that don't prepare usually end up paying. This is the preparation checklist I work through with clients. Identify your incident response team and incident commander with 24/7 contact information, including alternates. Draft pre-approved containment playbooks with specific thresholds for full-network shutdown versus surgical containment. Verify your cyber insurance policy: ransom coverage, approved IR firms, notification timeline, excluded scenarios. Test backup restoration quarterly, not just that backups exist but that they restore to clean infrastructure in the time your business requires. Document critical business processes and recovery priorities. Pre-engage an incident response firm with a retainer or insurance relationship, because finding and onboarding one during an active incident wastes critical hours. Run a tabletop exercise annually with executive participation. None of these cost more than $20K-$50K per year to maintain. All of them materially change outcomes during a real incident.

Insurance claims during an active incident have specific friction points that catch people off guard. Most cyber insurance policies require notification within 24-72 hours of discovery, so check your policy before the incident. Most require using carrier-approved incident response firms, which means calling your preferred forensics firm first may void coverage. Most require documented chain of custody for evidence. Most require pre-approval for ransom payment if you're considering it. Most require documented remediation showing the root cause was addressed. Policies typically cover forensic investigation, legal counsel, public relations, customer notification, business interruption, and in some cases the ransom payment itself. Policies typically don't cover fines from regulatory actions, reputational damage as a standalone loss, or intellectual property loss. Policies also commonly exclude ransom payments to sanctioned groups outright. Review your policy annually with your broker, not just at renewal.

Decryption resources you should know about, because some of them are free. The No More Ransom project at nomoreransom.org is a collaboration between Europol, law enforcement, and security vendors that provides free decryption tools for over 150 ransomware families. Check there before paying. Your attacker's family may have a free decryptor available. Commercial incident response firms sometimes negotiate lower ransom amounts or obtain decryption without payment based on intelligence about the specific group. CISA at cisa.gov/ransomware maintains current alerts and mitigation guidance. The FBI's IC3 at ic3.gov collects ransomware reports and may provide recovery assistance. Report even if you don't expect federal investigation, because aggregate data helps future victims. Law enforcement agencies in many jurisdictions maintain decryption keys obtained from seized attacker infrastructure, and sometimes those keys unlock your files.

Legal and regulatory timelines during an active incident are tighter than people expect. The regulatory notification clock starts when you have reasonable belief that protected data was accessed, not when forensic investigation confirms it. Don't delay the legal analysis waiting for certainty. US state breach notification laws range 30-90 days. GDPR requires notification to the supervisory authority within 72 hours of becoming aware, with customer notification following promptly. The HIPAA Breach Notification Rule requires 60 days. The SEC's cybersecurity disclosure rule requires public companies to file a Form 8-K (Item 1.05) within 4 business days of determining the incident is material. NYDFS Part 500 requires 72 hours. Most organizations engage outside counsel specializing in cybersecurity disclosure for material incidents. Their involvement also brings attorney-client privilege, which matters when you're documenting decisions you may have to defend later.

Communication strategy during an incident is its own discipline. Internal first, external second. The executive team and board need situational awareness within 2-4 hours. Employees need clear guidance on what they can and cannot do: don't reconnect personal devices, don't discuss the incident externally, follow specific instructions from the incident commander. External communication has to be measured and accurate. Initial customer notifications should focus on actions they need to take (reset passwords, monitor accounts) rather than detailed incident specifics. Media engagement, if necessary, should go through communications counsel. Off-the-cuff statements during an active incident create long-term reputational damage that outlasts the incident itself. Document every communication decision and the reasoning behind it. You'll want that record later.

The 30-90 days after containment is where organizational learning happens, or doesn't. Rebuild everything compromised on fresh infrastructure. Do not restore compromised systems as-is, because attackers routinely leave persistence mechanisms (web shells, scheduled tasks, backdoor service accounts) that survive standard cleanup. Rotate all credentials, API keys, certificates, and service account tokens, because attackers harvest these systematically during their dwell time. Conduct a formal root cause analysis, not to assign blame but to identify the control gaps that allowed initial entry, lateral movement, and persistence. Update your incident response runbook with specific improvements based on what actually happened. The playbooks you expected to execute probably weren't practical in the real incident, and the decisions you didn't anticipate need to be documented for next time. Share lessons with industry peers through ISACs (FS-ISAC, H-ISAC, Auto-ISAC). Collective defense improves everyone's security posture, and the next victim might be a supplier or customer of yours.

One final honest observation from too many of these incidents. The organizations that handle ransomware well are the ones that accepted it could happen and built capability for it. The ones that handle it poorly are the ones that treated it as something that happens to other people. Cultural readiness matters as much as technical readiness. When leadership has internalized the scenario, the response is faster, the decisions are better, and the recovery is cleaner. When leadership is still processing denial during hour two, the response fragments and the damage multiplies. Build the cultural readiness before you need it. Tabletop exercises, executive briefings, and simulations serve that purpose more than they serve the technical one. By the time you need the technical readiness, you also need the cultural readiness, and neither one is something you can build during an active incident.

EFROS MDR includes ransomware-specific detection content mapped to the TTPs of active ransomware groups, pre-authorized containment actions that fire in minutes, and incident response with forensic evidence preservation. Our MTTD for ransomware behavior is under 5 minutes, MTTC is under 15, and we maintain direct working relationships with the major cyber insurance carriers and incident response firms. We keep pre-approved playbooks for the 20 most common ransomware families and run quarterly tabletop exercises with client leadership teams. If you're not sure whether your current security posture would survive a ransomware attack at 3 AM on a Saturday, the honest answer is probably no. We run a free readiness assessment to show you exactly where you stand. No pitch required, just an honest review.

Frequently Asked Questions

Should we pay the ransom?

There's no universal answer, but the default is no. Paying funds criminals, doesn't guarantee recovery, often leads to repeat targeting, and may violate OFAC sanctions. Pay only if (1) backups are confirmed compromised and business cannot continue, (2) the ransomware group has a reliable decryption track record, (3) OFAC status is cleared, and (4) your cyber insurance carrier pre-approves. Involve CEO, legal, insurance, and IR firm in the decision.

How fast can MDR detect and contain ransomware?

A well-operated MDR service detects ransomware behavior within 1-5 minutes of first encryption activity and contains it (host isolation, account disable, token revocation) within 10-15 minutes. EFROS MDR contractually commits to MTTD under 5 minutes and MTTC under 15 minutes. Compare that to business-hours-only support, where detection can lag 6-12 hours.

What is the single best control to prevent ransomware?

There isn't one. The three highest-leverage controls in combination are: (1) MFA on all privileged and VPN access — cuts credential-based entry by 80%+, (2) email security with sandbox detonation and URL rewriting — cuts phishing entry by 60%+, (3) immutable or air-gapped backups tested quarterly — guarantees recovery path regardless of attack. Deploy all three and your risk profile drops dramatically.

Do we need cyber insurance if we have strong security?

Yes. Even mature programs can't reduce ransomware risk to zero, and recovery costs far exceed typical IT budgets. Cyber insurance also provides access to experienced IR firms, legal counsel, and negotiation experts during an incident — capabilities most mid-market orgs don't have in-house. Expect $15K-$100K+ in annual premiums depending on revenue and controls.

About the author

Stefan Efros

CEO & Founder, EFROS

Stefan founded EFROS in 2009 after 15+ years in enterprise IT and cybersecurity. He sees how the pieces connect before others see the pieces themselves. Focus: security-first architecture, operational rigor, and SLA accountability.

CompTIA SecurityX · CompTIA CySA+ · CompTIA Security+ · CompTIA PenTest+ · OSINT · AWS Solutions Architect
