IT security workflow for Tallyfy

Handle security incidents methodically under pressure

When an incident hits, panic makes everything worse. You need a structured playbook that guides your team through triage, investigation, escalation, resolution, and documentation. This workflow keeps incident response organized when everyone else is scrambling.

8 steps
3 automations

Run this workflow in Tallyfy

1. Import this template into Tallyfy and assign your incident response team to specific steps - triage lead, investigators, communications, and documentation roles
2. Use Tallyfy's form fields to capture severity assessment, affected systems, evidence storage location, chain of custody status, and escalation decisions with justifications
3. Track every incident through all 8 steps in Tallyfy, including stakeholder communications, regulatory notifications, and post-incident lessons learned for compliance documentation
Import this template into Tallyfy

Process steps

1. Initial triage

Task · 1 day from previous step
This is where you stop, take a breath, and figure out what you're actually dealing with. We've all seen teams jump straight to fixing things before they even understand the problem - and that usually makes it worse.

Your job here is to answer three questions fast:
  • When did this start? Record the exact time - you'll need it for your timeline later
  • How bad is it? Use the severity levels honestly. Calling everything "critical" means nothing's actually critical
  • What's affected? Systems, data, users - get specific. "The server is down" isn't enough

Don't spend more than 15-20 minutes on this step. You're not solving anything yet - you're getting a clear picture so the right people can start working on it. If you're unsure about severity, err on the side of rating it higher. It's much easier to downgrade later than to explain why you didn't escalate sooner.
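
If your team struggles to apply severity levels consistently, it can help to write the rubric down as code. The sketch below is purely illustrative - the thresholds and level names are placeholders, not an official Tallyfy scale - but encoding something like it keeps two responders from rating the same incident differently.

```python
# Hypothetical severity rubric - every threshold below is a placeholder.
# Tune the criteria to your own environment and policy before relying on them.
def assess_severity(data_exposed: bool, systems_down: int, customers_affected: int) -> str:
    if data_exposed or systems_down >= 5 or customers_affected >= 1000:
        return "critical"
    if systems_down >= 2 or customers_affected >= 100:
        return "high"
    if systems_down == 1 or customers_affected > 0:
        return "medium"
    return "low"

print(assess_severity(data_exposed=False, systems_down=1, customers_affected=40))  # medium
```
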
Form fields in this step
Triage started at *
Initial severity assessment *
What is affected? *

2. Investigation initiation

Task · 1 day from previous step
Now that you know what you're dealing with, it's time to assign someone to own the investigation. This isn't optional and it isn't a committee decision - one person leads, and everyone knows who that person is.

Getting the team right matters more than getting started fast:
  • Pick a lead investigator who's got the right skills for this type of incident. Don't just default to the most senior person
  • If you need a team, keep it small. Three to five people is usually the sweet spot - more than that and you'll spend more time coordinating than investigating
  • Define clear focus areas upfront. "Look into everything" isn't a plan

The lead investigator should be someone who can make decisions without having to ask permission for every move. They'll need to act quickly, and waiting for approvals during an active incident costs you time you don't have. Make sure they've got the access and authority they need before you mark this step done.
Form fields in this step
Lead investigator *
Investigation team
Primary focus areas *

3. Evidence collection

Task · 1 day from previous step
Evidence disappears fast during incidents. Logs get rotated, memory gets overwritten, and temporary files vanish. If you don't grab it now, it might not be there when you need it later - especially if this ends up involving legal or regulatory action.

Treat evidence collection like you're building a case, because you might be:
  • Document exactly what you collected - screenshots, log exports, memory dumps, network captures. Be specific about filenames, timestamps, and sources
  • Store everything in a secure, access-controlled location. Don't just dump it on a shared drive where anyone can modify it
  • Chain of custody matters if this goes to court or regulators. Track who collected what, when they collected it, and where it's been since

A common mistake here is only collecting evidence that supports your initial theory. Cast a wide net. Grab logs from adjacent systems too. You won't know what's relevant until the investigation is further along, and by then the evidence might be gone.
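
One way to make the chain-of-custody tracking above concrete is to hash each file as you collect it and append an entry to a manifest. This is a minimal sketch - the function name, fields, and file paths are illustrative, not a standard - but re-hashing a file later and comparing against the manifest is solid proof it hasn't been altered.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_evidence(path, collector, source, manifest="evidence_manifest.jsonl"):
    """Hash an evidence file and append a chain-of-custody entry to a manifest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    entry = {
        "file": path,
        "sha256": digest,
        "collected_by": collector,
        "source": source,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(manifest, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example call - the filename and source host are placeholders:
record_evidence("auth.log.gz", collector="j.doe", source="web-01:/var/log")
```
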
Form fields in this step
Evidence collected *
Where is evidence stored? *
Chain of custody documented? *

4. Impact assessment

Task · 1 day from previous step
You need to understand the full blast radius of this incident. It's not enough to know one system is affected - you need to know whether customer data was exposed, how many systems are compromised, and what it's costing the business right now.

Work through each impact area honestly - downplaying things here will hurt you later:
  • Data impact - Was any data accessed, modified, or exfiltrated? If personal data is involved, that changes everything about your notification obligations
  • System impact - Which systems are down or degraded? What's the dependency chain? Sometimes the real damage isn't where the incident started
  • Customer impact - Can customers still use your product? Are they seeing errors? Have they already noticed and started complaining?
  • Financial impact - This doesn't have to be exact, but get a rough estimate. Lost revenue, remediation costs, potential fines - leadership will ask for numbers

Talk to the people who actually run these systems. The monitoring dashboard won't tell you everything. Someone on the ops team might know about a dependency you didn't consider.
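
For the financial estimate, a back-of-the-envelope calculation is usually enough at this stage. Every number below is a made-up placeholder - the point is the shape of the calculation, not the figures:

```python
hours_down = 6                  # how long the affected service was degraded
revenue_per_hour = 2_000        # average revenue flowing through that service
remediation_cost = 15_000       # staff time, consultants, emergency spend
potential_fines = 10_000        # worst-case regulatory exposure, per legal

estimate = hours_down * revenue_per_hour + remediation_cost + potential_fines
print(f"Estimated financial impact: ${estimate:,}")  # Estimated financial impact: $37,000
```
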
Form fields in this step
Data impact (if any) *
System impact *
Customer impact *
Estimated financial impact

5. Escalation determination

Task · 1 day from previous step
Here's where you decide who else needs to know - and who else needs to get involved. This isn't about covering yourself (though it does that too). It's about getting the right resources and authority behind the response before it's too late.

Think through each escalation path based on what you've learned so far:
  • Executive escalation - If the incident affects business operations, revenue, or reputation, leadership needs to hear it from you before they hear it from a customer or the press
  • Legal team - Any time personal data might be involved, get legal in the loop early. They'll tell you what notifications are required and by when
  • External authorities - Some incidents require reporting to regulators or law enforcement. Know your thresholds before an incident happens
  • No escalation - That's a valid answer too. But document why you made that call, because someone will ask later

Write down your reasoning clearly. Six months from now, an auditor might want to know why you did or didn't escalate. "It didn't seem that bad" won't hold up. Be specific about what information you had at the time and how it informed your decision.
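
"Know your thresholds before an incident happens" is easier said than done, so consider writing them down ahead of time. The rules below are an illustrative sketch, not a recommended policy - the point is that the decision logic exists before you're under pressure:

```python
# Illustrative escalation rules - replace these with thresholds your
# leadership and legal team have actually signed off on.
def escalation_paths(severity, personal_data_involved, revenue_impacted):
    paths = []
    if severity in ("critical", "high") or revenue_impacted:
        paths.append("executive")
    if personal_data_involved:
        paths.append("legal")  # legal decides whether regulators must be told
    if severity == "critical" and personal_data_involved:
        paths.append("external authorities")
    return paths or ["no escalation - document your reasoning"]

print(escalation_paths("high", personal_data_involved=True, revenue_impacted=False))
# ['executive', 'legal']
```
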
Form fields in this step
Escalation decision *
Why this escalation level? *
Escalated to (names)

6. Resolution

Task · 1 day from previous step
This is where you actually fix things - but in two distinct phases. First you contain the damage (stop the bleeding), then you remediate (fix the underlying problem). Don't skip containment to jump to remediation, even if you think you know the root cause.

Work through this in order:
  • Containment first - Isolate affected systems, revoke compromised credentials, block malicious IPs. Your goal is to stop things from getting worse while you work on a real fix
  • Remediation second - Patch the vulnerability, fix the misconfiguration, close the gap. This is where you address the root cause, not just the symptoms
  • Record the completion time - You'll need this for your incident timeline and to calculate your mean time to resolution

A mistake teams often make is declaring victory after containment. Yes, the bleeding stopped - but if you haven't fixed the root cause, it'll happen again. Take the time to do both. And test your remediation before you call it done. The last thing you want is to mark this resolved and then have the same incident pop up again tomorrow.
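
The timeline math itself is simple - what matters is that you captured accurate timestamps in the earlier steps. A quick sketch with placeholder times:

```python
from datetime import datetime

# Placeholder timestamps - use the values captured in triage and resolution.
triage_started = datetime.fromisoformat("2024-05-01T09:15:00")
resolution_completed = datetime.fromisoformat("2024-05-01T16:40:00")

print(f"Time to resolution: {resolution_completed - triage_started}")  # 7:25:00
# Averaging this duration across incidents gives your mean time to resolution.
```
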
Form fields in this step
Containment actions taken *
Remediation steps *
Resolution completed at *

7. Stakeholder communication

Task · 1 day from previous step
Communication during an incident is where teams most often drop the ball. You're so focused on fixing the problem that you forget people are waiting for updates - and silence breeds panic. Get ahead of it.

You've got three audiences to think about:
  • Internal teams - Your colleagues need to know what happened, what's being done, and whether it affects their work. Don't make them find out from Twitter. Be direct, be honest, and update them regularly even when there's nothing new
  • External parties - Customers, partners, vendors. If they're affected, they deserve to know. Draft comms carefully - once they're sent, you can't unsend them. But don't let perfect be the enemy of timely
  • Regulatory bodies - If personal data was breached, many regulations require notification within specific timeframes (72 hours for GDPR, for example). Miss the deadline and you've added a compliance problem on top of your security problem

Keep a log of every communication you send - who received it, when, and what it said. You'll need this for your incident report, and it protects you if someone later claims they weren't informed.
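
Deadline math is one thing worth automating, because a missed regulatory window is unforgivable. A minimal sketch, assuming the GDPR 72-hour rule applies - confirm the actual obligation with your legal team:

```python
from datetime import datetime, timedelta, timezone

# Placeholder timestamp - use the moment your team became aware of the breach.
aware_at = datetime(2024, 5, 1, 9, 15, tzinfo=timezone.utc)
notify_by = aware_at + timedelta(hours=72)  # GDPR Article 33 notification window

print(f"Regulatory notification due by: {notify_by.isoformat()}")
# Regulatory notification due by: 2024-05-04T09:15:00+00:00
```
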
Form fields in this step
Internal communications sent *
External communications sent
Regulatory notification needed? *

8. Closure documentation

Task · 1 day from previous step
This is the step that everyone wants to skip because the incident is over and there's a backlog of normal work waiting. Don't skip it. The documentation you create here is what turns a bad experience into an improvement. It's also what auditors and regulators will want to see.

Do this while it's fresh - not next week when the details have faded:
  • Final incident report - Write it up and store it somewhere permanent. Include the full timeline, root cause analysis, impact summary, and actions taken. Link to evidence and communications
  • Lessons learned - Be genuinely honest here. What worked? What didn't? Where did the process break down? This isn't about blame - it's about getting better. If your monitoring didn't catch it, say so. If the escalation was too slow, say that too
  • Prevention recommendations - What specific changes would prevent this from happening again? Be concrete. "Improve security" isn't a recommendation. "Add rate limiting to the login endpoint and set up alerts for failed auth attempts over 50/minute" is
  • Closure date - Mark the official close. This starts the clock on implementing your prevention recommendations

The best incident response teams treat every incident as a chance to get better. Your documentation is how you make that happen.
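
To make the example recommendation above concrete, here's a minimal sketch of the alert check - a sliding 60-second window that flags more than 50 failed logins. The names and thresholds are illustrative, and in practice you'd implement this in your monitoring stack rather than application code:

```python
import time
from collections import deque

THRESHOLD = 50        # failed logins allowed per window
WINDOW_SECONDS = 60   # sliding window size
failures = deque()

def record_failed_login(now=None):
    """Record one failed login; return True if the alert threshold is crossed."""
    now = time.time() if now is None else now
    failures.append(now)
    while failures and failures[0] < now - WINDOW_SECONDS:
        failures.popleft()  # drop events that fell out of the window
    return len(failures) > THRESHOLD
```
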
Form fields in this step
Final incident report location *
Lessons learned *
Recommendations for prevention *
Incident closed on *

Ready to use this template?

Sign up free and start running this process in minutes.