Handling idempotency

What is idempotency and why it matters

Idempotency ensures your integrations work correctly when the same thing happens twice. Think of it like pressing an elevator button - no matter how many times you press it, the elevator only comes once. That’s what you want when building integrations with Tallyfy’s webhooks and API.

Here’s the reality: webhooks can fire multiple times for the same event. External systems might send duplicate API requests. Network glitches happen. Without proper idempotency handling, you’ll face a mess - duplicate records in your database, customers getting charged twice, or inventory counts going haywire.

Common scenarios requiring idempotency

Webhooks firing multiple times

Let’s say someone completes a task in Tallyfy. Your webhook gets the “task completed” event - great! But what if they reopen that task and complete it again? You’ll get another webhook for the exact same task.

It happens more than you’d think:

  1. User completes the task
  2. Your webhook receives the event
  3. User realizes they forgot something, reopens the task
  4. User completes it again
  5. Your webhook receives another event - same task, different timestamp

Without idempotency handling, you’re looking at chaos:

  • Duplicate records cluttering your database
  • Customers getting three confirmation emails for one action
  • Payment systems processing the same order twice (yikes!)
  • Inventory numbers that don’t match reality

Process-level webhooks generating multiple events

Set up a webhook at the process level, and you’re in for a surprise - it fires for every single task completion in that process. Got 10 tasks? That’s 10 webhook events coming your way.

The math gets scary fast:

  • 10-task process = 10 webhook events minimum
  • Tasks get reopened and recompleted? More events pile up
  • Your system better be ready for the flood

External systems sending duplicate events

It’s not just Tallyfy that can send duplicates - your own systems can too. We’ve seen it all:

  • Helpdesk software hiccups and sends the same ticket update twice
  • Network timeout triggers an automatic retry, but the first request actually went through
  • Someone double-clicks a button (we’ve all been there) and fires off multiple API calls

Implementing idempotency strategies

Use unique identifiers

Tallyfy gives you everything you need to catch duplicates - unique IDs in every webhook payload. Use them!

```json
{
  "event": "task.completed",
  "task_id": "abc123",
  "process_id": "xyz789",
  "completed_at": "2024-01-15T10:30:00Z",
  "completed_by": "user@example.com"
}
```

Your game plan:

  1. Store the task_id and completed_at combo in your database
  2. Check this combination before processing any webhook
  3. Already seen it? Skip or update the existing record
  4. Brand new? Process away

Implement event deduplication

You need a dedicated place to track what you’ve already processed. A simple table works wonders:

```sql
CREATE TABLE processed_events (
  event_id VARCHAR(255) PRIMARY KEY,
  event_type VARCHAR(100),
  processed_at TIMESTAMP,
  payload JSON
);
```

Each time a webhook arrives:

  1. Build a unique event ID: task_id + event_type + timestamp
  2. Check your table - have you seen this ID before?
  3. New event? Process it and save that ID
  4. Old news? Log it and move on

Design for graceful failure

Here’s where things get interesting. When duplicate API requests come in, don’t fight them - work with them:

  1. Return success for duplicate requests: External system tries to create the same record twice? Don’t throw an error. Send back a 200 OK with the existing record. Everyone’s happy.

  2. Use conditional updates: Before updating form fields through the API, check what’s already there:

    • Value already matches? Skip the update
    • Different value? Go ahead and update
    • Always add a comment for the audit trail
  3. Use request IDs: Make external systems include a unique ID with each call:

    X-Request-ID: unique-request-identifier-123

    Keep these IDs for 24 hours. When you see a repeat, you’ll know it’s a retry.
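The first point above - answering duplicates with success instead of an error - can be sketched like this. The in-memory `Map`, the `createRecord` function, and the `externalRef` parameter are all illustrative stand-ins, not part of any Tallyfy API:

```javascript
// Minimal sketch of "return success for duplicates": a create operation
// keyed by an external reference ID. A retry of the same creation gets a
// 200 with the existing record instead of an error. (Names are
// illustrative; a real system would back this with a database.)
const records = new Map();

function createRecord(externalRef, data) {
  // Already created for this reference? Return the existing record as success.
  if (records.has(externalRef)) {
    return { status: 200, record: records.get(externalRef), duplicate: true };
  }
  const record = { id: records.size + 1, ...data };
  records.set(externalRef, record);
  return { status: 201, record, duplicate: false };
}
```

The caller can't tell (and doesn't need to know) whether its request was the original or a retry - either way it gets the record back with a success status.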

Best practices for specific integrations

Handling task completion webhooks

Tasks get reopened and recompleted all the time. You’ve got to track the full history:

  1. Keep a record of every completion:

    {
      "task_id": "abc123",
      "completions": [
        {"completed_at": "2024-01-15T10:30:00Z", "completed_by": "user1@example.com"},
        {"completed_at": "2024-01-15T14:45:00Z", "completed_by": "user2@example.com"}
      ]
    }
  2. Pick your strategy based on what makes sense:

    • Care only about the first completion? Ignore the rest
    • Need to track every completion? Store them all separately
    • Only the latest matters? Update your records each time
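The three strategies above fit into one small function. This is a sketch, assuming webhook payloads shaped like the example JSON; `recordCompletion` and the strategy names are illustrative:

```javascript
// Apply one of the three completion-handling strategies to a stored
// completion history. Returns the new history; does not mutate the input.
function recordCompletion(history, event, strategy) {
  const entry = { completed_at: event.completed_at, completed_by: event.completed_by };
  switch (strategy) {
    case 'first':  // only the first completion counts; ignore the rest
      return history.length === 0 ? [entry] : history;
    case 'all':    // track every completion separately
      return [...history, entry];
    case 'latest': // only the most recent completion matters
      return [entry];
    default:
      throw new Error(`Unknown strategy: ${strategy}`);
  }
}
```

Pick the strategy once per integration and keep it consistent - switching strategies mid-stream is what produces mismatched records.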

Managing process-level webhook volume

Got a process with 50 tasks? That’s 50 webhooks headed your way. Here’s how to handle the deluge:

  1. Batch processing: Don’t process events one by one. Collect them and process in chunks every 5 minutes
  2. Use queues: Message queues are your friend - they’ll prevent your system from choking on the volume
  3. Filter smartly: Not all tasks are equal. Check the payload and process only what matters to you
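A minimal in-memory sketch of the batching idea - collect events, flush them in chunks on a timer. A production system would put events on a durable message queue (RabbitMQ, SQS, and the like); this only shows the shape, and the `WebhookBatcher` class is an illustration, not a Tallyfy component:

```javascript
// Buffer incoming webhook events and hand them to a handler in batches on
// an interval, instead of processing one HTTP request at a time.
class WebhookBatcher {
  constructor(handler, flushIntervalMs = 5 * 60 * 1000) {
    this.queue = [];
    this.handler = handler; // called with an array of buffered events
    this.timer = setInterval(() => this.flush(), flushIntervalMs);
  }

  enqueue(event) {
    // "Filter smartly" belongs here: drop events your integration ignores
    this.queue.push(event);
  }

  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0, this.queue.length);
    this.handler(batch);
  }

  stop() {
    clearInterval(this.timer);
    this.flush(); // drain anything still buffered
  }
}
```

Your webhook endpoint then just calls `enqueue(req.body)` and returns immediately, which also keeps response times fast enough that Tallyfy never times out and retries.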

Preventing duplicate API submissions

Stop duplicates before they start when external systems talk to Tallyfy:

  1. Use idempotency keys: Every operation needs its own unique key:

    POST /api/processes/launch
    X-Idempotency-Key: ticket-12345-launch-attempt-1
  2. Check before you leap: Always verify the current state first:

    • About to launch a process? Check if it already exists
    • Completing a task? Make sure it’s not already done
    • Updating a form field? Confirm it needs changing
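Both ideas combine into a small client-side wrapper. The `launchFn` parameter stands in for a real HTTP call to Tallyfy; the key format and header name follow the example above, but the wrapper itself is a sketch, not part of any Tallyfy SDK:

```javascript
// Remember launch results by idempotency key so a retry of the same
// logical operation returns the original result instead of launching twice.
const launchedByKey = new Map();

async function launchProcessOnce(idempotencyKey, templateId, launchFn) {
  // Check before you leap: has this exact launch already succeeded?
  if (launchedByKey.has(idempotencyKey)) {
    return launchedByKey.get(idempotencyKey);
  }
  const result = await launchFn(templateId, {
    headers: { 'X-Idempotency-Key': idempotencyKey },
  });
  launchedByKey.set(idempotencyKey, result);
  return result;
}
```

In production the key-to-result map should live in shared storage (a database or cache) rather than process memory, so retries still deduplicate after a restart.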

Example implementation patterns

Pattern 1: Webhook processor with deduplication

```javascript
async function processWebhook(payload) {
  // Build a unique event key from fields in the webhook payload
  const eventKey = `${payload.task_id}-${payload.event}-${payload.completed_at}`;

  // Check if this event was already processed
  const existing = await db.query('SELECT * FROM processed_events WHERE event_id = ?', [eventKey]);
  if (existing.length > 0) {
    console.log('Duplicate event detected, skipping:', eventKey);
    return { status: 'duplicate', message: 'Event already processed' };
  }

  // Process the event
  await handleEvent(payload);

  // Mark as processed (the PRIMARY KEY on event_id also guards against races)
  await db.query('INSERT INTO processed_events (event_id, processed_at) VALUES (?, NOW())', [eventKey]);
  return { status: 'processed', message: 'Event processed successfully' };
}
```

Pattern 2: API integration with retry safety

```javascript
async function updateTaskField(taskId, fieldName, fieldValue, requestId) {
  // Check if this request was already processed
  const cachedResult = await cache.get(`request:${requestId}`);
  if (cachedResult) {
    return cachedResult;
  }

  // Get current task state
  const task = await tallyfyApi.getTask(taskId);

  // Check if an update is needed
  if (task.fields[fieldName] === fieldValue) {
    const result = { status: 'unchanged', message: 'Field already has the desired value' };
    await cache.set(`request:${requestId}`, result, 86400); // cache for 24 hours
    return result;
  }

  // Perform the update
  const updatedTask = await tallyfyApi.updateTask(taskId, {
    fields: { [fieldName]: fieldValue }
  });

  // Add a comment for the audit trail
  await tallyfyApi.addComment(
    taskId,
    `Field "${fieldName}" updated to "${fieldValue}" via API integration`
  );

  const result = { status: 'updated', task: updatedTask };
  await cache.set(`request:${requestId}`, result, 86400);
  return result;
}
```

Testing your idempotency implementation

Don’t wait for production to find out if your deduplication works. Test it now:

  1. Simulate duplicate webhooks: Fire the same webhook at your system 3-4 times in a row
  2. Test network retries: Use tools like Postman to simulate connection timeouts and automatic retries
  3. Check data consistency: After your tests, verify your data isn’t corrupted or duplicated
  4. Monitor production logs: Watch for duplicate patterns - they’ll show up eventually
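Test #1 is easy to automate. This sketch wires a tiny dedupe-checking processor (the same idea as Pattern 1, minus the database) to a test that fires the identical payload four times and asserts exactly one side effect; all names here are illustrative:

```javascript
// A stand-in processor that deduplicates on a key built from the payload.
const seen = new Set();
let sideEffects = 0; // e.g. database inserts or confirmation emails

function processEventOnce(payload) {
  const key = `${payload.task_id}-${payload.event}-${payload.completed_at}`;
  if (seen.has(key)) return 'duplicate';
  seen.add(key);
  sideEffects += 1;
  return 'processed';
}

// Fire the same webhook payload four times; only one side effect may occur.
function testDuplicateDelivery() {
  const payload = {
    task_id: 'abc123',
    event: 'task.completed',
    completed_at: '2024-01-15T10:30:00Z',
  };
  const results = [];
  for (let i = 0; i < 4; i++) {
    results.push(processEventOnce(payload));
  }
  if (sideEffects !== 1) {
    throw new Error(`expected 1 side effect, got ${sideEffects}`);
  }
  return results;
}
```

Run the same shape of test against your real endpoint (curl in a loop works fine), then check the database for duplicates afterwards.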

Troubleshooting common issues

| Issue | Cause | Solution |
|-------|-------|----------|
| Duplicate records in database | Not checking for existing records before insert | Implement unique constraints and check before insert |
| Missing webhook events | Treating duplicates as errors | Log duplicates but don’t fail the webhook response |
| Inconsistent data state | Processing events out of order | Use timestamps to ensure correct ordering |
| API rate limits from retries | Not caching successful responses | Implement response caching with appropriate TTL |

Important consideration

Always respond with a 2xx status code to webhook requests, even for duplicates. If you return an error, Tallyfy thinks something went wrong and retries - creating even more duplicates. Don’t make things worse!
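In practice that means the acknowledgement must not depend on processing succeeding. A sketch, assuming an Express-style `(req, res)` handler (`makeWebhookHandler` and `handleEvent` are illustrative names, not Tallyfy APIs):

```javascript
// Always acknowledge webhook delivery with 200, even when processing fails.
// A non-2xx reply would make the sender retry - multiplying duplicates.
function makeWebhookHandler(handleEvent) {
  return function webhookHandler(req, res) {
    try {
      handleEvent(req.body); // duplicates included - dedupe inside handleEvent
    } catch (err) {
      // Log it (and ideally queue for internal retry), but still acknowledge.
      console.error('Webhook processing failed:', err.message);
    }
    res.status(200).json({ received: true });
  };
}
```

If processing is slow, acknowledge first and do the work asynchronously - the 2xx only means "received", not "fully processed".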

Next steps

You’ve built idempotency into your integration - now keep it running smoothly:

  1. Monitor your logs for duplicate patterns (they’ll reveal retry behaviors you didn’t expect)
  2. Fine-tune your deduplication window based on real-world data
  3. Need complete audit trails? Consider event sourcing for the full picture
  4. Stay current with Tallyfy’s webhook documentation - payload formats can evolve
