
Advanced patterns and testing

Collection runners handle bulk operations. Newman plugs into CI/CD pipelines. Monitors run scheduled checks. Together, these patterns take you from manual API testing to automated workflow validation.

Multi-organization testing

If you work across several Tallyfy organizations, you’ll want a fast way to switch between them during testing.

Environment switching setup

Create a pre-request script that rotates through your orgs automatically:

// Pre-request script to rotate organizations
const orgs = [
  {
    name: "Production",
    id: "org_prod_123",
    clientId: "client_prod",
    clientSecret: pm.environment.get("PROD_SECRET")
  },
  {
    name: "Staging",
    id: "org_stage_456",
    clientId: "client_stage",
    clientSecret: pm.environment.get("STAGE_SECRET")
  }
];

// Variables come back as strings, so parse before doing arithmetic
const currentIndex = parseInt(pm.variables.get("ORG_INDEX"), 10) || 0;
const currentOrg = orgs[currentIndex];

pm.environment.set("TALLYFY_ORG_ID", currentOrg.id);
pm.environment.set("TALLYFY_CLIENT_ID", currentOrg.clientId);
pm.environment.set("TALLYFY_CLIENT_SECRET", currentOrg.clientSecret);
console.log(`Testing with ${currentOrg.name} organization`);

pm.variables.set("ORG_INDEX", (currentIndex + 1) % orgs.length);
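The rotation arithmetic is easy to sanity-check outside Postman with plain Node (the org names here are just placeholders):

```javascript
// Round-robin over org names, mirroring the pre-request script's index math.
const orgNames = ["Production", "Staging"];
let index = 0;
const visits = [];
for (let i = 0; i < 5; i++) {
  visits.push(orgNames[index]);
  index = (index + 1) % orgNames.length;
}
console.log(visits.join(", "));
// Production, Staging, Production, Staging, Production
```

The modulo keeps the index wrapping forever, so every org gets hit no matter how many iterations the runner executes.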

Cross-org data comparison

Compare processes across organizations after fetching from each:

// Environment variables only store strings, so serialize the map
const orgProcesses = JSON.parse(pm.environment.get("ORG_PROCESSES") || "{}");
const currentOrg = pm.environment.get("TALLYFY_ORG_ID");

// Tallyfy wraps responses in a "data" property
orgProcesses[currentOrg] = pm.response.json().data;
pm.environment.set("ORG_PROCESSES", JSON.stringify(orgProcesses));

const orgIds = Object.keys(orgProcesses);
if (orgIds.length >= 2) {
  console.log("Process count comparison:");
  orgIds.forEach(orgId => {
    console.log(`${orgId}: ${orgProcesses[orgId].length} active processes`);
  });

  // Find processes with the same name across orgs
  const processNames = new Set();
  orgIds.forEach(orgId => {
    orgProcesses[orgId].forEach(p => processNames.add(p.name));
  });
  processNames.forEach(name => {
    const orgsWithProcess = orgIds.filter(orgId =>
      orgProcesses[orgId].some(p => p.name === name)
    );
    if (orgsWithProcess.length > 1) {
      console.log(`"${name}" exists in ${orgsWithProcess.length} orgs`);
    }
  });
}
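The shared-name check can be factored into a plain function and exercised with sample data outside Postman (the org IDs and process names below are illustrative):

```javascript
// Return process names that appear in more than one organization.
function sharedProcessNames(orgProcesses) {
  const orgIds = Object.keys(orgProcesses);
  const names = new Set(orgIds.flatMap(id => orgProcesses[id].map(p => p.name)));
  return [...names].filter(name =>
    orgIds.filter(id => orgProcesses[id].some(p => p.name === name)).length > 1
  );
}

const sample = {
  org_a: [{ name: "Onboarding" }, { name: "Budget Review" }],
  org_b: [{ name: "Onboarding" }]
};
console.log(sharedProcessNames(sample)); // ["Onboarding"]
```

Keeping the comparison logic in a pure function like this also makes it trivial to move into a Newman-driven script later.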

Performance monitoring

Response time tracking

Add this to your collection’s Tests tab to track response times over multiple runs:

pm.test("Response time is acceptable", function () {
  pm.expect(pm.response.responseTime).to.be.below(1000);
});

// Environment variables store strings, so serialize the history array
const perfData = JSON.parse(pm.environment.get("PERFORMANCE_DATA") || "[]");
perfData.push({
  endpoint: pm.request.url.toString(),
  method: pm.request.method,
  responseTime: pm.response.responseTime,
  timestamp: new Date().toISOString(),
  status: pm.response.code
});

// Keep last 100 entries
if (perfData.length > 100) perfData.shift();
pm.environment.set("PERFORMANCE_DATA", JSON.stringify(perfData));

const recentTimes = perfData.slice(-10).map(d => d.responseTime);
const avgTime = recentTimes.reduce((a, b) => a + b, 0) / recentTimes.length;
if (avgTime > 800) {
  console.warn(`Performance degradation detected. Avg: ${avgTime.toFixed(0)}ms`);
}
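Averages hide tail latency. If you want a stricter check, a percentile is a small addition (a sketch to adapt, not part of the script above):

```javascript
// p95: the response time that roughly 95% of recorded calls stay under.
// Index math is the simple nearest-rank approximation.
function percentile(times, p) {
  const sorted = [...times].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * p));
  return sorted[idx];
}

console.log(percentile([100, 120, 110, 3000], 0.95)); // 3000
```

A single 3-second outlier barely moves a 10-sample average but shows up immediately in the p95, which is usually what your users actually feel.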

Endpoint performance comparison

const perfData = JSON.parse(pm.environment.get("PERFORMANCE_DATA") || "[]");
const endpointStats = {};
perfData.forEach(entry => {
  // Normalize UUIDs and numeric IDs to /:id
  const endpoint = entry.endpoint.replace(/\/[a-f0-9\-]{8,}/g, '/:id');
  if (!endpointStats[endpoint]) {
    endpointStats[endpoint] = { count: 0, totalTime: 0, maxTime: 0, minTime: Infinity };
  }
  const stats = endpointStats[endpoint];
  stats.count++;
  stats.totalTime += entry.responseTime;
  stats.maxTime = Math.max(stats.maxTime, entry.responseTime);
  stats.minTime = Math.min(stats.minTime, entry.responseTime);
});

Object.entries(endpointStats).forEach(([endpoint, stats]) => {
  const avg = (stats.totalTime / stats.count).toFixed(0);
  console.log(`${endpoint}: ${stats.count} calls, avg ${avg}ms, min ${stats.minTime}ms, max ${stats.maxTime}ms`);
});
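The ID-normalization regex is worth checking in isolation: it collapses long hex/dash path segments but leaves ordinary words alone, and it will not catch IDs containing letters outside a-f (adjust the character class if your IDs do):

```javascript
// Collapse path segments of 8+ hex/dash characters into /:id
const normalize = url => url.replace(/\/[a-f0-9\-]{8,}/g, '/:id');

console.log(normalize("https://go.tallyfy.com/api/organizations/runs/a1b2c3d4-e5f6-7890"));
// https://go.tallyfy.com/api/organizations/runs/:id
```

Grouping by the normalized path is what lets hundreds of calls to different run IDs roll up into one `/runs/:id` statistic.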

Mock server setup

Postman mock servers let you simulate API responses without hitting the real Tallyfy API. They match requests by HTTP method, path, query parameters, and headers like x-mock-response-code or x-mock-response-name.

Capturing examples for mocks

// Save successful responses as mock examples
if (pm.response.code >= 200 && pm.response.code < 400) {
  // Serialize: environment variables only store strings
  const examples = JSON.parse(pm.environment.get("MOCK_EXAMPLES") || "{}");
  const key = `${pm.request.method}_${pm.request.url.getPath().replace(/\//g, '_')}`;
  examples[key] = {
    request: {
      method: pm.request.method,
      url: pm.request.url.toString(),
      headers: pm.request.headers.toObject(),
      body: pm.request.body ? pm.request.body.raw : null
    },
    response: {
      status: pm.response.code,
      headers: pm.response.headers.toObject(),
      body: pm.response.text()
    },
    timestamp: new Date().toISOString()
  };
  pm.environment.set("MOCK_EXAMPLES", JSON.stringify(examples));
}

Switching between mock and real servers

const mockConfig = {
  development: {
    useMock: true,
    mockUrl: "https://mock-server-123.pstmn.io"
  },
  staging: {
    useMock: false,
    realUrl: "https://go.tallyfy.com/api"
  },
  production: {
    useMock: false,
    realUrl: "https://go.tallyfy.com/api"
  }
};

const env = pm.environment.get("TARGET_ENV") || "development";
const config = mockConfig[env];
if (config.useMock) {
  // url.host expects an array of hostname segments, so split on dots
  pm.request.url.host = config.mockUrl.replace(/https?:\/\//, '').split('.');
  pm.request.url.protocol = "https";
  const scenario = pm.environment.get("MOCK_SCENARIO") || "success";
  pm.request.headers.add({ key: 'x-mock-response-name', value: scenario });
}

Error simulation

const errorSimulation = {
  "rate_limit": { headers: { "x-mock-response-code": "429" } },
  "server_error": { headers: { "x-mock-response-code": "500" } },
  "timeout": { headers: { "x-mock-response-code": "408" } }
};

const simulateError = pm.environment.get("SIMULATE_ERROR");
if (simulateError && errorSimulation[simulateError]) {
  Object.entries(errorSimulation[simulateError].headers).forEach(([key, value]) => {
    pm.request.headers.add({ key, value });
  });
}

CI/CD integration

Newman vs Postman CLI

Feature        | Newman                 | Postman CLI
---------------|------------------------|----------------------
Installation   | npm install -g newman  | Download from Postman
Authentication | API key only           | Full OAuth support
Cloud features | Limited                | Full workspace sync
CI/CD maturity | Well-established       | Newer, growing
Extensibility  | Rich plugin system     | Limited but improving

Newman setup

# Install Newman (requires Node.js v16+)
npm install -g newman
newman --version

Running collections with Newman

# Basic run with reporting
newman run tallyfy-api.postman_collection.json \
-e production.postman_environment.json \
--reporters cli,json,html \
--reporter-json-export results.json \
--reporter-html-export report.html \
--delay-request 100 \
--timeout-request 30000
# Data-driven run
newman run collection.json \
-e environment.json \
-d test-data.csv \
--iteration-count 5
# Stop on first failure
newman run collection.json \
--bail failure \
--global-var "API_BASE=https://go.tallyfy.com/api"
# Run a specific folder only
newman run collection.json \
--folder "Authentication Tests" \
--env-var "SKIP_CLEANUP=true"

GitHub Actions integration

Here’s a working pipeline that tests against multiple environments:

.github/workflows/api-tests.yml:

name: Tallyfy API Tests
on:
  schedule:
    - cron: '0 */4 * * *'
  workflow_dispatch:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '18'

jobs:
  api-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [staging, production]
        test-suite: [smoke, full]
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install Newman
        run: npm install -g newman newman-reporter-htmlextra
      - name: Run API tests
        env:
          TALLYFY_CLIENT_ID: ${{ secrets[format('TALLYFY_CLIENT_ID_{0}', matrix.environment)] }}
          TALLYFY_CLIENT_SECRET: ${{ secrets[format('TALLYFY_CLIENT_SECRET_{0}', matrix.environment)] }}
        run: |
          newman run postman/tallyfy-api.json \
            -e postman/${{ matrix.environment }}.json \
            --folder "${{ matrix.test-suite }}" \
            --env-var "TALLYFY_CLIENT_ID=$TALLYFY_CLIENT_ID" \
            --env-var "TALLYFY_CLIENT_SECRET=$TALLYFY_CLIENT_SECRET" \
            --reporters cli,json \
            --reporter-json-export results-${{ matrix.environment }}-${{ matrix.test-suite }}.json \
            --delay-request 100 \
            --timeout-request 30000 \
            --bail failure
        continue-on-error: true
      - name: Parse results
        id: test-results
        run: |
          RESULT_FILE="results-${{ matrix.environment }}-${{ matrix.test-suite }}.json"
          if [ -f "$RESULT_FILE" ]; then
            TOTAL=$(jq '.run.stats.requests.total' "$RESULT_FILE")
            FAILED=$(jq '.run.stats.requests.failed' "$RESULT_FILE")
            echo "total_requests=$TOTAL" >> $GITHUB_OUTPUT
            echo "failed_requests=$FAILED" >> $GITHUB_OUTPUT
          fi
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results-${{ matrix.environment }}-${{ matrix.test-suite }}
          path: results-*.json
          retention-days: 30
      - name: Fail on test failures
        if: steps.test-results.outputs.failed_requests > 0
        run: |
          echo "API tests failed: ${{ steps.test-results.outputs.failed_requests }}/${{ steps.test-results.outputs.total_requests }}"
          exit 1

Performance regression detection

newman run collection.json \
--reporters cli,json \
--reporter-json-export current-results.json
# Compare against a saved baseline
node scripts/performance-comparison.js \
--baseline baseline-results.json \
--current current-results.json \
--threshold 20
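A minimal sketch of what scripts/performance-comparison.js could do. It assumes the Newman JSON reporter exposes an average response time at run.timings.responseAverage; verify the field name against the files your Newman version actually writes:

```javascript
// Percent change in average response time between two Newman result files.
function regressionPercent(baselineAvg, currentAvg) {
  return ((currentAvg - baselineAvg) / baselineAvg) * 100;
}

// Flag a regression when the slowdown exceeds the threshold percentage.
function checkRegression(baseline, current, thresholdPct) {
  const delta = regressionPercent(
    baseline.run.timings.responseAverage,
    current.run.timings.responseAverage
  );
  return { delta, regressed: delta > thresholdPct };
}

// Inline stand-ins for JSON.parse(fs.readFileSync(...)) on the two files.
const baseline = { run: { timings: { responseAverage: 200 } } };
const current = { run: { timings: { responseAverage: 260 } } };
console.log(checkRegression(baseline, current, 20)); // { delta: 30, regressed: true }
```

In a real script you would read both files with fs, print the per-endpoint deltas, and `process.exit(1)` when `regressed` is true so the CI job fails.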

Data-driven testing

Postman supports CSV and JSON data files. CSV works for flat data; JSON handles nested structures.

CSV example - test-data.csv:

process_name,template_id,assignee,expected_status
"Q1 Budget Review","template_123","john@company.com","active"
"Employee Onboarding","template_456","hr@company.com","pending"
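Newman parses the CSV for you; this sketch only illustrates the row-to-variable mapping you end up with per iteration (the naive comma split breaks on quoted commas, so don't use it as a real parser):

```javascript
// Map CSV rows to the per-iteration objects a runner exposes.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n");
  const keys = header.split(",");
  return rows.map(row =>
    Object.fromEntries(
      // Strip the surrounding quotes from each value
      row.split(",").map((value, i) => [keys[i], value.replace(/^"|"$/g, "")])
    )
  );
}

const csv = 'process_name,template_id\n"Q1 Budget Review","template_123"';
console.log(parseCsv(csv));
// [{ process_name: "Q1 Budget Review", template_id: "template_123" }]
```

Each object becomes one iteration, and each key becomes a variable you read with pm.variables.get in your request and test scripts.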

JSON example - test-data.json:

[
  {
    "process_name": "Q1 Budget Review",
    "template_id": "template_123",
    "assignee": "john@company.com",
    "kick_off_data": {
      "field_department": "Finance",
      "field_budget_amount": 50000
    },
    "expected_tasks": 5,
    "validation_rules": {
      "response_time_max": 2000,
      "required_fields": ["id", "name", "status"]
    }
  }
]

Using data variables in tests:

const expectedTasks = parseInt(pm.variables.get("expected_tasks"), 10);

// JSON data files may hand you an object directly; CSV always gives a string
let validationRules = pm.variables.get("validation_rules") || {};
if (typeof validationRules === "string") {
  validationRules = JSON.parse(validationRules);
}

pm.test(`Process has ${expectedTasks} tasks`, () => {
  const response = pm.response.json();
  pm.expect(response.data.tasks).to.have.lengthOf(expectedTasks);
});

if (validationRules.response_time_max) {
  pm.test(`Response under ${validationRules.response_time_max}ms`, () => {
    pm.expect(pm.response.responseTime).to.be.below(validationRules.response_time_max);
  });
}
if (validationRules.required_fields) {
  validationRules.required_fields.forEach(field => {
    pm.test(`Has field: ${field}`, () => {
      pm.expect(pm.response.json().data).to.have.property(field);
    });
  });
}

Error scenario data file:

[
  {
    "scenario": "invalid_template_id",
    "template_id": "invalid_123",
    "expected_status": 404
  },
  {
    "scenario": "missing_required_field",
    "template_id": "template_123",
    "kick_off_data": {},
    "expected_status": 422
  }
]

The matching test script asserts the status each row expects:

const scenario = pm.variables.get("scenario");
const expectedStatus = parseInt(pm.variables.get("expected_status"), 10);
pm.test(`${scenario} returns ${expectedStatus}`, () => {
  pm.expect(pm.response.code).to.equal(expectedStatus);
});

Collection runner patterns

Workflow simulation

Structure your collection to mirror a full workflow, passing data between requests:

// Collection order:
// 1. Authenticate
// 2. Create process (POST /organizations/{org}/runs)
// 3. Complete tasks (PUT /organizations/{org}/runs/{run}/tasks/{task})
// 4. Add comment
// 5. Verify process complete
// In "Create Process" Tests tab - note the .data wrapper:
const processId = pm.response.json().data.id;
pm.collectionVariables.set("CURRENT_PROCESS_ID", processId);
// Subsequent requests reference {{CURRENT_PROCESS_ID}}

Parallel API calls

Fire multiple requests at once to compare response times:

// pm.sendRequest is callback-based, so wrap it in a Promise first
function send(op) {
  return new Promise((resolve, reject) => {
    pm.sendRequest({
      url: `${pm.environment.get("TALLYFY_BASE_URL")}/organizations/${pm.environment.get("TALLYFY_ORG_ID")}${op.endpoint}`,
      method: 'GET',
      header: {
        'Authorization': `Bearer ${pm.environment.get("TALLYFY_ACCESS_TOKEN")}`,
        'X-Tallyfy-Client': 'APIClient'
      }
    }, (err, response) => {
      if (err) return reject(err);
      resolve({
        name: op.name,
        status: response.code,
        count: response.json().data?.length || 0,
        time: response.responseTime
      });
    });
  });
}

const operations = [
  { name: "List Templates", endpoint: "/checklists" },
  { name: "List Processes", endpoint: "/runs" },
  { name: "List Tasks", endpoint: "/me/tasks" },
  { name: "List Users", endpoint: "/users" }
];

Promise.all(operations.map(send)).then(results => {
  results.forEach(r => {
    console.log(`${r.name}: ${r.count} items in ${r.time}ms`);
  });
});

Monitoring and alerting

Scheduled monitors

Postman monitors run collections on a schedule. Two things worth monitoring:

Stuck process detection:

pm.test("No stuck processes", function() {
  const processes = pm.response.json().data;
  const stuckCount = processes.filter(p => {
    const hoursSinceUpdate = (Date.now() - new Date(p.updated_at)) / 3600000;
    return hoursSinceUpdate > 24 && p.status === 'active';
  }).length;
  pm.expect(stuckCount).to.equal(0);
});
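The same filter works as a pure function with an injected clock, which makes the 24-hour rule testable against fixed timestamps instead of Date.now():

```javascript
// Count active processes idle longer than maxIdleHours, relative to nowMs.
function countStuck(processes, nowMs, maxIdleHours = 24) {
  return processes.filter(p =>
    p.status === "active" &&
    (nowMs - Date.parse(p.updated_at)) / 3600000 > maxIdleHours
  ).length;
}

const now = Date.parse("2024-06-02T00:00:00Z");
const processes = [
  { status: "active", updated_at: "2024-05-30T00:00:00Z" }, // 72h idle -> stuck
  { status: "active", updated_at: "2024-06-01T12:00:00Z" }  // 12h idle -> fine
];
console.log(countStuck(processes, now)); // 1
```

Passing the clock in also lets you tune maxIdleHours per monitor without touching the filter logic.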

API availability:

pm.test("API is responsive", function() {
  pm.response.to.have.status(200);
  pm.expect(pm.response.responseTime).to.be.below(2000);
});

Slack alerts on failure

// Postman scripts can't read test results directly, so record failures
// yourself: in each pm.test callback, push the test name into a
// FAILED_TESTS environment variable when an expectation throws.
const failures = JSON.parse(pm.environment.get("FAILED_TESTS") || "[]");
if (failures.length > 0) {
  pm.sendRequest({
    url: pm.environment.get("SLACK_WEBHOOK_URL"),
    method: 'POST',
    header: { 'Content-Type': 'application/json' },
    body: {
      mode: 'raw',
      raw: JSON.stringify({
        text: "Tallyfy API Monitor Alert",
        attachments: [{
          color: "danger",
          fields: [
            { title: "Failed Tests", value: failures.join("\n") },
            { title: "Environment", value: pm.environment.name, short: true },
            { title: "Time", value: new Date().toISOString(), short: true }
          ]
        }]
      })
    }
  });
}

Performance optimization

Adaptive request delays

// Slow down when the API responds slowly. Record the time in your Tests tab:
// pm.environment.set("LAST_RESPONSE_TIME", pm.response.responseTime)
const lastResponseTime = parseInt(pm.environment.get("LAST_RESPONSE_TIME"), 10) || 0;
if (lastResponseTime > 2000) {
  pm.environment.set("REQUEST_DELAY", 500);
} else {
  pm.environment.set("REQUEST_DELAY", 100);
}
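The same rule reads cleanly as a pure function, which helps if you later want more tiers or different thresholds:

```javascript
// Pick the next inter-request delay from the last observed response time.
function nextDelayMs(lastResponseTimeMs) {
  return lastResponseTimeMs > 2000 ? 500 : 100;
}

console.log(nextDelayMs(2500), nextDelayMs(300)); // 500 100
```

Backing off when latency climbs keeps a long collection run from piling load onto an API that is already struggling.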

Memory cleanup between iterations

["TEMP_PROCESS_ID", "TEMP_TASK_DATA", "CACHED_RESPONSE", "ITERATION_STATE"]
  .forEach(key => pm.environment.unset(key));
