Advanced patterns and testing
To implement advanced Postman workflows with Tallyfy, use collection runners for bulk operations, Newman for CI/CD integration, and monitors for scheduled checks. These patterns help scale from manual testing to automated workflow management.
Many users work across multiple Tallyfy organizations. Here’s how to test efficiently:
Create separate environments for each organization:
```javascript
// Pre-request script to rotate organizations
const orgs = [
  { name: "Production",  id: "org_prod_123",  clientId: "client_prod",  clientSecret: pm.environment.get("PROD_SECRET") },
  { name: "Staging",     id: "org_stage_456", clientId: "client_stage", clientSecret: pm.environment.get("STAGE_SECRET") },
  { name: "Development", id: "org_dev_789",   clientId: "client_dev",   clientSecret: pm.environment.get("DEV_SECRET") }
];

// Get current org index
let currentIndex = pm.variables.get("ORG_INDEX") || 0;

// Set current org variables
const currentOrg = orgs[currentIndex];
pm.environment.set("TALLYFY_ORG_ID", currentOrg.id);
pm.environment.set("TALLYFY_CLIENT_ID", currentOrg.clientId);
pm.environment.set("TALLYFY_CLIENT_SECRET", currentOrg.clientSecret);

console.log(`Testing with ${currentOrg.name} organization`);

// Rotate to next org for the next run
const nextIndex = (currentIndex + 1) % orgs.length;
pm.variables.set("ORG_INDEX", nextIndex);
```
Compare processes across organizations:
```javascript
// After getting processes from multiple orgs
// (environment variables only store strings, so serialize with JSON)
const orgProcesses = JSON.parse(pm.environment.get("ORG_PROCESSES") || "{}");
const currentOrg = pm.environment.get("TALLYFY_ORG_ID");

// Store current org's processes
orgProcesses[currentOrg] = pm.response.json().data;
pm.environment.set("ORG_PROCESSES", JSON.stringify(orgProcesses));

// If we have data from all orgs, compare
const orgIds = Object.keys(orgProcesses);
if (orgIds.length >= 3) {
  console.log("Process count comparison:");
  orgIds.forEach(orgId => {
    const count = orgProcesses[orgId].length;
    console.log(`${orgId}: ${count} active processes`);
  });

  // Find processes with the same name across orgs
  const processNames = new Set();
  orgIds.forEach(orgId => {
    orgProcesses[orgId].forEach(p => processNames.add(p.name));
  });

  console.log("\nCommon process names across orgs:");
  processNames.forEach(name => {
    const orgsWithProcess = orgIds.filter(orgId =>
      orgProcesses[orgId].some(p => p.name === name)
    );
    if (orgsWithProcess.length > 1) {
      console.log(`"${name}" exists in ${orgsWithProcess.length} orgs`);
    }
  });
}
```
Track API performance to identify bottlenecks:
```javascript
// Add to the collection's Tests tab
pm.test("Response time is acceptable", function () {
  const maxTime = 1000; // 1 second
  pm.expect(pm.response.responseTime).to.be.below(maxTime);
});

// Track performance over time
// (serialize — environment variables only store strings)
const perfData = JSON.parse(pm.environment.get("PERFORMANCE_DATA") || "[]");
perfData.push({
  endpoint: pm.request.url.toString(),
  method: pm.request.method,
  responseTime: pm.response.responseTime,
  timestamp: new Date().toISOString(),
  status: pm.response.code
});

// Keep the last 100 entries
if (perfData.length > 100) {
  perfData.shift();
}

pm.environment.set("PERFORMANCE_DATA", JSON.stringify(perfData));

// Analyze trends
const recentTimes = perfData.slice(-10).map(d => d.responseTime);
const avgTime = recentTimes.reduce((a, b) => a + b, 0) / recentTimes.length;

if (avgTime > 800) {
  console.warn(`Performance degradation detected. Average response time: ${avgTime}ms`);
}
```
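Averages can hide spikes, so a high-percentile view of the same recorded entries is often more revealing. A minimal sketch in plain JavaScript (the `percentile` helper is ours, not a Postman API, so it also runs outside the sandbox):

```javascript
// Nearest-rank percentile over recorded response times.
function percentile(times, p) {
  if (times.length === 0) return 0;
  const sorted = [...times].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Entries shaped like those pushed into PERFORMANCE_DATA above.
const perfData = [
  { responseTime: 120 }, { responseTime: 150 }, { responseTime: 900 },
  { responseTime: 130 }, { responseTime: 140 }
];
const times = perfData.map(d => d.responseTime);
console.log(`p95: ${percentile(times, 95)}ms`); // p95: 900ms — one slow call dominates the tail
```

Here the average (~288ms) looks tolerable while the p95 exposes the outlier, which is usually what users actually feel.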
```javascript
// Aggregate performance by endpoint
const perfData = JSON.parse(pm.environment.get("PERFORMANCE_DATA") || "[]");
const endpointStats = {};

perfData.forEach(entry => {
  const endpoint = entry.endpoint.replace(/\/[a-f0-9\-]+/g, '/:id'); // Normalize IDs

  if (!endpointStats[endpoint]) {
    endpointStats[endpoint] = { count: 0, totalTime: 0, maxTime: 0, minTime: Infinity };
  }

  const stats = endpointStats[endpoint];
  stats.count++;
  stats.totalTime += entry.responseTime;
  stats.maxTime = Math.max(stats.maxTime, entry.responseTime);
  stats.minTime = Math.min(stats.minTime, entry.responseTime);
});

console.log("Endpoint Performance Report:");
Object.entries(endpointStats).forEach(([endpoint, stats]) => {
  const avgTime = (stats.totalTime / stats.count).toFixed(0);
  console.log(`${endpoint}:`);
  console.log(`  Calls: ${stats.count}`);
  console.log(`  Avg: ${avgTime}ms`);
  console.log(`  Min: ${stats.minTime}ms`);
  console.log(`  Max: ${stats.maxTime}ms`);
});
```
Postman mock servers simulate API behavior for testing and development. Unlike simple stubs, they use sophisticated matching algorithms.
Expert insight: Mock servers in Postman use a request-response matching algorithm that considers HTTP method, path, query parameters, and custom headers (x-mock-response-code, x-mock-response-id, x-mock-response-name).
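To make the matching concrete, here is a rough sketch of that idea in plain JavaScript — not Postman's actual implementation, just an illustration of how saved examples could be scored against an incoming request (the `scoreExample`/`pickExample` names are ours):

```javascript
// Score a saved example against a request: method and path must match;
// matching query params and the x-mock-response-name header raise the score.
function scoreExample(example, request) {
  if (example.method !== request.method) return -1;
  if (example.path !== request.path) return -1;
  let score = 0;
  for (const [k, v] of Object.entries(request.query || {})) {
    if ((example.query || {})[k] === v) score += 1;
  }
  const wanted = (request.headers || {})["x-mock-response-name"];
  if (wanted && example.name === wanted) score += 10;
  return score;
}

// Pick the best-scoring saved example for a request.
function pickExample(examples, request) {
  let best = null, bestScore = -1;
  for (const ex of examples) {
    const s = scoreExample(ex, request);
    if (s > bestScore) { best = ex; bestScore = s; }
  }
  return best;
}

const examples = [
  { name: "success", method: "GET", path: "/runs", status: 200 },
  { name: "error",   method: "GET", path: "/runs", status: 500 }
];
const hit = pickExample(examples, {
  method: "GET",
  path: "/runs",
  headers: { "x-mock-response-name": "error" }
});
console.log(hit.status); // 500
```

This is why naming your saved examples matters: a scenario header can deterministically select the error response instead of whichever example happens to match first.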
Intelligent example capture:
```javascript
// Enhanced response capturing with metadata
if (pm.response.code >= 200 && pm.response.code < 400) {
  const examples = JSON.parse(pm.environment.get("MOCK_EXAMPLES") || "{}");
  const endpoint = pm.request.url.getPath();
  const method = pm.request.method;
  const key = `${method}_${endpoint.replace(/\//g, '_')}`;

  examples[key] = {
    request: {
      method: method,
      url: pm.request.url.toString(),
      headers: pm.request.headers.toObject(),
      body: pm.request.body ? pm.request.body.raw : null
    },
    response: {
      status: pm.response.code,
      headers: pm.response.headers.toObject(),
      body: pm.response.text(),
      responseTime: pm.response.responseTime
    },
    timestamp: new Date().toISOString()
  };

  pm.environment.set("MOCK_EXAMPLES", JSON.stringify(examples));
  console.log(`Captured example for ${key}`);
}
```
Dynamic mock server configuration:
```javascript
// Smart mock/real server switching
const mockConfig = {
  development: {
    useMock: true,
    mockUrl: "https://mock-server-123.pstmn.io",
    scenarios: ["success", "error", "timeout"]
  },
  staging: {
    useMock: false,
    realUrl: "https://staging-api.tallyfy.com"
  },
  production: {
    useMock: false,
    realUrl: "https://go.tallyfy.com/api"
  }
};

const environment = pm.environment.get("TARGET_ENV") || "development";
const config = mockConfig[environment];

if (config.useMock) {
  pm.request.url.host = config.mockUrl.replace(/https?:\/\//, '').split('/');
  pm.request.url.protocol = "https";

  // Add mock scenario headers
  const scenario = pm.environment.get("MOCK_SCENARIO") || "success";
  pm.request.headers.add({ key: 'x-mock-response-name', value: scenario });
}
```
Mock server error simulation:
```javascript
// Simulate various API conditions
const errorSimulation = {
  "rate_limit": {
    code: 429,
    headers: { "x-mock-response-code": "429" },
    delay: 1000
  },
  "server_error": {
    code: 500,
    headers: { "x-mock-response-code": "500" }
  },
  "timeout": {
    code: 408,
    headers: { "x-mock-response-code": "408" },
    delay: 30000
  }
};

const simulateError = pm.environment.get("SIMULATE_ERROR");
if (simulateError && errorSimulation[simulateError]) {
  const error = errorSimulation[simulateError];
  Object.entries(error.headers).forEach(([key, value]) => {
    pm.request.headers.add({ key, value });
  });
}
```
Best practice insight: Use mock servers to test system resilience without relying on external APIs you can’t control.
```javascript
// E2E test pattern with mocked dependencies
const testScenarios = {
  "happy_path": {
    mockResponses: ["success", "success", "success"],
    expectedOutcome: "process_completed"
  },
  "partial_failure": {
    mockResponses: ["success", "error", "success"],
    expectedOutcome: "process_retry"
  },
  "complete_failure": {
    mockResponses: ["error", "error", "error"],
    expectedOutcome: "process_failed"
  }
};

const currentScenario = pm.environment.get("E2E_SCENARIO") || "happy_path";
const scenario = testScenarios[currentScenario];

// Set up mock responses for this scenario
scenario.mockResponses.forEach((response, index) => {
  pm.environment.set(`MOCK_STEP_${index + 1}`, response);
});
```
2024 Update: You now have two options for command-line collection execution:
| Feature | Newman | Postman CLI |
|---|---|---|
| Installation | `npm install -g newman` | Download from Postman |
| Authentication | API key only | Full OAuth support |
| Cloud features | Limited | Full workspace sync |
| CI/CD integration | Mature ecosystem | Newer, growing |
| Extensibility | Rich plugin system | Limited but improving |
Newman setup for automated testing:
```shell
# Install Newman (requires Node.js v16+)
npm install -g newman

# Verify installation
newman --version
```
Advanced Newman execution patterns:
```shell
# Basic run with comprehensive reporting
# (the html reporter is a separate package: npm install -g newman-reporter-html)
newman run tallyfy-api.postman_collection.json \
  -e production.postman_environment.json \
  --reporters cli,json,html \
  --reporter-json-export results.json \
  --reporter-html-export report.html \
  --delay-request 100 \
  --timeout-request 30000

# Data-driven testing with repeated iterations
# (Newman has no built-in parallel flag; launch several newman
#  processes from your shell or CI if you need concurrency)
newman run collection.json \
  -e environment.json \
  -d test-data.csv \
  --iteration-count 5

# Advanced error handling
newman run collection.json \
  --bail failure \
  --ignore-redirects \
  --disable-unicode \
  --global-var "API_BASE=https://go.tallyfy.com/api"
```
Expert Newman configuration:
```shell
# Custom delay patterns for rate limiting
newman run collection.json \
  --delay-request 200 \
  --timeout-request 10000 \
  --timeout-script 5000

# Selective execution
newman run collection.json \
  --folder "Authentication Tests" \
  --env-var "SKIP_CLEANUP=true"

# Output management (reporters must be enabled for their exports to apply)
newman run collection.json \
  --reporters cli,json,html \
  --reporter-cli-no-summary \
  --reporter-cli-no-failures \
  --reporter-json-export results.json \
  --reporter-html-export detailed-report.html
```
Production-ready CI/CD pipeline with advanced error handling:
`.github/workflows/api-tests.yml`:
```yaml
name: Tallyfy API Tests

on:
  schedule:
    - cron: '0 */4 * * *'  # Every 4 hours
  workflow_dispatch:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '18'
  NEWMAN_VERSION: 'latest'

jobs:
  api-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [staging, production]
        test-suite: [smoke, full, regression]

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install Newman and reporters
        run: |
          npm install -g newman@${{ env.NEWMAN_VERSION }}
          npm install -g newman-reporter-slack
          npm install -g newman-reporter-teamcity

      - name: Validate collection syntax
        run: |
          # Newman has no dry-run mode; validate the collection JSON instead
          jq empty postman/tallyfy-api.json

      - name: Run API Tests - ${{ matrix.environment }} - ${{ matrix.test-suite }}
        env:
          TALLYFY_CLIENT_ID: ${{ secrets[format('TALLYFY_CLIENT_ID_{0}', matrix.environment)] }}
          TALLYFY_CLIENT_SECRET: ${{ secrets[format('TALLYFY_CLIENT_SECRET_{0}', matrix.environment)] }}
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: |
          newman run postman/tallyfy-api.json \
            -e postman/${{ matrix.environment }}.json \
            --folder "${{ matrix.test-suite }}" \
            --env-var "TALLYFY_CLIENT_ID=$TALLYFY_CLIENT_ID" \
            --env-var "TALLYFY_CLIENT_SECRET=$TALLYFY_CLIENT_SECRET" \
            --reporters cli,json,html,slack \
            --reporter-json-export results-${{ matrix.environment }}-${{ matrix.test-suite }}.json \
            --reporter-html-export report-${{ matrix.environment }}-${{ matrix.test-suite }}.html \
            --reporter-slack-webhookurl "$SLACK_WEBHOOK" \
            --delay-request 100 \
            --timeout-request 30000 \
            --bail failure
        continue-on-error: true

      - name: Parse test results
        id: test-results
        run: |
          RESULT_FILE="results-${{ matrix.environment }}-${{ matrix.test-suite }}.json"

          if [ -f "$RESULT_FILE" ]; then
            TOTAL=$(jq '.run.stats.requests.total' "$RESULT_FILE")
            FAILED=$(jq '.run.stats.requests.failed' "$RESULT_FILE")
            ASSERTIONS_FAILED=$(jq '.run.stats.assertions.failed' "$RESULT_FILE")

            echo "total_requests=$TOTAL" >> $GITHUB_OUTPUT
            echo "failed_requests=$FAILED" >> $GITHUB_OUTPUT
            echo "failed_assertions=$ASSERTIONS_FAILED" >> $GITHUB_OUTPUT

            # Calculate success rate
            SUCCESS_RATE=$(echo "scale=2; ($TOTAL - $FAILED) * 100 / $TOTAL" | bc)
            echo "success_rate=$SUCCESS_RATE" >> $GITHUB_OUTPUT
          fi

      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results-${{ matrix.environment }}-${{ matrix.test-suite }}
          path: |
            results-*.json
            report-*.html
          retention-days: 30

      - name: Comment PR with results
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const successRate = '${{ steps.test-results.outputs.success_rate }}';
            const totalRequests = '${{ steps.test-results.outputs.total_requests }}';
            const failedRequests = '${{ steps.test-results.outputs.failed_requests }}';

            const comment = `
            ## 🧪 API Test Results - ${{ matrix.environment }} - ${{ matrix.test-suite }}

            - **Success Rate**: ${successRate}%
            - **Total Requests**: ${totalRequests}
            - **Failed Requests**: ${failedRequests}

            [View detailed results](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
            `;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: comment
            });

      - name: Fail if tests failed
        if: steps.test-results.outputs.failed_requests > 0
        run: |
          echo "❌ API tests failed: ${{ steps.test-results.outputs.failed_requests }} out of ${{ steps.test-results.outputs.total_requests }} requests failed"
          exit 1

  notification:
    needs: api-tests
    runs-on: ubuntu-latest
    if: always()
    steps:
      - name: Notify team of results
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          text: 'Tallyfy API tests completed'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```
Performance regression detection:
```shell
# Add to the Newman command for performance tracking
newman run collection.json \
  --reporters cli,json \
  --reporter-json-export current-results.json

# Compare with baseline (add to CI script)
node scripts/performance-comparison.js \
  --baseline baseline-results.json \
  --current current-results.json \
  --threshold 20  # 20% performance degradation threshold
```
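The guide doesn't ship `scripts/performance-comparison.js`, so the core logic is sketched below under one assumption: per-request timings in Newman's JSON reporter output appear under `run.executions[].response.responseTime` (the helper names are ours). A real script would read the two files from the CLI flags and `process.exit(1)` on regression.

```javascript
// Average response time across a Newman JSON result object.
function averageResponseTime(results) {
  const times = (results.run.executions || [])
    .map(e => e.response && e.response.responseTime)
    .filter(t => typeof t === "number");
  if (times.length === 0) return 0;
  return times.reduce((a, b) => a + b, 0) / times.length;
}

// Percentage slowdown of `current` relative to `baseline`.
function regressionPercent(baseline, current) {
  const base = averageResponseTime(baseline);
  if (base === 0) return 0;
  return ((averageResponseTime(current) - base) / base) * 100;
}

// Inline fixtures shaped like the reporter output, for illustration:
const baseline = { run: { executions: [{ response: { responseTime: 100 } }, { response: { responseTime: 200 } }] } };
const current  = { run: { executions: [{ response: { responseTime: 180 } }, { response: { responseTime: 220 } }] } };

const THRESHOLD = 20; // percent, matching the --threshold flag
const regression = regressionPercent(baseline, current);
console.log(`Regression: ${regression.toFixed(1)}% (${regression > THRESHOLD ? "FAIL" : "OK"})`); // Regression: 33.3% (FAIL)
```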
Environment-specific test execution:
```yaml
# Advanced environment matrix
strategy:
  matrix:
    include:
      - environment: staging
        test-type: smoke
        timeout: 10000
      - environment: staging
        test-type: full
        timeout: 30000
      - environment: production
        test-type: health-check
        timeout: 5000
```
Postman supports both CSV and JSON data files for comprehensive data-driven testing approaches.
CSV format for simple data sets:
`test-data.csv`:
```csv
process_name,template_id,assignee,priority,expected_status,deadline_days
"Q1 Budget Review","template_123","john@company.com","high","active",7
"Employee Onboarding","template_456","hr@company.com","medium","pending",14
"Equipment Request","template_789","it@company.com","low","draft",30
```
JSON format for complex data structures:
`test-data.json`:
```json
[
  {
    "process_name": "Q1 Budget Review",
    "template_id": "template_123",
    "assignee": "john@company.com",
    "priority": "high",
    "kick_off_data": {
      "field_department": "Finance",
      "field_budget_amount": 50000,
      "field_approval_required": true
    },
    "expected_tasks": 5,
    "validation_rules": {
      "response_time_max": 2000,
      "required_fields": ["id", "name", "status"]
    }
  }
]
```
Advanced Newman execution with data:
```shell
# Run with multiple data sets and detailed reporting
newman run collection.json \
  -e environment.json \
  -d test-data.json \
  --iteration-count 3 \
  --delay-request 500 \
  --reporters cli,json,html \
  --reporter-html-template custom-template.hbs \
  --reporter-json-export detailed-results.json

# Large data sets: trim CLI output for speed
# (Newman has no parallel flag — run multiple newman processes for concurrency)
newman run collection.json \
  -d large-dataset.csv \
  --reporter-cli-no-assertions \
  --reporter-cli-no-console
```
Dynamic data generation in scripts:
```javascript
// Access CSV/JSON data in Postman scripts. With a JSON data file, nested
// values arrive as objects; with CSV, everything arrives as a string —
// so coerce before use.
function asObject(value) {
  if (typeof value === "string") return JSON.parse(value || "{}");
  return value || {};
}

const processName = pm.variables.get("process_name");
const templateId = pm.variables.get("template_id");
const kickOffData = asObject(pm.variables.get("kick_off_data"));

// Dynamic validation based on data
const expectedTasks = parseInt(pm.variables.get("expected_tasks"), 10);
const validationRules = asObject(pm.variables.get("validation_rules"));

pm.test(`Process has expected ${expectedTasks} tasks`, () => {
  const process = pm.response.json();
  pm.expect(process.tasks).to.have.lengthOf(expectedTasks);
});

// Validate response time based on data
if (validationRules.response_time_max) {
  pm.test(`Response time under ${validationRules.response_time_max}ms`, () => {
    pm.expect(pm.response.responseTime).to.be.below(validationRules.response_time_max);
  });
}

// Validate required fields from data
if (validationRules.required_fields) {
  validationRules.required_fields.forEach(field => {
    pm.test(`Response has required field: ${field}`, () => {
      pm.expect(pm.response.json()).to.have.property(field);
    });
  });
}
```
Data-driven error scenario testing:
```json
[
  {
    "scenario": "invalid_template_id",
    "template_id": "invalid_123",
    "expected_status": 404,
    "expected_error": "Template not found"
  },
  {
    "scenario": "missing_required_field",
    "template_id": "template_123",
    "kick_off_data": {},
    "expected_status": 422,
    "expected_error": "Validation failed"
  }
]
```
```javascript
// Test error scenarios from data
const scenario = pm.variables.get("scenario");
const expectedStatus = parseInt(pm.variables.get("expected_status"));
const expectedError = pm.variables.get("expected_error");

pm.test(`${scenario} returns expected status ${expectedStatus}`, () => {
  pm.expect(pm.response.code).to.equal(expectedStatus);
});

if (expectedError) {
  pm.test(`${scenario} returns expected error message`, () => {
    const response = pm.response.json();
    pm.expect(response.error || response.message).to.include(expectedError);
  });
}
```
Simulate complete workflows:
```javascript
// Collection structure:
// 1. Authenticate
// 2. Create Process
// 3. Complete Task 1
// 4. Complete Task 2
// 5. Add Comment
// 6. Complete Final Task
// 7. Verify Process Complete

// Pass data between requests
// In "Create Process" Tests:
const processId = pm.response.json().id;
pm.collectionVariables.set("CURRENT_PROCESS_ID", processId);

// In subsequent requests, use:
// {{CURRENT_PROCESS_ID}}
```
Run multiple operations concurrently:
```javascript
// Execute multiple API calls in parallel
// (pm.sendRequest returns a Promise when called without a callback)
async function parallelOperations() {
  const operations = [
    { name: "List Templates", endpoint: "/checklists" },
    { name: "List Processes", endpoint: "/runs" },
    { name: "List Tasks", endpoint: "/me/tasks" },
    { name: "List Members", endpoint: "/members" }
  ];

  const results = await Promise.all(
    operations.map(op => pm.sendRequest({
      url: `${pm.environment.get("TALLYFY_BASE_URL")}/organizations/${pm.environment.get("TALLYFY_ORG_ID")}${op.endpoint}`,
      method: 'GET',
      header: {
        'Authorization': `Bearer ${pm.environment.get("TALLYFY_ACCESS_TOKEN")}`,
        'X-Tallyfy-Client': 'APIClient'
      }
    }).then(response => ({
      name: op.name,
      status: response.code,
      count: response.json().data?.length || 0,
      time: response.responseTime
    })))
  );

  console.log("Parallel operation results:");
  results.forEach(r => {
    console.log(`${r.name}: ${r.count} items in ${r.time}ms`);
  });

  // The slowest call approximates total wall-clock time when running in parallel
  const totalTime = Math.max(...results.map(r => r.time));
  console.log(`Total parallel execution time: ${totalTime}ms`);
}

parallelOperations();
```
Set up Postman monitors for continuous checking:
Process health check:
```javascript
// Monitor collection to run every hour
pm.test("No stuck processes", function() {
  const processes = pm.response.json().data;
  const stuckCount = processes.filter(p => {
    const hoursSinceUpdate = (Date.now() - new Date(p.updated_at)) / (1000 * 60 * 60);
    return hoursSinceUpdate > 24 && p.status === 'active';
  }).length;
  pm.expect(stuckCount).to.equal(0);
});
```
API availability:
```javascript
pm.test("API is responsive", function() {
  pm.response.to.have.status(200);
  pm.expect(pm.response.responseTime).to.be.below(2000);
});
```
Send alerts when monitors detect issues:
```javascript
// In the monitor's Tests tab. Postman scripts can't inspect pm.test results
// directly (there is no pm.test.failures API), so record failures yourself
// as assertions run.
const failures = [];
function check(name, fn) {
  pm.test(name, fn);
  try { fn(); } catch (e) { failures.push(name); }
}

check("API is responsive", () => pm.response.to.have.status(200));

if (failures.length > 0) {
  const slackWebhook = pm.environment.get("SLACK_WEBHOOK_URL");

  pm.sendRequest({
    url: slackWebhook,
    method: 'POST',
    header: { 'Content-Type': 'application/json' },
    body: {
      mode: 'raw',
      raw: JSON.stringify({
        text: `🚨 Tallyfy API Monitor Alert`,
        attachments: [{
          color: "danger",
          fields: [
            { title: "Failed Tests", value: failures.join("\n"), short: false },
            { title: "Environment", value: pm.environment.name, short: true },
            { title: "Time", value: new Date().toISOString(), short: true }
          ]
        }]
      })
    }
  });
}
```
```javascript
// Optimize requests for better performance
const optimizationConfig = {
  // Reuse connections
  keepAlive: true,

  // Accept compressed responses
  compression: 'gzip',

  // Intended timeouts. Note: per-request timeouts are set in Postman's
  // settings or via Newman's --timeout-request flag — scripts cannot set
  // pm.request.timeout, so this object only documents the target values.
  timeouts: {
    request: 30000,
    script: 5000,
    dns: 5000
  },

  // Rate limiting respect
  delays: {
    betweenRequests: 100,
    onRateLimit: 5000
  }
};

// Adaptive delay based on the previous response
const lastResponseTime = pm.environment.get("LAST_RESPONSE_TIME");
if (lastResponseTime > 2000) {
  // Slow API, increase delay
  pm.environment.set("REQUEST_DELAY", 500);
} else {
  pm.environment.set("REQUEST_DELAY", 100);
}
```
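The two-level delay above can be generalized: on repeated slow or rate-limited responses, back off exponentially instead of bumping to a single fixed value. A small sketch of the delay calculation in plain JavaScript (the `nextDelay` helper, base, and cap are illustrative, not a Postman API):

```javascript
// Exponential backoff with a cap: the delay doubles for each consecutive
// slow/rate-limited response and resets to the base once the API recovers.
function nextDelay(consecutiveSlow, baseMs = 100, capMs = 5000) {
  if (consecutiveSlow <= 0) return baseMs;
  return Math.min(capMs, baseMs * 2 ** consecutiveSlow);
}

console.log(nextDelay(0)); // 100  - healthy, base delay
console.log(nextDelay(3)); // 800  - third consecutive slow response
console.log(nextDelay(8)); // 5000 - capped
```

In a pre-request script you would persist the consecutive-slow counter in an environment variable, increment it when `LAST_RESPONSE_TIME` exceeds your threshold (or on a 429), and store `nextDelay(...)` as `REQUEST_DELAY`.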
```javascript
// Clean up variables after each iteration
const cleanupKeys = [
  "TEMP_PROCESS_ID",
  "TEMP_TASK_DATA",
  "CACHED_RESPONSE",
  "ITERATION_STATE"
];

cleanupKeys.forEach(key => {
  pm.environment.unset(key);
});

// Memory-efficient data handling
const largeDataSet = pm.response.json();
if (largeDataSet.length > 1000) {
  // Process in chunks to avoid memory issues
  const chunkSize = 100;
  for (let i = 0; i < largeDataSet.length; i += chunkSize) {
    const chunk = largeDataSet.slice(i, i + chunkSize);
    processDataChunk(chunk); // your own per-chunk handler
  }
}
```
You’ve mastered advanced Postman patterns including:
- ✅ Multi-environment testing with automated switching
- ✅ CI/CD integration with Newman and GitHub Actions
- ✅ Mock server strategies for E2E testing
- ✅ Data-driven testing with CSV/JSON
- ✅ Performance monitoring and optimization
Continue with:
- Postman > Authentication setup for Postman
- Postman > Working with templates and processes
- Postman > Task operations and automation