Best practices for instrumenting applications
Understanding structured events
For Tallyfy Manufactory to effectively manage your event-driven workflows and for you to gain deep observability insights, the events themselves must be well-structured. This article explores what structured events are and how to design them for optimal use with Manufactory.
Structured events are records of occurrences within your systems that are formatted as key-value pairs. This organization makes them easily machine-parsable, searchable, and analyzable. Think of a structured event like a well-organized spreadsheet, where each piece of information has its own column (key) and corresponding entry (value). This is in stark contrast to unstructured logs, which are often just lines of free-form text – more like a pile of handwritten notes, harder to sift through and make sense of systematically.
For example, instead of a log line saying "User Bob (ID:123) failed to process payment for order ORD456 at 10:30 AM due to insufficient funds.", a structured event would represent this as:
{ "timestamp": "2023-10-27T10:30:00Z", "event_type": "PaymentProcessingFailed", "user_id": "123", "user_name": "Bob", "order_id": "ORD456", "reason_code": "INSUFFICIENT_FUNDS", "message": "Payment failed due to insufficient funds."}
This structure is critical for Tallyfy Manufactory to efficiently route events, trigger actions, and support powerful querying and analysis of your event data. Without structure, automating responses or gaining insights from event streams becomes significantly more challenging.
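As an illustration of how application code might emit such an event, here is a minimal sketch in Python. The logger name, the `emit_event` helper, and the idea of serializing each event as a single JSON line are assumptions for the example, not a required Manufactory client API.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("payments")

def emit_event(event_type: str, **fields) -> None:
    """Serialize one structured event as a single JSON line.

    Hypothetical helper for illustration only; replace the logger with
    whatever transport actually delivers events to Manufactory.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        **fields,
    }
    logger.info(json.dumps(event))

# The payment-failure example above, expressed as key-value pairs.
emit_event(
    "PaymentProcessingFailed",
    user_id="123",
    user_name="Bob",
    order_id="ORD456",
    reason_code="INSUFFICIENT_FUNDS",
    message="Payment failed due to insufficient funds.",
)
```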
To be truly useful for observability, especially with a system like Tallyfy Manufactory, your structured events should possess several key characteristics:
- Arbitrarily wide: Events should be able to accommodate a diverse and potentially large number of fields (dimensions). This allows you to capture all relevant context.
- High-cardinality fields: Include fields that can have many unique values. For example, `userID`, `orderID`, or a specific `eventID` generated by or for Manufactory are all high-cardinality. These are essential for pinpointing specific instances or users.
- Rich context: Go beyond purely technical data. Include business-specific information like `customerTier`, `productCategory`, or `region`. This allows you to correlate system behavior with business metrics when analyzing events that pass through Manufactory.
- Timestamps and durations: Accurate timestamps are vital for sequencing events and understanding when things happened. If an event represents an operation, its duration is also key for performance analysis. Manufactory itself will add timestamps related to its own processing stages. The sketch after this list shows these characteristics combined in a single event.
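The following hedged sketch shows what a wide, context-rich event might look like in practice. The field names (`customer_tier`, `duration_ms`, and so on) are illustrative choices, not fields Manufactory requires.

```python
import time
import uuid
from datetime import datetime, timezone

start = time.monotonic()
# ... perform the operation being instrumented ...
duration_ms = (time.monotonic() - start) * 1000

event = {
    # Timestamps and durations
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "duration_ms": round(duration_ms, 2),
    # High-cardinality identifiers for pinpointing specific instances
    "event_id": str(uuid.uuid4()),
    "user_id": "123",
    "order_id": "ORD456",
    # Rich business context for correlating behavior with business metrics
    "customer_tier": "enterprise",
    "product_category": "subscriptions",
    "region": "us-east-1",
    # Arbitrarily wide: add as many dimensions as are relevant
    "event_type": "PaymentProcessingFailed",
    "reason_code": "INSUFFICIENT_FUNDS",
}
```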
When designing the structure (or schema) for events that will be processed by or sent to Tallyfy Manufactory, consider the following:
- Identify critical information: What data does Manufactory need to perform its routing and trigger logic? What information will you need to understand an event’s lifecycle and troubleshoot issues related to its processing within or via Manufactory?
- Standardization vs. flexibility: While it’s beneficial to standardize common field names across your events (e.g., `trace_id`, `user_id`, `event_source`), your schema should also be flexible enough to accommodate event-specific data relevant to different Manufactory projects or actors.
- Naming conventions: Adopt clear, consistent, and descriptive field names. Using a nested structure (e.g., `event.name`, `event.source`, `manufactory.event_id`, `manufactory.actor_id`) can help organize attributes. snake_case or camelCase are common choices; whichever you pick, apply it consistently. A sketch combining standardized and event-specific fields follows this list.
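One way to apply these ideas is to wrap every event in a small, standardized envelope while leaving room for event-specific data. This is an illustrative sketch; the envelope fields and the `build_event` helper are assumptions, not a schema Manufactory mandates.

```python
import uuid
from datetime import datetime, timezone
from typing import Optional

def build_event(event_type: str, event_source: str, payload: dict,
                trace_id: Optional[str] = None) -> dict:
    """Wrap event-specific data in a standardized, consistently named envelope."""
    return {
        # Standardized fields shared by every event
        "event": {
            "id": str(uuid.uuid4()),
            "type": event_type,
            "source": event_source,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
        "trace_id": trace_id,
        # Flexible, event-specific data lives under its own key
        "payload": payload,
    }

event = build_event(
    event_type="CustomerProfileUpdated",
    event_source="CRMWebService",
    payload={"customer_id": "CUST001", "changed_fields": ["email"]},
)
```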
While your specific needs will vary, here are some commonly essential fields for events interacting with Tallyfy Manufactory:
- Unique Identifiers: A unique ID for the event instance itself (e.g., `event_id`), and any relevant correlation IDs (e.g., `workflow_instance_id`, `order_id`). If the event is part of a distributed trace, include `trace_id` and `span_id`.
- Event Source: The system or service that originally generated the event (e.g., `CRMWebService`, `PaymentGatewayCallback`).
- Event Type/Name: A clear, descriptive name for the event (e.g., `CustomerProfileUpdated`, `InventoryItemRestocked`, `ManufactoryTriggerFired`).
- Payload: The actual data pertinent to the event, structured in a way that Manufactory actors or downstream systems can easily consume.
- Timestamps: At a minimum, the timestamp of when the event was created at its source. Manufactory will likely add its own timestamps for ingestion and processing milestones.
- Status Indicators: Fields that can indicate the success or failure of the operation the event represents, or reasons for failure if applicable, especially for events generated by Manufactory actors.
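Putting the fields above together, the following sketch models the commonly essential fields as a small data structure. The class name, field names, and defaults are hypothetical conveniences for the example, not a schema Manufactory requires.

```python
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ManufactoryEvent:
    """Commonly essential fields for an event sent to or emitted by Manufactory.

    Illustrative only: adapt the names to your own services and projects.
    """
    event_type: str                      # e.g. "InventoryItemRestocked"
    event_source: str                    # e.g. "PaymentGatewayCallback"
    payload: dict                        # event-specific data for actors/consumers
    status: str = "SUCCESS"              # outcome indicator; add a reason on failure
    trace_id: Optional[str] = None       # correlation with a distributed trace
    event_id: str = field(default_factory=lambda: f"evt_{uuid.uuid4().hex[:10]}")
    timestamp_occurred: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

event = ManufactoryEvent(
    event_type="InventoryItemRestocked",
    event_source="WarehouseService",
    payload={"sku": "SKU-001", "quantity": 25},
)
print(asdict(event))  # plain dict, ready to serialize as JSON
```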
Let’s illustrate with an example of an event that Tallyfy Manufactory might ingest or trigger.
Bad example (unstructured log line):
"LOG: Order ORD789 processed by fulfillment_actor in Manufactory project 'OrderProcessing' at 2023-10-27 14:15:00 UTC, status success, items: 3, user: jane.doe@example.com"
This is hard to query. To find all successful order processing events for Jane Doe, you’d need complex text parsing.
Good example (structured JSON event for Manufactory):
{ "event_id": "evt_67890fghij", "trace_id": "trace_12345abcde", "event_source": "TallyfyManufactory.Project.OrderProcessing", "event_type": "OrderFulfillmentStepCompleted", "timestamp_occurred": "2023-10-27T14:15:00Z", "manufactory_project_id": "OrderProcessing", "manufactory_actor_name": "fulfillment_actor", "order_id": "ORD789", "user_email": "jane.doe@example.com", "item_count": 3, "status": "SUCCESS", "payload": { "shipment_details_retrieved": true, "inventory_updated": true }}
The structured version is far superior because:
- Each piece of information is a distinct field, making it easy to query (e.g., `WHERE user_email = 'jane.doe@example.com' AND status = 'SUCCESS'`).
- Manufactory can use fields like `manufactory_project_id` or `event_type` for routing or triggering specific logic.
- Numerical fields like `item_count` can be aggregated (a small aggregation sketch follows this list).
- The nested `payload` keeps related details organized.
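To make the querying and aggregation point concrete, here is a minimal sketch over a handful of such events held in memory. The `events` variable and field names mirror the example above; in practice you would run equivalent queries wherever your Manufactory event data is stored.

```python
# Assumed: a list of dicts shaped like the structured example above.
events = [
    {"user_email": "jane.doe@example.com", "status": "SUCCESS", "item_count": 3},
    {"user_email": "jane.doe@example.com", "status": "FAILURE", "item_count": 1},
    {"user_email": "sam@example.com", "status": "SUCCESS", "item_count": 5},
]

# Filtering: the structured equivalent of
# WHERE user_email = 'jane.doe@example.com' AND status = 'SUCCESS'
janes_successes = [
    e for e in events
    if e["user_email"] == "jane.doe@example.com" and e["status"] == "SUCCESS"
]

# Aggregation: numerical fields such as item_count can be summed or averaged.
total_items = sum(e["item_count"] for e in janes_successes)
print(len(janes_successes), total_items)  # 1 3
```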
Tallyfy Manufactory, as an event ingestion and lifecycle engine, fundamentally relies on well-structured event data to:
- Perform efficient routing and filtering: Manufactory projects can use specific event attributes to decide which events to process or which actors to trigger.
- Execute actor logic: Actors within Manufactory often need to consume specific data points from the event payload to perform their tasks.
- Enable powerful user queries: For you to understand what Manufactory is doing, you need to be able to search and filter the events it has processed based on their attributes.
- Provide data for monitoring and alerting: Structured event data is essential for creating meaningful metrics and alerts about the health and performance of your event processing through Manufactory.
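As an illustration of the routing and filtering point, a project-level decision can often be expressed as a simple predicate over event attributes. This sketch is hypothetical and does not represent Manufactory's actual configuration or API; it only shows why distinct, well-named fields make such decisions trivial.

```python
def should_trigger_fulfillment(event: dict) -> bool:
    """Hypothetical routing predicate built on well-named event attributes."""
    return (
        event.get("event_type") == "OrderFulfillmentStepCompleted"
        and event.get("manufactory_project_id") == "OrderProcessing"
        and event.get("status") == "SUCCESS"
    )
```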
Your event schemas don’t need to be perfect from day one. Start by instrumenting the information you know you’ll need for Tallyfy Manufactory to function and for your initial troubleshooting scenarios. As your understanding of your event flows deepens and your observability practice matures, you can add more context to your events.
It’s important to consider how your event schemas might evolve. If Manufactory or other downstream systems have strict expectations about the event structure, plan for versioning your event schemas or ensure changes are backward-compatible to avoid breaking integrations.
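One common, lightweight way to prepare for this evolution is to carry an explicit schema version on every event and keep consumers tolerant of unknown fields. The `schema_version` field and the upgrade function below are illustrative conventions, not something Manufactory prescribes.

```python
CURRENT_SCHEMA_VERSION = 2

def upgrade_event(event: dict) -> dict:
    """Normalize older event shapes to the current schema (illustrative only)."""
    version = event.get("schema_version", 1)
    if version < 2:
        # Hypothetical v1 -> v2 change: the `user` field was renamed to `user_email`.
        if "user" in event:
            event["user_email"] = event.pop("user")
        event["schema_version"] = 2
    return event
```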