Querying Logic App Performance with KQL: A Practical Guide
When you’re deep into delivering secure, scalable cloud solutions, performance visibility is everything. For Logic Apps—especially in enterprise-grade deployments—understanding execution time is key to optimizing workflows, controlling costs, and ensuring SLAs are met.
In this post, we’ll walk through how to query Logic App run durations using Kusto Query Language (KQL), with a focus on real-world diagnostics via Log Analytics and Application Insights.
One of the hardest parts is simply knowing where to start. Let’s dive in!
Where to Start: What Tables Matter?
Depending on your Logic App SKU and telemetry setup, you’ll be querying different tables:
| Logic App Type | Telemetry Source | Primary Table(s) |
|---|---|---|
| Consumption | Log Analytics | AzureDiagnostics |
| Standard (single-tenant / App Service plan) | Application Insights | Traces, LogicAppWorkflowRuntime |
Step 1: Schema Discovery — What Fields Are Available?
Before diving into metrics, inspect the structure of your telemetry:
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| take 5
```
Purpose:
- See what fields are available (e.g., `properties_runId`, `properties_status`, `properties_workflowName`, etc.).
- Validate whether `properties_runDurationMs_s` or a similar duration field exists.
- Confirm your Logic App is emitting the expected telemetry.
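Beyond sampling rows, you can dump the full column inventory with KQL's built-in `getschema` operator. A quick way to see every field and its type:

```kusto
// List every column and its data type exposed by the AzureDiagnostics table
AzureDiagnostics
| getschema
```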
Step 2: List Recent Workflow Runs with Status
This gives you a clean view of recent executions and their outcomes:
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| project TimeGenerated, Resource, properties_workflowName, properties_runId, properties_status
| order by TimeGenerated desc
```
Purpose:
- Track recent workflow runs.
- Spot failures or long gaps between executions.
- Use `properties_status` to filter for `"Succeeded"`, `"Failed"`, etc.
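To turn that list into a trend, you can bucket failures by day. A minimal sketch, assuming your workspace exposes the same `properties_status` and `properties_workflowName` fields used above:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| where properties_status == "Failed"
// Count failures per workflow per day to spot regressions at a glance
| summarize FailedRuns = count() by properties_workflowName, bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```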
Step 3: Calculate Run Duration (if available)
If your telemetry includes `properties_runDurationMs_s`, this query summarizes performance:
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| extend DurationMs = todouble(properties_runDurationMs_s)
| summarize avg(DurationMs), min(DurationMs), max(DurationMs), count() by properties_workflowName
| order by avg(DurationMs) desc
```
Purpose:
- Analyze average, min, and max run durations per workflow.
- Identify workflows that may need optimization.
- Use `count()` to gauge execution frequency.

If `properties_runDurationMs_s` is missing, fall back to calculating duration from the `WorkflowRunStarted` and `WorkflowRunCompleted` timestamps (as shown in Option 1 below).
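Averages can also hide outliers. If the duration field does exist in your workspace, percentiles give a fuller picture; here is a sketch under that same assumption:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| extend DurationMs = todouble(properties_runDurationMs_s)
// p50 shows typical behavior; p95/p99 expose the slow tail
| summarize percentiles(DurationMs, 50, 95, 99) by properties_workflowName
```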
Bonus: Filter by Specific Workflow or Time Range
To isolate a specific Logic App or timeframe:
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| where properties_workflowName == "MyLogicAppName"
| where TimeGenerated > ago(7d)
| project TimeGenerated, properties_runId, properties_status, properties_runDurationMs_s
```
Limiting to a specific workflow or time range lets you:
- Focus on a single Logic App.
- Restrict results to the past 7 days.
- Troubleshoot targeted issues or validate SLAs.
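You can also visualize that window. A minimal sketch, again assuming `properties_runDurationMs_s` exists in your schema, charting hourly average duration for one workflow:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunCompleted"
| where properties_workflowName == "MyLogicAppName"
| where TimeGenerated > ago(7d)
| extend DurationMs = todouble(properties_runDurationMs_s)
// Hourly buckets smooth out noise while keeping the trend visible
| summarize AvgDurationMs = avg(DurationMs) by bin(TimeGenerated, 1h)
| render timechart
```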
Now that we have a good understanding of the schema we’re working with, let’s build out our queries.
Option 1: Using AzureDiagnostics for Consumption Logic Apps
If your Logic App is sending diagnostics to Log Analytics, you can query the `AzureDiagnostics` table for workflow run events.
Sample Query: Run Duration via Start/End Events
```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunStarted" or OperationName == "WorkflowRunCompleted"
| extend RunId = tostring(properties_runId)
| summarize StartTime = min(TimeGenerated), EndTime = max(TimeGenerated) by RunId, Resource
| extend DurationMs = datetime_diff("millisecond", EndTime, StartTime)
| order by DurationMs desc
```
This quick query:
- Filters for workflow start and completion events.
- Groups by `RunId` to isolate each execution.
- Calculates duration using `TimeGenerated`.

Heads-up: Some blog posts suggest using `properties_runDurationMs_s`, but this field may not exist in your environment. Always inspect your schema first with `take 1` (see Step 1 above for how to work through your own schema).
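Building on the same start/end pattern, here is a hedged example of an SLA check; the 60-second threshold is a hypothetical value you would replace with your own target:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where OperationName == "WorkflowRunStarted" or OperationName == "WorkflowRunCompleted"
| extend RunId = tostring(properties_runId)
| summarize StartTime = min(TimeGenerated), EndTime = max(TimeGenerated) by RunId, Resource
| extend DurationMs = datetime_diff("millisecond", EndTime, StartTime)
// Hypothetical SLA: surface only runs slower than 60 seconds
| where DurationMs > 60000
| order by DurationMs desc
```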
Option 2: Using Traces for Logic App Standard
If you’re using Logic App Standard with Application Insights, and you’ve instrumented your workflow with custom `trackTrace` actions, you can query the `Traces` table for performance insights.
Sample Query: Duration from Custom Trace Events
```kusto
Traces
| where message has "LogicAppStart" or message has "LogicAppEnd"
| extend WorkflowName = tostring(customDimensions["WorkflowName"])
| extend RunId = tostring(customDimensions["RunId"])
| summarize StartTime = min(timestamp), EndTime = max(timestamp) by RunId, WorkflowName
| extend DurationMs = datetime_diff("millisecond", EndTime, StartTime)
| order by DurationMs desc
```
This approach requires:
- Emitting trace messages at workflow start and end inside your Logic App.
- Including metadata like `RunId` and `WorkflowName` in `customDimensions`.
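One caveat with the min/max approach: it will happily produce a "duration" for a run that only logged one of the two markers. Here is a sketch, assuming the same hypothetical LogicAppStart/LogicAppEnd messages, that keeps only runs with both events:

```kusto
Traces
| where message has "LogicAppStart" or message has "LogicAppEnd"
| extend WorkflowName = tostring(customDimensions["WorkflowName"])
| extend RunId = tostring(customDimensions["RunId"])
| summarize StartTime = min(timestamp), EndTime = max(timestamp),
            Starts = countif(message has "LogicAppStart"),
            Ends = countif(message has "LogicAppEnd") by RunId, WorkflowName
// Discard runs missing either marker to avoid misleading durations
| where Starts > 0 and Ends > 0
| extend DurationMs = datetime_diff("millisecond", EndTime, StartTime)
```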
Option 3: Using LogicAppWorkflowRuntime (Best for Standard)
If you’re using Logic App Standard and have enabled full telemetry, the `LogicAppWorkflowRuntime` table gives you clean, structured access to workflow metrics.
Sample Query: Summary of Run Durations
```kusto
LogicAppWorkflowRuntime
| where WorkflowName == "YourLogicAppName"
| where Status == "Succeeded"
| extend DurationMs = datetime_diff('millisecond', EndTime, StartTime)
| summarize avg(DurationMs), min(DurationMs), max(DurationMs) by WorkflowName
```
This is the most direct way to analyze performance—no need for custom instrumentation.
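To watch performance over time rather than in aggregate, here is a sketch using the same columns as the query above (verify they exist in your workspace first):

```kusto
LogicAppWorkflowRuntime
| where StartTime > ago(30d)
| extend DurationMs = datetime_diff('millisecond', EndTime, StartTime)
// Daily p95 highlights slow-tail drift that averages can mask
| summarize percentile(DurationMs, 95) by WorkflowName, bin(StartTime, 1d)
| render timechart
```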
Pro Tips for Production Monitoring
- Always inspect the schema first: use `take 1` to validate field names before building queries.
- Use `summarize` for trends: group by time, status, or workflow name to spot anomalies.
- Instrument with intent: for Standard Logic Apps, use `trackTrace` to log business-critical checkpoints.
- Ensure diagnostic settings are enabled: for Consumption Logic Apps, confirm diagnostic settings are configured and sending logs to the correct Log Analytics workspace.
- Correlate with cost: longer runs often mean higher consumption, so tie duration metrics to billing insights.
Conclusion
Whether you’re optimizing ingestion pipelines or tracking SLAs for enterprise workflows, KQL gives you the precision to monitor Logic App performance at scale. The key is knowing which telemetry source you’re working with—and tailoring your queries accordingly.
If you’re deploying multi-phase Sentinel or Purview solutions and want to bake in performance observability from day one, this is the kind of telemetry strategy that pays dividends.