## Comparison
| Provider | LLM Tracing | Tool Tracing | Framework |
|---|---|---|---|
| TemporalTracingModelProvider | Auto | No | OpenAI Agents SDK |
| adk.providers.litellm | Auto | No | LiteLLM |
| create_langgraph_tracing_handler | Auto | Auto | LangGraph / LangChain |
## Providers
- OpenAI Agents SDK
- LiteLLM
- LangGraph
## How It Works
For Temporal-style agents using the OpenAI Agents SDK, LLM calls are auto-traced via `TemporalTracingModelProvider`. For non-Agentex usage of the OpenAI Agents SDK with the SGP tracing SDK directly, see the OpenAI Agents Integration page.

The tracing flow is:

1. Your workflow sets `_task_id`, `_trace_id`, and `_parent_span_id` on the workflow instance
2. `ContextWorkflowOutboundInterceptor` propagates these as Temporal headers to activities
3. `ContextActivityInboundInterceptor` reads the headers and sets `ContextVar`s
4. `TemporalTracingResponsesModel` wraps the base model and creates a span for each `get_response()` call
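The hand-off in steps 1–3 can be sketched with plain functions. This is an illustration of the propagation pattern only, not the ADK's actual interceptor code; the header keys and helper names below are assumptions.

```python
from contextvars import ContextVar

# ContextVars that the inbound side populates for each activity run.
task_id_var: ContextVar = ContextVar("task_id", default=None)
trace_id_var: ContextVar = ContextVar("trace_id", default=None)
parent_span_id_var: ContextVar = ContextVar("parent_span_id", default=None)

def inject_headers(workflow_state: dict) -> dict:
    """Outbound step: copy tracing IDs from the workflow instance into headers."""
    return {
        "task-id": workflow_state["_task_id"],
        "trace-id": workflow_state["_trace_id"],
        "parent-span-id": workflow_state["_parent_span_id"],
    }

def extract_headers(headers: dict) -> None:
    """Inbound step: read the headers and set ContextVars for the activity."""
    task_id_var.set(headers["task-id"])
    trace_id_var.set(headers["trace-id"])
    parent_span_id_var.set(headers["parent-span-id"])

# End to end: workflow state -> Temporal-style headers -> ContextVars.
workflow_state = {"_task_id": "t-1", "_trace_id": "tr-1", "_parent_span_id": "s-1"}
extract_headers(inject_headers(workflow_state))
```

Once the `ContextVar`s are set, anything running in the activity (including the wrapped model) can read the trace context without it being threaded through function arguments.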
## Auto-Traced Span Structure
Each LLM call produces a single span. The span name is always `model_get_response`; the model name and settings appear in the span's input data.

## Streaming Hooks
`TemporalStreamingHooks` streams tool lifecycle events to the UI in real time:

- `ToolRequestContent` when a tool starts (name, arguments)
- `ToolResponseContent` when a tool finishes (name, result)
- `TextContent` on agent handoffs
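The event flow above can be sketched as follows. The content class names come from these docs, but the hook method names and the `send` mechanism are assumptions; this is a stand-in for `TemporalStreamingHooks`, not its implementation.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolRequestContent:
    name: str
    arguments: dict

@dataclass
class ToolResponseContent:
    name: str
    result: Any

class StreamingHooksSketch:
    """Stand-in for TemporalStreamingHooks: forwards tool lifecycle events."""

    def __init__(self, send):
        self.send = send  # callable that streams one content object to the UI

    def on_tool_start(self, name: str, arguments: dict) -> None:
        self.send(ToolRequestContent(name=name, arguments=arguments))

    def on_tool_end(self, name: str, result: Any) -> None:
        self.send(ToolResponseContent(name=name, result=result))

# Simulate one tool call; a real agent run would fire these hooks itself.
sent = []
hooks = StreamingHooksSketch(send=sent.append)
hooks.on_tool_start("get_weather", {"city": "Paris"})
hooks.on_tool_end("get_weather", "sunny")
```

Because each event is sent as soon as the hook fires, the UI can show the tool's name and arguments before the result is available.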
## Tracing Gap: Tool Calls

With `TemporalTracingModelProvider`, only model calls are auto-traced; tool calls do not get spans automatically. To close the gap, create manual spans around your tool functions with `adk.tracing.span()`.
## Choosing a Provider
### I'm using OpenAI Agents SDK on Temporal
Use `TemporalTracingModelProvider` for automatic LLM call tracing. This is the default when you use the OpenAI Agents SDK plugin with Temporal workflows; no additional setup is needed for model call spans.

For tool-level tracing, create manual spans around your tool functions using `adk.tracing.span()`. See Manual Spans for examples.
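The wrapping pattern for a manual tool span looks like this. The real call is `adk.tracing.span()` per these docs, but its signature is not shown here, so the sketch uses a hypothetical stand-in context manager to illustrate the shape only.

```python
from contextlib import contextmanager

events = []

@contextmanager
def span(name: str):
    """Hypothetical stand-in for adk.tracing.span(); the real signature may differ."""
    events.append(("start", name))
    try:
        yield
    finally:
        events.append(("end", name))

def lookup_order(order_id: str) -> str:
    # Wrap the tool body so its duration is captured as one span.
    with span("tool.lookup_order"):
        return f"order {order_id}: shipped"

result = lookup_order("42")
```

The `finally` block ensures the span is closed even if the tool raises, so failed tool calls still appear in the trace.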
### I'm using LiteLLM directly
Use the `adk.providers.litellm` methods. All four methods automatically create tracing spans when you pass `trace_id` and `parent_span_id`. The `auto_send` variants are convenient for chat-style agents since they handle both tracing and UI message creation in one call.
### I'm using LangGraph / LangChain
Use `create_langgraph_tracing_handler` registered as a callback. This gives you the most complete auto-tracing: both LLM and tool calls get their own spans without any manual instrumentation. You still need to create turn-level parent spans manually to group operations by conversation turn.
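To make the callback flow concrete, here is a toy stand-in for the handler. Real LangChain-style callback handlers expose methods such as `on_llm_start` and `on_tool_start` and are passed to the graph via its run config; the class below only simulates that, and the parent-span wiring shown is an assumption about how `create_langgraph_tracing_handler` groups spans.

```python
class TracingHandlerSketch:
    """Toy stand-in for the handler returned by create_langgraph_tracing_handler.
    Each callback records a span under the turn-level parent span."""

    def __init__(self, parent_span_id: str):
        self.parent_span_id = parent_span_id
        self.spans = []

    def on_llm_start(self, model: str) -> None:
        self.spans.append((self.parent_span_id, f"llm:{model}"))

    def on_tool_start(self, tool: str) -> None:
        self.spans.append((self.parent_span_id, f"tool:{tool}"))

# "turn-1" stands for a turn-level parent span you create manually per turn.
handler = TracingHandlerSketch(parent_span_id="turn-1")

# A real graph run fires these callbacks itself; we simulate one turn here.
handler.on_llm_start("gpt-4o")
handler.on_tool_start("search")
```

Both the LLM call and the tool call end up as children of the same turn span, which is why the manual turn-level parent is still worth creating.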
