Agentex provides three built-in tracing providers that automatically create spans for LLM calls and, in some cases, tool invocations. Each provider integrates with a different framework and has different tracing coverage.
Built-in providers give you a baseline — auto-traced LLM calls and, in some cases, tool invocations — but truly useful traces require manual instrumentation by the agent implementer: turn-level spans, custom reasoning steps, structured input/output, and proper parent-child linking. Think of built-in providers as the foundation, not the finished product. See Span Hierarchy & Best Practices for how to build on top of them.

Comparison

| Provider | LLM Tracing | Tool Tracing | Framework |
| --- | --- | --- | --- |
| TemporalTracingModelProvider | Auto | No | OpenAI Agents SDK |
| adk.providers.litellm | Auto | No | LiteLLM |
| create_langgraph_tracing_handler | Auto | Auto | LangGraph / LangChain |

Providers

How It Works

For Temporal-style agents using the OpenAI Agents SDK, LLM calls are auto-traced via TemporalTracingModelProvider. For non-Agentex usage of the OpenAI Agents SDK with the SGP tracing SDK directly, see the OpenAI Agents Integration page.

The tracing flow is:
  1. Your workflow sets _task_id, _trace_id, and _parent_span_id on the workflow instance
  2. ContextWorkflowOutboundInterceptor propagates these as Temporal headers to activities
  3. ContextActivityInboundInterceptor reads the headers and sets ContextVars
  4. TemporalTracingResponsesModel wraps the base model and creates a span for each get_response() call
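Steps 2 and 3 above boil down to a header-to-ContextVar handoff. The following is a simplified, framework-free sketch of that pattern — the function and variable names are illustrative stand-ins, not the real interceptor classes:

```python
import contextvars

# ContextVars the activity-side interceptor would populate (names illustrative)
trace_id_var = contextvars.ContextVar("trace_id", default=None)
parent_span_id_var = contextvars.ContextVar("parent_span_id", default=None)

def outbound_interceptor(workflow_state: dict) -> dict:
    """Workflow side: copy tracing fields into activity headers (step 2)."""
    return {
        "_trace_id": workflow_state["_trace_id"],
        "_parent_span_id": workflow_state["_parent_span_id"],
    }

def inbound_interceptor(headers: dict) -> None:
    """Activity side: read the headers and set ContextVars (step 3)."""
    trace_id_var.set(headers["_trace_id"])
    parent_span_id_var.set(headers["_parent_span_id"])

# Simulate one workflow -> activity hop
headers = outbound_interceptor({"_trace_id": "tr-1", "_parent_span_id": "sp-1"})
inbound_interceptor(headers)
print(trace_id_var.get())  # tr-1
```

Once the ContextVars are set, anything running in the activity (including the wrapped model) can pick up the current trace and parent span without explicit plumbing.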

Auto-Traced Span Structure

Each LLM call produces a single span named model_get_response:
# Span input:
{
    "model": "gpt-4o",
    "has_system_instructions": True,
    "input_type": "list",
    "tools_count": 3,
    "handoffs_count": 0,
    "has_output_schema": False,
    "model_settings": {
        "temperature": 0.7,
        "max_tokens": 4096,
        "reasoning": {"effort": "high"}
    }
}

# Span output:
{
    "new_items": [
        {"type": "message", "content": [{"type": "text", "text": "..."}]},
        {"type": "function_call", "name": "search", "arguments": "..."}
    ],
    "final_output": "The agent's text response..."
}
The span name is always model_get_response. The model name and settings appear in the span’s input data.

Streaming Hooks

TemporalStreamingHooks streams tool lifecycle events to the UI in real time:
from agentex.lib.core.temporal.plugins.openai_agents import TemporalStreamingHooks

hooks = TemporalStreamingHooks(task_id=params.task.id)
result = await Runner.run(agent, input_list, hooks=hooks)
This streams:
  • ToolRequestContent when a tool starts (name, arguments)
  • ToolResponseContent when a tool finishes (name, result)
  • TextContent on agent handoffs

Tracing Gap: Tool Calls

The OpenAI Agents SDK plugin does not create separate spans for tool executions. Tool calls appear inside the model span’s output.new_items array as function_call entries, but they are not their own spans in the trace tree.

If you need tool-level spans, create manual spans around your tool logic. See Manual Spans.
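To show the shape of a manual tool span, here is a toy recorder standing in for the real tracing API (the span helper below is a hypothetical stand-in — consult Manual Spans for the actual adk.tracing.span() signature):

```python
import uuid
from contextlib import contextmanager

# In-memory trace tree; the real SDK would ship spans to the tracing backend.
SPANS = []

@contextmanager
def span(name, parent_span_id=None):
    """Toy span context manager: records a span and yields its id."""
    span_id = str(uuid.uuid4())
    SPANS.append({"name": name, "span_id": span_id, "parent_span_id": parent_span_id})
    yield span_id

def search_tool(query: str, parent_span_id: str) -> str:
    # Wrap the tool body so it appears as its own span in the trace tree.
    with span("tool_search", parent_span_id=parent_span_id):
        return f"results for {query}"

with span("model_get_response") as model_span_id:
    search_tool("weather", parent_span_id=model_span_id)

print([s["name"] for s in SPANS])  # ['model_get_response', 'tool_search']
```

The key point is the parent-child link: passing the enclosing span's id into the tool span is what makes the tool call show up nested under the model call instead of as a disconnected root.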

Choosing a Provider

Use TemporalTracingModelProvider for automatic LLM call tracing. This is the default when you use the OpenAI Agents SDK plugin with Temporal workflows. No additional setup is needed for model call spans.

For tool-level tracing, create manual spans around your tool functions using adk.tracing.span(). See Manual Spans for examples.
Use the adk.providers.litellm methods. All four methods automatically create tracing spans when you pass trace_id and parent_span_id.

The auto_send variants are convenient for chat-style agents since they handle both tracing and UI message creation in one call.
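The opt-in pattern — spans are created only when tracing IDs are supplied — can be sketched as follows. Everything here is an illustrative stand-in, not the real adk.providers.litellm API; check that reference for the actual method names and signatures:

```python
def traced_completion(messages, trace_id=None, parent_span_id=None):
    """Stand-in for a provider method: create a span only when tracing
    IDs are passed, then run the (here faked) completion."""
    span = None
    if trace_id is not None:
        span = {"trace_id": trace_id, "parent_span_id": parent_span_id}
    reply = f"echo: {messages[-1]['content']}"  # placeholder for the LLM call
    return reply, span

# With tracing IDs, the call is traced; without them, span stays None.
reply, span = traced_completion(
    [{"role": "user", "content": "hi"}],
    trace_id="tr-1",
    parent_span_id="sp-1",
)
print(span["trace_id"])  # tr-1
```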
Use create_langgraph_tracing_handler registered as a callback. This gives you the most complete auto-tracing: both LLM and tool calls get their own spans without any manual instrumentation.

You still need to create turn-level parent spans manually to group operations by conversation turn.