Use Catalyst Tracing when you want more than gateway request logs. The tracing SDKs collect the full shape of an LLM operation: model calls, messages, tool calls, tool results, structured outputs, token usage, agent runs, framework steps, and custom spans you add around your own orchestration code.
If you only need request routing through the Catalyst gateway, start with model provider integrations. If you need trace trees for agents, tools, framework runs, or non-gateway work, use the tracing SDKs.

Getting Started

Using a coding agent? Paste this into your agent to set up Catalyst trace collection:

```
Follow the instructions from https://docs.inference.net/skills/catalyst-tracing.md and ask me questions as needed.
```
Start with the path that matches the code you want to observe:
  1. Quickstart - install a tracing SDK, configure export, and capture your first provider span.
  2. Provider and framework guides - instrument OpenAI, Anthropic, LangChain, LangGraph, LangSmith, OpenAI Agents, Claude Agent SDK, Claude Code, Pydantic AI, and other supported surfaces.
  3. Manual spans - wrap custom agents, subprocesses, tools, retrieval, routing, and unsupported SDKs yourself.

Supported Trace Integrations

OpenAI

Trace Chat Completions, tool calls, structured outputs, and Responses API calls.

Anthropic

Trace Messages API calls, tool use round trips, and prompt-cache usage.

LangChain

Capture chains, agents, model calls, and tools through callback instrumentation.

LangGraph

Capture graph and node spans while preserving parent-child relationships.

LangSmith

Bridge LangSmith OpenTelemetry spans into Catalyst.

OpenAI Agents

Trace agent runs, tools, handoffs, and nested OpenAI model calls.

Claude Agent SDK

Trace Claude Agent SDK query loops and yielded messages.

Claude Code SDK

Trace Claude Code CLI subprocess calls and SDK-style agent invocations.

Manual spans

Wrap custom agents, CLI calls, provider routing, and unsupported SDKs.

Packages

| Runtime | Package | Primary import |
| --- | --- | --- |
| TypeScript / Node / Bun | `@inference/tracing` | `import { setup } from "@inference/tracing"` |
| Python | `inference-catalyst-tracing` | `from inference_catalyst_tracing import setup` |
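
Assuming the Python package behaves as the table suggests, a minimal bootstrap is a single call at process start. Any `setup()` arguments beyond the bare call are not shown here; see the Quickstart for the full configuration surface:

```python
# Import taken from the table above; call setup() once at process start.
from inference_catalyst_tracing import setup

setup()  # configures the Catalyst OpenTelemetry exporter
```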

What Gets Captured

| Span data | Examples |
| --- | --- |
| Inputs and outputs | `input.value`, `output.value` |
| Messages | user, system, assistant, tool, and tool-result messages |
| Tool calls | tool names, IDs, JSON arguments, and tool results |
| Model metadata | model name, provider/system, invocation parameters |
| Usage | prompt, completion, total, and prompt-cache token counts when available |
| Agent structure | agent spans, framework spans, tool spans, graph/node spans |
| Errors | exception status and error details on failed spans |
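
Concretely, a captured model-call span might carry attributes shaped like this. Only `input.value` and `output.value` come from the table above; the other key names are illustrative guesses at the general shape, not the SDK's exact keys:

```python
# Illustrative attribute payload for one model-call span; only
# input.value / output.value are documented names, the rest are
# hypothetical stand-ins for model metadata and usage counts.
span_attributes = {
    "input.value": "What is the capital of France?",
    "output.value": "Paris.",
    "model": "gpt-4o-mini",
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
    "tool_calls": [],
}

total_tokens = span_attributes["usage"]["total_tokens"]
```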

How It Works

  1. `setup()` configures a Catalyst OpenTelemetry exporter.
  2. The SDK instruments the providers or frameworks you enable.
  3. Your app runs normally.
  4. Spans are exported to Catalyst and grouped by service, trace, span, and task metadata.
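
Step 4 above can be sketched in plain Python. The span records and field names here are hypothetical, not the exporter's wire format; the point is only that exported spans are keyed into trace trees by their shared metadata:

```python
from collections import defaultdict

# Hypothetical exported span records; field names are illustrative only.
exported = [
    {"service": "agent-api", "trace_id": "t1", "span_id": "a", "name": "agent.run"},
    {"service": "agent-api", "trace_id": "t1", "span_id": "b", "name": "llm.call"},
    {"service": "agent-api", "trace_id": "t2", "span_id": "c", "name": "llm.call"},
]

def group_by_trace(spans):
    """Group spans into traces keyed by (service, trace_id)."""
    traces = defaultdict(list)
    for s in spans:
        traces[(s["service"], s["trace_id"])].append(s["span_id"])
    return dict(traces)

grouped = group_by_trace(exported)
```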

Next Steps

Quickstart

Install an SDK, configure export, and capture your first model span.

Instrumentation examples

Browse copyable examples for tools, agents, framework runs, and manual spans.

Troubleshooting

Debug missing spans, missing attributes, and shutdown behavior.