This quickstart uses OpenAI because it produces the smallest end-to-end trace. The same setup pattern applies to Anthropic, LangChain, LangGraph, LangSmith, OpenAI Agents, Claude Agent SDK, Pydantic AI, and manual spans.

1. Install

bun add @inference/tracing openai

2. Configure Export

Set the Catalyst traces endpoint and token before your app starts.
export CATALYST_OTLP_ENDPOINT="https://your-catalyst-otlp-endpoint"
export CATALYST_OTLP_TOKEN="<your-token>"
export CATALYST_SERVICE_NAME="checkout-agent"
export CATALYST_SERVICE_VERSION="2026.04.28"
Use a stable CATALYST_SERVICE_NAME per deployed service. It makes traces easier to filter and compare across environments.

3. Initialize Tracing Early

Call setup() before constructing clients from instrumented SDKs.
import { setup } from "@inference/tracing";
import OpenAI from "openai";

// setup() must run before any instrumented client is constructed so the
// OpenAI module can be patched.
const tracing = await setup({
  modules: { openai: OpenAI },
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Reply with just the word hello." }],
  max_tokens: 16,
});

console.log(response.choices[0]?.message.content);

// Flush batched spans before the process exits.
await tracing.shutdown();

4. Verify

Open the Catalyst dashboard and filter for your CATALYST_SERVICE_NAME. The trace should include an OpenAI LLM span with input messages, output messages, model name, invocation parameters, finish reason, and token counts. If the process is short-lived, always call shutdown() before exit so batched spans are flushed.
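The shutdown advice above can be made unconditional with a small wrapper that flushes spans even when the traced work throws. This is a sketch, not part of the SDK; `Tracing` is a minimal stand-in for the object returned by setup(), and only shutdown() is assumed:

```typescript
type Tracing = { shutdown(): Promise<void> };

// Run async work and always flush batched spans afterward,
// whether the work succeeds or throws.
async function withTracing<T>(tracing: Tracing, work: () => Promise<T>): Promise<T> {
  try {
    return await work();
  } finally {
    // Runs on success and on error.
    await tracing.shutdown();
  }
}
```

With the quickstart code above, the chat completion call would be passed as `work`, and shutdown() no longer needs to be remembered at every exit path.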

Next Steps

OpenAI tracing

Add tool calls, structured outputs, and Responses API examples.

Manual spans

Wrap custom agents, CLI calls, and unsupported SDKs.