The Vercel AI SDK emits native OpenTelemetry spans when
experimental_telemetry is enabled. Catalyst provides the tracer provider and a
small helper that wires those AI SDK spans into your Catalyst trace export.
Use this guide when your app calls the `ai` package directly through
`generateText`, `streamText`, structured outputs, tools, or `ToolLoopAgent`.
The same setup works with AI SDK providers such as `@ai-sdk/openai`,
`@ai-sdk/anthropic`, and `@ai-sdk/openai-compatible`.
What Is Captured
- `ai.generateText` operation spans and `ai.generateText.doGenerate` model-step spans
- `ai.streamText` operation spans and `ai.streamText.doStream` model-step spans
- `ai.toolCall` spans for client-side tool execution
- `ToolLoopAgent.generate()` and `ToolLoopAgent.stream()` activity through the same native AI SDK spans
- Prompt text or prompt messages, response text, structured output metadata, and streamed text
- Tool call names, IDs, arguments, and tool results
- Token usage including input, output, total, cached input, and reasoning tokens when the provider returns them
- `operation.name` values that include your `functionId`
- Custom metadata passed through `experimental_telemetry`
Install
Install the `ai` package and the Catalyst SDK, then add only the provider
packages you use, such as `@ai-sdk/openai`, `@ai-sdk/anthropic`,
and `@ai-sdk/google`.
Configure Export
Set your Catalyst OTLP endpoint and token in the runtime environment. Short-lived scripts should also set a stable service name so traces are easy to find.
Initialize Tracing
Initialize Catalyst tracing before the first AI SDK call. Import the AI SDK
namespace and pass it to `setup()` so auto-detection and integration status can
see the installed module.
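A minimal initialization sketch. The Catalyst import path and the `setup()` option names below are assumptions for illustration, not the documented API; check your Catalyst package for the real signature:

```typescript
// Hypothetical package name -- substitute your actual Catalyst SDK import.
import { setup } from "@catalyst/tracing";
// Import the AI SDK namespace so setup() can see the installed module.
import * as ai from "ai";

// The returned handle is what scripts later call tracing.shutdown() on.
export const tracing = setup({
  integrations: [ai], // assumed option name for passing the AI SDK namespace
});
```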
Pass `experimental_telemetry: telemetry("...")` on every AI SDK call you want
to trace. The AI SDK does not apply telemetry settings globally.
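The AI SDK's `experimental_telemetry` option accepts a settings object with fields such as `isEnabled`, `functionId`, and `metadata`. As an illustration only, a helper like Catalyst's `telemetry()` plausibly returns an object of that shape; this sketch is not Catalyst's actual implementation:

```typescript
// The AI SDK's TelemetrySettings fields (subset) that a telemetry() helper
// would fill in. Catalyst's real helper may attach additional metadata.
type TelemetrySettings = {
  isEnabled?: boolean;
  functionId?: string;
  metadata?: Record<string, string | number | boolean>;
};

function telemetry(
  functionId: string,
  metadata?: TelemetrySettings["metadata"],
): TelemetrySettings {
  return { isEnabled: true, functionId, ...(metadata ? { metadata } : {}) };
}

// Usage: generateText({ ..., experimental_telemetry: telemetry("checkout-summary") })
```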
Provider Setup
This example uses an OpenAI-compatible provider, which works with Catalyst Gateway and other OpenAI-compatible endpoints.
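A provider sketch using `@ai-sdk/openai-compatible`; the provider name, base URL, environment variable names, and model ID are placeholders:

```typescript
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// Placeholder base URL and API key variable -- point these at your
// Catalyst Gateway or other OpenAI-compatible endpoint.
const provider = createOpenAICompatible({
  name: "catalyst-gateway",
  baseURL: process.env.GATEWAY_BASE_URL ?? "https://example.invalid/v1",
  apiKey: process.env.GATEWAY_API_KEY,
  includeUsage: true, // report token usage, including for streamed responses
});

const model = provider("gpt-4o-mini"); // placeholder model ID
```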
Setting `includeUsage: true` matters because usage metadata is what populates
the token columns in Catalyst. Some providers only return token counts for
non-streaming calls, or only after a stream finishes.
Basic Generation
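A hedged example of a traced call. The model ID and prompt are placeholders, and the telemetry settings are written inline where Catalyst's `telemetry()` helper would normally supply them:

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text, usage } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt: "Summarize OpenTelemetry in one sentence.",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "basic-generation",
  },
});

console.log(text, usage);
```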
This emits an `ai.generateText` operation span and an
`ai.generateText.doGenerate` model-step span. Catalyst populates
`llm_model_name`, `input_tokens`, `output_tokens`, `total_tokens`, `input`,
and `output` when the provider returns the corresponding AI SDK attributes.
Streaming
streamText() produces a streaming operation span and a model-step span. Consume
the stream before process shutdown so the AI SDK can finish the span and record
the reconstructed response text.
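A streaming sketch with placeholder model ID and prompt; the loop drains the stream so the span can close:

```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt: "Write a haiku about tracing.",
  experimental_telemetry: { isEnabled: true, functionId: "streaming-demo" },
});

// Consume the full stream so the AI SDK can finish the span and
// record the reconstructed response text.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```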
This emits `ai.streamText` and `ai.streamText.doStream` spans. In scripts,
call `await tracing.shutdown()` after the stream is fully consumed.
Tool Calling
Tool calls create both model spans and client-side `ai.toolCall` spans. The
model-step span records the tool call requested by the model. The tool span
records your local `execute()` call and its result.
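A tool-calling sketch, assuming AI SDK v5+ (`inputSchema`; older releases used `parameters`). The model ID, prompt, and stubbed weather result are placeholders:

```typescript
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt: "What is the weather in Berlin?",
  tools: {
    getWeather: tool({
      description: "Look up current weather for a city",
      inputSchema: z.object({ city: z.string() }),
      // Stubbed result for illustration -- call a real weather API here.
      execute: async ({ city }) => ({ city, tempC: 18 }),
    }),
  },
  experimental_telemetry: { isEnabled: true, functionId: "tool-demo" },
});
```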
A tool-calling run emits one `ai.generateText` operation span, one or more
`ai.generateText.doGenerate` model-step spans, and `ai.toolCall` spans for
executed tools.
| Attribute | Meaning |
|---|---|
| `ai.response.toolCalls` | Tool calls requested by the model on a model-step span |
| `ai.toolCall.name` | Tool name on the client-side tool span |
| `ai.toolCall.id` | Tool call ID that links model request and tool execution |
| `ai.toolCall.args` | JSON arguments passed to `execute()` |
| `ai.toolCall.result` | JSON result returned by `execute()` |
Structured Output
Structured outputs are traced through the same `generateText` operation shape.
The response text or parsed output is preserved in AI SDK attributes when the
provider returns it.
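A structured-output sketch using the AI SDK's experimental `Output.object()` API, which keeps the `generateText` operation shape; the model ID, prompt, and schema are placeholders, and the exact option name may differ across AI SDK versions:

```typescript
import { generateText, Output } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { experimental_output } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model ID
  prompt: "Extract the city and country from: 'Berlin, Germany'.",
  experimental_output: Output.object({
    schema: z.object({ city: z.string(), country: z.string() }),
  }),
  experimental_telemetry: { isEnabled: true, functionId: "structured-demo" },
});

console.log(experimental_output);
```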
Set `supportsStructuredOutputs: true` on OpenAI-compatible providers when the
downstream model endpoint supports native structured outputs.
Agents
The AI SDK's `ToolLoopAgent` accepts `experimental_telemetry` in the agent
constructor. Agent calls then emit the same native AI SDK spans as the core
functions:
- `agent.generate()` emits `ai.generateText` and `ai.generateText.doGenerate`
- `agent.stream()` emits `ai.streamText` and `ai.streamText.doStream`
- agent tool execution emits `ai.toolCall`
There is no dedicated `ai.agent` span today. Infer the agent loop from the
parent/child relationships, repeated model-step spans, tool-call spans, and the
`functionId` you choose.
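An agent sketch; the constructor options mirror common `ToolLoopAgent` usage but the model ID, tool, and prompt are placeholders, and option names may vary by AI SDK version:

```typescript
import { ToolLoopAgent, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const agent = new ToolLoopAgent({
  model: openai("gpt-4o-mini"), // placeholder model ID
  tools: {
    add: tool({
      description: "Add two numbers",
      inputSchema: z.object({ a: z.number(), b: z.number() }),
      execute: async ({ a, b }) => ({ sum: a + b }),
    }),
  },
  experimental_telemetry: { isEnabled: true, functionId: "agent-demo" },
});

const result = await agent.generate({ prompt: "What is 2 + 3?" });
console.log(result.text);
```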
Streaming Agent
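A streaming-agent sketch with placeholder model ID and prompt; as with `streamText()`, the stream must be fully consumed before shutdown:

```typescript
import { ToolLoopAgent } from "ai";
import { openai } from "@ai-sdk/openai";

const agent = new ToolLoopAgent({
  model: openai("gpt-4o-mini"), // placeholder model ID
  experimental_telemetry: { isEnabled: true, functionId: "agent-stream-demo" },
});

const stream = agent.stream({ prompt: "Count to five." });

// Fully consume the stream so spans are finished before process exit.
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
```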
Next.js Route Handler
For request/response applications, initialize tracing in a module that is
loaded before route handlers call the AI SDK. Keep `shutdown()` for process
lifecycle hooks or short-lived jobs; do not call it after every web request.
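A route-handler sketch; the file path, model ID, and request shape are placeholders, and tracing is assumed to be initialized elsewhere (for example in a Next.js instrumentation module):

```typescript
// app/api/chat/route.ts (hypothetical path); tracing is initialized in a
// module that loads before this handler, not here.
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"), // placeholder model ID
    prompt,
    experimental_telemetry: { isEnabled: true, functionId: "chat-route" },
  });

  // Streaming the response consumes the stream, so spans complete normally.
  return result.toTextStreamResponse();
}
```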
Verify Traces
Filter by the service name you configured and look for `ai.generateText`,
`ai.streamText`, and `ai.toolCall` spans. If you set distinct `functionId`
values, you can also search for the corresponding `operation.name` attributes
in the trace detail view.
Attribute Reference
Catalyst promotes stable AI SDK attributes into canonical columns and preserves all raw attributes for inspection.

| Catalyst field | AI SDK attribute |
|---|---|
| `llm_model_name` | `ai.model.id` |
| `input_tokens` | `ai.usage.inputTokens` or `ai.usage.promptTokens` |
| `output_tokens` | `ai.usage.outputTokens` or `ai.usage.completionTokens` |
| `total_tokens` | `ai.usage.totalTokens` or `ai.usage.tokens` |
| `cache_read_tokens` | `ai.usage.cachedInputTokens` |
| `reasoning_tokens` | `ai.usage.reasoningTokens` |
| `input_messages` | `ai.prompt.messages` |
| `input` | `ai.prompt` |
| `output` | `ai.response.text` |
The observation kind is derived from `ai.operationId`:

| `ai.operationId` shape | Observation kind |
|---|---|
| `ai.generateText`, `ai.generateText.doGenerate` | LLM |
| `ai.streamText`, `ai.streamText.doStream` | LLM |
| `ai.generateObject`, `ai.streamObject` | LLM |
| `ai.toolCall` | TOOL |
| `ai.embed`, `ai.embedMany` | EMBEDDING |
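The table above can be sketched as a small classifier. This is illustrative only, not Catalyst's actual code, and the `"SPAN"` fallback for unknown operation IDs is an assumption:

```typescript
type ObservationKind = "LLM" | "TOOL" | "EMBEDDING" | "SPAN";

// Classify an AI SDK ai.operationId value, mirroring the mapping table.
function observationKind(operationId: string): ObservationKind {
  if (operationId === "ai.toolCall") return "TOOL";
  if (operationId.startsWith("ai.embed")) return "EMBEDDING";
  if (/^ai\.(generateText|streamText|generateObject|streamObject)/.test(operationId)) {
    return "LLM";
  }
  return "SPAN"; // fallback is an assumption, not documented behavior
}
```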
Common Gotchas
- Pass `experimental_telemetry` on every AI SDK call or agent you want traced.
- Use a stable `functionId`; it appears in `operation.name` and makes filtering easier.
- Set `includeUsage: true` on OpenAI-compatible providers when available.
- Fully consume streams before process exit.
- Call `await tracing.shutdown()` in scripts, CLIs, tests, and job workers that exit after a run.
- Do not call `shutdown()` after each request in a long-running server.
- If tool calls appear on model spans but no `ai.toolCall` span appears, confirm the tool has an `execute()` function and is executed client-side.