You can’t improve what you can’t see. Observe captures every LLM request flowing through your product (cost, latency, tokens, errors) and gives you the raw material for everything else on the platform. Even if you came to Catalyst to train a custom model, Observe is the first step. Captured traffic becomes the training data and eval datasets that power everything downstream. The metrics and visibility alone provide enough value to justify the integration, and the data pipeline it creates feeds evals and training without additional work. If you already have data you want to evaluate or train on, you can skip straight to uploading a dataset.

How it works

Catalyst sits between your application and your LLM provider. Route requests through the platform by swapping a base URL. Your SDK, provider credentials, and application logic stay the same. Data collection happens at the edge and adds under 10ms of overhead.
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  // Point the SDK at the Catalyst gateway instead of the provider.
  baseURL: "https://api.inference.net/v1",
  apiKey: process.env.INFERENCE_API_KEY,
  defaultHeaders: {
    // Pass your provider credentials through the gateway.
    "x-inference-provider-api-key": process.env.OPENAI_API_KEY,
    "x-inference-provider": "openai",
  },
});
```
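The two default headers above carry the provider routing information. If you route calls to more than one provider through the gateway, a small helper can keep that header shape in one place. This is a hypothetical convenience function, not part of the Catalyst or OpenAI SDKs, and provider values other than "openai" are assumptions; see Connect Your App for the supported values.

```typescript
// Hypothetical helper -- not a Catalyst API. Builds the gateway
// passthrough headers for a given provider so several clients can
// share one definition.
function gatewayHeaders(provider: string, providerApiKey: string) {
  return {
    "x-inference-provider-api-key": providerApiKey,
    "x-inference-provider": provider,
  };
}

// Usage: spread the result into the client's defaultHeaders.
const headers = gatewayHeaders("openai", process.env.OPENAI_API_KEY ?? "");
```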

See Connect Your App for the full setup, or use Install with AI to instrument your codebase automatically.

📍 TODO:MEDIA

Graphic showing how data flows with the gateway installed (app → gateway → provider, with data captured at the gateway layer).

Key concepts

| Concept | Description |
| --- | --- |
| Gateway | The transparent layer between your app and your LLM provider. Captures traffic with under 10ms overhead. |
| Inference | A single LLM API call captured by the gateway. Records the full request, response, cost, latency, and token counts. |
| Task | A user-defined objective (like “summarize docs” or “classify tickets”) that groups related inferences so you can track each AI feature independently. |
| Metrics | Aggregated cost, latency, error rates, and token usage across your inferences. Filterable by model, task, or provider. |
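To make tasks useful, each request needs to identify which objective it belongs to. One common pattern is a per-request label attached alongside the gateway headers. The sketch below is illustrative only: the "x-inference-task" header name is a placeholder invented for this example, not a documented Catalyst API; see Set up tasks for the real mechanism.

```typescript
// Sketch: attach a task label to each call so inferences can be
// grouped per feature. "x-inference-task" is a PLACEHOLDER header
// name for illustration -- consult Set up tasks for the actual API.
function withTask(task: string, headers: Record<string, string> = {}) {
  return { ...headers, "x-inference-task": task };
}

// Usage: merge the task label into the headers for one feature's calls.
const summarizeHeaders = withTask("summarize-docs", {
  "x-inference-provider": "openai",
});
```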

Next steps

- Set up tasks: group your LLM calls by objective.
- Integrate with your LLM provider: connect your app and start capturing traffic.
- Metrics Explorer: see your LLM usage dashboards.
- Inference Viewer: browse individual LLM calls.