The quick change
Point your SDK at Catalyst. Your project API key authenticates the request, and your provider key goes in x-inference-provider-api-key so the gateway can forward it. The example below uses OpenAI; for other providers, see the Integrations guide.
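As a minimal sketch of what the quick change amounts to, here is the request an SDK would send once pointed at the gateway. The gateway URL is a placeholder, and the Bearer scheme for the project key is an assumption; only the x-inference-provider-api-key header name comes from the text above. The request is built but never sent.

```python
import urllib.request

# Placeholder values -- substitute your real Catalyst gateway URL and keys.
GATEWAY_URL = "https://your-catalyst-gateway.example.com/v1/chat/completions"
PROJECT_API_KEY = "catalyst-project-key"   # authenticates you to the gateway
PROVIDER_API_KEY = "sk-your-openai-key"    # forwarded to the upstream provider

# Build (but do not send) the request: project key authenticates to Catalyst,
# provider key rides along in x-inference-provider-api-key for forwarding.
req = urllib.request.Request(
    GATEWAY_URL,
    headers={
        "Authorization": f"Bearer {PROJECT_API_KEY}",
        "x-inference-provider-api-key": PROVIDER_API_KEY,
        "Content-Type": "application/json",
    },
    method="POST",
)
```

In SDK terms, the same two values map to the client's API key and a default header on every request.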
Prefer AI-assisted setup
Prefer to avoid manual edits? Use Install with AI. It runs inf instrument, scans your codebase, updates your LLM clients to use the gateway, and adds task IDs automatically.
What gets captured
Once traffic is flowing, every request records:
- Full request and response payloads
- Cost (per call and aggregate)
- Latency (end-to-end and time to first token)
- Token counts (input and output)
- Cache hit rates
- Error rates and status codes
- Model and provider information
- Function/tool call details
- Whether the request includes images
- Any task tags you set
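To make the list above concrete, the captured fields could be modeled roughly like this. The record shape and field names are hypothetical, for illustration only; Catalyst's actual schema may differ.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceRecord:
    """Hypothetical shape of one captured request (illustrative, not Catalyst's schema)."""
    request_payload: dict       # full request body
    response_payload: dict      # full response body
    cost_usd: float             # per-call cost
    latency_ms: float           # end-to-end latency
    time_to_first_token_ms: float
    input_tokens: int
    output_tokens: int
    cache_hit: bool
    status_code: int
    model: str
    provider: str
    tool_calls: list = field(default_factory=list)   # function/tool call details
    has_images: bool = False
    task_tags: list = field(default_factory=list)

# Example record for a single successful call.
rec = InferenceRecord(
    request_payload={"messages": [{"role": "user", "content": "hi"}]},
    response_payload={"choices": [{"message": {"content": "hello"}}]},
    cost_usd=0.0003, latency_ms=420.0, time_to_first_token_ms=95.0,
    input_tokens=12, output_tokens=8, cache_hit=False, status_code=200,
    model="gpt-4o-mini", provider="openai",
)
```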
Catalyst works with any OpenAI-compatible provider, including Anthropic, Groq, Cerebras, and OpenRouter. See the full integration guides for provider-specific setup.
What happens next
Your data shows up immediately in two places:
Metrics Explorer
Dashboards for cost, latency, errors, and usage aggregated across all your providers.
Inference Viewer
Browse and filter individual LLM requests and responses.