The inf install command adds Catalyst observability to your codebase. It uses an AI coding agent to automatically find your LLM client instances and route them through the Inference.net observability proxy.
inf install
After running, every LLM request flows through Inference.net and appears in your Catalyst dashboard with cost, latency, token usage, and full request/response data.

What It Does

The agent makes three changes to your existing LLM SDK clients:
  1. Redirects the base URL to https://api.inference.net so requests flow through the observability proxy
  2. Adds routing headers (x-inference-provider, x-inference-observability-api-key, x-inference-environment) so the proxy knows where to forward and what to record
  3. Adds task IDs (x-inference-task-id) to each call site so you can group requests by logical task in the dashboard
No new SDKs are installed. You keep using the official OpenAI or Anthropic SDKs you already have.
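To make the three changes concrete, here is a minimal, SDK-free sketch of the request shape after instrumentation. The header names come from the list above; the `/v1/chat/completions` path and the placeholder key values are assumptions for illustration — in real code the agent sets the same base URL and headers on your existing OpenAI or Anthropic client.

```python
# Sketch of an instrumented request (header names from the docs;
# path and placeholder values are illustrative assumptions).
BASE_URL = "https://api.inference.net"

def instrumented_request(provider: str, task_id: str) -> tuple[str, dict]:
    """Return the (url, headers) a proxied chat-completions call would use."""
    headers = {
        "Authorization": "Bearer <provider-api-key>",           # your normal provider key
        "x-inference-provider": provider,                       # where the proxy forwards
        "x-inference-observability-api-key": "<catalyst-key>",  # what the proxy records under
        "x-inference-environment": "development",
        "x-inference-task-id": task_id,                         # groups calls in the dashboard
    }
    return f"{BASE_URL}/v1/chat/completions", headers

url, headers = instrumented_request("openai", "summarize-ticket")
```

Because only the base URL and headers change, the provider's own request and response formats pass through untouched.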

Options

Flag        Description
--dry-run   Preview changes without modifying any files

Supported Agents

The CLI detects and launches one of these AI coding agents to apply the changes:
Agent         Binary
Claude Code   claude
OpenCode      opencode
Codex         codex
If multiple agents are available, the CLI prompts you to choose.
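The detection step can be sketched as a PATH probe for each supported binary. This is assumed logic for illustration, not the CLI's actual source:

```python
import shutil

# Binaries from the table above; probing PATH is an assumed
# implementation detail, not taken from the CLI's source.
AGENT_BINARIES = {
    "Claude Code": "claude",
    "OpenCode": "opencode",
    "Codex": "codex",
}

def detect_agents() -> list[str]:
    """Return the names of supported agents whose binaries are on PATH."""
    return [name for name, binary in AGENT_BINARIES.items() if shutil.which(binary)]

available = detect_agents()
# One match: use it directly. Several: prompt the user to choose.
```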

Supported Providers

Built-in: OpenAI, Anthropic.
Custom (via the x-inference-provider-url header): Google Gemini, Together AI, Groq, Fireworks AI, Mistral AI, Cerebras, Perplexity, DeepSeek, OpenRouter, Azure OpenAI, and any OpenAI-compatible endpoint.
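For a provider without built-in routing, the extra header carries the upstream endpoint. A hedged sketch of the header set — the Groq URL below is an illustrative value, not something this page specifies:

```python
# Sketch: route a custom OpenAI-compatible provider through the proxy.
# Header names come from the docs; the URL and key are placeholders.
def custom_provider_headers(provider_url: str, catalyst_key: str,
                            environment: str = "production") -> dict:
    return {
        "x-inference-provider-url": provider_url,  # any OpenAI-compatible endpoint
        "x-inference-observability-api-key": catalyst_key,
        "x-inference-environment": environment,
    }

headers = custom_provider_headers("https://api.groq.com/openai/v1", "<catalyst-key>")
```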

After Installation

Make a few LLM calls from your application, then check your Catalyst dashboard within about 30 seconds. You can also verify from the CLI:
inf inference list
Add INFERENCE_OBSERVABILITY_API_KEY to your .env file so the instrumentation works across all environments. Find your key in the dashboard under project settings.
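The instrumentation reads the key from the process environment, so your .env file must be loaded before any LLM client is constructed — most frameworks do this for you, or a dotenv library can. A minimal stdlib-only loader sketch (simplified: KEY=value lines only, no quoting rules):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Tiny .env loader sketch: KEY=value per line, existing vars win."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env()
key = os.environ.get("INFERENCE_OBSERVABILITY_API_KEY")
```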