Documentation Index

Fetch the complete documentation index at: https://docs.inference.net/llms.txt

Use this file to discover all available pages before exploring further.

Catalyst has two ways to connect with your stack. Gateway integrations proxy your existing provider calls through Catalyst with a one-line base URL change, no SDK swap required. Traces integrations use a lightweight SDK to collect the full shape of an LLM operation directly from your code, including agent runs, tool calls, framework steps, and spans you add yourself. Browse the documented integrations below. If you do not see a provider yet, Catalyst can still route many OpenAI-compatible endpoints via the `x-inference-provider-url` header.

Browse Integrations

Gateway Integrations

Gateway integrations route requests through the Catalyst gateway with a one-line base URL change. You keep your existing provider API keys. Your Catalyst project API key authenticates requests to the gateway, and a small set of headers control routing, environments, and task grouping. Because requests flow through the gateway, Catalyst can measure performance metrics that are invisible to application code: time to first token (TTFT), tokens per second, and end-to-end latency across providers. These are captured automatically without any changes to your request logic.
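Concretely, the one-line change amounts to pointing your client at the gateway's base URL and attaching the routing headers. A minimal sketch using only the Python standard library (the gateway base URL shown is an assumption, and both keys are placeholders — substitute the values from your Catalyst dashboard):

```python
import json
import urllib.request

# Assumed gateway base URL -- use the value from your Catalyst dashboard.
GATEWAY_BASE_URL = "https://gateway.inference.net/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion
    request that routes through the Catalyst gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url=f"{GATEWAY_BASE_URL}/chat/completions",
        data=body,
        headers={
            # Catalyst project key authenticates against the gateway.
            "Authorization": "Bearer <your-project-api-key>",
            # Provider key is forwarded downstream unchanged.
            "x-inference-provider-api-key": "<your-openai-api-key>",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", "Hello")
```

With an OpenAI-compatible SDK, the same effect is achieved by setting the SDK's base URL to the gateway and its API key to your Catalyst project key; everything else in your request logic stays as-is.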

Traces Integrations

Traces integrations use the @inference/tracing (TypeScript) or inference-catalyst-tracing (Python) SDK to collect OpenInference-shaped spans directly from LLM SDKs, agent frameworks, and your own orchestration code. A single setup() call instruments the providers or frameworks you enable. Spans are exported over OTLP and grouped in Catalyst by service, trace, and task. Use Traces when you need:
  • Full agent run trees, not just individual requests
  • Tool calls, tool results, and multi-step framework spans
  • Visibility into work that never touches the Catalyst gateway (local models, custom routing, non-HTTP orchestration)

Traces overview

Learn what gets captured and how to get started with Catalyst Tracing.

Routing Headers

| Header | Required | Description |
| --- | --- | --- |
| `Authorization` | Yes | `Bearer <your-project-api-key>` authenticates the request to the gateway and links it to your project. For OpenAI-compatible SDKs, set this as the SDK's `apiKey`. |
| `x-inference-provider-api-key` | Yes | Your provider API key, such as OpenAI or Groq. The gateway forwards it downstream. For Anthropic's native SDK, use `x-api-key` instead. |
| `x-inference-provider` | No | Forces routing to a specific provider, such as `openai`, `anthropic`, or `cerebras`. Usually inferred from the SDK or `x-inference-provider-url`; set it only to override that inference. |
| `x-inference-environment` | No | Tags requests with an environment, such as `production` or `staging`. |
| `x-inference-task-id` | No | Groups requests under a logical task for filtering and analytics. |
| `x-inference-provider-url` | No | Routes to any OpenAI-compatible provider by specifying its base URL. |

Supported OpenAI-compatible Provider URLs

Any OpenAI-compatible provider can be used via the x-inference-provider-url header, even when it does not have a dedicated guide in the catalog yet.
| Provider | Base URL |
| --- | --- |
| OpenAI | `https://api.openai.com/v1` |
| OpenRouter | `https://openrouter.ai/api` |
| Anthropic | `https://api.anthropic.com/v1` |
| Google Gemini | `https://generativelanguage.googleapis.com/v1beta/openai` |
| Azure OpenAI | `https://{resource}.openai.azure.com/openai/deployments/{deployment}` |
| Groq | `https://api.groq.com/openai/v1` |
| Together AI | `https://api.together.xyz/v1` |
| Fireworks AI | `https://api.fireworks.ai/inference/v1` |
| Perplexity | `https://api.perplexity.ai` |
| Mistral | `https://api.mistral.ai/v1` |
| DeepSeek | `https://api.deepseek.com/v1` |
| Cerebras | `https://api.cerebras.ai/v1` |
| Inference.net | `https://api.inference.net/v1` |
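For a provider without a dedicated guide, routing is just a matter of passing its base URL in `x-inference-provider-url` alongside the two required headers. A sketch with placeholder keys (the URLs are taken from the table above; the lookup helper is illustrative, not SDK API):

```python
# A few OpenAI-compatible base URLs from the table above.
PROVIDER_URLS = {
    "groq": "https://api.groq.com/openai/v1",
    "mistral": "https://api.mistral.ai/v1",
    "deepseek": "https://api.deepseek.com/v1",
}

def headers_for(provider: str, project_key: str, provider_key: str) -> dict[str, str]:
    """Route an OpenAI-compatible provider through the gateway by
    sending its base URL in x-inference-provider-url."""
    return {
        "Authorization": f"Bearer {project_key}",
        "x-inference-provider-api-key": provider_key,
        "x-inference-provider-url": PROVIDER_URLS[provider],
    }

h = headers_for("groq", "proj-key", "gsk-provider-key")
```

Because the target is identified by URL, the same pattern covers OpenAI-compatible endpoints that never appear in the table at all.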