Route your OpenRouter requests through the Inference Catalyst gateway to access hundreds of models while getting full observability. OpenRouter is OpenAI-compatible, so you can keep using the OpenAI SDK; the `x-inference-provider-url` header tells the gateway where to forward your requests.
Prefer automatic setup? Run `inf instrument` to instrument your codebase in seconds.

Setup

1. Get your API keys

You need two keys: your Inference project API key and your OpenRouter API key.
2. Set environment variables

```shell
export INFERENCE_API_KEY=<your-project-api-key>
export OPENROUTER_API_KEY=<your-openrouter-api-key>
```
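As a quick sanity check before starting your app (a sketch, not part of the official setup; `check_keys` is a hypothetical helper name), you can confirm both variables are actually visible to your shell:

```shell
# Hypothetical helper: prints the name of any variable that is unset
# or empty, and returns non-zero if anything is missing.
check_keys() {
  local missing=0
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}

check_keys INFERENCE_API_KEY OPENROUTER_API_KEY || echo "set the keys above before continuing"
```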
3. Update your code

Point the SDK at the gateway. Your Inference project API key goes in `apiKey`, your OpenRouter key goes in the `x-inference-provider-api-key` header, and the `x-inference-provider-url` header tells the gateway to forward requests to OpenRouter.
```typescript
import OpenAI from "openai";

// The gateway is OpenAI-compatible, so only the base URL and headers
// change; everything else is standard OpenAI SDK usage.
const client = new OpenAI({
  baseURL: "https://api.inference.net/v1",
  apiKey: process.env.INFERENCE_API_KEY,
  defaultHeaders: {
    "x-inference-provider-api-key": process.env.OPENROUTER_API_KEY,
    "x-inference-provider-url": "https://openrouter.ai/api",
    "x-inference-environment": process.env.NODE_ENV,
  },
});

const response = await client.chat.completions.create({
  model: "z-ai/glm-5-turbo",
  messages: [{ role: "user", content: "Hello" }],
}, {
  // Per-request header for grouping requests under a task
  headers: { "x-inference-task-id": "default" },
});
```
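If you attach these headers in several places, it can help to build them once. Below is a minimal sketch assuming only the header names shown in this guide; `buildGatewayHeaders` and its parameter names are hypothetical, not part of the SDK or the gateway:

```typescript
// Hypothetical helper: assembles the header set the gateway expects.
// Header names come from the snippet above; nothing else is assumed.
function buildGatewayHeaders(opts: {
  providerApiKey: string; // your OpenRouter key
  providerUrl?: string;   // where the gateway forwards requests
  environment?: string;   // e.g. "production" or "development"
}): Record<string, string> {
  const headers: Record<string, string> = {
    "x-inference-provider-api-key": opts.providerApiKey,
    "x-inference-provider-url": opts.providerUrl ?? "https://openrouter.ai/api",
  };
  // Only send the environment header when an environment is known.
  if (opts.environment) headers["x-inference-environment"] = opts.environment;
  return headers;
}

const headers = buildGatewayHeaders({
  providerApiKey: "sk-or-…",
  environment: "development",
});
console.log(headers["x-inference-provider-url"]); // https://openrouter.ai/api
```

The same object can then be passed as `defaultHeaders` when constructing the client.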