inf models lets you browse every model available to your active team — both platform-provided models and any BYOK (bring-your-own-key) routes your team has configured.
Alias: inf model
inf eval run takes model route IDs via --models and --judge-model. inf models list is where you discover those route IDs.
Route IDs
Route IDs look like <provider>:<model-alias> — for example openai:gpt-5.2, anthropic:claude-sonnet-4-6, cerebras:llama-3.3-70b. They are the canonical identifier the CLI and API use to address a specific model route, and they’re what inf eval run expects for --models and --judge-model. Use inf models list --json to dump every route ID available to your team.
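Putting that together, a minimal sketch of the discovery-then-eval flow. The route IDs shown are illustrative, and the exact argument syntax for --models (comma-separated list vs. repeated flag) is an assumption — check inf eval run --help for the form your CLI version expects:

```shell
# Discover the route IDs available to your team
inf models list --json

# Pass route IDs to an eval run (IDs and list syntax are illustrative)
inf eval run \
  --models openai:gpt-5.2,cerebras:llama-3.3-70b \
  --judge-model anthropic:claude-sonnet-4-6
```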
inf models list
Display every callable model visible to the active team, with provider, scope, capability flags, context window, and per-million-token pricing.
inf models ls
Options
| Flag | Required | Description | Default |
|---|---|---|---|
| --provider <name> | No | Filter by provider name (case-insensitive exact match) — e.g. openai, anthropic, google, cerebras | All providers |
| --scope <scope> | No | Filter by scope: platform (inf-public catalog) or byok (your team’s own provider keys) | Both |
| --judge-only | No | Show only models that can act as a judge in evals | Off |
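The filters above combine. A few invocations using only the flags documented in the table:

```shell
# Only platform-catalog models from OpenAI
inf models list --provider openai --scope platform

# Only your team's BYOK routes
inf models list --scope byok

# Only models allow-listed as eval judges
inf models list --judge-only
```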
Output
In table mode (default), each row shows:

| Column | Description |
|---|---|
| Model | Canonical alias (e.g. gpt-5.2, claude-sonnet-4-6, gemini-2.5-flash) |
| Provider | Provider brand (OpenAI, Anthropic, Google, Cerebras, …) |
| Scope | platform (inf-public) or byok (team-owned route) |
| Context | Max context window, rounded to the nearest 1k tokens |
| Struct. | Whether the model supports structured outputs (yes / no) |
| Tools | Whether the model supports tool / function calling |
| Reason. | Whether the model has a reasoning mode |
| Judge | Whether the model is allow-listed as an eval judge |
| $/1M In | Input price per million tokens |
| $/1M Out | Output price per million tokens |
Examples
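A few representative invocations, built only from the flags documented above (output formatting may vary by CLI version):

```shell
# Every model visible to the active team, in table mode
inf models list

# Same command via the short alias
inf models ls

# Narrow to Anthropic models that can serve as eval judges
inf models list --provider anthropic --judge-only
```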
JSON mode
inf models list --json emits the full enriched record per model, including the routeId string that inf eval run --models and --judge-model expect:
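To pull just the route IDs out of that JSON, a jq pipeline is a convenient sketch. The routeId field name comes from the text above, but the top-level shape of the payload (assumed here to be an array of records) is an assumption — adjust the jq path to match what your CLI actually emits:

```shell
# Extract route IDs for use with --models / --judge-model
# (assumes a top-level JSON array of model records)
inf models list --json | jq -r '.[].routeId'
```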