

inf models lets you browse every model available to your active team — both platform-provided models and any BYOK (bring-your-own-key) routes your team has configured. Alias: inf model.

inf eval run takes model route IDs via --models and --judge-model; inf models list is where you discover those route IDs.

Route IDs

Route IDs look like <provider>:<model-alias> — for example openai:gpt-5.2, anthropic:claude-sonnet-4-6, cerebras:llama-3.3-70b. They are the canonical identifier the CLI and API use to address a specific model route, and they’re what inf eval run expects for --models and --judge-model. Use inf models list --json to dump every route ID available to your team.
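Because route IDs follow the fixed `<provider>:<model-alias>` shape, you can split one into its parts with plain POSIX parameter expansion, no external tools needed. A small sketch, using a route ID from the examples above:

```shell
# Split a route ID of the form "<provider>:<model-alias>".
route_id="openai:gpt-5.2"
provider="${route_id%%:*}"      # everything before the first ":"
model_alias="${route_id#*:}"    # everything after the first ":"
echo "$provider"     # openai
echo "$model_alias"  # gpt-5.2
```

This is handy in scripts that need to group routes by provider before passing them to inf eval run.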

inf models list

Display every callable model visible to the active team, with provider, scope, capability flags, context window, and per-million-token pricing.
inf models list
Alias: inf models ls

Options

| Flag | Required | Description | Default |
| --- | --- | --- | --- |
| `--provider <name>` | No | Filter by provider name (case-insensitive exact match) — e.g. `openai`, `anthropic`, `google`, `cerebras` | All providers |
| `--scope <scope>` | No | Filter by scope: `platform` (inf-public catalog) or `byok` (your team’s own provider keys) | Both |
| `--judge-only` | No | Show only models that can act as a judge in evals | Off |

Output

In table mode (default), each row shows:
| Column | Description |
| --- | --- |
| Model | Canonical alias (e.g. `gpt-5.2`, `claude-sonnet-4-6`, `gemini-2.5-flash`) |
| Provider | Provider brand (OpenAI, Anthropic, Google, Cerebras, …) |
| Scope | `platform` (inf-public) or `byok` (team-owned route) |
| Context | Max context window, rounded to the nearest 1k tokens |
| Struct. | Whether the model supports structured outputs (yes / no) |
| Tools | Whether the model supports tool / function calling |
| Reason. | Whether the model has a reasoning mode |
| Judge | Whether the model is allow-listed as an eval judge |
| $/1M In | Input price per million tokens |
| $/1M Out | Output price per million tokens |

Examples

# All callable models, table view
inf models list

# Only OpenAI routes
inf models list --provider openai

# Only models allow-listed as judges (for inf eval run --judge-model)
inf models list --judge-only

# Only BYOK routes
inf models list --scope byok

# Machine-readable — full routeId per row, good for piping into inf eval run
inf models list --json | jq -r '.[] | select(.judgeCapable == true) | .routeId'
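The JSON examples above can be chained straight into an eval. A sketch of one such workflow, assuming --models accepts a comma-separated list of route IDs (check your CLI version) — collect every platform route and run an eval against all of them at once:

```shell
# Build a comma-separated list of platform route IDs from the JSON
# output, then hand it to inf eval run. The judge route ID here is
# the one used in the docs' examples.
models=$(inf models list --json \
  | jq -r '[.[] | select(.scope == "platform") | .routeId] | join(",")')
inf eval run --models "$models" --judge-model openai:gpt-5.2
```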

JSON mode

inf models list --json emits the full enriched record per model, including the routeId string that inf eval run --models and --judge-model expect:
[
  {
    "routeId": "openai:gpt-5.2-2025-12-11",
    "canonicalAlias": "gpt-5.2",
    "displayName": "GPT 5.2",
    "providerName": "OpenAI",
    "providerSlug": "openai",
    "scope": "platform",
    "maxContextSize": 128000,
    "structuredOutputs": true,
    "tools": true,
    "reasoning": false,
    "judgeCapable": true,
    "costInputPerMToken": 2.5,
    "costOutputPerMToken": 10
  }
]
inf models list --json | jq '.[] | select(.scope == "platform")' is a quick way to prune your eval model set to just platform routes.
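Since the JSON records carry pricing alongside capability flags, jq can also rank routes by cost. A hypothetical workflow sketch: pick the cheapest judge-capable platform route, using only the fields shown in the record above (`judgeCapable`, `scope`, `costInputPerMToken`, `routeId`):

```shell
# Cheapest judge-capable platform route, by input price per 1M tokens.
inf models list --json \
  | jq -r '[.[] | select(.judgeCapable and .scope == "platform")]
           | sort_by(.costInputPerMToken)
           | .[0].routeId'
```

The result drops straight into inf eval run --judge-model.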