inf models lets you browse every model available to your active team: both platform-provided models and any BYOK (bring-your-own-key) routes your team has configured. Alias: inf model.

inf eval run takes model route IDs via --models and --judge-model; inf models list is where you discover those route IDs.

List Models

Display every callable model visible to the active team, with provider, scope, capability flags, context window, and per-million-token pricing.
inf models list
Alias: inf models ls

Options

| Flag | Description | Default |
| --- | --- | --- |
| --provider <name> | Filter by provider name (case-insensitive exact match), e.g. openai, anthropic, google, cerebras | All providers |
| --scope <scope> | Filter by scope: platform (inf-public catalog) or byok (your team's own provider keys) | Both |
| --judge-only | Show only models that can act as a judge in evals | Off |

Output

In table mode (default), each row shows:
| Column | Description |
| --- | --- |
| Model | Canonical alias (e.g. gpt-5.2, claude-sonnet-4-6, gemini-2.5-flash) |
| Provider | Provider brand (OpenAI, Anthropic, Google, Cerebras, …) |
| Scope | platform (inf-public) or byok (team-owned route) |
| Context | Max context window, rounded to the nearest 1k tokens |
| Struct. | Whether the model supports structured outputs (yes / no) |
| Tools | Whether the model supports tool / function calling |
| Reason. | Whether the model has a reasoning mode |
| Judge | Whether the model is allow-listed as an eval judge |
| $/1M In | Input price per million tokens |
| $/1M Out | Output price per million tokens |
# All callable models, table view
inf models list

# Only OpenAI routes
inf models list --provider openai

# Only models allow-listed as judges (for inf eval run --judge-model)
inf models list --judge-only

# Only BYOK routes
inf models list --scope byok

# Machine-readable: full routeId per row, good for piping into inf eval run
inf models list --json | jq -r '.[] | select(.judgeCapable == true) | .routeId'

JSON mode

inf models list --json emits the full enriched record per model, including the routeId string that inf eval run --models and --judge-model expect:
[
  {
    "routeId": "openai:gpt-5.2-2025-12-11",
    "canonicalAlias": "gpt-5.2",
    "displayName": "GPT 5.2",
    "providerName": "OpenAI",
    "providerSlug": "openai",
    "scope": "platform",
    "maxContextSize": 128000,
    "structuredOutputs": true,
    "tools": true,
    "reasoning": false,
    "judgeCapable": true,
    "costInputPerMToken": 2.5,
    "costOutputPerMToken": 10
  }
]
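If jq isn't available, the same judge-capable filter can be written in a few lines of Python. This is a minimal sketch assuming only the record shape shown above; the inline sample records (and the second route ID) are illustrative, and in practice you would parse the real output of inf models list --json.

```python
import json
import subprocess


def judge_route_ids(records):
    """Return routeIds for records flagged as eval-judge capable."""
    return [r["routeId"] for r in records if r.get("judgeCapable")]


# Illustrative records shaped like the JSON above. In practice, load the
# CLI output instead, e.g.:
#   records = json.loads(subprocess.run(
#       ["inf", "models", "list", "--json"],
#       capture_output=True, text=True, check=True).stdout)
records = [
    {"routeId": "openai:gpt-5.2-2025-12-11", "judgeCapable": True},
    {"routeId": "openai:example-route", "judgeCapable": False},
]
print(judge_route_ids(records))  # → ['openai:gpt-5.2-2025-12-11']
```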
inf models list --json | jq '.[] | select(.scope == "platform")' is a quick way to prune your eval model set to just platform routes.
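The costInputPerMToken and costOutputPerMToken fields also make it easy to estimate a run's spend before launching it. A small sketch, assuming only the pricing fields shown in the record above (the helper name and token counts are illustrative, not part of the CLI):

```python
def estimate_cost(record, input_tokens, output_tokens):
    """Estimate USD cost for one model from per-million-token prices,
    as reported in `inf models list --json` records."""
    return (input_tokens * record["costInputPerMToken"]
            + output_tokens * record["costOutputPerMToken"]) / 1_000_000


# Using the sample prices from the record above ($2.5 in / $10 out per 1M):
record = {"costInputPerMToken": 2.5, "costOutputPerMToken": 10}
print(estimate_cost(record, 400_000, 100_000))  # → 2.0
```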