`inf models` lets you browse every model available to your active team —
both platform-provided models and any BYOK (bring-your-own-key) routes your
team has configured.

Alias: `inf model`

`inf eval run` takes model route IDs via `--models` and `--judge-model`;
`inf models list` is where you discover those route IDs.
List Models
Display every callable model visible to the active team, with provider,
scope, capability flags, context window, and per-million-token pricing.

Alias: `inf models ls`
Options
| Flag | Description | Default |
|---|---|---|
| `--provider <name>` | Filter by provider name (case-insensitive exact match), e.g. `openai`, `anthropic`, `google`, `cerebras` | All providers |
| `--scope <scope>` | Filter by scope: `platform` (inf-public catalog) or `byok` (your team's own provider keys) | Both |
| `--judge-only` | Show only models that can act as a judge in evals | Off |
Output
In table mode (default), each row shows:
| Column | Description |
|---|---|
| Model | Canonical alias (e.g. `gpt-5.2`, `claude-sonnet-4-6`, `gemini-2.5-flash`) |
| Provider | Provider brand (OpenAI, Anthropic, Google, Cerebras, …) |
| Scope | `platform` (inf-public) or `byok` (team-owned route) |
| Context | Max context window, rounded to the nearest 1k tokens |
| Struct. | Whether the model supports structured outputs (yes / no) |
| Tools | Whether the model supports tool / function calling (yes / no) |
| Reason. | Whether the model has a reasoning mode (yes / no) |
| Judge | Whether the model is allow-listed as an eval judge (yes / no) |
| $/1M In | Input price per million tokens |
| $/1M Out | Output price per million tokens |
```shell
# All callable models, table view
inf models list

# Only OpenAI routes
inf models list --provider openai

# Only models allow-listed as judges (for inf eval run --judge-model)
inf models list --judge-only

# Only BYOK routes
inf models list --scope byok

# Machine-readable — full routeId per row, good for piping into inf eval run
inf models list --json | jq -r '.[] | select(.judgeCapable == true) | .routeId'
```
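The same `--json` output can drive a cost-aware judge pick. A minimal sketch, assuming `jq` is installed and working from a saved copy of the output; the records below (including the second and third route IDs) are illustrative samples, not real catalog entries, though the field names follow the JSON-mode schema:

```shell
# models.json stands in for saved `inf models list --json` output.
# The records are made-up samples for illustration only.
cat > models.json <<'EOF'
[
  {"routeId": "openai:gpt-5.2-2025-12-11", "scope": "platform",
   "judgeCapable": true,  "costInputPerMToken": 2.5},
  {"routeId": "anthropic:claude-sonnet-4-6", "scope": "platform",
   "judgeCapable": true,  "costInputPerMToken": 3.0},
  {"routeId": "byok:my-local-route", "scope": "byok",
   "judgeCapable": true,  "costInputPerMToken": 0.5}
]
EOF

# Keep judge-capable platform routes, sort by input price, take the cheapest.
jq -r '[ .[] | select(.judgeCapable and .scope == "platform") ]
       | sort_by(.costInputPerMToken) | first | .routeId' models.json
# prints openai:gpt-5.2-2025-12-11
```

The resulting route ID is in the shape `--judge-model` expects, per the JSON mode section below.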
JSON mode
`inf models list --json` emits the full enriched record per model, including
the `routeId` string that `inf eval run --models` and `--judge-model`
expect:
```json
[
  {
    "routeId": "openai:gpt-5.2-2025-12-11",
    "canonicalAlias": "gpt-5.2",
    "displayName": "GPT 5.2",
    "providerName": "OpenAI",
    "providerSlug": "openai",
    "scope": "platform",
    "maxContextSize": 128000,
    "structuredOutputs": true,
    "tools": true,
    "reasoning": false,
    "judgeCapable": true,
    "costInputPerMToken": 2.5,
    "costOutputPerMToken": 10
  }
]
```
`inf models list --json | jq '.[] | select(.scope == "platform")'` is a
quick way to prune your eval model set to just platform routes.
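The pricing fields also compose into quick back-of-envelope estimates: a workload of I input and O output tokens on a route costs roughly (I / 1M) × `costInputPerMToken` + (O / 1M) × `costOutputPerMToken`. A sketch, again assuming `jq` and a saved copy of the `--json` output (the record reuses the sample from the JSON-mode example):

```shell
# models.json stands in for saved `inf models list --json` output.
cat > models.json <<'EOF'
[
  {"routeId": "openai:gpt-5.2-2025-12-11",
   "costInputPerMToken": 2.5, "costOutputPerMToken": 10}
]
EOF

# Cost of a 1M-input + 1M-output workload per route:
# (1M / 1M) * costInputPerMToken + (1M / 1M) * costOutputPerMToken
jq -r '.[] | "\(.routeId): $\(.costInputPerMToken + .costOutputPerMToken) for 1M in + 1M out"' models.json
# prints openai:gpt-5.2-2025-12-11: $12.5 for 1M in + 1M out
```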