Route your Cerebras requests through the Inference Catalyst gateway to get cost tracking, latency monitoring, and analytics. Cerebras has a dedicated provider routing ID, so you use the OpenAI SDK with x-inference-provider: cerebras.
Prefer automatic setup? Run inf instrument to instrument your codebase in seconds.

Setup
Get your API keys
You need two keys:
- Inference Catalyst project API key — from your dashboard under API Keys
- Cerebras API key — from your Cerebras dashboard
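Once you have both keys, one common approach is to keep them in environment variables rather than hard-coding them. The variable names here are assumptions for illustration; use whatever naming your project already follows.

```shell
# Assumed variable names -- adjust to your own conventions.
export INFERENCE_API_KEY="your-catalyst-project-key"  # from the dashboard, under API Keys
export CEREBRAS_API_KEY="your-cerebras-key"           # from the Cerebras dashboard
```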