Deployments expose an OpenAI-compatible chat completions endpoint. Point any OpenAI SDK client at the Inference base URL and set the model to your deployment's model path.
Optionally, set the x-inference-task-id header to group calls by objective; see Tasks for details.
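As a minimal sketch of the request shape (the base URL, API key, task ID value, and model path below are placeholder assumptions; substitute your deployment's actual values from the dashboard), the same call can be built with Python's standard library:

```python
import json
import urllib.request

# Assumed values: replace with your Inference base URL, API key, and model path.
BASE_URL = "https://api.inference.net/v1"
MODEL = "acme-corp/my-model"  # team slug / deployment name

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
        "x-inference-task-id": "my-task",  # optional: groups calls by objective
    },
)
# response = urllib.request.urlopen(req)  # uncomment to send the request
```

Any OpenAI-compatible SDK works the same way: configure its base URL and pass the model path as the `model` parameter.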
Where to find your model path
The model path is shown on your deployment's detail page in the dashboard. It is your team slug followed by the name you chose when creating the deployment (e.g. acme-corp/my-model).