FAQ
Frequently asked questions about the Inference.net API
Does Inference.net support streaming?
Yes, Inference.net supports streaming for all language models. Simply set the stream=true parameter in your request.
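As a minimal sketch, a streaming chat request body would set stream to true alongside the usual fields. The endpoint URL and model id below are illustrative assumptions, not confirmed Inference.net values:

```python
import json

# Assumed OpenAI-compatible chat completions endpoint (illustrative only).
API_URL = "https://api.inference.net/v1/chat/completions"

body = {
    "model": "meta-llama/llama-3.1-8b-instruct",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # enables token-by-token streaming of the response
}

payload = json.dumps(body)
# The payload would be POSTed with an "Authorization: Bearer <API_KEY>"
# header; a streaming response then arrives as incremental chunks rather
# than a single completed message.
print(payload)
```

With streaming enabled, your client reads partial tokens as they are generated instead of waiting for the full completion.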
Does Inference.net support batching?
Yes, Inference.net supports batching for all models. Please contact us for details.
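For a rough idea of what a batch submission can look like: many OpenAI-compatible batch APIs accept a JSONL file with one request per line. The field names and model id below are assumptions for illustration, not a confirmed Inference.net format; contact support for the actual details.

```python
import json

# Hypothetical JSONL batch input, one chat request per line.
requests = [
    {
        "custom_id": f"req-{i}",          # caller-chosen id to match results
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "meta-llama/llama-3.1-8b-instruct",  # hypothetical
            "messages": [{"role": "user", "content": text}],
        },
    }
    for i, text in enumerate(
        ["Translate 'hello' to French.", "Summarize this paragraph."]
    )
]

jsonl = "\n".join(json.dumps(r) for r in requests)
print(jsonl)
```

Each line is an independent request, so results can be matched back to inputs via custom_id once the batch completes.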
Does Inference.net support custom models?
Not currently, but we are working on supporting LoRAs for Llama 3.1 models. If you have a specific use case or would like early access, please contact us.
Does Inference.net use my data?
No, Inference.net does not use your data for training or any other purpose. We delete all data after 30 days.