FAQ
Frequently asked questions about the Inference.net API
Does Inference.net support streaming?
Yes, Inference.net supports streaming for all language models. Simply set the `stream=true`
parameter in your request.
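As a minimal sketch, a streaming request body might be built like this in Python. The endpoint URL and model id below are illustrative placeholders, not confirmed values; check the Inference.net documentation for the exact base URL and the model ids available to your account.

```python
import json

# Hypothetical endpoint -- confirm the real base URL in the Inference.net docs.
API_URL = "https://api.inference.net/v1/chat/completions"

payload = {
    "model": "llama-3.1-8b-instruct",  # placeholder model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # enable streaming: tokens arrive as they are generated
}

# Serialize the payload for the POST request body.
body = json.dumps(payload)
```

With `stream=true` set, the response typically arrives as a sequence of chunks rather than a single JSON object, so your client should read and process the response incrementally instead of waiting for the full completion.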
Does Inference.net support batching?
Yes, Inference.net supports batching for all models. Please contact us for details.
Does Inference.net support custom models?
Not currently. We are working on supporting LoRAs for Llama 3.1 models. If you have a specific use case or would like early access, please contact us.
Does Inference.net use my data?
No, Inference.net does not use your data for training or any other purpose. We delete all data after 30 days.