This feature is on the roadmap. Today, only models trained on the platform can be deployed.
What’s planned
- Deploy off-the-shelf OSS models — run popular open source models on dedicated GPUs without going through training
- Bring your own trained models — deploy models you’ve already fine-tuned outside the platform
Why this matters
Not every deployment starts with training on Catalyst. Some teams want dedicated GPU serving for an existing open source model, or they’ve already fine-tuned a model elsewhere and want to host it.
Current alternative
If you need to deploy a Hugging Face model today, see the Deploy a HF Model guide for the current process.
The loop continues
Your custom model is live. Use Observe to watch its production performance, run evals to catch regressions, and train the next version when you’re ready.
Observe
Monitor production traffic.
Eval
Catch quality regressions.
Train
Build the next version.