Quick Start
Record your first LLM call
Route traffic through the Catalyst gateway to automatically trace LLM calls and view metrics.
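Routing through a tracing gateway usually just means pointing your existing client at the gateway's base URL instead of the provider's. The sketch below illustrates the idea; the gateway URL, header names, and request shape here are assumptions modeled on OpenAI-compatible APIs, not documented Catalyst values.

```python
import json

# Hypothetical gateway endpoint -- substitute your actual Catalyst gateway URL.
GATEWAY_BASE_URL = "https://gateway.example.com/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> tuple:
    """Build an OpenAI-style chat-completions request routed through the gateway.

    Because only the base URL changes, the same client code works whether it
    talks to the provider directly or through the tracing gateway.
    """
    url = f"{GATEWAY_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "sk-example",
    "my-model",
    [{"role": "user", "content": "Hello"}],
)
```

Sending the request (with `urllib.request` or any HTTP client) is unchanged from a direct provider call; the gateway records the traffic and forwards it on.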
Run your first eval
Define quality, measure it, and compare models side by side.
Train and deploy a model
The full loop: data, training, and a production endpoint.
Use the Inference API
Call open-source and custom models running on Inference.net.
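As a minimal sketch of consuming such an API, the snippet below parses an OpenAI-compatible chat-completion response. The response shape shown is an assumption based on OpenAI-compatible conventions; check the Inference.net API reference for the exact schema.

```python
import json

# A sample response body in the OpenAI-compatible shape (an assumption,
# used here so the parsing logic is self-contained and runnable).
raw = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "Hi there"}}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 3},
})

resp = json.loads(raw)

# The assistant's reply is the first choice's message content.
answer = resp["choices"][0]["message"]["content"]

# Token usage is useful for cost tracking in production.
total_tokens = resp["usage"]["prompt_tokens"] + resp["usage"]["completion_tokens"]
```

The same extraction works for both open-source and custom models, since they are served behind the same response schema.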
Catalyst Platform
Observe
Get metrics and visibility into your production LLM traffic.
Datasets
Create and manage datasets for evaluation and training.
Evaluate
Measure quality across candidate models, side by side.
Train
Train a custom model on your production data to improve performance and reduce latency and cost.
Deploy
Deploy a model to a dedicated GPU to use in production.
API
Call open-source and custom models running on Inference.net.