Catalyst is a platform for building and deploying task-specific AI models. Instead of relying on large general-purpose models for every task, Catalyst helps you collect production data, evaluate model quality, fine-tune smaller models optimized for your workload, and deploy them on dedicated infrastructure. Create an account to get started with Catalyst.

Documentation Index
Fetch the complete documentation index at: https://docs.inference.net/llms.txt
Use this file to discover all available pages before exploring further.
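Since the index follows the llms.txt convention of markdown links, one way to discover the available pages programmatically is to fetch the file and extract its links. This is a minimal sketch, not an official client; it assumes the standard `- [Title](url): description` link format and uses only the Python standard library:

```python
import re
import urllib.request

# Matches markdown links of the form [Title](https://...),
# the usual entry format in an llms.txt index.
LINK_PATTERN = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")

def parse_index(text: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every markdown link in the index text."""
    return LINK_PATTERN.findall(text)

def fetch_index(url: str = "https://docs.inference.net/llms.txt") -> str:
    """Download the raw index text (requires network access)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# Example against a local sample line, so no network call is needed:
sample = "- [Quick Start](https://docs.inference.net/quickstart): record your first LLM call"
print(parse_index(sample))
```

In practice you would call `parse_index(fetch_index())` once, then fetch only the pages relevant to your task.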
Quick Start
Record your first LLM call
Route traffic through the Catalyst gateway to automatically trace LLM calls and view metrics.
Run your first eval
Define quality, measure it, and compare models side by side.
Train and deploy a model
The full loop: data, training, and a production endpoint.
Use the Inference API
Call open-source and custom models running on Inference.net.
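As a rough illustration of what calling a hosted model looks like, here is a sketch that assembles an OpenAI-style chat-completion request. The base URL, model identifier, endpoint path, and `INFERENCE_API_KEY` environment variable are assumptions for illustration, not confirmed by this index; consult the API page for the actual values:

```python
import json
import os
import urllib.request

# Assumptions (not confirmed by this page): an OpenAI-compatible
# /chat/completions endpoint and this example model identifier.
BASE_URL = "https://api.inference.net/v1"
MODEL = "meta-llama/llama-3.1-8b-instruct"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) a chat-completion request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('INFERENCE_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Hello!")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return a JSON body whose shape depends on the API's actual response format.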
Catalyst Platform
Observe
Monitor your LLM usage with metrics and visibility into your production traffic.
Datasets
Create and manage datasets for evaluation and training.
Evaluate
Measure quality and compare it across model candidates.
Train
Train a custom model on your production data to improve performance while lowering latency and cost.
Deploy
Deploy a model to a dedicated GPU to use in production.
API
Call open-source and custom models running on Inference.net.