# Inference.net Documentation

> Use Inference.net to observe, evaluate, train, and deploy LLM systems.

## Docs

- [API Quickstart](https://docs.inference.net/api/api-quickstart.md): Get started with the Inference.net API.
- [Batch API](https://docs.inference.net/api/async-inference/batch-api.md): Process jobs asynchronously with the Batch API.
- [Group API](https://docs.inference.net/api/async-inference/group.md): Submit multiple asynchronous inference requests as a single group for easier tracking and webhook notifications.
- [Overview](https://docs.inference.net/api/async-inference/overview.md): Make cost-effective inference requests with flexible completion times.
- [Getting Started With Webhooks](https://docs.inference.net/api/async-inference/webhooks/getting-started-with-webhooks.md): Everything you need to know to get started with webhooks.
- [Webhooks: Quick Reference](https://docs.inference.net/api/async-inference/webhooks/quick-reference.md): Quick reference for webhook support in asynchronous inference.
- [Data Retention](https://docs.inference.net/api/data-retention.md): Understand how Inference.net handles request data, observability records, and retention controls.
- [Function Calling](https://docs.inference.net/api/function-calling.md): Enable models to fetch data and take actions.
- [Rate Limits](https://docs.inference.net/api/rate-limits.md): Rate limits for the Inference.net API.
- [Structured Outputs](https://docs.inference.net/api/structured-outputs.md): Ensure responses adhere to a JSON schema.
- [Vision](https://docs.inference.net/api/vision.md): Use models to extract information from images.
- [Authentication](https://docs.inference.net/cli/authentication.md): Sign in interactively, use an API key for CI, and manage CLI credentials.
- [Dashboard](https://docs.inference.net/cli/dashboard.md): Launch the interactive terminal dashboard for a live overview of your project.
- [Datasets](https://docs.inference.net/cli/datasets.md): Upload JSONL inference data, materialize eval/training datasets, and download them.
- [Evals](https://docs.inference.net/cli/evals.md): Create rubrics, launch eval runs, and inspect results from the terminal.
- [Inferences](https://docs.inference.net/cli/inferences.md): View inference requests and responses captured by Observe.
- [Instrument](https://docs.inference.net/cli/instrument.md): Automatically instrument your codebase to route LLM calls through Inference.net Catalyst using an AI coding agent.
- [Models](https://docs.inference.net/cli/models.md): Browse callable models, providers, capabilities, and pricing from the terminal.
- [Install CLI](https://docs.inference.net/cli/overview.md)
- [Projects](https://docs.inference.net/cli/projects.md): List, switch, and inspect your Inference.net projects.
- [Training](https://docs.inference.net/cli/training.md): Queue training runs, discover recipes and base models, monitor progress, and surface failures.
- [Record Your First LLM Call](https://docs.inference.net/get-started/record-first-call.md)
- [Run Your First Eval](https://docs.inference.net/get-started/run-first-eval.md)
- [Train and Deploy a Custom Model](https://docs.inference.net/get-started/train-and-deploy.md): Train a task-specific model using demo data, deploy it, and see how it performs.
- [LangChain](https://docs.inference.net/integrations/frameworks/langchain.md): Route supported LangChain OpenAI and Anthropic wrappers through Inference.net while preserving provider routing, environments, and task metadata.
- [Install with AI](https://docs.inference.net/integrations/install-with-ai.md): Use the Inference CLI and an AI coding agent to instrument your codebase automatically.
- [Anthropic](https://docs.inference.net/integrations/model-providers/anthropic.md): Route Anthropic requests through Inference Catalyst for full observability.
- [Cerebras](https://docs.inference.net/integrations/model-providers/cerebras.md): Route Cerebras requests through Inference Catalyst for full observability.
- [Groq](https://docs.inference.net/integrations/model-providers/groq.md): Route Groq requests through Inference Catalyst for full observability.
- [OpenAI](https://docs.inference.net/integrations/model-providers/openai.md): Route OpenAI requests through Inference Catalyst for full observability.
- [OpenRouter](https://docs.inference.net/integrations/model-providers/openrouter.md): Route OpenRouter requests through Inference Catalyst for full observability.
- [Integrations](https://docs.inference.net/integrations/overview.md): Connect Catalyst to your existing tooling, model providers, and frameworks.
- [Anthropic Traces](https://docs.inference.net/integrations/traces/anthropic.md): Trace Anthropic Messages API calls, tool use, and prompt caching.
- [Claude Agent SDK Traces](https://docs.inference.net/integrations/traces/claude-agent-sdk.md): Trace Claude Agent SDK query loops and yielded agent messages.
- [Claude Code SDK Traces](https://docs.inference.net/integrations/traces/claude-code-sdk.md): Trace Claude Code CLI and SDK-style invocations with OpenInference AGENT spans.
- [Instrumentation Examples](https://docs.inference.net/integrations/traces/examples.md): Copyable tracing patterns for providers, frameworks, agents, and custom application code.
- [LangChain Traces](https://docs.inference.net/integrations/traces/langchain.md): Capture LangChain chains, agents, LLM calls, and tools through Catalyst callback instrumentation.
- [LangGraph Traces](https://docs.inference.net/integrations/traces/langgraph.md): Trace LangGraph workflows while preserving graph and node parent-child spans.
- [LangSmith Traces](https://docs.inference.net/integrations/traces/langsmith.md): Bridge LangSmith OpenTelemetry spans into the Catalyst tracer provider.
- [Manual Spans](https://docs.inference.net/integrations/traces/manual-spans.md): Wrap custom agents, CLI calls, provider routing, and unsupported SDKs with OpenInference-shaped spans.
- [OpenAI Traces](https://docs.inference.net/integrations/traces/openai.md): Trace OpenAI Chat Completions, tool calls, structured outputs, and Responses API calls.
- [OpenAI Agents Traces](https://docs.inference.net/integrations/traces/openai-agents.md): Trace OpenAI Agents runs, tool calls, handoffs, and nested OpenAI model calls.
- [Traces](https://docs.inference.net/integrations/traces/overview.md): Collect OpenInference-shaped traces from LLM SDKs, agent frameworks, and custom application code.
- [Pydantic AI Traces](https://docs.inference.net/integrations/traces/pydantic-ai.md): Trace Pydantic AI agents, tool calls, and structured outputs through native OpenTelemetry instrumentation.
- [Traces Quickstart](https://docs.inference.net/integrations/traces/quickstart.md): Install a Catalyst tracing SDK, configure export, and capture your first trace.
- [Traces Troubleshooting](https://docs.inference.net/integrations/traces/troubleshooting.md): Debug missing spans, missing attributes, and shutdown behavior.
- [Catalyst by Inference.net](https://docs.inference.net/introduction.md)
- [Build a Dataset from Traffic](https://docs.inference.net/platform/datasets/build-from-traffic.md): Turn production traffic into datasets for evaluation and training.
- [Dataset Formats and Schemas](https://docs.inference.net/platform/datasets/formats.md): JSONL upload formats, required fields, validation rules, and upload limits.
- [Datasets](https://docs.inference.net/platform/datasets/overview.md): Curate datasets from production traffic or your own files for evals and training.
- [Upload a Dataset](https://docs.inference.net/platform/datasets/upload-a-dataset.md): Import JSONL inference data, then turn it into eval or training datasets.
- [Call Your Deployment](https://docs.inference.net/platform/deploy/call-your-deployment.md): Connect to your deployed model using the OpenAI-compatible API.
- [Deploy a Trained Model](https://docs.inference.net/platform/deploy/deploy-a-model.md): Go from a completed training run to a live endpoint in a few clicks.
- [Manage and Monitor](https://docs.inference.net/platform/deploy/manage-and-monitor.md): Start, stop, and delete deployments. Monitor production performance and scale when you need to.
- [Open Source Models](https://docs.inference.net/platform/deploy/open-source-models.md): Deploy off-the-shelf open source models or bring your own trained models.
- [Deploy](https://docs.inference.net/platform/deploy/overview.md): Dedicated GPU infrastructure for serving trained models via an OpenAI-compatible API.
- [How LLM-as-a-Judge Works](https://docs.inference.net/platform/eval/llm-as-a-judge.md): The evaluation mechanism that scores model outputs against your rubric criteria.
- [Offline vs Online Evaluation](https://docs.inference.net/platform/eval/offline-vs-online.md): Running evals against collected samples vs scoring live production traffic.
- [Eval](https://docs.inference.net/platform/eval/overview.md): Measure model quality with rubrics scored by LLM judges. Know which model is better and by how much.
- [Read the Results](https://docs.inference.net/platform/eval/read-the-results.md): Interpret the side-by-side comparison view and decide which model wins.
- [Run a Model Comparison](https://docs.inference.net/platform/eval/run-a-comparison.md): Run your eval dataset through multiple models and score the outputs against your rubric.
- [Writing Rubrics](https://docs.inference.net/platform/eval/write-a-rubric.md): Create evaluation rubrics from templates, AI generation, or plain English.
- [How to Create a Task](https://docs.inference.net/platform/observe/create-a-task.md)
- [Inference Viewer](https://docs.inference.net/platform/observe/inference-viewer.md): Browse, filter, and inspect individual LLM requests and responses.
- [Trace Your Application](https://docs.inference.net/platform/observe/integrate.md): Collect data from your AI application for evaluation and training.
- [Metrics Explorer](https://docs.inference.net/platform/observe/metrics-explorer.md): Dashboards for cost, latency, errors, and token usage across all your LLM calls.
- [Observe](https://docs.inference.net/platform/observe/overview.md): Record and analyze your production LLM traffic.
- [Prompt Versions](https://docs.inference.net/platform/observe/prompt-versions.md): Catalyst records a hash of each prompt so you can track changes and see which inferences used which version.
- [Tasks Overview](https://docs.inference.net/platform/observe/tasks.md): Group LLM calls by objective to track metrics, run evaluations, and train models.
- [After Training Completes](https://docs.inference.net/platform/train/after-training.md): What you get when training finishes and how to evaluate the result before deploying.
- [Choose a Recipe](https://docs.inference.net/platform/train/choose-a-recipe.md): Pre-configured training setups that abstract away base model selection and training parameters.
- [Launch a Training Run](https://docs.inference.net/platform/train/launch-a-run.md): Start a training job from the dashboard by selecting your datasets, rubric, and recipe.
- [Monitor a Training Run](https://docs.inference.net/platform/train/mid-training-evals.md): Track training progress with real-time graphs, eval scores, and GPU logs.
- [Train](https://docs.inference.net/platform/train/overview.md): Fine-tune task-specific models using your data and eval criteria. The platform handles base model selection, parameters, and compute.
- [Troubleshooting Training Failures](https://docs.inference.net/platform/train/troubleshooting.md): Common training failures, what they look like, and how to recover.
- [API Keys and Authentication](https://docs.inference.net/reference/api-keys.md): Where to find your API key, how authentication works, and security best practices.
- [Glossary](https://docs.inference.net/reference/glossary.md): Quick-reference definitions for Catalyst concepts and terminology.
- [Rate Limits](https://docs.inference.net/reference/rate-limits.md): Request limits, what happens when you hit them, and how to request higher limits.
- [Catalyst Workflow](https://docs.inference.net/workflow.md): The end-to-end workflow for observing, evaluating, training, and deploying task-specific AI models.
- [ClipTagger](https://docs.inference.net/workhorse-models/cliptagger.md): Programmatic video understanding built for massive scale.
- [Schematron](https://docs.inference.net/workhorse-models/schematron.md): Schema-guided extraction from messy HTML.