A task is a user-defined objective like “summarize this document,” “classify this ticket,” or “extract these fields.” Tagging your LLM calls with a task-id groups them by function, not just by which model or prompt was used. If you don’t set a task, calls are grouped under a default task automatically, so nothing gets lost. To get the most out of Catalyst, we strongly recommend tagging your LLM calls; Install with AI can add the tags for you automatically.

Why tasks matter

Once you have more than one AI feature, tasks let you:
  • Track metrics per feature — cost, latency, and error rates for each objective independently
  • Run evals per task — measure whether a specific capability is getting better or worse
  • Build focused datasets — filter by task to get clean, relevant samples for training
  • Experiment safely — change the model or prompt for one task without affecting others
The task is the stable anchor. You might swap models, rewrite prompts, or redesign your agent, but the task stays the same.
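One way to make that anchor concrete is to keep a per-task configuration map in your application. This is a hypothetical sketch (the `TaskConfig` shape and task ids are illustrative, not part of Catalyst): the task id stays fixed while the model and prompt behind it can change independently.

```typescript
// Hypothetical per-task configuration: the task id is the stable key,
// while the model and prompt behind it are free to change.
type TaskConfig = { model: string; systemPrompt: string };

const taskConfigs: Record<string, TaskConfig> = {
  "document-summary": { model: "gpt-4.1", systemPrompt: "Summarize concisely." },
  "ticket-classify": { model: "gpt-4.1-mini", systemPrompt: "Classify the ticket." },
};

// Swapping the model for one task leaves every other task untouched,
// and metrics for "document-summary" remain comparable before and after.
taskConfigs["document-summary"].model = "gpt-4.1-mini";
```

Because each call site looks up its config by task id, experimenting with one feature never touches the others.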

How to tag a call

Set the x-inference-task-id header on your request. The task appears in the dashboard automatically once the first tagged request comes through. See How to Create a Task for the full setup.
const response = await client.chat.completions.create({
  model: "gpt-4.1",
  messages: [{ role: "user", content: "Summarize this document..." }],
}, {
  // Per-request options: the task id travels as a custom header
  headers: { "x-inference-task-id": "document-summary" },
});
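If many call sites tag the same way, it can help to centralize the header in a small helper. The `withTask` function below is a hypothetical convenience, not part of any SDK; it merges the task header into the per-request options object that the OpenAI Node SDK accepts as a second argument.

```typescript
// Hypothetical helper: attach the Catalyst task header to per-request
// options so every call site tags itself consistently.
function withTask(
  taskId: string,
  options: { headers?: Record<string, string> } = {},
) {
  return {
    ...options,
    // Merge rather than overwrite, so existing headers survive.
    headers: { ...options.headers, "x-inference-task-id": taskId },
  };
}

// Usage:
// await client.chat.completions.create(params, withTask("document-summary"));
```

Centralizing the tag also makes renaming a task a one-line change.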

Using tasks across the platform

Once calls are tagged, tasks appear as a filter and grouping dimension everywhere:
  • Metrics Explorer — break down cost, latency, and errors by task
  • Inference Viewer — filter to see all calls for a specific task
  • Datasets — filter by task to build focused eval and training datasets