Every time Catalyst sees a new prompt for a task, it records a hash of the prompt content and treats it as a new version. You don’t need to do anything to enable this. It happens automatically as requests flow through the gateway.
How it works
Catalyst hashes the system message of each request. When the hash changes (because you updated your system prompt), a new version is created under that task. Previous versions are preserved, so you have a full history of every system prompt your task has used. This means you can:
- See which prompt version a specific inference used
- View the full history of prompt changes for a task
- Filter inferences by prompt version to compare behavior across changes
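The mechanism above can be sketched in a few lines of Python. This is an illustrative model only: the hash algorithm (SHA-256 here) and the `TaskVersionLog` class are assumptions for the sketch, not Catalyst's actual implementation.

```python
import hashlib


def prompt_version_hash(messages):
    """Hash only the system message content (sha256 is an assumed choice)."""
    system = next((m["content"] for m in messages if m["role"] == "system"), "")
    return hashlib.sha256(system.encode("utf-8")).hexdigest()[:12]


class TaskVersionLog:
    """Toy version history for one task: a new version is appended
    whenever the system-message hash differs from the latest one."""

    def __init__(self):
        self.versions = []  # ordered list of (version_number, hash)

    def record(self, messages):
        h = prompt_version_hash(messages)
        if not self.versions or self.versions[-1][1] != h:
            self.versions.append((len(self.versions) + 1, h))
        return self.versions[-1][0]  # version this request belongs to


log = TaskVersionLog()
log.record([{"role": "system", "content": "You are helpful."},
            {"role": "user", "content": "hi"}])          # version 1
log.record([{"role": "system", "content": "You are terse."},
            {"role": "user", "content": "hi"}])          # system prompt changed: version 2
```

Because earlier `(version, hash)` pairs are never discarded, every past system prompt remains addressable by its version number.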
What triggers a new version
A change to the system message creates a new version. Changes to user messages, parameters like `temperature`, or the model do not currently trigger a new version.
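The rule above can be demonstrated with a small self-contained check: two requests that differ in model, temperature, and user message, but share a system message, map to the same version hash. The `version_hash` helper and its SHA-256 choice are illustrative assumptions, not Catalyst's actual code.

```python
import hashlib


def version_hash(request):
    # Only the system message feeds the hash; model, temperature,
    # and user messages are deliberately ignored (illustrative sketch).
    system = next(m["content"] for m in request["messages"]
                  if m["role"] == "system")
    return hashlib.sha256(system.encode("utf-8")).hexdigest()


a = {"model": "model-a", "temperature": 0.2,
     "messages": [{"role": "system", "content": "Summarize the text."},
                  {"role": "user", "content": "First article..."}]}
b = {"model": "model-b", "temperature": 0.9,
     "messages": [{"role": "system", "content": "Summarize the text."},
                  {"role": "user", "content": "A different article..."}]}

assert version_hash(a) == version_hash(b)  # same version: system message unchanged
```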
Coming Soon
- Broader hashing - future versions will include user message templates, model, and parameter changes in the version hash
- Dataset filtering by version - build training and eval datasets tied to a specific version of a prompt, so you can train on data from the version that performed best