Catalyst hooks into LangChain's callback managers so that runnable calls emit spans without manual span creation. In agent workflows, the trace tree includes chain, model, and tool spans, with parent-child relationships preserved by the LangChain callbacks.

Install

bun add @inference/tracing @langchain/core langchain @langchain/anthropic zod
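
The Python example further down needs the tracing package plus the LangChain packages it imports. Assuming the PyPI distribution name mirrors the inference_catalyst_tracing import, the install would look like:

pip install inference-catalyst-tracing langchain langchain-anthropic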

TypeScript Agent With Tools

Pass the callback manager module to setup(). Catalyst patches the static configuration path LangChain uses when constructing callback managers.
TypeScript
import { setup } from "@inference/tracing";
import { ChatAnthropic } from "@langchain/anthropic";
import * as CallbackManagerModule from "@langchain/core/callbacks/manager";
import { createAgent, tool } from "langchain";
import { z } from "zod";

const tracing = await setup({
  serviceName: "support-agent",
  modules: { langchainCallbacksManager: CallbackManagerModule },
});

const lookupOrder = tool(
  ({ orderId }) => JSON.stringify({ orderId, status: "shipped", total: 42.5 }),
  {
    name: "lookup_order",
    description: "Look up an order by ID.",
    schema: z.object({ orderId: z.string() }),
  },
);

const cancelOrder = tool(
  ({ orderId, reason }) => JSON.stringify({ ok: true, orderId, reason }),
  {
    name: "cancel_order",
    description: "Cancel a not-yet-delivered order.",
    schema: z.object({ orderId: z.string(), reason: z.string() }),
  },
);

const agent = createAgent({
  model: new ChatAnthropic({ model: "claude-haiku-4-5", maxTokens: 512 }),
  tools: [lookupOrder, cancelOrder],
  systemPrompt: "Use tools to resolve order issues.",
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "Cancel order ABC-123." }],
});

console.log(result.messages.at(-1)?.content);
await tracing.shutdown();
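
Both ChatAnthropic clients read the API key from the ANTHROPIC_API_KEY environment variable, and the Python example below needs it set as well. Assuming the TypeScript example is saved as agent.ts (the filename is arbitrary), run it with:

export ANTHROPIC_API_KEY=...
bun run agent.ts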

Python Agent With Tools

Python
import json

from inference_catalyst_tracing import setup
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

tracing = setup(service_name="support-agent")

@tool
def lookup_order(order_id: str) -> str:
    """Look up an order by ID."""
    return json.dumps({"order_id": order_id, "status": "shipped", "total": 42.5})

@tool
def cancel_order(order_id: str, reason: str) -> str:
    """Cancel a not-yet-delivered order."""
    return json.dumps({"ok": True, "order_id": order_id, "reason": reason})

llm = ChatAnthropic(model="claude-haiku-4-5", max_tokens=512)
agent = create_agent(
    llm,
    tools=[lookup_order, cancel_order],
    system_prompt="Use tools to resolve order issues.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Cancel order ABC-123."}]},
)

print(result["messages"][-1].content)
tracing.shutdown()

What To Look For

  • A top-level LangChain chain or agent span
  • Nested LLM spans for model calls
  • Tool spans named after LangChain tools
  • Tool input and output attributes on tool spans
  • Token counts on model spans when the provider returns usage
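
For the cancel-order request used in the examples, the resulting trace typically has a shape like the sketch below. Exact span names depend on your LangChain and Catalyst versions, so treat it as illustrative:

agent (chain span)
├── ChatAnthropic (model span: decides to call cancel_order; token counts when usage is returned)
├── cancel_order (tool span: input and output attributes)
└── ChatAnthropic (model span: final answer to the user)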