# HazelJS Observability Package

`@hazeljs/observability` provides native OpenTelemetry integration for AI agents. It lets you track LLM costs, trace complex agentic flows, and monitor the internal "thought process" of your AI agents with minimal boilerplate, using decorators.
## Quick Reference

- Purpose: Distributed tracing, LLM cost tracking, and automatic span management for AI agents and RAG pipelines.
- When to use: Debugging why an agent is failing, monitoring performance bottlenecks across tool calls, or tracking total token usage/spend in production.
- Key concepts: The `@Trace()` decorator, `OpenTelemetryProvider`, LLM cost tracking, distributed tracing, span status.
- Dependencies: `@opentelemetry/api`, `@hazeljs/core`.
- Common patterns: Initialize the provider at app startup → decorate agent methods with `@Trace()` → view traces in Jaeger, Honeycomb, or AWS X-Ray.
## Why Observability in AI?
AI agents are fundamentally non-deterministic. A single user request might trigger 5 tool calls, 3 RAG retrievals, and multiple LLM generations.
| Without @hazeljs/observability | With @hazeljs/observability |
|---|---|
| Guess why the agent hallucinated | Full trace of every nested tool call |
| Manual console logs everywhere | Automatic spans with @Trace decorator |
| Unknown token spend | Per-request LLM cost and usage metrics |
| Hard-to-find async errors | Exceptions automatically caught and recorded in spans |
## Architecture

```mermaid
graph TD
    A["User Request"] --> B["Agent Method (@Trace)"]
    B --> C["OpenTelemetry Provider"]
    C --> D["Active Span"]
    D --> E["LLM Call (Token Tracking)"]
    D --> F["Tool Invocations"]
    F --> G["Exporter (OTLP/Jaeger)"]
    style B fill:#6366f1,color:#fff
    style C fill:#10b981,color:#fff
    style D fill:#f59e0b,color:#fff
```
## Installation

```bash
npm install @hazeljs/observability
```
## Quick Start

### 1. Initialize the Provider

Initialize the `OpenTelemetryProvider` at the entry point of your application.

```typescript
import { OpenTelemetryProvider } from '@hazeljs/observability';

const provider = new OpenTelemetryProvider({
  serviceName: 'my-ai-agent-service',
  otlpEndpoint: 'http://localhost:4318/v1/traces', // Optional OTLP endpoint
});

await provider.start();
```
### 2. Trace Your Agents

Use the `@Trace()` decorator to automatically monitor your agent methods.

```typescript
import { Trace } from '@hazeljs/observability';

class CustomerSupportAgent {
  @Trace('support_workflow')
  async handleTicket(ticketId: string) {
    // This entire method is now wrapped in an OTel span
    const data = await this.fetchTicketData(ticketId);
    return await this.generateResponse(data);
  }

  @Trace() // Defaults to the method name: fetchTicketData
  private async fetchTicketData(id: string) {
    return { id, subject: 'Login Issue' };
  }
}
```
## LLM Cost Tracking

You can track token usage and cost metrics directly on the active span.

```typescript
// provider: the OpenTelemetryProvider initialized at startup
provider.trackCost('gpt-4o', 150, 50); // model, promptTokens, completionTokens
```

This attaches attributes such as `llm.model`, `llm.usage.prompt_tokens`, and `llm.usage.completion_tokens` to the current telemetry span.
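Once prompt and completion token counts are on the span, turning them into a dollar figure is simple arithmetic. The sketch below shows one way to do it; the pricing table and function names are illustrative assumptions (real per-token prices vary by provider and change over time), not part of the `@hazeljs/observability` API.

```typescript
// Hypothetical per-1K-token pricing in USD; real prices vary by provider and date.
const PRICING: Record<string, { prompt: number; completion: number }> = {
  'gpt-4o': { prompt: 0.005, completion: 0.015 },
};

// Estimate the cost of a single LLM call from its token counts.
function estimateCost(model: string, promptTokens: number, completionTokens: number): number {
  const p = PRICING[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (promptTokens / 1000) * p.prompt + (completionTokens / 1000) * p.completion;
}

// The call from the snippet above (150 prompt + 50 completion tokens):
// (150/1000)*0.005 + (50/1000)*0.015 ≈ 0.0015 USD
const cost = estimateCost('gpt-4o', 150, 50);
```

A backend like Honeycomb or Jaeger can then aggregate such a derived cost attribute per request, per model, or per customer.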
## API Reference

### @Trace(spanName?: string)

A method decorator that creates a new OpenTelemetry span for the duration of the method call.

- `spanName`: Custom name for the span. Defaults to the method name.
- Auto-handling: Correctly handles both synchronous and asynchronous methods.
- Error recording: Automatically sets the span status to `Error` and records any thrown exception.
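To illustrate the mechanics behind those bullets, here is a minimal, self-contained sketch of the wrapping a `@Trace`-style decorator applies to a method. It is written as a plain higher-order function so it runs without any dependencies, and `FakeSpan`/`withTrace` are invented names for illustration, not the real implementation: it opens a "span", marks it `Error` and records the exception on failure, and keeps an async span open until the promise settles.

```typescript
type SpanStatus = 'Ok' | 'Error';

// Minimal stand-in for a span: records a name, a status, and any exception.
class FakeSpan {
  status: SpanStatus = 'Ok';
  exception?: unknown;
  constructor(public name: string) {}
}

const finishedSpans: FakeSpan[] = [];

// The same wrapping logic a @Trace-style decorator applies to a method.
function withTrace<Args extends unknown[], R>(
  name: string,
  fn: (...args: Args) => R,
): (...args: Args) => R {
  return (...args: Args): R => {
    const span = new FakeSpan(name);
    const fail = (err: unknown) => {
      span.status = 'Error';
      span.exception = err;
    };
    try {
      const result = fn(...args);
      // Async method: keep the span open until the promise settles.
      if (result instanceof Promise) {
        return result
          .catch((err) => { fail(err); throw err; })
          .finally(() => finishedSpans.push(span)) as R;
      }
      finishedSpans.push(span);
      return result;
    } catch (err) {
      // Sync method threw: record the error, end the span, rethrow.
      fail(err);
      finishedSpans.push(span);
      throw err;
    }
  };
}

// Usage: a failing call is rethrown, but the span records the error first.
const risky = withTrace('demo_step', (x: number) => {
  if (x < 0) throw new Error('negative input');
  return x * 2;
});
```

Note that the exception is rethrown after being recorded, so decoration never changes the method's observable behavior for callers.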
### OpenTelemetryProvider

The central manager for the observability lifecycle.

- `config.serviceName`: The name of your service in your tracing dashboard.
- `config.otlpEndpoint`: (Optional) Destination for traces (default: no export).
- `start()`: Initializes the SDK and begins tracking.
- `stop()`: Gracefully shuts down exporters and flushes remaining spans.
## Best Practices

### Wrap Tool Calls

Always decorate tools with `@Trace` so you can see which specific tool is slow or failing during an agent's reasoning loop.
### Use Descriptive Span Names

While the default method name is fine, business-specific names like `@Trace('llm_reasoning_step')` help non-developers understand the traces.
### Monitor Token Consumption

Integrate `trackCost` with every LLM call to get a clear picture of your operational costs in production.
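One agent request often makes several LLM calls, so it can help to accumulate usage per model and report the totals once at the end of the request. The sketch below is a self-contained, hypothetical accumulator (the `UsageTracker` name and shape are assumptions, not part of the package); its `report` callback is where you would plug in a call like `provider.trackCost(model, promptTokens, completionTokens)`.

```typescript
interface Usage { promptTokens: number; completionTokens: number }

// Hypothetical accumulator: sums usage across every LLM call in a request
// so one cost report per model can be emitted when the request finishes.
class UsageTracker {
  private totals = new Map<string, Usage>();

  record(model: string, usage: Usage): void {
    const t = this.totals.get(model) ?? { promptTokens: 0, completionTokens: 0 };
    t.promptTokens += usage.promptTokens;
    t.completionTokens += usage.completionTokens;
    this.totals.set(model, t);
  }

  // Replay accumulated totals into a reporting callback,
  // e.g. (model, p, c) => provider.trackCost(model, p, c).
  report(track: (model: string, promptTokens: number, completionTokens: number) => void): void {
    for (const [model, t] of this.totals) track(model, t.promptTokens, t.completionTokens);
  }
}

// Two calls during one request are merged into a single per-model total.
const tracker = new UsageTracker();
tracker.record('gpt-4o', { promptTokens: 150, completionTokens: 50 });
tracker.record('gpt-4o', { promptTokens: 300, completionTokens: 120 });
```

Reporting once per request keeps span attribute churn low while still giving accurate per-request spend.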
For full implementation details, see the Observability package on GitHub.