HazelJS Agent Package

@hazeljs/agent provides a production-grade AI agent runtime for HazelJS. It enables building stateful, tool-using, memory-enabled agents with human-in-the-loop approval workflows, RAG integration, and multi-agent orchestration — all through a fully decorator-based TypeScript API.

Quick Reference

  • Purpose: @hazeljs/agent provides a runtime for building stateful AI agents that think in loops, call tools, persist memory, retrieve documents via RAG, and support human-in-the-loop approval — all using decorators.
  • When to use: Use @hazeljs/agent when building multi-step AI workflows where the LLM needs to make decisions, call tools, remember state across turns, or coordinate with other agents. Use @hazeljs/ai instead for simple one-shot LLM calls without agent orchestration.
  • Key concepts: @Agent decorator, @Tool decorator, @Trace decorator, @Delegate decorator, AgentRuntime, A2AServer (Agent-to-Agent protocol), McpClient (Model Context Protocol), AgentGraph (DAG pipelines), SupervisorAgent (LLM-driven routing), agent state machine, human-in-the-loop approval, memory via sessionId, RAG integration.
  • Inputs: User message (string), sessionId (string), optional initialContext (object), optional AbortSignal.
  • Outputs: AgentExecutionResult containing response (string), steps array, state, and executionId.
  • Dependencies: @hazeljs/core, @hazeljs/ai (LLM provider), @hazeljs/rag (memory and document retrieval).
  • Common patterns: Define agent class with @Agent → define tools with @Tool → create AgentRuntime → call runtime.execute(agentName, input, options) → handle result or approval events.
  • Common mistakes: Forgetting to register the agent with runtime.registerAgent(); vague @Tool descriptions that confuse the LLM; not handling waiting_for_approval state; not using requiresApproval: true on mutation tools in production; exceeding maxSteps with overly complex prompts.

When to Use @hazeljs/agent vs Alternatives

| Need | Use | Package |
| --- | --- | --- |
| Multi-step AI workflow with tools and memory | @Agent, @Tool, AgentRuntime | @hazeljs/agent |
| Simple one-shot LLM call (chat, completion) | AIEnhancedService | @hazeljs/ai |
| Document retrieval for grounded responses | @SemanticSearch, @HybridSearch | @hazeljs/rag |
| Known multi-step workflow with defined steps | AgentGraph (DAG pipeline) | @hazeljs/agent |
| Open-ended task with dynamic decomposition | SupervisorAgent (LLM routing) | @hazeljs/agent |
| One agent calling another as a tool | @Delegate | @hazeljs/agent |

Decision guide:

  • Use @hazeljs/agent when the LLM needs to make multiple decisions, call tools, and maintain state across turns.
  • Use @hazeljs/ai when you need a single LLM call (completion, streaming, embedding) without agent orchestration.
  • Use @hazeljs/agent + @hazeljs/rag when agents need to retrieve documents before reasoning.
  • Use AgentGraph when you know the execution order upfront (deterministic pipeline).
  • Use SupervisorAgent when the task is open-ended and the LLM should decide the execution plan dynamically.

Architecture Mental Model

graph TD
  A["Your Code<br/>(@Agent, @Tool classes)"] --> B["AgentRuntime<br/>(Orchestrator)"]
  B --> C["Registry<br/>(Agents & Tools)"]
  B --> D["State Manager<br/>(Persist & Restore)"]
  B --> E["Executor<br/>(Think → Act → Persist loop)"]
  E --> F["Tool Executor<br/>(Validate, Run, Approve)"]
  B --> G["Event Bus<br/>(Observability)"]
  H["AIService<br/>(@hazeljs/ai)"] --> E
  I["Memory<br/>(@hazeljs/rag)"] --> E
  J["RAG / Docs<br/>(@hazeljs/rag)"] --> E
  
  style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style B fill:#6366f1,stroke:#818cf8,stroke-width:2px,color:#fff
  style C fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style D fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style E fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style F fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style G fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
  style H fill:#ec4899,stroke:#f472b6,stroke-width:2px,color:#fff
  style I fill:#ec4899,stroke:#f472b6,stroke-width:2px,color:#fff
  style J fill:#ec4899,stroke:#f472b6,stroke-width:2px,color:#fff
HazelJS Agent execution flow:

User Input
    ↓
AgentRuntime.execute(agentName, input, options)
    ↓
┌─────────────────────────────────────────────┐
│              Execution Loop                  │
│  1. Load state from State Manager            │
│  2. Load memory for this sessionId           │
│  3. Retrieve RAG documents (if enabled)      │
│  4. Send context + tools + history to LLM    │
│  5. LLM decides: call tool OR respond        │
│  6. Execute tool (validate, approve, run)    │
│  7. Persist state + memory                   │
│  8. Repeat or return result                  │
└─────────────────────────────────────────────┘
    ↓
AgentExecutionResult { response, steps, state, executionId }

HazelJS Agent state machine:

idle → thinking → using_tool → thinking → ... → completed
                  ↓
            waiting_for_approval → (approved) → using_tool
                                → (rejected) → thinking
       thinking → waiting_for_input → (resume) → thinking
       thinking → failed (error or maxSteps)
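
The transition rules above can be written down as a plain table and checked mechanically. This is an illustrative sketch of the state machine's shape, not the runtime's internal representation:

```typescript
// Transition table mirroring the diagram above (illustrative only).
type AgentState =
  | 'idle' | 'thinking' | 'using_tool' | 'waiting_for_approval'
  | 'waiting_for_input' | 'completed' | 'failed';

const transitions: Record<AgentState, AgentState[]> = {
  idle: ['thinking'],                               // execute()
  thinking: ['using_tool', 'completed', 'waiting_for_input', 'failed'],
  using_tool: ['thinking', 'waiting_for_approval'],
  waiting_for_approval: ['using_tool', 'thinking'], // approved / rejected
  waiting_for_input: ['thinking'],                  // resume()
  completed: [],                                    // terminal
  failed: [],                                       // terminal
};

function canTransition(from: AgentState, to: AgentState): boolean {
  return transitions[from].includes(to);
}
```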

HazelJS Agent Runtime component architecture:

Your Code (@Agent, @Tool classes)
    ↓
AgentRuntime (Orchestrator)
    ├── Registry (agents and tool metadata, Zod schema support)
    ├── State Manager (persist and restore execution state)
    ├── Executor (think → act → persist loop)
    │       ├── Tool Executor (validate, timeout, retry, approve)
    │       ├── AIService (@hazeljs/ai — LLM calls)
    │       ├── Memory (@hazeljs/rag — conversation history)
    │       ├── RAG (@hazeljs/rag — document retrieval)
    │       └── MCP Client (@hazeljs/mcp — external tools)
    └── Event Bus & Tracing (OTel @Trace, observability events)

Installation

npm install @hazeljs/agent @hazeljs/core @hazeljs/rag

Install the AI provider (OpenAI is the most common):

npm install @hazeljs/ai openai

Quick Start

Fastest Way: AgentRuntime.quick()

For rapid prototyping, use the one-liner static method:

import { Agent, Tool, AgentRuntime } from '@hazeljs/agent';
import { OpenAIProvider } from '@hazeljs/ai';

@Agent({
  name: 'QuickAgent',
  description: 'A quick test agent',
})
class MyAgent {
  @Tool({ description: 'Get current time' })
  getTime() {
    return new Date().toISOString();
  }
}

// One-liner: register and execute immediately
const result = await AgentRuntime.quick(
  MyAgent,
  'What time is it?',
  {
    llmProvider: new OpenAIProvider(process.env.OPENAI_API_KEY!),
  }
);

console.log(result.response);

Sequential Multi-Agent Pipeline

Chain multiple agents in sequence with runtime.pipeline():

const runtime = new AgentRuntime({
  llmProvider: new OpenAIProvider(process.env.OPENAI_API_KEY!),
});

// Register your agents
runtime.registerAgent(ResearchAgent);
runtime.registerAgent(WriterAgent);
runtime.registerAgent(EditorAgent);

// Create a sequential pipeline
const result = await runtime
  .pipeline('content-creation', ['ResearchAgent', 'WriterAgent', 'EditorAgent'])
  .execute('Write an article about TypeScript');

console.log(result.response);

1. Define an Agent with Tools (Standard Setup)

import { Agent, Tool } from '@hazeljs/agent';

@Agent({
  name: 'support-agent',
  description: 'Customer support agent that can look up orders and process refunds',
  systemPrompt: `You are a helpful customer support agent for an e-commerce store.
You have access to order lookup and refund processing tools.
Always verify the order exists before processing a refund.`,
  enableMemory: true,
  enableRAG: true,
  ragTopK: 5,
  maxSteps: 10,
})
export class SupportAgent {
  @Tool({
    description: 'Look up order information by order ID. Returns status, items, and tracking.',
    parameters: [
      { name: 'orderId', type: 'string', description: 'The order ID to look up', required: true },
    ],
  })
  async lookupOrder(input: { orderId: string }) {
    // Call your actual order service
    return {
      orderId: input.orderId,
      status: 'shipped',
      items: [{ name: 'Blue T-Shirt', quantity: 2, price: 29.99 }],
      trackingNumber: 'TRACK123456',
      estimatedDelivery: '2024-12-10',
    };
  }

  @Tool({
    description: 'Process a refund for an order. Requires human approval before executing.',
    requiresApproval: true,
    timeout: 30000,
    retries: 2,
    parameters: [
      { name: 'orderId', type: 'string', description: 'Order ID to refund', required: true },
      { name: 'amount', type: 'number', description: 'Refund amount in USD', required: true },
      { name: 'reason', type: 'string', description: 'Reason for the refund', required: true },
    ],
  })
  async processRefund(input: { orderId: string; amount: number; reason: string }) {
    // Call your payment service
    return {
      success: true,
      refundId: `REF-${Date.now()}`,
      amount: input.amount,
      estimatedCredit: '3-5 business days',
    };
  }

  @Tool({
    description: 'Send a follow-up email to the customer with case details.',
    parameters: [
      { name: 'email', type: 'string', description: 'Customer email address', required: true },
      { name: 'subject', type: 'string', description: 'Email subject line', required: true },
      { name: 'message', type: 'string', description: 'Email body', required: true },
    ],
  })
  async sendFollowUpEmail(input: { email: string; subject: string; message: string }) {
    // Call your email service
    return { sent: true, messageId: `MSG-${Date.now()}` };
  }
}

2. Set Up the Runtime

import { AgentRuntime } from '@hazeljs/agent';
import { MemoryManager } from '@hazeljs/rag';
import { AIService } from '@hazeljs/ai';

// Create the LLM provider
const aiService = new AIService({
  provider: 'openai',
  model: 'gpt-4-turbo-preview',
  apiKey: process.env.OPENAI_API_KEY,
});

// Create the memory manager for conversation history
const memoryManager = new MemoryManager({ /* vector store config */ });

// Create the runtime
const runtime = new AgentRuntime({
  memoryManager,
  llmProvider: aiService,
  defaultMaxSteps: 10,
  enableObservability: true,
});

// Register agents
const supportAgent = new SupportAgent();
runtime.registerAgent(SupportAgent);
runtime.registerAgentInstance('support-agent', supportAgent);

3. Execute the Agent

const result = await runtime.execute(
  'support-agent',
  'I ordered a blue t-shirt last week (order #A12345) but it arrived damaged. I want a refund.',
  {
    sessionId: 'user-session-abc',
    userId: 'user-123',
    enableMemory: true,
    enableRAG: true,
    timeout: 120_000,        // optional: max execution time in ms
    signal: abortController.signal, // optional: cancel via AbortSignal
    streaming: true,       // optional: stream tokens when LLM supports it
  }
);

console.log(result.response);
// "I've looked up your order #A12345 and can see it was shipped with tracking TRACK123456.
//  I've initiated a refund request for $29.99 which is now pending approval from our team.
//  You'll receive a confirmation email once processed — credit takes 3-5 business days."

console.log(`Completed in ${result.steps.length} steps`);
// "Completed in 4 steps"

4. Handle Human-in-the-Loop Approvals

Approval is event-driven: when you call approveToolExecution or rejectToolExecution with the requestId from the event, the waiting execution resumes immediately (no polling). Pending requests from getPendingApprovals() include requestId, toolName, input, status (pending | approved | rejected | expired), and optional approvedBy / rejectedAt.

// Subscribe before executing — set up your approval handler
runtime.on('tool.approval.requested', async (event) => {
  const { requestId, toolName, input } = event.data;
  
  console.log(`Approval needed for tool: ${toolName}`);
  console.log('Arguments:', JSON.stringify(input, null, 2));
  
  // In production: send to an admin dashboard, Slack, PagerDuty, etc.
  const approved = await askAdminForApproval(requestId, toolName, input);
  
  if (approved) {
    runtime.approveToolExecution(requestId, 'admin@company.com');
  } else {
    runtime.rejectToolExecution(requestId);
  }
});

// Execute — agent will pause at the refund tool and wait
const result = await runtime.execute('support-agent', userMessage, { sessionId });

// If the agent paused waiting for approval
if (result.state === 'waiting_for_approval') {
  // Resume after the approval event fires and is handled
  const resumed = await runtime.resume(result.executionId);
  console.log('Final response:', resumed.response);
}
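
Internally, event-driven approval can be pictured as a map of pending promises that approveToolExecution / rejectToolExecution resolve immediately. The ApprovalRegistry below is a hypothetical sketch of that idea, not the runtime's actual implementation:

```typescript
// Hypothetical sketch: each pending approval is a promise resolved by
// approve()/reject(), so waiting executions resume without polling.
type Decision = { approved: boolean; approvedBy?: string };

class ApprovalRegistry {
  private pending = new Map<string, (decision: Decision) => void>();

  // Called by the executor when a tool has requiresApproval: true
  waitFor(requestId: string): Promise<Decision> {
    return new Promise((resolve) => this.pending.set(requestId, resolve));
  }

  approve(requestId: string, approvedBy: string): void {
    this.pending.get(requestId)?.({ approved: true, approvedBy });
    this.pending.delete(requestId);
  }

  reject(requestId: string): void {
    this.pending.get(requestId)?.({ approved: false });
    this.pending.delete(requestId);
  }
}
```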

HazelJS Agent Core Concepts

HazelJS Agent State Machine

Every HazelJS agent execution follows a deterministic state machine — no hidden state, fully observable:

stateDiagram-v2
  [*] --> idle
  idle --> thinking : execute()
  thinking --> using_tool : LLM decides to call a tool
  thinking --> completed : LLM decides to respond
  thinking --> waiting_for_input : LLM asks for more info
  using_tool --> thinking : tool result returned
  using_tool --> waiting_for_approval : tool requires approval
  waiting_for_approval --> using_tool : approved
  waiting_for_approval --> thinking : rejected
  waiting_for_input --> thinking : resume() called
  thinking --> failed : error or maxSteps reached
  completed --> [*]
  failed --> [*]

HazelJS Agent Execution Loop

On every execute() or resume() call, the HazelJS Agent Runtime runs this loop until completion, max steps, or a wait state:

  1. Load state — Restore agent context from the state manager
  2. Load memory — Retrieve conversation history for this session
  3. Retrieve RAG — Fetch relevant documents if enableRAG: true
  4. Ask LLM — Send context + tools + history to the model; model decides next action
  5. Execute action — Call the tool, ask the user, or emit the final response
  6. Persist state — Save state and memory after every step
  7. Repeat or finish — Continue if more steps are needed, or return the result
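
The loop above can be sketched in miniature. Everything here (the Action type, the Llm interface, runLoop) is simplified and illustrative; the real executor also handles approval, RAG retrieval, and durable persistence:

```typescript
// Minimal think → act loop (illustrative; not the runtime's executor).
type Action =
  | { kind: 'tool'; name: string; input: unknown }
  | { kind: 'respond'; text: string };

interface Llm {
  decide(history: string[]): Action; // step 4: model picks the next action
}
type ToolFn = (input: unknown) => unknown;

function runLoop(
  llm: Llm,
  tools: Record<string, ToolFn>,
  userInput: string,
  maxSteps = 10,
): { response: string; steps: number } {
  const history: string[] = [`user: ${userInput}`]; // steps 1-2: state + memory
  for (let step = 1; step <= maxSteps; step++) {
    const action = llm.decide(history);
    if (action.kind === 'respond') {
      return { response: action.text, steps: step }; // step 7: finish
    }
    const tool = tools[action.name];
    if (!tool) throw new Error(`unknown tool: ${action.name}`);
    const result = tool(action.input); // step 5: execute the tool
    history.push(`tool:${action.name} → ${JSON.stringify(result)}`); // step 6: persist
  }
  throw new Error('maxSteps reached'); // maps to the failed state
}
```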

HazelJS @Agent Decorator

The @Agent decorator declares a class as a HazelJS agent and stores configuration in metadata:

interface AgentConfig {
  name: string;             // Unique agent identifier — used in runtime.execute('name', ...)
  description?: string;     // Human-readable description
  systemPrompt: string;     // Instructions to the LLM — defines personality and behavior
  enableMemory?: boolean;   // Persist conversation history per sessionId (default: false)
  enableRAG?: boolean;      // Retrieve relevant docs before reasoning (default: false)
  ragTopK?: number;         // Number of RAG results to include (default: 5)
  maxSteps?: number;        // Max execution steps before stopping (default: runtime setting)
}

HazelJS @Tool Decorator

The @Tool decorator marks a method as a callable tool on a HazelJS agent, with full metadata for the LLM:

interface ToolConfig {
  description: string;        // Shown to the LLM — be specific and accurate
  requiresApproval?: boolean; // Pause execution and emit 'tool.approval.requested'
  policy?: string;            // Optional policy tag surfaced to approval handlers (see Tool Policies)
  timeout?: number;           // Ms before the tool call times out (default: 10000)
  retries?: number;           // Retry attempts on failure (default: 0)
  schema?: z.ZodType<any>;    // Optional: use Zod for strict parameter validation
  parameters?: Array<{        // Legacy parameter definition
    name: string;
    type: 'string' | 'number' | 'boolean' | 'object' | 'array';
    description: string;
    required: boolean;
  }>;
}

What happens when a tool runs:

  1. The LLM decides to call the tool and provides arguments
  2. Parameters are validated against the schema
  3. If requiresApproval: true, the runtime emits the approval event and waits
  4. The tool method is called; result is logged and returned to the LLM
  5. If it fails: retried up to retries times, then the error is returned to the LLM
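
Steps 2 and 5 amount to wrapping every tool call in validation, a timeout, and a retry loop. The wrapper below is a generic sketch of the timeout-plus-retries behavior; the actual Tool Executor is internal to the runtime:

```typescript
// Generic timeout + retry wrapper, illustrating the behavior described
// above; not the runtime's actual Tool Executor.
async function withTimeoutAndRetries<T>(
  fn: () => Promise<T>,
  { timeout = 10_000, retries = 0 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      // Race the tool call against a timeout rejection
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error('tool timed out')), timeout),
        ),
      ]);
    } catch (err) {
      // Retries exhausted: the error is surfaced back to the LLM
      if (attempt >= retries) throw err;
    }
  }
}
```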

HazelJS Agent Memory and RAG Integration

HazelJS Agent Memory (conversation history) — Use the same sessionId across multiple execute() calls. The HazelJS agent automatically loads and appends history so it remembers what was discussed:

// First turn
await runtime.execute('support-agent', 'My order is #A12345', {
  sessionId: 'session-abc',
  enableMemory: true,
});

// Second turn — agent remembers order #A12345
await runtime.execute('support-agent', 'Can I get a refund for it?', {
  sessionId: 'session-abc', // same session
  enableMemory: true,
});
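
Conceptually, memory is just a conversation history keyed by sessionId. The SessionMemory class below is a minimal in-memory sketch; the real MemoryManager from @hazeljs/rag is vector-store-backed and considerably more capable:

```typescript
// Minimal sessionId-keyed history store (illustrative only).
class SessionMemory {
  private store = new Map<string, string[]>();

  append(sessionId: string, message: string): void {
    const history = this.store.get(sessionId) ?? [];
    history.push(message);
    this.store.set(sessionId, history);
  }

  load(sessionId: string): string[] {
    return this.store.get(sessionId) ?? [];
  }
}
```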

RAG (document retrieval) — Set enableRAG: true to automatically retrieve relevant documents from your vector store before the LLM reasons. Useful for help center articles, runbooks, product catalogs, etc.:

@Agent({
  name: 'knowledge-agent',
  enableRAG: true,
  ragTopK: 8, // retrieve top 8 chunks
  systemPrompt: 'Answer questions using the retrieved knowledge base articles.',
})
export class KnowledgeAgent {}

Recipe: HazelJS E-Commerce Support Agent (Complete Example)

This example shows a production-ready support agent that handles the most common customer scenarios end-to-end, including order lookup, refunds, shipping changes, and FAQ answers via RAG.

import { Agent, Tool, AgentRuntime, AgentEventType } from '@hazeljs/agent';
import { AIService } from '@hazeljs/ai';
import { MemoryManager, RagService } from '@hazeljs/rag';
import { HazelModule, Service, Controller, Post, Body } from '@hazeljs/core';

// ─── Agent Definition ─────────────────────────────────────────────────────────

@Agent({
  name: 'ecommerce-support',
  description: 'Full-service e-commerce customer support agent',
  systemPrompt: `You are a friendly and efficient customer support agent for ShopCo.
You have tools to look up orders, process refunds, update shipping addresses,
and search the knowledge base for FAQs and policies.
Always look up order details first before taking any action.
Be concise, empathetic, and professional.`,
  enableMemory: true,
  enableRAG: true,
  ragTopK: 5,
  maxSteps: 15,
})
@Service()
export class EcommerceSupportAgent {
  constructor(
    private readonly orderService: OrderService,
    private readonly paymentService: PaymentService,
    private readonly shippingService: ShippingService,
    private readonly emailService: EmailService,
    private readonly ragService: RagService, // used by searchKnowledgeBase below
  ) {}

  @Tool({
    description: 'Look up order by order ID. Returns items, status, shipping, and payment info.',
    parameters: [
      { name: 'orderId', type: 'string', description: 'Order ID starting with #', required: true },
    ],
  })
  async getOrder(input: { orderId: string }) {
    const order = await this.orderService.findById(input.orderId);
    if (!order) return { error: 'Order not found', orderId: input.orderId };
    return {
      orderId: order.id,
      status: order.status,
      items: order.items,
      total: order.total,
      shippingAddress: order.shippingAddress,
      tracking: order.trackingNumber,
      estimatedDelivery: order.estimatedDelivery,
      canRefund: order.status !== 'refunded' && order.status !== 'processing',
    };
  }

  @Tool({
    description: 'Search FAQs, return policies, and shipping information from the knowledge base.',
    parameters: [
      { name: 'query', type: 'string', description: 'Search query for FAQs or policies', required: true },
    ],
  })
  async searchKnowledgeBase(input: { query: string }) {
    // RAG search over your knowledge base
    const results = await this.ragService.search(input.query, { topK: 5 });
    return { articles: results.map(r => ({ title: r.title, content: r.content })) };
  }

  @Tool({
    description: 'Check the current status of a delivery from the shipping carrier.',
    parameters: [
      { name: 'trackingNumber', type: 'string', description: 'Carrier tracking number', required: true },
    ],
  })
  async trackShipment(input: { trackingNumber: string }) {
    const tracking = await this.shippingService.track(input.trackingNumber);
    return {
      status: tracking.status,
      location: tracking.currentLocation,
      events: tracking.events.slice(0, 5),
      estimatedDelivery: tracking.eta,
    };
  }

  @Tool({
    description: 'Update the shipping address for an order that has not yet shipped.',
    requiresApproval: true,
    parameters: [
      { name: 'orderId', type: 'string', description: 'Order ID', required: true },
      { name: 'newAddress', type: 'string', description: 'New shipping address', required: true },
    ],
  })
  async updateShippingAddress(input: { orderId: string; newAddress: string }) {
    const result = await this.shippingService.updateAddress(input.orderId, input.newAddress);
    return { updated: result.success, confirmationNumber: result.confirmationId };
  }

  @Tool({
    description: 'Process a full or partial refund. Always verify order eligibility first.',
    requiresApproval: true,
    timeout: 30000,
    parameters: [
      { name: 'orderId', type: 'string', description: 'Order ID to refund', required: true },
      { name: 'amount', type: 'number', description: 'Refund amount in USD', required: true },
      { name: 'reason', type: 'string', description: 'Refund reason', required: true },
      { name: 'itemIds', type: 'array', description: 'Specific item IDs to refund (empty = full refund)', required: false },
    ],
  })
  async processRefund(input: { orderId: string; amount: number; reason: string; itemIds?: string[] }) {
    const refund = await this.paymentService.refund({
      orderId: input.orderId,
      amount: input.amount,
      reason: input.reason,
      itemIds: input.itemIds,
    });
    
    // Send confirmation email automatically
    await this.emailService.send({
      template: 'refund-confirmation',
      data: { refundId: refund.id, amount: refund.amount },
    });

    return {
      success: true,
      refundId: refund.id,
      amount: refund.amount,
      estimatedCredit: '3-5 business days',
      emailSent: true,
    };
  }

  @Tool({
    description: 'Send a summary email to the customer about the support case resolution.',
    parameters: [
      { name: 'customerId', type: 'string', description: 'Customer ID', required: true },
      { name: 'summary', type: 'string', description: 'Summary of what was done', required: true },
      { name: 'nextSteps', type: 'string', description: 'What the customer should expect', required: false },
    ],
  })
  async sendResolutionEmail(input: { customerId: string; summary: string; nextSteps?: string }) {
    const customer = await this.orderService.getCustomer(input.customerId);
    await this.emailService.send({
      to: customer.email,
      template: 'support-resolution',
      data: { name: customer.name, summary: input.summary, nextSteps: input.nextSteps },
    });
    return { sent: true };
  }
}

// ─── Runtime Setup ────────────────────────────────────────────────────────────

function createSupportRuntime(): AgentRuntime {
  const aiService = new AIService({
    provider: 'openai',
    model: 'gpt-4-turbo-preview',
    apiKey: process.env.OPENAI_API_KEY!,
  });

  const memoryManager = new MemoryManager({
    vectorStore: { type: 'memory' }, // use Redis/Pinecone in production
  });

  const runtime = new AgentRuntime({
    memoryManager,
    llmProvider: aiService,
    defaultMaxSteps: 15,
    enableObservability: true,
  });

  // Observability: log every step and tool call
  runtime.on(AgentEventType.STEP_STARTED, (e) =>
    console.log(`[agent] step ${e.data.stepNumber} — thinking`));
  runtime.on(AgentEventType.TOOL_EXECUTION_STARTED, (e) =>
    console.log(`[agent] calling tool: ${e.data.tool}`, e.data.args));
  runtime.on(AgentEventType.EXECUTION_COMPLETED, (e) =>
    console.log(`[agent] done in ${e.data.steps} steps`));

  // Approval handler: send to Slack, respond async (event-driven — approve/reject resolves immediately)
  runtime.on(AgentEventType.TOOL_APPROVAL_REQUESTED, async (event) => {
    const { requestId, toolName, input } = event.data;
    console.log(`[approval] ${toolName} needs approval`, input);

    // Auto-approve in development; use real approval flow in production
    if (process.env.NODE_ENV === 'development') {
      runtime.approveToolExecution(requestId, 'auto-dev');
    } else {
      // Send to approval queue — approve/reject from your dashboard
      await approvalQueue.push({ requestId, toolName, input });
    }
  });

  // Register the agent
const agent = new EcommerceSupportAgent(
    new OrderService(),
    new PaymentService(),
    new ShippingService(),
    new EmailService(),
    new RagService(), // backs searchKnowledgeBase; supply your vector store config
  );
  runtime.registerAgent(EcommerceSupportAgent);
  runtime.registerAgentInstance('ecommerce-support', agent);

  return runtime;
}

// ─── HTTP Controller ──────────────────────────────────────────────────────────

@Controller('/support')
@Service()
export class SupportController {
  private runtime = createSupportRuntime();

  @Post('/chat')
  async chat(@Body() body: { message: string; sessionId: string; userId: string }) {
    const result = await this.runtime.execute(
      'ecommerce-support',
      body.message,
      {
        sessionId: body.sessionId,
        userId: body.userId,
        enableMemory: true,
        enableRAG: true,
      }
    );

    return {
      response: result.response,
      sessionId: body.sessionId,
      steps: result.steps.length,
      state: result.state,
      executionId: result.executionId,
    };
  }

  @Post('/resume')
  async resume(@Body() body: { executionId: string; input?: string }) {
    const result = await this.runtime.resume(body.executionId, body.input);
    return { response: result.response, state: result.state };
  }
}

What this example demonstrates:

  • Multiple tools with different safety levels (read-only vs. requiresApproval)
  • Dependency injection inside an @Agent class (order, payment, shipping services)
  • Automatic email on refund — the tool handles side effects
  • Full observability setup (logging, approval handler)
  • HTTP controller exposing chat and resume endpoints
  • Session-based memory so the agent remembers context across turns

HazelJS Agent Event System

Subscribe to any combination of HazelJS agent events for observability, audit logging, and integrations:

import { AgentEventType } from '@hazeljs/agent';

// Execution lifecycle
runtime.on(AgentEventType.EXECUTION_STARTED, (e) => {
  metrics.increment('agent.executions');
  logger.info('Agent started', { agent: e.data.agentName, session: e.data.sessionId });
});

runtime.on(AgentEventType.EXECUTION_COMPLETED, (e) => {
  metrics.histogram('agent.steps', e.data.steps);
  logger.info('Agent completed', { response: e.data.response.slice(0, 100) });
});

// Individual steps
runtime.on(AgentEventType.STEP_STARTED, (e) =>
  logger.debug(`Step ${e.data.stepNumber} started`));

// Tool calls
runtime.on(AgentEventType.TOOL_EXECUTION_STARTED, (e) =>
  auditLog.write({ action: e.data.tool, args: e.data.args, session: e.data.sessionId }));

runtime.on(AgentEventType.TOOL_EXECUTION_COMPLETED, (e) =>
  logger.debug('Tool done', { tool: e.data.tool, duration: e.data.duration }));

// Approval workflow
runtime.on(AgentEventType.TOOL_APPROVAL_REQUESTED, async (e) =>
  slack.post('#approvals', `Agent wants to call ${e.data.tool}: ${JSON.stringify(e.data.args)}`));

// Catch all
runtime.onAny((e) => console.log(e.type, e.data));

HazelJS AgentModule Integration

Register HazelJS agents at the module level for full dependency injection using AgentModule.forRoot():

import { HazelModule } from '@hazeljs/core';
import { AgentModule } from '@hazeljs/agent';
import { RagModule } from '@hazeljs/rag';
import { AIModule } from '@hazeljs/ai';

@HazelModule({
  imports: [
    AIModule.register({
      provider: 'openai',
      model: 'gpt-4-turbo-preview',
      apiKey: process.env.OPENAI_API_KEY,
    }),
    RagModule.forRoot({
      vectorStore: { type: 'pinecone', apiKey: process.env.PINECONE_KEY, index: 'support-docs' },
      embeddings: { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
    }),
    AgentModule.forRoot({
      runtime: {
        defaultMaxSteps: 15,
        enableObservability: true,
      },
      agents: [EcommerceSupportAgent, KnowledgeAgent],
    }),
  ],
  controllers: [SupportController],
})
export class AppModule {}

HazelJS Agent Execution Control: Timeout, Cancellation, and Streaming

Timeout

Single-agent runs respect an execution timeout so long-running or stuck executions fail cleanly. Set it per run or globally:

// Per execution
const result = await runtime.execute('support-agent', userMessage, {
  timeout: 60_000, // 60 seconds
});

// Global default (AgentRuntimeConfig)
const runtime = new AgentRuntime({
  defaultTimeout: 120_000, // 2 minutes
});

When the timeout is exceeded, the execution fails with an AgentError carrying the code AgentErrorCode.TIMEOUT.

Cancellation

Cancel an in-flight execution using an AbortSignal or by ID:

// Option 1: Pass an AbortSignal when starting
const controller = new AbortController();
const resultPromise = runtime.execute('support-agent', userMessage, {
  signal: controller.signal,
});
// Later: cancel from UI or another flow
controller.abort();

// Option 2: Cancel by execution ID (e.g. from a "Stop" button)
const result = await runtime.execute('support-agent', longQuery, { sessionId });
// User clicks Stop — you have the executionId from events or the initial result
runtime.cancel(result.executionId);

When cancelled, the execution fails with an AgentError carrying the code AgentErrorCode.CANCELLED.

Streaming

When your LLM provider implements optional streamChat(), you can stream step and token chunks for the final response:

// Execution options
const result = await runtime.execute('support-agent', userMessage, {
  streaming: true, // use streamChat when available
});

// Or use the dedicated async generator for full control
for await (const chunk of runtime.executeStream('support-agent', userMessage, {
  sessionId: 'user-123',
  streaming: true,
  timeout: 60_000,
  signal: abortController.signal,
})) {
  switch (chunk.type) {
    case 'step':
      console.log('Step completed', chunk.step);
      break;
    case 'token':
      process.stdout.write(chunk.content);
      break;
    case 'done':
      console.log('Final result', chunk.result);
      break;
  }
}

Chunk types: { type: 'step', step }, { type: 'token', content }, { type: 'done', result }.
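
A stream consumer just switches on type as chunks arrive. The sketch below stubs the stream with a local async generator so the chunk shape is concrete; fakeStream is a stand-in for runtime.executeStream():

```typescript
// Chunk shape from the streaming section above; fakeStream is a local
// stand-in for runtime.executeStream().
type Chunk =
  | { type: 'step'; step: number }
  | { type: 'token'; content: string }
  | { type: 'done'; result: string };

async function* fakeStream(): AsyncGenerator<Chunk> {
  yield { type: 'step', step: 1 };
  for (const token of ['Hel', 'lo']) yield { type: 'token', content: token };
  yield { type: 'done', result: 'Hello' };
}

async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    if (chunk.type === 'token') text += chunk.content; // accumulate tokens
  }
  return text;
}
```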

HazelJS Agent Structured Errors

The HazelJS Agent Runtime uses AgentError with stable codes for programmatic handling and observability:

import { AgentError, AgentErrorCode } from '@hazeljs/agent';

try {
  const result = await runtime.execute('support-agent', input, options);
} catch (err) {
  if (err instanceof AgentError) {
    switch (err.code) {
      case AgentErrorCode.TIMEOUT:
        // Execution exceeded timeout
        break;
      case AgentErrorCode.CANCELLED:
        // User or system cancelled
        break;
      case AgentErrorCode.MAX_STEPS_EXCEEDED:
        // Hit max steps limit
        break;
      case AgentErrorCode.LLM_ERROR:
        // LLM call failed (err.cause has details)
        break;
      case AgentErrorCode.TOOL_NOT_FOUND:
      case AgentErrorCode.INVALID_TOOL_INPUT:
        // Tool invocation problem
        break;
      case AgentErrorCode.EXECUTION_NOT_FOUND:
        // resume() with invalid executionId
        break;
      case AgentErrorCode.RATE_LIMIT_EXCEEDED:
        // Rate limiter rejected the request
        break;
    }
  }
  throw err;
}

Use err.cause for the underlying error when available (e.g. for LLM_ERROR, INVALID_TOOL_INPUT).
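
The stable-code-plus-instanceof pattern takes only a few lines to reproduce. DemoAgentError below is an illustrative stand-in for the real AgentError class, not its actual definition:

```typescript
// Illustrative structured error with a stable machine-readable code.
class DemoAgentError extends Error {
  constructor(public readonly code: string, message: string) {
    super(message);
    this.name = 'AgentError';
    // Keep instanceof checks working when targeting older JS versions
    Object.setPrototypeOf(this, new.target.prototype);
  }
}

function isTimeout(err: unknown): boolean {
  return err instanceof DemoAgentError && err.code === 'TIMEOUT';
}
```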

HazelJS Agent Advanced Usage

Custom Initial Context

Pass structured data to the agent at execution time — useful for injecting user profile, order data, or tenant context:

const result = await runtime.execute('support-agent', 'I want a refund', {
  sessionId: 'session-abc',
  initialContext: {
    customerId: 'cust-123',
    customerTier: 'premium',
    recentOrders: ['#A001', '#A002'],
    preferredLanguage: 'en-US',
  },
});

Pause and Resume on User Input

Agents can pause mid-execution waiting for user input:

// First message — agent starts working and may ask a clarifying question
const result = await runtime.execute('support-agent', 'I have a problem with my order', {
  sessionId: 'session-abc',
});

if (result.state === 'waiting_for_input') {
  // result.response contains the question the agent asked
  console.log(result.response); // "Could you provide your order number?"

  // User responds — resume with their answer
  const continued = await runtime.resume(result.executionId, '#A12345');
  console.log(continued.response);
}

Tool Policies

Tag tools with custom policy identifiers so your approval handler can apply different logic:

@Tool({
  description: 'Delete all customer data (GDPR erasure)',
  requiresApproval: true,
  policy: 'gdpr-erasure',   // your handler checks this
  parameters: [
    { name: 'customerId', type: 'string', required: true },
  ],
})
async deleteCustomerData(input: { customerId: string }) { /* ... */ }

In your approval handler (use requestId from the event to call approveToolExecution or rejectToolExecution):

runtime.on(AgentEventType.TOOL_APPROVAL_REQUESTED, async (event) => {
  const { requestId, toolName, input } = event.data;
  const pending = runtime.getPendingApprovals().find((r) => r.requestId === requestId);
  const policy = pending?.metadata?.policy;
  if (policy === 'gdpr-erasure') {
    await gdprApprovalQueue.push({ requestId, toolName, input });
  } else {
    await standardApprovalQueue.push({ requestId, toolName, input });
  }
});

HazelJS Agent Best Practices

Write Clear Tool Descriptions

The LLM reads tool descriptions to decide which to call and when. Be specific: describe what the tool does, what the parameters mean, and what it returns.

// ✅ Good — specific and informative
@Tool({
  description: 'Look up order status, items, and shipping tracking. Returns canRefund flag.',
  parameters: [
    { name: 'orderId', type: 'string', description: 'Order ID starting with #, e.g. #A12345', required: true },
  ],
})

// ❌ Bad — vague, LLM may misuse it
@Tool({ description: 'Get order', parameters: [{ name: 'id', type: 'string', required: true }] })

Return Structured Errors (Not Exceptions)

When a tool fails gracefully (item not found, API unavailable), return a structured object the LLM can reason about. Reserve exceptions for bugs.

async lookupOrder(input: { orderId: string }) {
  const order = await this.db.find(input.orderId);
  if (!order) return { error: 'Order not found', suggestion: 'Double-check the order ID' };
  return order;
}

Design Idempotent Tools

The LLM may decide to call a tool twice. Check before creating or modifying.

async createTicket(input: { orderId: string; issue: string }) {
  const existing = await this.ticketDb.findByOrderId(input.orderId);
  if (existing) return { ticketId: existing.id, alreadyExists: true };
  return await this.ticketDb.create(input);
}

Use Approval for All Mutations

Read-only tools (lookup, search, track) never need approval. Anything that writes, sends, charges, or deletes should use requiresApproval: true in production.

Keep System Prompts Focused

The system prompt is the agent's personality and constraints. Be specific about what the agent should and should not do, what tone to use, and which tools to prefer in which situations.
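
As a sketch (the store name and tool names are invented for illustration), a focused prompt spells out role, hard constraints, tone, and tool preferences:

```typescript
// ❌ Vague — the LLM gets no constraints, tone, or tool guidance.
const vaguePrompt = 'You are a helpful assistant.';

// ✅ Focused — role, hard constraints, tone, and tool preferences.
// (Store and tool names are hypothetical.)
const focusedPrompt = [
  'You are a customer support agent for Acme Store.',
  'Always call lookup_order before discussing a refund.',
  'Never promise a refund directly — propose process_refund and wait for approval.',
  "Be concise and polite. Match the customer's language.",
].join('\n');
```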

HazelJS Multi-Agent Patterns

Single HazelJS agents are powerful, but real workflows often need multiple specialized agents working together. @hazeljs/agent ships three first-class multi-agent patterns: peer-to-peer delegation with @Delegate, DAG pipelines with AgentGraph, and LLM-driven routing with SupervisorAgent.

HazelJS @Delegate — Peer-to-Peer Agent Calls

The HazelJS @Delegate decorator marks a method as a transparent call to another registered agent. The LLM sees it as a regular tool; at runtime the AgentRuntime replaces the method body with runtime.execute(targetAgent, input).

import { Agent, Tool, Delegate, AgentRuntime } from '@hazeljs/agent';

@Agent({
  name: 'research-agent',
  systemPrompt: 'You are a research specialist. Find detailed information on any topic.',
})
export class ResearchAgent {
  @Tool({
    description: 'Research a topic in depth and return a structured summary.',
    parameters: [
      { name: 'topic', type: 'string', description: 'Topic to research', required: true },
    ],
  })
  async research(input: { topic: string }) {
    // Calls real research logic — web search, RAG, etc.
    return { topic: input.topic, findings: '...' };
  }
}

@Agent({
  name: 'writer-agent',
  systemPrompt: 'You write polished blog posts. Delegate research tasks to the research agent.',
})
export class WriterAgent {
  // This method is replaced at runtime by AgentRuntime.execute('research-agent', ...)
  @Delegate({
    agent: 'research-agent',
    description: 'Research a topic and return detailed findings. Use before writing.',
    inputField: 'query',  // maps the string argument to { query: '...' }
  })
  async researchTopic(query: string): Promise<string> {
    return ''; // body is never called — runtime replaces it
  }

  @Tool({
    description: 'Write a blog post on a topic (research is done automatically).',
    parameters: [
      { name: 'topic', type: 'string', description: 'Blog post topic', required: true },
    ],
  })
  async writeBlogPost(input: { topic: string }) {
    // Agent will call researchTopic() which delegates to ResearchAgent
    return { title: `All about ${input.topic}`, body: '...' };
  }
}

When to use @Delegate: When you want one agent to transparently call another agent as if it were a local tool, with no extra orchestration code.
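
The replacement mechanic can be approximated in a few lines. This is an illustrative re-implementation, not the package's actual code — `delegateTo` and its signature are invented:

```typescript
type ExecuteFn = (agentName: string, input: unknown) => Promise<string>;

// Illustrative sketch of what @Delegate effectively does: swap the method
// body for a runtime.execute call, mapping the string argument onto the
// configured inputField ({ query: '...' } in the example above).
function delegateTo(execute: ExecuteFn, agent: string, inputField: string) {
  return async (arg: string): Promise<string> =>
    execute(agent, { [inputField]: arg });
}
```

Under this sketch, `delegateTo(runtime.execute.bind(runtime), 'research-agent', 'query')` would behave like the decorated `researchTopic` method.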


HazelJS AgentGraph — DAG Pipelines

AgentGraph lets you wire HazelJS agents and functions into a directed acyclic graph with sequential edges, conditional routing, or parallel fan-out. Each node runs in sequence (or parallel), and the output of one node becomes the input of the next.

graph TD
  A["Entry: researcher"] --> B["writer"]
  B --> C["reviewer"]
  C -->|"approved"| D["END"]
  C -->|"needs revision"| B

  style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style B fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style C fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style D fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff

Building a Graph

import { AgentRuntime, END } from '@hazeljs/agent';

const runtime = new AgentRuntime({ /* ... */ });

// Register agents
runtime.registerAgent(ResearchAgent);
runtime.registerAgent(WriterAgent);

// Build the pipeline
const graph = runtime
  .createGraph('blog-pipeline')
  .addNode('researcher', { type: 'agent', agentName: 'research-agent' })
  .addNode('writer',     { type: 'agent', agentName: 'writer-agent' })
  .addNode('publisher',  {
    type: 'function',
    fn: async (input) => {
      // Custom function node — publish to CMS, send email, etc.
      await cms.publish(input.body);
      return { published: true, url: `https://blog.com/${input.title}` };
    },
  })
  .addEdge('researcher', 'writer')
  .addEdge('writer', 'publisher')
  .addEdge('publisher', END)
  .setEntryPoint('researcher')
  .compile();

// Run the graph
const result = await graph.run('Write a post about GraphRAG', { sessionId: 'blog-001' });
console.log(result.url);

Conditional Routing

Add a router function on an edge to route dynamically based on the previous node's output:

const graph = runtime
  .createGraph('review-pipeline')
  .addNode('writer',   { type: 'agent', agentName: 'writer-agent' })
  .addNode('reviewer', { type: 'agent', agentName: 'reviewer-agent' })
  .addNode('publisher',{ type: 'function', fn: publishFn })
  .addEdge('writer', 'reviewer')
  .addConditionalEdge('reviewer', (output) => {
    // Route based on reviewer's output
    if (output.approved) return 'publisher';
    return 'writer'; // loop back for revision
  })
  .addEdge('publisher', END)
  .setEntryPoint('writer')
  .compile();

Parallel Fan-Out

Run multiple agents simultaneously and merge their outputs before continuing:

const graph = runtime
  .createGraph('research-pipeline')
  .addNode('coordinator', { type: 'agent', agentName: 'coordinator-agent' })
  .addNode('parallel-research', {
    type: 'parallel',
    branches: ['web-researcher', 'academic-researcher', 'news-researcher'],
  })
  .addNode('synthesizer', { type: 'agent', agentName: 'synthesizer-agent' })
  .addEdge('coordinator', 'parallel-research')
  .addEdge('parallel-research', 'synthesizer')
  .addEdge('synthesizer', END)
  .setEntryPoint('coordinator')
  .compile();

Streaming Node-by-Node

Use .stream() to get results from each node as it completes — useful for showing progress in a UI:

for await (const event of graph.stream('Write a post about AI', { sessionId: 'abc' })) {
  console.log(`[${event.node}] completed:`, event.output);
}

Visualizing the Graph

const mermaid = graph.visualize();
console.log(mermaid);
// graph TD
//   researcher --> writer
//   writer --> publisher
//   publisher --> END

HazelJS SupervisorAgent — LLM-Driven Routing

SupervisorAgent uses an LLM to decompose a complex task into subtasks, routes each to the best available worker agent, accumulates results, and loops until the task is complete.

graph TD
  A["User Task"] --> B["Supervisor<br/>(LLM Router)"]
  B -->|"Subtask 1"| C["ResearchAgent"]
  B -->|"Subtask 2"| D["CoderAgent"]
  B -->|"Subtask 3"| E["WriterAgent"]
  C --> F["Results Accumulator"]
  D --> F
  E --> F
  F --> B
  B -->|"Done"| G["Final Response"]

  style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style B fill:#6366f1,stroke:#818cf8,stroke-width:2px,color:#fff
  style C fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style D fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style E fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style F fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style G fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
Creating and running a supervisor:

import { AgentRuntime } from '@hazeljs/agent';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const runtime = new AgentRuntime({ /* ... */ });

// Register worker agents
runtime.registerAgent(ResearchAgent);
runtime.registerAgent(CoderAgent);
runtime.registerAgent(WriterAgent);

// Create the supervisor
const supervisor = runtime.createSupervisor({
  name: 'project-manager',
  workers: ['research-agent', 'coder-agent', 'writer-agent'],
  maxRounds: 6,  // Maximum supervisor-worker loops before stopping
  llm: async (prompt: string) => {
    const res = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: prompt }],
    });
    return res.choices[0].message.content ?? '';
  },
});

// Run — the supervisor decomposes, routes, and synthesizes automatically
const result = await supervisor.run(
  'Build a REST API for a todo app: research best practices, write the code, and document it.',
  { sessionId: 'project-001' },
);

console.log(result.response);
console.log(`Completed in ${result.rounds} supervisor rounds`);

How the supervisor works:

  1. The LLM receives the task and the list of available workers with their descriptions
  2. It decomposes the task into subtasks and assigns each to the best worker
  3. Each worker executes its subtask using its own tools and memory
  4. Results are accumulated and fed back to the supervisor LLM
  5. The LLM decides whether to do another round or return the final response
  6. Loops until the task is done or maxRounds is reached
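
The six steps above can be condensed into a self-contained sketch. Everything here (`runSupervisorLoop`, the "worker: subtask" / "DONE:" reply protocol) is invented for illustration and is not the actual SupervisorAgent implementation:

```typescript
type Worker = (subtask: string) => Promise<string>;
type Llm = (prompt: string) => Promise<string>;

// Condensed sketch of the supervisor loop. Illustrative protocol: the LLM
// replies either "worker-name: subtask" or "DONE: final answer".
async function runSupervisorLoop(
  task: string,
  workers: Record<string, Worker>,
  llm: Llm,
  maxRounds: number,
): Promise<{ response: string; rounds: number }> {
  const results: string[] = [];
  for (let round = 1; round <= maxRounds; round++) {
    // Steps 1–2: the LLM sees the task, worker list, and accumulated results.
    const decision = await llm(
      `Task: ${task}\nWorkers: ${Object.keys(workers).join(', ')}\n` +
        `Results so far:\n${results.join('\n')}`,
    );
    // Steps 5–6: the LLM signals completion instead of another round.
    if (decision.startsWith('DONE:')) {
      return { response: decision.slice(5).trim(), rounds: round };
    }
    // Steps 3–4: route the subtask to the chosen worker, accumulate output.
    const sep = decision.indexOf(':');
    const name = decision.slice(0, sep).trim();
    const subtask = decision.slice(sep + 1).trim();
    const worker = workers[name];
    results.push(worker ? await worker(subtask) : `No such worker: ${name}`);
  }
  return { response: results.join('\n'), rounds: maxRounds };
}
```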

When to use SupervisorAgent: When the task is too complex and open-ended to predetermine the execution order — you want the LLM to figure out the plan dynamically.


Choosing the Right HazelJS Multi-Agent Pattern

@Delegate — one agent calls another as a tool. Predictability: high (explicit call site). Flexibility: low (wired at code level).
AgentGraph — known workflow with defined steps. Predictability: high (you define the DAG). Flexibility: medium (conditional routing).
SupervisorAgent — open-ended task, dynamic decomposition. Predictability: low (the LLM decides the plan). Flexibility: high (adapts to any task).

HazelJS Agent API Reference

runtime.execute(agentName, input, options?) — Run an agent. Options: sessionId, userId, maxSteps, timeout, signal, streaming, enableMemory, enableRAG, initialContext, metadata.
runtime.executeStream(agentName, input, options?) — Run an agent and stream chunks (step, token, done). Same options as execute; use streaming: true when the LLM supports streamChat.
runtime.resume(executionId, input?) — Resume a paused execution (e.g. after user input or approval).
runtime.getContext(executionId) — Async; returns Promise<AgentContext | undefined> for the execution.
runtime.cancel(executionId) — Cancel an in-flight execution (the next check throws AgentError with code CANCELLED).
runtime.approveToolExecution(requestId, approvedBy) — Approve a pending tool execution (event-driven; use the requestId from TOOL_APPROVAL_REQUESTED).
runtime.rejectToolExecution(requestId) — Reject a pending tool execution.
runtime.getPendingApprovals() — List pending approval requests (each has requestId, status, toolName, input, etc.).
AgentService (module) — Same surface as AgentRuntime; getContext is async and returns Promise<AgentContext | undefined>; executeStream and cancel are available.
AgentError, AgentErrorCode — Structured errors: TIMEOUT, CANCELLED, MAX_STEPS_EXCEEDED, TOOL_NOT_FOUND, INVALID_TOOL_INPUT, LLM_ERROR, EXECUTION_NOT_FOUND, RATE_LIMIT_EXCEEDED.

For the full API reference (AgentRuntime, AgentModule, @Agent, @Tool, @Delegate, AgentGraph, SupervisorAgent, event types, execution result shape), see the Agent package on GitHub.

Related Packages

  • AI Package — LLM providers (OpenAI, Anthropic, Gemini) that power the agent's reasoning
  • RAG Package — Vector stores, memory, and document retrieval used by agents
  • Memory Guide — Conversation and entity memory systems
  • MCP Package — Expose agent tools as a Model Context Protocol server
  • Prompts Package — Manage and version system prompts used by agents
  • Guardrails Package — Content safety and validation on agent inputs/outputs
  • Eval Package — Golden datasets and trajectory scoring for tool-call regression tests
  • Ops Agent Package — Pre-built ops agent for Jira and Slack workflows

Prerequisites

  • Installation — Install @hazeljs/core, @hazeljs/agent, @hazeljs/ai, @hazeljs/rag
  • Core Package — Understand modules, DI, and the request pipeline
  • AI Package — Understand AIEnhancedService and LLM provider configuration

Recipes

Recipe: Customer Support Agent with Tools

// File: src/support/support.agent.ts
import { Agent, Tool } from '@hazeljs/agent';
import { Service } from '@hazeljs/core';

@Agent({
  name: 'support-agent',
  description: 'Customer support agent that looks up orders and processes refunds',
  model: 'gpt-4-turbo-preview',
  systemPrompt: 'You are a customer support agent. Look up orders and help customers with refunds.',
})
@Service()
export class SupportAgent {
  @Tool({
    name: 'lookup_order',
    description: 'Look up an order by ID',
    parameters: [
      { name: 'orderId', type: 'string', description: 'The order ID', required: true },
    ],
  })
  async lookupOrder(params: { orderId: string }) {
    // Replace with actual database lookup
    return { orderId: params.orderId, status: 'shipped', total: 49.99 };
  }

  @Tool({
    name: 'process_refund',
    description: 'Process a refund for an order',
    parameters: [
      { name: 'orderId', type: 'string', required: true },
      { name: 'reason', type: 'string', required: true },
    ],
    requiresApproval: true,
  })
  async processRefund(params: { orderId: string; reason: string }) {
    return { refundId: 'REF-123', status: 'processed', orderId: params.orderId };
  }
}

// File: src/support/support.controller.ts
import { Controller, Post, Body } from '@hazeljs/core';
import { AgentRuntime } from '@hazeljs/agent';

@Controller('support')
export class SupportController {
  constructor(private readonly runtime: AgentRuntime) {}

  @Post()
  async chat(@Body('message') message: string, @Body('sessionId') sessionId: string) {
    return this.runtime.execute('support-agent', message, { sessionId });
  }
}

Recipe: Multi-Agent Pipeline with AgentGraph

// File: src/pipeline/research.graph.ts
import { AgentRuntime, END } from '@hazeljs/agent';
import { Service } from '@hazeljs/core';

@Service()
export class ResearchPipeline {
  constructor(private readonly runtime: AgentRuntime) {}

  async run(topic: string) {
    // Assumes 'researcher', 'fact-checker', and 'writer' agents are registered
    const graph = this.runtime
      .createGraph('research-pipeline')
      .addNode('researcher',   { type: 'agent', agentName: 'researcher' })
      .addNode('fact-checker', { type: 'agent', agentName: 'fact-checker' })
      .addNode('writer',       { type: 'agent', agentName: 'writer' })
      .addEdge('researcher', 'fact-checker')
      .addEdge('fact-checker', 'writer')
      .addEdge('writer', END)
      .setEntryPoint('researcher')
      .compile();

    return graph.run(topic, { sessionId: `research-${Date.now()}` });
  }
}

Recipe: Agent with RAG Context

// File: src/docs/docs.agent.ts
import { Agent, Tool } from '@hazeljs/agent';
import { Service } from '@hazeljs/core';
import { RAGPipeline } from '@hazeljs/rag';

@Agent({
  name: 'docs-agent',
  description: 'Answers questions using documentation',
  model: 'gpt-4-turbo-preview',
  systemPrompt: 'Answer questions based on the retrieved documentation context. Cite sources.',
})
@Service()
export class DocsAgent {
  constructor(private readonly rag: RAGPipeline) {}

  @Tool({
    name: 'search_docs',
    description: 'Search documentation for relevant information',
    parameters: [
      { name: 'query', type: 'string', description: 'Search query', required: true },
    ],
  })
  async searchDocs(params: { query: string }) {
    const results = await this.rag.search(params.query, { topK: 5 });
    return results.map(r => ({ content: r.content, source: r.metadata.source }));
  }
}

Next Concepts to Learn