HazelJS Messaging Package


@hazeljs/messaging provides multichannel messaging for HazelJS — build conversational AI bots across WhatsApp, Telegram, and Viber with LLM-powered responses, RAG integration, and agent workflows.

Quick Reference

  • Purpose: @hazeljs/messaging provides channel adapters for WhatsApp, Telegram, and Viber with webhook handling, conversation context management, LLM-powered responses, RAG integration, and agent workflow orchestration.
  • When to use: Use @hazeljs/messaging when building conversational AI bots on messaging platforms. Use @hazeljs/websocket for browser-based real-time communication instead.
  • Key concepts: Channel adapters (Telegram, WhatsApp, Viber), webhook handling, conversation context, LLM integration, RAG integration, agent workflows, unified message interface.
  • Dependencies: @hazeljs/core, @hazeljs/ai, optionally @hazeljs/agent and @hazeljs/rag.
  • Common patterns: Register MessagingModule with channel credentials → define message handlers → integrate with @hazeljs/ai for LLM responses → optionally wire @hazeljs/agent for multi-step conversations.
  • Common mistakes: Not verifying webhook signatures (security risk); not managing conversation context per user/chat; hardcoding API tokens instead of using environment variables.

Purpose

Building messaging bots requires handling webhooks, managing conversation context, integrating with multiple platforms, and orchestrating AI responses. The @hazeljs/messaging package simplifies this by providing:

  • Channel Adapters – Telegram (Telegraf), WhatsApp Cloud API, Viber with unified interface
  • Unified Message Format – Channel-agnostic IncomingMessage / OutgoingMessage types
  • LLM Integration – Uses @hazeljs/ai providers (OpenAI, Anthropic, Gemini, etc.) for conversational responses
  • Conversation Context – Memory (development) or Redis (production, horizontally scalable)
  • Kafka Processing – Optional async message handling for horizontal scalability
  • Webhook Controller – Single endpoint per channel for incoming messages
  • RAG Integration – Ground responses in your knowledge base with @hazeljs/rag
  • Agent Workflows – Full CSR-style agent support with tools, RAG, and external APIs

Architecture

graph TD
  A["Incoming Message<br/>(WhatsApp/Telegram/Viber)"] --> B["Webhook Controller"]
  B --> C{Kafka Enabled?}
  C -->|Yes| D["Kafka Producer"]
  C -->|No| E["Message Handler"]
  D --> F["Kafka Consumer"]
  F --> E
  
  E --> G["Context Manager<br/>(Memory/Redis)"]
  G --> H{Handler Type}
  
  H -->|LLM| I["AI Provider<br/>(OpenAI, Anthropic)"]
  H -->|RAG| J["RAG Service<br/>(Knowledge Base)"]
  H -->|Agent| K["Agent Runtime<br/>(Tools + RAG)"]
  H -->|Custom| L["Custom Handler"]
  
  I --> M["Response"]
  J --> M
  K --> M
  L --> M
  
  M --> N["Channel Adapter"]
  N --> O["Send Reply<br/>(WhatsApp/Telegram/Viber)"]
  
  style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style B fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style E fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
  style G fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style I fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style J fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
  style K fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff

Key Components

  1. MessagingModule – Registers webhook controllers, channel adapters, and message handlers
  2. Channel Adapters – Platform-specific implementations (Telegram, WhatsApp, Viber)
  3. Context Manager – Stores conversation history (Memory or Redis)
  4. Message Handler – Orchestrates LLM, RAG, or agent responses
  5. Webhook Controller – Receives and validates incoming messages
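The unified message format and context manager (components 2-4) can be pictured with a small sketch. The type and class names below are illustrative assumptions, not the actual @hazeljs/messaging exports; the real API may differ:

```typescript
// Hypothetical shapes for illustration; actual @hazeljs/messaging types may differ.
interface IncomingMessage {
  channel: 'telegram' | 'whatsapp' | 'viber';
  chatId: string;
  userId: string;
  text: string;
}

interface ConversationContext {
  turns: { role: 'user' | 'assistant'; content: string }[];
}

// Minimal in-memory context manager keyed by channel + chat,
// trimming history to the last N turns (cf. maxContextTurns).
class MemoryContextManager {
  private store = new Map<string, ConversationContext>();
  constructor(private maxTurns = 10) {}

  private key(msg: IncomingMessage): string {
    return `${msg.channel}:${msg.chatId}`;
  }

  append(msg: IncomingMessage, role: 'user' | 'assistant', content: string): ConversationContext {
    const ctx = this.store.get(this.key(msg)) ?? { turns: [] };
    ctx.turns.push({ role, content });
    // Keep at most maxTurns exchanges (user + assistant = 2 entries each).
    if (ctx.turns.length > this.maxTurns * 2) {
      ctx.turns = ctx.turns.slice(-this.maxTurns * 2);
    }
    this.store.set(this.key(msg), ctx);
    return ctx;
  }
}
```

Keying by channel + chat is what lets one user hold independent conversations on Telegram and WhatsApp; the Redis-backed manager plays the same role with a shared store and TTL.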

Installation

npm install @hazeljs/messaging @hazeljs/ai @hazeljs/core

Optional Dependencies

# Redis for production context storage (horizontal scaling)
npm install ioredis

# Kafka for async message processing (horizontal scaling)
npm install @hazeljs/kafka

# Viber support
npm install viber-bot

# RAG integration
npm install @hazeljs/rag

# Agent workflows
npm install @hazeljs/agent

Quick Start

Basic LLM Bot

import { HazelApp } from '@hazeljs/core';
import { MessagingModule } from '@hazeljs/messaging';
import { OpenAIProvider } from '@hazeljs/ai';

const app = new HazelApp({
  imports: [
    MessagingModule.forRoot({
      aiProvider: new OpenAIProvider(process.env.OPENAI_API_KEY),
      systemPrompt: 'You are a helpful support assistant. Keep responses concise.',
      model: 'gpt-4o-mini',
      channels: {
        telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
        whatsapp: {
          accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
          phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
        },
      },
    }),
  ],
});

app.listen(3000);

Production Configuration (Redis + Kafka)

For horizontal scalability, use Redis for context and Kafka for async processing:

import Redis from 'ioredis';

MessagingModule.forRoot({
  aiProvider: new OpenAIProvider(),
  systemPrompt: 'You are a helpful assistant.',
  model: 'gpt-4o-mini',
  
  channels: {
    telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
    whatsapp: {
      accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
      phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
    },
  },
  
  // Redis for shared conversation context
  redis: {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
    password: process.env.REDIS_PASSWORD,
    ttlSeconds: 86400, // 24 hours
  },
  
  // Kafka for async message processing
  kafka: {
    brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
  },
});

Benefits:

  • Redis: Any instance can serve any session (stateless workers)
  • Kafka: Webhook returns 200 immediately; consumers process asynchronously (scale workers independently)
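The ack-then-process pattern Kafka enables can be illustrated with a toy in-process queue (purely illustrative; in production the handoff is a Kafka producer/consumer pair, not an array):

```typescript
// Toy illustration: the webhook enqueues and acks immediately;
// a worker drains the queue later, decoupled from the HTTP response.
const queue: string[] = [];

function webhook(payload: string): number {
  queue.push(payload); // hand off (a Kafka producer in production)
  return 200;          // respond before any LLM work happens
}

function drain(handler: (p: string) => void): void {
  while (queue.length > 0) handler(queue.shift()!);
}
```

Because the webhook never waits on the LLM, platform delivery timeouts are avoided and consumer workers can be scaled independently of the web tier.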

Channel Configuration

Telegram

Create a bot via @BotFather and set the webhook:

curl -X POST "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://your-domain.com/api/messaging/webhook/telegram"

channels: {
  telegram: {
    botToken: process.env.TELEGRAM_BOT_TOKEN!,
  },
}
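To guard against spoofed webhook calls (see Common mistakes above), Telegram lets you pass a secret_token to setWebhook; it then echoes that value in the X-Telegram-Bot-Api-Secret-Token header on every update. A standalone sketch of the check (how you wire it into @hazeljs/messaging is up to you; this is not a built-in package API):

```typescript
// Reject requests whose secret token header does not match the value
// you registered with setWebhook (Telegram Bot API 6.0+).
function isTelegramRequestTrusted(
  headers: Record<string, string | undefined>,
  expectedSecret: string,
): boolean {
  return headers['x-telegram-bot-api-secret-token'] === expectedSecret;
}
```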

WhatsApp

Requires WhatsApp Business API access:

  1. Create a Meta for Developers app
  2. Get accessToken and phoneNumberId
  3. Set WHATSAPP_VERIFY_TOKEN in your environment
  4. Configure webhook URL: https://your-domain.com/api/messaging/webhook/whatsapp

channels: {
  whatsapp: {
    accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
    phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
  },
}

Viber

Create a bot on Viber Developers:

npm install viber-bot

channels: {
  viber: {
    authToken: process.env.VIBER_AUTH_TOKEN!,
    name: 'My Bot',
    avatar: 'https://example.com/avatar.jpg',
  },
}

Webhook Endpoints

Channel    Method  URL                                Purpose
Telegram   POST    /api/messaging/webhook/telegram    Receive messages
WhatsApp   GET     /api/messaging/webhook/whatsapp    Webhook verification
WhatsApp   POST    /api/messaging/webhook/whatsapp    Receive messages
Viber      POST    /api/messaging/webhook/viber       Receive messages
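The WhatsApp GET endpoint implements Meta's standard verification handshake: Meta sends hub.mode, hub.verify_token, and hub.challenge as query parameters, and the server must echo the challenge back when the token matches WHATSAPP_VERIFY_TOKEN. A standalone sketch of that check (not the package's actual controller code):

```typescript
// Meta calls GET /webhook with hub.mode=subscribe, hub.verify_token,
// and hub.challenge; echo the challenge only when the token matches.
function verifyWhatsAppWebhook(
  query: Record<string, string | undefined>,
  verifyToken: string, // WHATSAPP_VERIFY_TOKEN from the environment
): { status: number; body: string } {
  if (query['hub.mode'] === 'subscribe' && query['hub.verify_token'] === verifyToken) {
    return { status: 200, body: query['hub.challenge'] ?? '' };
  }
  return { status: 403, body: 'Forbidden' };
}
```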

RAG Integration

Ground bot responses in your knowledge base using @hazeljs/rag:

import { RAGService, MemoryVectorStore, OpenAIEmbeddings } from '@hazeljs/rag';

// Set up RAG service
const embeddings = new OpenAIEmbeddings({ apiKey: process.env.OPENAI_API_KEY });
const vectorStore = new MemoryVectorStore(embeddings);
const ragService = new RAGService({ vectorStore, embeddingProvider: embeddings });

// Load knowledge base
await ragService.addDocuments(docs);

// Configure messaging with RAG
MessagingModule.forRoot({
  aiProvider: new OpenAIProvider(),
  ragService: ragService,
  ragTopK: 5,           // Retrieve top 5 documents
  ragMinScore: 0.5,     // Minimum relevance score
  channels: {
    telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
  },
});

How it works:

  1. User sends message
  2. RAG retrieves relevant documents from knowledge base
  3. Documents are added to LLM context
  4. LLM generates grounded response
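Steps 2-3 can be sketched as a prompt-assembly function. buildPrompt is a hypothetical helper, not a @hazeljs/messaging export; it shows how ragMinScore filters retrieved documents before they are prepended to the LLM context:

```typescript
// Illustrative only: filter retrieved documents by relevance score,
// then assemble a grounded prompt for the LLM.
interface RetrievedDoc {
  text: string;
  score: number;
}

function buildPrompt(
  systemPrompt: string,
  docs: RetrievedDoc[],
  minScore: number, // cf. ragMinScore
  question: string,
): string {
  const context = docs
    .filter((d) => d.score >= minScore)
    .map((d, i) => `[${i + 1}] ${d.text}`)
    .join('\n');
  return `${systemPrompt}\n\nContext:\n${context}\n\nUser: ${question}`;
}
```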

Agent Workflows

Wire your CSRService or AgentRuntime for full control with tools, RAG, and external APIs:

import { MessagingModule } from '@hazeljs/messaging';
import { AgentRuntime } from '@hazeljs/agent';
import { SupportAgent } from './agents/support.agent';

const runtime = new AgentRuntime({
  llmProvider: new OpenAIProvider(),
  defaultMaxSteps: 10,
});

runtime.registerAgent(SupportAgent);

MessagingModule.forRoot({
  agentHandler: async ({ message, sessionId, conversationTurns }) => {
    const result = await runtime.execute(
      'support-agent',
      message.text,
      { sessionId, userId: message.userId, enableMemory: true }
    );
    
    return {
      response: result.response,
      sources: result.sources,
    };
  },
  channels: {
    telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
  },
});

Agent capabilities:

  • Call tools (lookup orders, check inventory, create tickets)
  • Use RAG for knowledge retrieval
  • Maintain conversation memory
  • Require human approval for sensitive actions

Custom Handler

Override the default LLM handler with custom logic:

MessagingModule.forRoot({
  customHandler: async (msg) => {
    if (msg.text === '/help') {
      return 'Available commands: /help, /status, /contact';
    }
    
    if (msg.text === '/status') {
      return 'All systems operational ✅';
    }
    
    // Fallback for unrecognized input
    return 'I can help you with /help, /status, or /contact';
  },
  channels: {
    telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
  },
});

Configuration Options

MessagingModuleOptions

Option           Type         Default        Description
aiProvider       IAIProvider  Required       AI provider from @hazeljs/ai
systemPrompt     string       ''             System prompt for the LLM
model            string       'gpt-4o-mini'  Model name
temperature      number       0.7            LLM temperature (0-1)
maxTokens        number       500            Max response tokens
maxContextTurns  number       10             Conversation turns to keep
channels         object       Required       Channel configurations
redis            object       undefined      Redis config for context
kafka            object       undefined      Kafka config for async processing
ragService       RAGService   undefined      RAG service instance
ragTopK          number       5              Number of documents to retrieve
ragMinScore      number       0.5            Minimum relevance score
agentHandler     function     undefined      Custom agent handler
customHandler    function     undefined      Custom message handler

Advantages

1. Unified Interface

Single codebase for WhatsApp, Telegram, and Viber with channel-agnostic message types.

2. Horizontal Scalability

Redis + Kafka support for stateless workers and async processing.

3. AI-Native

First-class LLM, RAG, and agent integration out of the box.

4. Production-Ready

Webhook validation, error handling, retry logic, and conversation context management.

5. Flexible Architecture

Use LLM-only, RAG-augmented, full agent workflows, or custom handlers.

Use Cases

  • Customer Support Bots – Answer FAQs, lookup orders, create tickets
  • Knowledge Base Assistants – RAG-powered documentation bots
  • Notification Systems – Send alerts and updates via messaging platforms
  • Conversational Commerce – Product recommendations, order tracking
  • Internal Tools – DevOps alerts, incident management, team notifications

Recipes

Recipe: Telegram Bot with AI Responses

// File: src/bot/telegram.bot.ts
import { Service } from '@hazeljs/core';
import { MessagingBot, OnMessage, Channel } from '@hazeljs/messaging';
import { AIEnhancedService } from '@hazeljs/ai';

@MessagingBot({ platform: 'telegram' })
@Service()
export class TelegramBot {
  constructor(private readonly ai: AIEnhancedService) {}

  @OnMessage()
  async handleMessage(@Channel() channel: string, message: string) {
    const response = await this.ai
      .chat(message)
      .system('You are a helpful team assistant. Keep responses concise.')
      .text();

    return { channel, text: response };
  }
}

Recipe: Multi-Channel Bot Registration

// File: src/app.module.ts
import { HazelModule } from '@hazeljs/core';
import { MessagingModule } from '@hazeljs/messaging';
import { OpenAIProvider } from '@hazeljs/ai';

@HazelModule({
  imports: [
    MessagingModule.forRoot({
      aiProvider: new OpenAIProvider(),
      channels: {
        telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
        whatsapp: {
          accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
          phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
        },
      },
    }),
  ],
})
export class AppModule {}