HazelJS Messaging Package
@hazeljs/messaging provides multichannel messaging for HazelJS — build conversational AI bots across WhatsApp, Telegram, and Viber with LLM-powered responses, RAG integration, and agent workflows.
Quick Reference
- Purpose: @hazeljs/messaging provides channel adapters for WhatsApp, Telegram, and Viber with webhook handling, conversation context management, LLM-powered responses, RAG integration, and agent workflow orchestration.
- When to use: Use @hazeljs/messaging when building conversational AI bots on messaging platforms. Use @hazeljs/websocket for browser-based real-time communication instead.
- Key concepts: Channel adapters (Telegram, WhatsApp, Viber), webhook handling, conversation context, LLM integration, RAG integration, agent workflows, unified message interface.
- Dependencies: @hazeljs/core and @hazeljs/ai; optionally @hazeljs/agent and @hazeljs/rag.
- Common patterns: Register MessagingModule with channel credentials → define message handlers → integrate with @hazeljs/ai for LLM responses → optionally wire @hazeljs/agent for multi-step conversations.
- Common mistakes: Not verifying webhook signatures (security risk); not managing conversation context per user/chat; hardcoding API tokens instead of using environment variables.
Purpose
Building messaging bots requires handling webhooks, managing conversation context, integrating with multiple platforms, and orchestrating AI responses. The @hazeljs/messaging package simplifies this by providing:
- Channel Adapters – Telegram (Telegraf), WhatsApp Cloud API, and Viber behind a unified interface
- Unified Message Format – Channel-agnostic IncomingMessage/OutgoingMessage types
- LLM Integration – Uses @hazeljs/ai providers (OpenAI, Anthropic, Gemini, etc.) for conversational responses
- Conversation Context – Memory (development) or Redis (production, horizontally scalable)
- Kafka Processing – Optional async message handling for horizontal scalability
- Webhook Controller – Single endpoint per channel for incoming messages
- RAG Integration – Ground responses in your knowledge base with @hazeljs/rag
- Agent Workflows – Full CSR-style agent support with tools, RAG, and external APIs
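The unified message format can be sketched as channel-agnostic types plus a per-channel normalizer. The field names below are illustrative assumptions, not the package's actual type definitions:

```typescript
// Illustrative sketch of a channel-agnostic message shape; the real
// IncomingMessage/OutgoingMessage types in @hazeljs/messaging may differ.
type ChannelName = 'telegram' | 'whatsapp' | 'viber';

interface IncomingMessage {
  channel: ChannelName;
  chatId: string;   // platform-specific conversation id
  userId: string;   // platform-specific sender id
  text: string;
  timestamp: number; // milliseconds since epoch
}

interface OutgoingMessage {
  channel: ChannelName;
  chatId: string;
  text: string;
}

// Hypothetical normalizer for a Telegram update payload.
function fromTelegramUpdate(update: any): IncomingMessage {
  const msg = update.message;
  return {
    channel: 'telegram',
    chatId: String(msg.chat.id),
    userId: String(msg.from.id),
    text: msg.text ?? '',
    timestamp: msg.date * 1000, // Telegram sends Unix seconds
  };
}
```

Handlers written against these types stay identical no matter which platform delivered the message.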
Architecture
graph TD
A["Incoming Message<br/>(WhatsApp/Telegram/Viber)"] --> B["Webhook Controller"]
B --> C{Kafka Enabled?}
C -->|Yes| D["Kafka Producer"]
C -->|No| E["Message Handler"]
D --> F["Kafka Consumer"]
F --> E
E --> G["Context Manager<br/>(Memory/Redis)"]
G --> H{Handler Type}
H -->|LLM| I["AI Provider<br/>(OpenAI, Anthropic)"]
H -->|RAG| J["RAG Service<br/>(Knowledge Base)"]
H -->|Agent| K["Agent Runtime<br/>(Tools + RAG)"]
H -->|Custom| L["Custom Handler"]
I --> M["Response"]
J --> M
K --> M
L --> M
M --> N["Channel Adapter"]
N --> O["Send Reply<br/>(WhatsApp/Telegram/Viber)"]
style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
style B fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
style E fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
style G fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
style I fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
style J fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
style K fill:#f59e0b,stroke:#fbbf24,stroke-width:2px,color:#fff
Key Components
- MessagingModule – Registers webhook controllers, channel adapters, and message handlers
- Channel Adapters – Platform-specific implementations (Telegram, WhatsApp, Viber)
- Context Manager – Stores conversation history (Memory or Redis)
- Message Handler – Orchestrates LLM, RAG, or agent responses
- Webhook Controller – Receives and validates incoming messages
Installation
npm install @hazeljs/messaging @hazeljs/ai @hazeljs/core
Optional Dependencies
# Redis for production context storage (horizontal scaling)
npm install ioredis
# Kafka for async message processing (horizontal scaling)
npm install @hazeljs/kafka
# Viber support
npm install viber-bot
# RAG integration
npm install @hazeljs/rag
# Agent workflows
npm install @hazeljs/agent
Quick Start
Basic LLM Bot
import { HazelApp } from '@hazeljs/core';
import { MessagingModule } from '@hazeljs/messaging';
import { OpenAIProvider } from '@hazeljs/ai';
const app = new HazelApp({
imports: [
MessagingModule.forRoot({
aiProvider: new OpenAIProvider(process.env.OPENAI_API_KEY),
systemPrompt: 'You are a helpful support assistant. Keep responses concise.',
model: 'gpt-4o-mini',
channels: {
telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
whatsapp: {
accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
},
},
}),
],
});
app.listen(3000);
Production Configuration (Redis + Kafka)
For horizontal scalability, use Redis for context and Kafka for async processing:
import Redis from 'ioredis';
MessagingModule.forRoot({
aiProvider: new OpenAIProvider(),
systemPrompt: 'You are a helpful assistant.',
model: 'gpt-4o-mini',
channels: {
telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
whatsapp: {
accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
},
},
// Redis for shared conversation context
redis: {
host: process.env.REDIS_HOST ?? 'localhost',
port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
password: process.env.REDIS_PASSWORD,
ttlSeconds: 86400, // 24 hours
},
// Kafka for async message processing
kafka: {
brokers: (process.env.KAFKA_BROKERS ?? 'localhost:9092').split(','),
},
});
Benefits:
- Redis: Any instance can serve any session (stateless workers)
- Kafka: Webhook returns 200 immediately; consumers process asynchronously (scale workers independently)
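The context manager's trimming behavior (the maxContextTurns option) can be sketched with an in-memory store; a Redis-backed variant would expose the same interface using list trimming plus a TTL. The class and method names here are assumptions for illustration:

```typescript
interface Turn { role: 'user' | 'assistant'; content: string }

// Minimal in-memory context store that caps history at maxTurns,
// mirroring what a Redis implementation would do with LTRIM + EXPIRE.
class MemoryContextManager {
  private store = new Map<string, Turn[]>();
  constructor(private maxTurns = 10) {}

  append(sessionId: string, turn: Turn): void {
    const turns = this.store.get(sessionId) ?? [];
    turns.push(turn);
    // Keep only the most recent maxTurns entries.
    this.store.set(sessionId, turns.slice(-this.maxTurns));
  }

  history(sessionId: string): Turn[] {
    return this.store.get(sessionId) ?? [];
  }
}
```

Because each worker reads and writes the same keyed history, moving this interface onto Redis is what makes the workers stateless.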
Channel Configuration
Telegram
Create a bot via @BotFather and set the webhook:
curl -X POST "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/setWebhook?url=https://your-domain.com/api/messaging/webhook/telegram"
channels: {
telegram: {
botToken: process.env.TELEGRAM_BOT_TOKEN!,
},
}
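Telegram does not sign webhook payloads; instead, setWebhook accepts an optional secret_token parameter, which Telegram echoes back in the X-Telegram-Bot-Api-Secret-Token header on every update. A minimal guard might look like this (how you read the header depends on your HTTP layer):

```typescript
import { timingSafeEqual } from 'node:crypto';

// Reject updates whose secret-token header does not match the value
// passed to setWebhook. Constant-time comparison avoids timing leaks.
function isTelegramRequestValid(headerValue: string | undefined, secret: string): boolean {
  if (!headerValue || headerValue.length !== secret.length) return false;
  return timingSafeEqual(Buffer.from(headerValue), Buffer.from(secret));
}
```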
WhatsApp
Requires WhatsApp Business API access:
- Create a Meta for Developers app
- Get accessToken and phoneNumberId
- Set WHATSAPP_VERIFY_TOKEN in your environment
- Configure webhook URL:
https://your-domain.com/api/messaging/webhook/whatsapp
channels: {
whatsapp: {
accessToken: process.env.WHATSAPP_ACCESS_TOKEN!,
phoneNumberId: process.env.WHATSAPP_PHONE_NUMBER_ID!,
},
}
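When you register the webhook, Meta first sends a GET request carrying hub.mode, hub.verify_token, and hub.challenge; your endpoint must echo the challenge back only when the token matches WHATSAPP_VERIFY_TOKEN. A sketch of that handshake, decoupled from any HTTP framework:

```typescript
// Echo hub.challenge when Meta's verification token matches; otherwise 403.
function verifyWhatsAppWebhook(
  query: Record<string, string>,
  expectedToken: string,
): { status: number; body: string } {
  if (query['hub.mode'] === 'subscribe' && query['hub.verify_token'] === expectedToken) {
    return { status: 200, body: query['hub.challenge'] };
  }
  return { status: 403, body: 'Forbidden' };
}
```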
Viber
Create a bot on Viber Developers:
npm install viber-bot
channels: {
viber: {
authToken: process.env.VIBER_AUTH_TOKEN!,
name: 'My Bot',
avatar: 'https://example.com/avatar.jpg',
},
}
Webhook Endpoints
| Channel | Method | URL | Purpose |
|---|---|---|---|
| Telegram | POST | /api/messaging/webhook/telegram | Receive messages |
| WhatsApp | GET | /api/messaging/webhook/whatsapp | Webhook verification |
| WhatsApp | POST | /api/messaging/webhook/whatsapp | Receive messages |
| Viber | POST | /api/messaging/webhook/viber | Receive messages |
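For the WhatsApp POST endpoint, Meta signs every payload with your app secret and sends the result as "sha256=&lt;hex&gt;" in the X-Hub-Signature-256 header; skipping this check is the signature-verification mistake called out in the Quick Reference. A sketch of the check (verify against the raw body, before JSON parsing):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Compare Meta's X-Hub-Signature-256 header against an HMAC-SHA256 of the
// raw request body keyed by your app secret, in constant time.
function isSignatureValid(rawBody: string, header: string, appSecret: string): boolean {
  const expected = 'sha256=' + createHmac('sha256', appSecret).update(rawBody).digest('hex');
  if (expected.length !== header.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(header));
}
```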
RAG Integration
Ground bot responses in your knowledge base using @hazeljs/rag:
import { RAGService, MemoryVectorStore, OpenAIEmbeddings } from '@hazeljs/rag';
// Set up RAG service
const embeddings = new OpenAIEmbeddings({ apiKey: process.env.OPENAI_API_KEY });
const vectorStore = new MemoryVectorStore(embeddings);
const ragService = new RAGService({ vectorStore, embeddingProvider: embeddings });
// Load knowledge base
await ragService.addDocuments(docs);
// Configure messaging with RAG
MessagingModule.forRoot({
aiProvider: new OpenAIProvider(),
ragService: ragService,
ragTopK: 5, // Retrieve top 5 documents
ragMinScore: 0.5, // Minimum relevance score
channels: {
telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
},
});
How it works:
- User sends message
- RAG retrieves relevant documents from knowledge base
- Documents are added to LLM context
- LLM generates grounded response
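Steps 2 and 3 above can be sketched as a small prompt-assembly function. The ragTopK/ragMinScore parameters mirror the module options; the prompt wording and function name are illustrative assumptions:

```typescript
interface RetrievedDoc { content: string; score: number }

// Filter retrieved documents by a minimum relevance score, keep the
// top K by score, and fold them into the system prompt as context.
function buildGroundedPrompt(
  basePrompt: string,
  docs: RetrievedDoc[],
  topK = 5,
  minScore = 0.5,
): string {
  const context = docs
    .filter((d) => d.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((d, i) => `[${i + 1}] ${d.content}`)
    .join('\n');
  return context ? `${basePrompt}\n\nUse the following context:\n${context}` : basePrompt;
}
```

When no document clears the score threshold, the bot falls back to the plain system prompt rather than padding the context with irrelevant text.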
Agent Workflows
Wire your CSRService or AgentRuntime for full control with tools, RAG, and external APIs:
import { MessagingModule } from '@hazeljs/messaging';
import { AgentRuntime } from '@hazeljs/agent';
import { SupportAgent } from './agents/support.agent';
const runtime = new AgentRuntime({
llmProvider: new OpenAIProvider(),
defaultMaxSteps: 10,
});
runtime.registerAgent(SupportAgent);
MessagingModule.forRoot({
agentHandler: async ({ message, sessionId, conversationTurns }) => {
const result = await runtime.execute(
'support-agent',
message.text,
{ sessionId, userId: message.userId, enableMemory: true }
);
return {
response: result.response,
sources: result.sources,
};
},
channels: {
telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
},
});
Agent capabilities:
- Call tools (lookup orders, check inventory, create tickets)
- Use RAG for knowledge retrieval
- Maintain conversation memory
- Require human approval for sensitive actions
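For memory to persist across messages (and across worker instances), the sessionId passed to the runtime must be stable per conversation. One simple convention, shown here as an assumption rather than the package's own scheme, is to derive it from the channel and chat id:

```typescript
// Stable per-conversation session id: the same Telegram chat always maps
// to the same agent memory, regardless of which worker handles the update.
function deriveSessionId(channel: string, chatId: string): string {
  return `${channel}:${chatId}`;
}
```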
Custom Handler
Override the default LLM handler with custom logic:
MessagingModule.forRoot({
customHandler: async (msg) => {
if (msg.text === '/help') {
return 'Available commands: /help, /status, /contact';
}
if (msg.text === '/status') {
return 'All systems operational ✅';
}
    // Fallback for unrecognized commands
    return 'I can help you with /help, /status, or /contact';
},
channels: {
telegram: { botToken: process.env.TELEGRAM_BOT_TOKEN! },
},
});
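The dispatch pattern above generalizes to a command table with a fallback handler for non-command text; `fallback` here stands in for whatever you delegate to (an LLM call, for instance) and is an assumption for illustration:

```typescript
// Command table with a pluggable fallback for non-command messages.
const commands: Record<string, string> = {
  '/help': 'Available commands: /help, /status, /contact',
  '/status': 'All systems operational ✅',
};

function route(msg: { text: string }, fallback: (m: { text: string }) => string): string {
  return commands[msg.text] ?? fallback(msg);
}
```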
Configuration Options
MessagingModuleOptions
| Option | Type | Default | Description |
|---|---|---|---|
| aiProvider | IAIProvider | Required | AI provider from @hazeljs/ai |
| systemPrompt | string | '' | System prompt for the LLM |
| model | string | 'gpt-4o-mini' | Model name |
| temperature | number | 0.7 | LLM temperature (0-1) |
| maxTokens | number | 500 | Max response tokens |
| maxContextTurns | number | 10 | Conversation turns to keep |
| channels | object | Required | Channel configurations |
| redis | object | undefined | Redis config for context |
| kafka | object | undefined | Kafka config for async processing |
| ragService | RAGService | undefined | RAG service instance |
| ragTopK | number | 5 | Number of documents to retrieve |
| ragMinScore | number | 0.5 | Minimum relevance score |
| agentHandler | function | undefined | Custom agent handler |
| customHandler | function | undefined | Custom message handler |
Advantages
1. Unified Interface
Single codebase for WhatsApp, Telegram, and Viber with channel-agnostic message types.
2. Horizontal Scalability
Redis + Kafka support for stateless workers and async processing.
3. AI-Native
First-class LLM, RAG, and agent integration out of the box.
4. Production-Ready
Webhook validation, error handling, retry logic, and conversation context management.
5. Flexible Architecture
Use LLM-only, RAG-augmented, full agent workflows, or custom handlers.
Use Cases
- Customer Support Bots – Answer FAQs, lookup orders, create tickets
- Knowledge Base Assistants – RAG-powered documentation bots
- Notification Systems – Send alerts and updates via messaging platforms
- Conversational Commerce – Product recommendations, order tracking
- Internal Tools – DevOps alerts, incident management, team notifications
Related Resources
- AI Package – LLM providers and integration
- RAG Package – Knowledge base and semantic search
- Agent Package – Agent runtime and tools
- Kafka Package – Async message processing
- Cache Package – Redis integration
Recipes
Recipe: Slack Bot with AI Responses
// File: src/bot/slack.bot.ts
import { Service } from '@hazeljs/core';
import { MessagingBot, OnMessage, Channel } from '@hazeljs/messaging';
import { AIEnhancedService } from '@hazeljs/ai';
@MessagingBot({ platform: 'slack' })
@Service()
export class SlackBot {
constructor(private readonly ai: AIEnhancedService) {}
@OnMessage()
async handleMessage(@Channel() channel: string, message: string) {
const response = await this.ai
.chat(message)
.system('You are a helpful team assistant. Keep responses concise.')
.text();
return { channel, text: response };
}
}
Recipe: Multi-Channel Bot Registration
// File: src/app.module.ts
import { HazelModule } from '@hazeljs/core';
import { MessagingModule } from '@hazeljs/messaging';
@HazelModule({
imports: [
MessagingModule.register({
platforms: {
slack: { token: process.env.SLACK_BOT_TOKEN, signingSecret: process.env.SLACK_SIGNING_SECRET },
discord: { token: process.env.DISCORD_BOT_TOKEN },
},
}),
],
})
export class AppModule {}