# HazelJS Overview
HazelJS is a TypeScript backend framework with built-in Agent Runtime, Agentic RAG, and persistent memory. It provides dependency injection, decorator-based routing, modular architecture, and first-class AI integration for building stateful AI agents, intelligent APIs, and production-ready backends.
## Quick Reference
- Purpose: HazelJS is a TypeScript backend framework for building AI-native server applications with built-in agent runtime, RAG, and memory.
- When to use: Use HazelJS when building backend APIs that need AI capabilities (agents, RAG, LLM integration) alongside traditional REST/GraphQL services in a single TypeScript codebase.
- Key concepts: Modules, Controllers, Providers (Services), Dependency Injection, Decorators, Guards, Interceptors, Pipes, Exception Filters, AI Providers, Agent Runtime, RAG Pipeline, Memory System.
- Inputs: HTTP requests, WebSocket messages, Kafka events, cron triggers, agent execution requests.
- Outputs: HTTP responses (JSON), streaming AI responses, agent execution results, RAG query results.
- Dependencies: Node.js 18+, TypeScript with `experimentalDecorators` and `emitDecoratorMetadata` enabled, `reflect-metadata`.
- Common patterns: Controller → Service → AI Module → Provider → Response; User Query → Embed → Vector Search → Context Injection → LLM Response; Agent execute → Think → Tool Call → Persist → Respond.
- Common mistakes: forgetting to enable `experimentalDecorators` in tsconfig; not registering providers in a module; putting business logic in controllers instead of services; not exporting providers that other modules need.
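The decorator flags are the most common stumbling block. A minimal `tsconfig.json` for a HazelJS project might look like this (the two decorator options are required for decorator-based frameworks; the remaining options are typical defaults, not HazelJS-specific requirements):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "strict": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```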
## Quick Start (5 minutes)

### 1. Try the Playground Online
The fastest way to experience HazelJS is with our interactive playground:
🚀 Try HazelJS Playground - A complete demo with AI agents, RAG, microservices, and more.
### 2. Create Your First AI-Native App
```bash
# Install HazelJS CLI
npm install -g @hazeljs/cli

# Create a new AI-native application
hazel new my-ai-app --template ai-native
cd my-ai-app

# Setup environment
cp .env.example .env
# Add your OpenAI API key to .env
# OPENAI_API_KEY=your_key_here

# Start PostgreSQL and Redis
docker-compose up -d

# Run database migrations
npm run db:push
npm run db:seed

# Start your AI application
npm run dev
```
Available Templates:

- `--template ai-native` — Complete AI app with agents, RAG, and chat (recommended)
- `--template basic` — Basic HazelJS application (default)
Available Generators:

- `ai-service <name>` — AI service with decorators
- `agent <name>` — AI agent with `@Agent` and `@Tool`
- `rag <name>` — RAG (Retrieval-Augmented Generation) service
- `controller <name>` — REST controller
- `service <name>` — Service class
- `crud <name>` — Full CRUD resource
- `auth [name]` — Auth module (JWT guard, service, controller)
- `cache <name>` — Cache service with decorators
- `cron <name>` — Cron/scheduled job service
Your app is now running at http://localhost:3000 with:

- ✨ AI chat endpoint at `/chat`
- 🤖 AI agent with tools at `/agent`
- 📚 RAG document ingestion at `/rag/ingest`
- 🔍 Semantic search at `/rag/search`
- 🏥 Health checks at `/health`
- 📊 HazelJS Inspector at `/__hazel`
## AI-Native CLI Template
The AI-Native CLI Template is HazelJS's flagship starter template that provides a complete, production-ready foundation for building AI-powered applications. It comes pre-configured with everything you need to build sophisticated AI systems in minutes.
### What's Included

#### 🤖 AI Agents & Tools
- Pre-configured WeatherAgent with real weather data tools
- Multi-agent system with delegation capabilities
- Agent runtime with streaming responses and tracing
- Tool execution with error handling and retries
#### 📚 Advanced RAG System
- PostgreSQL with pgvector for persistent vector storage
- Document ingestion with automatic embedding generation
- Semantic search with cosine similarity
- Support for multiple document formats (PDF, TXT, JSON, CSV, MD)
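Semantic search ranks documents by cosine similarity between embedding vectors. The scoring itself is simple; here is a self-contained sketch of the metric, independent of any HazelJS or pgvector API:

```typescript
// Cosine similarity: the dot product of two vectors divided by the
// product of their magnitudes. Returns a score in [-1, 1], where 1
// means the embeddings point in exactly the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In production, pgvector computes this (via the `<=>` cosine-distance operator) inside PostgreSQL so the ranking can use an index instead of scanning every row.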
#### 🗄️ Production Database
- PostgreSQL with Prisma ORM
- pgvector extension for vector operations
- Database migrations and seeding
- Connection pooling and health checks
#### 🐳 Docker & Deployment
- Complete Docker setup with docker-compose
- PostgreSQL, Redis, and application containers
- Health checks and monitoring
- Production-ready environment variables
#### 🏗️ Enterprise Architecture
- Modular structure with clear separation of concerns
- Dependency injection and service layer
- Error handling and logging
- Testing utilities and examples
### Available Endpoints
Your AI-Native application includes these ready-to-use endpoints:
#### AI Chat & Agents

```bash
# Chat with AI
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What can you help me with?"}'

# Execute AI agent with tools
curl -X POST http://localhost:3000/agent \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the weather in Tokyo?"}'

# Agent execution with full trace
curl -X POST http://localhost:3000/agent/trace \
  -H "Content-Type: application/json" \
  -d '{"message": "Compare weather in New York and London"}'

# Streaming agent responses
curl -X POST http://localhost:3000/agent/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Tell me about the weather forecast"}'
```
#### RAG Document Management

```bash
# Ingest documents for semantic search
curl -X POST http://localhost:3000/rag/ingest \
  -H "Content-Type: application/json" \
  -d '{"content": "HazelJS is a TypeScript framework for AI-native backends", "metadata": { "source": "docs", "type": "introduction" }}'

# Search documents with semantic similarity
curl -X POST http://localhost:3000/rag/search \
  -H "Content-Type: application/json" \
  -d '{"query": "What is HazelJS?"}'

# Get RAG statistics
curl http://localhost:3000/rag/stats

# List all ingested documents
curl http://localhost:3000/rag/list
```
#### Multi-Agent Workflows

```bash
# Travel agent with delegation
curl -X POST http://localhost:3000/travel \
  -H "Content-Type: application/json" \
  -d '{"query": "Plan a trip to Paris with weather information"}'

# Supervisor agent routing
curl -X POST http://localhost:3000/travel/supervisor \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the weather in Tokyo and what should I visit?"}'
```
### Architecture Overview

```mermaid
graph TB
    subgraph "Client Layer"
        A[HTTP Client]
    end
    subgraph "Application Layer"
        B[Controllers]
        C[Services]
    end
    subgraph "AI Layer"
        D[RAG Service]
        E[Agent Runtime]
    end
    subgraph "Data Layer"
        F[PostgreSQL]
        G[pgvector]
        H[Prisma ORM]
    end
    subgraph "External Services"
        I[LLM Provider]
        J[Tool APIs]
    end
    A --> B
    B --> C
    C --> D
    C --> E
    D --> F
    F --> G
    H --> F
    E --> I
    E --> J
    style D fill:#e1f5fe
    style E fill:#f3e5f5
    style F fill:#e8f5e8
```

### Development Workflow
- Define Your Agents: Create new agents with the `@Agent` decorator
- Add Tools: Implement tools with the `@Tool` decorator
- Configure RAG: Ingest documents and enable semantic search
- Build Workflows: Combine agents with delegation patterns
- Deploy: Use Docker for production deployment
### Why Choose the AI-Native Template?
- 🚀 Fast Start: Production-ready AI app in under 5 minutes
- 🔧 Batteries Included: All AI capabilities pre-configured
- 📈 Scalable: Built for production with PostgreSQL and Redis
- 🧪 Testable: Complete test suite and examples
- 📚 Well Documented: Comprehensive guides and API reference
- 🔄 Maintainable: Clean architecture and separation of concerns
The AI-Native template is the fastest way to build sophisticated AI applications with HazelJS. Whether you're building a chatbot, an intelligent assistant, or a complex multi-agent system, this template provides the foundation you need.
## What Is HazelJS?
HazelJS is a TypeScript framework for intelligent, stateful backend applications: ones that reason, remember, and integrate with LLMs without glue code. It supports RAG-powered APIs, multi-agent workflows, and classic REST services with AI capabilities in a single stack.
HazelJS provides four pillars:

- Agent Runtime — Stateful AI agents with tools, memory, and human-in-the-loop approval. Define agents and tools with the `@Agent` and `@Tool` decorators; the Agent Runtime handles orchestration, state persistence, and execution loops.
- Agentic RAG — Retrieval-augmented generation and vector search built into the framework. Ingest documents, generate embeddings, and query with semantic search, hybrid search, or advanced strategies like HyDE and multi-hop retrieval.
- Built-in AI Integration — First-class support for OpenAI, Anthropic, Gemini, Cohere, and Ollama through `@hazeljs/ai`. Swap AI providers or stream responses without rewriting application code.
- Enterprise-Grade Foundation — Dependency injection, decorator-based API, modules, guards, interceptors, pipes, exception filters, and testing utilities. The same patterns used in traditional backend development.
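The provider-swapping idea can be illustrated with a plain interface, independent of `@hazeljs/ai`. All names below (`AIProvider`, the stub classes, `answer`) are illustrative, not the package's real API:

```typescript
// Illustrative sketch: application code depends only on a narrow
// interface, so swapping providers is a configuration change.
interface AIProvider {
  complete(prompt: string): Promise<string>;
}

// Stub providers standing in for real SDK calls.
class StubOpenAIProvider implements AIProvider {
  async complete(prompt: string): Promise<string> {
    return `openai: ${prompt}`;
  }
}

class StubOllamaProvider implements AIProvider {
  async complete(prompt: string): Promise<string> {
    return `ollama: ${prompt}`;
  }
}

// Business logic never names a concrete provider.
async function answer(provider: AIProvider, question: string): Promise<string> {
  return provider.complete(question);
}
```

This is the same dependency-inversion pattern the framework's DI container applies everywhere: controllers and services receive the provider through injection rather than constructing it themselves.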
## Architecture Mental Model

### HTTP Request Flow

```mermaid
flowchart LR
    A[Request] --> B[Router]
    B --> C[Middleware]
    C --> D[Guards]
    D --> E[Interceptors]
    E --> F[Pipes]
    F --> G[Controller]
    G --> H[Service]
    H --> I[Response]
```
### AI-Powered Request Flow

```mermaid
flowchart LR
    A[Request] --> B[Controller]
    B --> C[Service]
    C --> D{AI Component}
    D -->|Chat| E[AI Service]
    D -->|Agent| F[Agent Runtime]
    D -->|RAG| G[RAG Pipeline]
    E --> H[LLM Provider]
    F --> H
    G --> H
    H --> I[Response]
```

### RAG Query Flow with PostgreSQL
```mermaid
flowchart TD
    A[User Query] --> B[Embed Query]
    B --> C[Vector Search<br/>pgvector]
    C --> D[Retrieve Top-K<br/>Documents]
    D --> E[Inject Context]
    E --> F[LLM Generation]
    F --> G[Response]
```
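The retrieval step in this flow reduces to "rank stored vectors by similarity to the query vector and keep the top K". A toy in-memory version makes that concrete (pgvector performs this in SQL with an index, but the logic is the same; the code assumes embeddings are already normalized to unit length, as most embedding APIs return):

```typescript
interface Doc {
  content: string;
  embedding: number[];
}

// Dot product over pre-normalized embeddings is equivalent to
// cosine similarity.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Retrieve the K documents most similar to the query embedding.
function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => dot(query, y.embedding) - dot(query, x.embedding))
    .slice(0, k);
}
```

The retrieved documents' content is then concatenated into the prompt (the "Inject Context" step) before the LLM generates its answer.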
### Agent Execution Flow

```mermaid
flowchart TD
    A[User Input] --> B[AgentRuntime.execute]
    B --> C[Load State]
    C --> D[Load Memory]
    D --> E[Retrieve RAG]
    E --> F[Ask LLM]
    F --> G{Tool Call?}
    G -->|Yes| H[Execute Tool]
    G -->|No| I[Generate Response]
    H --> J[Persist State]
    I --> J
    J --> K{Continue?}
    K -->|Yes| E
    K -->|No| L[Response]
```
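The think → tool → persist loop above can be sketched without any framework code. Everything here is a stand-in for the real Agent Runtime: `llm` simulates a model that either requests a tool or produces a final answer, and `history` plays the role of persisted state:

```typescript
// An LLM turn either requests a tool call or returns a final answer.
type LLMStep =
  | { kind: "tool"; name: string; args: string }
  | { kind: "answer"; text: string };

type Tool = (args: string) => string;

function runAgent(
  llm: (history: string[]) => LLMStep,
  tools: Record<string, Tool>,
  input: string,
  maxSteps = 5,
): string {
  const history: string[] = [`user: ${input}`];
  for (let i = 0; i < maxSteps; i++) {
    const step = llm(history);
    if (step.kind === "answer") {
      history.push(`agent: ${step.text}`); // persist the final turn
      return step.text;
    }
    // Execute the requested tool and feed the result back to the
    // model on the next iteration of the loop.
    const result = tools[step.name](step.args);
    history.push(`tool ${step.name}: ${result}`);
  }
  return "max steps reached";
}
```

The real runtime adds what the sketch omits: durable state, memory retrieval, streaming, tracing, and retries around tool failures.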
### Multi-Agent Workflow

```mermaid
flowchart TD
    A[User Query] --> B[Supervisor Agent]
    B --> C{Delegate?}
    C -->|Yes| D[Specialized Agents]
    C -->|No| E[Direct Response]
    D --> F[Weather Agent]
    D --> G[Facts Agent]
    D --> H[Other Agents]
    F --> I[Aggregate Results]
    G --> I
    H --> I
    I --> J[Synthesize Response]
    J --> K[Persist to Memory]
    K --> L[Final Response]
```

### AI-Native Template Architecture
```mermaid
graph TB
    subgraph "Client Layer"
        A[HTTP Client]
    end
    subgraph "Application Layer"
        B[Controllers]
        C[Services]
    end
    subgraph "AI Layer"
        D[RAG Service]
        E[Agent Runtime]
    end
    subgraph "Data Layer"
        F[PostgreSQL]
        G[pgvector]
        H[Prisma ORM]
    end
    subgraph "External Services"
        I[LLM Provider]
        J[Tool APIs]
    end
    A --> B
    B --> C
    C --> D
    C --> E
    D --> F
    F --> G
    H --> F
    E --> I
    E --> J
    style D fill:#e1f5fe
    style E fill:#f3e5f5
    style F fill:#e8f5e8
```

### Production Deployment Architecture
```mermaid
graph TB
    subgraph "Load Balancer"
        LB[Nginx/ALB]
    end
    subgraph "Application"
        APP1[HazelJS App 1]
        APP2[HazelJS App 2]
        APP3[HazelJS App 3]
    end
    subgraph "Database"
        PG[PostgreSQL]
        REDIS[Redis Cache]
    end
    subgraph "AI Services"
        OPENAI[OpenAI API]
        WEATHER[Weather API]
    end
    LB --> APP1
    LB --> APP2
    LB --> APP3
    APP1 --> PG
    APP1 --> REDIS
    APP2 --> PG
    APP2 --> REDIS
    APP3 --> PG
    APP3 --> REDIS
    APP1 --> OPENAI
    APP1 --> WEATHER
    APP2 --> OPENAI
    APP2 --> WEATHER
    APP3 --> OPENAI
    APP3 --> WEATHER
    style PG fill:#ffebee
    style REDIS fill:#fff3e0
    style OPENAI fill:#e8eaf6
```

## Core Philosophy
- AI-Native — Agent Runtime, Agentic RAG, and persistent memory are built into the framework. AI is part of the design, not a plugin. The AI-Native Template demonstrates this philosophy with pre-configured agents, RAG, and memory systems.
- Developer Experience — A clean, declarative API with decorators: `@Controller`, `@Get`, `@Agent`, `@Tool`, `@SemanticSearch`. Learn the decorator pattern once and apply it everywhere. The AI-Native Template provides ready-to-use examples of all patterns.
- Type Safety — Full TypeScript with compile-time safety and reflection-based dependency injection, so errors are caught at compile time. The template includes proper TypeScript configuration and type definitions for all AI components.
- Modularity — Independent packages: `@hazeljs/core`, `@hazeljs/ai`, `@hazeljs/agent`, `@hazeljs/rag`, `@hazeljs/auth`, and 30+ more. Install only the packages your application needs; the template ships with just those required for AI applications.
- Production Ready — Dependency injection, advanced routing, guards, interceptors, pipes, exception filters, health checks, and testing utilities. The AI-Native Template adds PostgreSQL, Redis, Docker, and monitoring for production deployment.
- Batteries Included — The AI-Native Template provides everything out of the box: database setup, vector storage, agent examples, RAG pipeline, and deployment configuration. No need to wire together disparate libraries.
- Scalable by Design — Built for production with PostgreSQL + pgvector for vector storage, Redis for caching, and horizontal scaling through containerization. The template architecture supports millions of documents and concurrent AI requests.
## HazelJS Architecture
HazelJS uses a modular, decorator-based architecture.
### Modules

Modules group features into cohesive units. Each module declares its controllers, providers, and imports; the framework wires dependencies automatically.
### Dependency Injection

The DI container manages providers with three scopes: singleton (default), transient, and request. Constructor dependencies are resolved automatically using TypeScript reflection metadata.
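The scopes differ only in when the container constructs an instance: singletons are built once and shared, transients are built on every resolve, and request scope behaves like a transient cached for the lifetime of one request. A toy container (not the HazelJS API — the real container resolves constructor parameters via reflection metadata, whereas here the caller registers plain factories) makes the singleton/transient distinction concrete:

```typescript
type Scope = "singleton" | "transient";

// Toy DI container, illustrative only.
class Container {
  private factories = new Map<string, { make: () => unknown; scope: Scope }>();
  private singletons = new Map<string, unknown>();

  register(token: string, make: () => unknown, scope: Scope = "singleton"): void {
    this.factories.set(token, { make, scope });
  }

  resolve<T>(token: string): T {
    const entry = this.factories.get(token);
    if (!entry) throw new Error(`No provider registered for ${token}`);
    // Transient: construct a fresh instance on every resolve.
    if (entry.scope === "transient") return entry.make() as T;
    // Singleton: construct once, reuse for every subsequent resolve.
    if (!this.singletons.has(token)) this.singletons.set(token, entry.make());
    return this.singletons.get(token) as T;
  }
}
```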
### Request Pipeline

Every HTTP request flows through this pipeline, in order:

1. Routing — the router matches the request URL to a controller method
2. Middleware — body parsing, CORS, logging
3. Guards — authorize the request (authentication, roles)
4. Interceptors — wrap handler execution (logging, caching, retry)
5. Pipes — validate and transform inputs (DTOs, parameters)
6. Controller Handler — the controller method executes
7. Exception Filters — catch and format errors
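The ordering can be modeled as nested function wrappers: each stage either passes the request inward or short-circuits with its own response. A minimal sketch under that assumption (these are not the framework's real types):

```typescript
type Req = { path: string; user?: string };
type Res = { status: number; body: unknown };
type Handler = (req: Req) => Res;

// A guard short-circuits with 403 before anything inside it runs.
const withGuard = (next: Handler): Handler => (req) =>
  req.user ? next(req) : { status: 403, body: "Forbidden" };

// An interceptor wraps the handler and can transform its result.
const withLogging = (next: Handler): Handler => (req) => {
  const res = next(req);
  return { ...res, body: { logged: true, data: res.body } };
};

// Compose in pipeline order: the guard runs first and wraps the
// interceptor, which wraps the controller handler.
const handler: Handler = (req) => ({ status: 200, body: req.path });
const pipeline = withGuard(withLogging(handler));
```

Because the guard sits outermost, a rejected request never reaches the interceptor or the handler, which mirrors the ordering in the list above.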
## Key HazelJS Packages

| Package | Purpose | AI-Native Template Usage |
|---|---|---|
| `@hazeljs/core` | Foundation: DI, routing, decorators, guards, interceptors, pipes, exception filters | Controllers, services, dependency injection for all AI components |
| `@hazeljs/ai` | AI integration: OpenAI, Anthropic, Gemini, Cohere, Ollama providers | Chat service, LLM provider configuration, streaming responses |
| `@hazeljs/agent` | Agent Runtime: stateful AI agents with tools, memory, human-in-the-loop | WeatherAgent, TravelAgent, delegation patterns, tool execution |
| `@hazeljs/rag` | Agentic RAG: vector stores, semantic search, hybrid search, memory | PostgreSQL + pgvector integration, document ingestion, semantic search |
| `@hazeljs/auth` | Authentication: JWT, guards, role hierarchy, tenant isolation | API authentication, agent access control |
| `@hazeljs/config` | Type-safe configuration from environment variables | OpenAI API keys, database URLs, Redis configuration |
| `@hazeljs/cache` | Caching: in-memory, Redis, multi-tier | Agent response caching, RAG result caching |
| `@hazeljs/prisma` | Prisma ORM integration for database access | PostgreSQL schema, migrations, vector operations |
| `@hazeljs/flow` | Durable execution graphs: workflows with wait/resume and idempotency | Complex agent workflows, multi-step processes |
| `@hazeljs/guardrails` | Content safety: PII redaction, prompt injection detection | Input validation, output filtering for AI responses |
### Package Dependencies in AI-Native Template

The AI-Native Template includes these core packages by default:

```json
{
  "dependencies": {
    "@hazeljs/core": "^0.4.0",
    "@hazeljs/ai": "^0.4.0",
    "@hazeljs/agent": "^0.4.0",
    "@hazeljs/rag": "^0.4.0",
    "@hazeljs/prisma": "^0.4.0",
    "@hazeljs/config": "^0.4.0",
    "@prisma/client": "^5.0.0",
    "pgvector": "^0.5.0"
  }
}
```
### How They Work Together

- `@hazeljs/core` provides the foundation for controllers and services
- `@hazeljs/ai` integrates with OpenAI for chat and embeddings
- `@hazeljs/agent` manages AI agents with tools and memory
- `@hazeljs/rag` handles vector storage and semantic search
- `@hazeljs/prisma` manages PostgreSQL database operations
- `@hazeljs/config` provides type-safe environment configuration
See Installation for the complete package list.
## When to Use HazelJS

### Use HazelJS When
- Building backend APIs that need AI capabilities alongside traditional REST/GraphQL services
- Building stateful AI agents with tools, memory, and human-in-the-loop approval
- Building RAG-powered search or Q&A applications
- Building multi-agent workflows with delegation, pipelines, or supervisor patterns
- Needing a structured TypeScript backend with dependency injection and modularity
- Wanting one framework for both traditional backend and AI features
### Consider Alternatives When
- Building a frontend-only application (use Next.js, Remix)
- Building a simple serverless function with no shared state (use plain Lambda handlers)
- Needing a Python AI ecosystem (use LangChain, LlamaIndex)
- Building only a static website or simple CRUD with no AI requirements
### Decision Guide: HazelJS Packages

| Need | HazelJS Package |
|---|---|
| REST API with decorators | `@hazeljs/core` |
| LLM completions and streaming | `@hazeljs/ai` |
| Stateful AI agents with tools | `@hazeljs/agent` |
| Document search and RAG | `@hazeljs/rag` |
| Authentication and JWT | `@hazeljs/auth` |
| Background jobs | `@hazeljs/queue` or `@hazeljs/cron` |
| Real-time communication | `@hazeljs/websocket` |
| Durable workflows | `@hazeljs/flow` |
| Content safety and guardrails | `@hazeljs/guardrails` |
| Golden datasets and AI evals (RAG/agents) | `@hazeljs/eval` |
## Next Steps

Pick your path:

- Installation — Install `@hazeljs/cli` and `@hazeljs/core` and run your first HazelJS application
- Concepts & Glossary — Core HazelJS terms, decorators by package, and links to deep dives
- Controllers Guide — Define routes, inject request data, and handle HTTP with HazelJS decorators
- REST API Tutorial — Build a full HazelJS API from scratch, step by step
- AI Package — Integrate OpenAI, Anthropic, Gemini, and other LLM providers
- Agent Package — Build stateful AI agents with tools and memory
- Eval Package — Golden datasets, retrieval metrics, and CI gates for RAG and agents
- API Reference — Complete list of `@hazeljs/core` decorators, types, and APIs
## Related Pages
- Installation — Prerequisites and setup
- Core Package — Foundation package details
- AI Package — LLM provider integration
- Agent Package — Agent Runtime documentation
- Eval Package — Regression testing for AI pipelines
- Concepts & Glossary — Terminology reference