HCEL (HazelJS Composable Expression Language)
HCEL is a fluent, TypeScript-native domain-specific language (DSL) for composing and executing complex AI operations in HazelJS. It provides a unified interface for linking prompts, RAG (Retrieval-Augmented Generation), agents, and machine learning tasks into a single executable chain.
Core Concepts
HCEL allows you to build "chains" of AI operations where the output of one operation becomes the input of the next. It handles state management, observability, and error propagation automatically.
- Chain: A sequence of AI operations executed in order.
- Operation: A single unit of work (e.g., a prompt, a RAG search, or an agent call).
- Execution Engine: The runtime that processes the chain and handles lifecycle events.
- Fluent API: A method-chaining interface for building chains programmatically.
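The core idea — each operation's output becoming the next operation's input — can be pictured with a toy builder. This is an illustrative sketch only, not the actual HazelJS execution engine; `ToyChain` and `pipe` are hypothetical names invented for the example.

```typescript
// Toy illustration of HCEL's core idea: each operation transforms the
// previous operation's output. NOT the real HazelJS engine.
type Operation = (input: string) => Promise<string>;

class ToyChain {
  private ops: Operation[] = [];

  pipe(op: Operation): this {
    this.ops.push(op);
    return this;
  }

  async execute(input: string): Promise<string> {
    let value = input;
    for (const op of this.ops) {
      value = await op(value); // output of one op becomes input of the next
    }
    return value;
  }
}

// Usage: two fake "operations" chained fluently.
new ToyChain()
  .pipe(async (s) => s.toUpperCase())
  .pipe(async (s) => `[${s}]`)
  .execute('hello')
  .then(console.log); // [HELLO]
```

The real engine layers state management, observability, and error propagation on top of this basic piping pattern.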
Getting Started
To use HCEL, access the hazel builder through a HazelAI instance.
```typescript
// File: src/ai/research.service.ts
import { Service } from '@hazeljs/core';
import { HazelAI } from '@hazeljs/ai';

@Service()
export class MyAIService {
  private readonly ai = HazelAI.create({
    defaultProvider: 'openai',
    providers: {
      openai: { apiKey: process.env.OPENAI_API_KEY || '' },
    },
  });

  async processResearch(query: string) {
    const result = await this.ai.hazel
      .prompt('Summarize the key points from this query: {{input}}')
      .rag('corporate-kb')
      .agent('ResearchAgent')
      .execute(query);
    return result;
  }
}
```
Core Operations
.prompt(template, options?)
Executes a text completion using the specified template. The current chain output is injected into the {{input}} placeholder.
```typescript
.prompt('Rewrite the following text in a professional tone: {{input}}', {
  model: 'gpt-4',
  temperature: 0.3,
})
```
.rag(source, options?)
Performs a Retrieval-Augmented Generation operation against a specified source (vector store or document collection).
```typescript
.rag('documentation-v2', {
  limit: 5,
  minScore: 0.7,
})
```
.agent(name, options?)
Delegates task execution to a named HazelJS Agent.
```typescript
.agent('AnalysisAgent', {
  maxIterations: 5,
  includeTrace: true,
})
```
.agentPipeline(config)
Runs a sequential multi-agent pipeline using compiled graph execution.
```typescript
const graphResult = await ai.hazel
  .agentPipeline({
    pipelineId: 'support-pipeline',
    agents: ['triage-agent', 'resolver-agent'],
  })
  .execute('Customer cannot complete checkout');
```
.agentSupervisor(config)
Runs a supervisor agent that routes subtasks to worker agents.
```typescript
const supervisorResult = await ai.hazel
  .context({ sessionId: 's1', userId: 'u1' })
  .agentSupervisor({
    name: 'support-supervisor',
    workers: ['billing-agent', 'shipping-agent'],
    maxRounds: 4,
  })
  .execute('Issue: refund and shipping update');
```
.agentGraphCompiled(graphId, compiled, graphOptions?)
Executes a pre-compiled AgentGraph from HazelAI.createAgentGraph(...).
```typescript
const graph = await ai.createAgentGraph('custom-support');

const compiled = graph
  .addNode('triage', { type: 'agent', agentName: 'triage-agent' })
  .addEdge('triage', '__end__')
  .setEntryPoint('triage')
  .compile();

const result = await ai.hazel
  .agentGraphCompiled('custom-support', compiled, { maxSteps: 20 })
  .execute('User asks for priority escalation');
```
.memory(service)
Attaches a @hazeljs/memory service to the current chain context.
```typescript
import { createMemoryStore, MemoryService } from '@hazeljs/memory';

const memoryStore = createMemoryStore({ type: 'in-memory' });
const memory = new MemoryService(memoryStore);
await memory.initialize();

await ai.hazel
  .memory(memory)
  .context({ userId: 'u1', sessionId: 's1' })
  .prompt('Hello {{input}}')
  .execute('world');
```
.memoryRecall(config), .memorySave(config), .memorySearch(config?)
Read from / write to user memory directly inside HCEL.
```typescript
import { MemoryCategory } from '@hazeljs/memory';

const response = await ai.hazel
  .memory(memory)
  .context({ userId: 'u1', sessionId: 's1' })
  .memoryRecall({ category: [MemoryCategory.PREFERENCE, MemoryCategory.EPISODIC], limit: 8 })
  // Empty template forwards recall-enriched input while setting system behavior
  .prompt('', { systemPrompt: 'Be concise and use recalled context when relevant.' })
  .agent('support-agent')
  .memorySave({ category: MemoryCategory.SEMANTIC_SUMMARY, key: 'last-reply' })
  .execute('Recommend shipping options for my usual preferences');
```
.ml(operation, options?)
Executes built-in machine learning operations like sentiment analysis, classification, or scoring.
```typescript
.ml('sentiment') // Detects sentiment of the current input
.ml('classify', { categories: ['billing', 'technical', 'sales'] })
```
Control Flow
.parallel(...builders)
Executes multiple HCEL chains in parallel and merges their results.
```typescript
const result = await ai.hazel
  .parallel(
    ai.hazel.prompt('Tone analysis: {{input}}'),
    ai.hazel.prompt('Key entity extraction: {{input}}'),
  )
  .execute(text);
```
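Conceptually, parallel execution hands the same input to every branch and merges the settled results, much like Promise.all over independent chains. A minimal self-contained sketch of that pattern (`runParallel` is a hypothetical name, not the HazelJS engine):

```typescript
// Conceptual sketch of parallel branch execution (not the real HCEL engine):
// each branch receives the same input; results are merged into an array.
type Branch = (input: string) => Promise<string>;

async function runParallel(input: string, ...branches: Branch[]): Promise<string[]> {
  return Promise.all(branches.map((branch) => branch(input)));
}

runParallel(
  'some text',
  async (s) => `tone(${s})`,
  async (s) => `entities(${s})`,
).then(console.log); // [ 'tone(some text)', 'entities(some text)' ]
```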
.conditional(condition)
Gates the next operation: it runs only when the condition, evaluated against the current chain output, is met.
```typescript
.prompt('Check for urgency: {{input}}')
.conditional((output) => output.includes('URGENT'))
.agent('EscalationAgent')
```
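The gating behavior can be sketched as a higher-order function. This is a conceptual illustration, not the HazelJS source; it also assumes that an unmet condition skips the gated operation and passes the current output through unchanged, which the example above suggests but the docs do not state explicitly.

```typescript
// Conceptual sketch of conditional gating (not the real engine). ASSUMPTION:
// an unmet condition skips the gated operation and forwards the output as-is.
type Op = (input: string) => Promise<string>;

function gate(predicate: (output: string) => boolean, op: Op): Op {
  return async (output) => (predicate(output) ? op(output) : output);
}

const escalate = gate(
  (out) => out.includes('URGENT'),
  async () => 'escalated to EscalationAgent',
);

escalate('URGENT: server down').then(console.log); // escalated to EscalationAgent
escalate('routine question').then(console.log);    // routine question
```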
.adaptive()
Enables adaptive execution where the engine optimizes the chain based on latency, cost, and historical performance.
```typescript
.prompt('Analyze data')
.adaptive()
.execute(data);
```
Persistence & Caching
.persist(key?)
Enables state persistence for the chain, allowing it to be resumed or audited later.
```typescript
.persist('research-task-123')
```
.cache(ttl?)
Caches the result of the entire chain to reduce cost and latency for identical inputs.
```typescript
.cache(3600) // Cache for 1 hour
```
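The caching semantics can be pictured as memoizing the whole chain by input with a time-to-live. A rough sketch of that assumed behavior (`withCache` is a hypothetical helper, not the HazelJS implementation, which may key on more than the raw input string):

```typescript
// Rough sketch of TTL caching keyed by input (assumed semantics).
type Runner = (input: string) => Promise<string>;

function withCache(run: Runner, ttlSeconds: number): Runner {
  const cache = new Map<string, { value: string; expiresAt: number }>();
  return async (input) => {
    const hit = cache.get(input);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit: skip the chain
    const value = await run(input);
    cache.set(input, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
    return value;
  };
}

const cachedUpper = withCache(async (s) => s.toUpperCase(), 3600);
cachedUpper('hello').then(console.log); // HELLO
```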
Execution & Observation
Execution
Use .execute(input) to run the chain and return the final output.
```typescript
const output = await builder.execute('My input');
```
Streaming
Use .stream(input) to return an AsyncGenerator for real-time output (useful for chat UIs).
```typescript
for await (const chunk of builder.stream('Hello')) {
  console.log(chunk);
}
```
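For chat UIs you typically render each chunk as it arrives while also accumulating the full response. A self-contained sketch of that consumption pattern, where `fakeStream` is a stand-in generator (not the real `.stream()`):

```typescript
// Self-contained sketch: consuming an AsyncGenerator chunk by chunk while
// accumulating the full response. fakeStream stands in for .stream().
async function* fakeStream(text: string): AsyncGenerator<string> {
  for (const word of text.split(' ')) yield word + ' ';
}

async function render(stream: AsyncGenerator<string>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    full += chunk;               // accumulate for the final message
    process.stdout.write(chunk); // render incrementally
  }
  return full.trim();
}

render(fakeStream('streamed hello world')).then((full) => console.log('\nfinal:', full));
```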
Observation
Register observers to monitor the chain's execution lifecycle.
```typescript
builder.observe((event) => {
  console.log(`[${event.type}] Chain: ${event.chainId} - Operation: ${event.operationId}`);
});
```
Flow Integration
HCEL chains can be converted directly into HazelJS Flow nodes for inclusion in durable workflows.
```typescript
const node = ai.hazel
  .prompt('Plan travel for: {{input}}')
  .agent('BookingAgent')
  .asFlowNode();

// Use 'node' in a @Flow definition
```
Advanced Composition
You can compose multiple builders together to create modular, reusable AI pipelines.
```typescript
import { compose } from '@hazeljs/ai';

const analyzer = ai.hazel.ml('sentiment').prompt('Explain why sentiment is {{input}}');
const summarizer = ai.hazel.prompt('Summarize: {{input}}');

const fullPipeline = compose(analyzer, summarizer);
await fullPipeline.execute(text);
```
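Conceptually, composition runs the builders in sequence, feeding each one's final output into the next. A hedged sketch of that assumed behavior (`composeChains` is an invented name; it is not the `@hazeljs/ai` source):

```typescript
// Conceptual sketch of compose (assumed semantics): sequential piping,
// where each chain's output becomes the next chain's input.
type Chain = (input: string) => Promise<string>;

function composeChains(...chains: Chain[]): Chain {
  return async (input) => {
    let value = input;
    for (const chain of chains) value = await chain(value);
    return value;
  };
}

const analyze = async (s: string) => `analysis(${s})`;
const summarize = async (s: string) => `summary(${s})`;
composeChains(analyze, summarize)('report').then(console.log); // summary(analysis(report))
```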
Related Pages
- AI Package — Core AI integration
- Agent Package — Stateful AI agents
- Eval Package — Golden datasets and regression tests for chained AI behavior
- Memory Package — Unified user memory primitives (@hazeljs/memory)
- RAG Package — Retrieval strategies
- Flow Package — Durable workflows