Introducing HCEL: The Most Fluent Way to Build AI Pipelines in TypeScript

HazelJS Team · 4/1/2026

A deep dive into HCEL (HazelJS Composable Expression Language), a fluent DSL for orchestrating prompts, RAG, and AI agents with minimal boilerplate and maximum type safety.

In the rapidly evolving landscape of AI development, orchestration is everything. As developers move from simple LLM calls to complex, multi-step agentic workflows, the need for a clean, expressive, and type-safe way to define these pipelines becomes critical.

Today, we are excited to introduce HCEL (HazelJS Composable Expression Language)—a fluent, TypeScript-native DSL designed to make AI orchestration as intuitive as a standard functional chain.

What is HCEL?

HCEL stands for HazelJS Composable Expression Language. It is not a separate language you need to learn, but a fluent API provided by the @hazeljs/ai package. It allows you to "chain" together different AI capabilities—prompts, RAG searches, agents, and machine learning models—into a single, executable pipeline.

Why HCEL?

Traditional AI pipelines often suffer from "pyramid of doom" callback structures or messy async/await boilerplate for passing state between steps. HCEL solves this by providing:

  1. Fluent Method Chaining: Build complex logic step-by-step.
  2. Implicit Context Passing: The output of operation A automatically becomes the input for operation B.
  3. Observability by Default: Every step in the chain is automatically traced and timed.
  4. Flow Integration: Easily convert any HCEL chain into a durable @hazeljs/flow node.
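The first two points can be sketched in a few lines. This is an illustrative model of the idea, not the actual @hazeljs/ai internals: each step returns `this` to keep the API fluent, and `execute()` threads each step's output into the next step's input.

```typescript
// A minimal sketch of fluent chaining with implicit context passing.
type Step = (input: unknown) => Promise<unknown>;

class MiniChain {
  private steps: Step[] = [];

  // Queue a step; returning `this` is what makes the API fluent.
  pipe(step: Step): this {
    this.steps.push(step);
    return this;
  }

  // Run the steps in order, threading each output into the next input.
  async execute(input?: unknown): Promise<unknown> {
    let current: unknown = input;
    for (const step of this.steps) {
      current = await step(current);
    }
    return current;
  }
}

// Usage: the first step's output implicitly becomes the second step's input.
const demo = new MiniChain()
  .pipe(async (text) => `summary of: ${text}`)
  .pipe(async (summary) => ({ sentiment: 'positive', source: summary as string }));
```

The same shape underlies `.prompt().rag().execute()` chains: the builder accumulates operations, and execution time is when the context actually flows.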

Real-World Examples from Our Codebase

Let's explore actual working examples from our @hazeljs/ai/examples directory that demonstrate HCEL in action.

Example 1: Simple HCEL Chain

From hcel-demo.ts:

// Simple HCEL Chain
const simpleResult = await ai.hazel
  .prompt('What is HazelJS?')
  .execute() as string;

console.log(`Result: ${simpleResult.slice(0, 100)}...`);

What's happening here?

  • We start with ai.hazel - the entry point for HCEL chains
  • .prompt() creates a text generation operation
  • .execute() runs the entire chain and returns the result
  • The as string cast tells TypeScript the chain resolves to text

Example 2: RAG + ML Chain

From hcel-demo.ts:

// RAG + ML Chain
const ragMlResult = await ai.hazel
  .rag('docs')                    // Search documentation
  .ml('sentiment')                // Analyze sentiment
  .execute() as SentimentResult;

console.log(`Sentiment: ${ragMlResult.sentiment} (${ragMlResult.score})`);

What's happening here?

  • .rag('docs') searches your documentation for relevant context
  • The RAG result is automatically passed to the next step
  • .ml('sentiment') runs sentiment analysis on the RAG output
  • The final result includes both sentiment and confidence score

Example 3: Streaming Chain

From hcel-demo.ts:

// Streaming Chain
console.log('Assistant: ');
for await (const chunk of ai.hazel
  .prompt('Tell me a short story about AI and creativity')
  .stream()) {
  process.stdout.write(chunk as string);
}

What's happening here?

  • .stream() instead of .execute() enables real-time streaming
  • Perfect for chat interfaces or long-running generations
  • Each chunk is processed as it arrives from the AI provider
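Conceptually, `.stream()` behaves like an async generator consumed with for await. The sketch below models that shape; `fakeStream` is a stand-in for a real provider, not part of @hazeljs/ai.

```typescript
// A stand-in "provider" that yields chunks one at a time.
async function* fakeStream(text: string): AsyncGenerator<string> {
  for (const word of text.split(' ')) {
    yield `${word} `; // a real provider would yield tokens as they arrive
  }
}

// Consume the stream chunk-by-chunk, as a chat UI would.
async function consume(): Promise<string> {
  let assembled = '';
  for await (const chunk of fakeStream('Once upon a time')) {
    assembled += chunk; // in a chat UI you would render each chunk immediately
  }
  return assembled;
}
```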

Example 4: Chain with Context & Observability

From hcel-demo.ts:

const contextChain = ai.hazel
  .prompt('Analyze this user feedback: {feedback}')
  .ml('sentiment')
  .context({ userId: 'user-123', sessionId: 'session-456' })
  .observe((event) => {
    console.log(`📡 Event: ${event.type} at ${new Date(event.timestamp).toISOString()}`);
  });

const contextResult = await contextChain.execute() as SentimentResult;

What's happening here?

  • .context() adds metadata that flows through the entire chain
  • .observe() hooks into every operation for logging and monitoring
  • Perfect for debugging and production observability
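Under the hood this is the classic observer pattern. A minimal sketch, with illustrative names rather than the actual @hazeljs/ai types: the chain keeps a list of callbacks and emits an event object as each operation runs.

```typescript
// Event shape matching what the .observe() callback above receives.
interface ChainEvent {
  type: string;
  timestamp: number;
}

class ObservableChain {
  private observers: Array<(event: ChainEvent) => void> = [];

  // Register a callback; returning `this` keeps the API chainable.
  observe(callback: (event: ChainEvent) => void): this {
    this.observers.push(callback);
    return this;
  }

  // Called internally as each operation starts or finishes.
  protected emit(type: string): void {
    const event: ChainEvent = { type, timestamp: Date.now() };
    for (const callback of this.observers) callback(event);
  }

  // Simulate a chain run that emits two events.
  runDemo(): void {
    this.emit('prompt:start');
    this.emit('prompt:end');
  }
}
```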

Production-Ready Features

HCEL isn't just for demos—it's built for production workloads. Let's look at our production example:

Example 5: Persistent HCEL Chains

From hcel-production-demo.ts:

// Production HazelAI with Persistence Configuration
const ai = HazelAI.create({
  defaultProvider: 'openai',
  model: 'gpt-4o',
  temperature: 0.7,
  
  // Production persistence configuration
  persistence: {
    memory: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 3600, // 1 hour
    },
    rag: {
      vectorStore: 'in-memory', // Change to 'pinecone', 'qdrant', etc. for production
      options: {
        topK: 5,
        chunkSize: 1000,
        chunkOverlap: 200,
      }
    },
    chains: {
      store: 'in-memory', // Change to 'postgres' or 'redis' for production
      ttl: 7200, // 2 hours
    }
  }
});

// Create a persistent analysis chain
const analysisChain = ai.hazel
  .prompt('Analyze user feedback: This product is amazing! It works perfectly.')
  .persist('user-feedback-analysis') // Persist this chain
  .cache(1800); // Cache results for 30 minutes

const result = await analysisChain.execute();

Production Features:

  • Persistence: Chain state is automatically saved to Redis/Postgres
  • Caching: Results are cached to avoid redundant API calls
  • Configuration: Production-ready settings for stores and TTLs
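The caching idea behind `.cache(ttl)` can be sketched as a TTL-keyed store: results are saved with an expiry timestamp and reused until the TTL (in seconds) elapses. The names below are hypothetical, not the @hazeljs/ai implementation.

```typescript
interface CacheEntry {
  value: unknown;
  expiresAt: number; // epoch milliseconds
}

class TtlCache {
  private entries = new Map<string, CacheEntry>();

  set(key: string, value: unknown, ttlSeconds: number): void {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  get(key: string): unknown | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
}
```

On a cache hit the chain can return immediately instead of re-calling the provider, which is what makes `.cache(1800)` save redundant API spend.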

Example 6: Parallel Operations

From hcel-demo.ts:

// Parallel Operations
const parallelChain = ai.hazel
  .parallel(
    ai.hazel.prompt('Summarize: "AI is transforming the world"'),
    ai.hazel.ml('sentiment', { labels: ['positive', 'negative', 'neutral'] })
  );

const parallelResult = await parallelChain.execute();

What's happening here?

  • .parallel() executes multiple operations simultaneously
  • Perfect for independent tasks that can run concurrently
  • Results are collected and returned as an array
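A reasonable mental model for `.parallel()` is Promise.all over independently executable tasks, with results collected in call order. The two stand-in operations below are illustrative, not real @hazeljs/ai calls.

```typescript
// Run independent tasks concurrently; results come back in call order.
async function runParallel(
  ...tasks: Array<() => Promise<unknown>>
): Promise<unknown[]> {
  return Promise.all(tasks.map((task) => task()));
}

// Stand-ins for a summarization prompt and a sentiment model.
const summarize = async () => 'AI is transforming the world (summary)';
const sentiment = async () => ({ label: 'positive', score: 0.98 });
```

Because the tasks share no context, total latency is roughly the slowest task rather than the sum of all of them.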

Advanced Orchestration Patterns

Example 7: Flow Engine Integration

From hcel-flow-demo.ts:

// HCEL-Flow Bridge - Wraps HCEL chains as Flow Engine nodes
class HCELFlowNode {
  constructor(private chain: any) {}

  async execute(input: unknown): Promise<NodeResult> {
    try {
      const result = await this.chain.execute(input);
      return {
        status: 'ok',
        output: result
      };
    } catch (error) {
      return {
        status: 'error',
        reason: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }
}

// Convert HCEL chain to Flow node
const hcelNode = new HCELFlowNode(
  ai.hazel
    .prompt('Process user request: {{input}}')
    .rag('knowledge-base')
    .agent('support-specialist')
);

What's happening here?

  • HCEL chains can be wrapped as Flow Engine nodes
  • Enables durable, long-running workflows with AI steps
  • Perfect for human-in-the-loop processes and complex business logic

How to Run These Examples

All examples are available in our packages/ai/examples directory:

Setup

# From the packages/ai directory
npm run build

# Set your API key
export OPENAI_API_KEY=your-key-here

Run the Examples

# Basic HCEL demo
node dist/examples/hcel-demo.js

# Production features demo
node dist/examples/hcel-production-demo.js

# Flow integration demo
node dist/examples/hcel-flow-demo.js

What Each Example Demonstrates

  1. hcel-demo.ts - Basic HCEL operations, streaming, parallel execution
  2. hcel-production-demo.ts - Persistence, caching, memory management
  3. hcel-flow-demo.ts - Integration with HazelJS Flow Engine
  4. simple-demo.ts - Works without API keys for testing
  5. unified-platform-example.ts - Complete platform showcase

The Power of Implicit Context

The core magic of HCEL is the Implicit Context. Every step in the builder returns a new chain state with a pipe attached; the pipe transforms the output of operation N into the input of operation N+1.

Explicit vs Implicit Context

// ❌ Traditional approach - explicit context passing
const summary = await ai.chat('Summarize: ' + userQuery);
const context = await ai.rag('docs', summary);
const analysis = await ai.agent('analyst', context);

// ✅ HCEL approach - implicit context passing
const result = await ai.hazel
  .prompt('Summarize: {{input}}')
  .rag('docs')
  .agent('analyst')
  .execute(userQuery);
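The `{{input}}` placeholder is where the substitution happens: before a prompt is sent, the previous step's output is injected into the template. A minimal sketch of that substitution (mirroring the idea, not the actual @hazeljs/ai implementation):

```typescript
// Replace every {{input}} placeholder with the previous step's output.
function injectInput(template: string, input: string): string {
  return template.replace(/\{\{input\}\}/g, input);
}
```

For example, `injectInput('Summarize: {{input}}', userQuery)` yields the final prompt text that actually reaches the model.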

Context Transformation

// Each step transforms the context
const pipeline = ai.hazel
  .prompt('Extract key topics: {{input}}')           // string → string
  .ml('classify', { labels: ['tech', 'business', 'other'] }) // string → ClassificationResult
  .conditional((result) => result.label === 'tech')   // ClassificationResult → boolean
    .prompt('Explain this tech topic: {{input}}')     // ClassificationResult → string
  .execute('AI is revolutionizing software development');

Type Safety and IntelliSense

Because HCEL is built with TypeScript, you get full type safety and IDE support:

// Narrow the return type to match the last operation
const sentimentResult = await ai.hazel
  .prompt('Analyze: {{input}}')
  .ml('sentiment')
  .execute() as SentimentResult; // explicit cast narrows the result type

// Get full chain summary with types
const summary = ai.hazel
  .prompt('Test')
  .ml('sentiment')
  .getSummary();

console.log(summary.operations); // ['prompt', 'ml']
console.log(summary.config);     // Chain configuration

Getting Started with HCEL

Installation

npm install @hazeljs/ai @hazeljs/core

Basic Usage

import { HazelAI } from '@hazeljs/ai';

const ai = HazelAI.create({
  defaultProvider: 'openai',
  model: 'gpt-4o'
});

// Your first HCEL chain
const result = await ai.hazel
  .prompt('What is the future of AI?')
  .execute();

Advanced Usage

// Production-ready chain
const pipeline = ai.hazel
  .persist('user-analysis')
  .prompt('Analyze user feedback: {{input}}')
  .rag('product-docs')
  .ml('sentiment')
  .cache(3600)
  .observe((event) => console.log(event));

What's Next?

HCEL is just the beginning. We're working on:

  1. More ML Operations: Classification, extraction, translation
  2. Advanced Flow Patterns: Conditional branching, loops, retries
  3. Enhanced Observability: OpenTelemetry integration, custom metrics
  4. Visual Builder: Web-based HCEL chain designer
  5. Template Library: Pre-built chains for common use cases


Conclusion

HCEL represents a fundamental shift in how we think about AI orchestration. By providing a fluent, type-safe, and production-ready API, we're making sophisticated AI workflows accessible to every TypeScript developer.

Whether you're building simple chatbots or complex multi-agent systems, HCEL provides the tools you need to compose, observe, and scale your AI operations with confidence.

Try HCEL today and experience the future of AI orchestration in TypeScript! 🚀


A Closer Look at the Implicit Context

Two mechanisms make the implicit context work:

  • Template Injection: The {{input}} placeholder in .prompt() tells HCEL exactly where to inject the previous step's result.

  • Auto-Injection: Operations like .rag() or .agent() automatically use the current context as their search query or instruction if no specific input is provided.

This allows you to focus on the logic, not the data-shuffling boilerplate.


Real-World Case Study: Automated Support Triage

Here is a full implementation of an automated support triage system built with HCEL:

@Service()
export class SupportTriage {
  constructor(private readonly ai: AIEnhancedService) {}

  async handleTicket(ticketText: string) {
    return this.ai.hazel
      .persist('support-triage')
      .ml('sentiment')
      .parallel(
        this.ai.hazel.ml('classify', { categories: ['billing', 'technical', 'sales'] }),
        this.ai.hazel.prompt('Extract product names from: {{input}}')
      )
      .conditional((context) => context.sentiment === 'negative' && context.classification === 'technical')
        .agent('SeniorTechnicalSupport')
      .conditional((context) => context.classification === 'billing')
        .rag('billing-faq')
        .prompt('Answer billing query: {{input}}')
      .execute(ticketText);
  }
}

This single method replaces what would traditionally be dozens of lines of nested if statements, manual RAG lookups, and complex state management.


Try it Today

HCEL is available now in @hazeljs/ai v0.7.0+.

Whether you are building a simple chat bot or a massively parallel research engine, HCEL provides the expressive power you need without the boilerplate.

Happy coding with HazelJS!