HCEL (HazelJS Composable Expression Language)

HCEL is a fluent, TypeScript-native domain-specific language (DSL) for composing and executing complex AI operations in HazelJS. It provides a unified interface for linking prompts, RAG (Retrieval-Augmented Generation), agents, and machine learning tasks into a single executable chain.

Core Concepts

HCEL allows you to build "chains" of AI operations where the output of one operation becomes the input of the next. It handles state management, observability, and error propagation automatically.

  • Chain: A sequence of AI operations executed in order.
  • Operation: A single unit of work (e.g., a prompt, a RAG search, or an agent call).
  • Execution Engine: The runtime that processes the chain and handles lifecycle events.
  • Fluent API: A method-chaining interface for building chains programmatically.
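The fluent-chain pattern described above can be sketched in plain TypeScript. This is a hypothetical, simplified illustration, not HCEL's actual implementation: each queued operation transforms the current value, and execute() runs them in order.

```typescript
// Minimal sketch of a fluent chain: each operation maps the current
// output to the next input, and execute() runs the queue in order.
type Operation = (input: string) => Promise<string>;

class MiniChain {
  private ops: Operation[] = [];

  // Queue an operation; returning `this` enables method chaining.
  add(op: Operation): this {
    this.ops.push(op);
    return this;
  }

  // Run every queued operation, feeding each output into the next.
  async execute(input: string): Promise<string> {
    let current = input;
    for (const op of this.ops) {
      current = await op(current);
    }
    return current;
  }
}

// Usage: two toy "operations" standing in for .prompt() / .agent().
const chain = new MiniChain()
  .add(async (s) => s.toUpperCase())
  .add(async (s) => `[result] ${s}`);

chain.execute('hello').then((out) => console.log(out)); // prints "[result] HELLO"
```

The same shape underlies the real builder: the chain is only described until .execute() is called.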

Getting Started

To use HCEL, access the hazel builder through the AIEnhancedService (or HazelAI instance).

// File: src/ai/research.service.ts
import { Service } from '@hazeljs/core';
import { AIEnhancedService } from '@hazeljs/ai';

@Service()
export class ResearchService {
  constructor(private readonly ai: AIEnhancedService) {}

  async processResearch(query: string) {
    const result = await this.ai.hazel
      .prompt('Summarize the key points from this query: {{input}}')
      .rag('corporate-kb')
      .agent('ResearchAgent')
      .execute(query);

    return result;
  }
}

Core Operations

.prompt(template, options?)

Executes a text completion using the specified template. The current chain output is injected into the {{input}} placeholder.

.prompt('Rewrite the following text in a professional tone: {{input}}', {
  model: 'gpt-4',
  temperature: 0.3
})

.rag(source, options?)

Performs a Retrieval-Augmented Generation operation against a specified source (vector store or document collection).

.rag('documentation-v2', {
  limit: 5,
  minScore: 0.7
})

.agent(name, options?)

Delegates task execution to a named HazelJS Agent.

.agent('AnalysisAgent', {
  maxIterations: 5,
  includeTrace: true
})

.ml(operation, options?)

Executes built-in machine learning operations like sentiment analysis, classification, or scoring.

.ml('sentiment') // Detects sentiment of the current input
.ml('classify', { categories: ['billing', 'technical', 'sales'] })

Control Flow

.parallel(...builders)

Executes multiple HCEL chains in parallel and merges their results.

const result = await ai.hazel
  .parallel(
    ai.hazel.prompt('Tone analysis: {{input}}'),
    ai.hazel.prompt('Key entity extraction: {{input}}')
  )
  .execute(text);
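The parallel pattern can be sketched with Promise.all. This is an illustrative assumption about the mechanics, not HCEL's internals; the exact merged result shape is not specified here.

```typescript
// Sketch of parallel execution: run independent branches concurrently
// over the same input and collect their outputs.
async function runParallel(
  input: string,
  branches: Array<(s: string) => Promise<string>>
): Promise<string[]> {
  // Promise.all starts every branch immediately and resolves once all finish.
  return Promise.all(branches.map((branch) => branch(input)));
}

// Usage: two toy branches standing in for the prompt builders above.
runParallel('some text', [
  async (s) => `tone:${s.length}`,
  async (s) => `entities:${s.split(' ').length}`,
]).then((results) => console.log(results)); // → [ 'tone:9', 'entities:2' ]
```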

.conditional(condition)

Gates the chain: the operation that follows runs only if the condition, evaluated against the current output, returns true.

.prompt('Check for urgency: {{input}}')
.conditional((output) => output.includes('URGENT'))
.agent('EscalationAgent')

.adaptive()

Enables adaptive execution where the engine optimizes the chain based on latency, cost, and historical performance.

.prompt('Analyze data')
.adaptive()
.execute(data);

Persistence & Caching

.persist(key?)

Enables state persistence for the chain, allowing it to be resumed or audited later.

.persist('research-task-123')

.cache(ttl?)

Caches the result of the entire chain to reduce cost and latency for identical inputs.

.cache(3600) // Cache for 1 hour
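What .cache(ttl) might do internally can be approximated with a TTL map keyed by chain input. This is an illustrative assumption; the real implementation may hash inputs and account for the chain definition as well.

```typescript
// Illustrative TTL cache keyed by chain input: identical inputs within
// the TTL window hit the cache instead of re-running the chain.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlSeconds: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) {
      this.store.delete(key); // evict stale entries lazily on read
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlSeconds * 1000 });
  }
}

// Usage: a second call with the same input would return the cached result.
const resultCache = new TtlCache<string>(3600);
resultCache.set('My input', 'cached result');
console.log(resultCache.get('My input')); // prints "cached result"
```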

Execution & Observation

Execution

Call .execute(input) to run the chain and obtain the final output.

const output = await builder.execute('My input');

Streaming

Use .stream(input) to run the chain and consume its output incrementally via an AsyncGenerator (useful for chat UIs).

for await (const chunk of builder.stream('Hello')) {
  console.log(chunk);
}

Observation

Register observers to monitor the chain's execution lifecycle.

builder.observe((event) => {
  console.log(`[${event.type}] Chain: ${event.chainId} - Operation: ${event.operationId}`);
});
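The observer mechanism can be sketched as a simple registry. The event shape below is inferred from the handler above; the real HCEL event type, and its `type` values, are assumptions here.

```typescript
// Hypothetical event shape inferred from the observer handler above.
interface ChainEvent {
  type: string;        // lifecycle marker, e.g. a start/end signal (assumed)
  chainId: string;
  operationId?: string;
}

// Minimal dispatcher showing how registered observers receive events.
class ObserverRegistry {
  private observers: Array<(event: ChainEvent) => void> = [];

  observe(fn: (event: ChainEvent) => void): void {
    this.observers.push(fn);
  }

  emit(event: ChainEvent): void {
    for (const fn of this.observers) fn(event);
  }
}

// Usage: record event types as a chain "runs".
const registry = new ObserverRegistry();
const seen: string[] = [];
registry.observe((e) => seen.push(e.type));
registry.emit({ type: 'chain:start', chainId: 'c1' });
registry.emit({ type: 'operation:end', chainId: 'c1', operationId: 'op1' });
console.log(seen); // prints [ 'chain:start', 'operation:end' ]
```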

Flow Integration

HCEL chains can be converted directly into HazelJS Flow nodes for inclusion in durable workflows.

const node = ai.hazel
  .prompt('Plan travel for: {{input}}')
  .agent('BookingAgent')
  .asFlowNode();

// Use 'node' in a @Flow definition

Advanced Composition

You can compose multiple builders together to create modular, reusable AI pipelines.

import { compose } from '@hazeljs/ai';

const analyzer = ai.hazel.ml('sentiment').prompt('Explain why sentiment is {{input}}');
const summarizer = ai.hazel.prompt('Summarize: {{input}}');

const fullPipeline = compose(analyzer, summarizer);
await fullPipeline.execute(text);