ML Package


The @hazeljs/ml package provides machine learning and model management capabilities for HazelJS applications. It includes a model registry, decorator-based training and prediction APIs, batch inference, and metrics tracking.

Purpose

Building ML-powered applications requires model registration, training pipelines, inference services, and evaluation metrics. The @hazeljs/ml package simplifies this by providing:

  • Model Registry – Register and discover models by name and version
  • Decorator-Based API – @Model, @Train, @Predict for declarative ML classes
  • Training Pipeline – PipelineService for data preprocessing (normalize, filter)
  • Inference – PredictorService for single and batch predictions
  • Metrics – MetricsService for evaluation, A/B testing, and monitoring
  • Framework-Agnostic – Works with TensorFlow.js, ONNX, Transformers.js, or custom backends

Architecture

The package uses a registry-based architecture with decorator-driven model registration:

graph TD
  A["MLModule.forRoot()<br/>(Model Registration)"] --> B["MLModelBootstrap<br/>(Discovers @Train, @Predict)"]
  B --> C["ModelRegistry<br/>(Name/Version Lookup)"]
  
  D["@Model Decorator<br/>(Metadata)"] --> E["@Train / @Predict<br/>(Method Discovery)"]
  E --> B
  
  C --> F["TrainerService<br/>(Training)"]
  C --> G["PredictorService<br/>(Inference)"]
  C --> H["BatchService<br/>(Batch Predictions)"]
  C --> I["MetricsService<br/>(Evaluation)"]
  
  G --> J["Single / Batch Prediction"]
  F --> K["Training Pipeline"]
  
  style A fill:#3b82f6,stroke:#60a5fa,stroke-width:2px,color:#fff
  style B fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff
  style C fill:#10b981,stroke:#34d399,stroke-width:2px,color:#fff
  style D fill:#8b5cf6,stroke:#a78bfa,stroke-width:2px,color:#fff

Key Components

  1. MLModule – Registers ModelRegistry, TrainerService, PredictorService, BatchService, MetricsService
  2. ModelRegistry – Stores and retrieves models by name and version
  3. TrainerService – Discovers and invokes @Train methods
  4. PredictorService – Discovers and invokes @Predict methods
  5. PipelineService – Data preprocessing for training
  6. MetricsService – Model evaluation and metrics tracking
  7. Decorators – @Model, @Train, @Predict for declarative ML

ML Decorators

Three decorators define an ML model and how it is trained and used. The registry and services discover them via reflection—no manual wiring.

@Model (class)

Attaches registry metadata so the model can be registered and looked up by name and version.

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Unique model id (e.g. sentiment-classifier) |
| version | string | Yes | Semver (e.g. 1.0.0) |
| framework | string | Yes | tensorflow, onnx, or custom |
| description | string | No | Human-readable description |
| tags | string[] | No | Tags for filtering (default: []) |

Use one @Model per class and add @Injectable() so the app can construct the model.
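For illustration, the table above maps to an options object shaped roughly like the following. The interface name ModelOptions is an assumption for this sketch, not a confirmed export of the package:

```typescript
// Sketch of the options object implied by the @Model table above.
// 'ModelOptions' is an assumed name for illustration only.
interface ModelOptions {
  name: string;                                 // unique model id
  version: string;                              // semver string
  framework: 'tensorflow' | 'onnx' | 'custom';  // backend hint
  description?: string;                         // optional human-readable text
  tags?: string[];                              // optional, defaults to []
}

const sentimentMeta: ModelOptions = {
  name: 'sentiment-classifier',
  version: '1.0.0',
  framework: 'custom',
  description: 'Bag-of-words sentiment model',
  tags: ['nlp', 'classification'],
};
```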

@Train (method)

Marks the single method that trains the model. TrainerService.train(modelName, data) invokes it.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| pipeline | string | default | Name of a registered PipelineService pipeline to run before training |
| batchSize | number | 32 | Hint for batching (optional) |
| epochs | number | 10 | Hint for epochs (optional) |

Exactly one @Train() method per model; it receives training data and can return TrainingResult (e.g. accuracy, loss).

@Predict (method)

Marks the single method that runs inference. PredictorService.predict(modelName, input) invokes it.

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| batch | boolean | false | Hint that the method supports batch input |
| endpoint | string | /predict | Hint for route naming |

Exactly one @Predict() method per model; it receives one input and returns a prediction object (e.g. { sentiment, confidence }).

Rules

  • One model class = one @Model, one @Train method, one @Predict method.
  • Order: Apply @Model on the class, then @Train and @Predict on the methods. Use @Injectable() from @hazeljs/core.
  • Discovery: When you pass model classes to MLModule.forRoot({ models: [...] }), the bootstrap finds the decorated methods and registers the model.
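Conceptually, this discovery step amounts to decorators recording which method they were applied to, and a bootstrap phase reading that record back. The sketch below illustrates the mechanism with plain metadata maps and manually applied decorators; it is not the package's actual internals, and all names here are hypothetical:

```typescript
// Hypothetical sketch of decorator-based method discovery (not @hazeljs/ml internals).
// Each decorator records its target method name; a bootstrap step reads it back.
const trainMethods = new Map<Function, string>();
const predictMethods = new Map<Function, string>();

function Train(): MethodDecorator {
  return (target, propertyKey) => {
    trainMethods.set(target.constructor, String(propertyKey));
  };
}

function Predict(): MethodDecorator {
  return (target, propertyKey) => {
    predictMethods.set(target.constructor, String(propertyKey));
  };
}

class Demo {
  train() { return 'trained'; }
  predict() { return 'predicted'; }
}

// Without compiler decorator support, apply the decorators manually:
Train()(Demo.prototype, 'train', Object.getOwnPropertyDescriptor(Demo.prototype, 'train')!);
Predict()(Demo.prototype, 'predict', Object.getOwnPropertyDescriptor(Demo.prototype, 'predict')!);

// "Bootstrap": look up the recorded method name and invoke it on an instance.
const instance = new Demo();
const trainName = trainMethods.get(Demo)!;
const result = (instance as any)[trainName]();
```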

Advantages

1. Declarative ML

Define models with decorators—training and prediction methods are discovered automatically.

2. Model Versioning

Register multiple versions of a model; the registry supports lookup by name and version.
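Name/version lookup can be pictured as a two-level map. The minimal sketch below mirrors that idea; it is not ModelRegistry's real implementation, and the "latest version" fallback shown (insertion order) is an assumption:

```typescript
// Minimal sketch of name/version model lookup (not the actual ModelRegistry).
class MiniRegistry<T> {
  private models = new Map<string, Map<string, T>>();

  register(name: string, version: string, model: T): void {
    if (!this.models.has(name)) this.models.set(name, new Map());
    this.models.get(name)!.set(version, model);
  }

  // Returns the requested version, or the most recently registered one when omitted.
  get(name: string, version?: string): T | undefined {
    const versions = this.models.get(name);
    if (!versions) return undefined;
    if (version) return versions.get(version);
    const keys = [...versions.keys()];
    return versions.get(keys[keys.length - 1]);
  }
}

const registry = new MiniRegistry<string>();
registry.register('sentiment-classifier', '1.0.0', 'model-v1');
registry.register('sentiment-classifier', '1.1.0', 'model-v2');
```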

3. Framework Flexibility

Use TensorFlow.js, ONNX, Transformers.js, or custom implementations—the package is backend-agnostic.

4. Batch Inference

Use BatchService for efficient batch predictions with a configurable batch size. Results preserve input order.

5. Evaluation Built-In

MetricsService provides evaluate() to run predictions on test data and compute accuracy, F1, precision, and recall. It supports custom label and prediction keys.

Installation

npm install @hazeljs/ml @hazeljs/core

Optional Peer Dependencies

# TensorFlow.js
npm install @tensorflow/tfjs-node

# ONNX Runtime
npm install onnxruntime-node

# Hugging Face Transformers (embeddings, sentiment)
npm install @huggingface/transformers

Quick Start

1. Import MLModule

import { HazelApp } from '@hazeljs/core';
import { MLModule } from '@hazeljs/ml';

const app = new HazelApp({
  imports: [
    MLModule.forRoot({
      models: [SentimentClassifier, SpamClassifier],
    }),
  ],
});

app.listen(3000);

2. Define a Model

import { Injectable } from '@hazeljs/core';
import { Model, Train, Predict, ModelRegistry } from '@hazeljs/ml';

@Model({ name: 'sentiment-classifier', version: '1.0.0', framework: 'custom' })
@Injectable()
export class SentimentClassifier {
  private labels = ['positive', 'negative', 'neutral'];
  private weights: Record<string, number[]> = {};

  constructor(private registry: ModelRegistry) {}

  @Train()
  async train(data: { text: string; label: string }[]): Promise<void> {
    // Your training logic – e.g. bag-of-words, embeddings
    const vocab = this.buildVocabulary(data);
    this.weights = this.computeWeights(data, vocab);
  }

  @Predict()
  async predict(input: { text: string }): Promise<{ sentiment: string; confidence: number }> {
    const scores = this.score(input.text);
    const idx = scores.indexOf(Math.max(...scores));
    return {
      sentiment: this.labels[idx],
      confidence: scores[idx],
    };
  }
}

3. Predict from a Controller

import { Controller, Post, Body } from '@hazeljs/core';
import { PredictorService } from '@hazeljs/ml';

@Controller('ml')
export class MLController {
  constructor(private predictor: PredictorService) {}

  @Post('predict')
  async predict(@Body() body: { text: string; model?: string }) {
    const result = await this.predictor.predict(
      body.model ?? 'sentiment-classifier',
      body
    );
    return { result };
  }
}

Training Pipeline

Preprocess data before training with PipelineService. Use inline steps (no registration) or named pipelines:

import { PipelineService } from '@hazeljs/ml';

const pipeline = new PipelineService();

// Inline steps (no registration required)
const steps = [
  { name: 'normalize', transform: (d: unknown) => ({ ...(d as object), text: (d as { text: string }).text?.toLowerCase() }) },
  { name: 'filter', transform: (d: unknown) => (d as { text: string }).text?.length ? d : null },
];
const processed = await pipeline.run(data, steps);
await model.train(processed);

// Or register a named pipeline for reuse
pipeline.registerPipeline('default', steps);
const processed2 = await pipeline.run('default', data);
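In essence, running inline steps means mapping every sample through each transform in order and dropping samples for which a step returns null. The self-contained approximation below (synchronous for brevity) illustrates that semantics; it is not PipelineService itself:

```typescript
// Approximation of inline pipeline semantics: apply each step's transform in
// order to every sample; a null result drops the sample. Not PipelineService itself.
type Step<T> = { name: string; transform: (d: T) => T | null };

function runSteps<T>(data: T[], steps: Step<T>[]): T[] {
  let current = data;
  for (const step of steps) {
    const next: T[] = [];
    for (const item of current) {
      const out = step.transform(item);
      if (out !== null) next.push(out);  // null means "filter this sample out"
    }
    current = next;
  }
  return current;
}

const demoSteps: Step<{ text: string }>[] = [
  { name: 'normalize', transform: (d) => ({ ...d, text: d.text.toLowerCase() }) },
  { name: 'filter', transform: (d) => (d.text.length ? d : null) },
];

const processed = runSteps([{ text: 'Great' }, { text: '' }], demoSteps);
```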

Batch Predictions

BatchService processes inputs in batches with configurable concurrency. Results are returned in the same order as inputs.

import { BatchService } from '@hazeljs/ml';

const batchService = new BatchService(predictorService);
const results = await batchService.predictBatch('sentiment-classifier', items, {
  batchSize: 32,
  concurrency: 4,
});
// results[i] corresponds to items[i]
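Order-preserving batching can be sketched as: split the inputs into fixed-size chunks, predict each chunk, and write results back by index. The helper below illustrates the idea with a plain async function standing in for PredictorService; it is not BatchService's implementation:

```typescript
// Sketch of order-preserving batch prediction (not BatchService itself).
// 'predictOne' is a stand-in for a real per-item predictor.
async function predictBatch<I, O>(
  items: I[],
  predictOne: (item: I) => Promise<O>,
  batchSize: number
): Promise<O[]> {
  const results: O[] = new Array(items.length);
  for (let start = 0; start < items.length; start += batchSize) {
    const chunk = items.slice(start, start + batchSize);
    // Items within a chunk run concurrently; chunks run sequentially.
    const outputs = await Promise.all(chunk.map(predictOne));
    outputs.forEach((out, i) => { results[start + i] = out; });
  }
  return results;
}
```

Writing each output to `results[start + i]` is what guarantees `results[i]` always corresponds to `items[i]`, regardless of how the chunks complete.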

Metrics and Evaluation

Inject MetricsService via MLModule (it receives PredictorService and ModelRegistry). Use evaluate() to run predictions on test data and compute metrics:

import { Injectable } from '@hazeljs/core';
import { MetricsService } from '@hazeljs/ml';

@Injectable()
class EvaluationService {
  constructor(private metricsService: MetricsService) {}

  async runEvaluation() {
    const testData = [
      { text: 'great product', label: 'positive' },
      { text: 'terrible', label: 'negative' },
    ];
    const evaluation = await this.metricsService.evaluate('sentiment-classifier', testData, {
      metrics: ['accuracy', 'f1', 'precision', 'recall'],
      labelKey: 'label',           // key in test sample for ground truth
      predictionKey: 'sentiment',  // key in prediction result (auto-detect: label, sentiment, class)
    });
    // evaluation.metrics: { accuracy, precision, recall, f1Score }
    // Result is automatically recorded via recordEvaluation()
  }
}
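For intuition, the metrics reported in evaluation.metrics reduce to simple counts over predicted-versus-actual labels. The sketch below computes accuracy, precision, recall, and F1 for one positive class directly; it is shown only to make the definitions concrete, not as MetricsService's internals:

```typescript
// Direct computation of accuracy/precision/recall/F1 for one positive class.
// Shown for intuition; not MetricsService internals.
function classificationMetrics(
  actual: string[],
  predicted: string[],
  positive: string
) {
  let tp = 0, fp = 0, fn = 0, correct = 0;
  for (let i = 0; i < actual.length; i++) {
    if (actual[i] === predicted[i]) correct++;
    if (predicted[i] === positive && actual[i] === positive) tp++;  // true positive
    if (predicted[i] === positive && actual[i] !== positive) fp++;  // false positive
    if (predicted[i] !== positive && actual[i] === positive) fn++;  // false negative
  }
  const accuracy = correct / actual.length;
  const precision = tp + fp ? tp / (tp + fp) : 0;
  const recall = tp + fn ? tp / (tp + fn) : 0;
  const f1Score = precision + recall ? (2 * precision * recall) / (precision + recall) : 0;
  return { accuracy, precision, recall, f1Score };
}

const m = classificationMetrics(
  ['positive', 'negative', 'positive', 'negative'],
  ['positive', 'positive', 'positive', 'negative'],
  'positive'
);
```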

Manual Model Registration

When not using MLModule.forRoot({ models: [...] }):

import { registerMLModel, ModelRegistry, TrainerService, PredictorService } from '@hazeljs/ml';

registerMLModel(
  sentimentInstance,
  modelRegistry,
  trainerService,
  predictorService
);

Service Summary

| Service | Purpose |
| --- | --- |
| ModelRegistry | Register and look up models by name/version |
| TrainerService | Discover and invoke @Train methods |
| PredictorService | Discover and invoke @Predict methods |
| PipelineService | Data preprocessing (inline run(data, steps) or named pipelines) |
| BatchService | Batch prediction with configurable batch size (results in input order) |
| MetricsService | Model evaluation via evaluate() and metrics tracking |

Related Resources

  • AI Package – LLM integration for hybrid AI/ML workflows
  • Cache Package – Cache model outputs and embeddings
  • Config Package – Model paths and API keys
  • hazeljs-ml-starter – Full app with sentiment, spam, intent classifiers, REST API, and scripts
  • example/src/ml – Minimal runnable example of @Model, @Train, @Predict (run: npm run ml:decorators)