Pipeline DSL

Defining multi-stage ML pipelines in YAML

Xybrid uses a YAML-based DSL for defining multi-stage ML pipelines. Pipelines chain models together (ASR → LLM → TTS), with each stage routed automatically between on-device and cloud execution.

Basic Structure

name: "Voice Assistant"
registry: "http://localhost:8080"

input:
  kind: "AudioRaw"

stages:
  - whisper-tiny@1.0
  - kokoro-82m@0.1

Stage Formats

Simple Format

Reference a model by ID and version:

stages:
  - wav2vec2-base-960h@1.0
  - kokoro-82m@0.1

Object Format

For more control, such as overriding the registry for a single stage, use the object format:

stages:
  - name: whisper-tiny@1.0
    registry: "http://other-registry:8080"

Integration Stages

For cloud LLM execution, add an integration stage:

stages:
  - whisper-tiny@1.0

  - target: integration
    provider: openai
    model: gpt-4o-mini
    options:
      system_prompt: "You are a helpful voice assistant."
      max_tokens: 150
      temperature: 0.7

  - kokoro-82m@0.1

Execution Targets

Target      | Description          | Config Source
----------- | -------------------- | ------------------------------------
device      | On-device inference  | .xyb bundle from registry
integration | Third-party API      | Provider config (OpenAI, Anthropic)
auto        | Framework decides    | Resolved at runtime
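
As a sketch of how a target could be pinned per stage, the snippet below assumes the object format accepts the same target field shown for integration stages (only target: integration appears in this document's examples):

stages:
  - name: whisper-tiny@1.0
    target: device   # assumed: force on-device execution

  - name: kokoro-82m@0.1
    target: auto     # assumed: let the framework decide at runtime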

Input Types

Declare the expected input type:

input:
  kind: "AudioRaw"   # For ASR pipelines
input:
  kind: "Text"       # For TTS or text pipelines
input:
  kind: "Embedding"  # For vector search

Registry Configuration

Simple URL

registry: "http://localhost:8080"

File Path (Local)

registry: "file:///Users/me/.xybrid/registry"

Full Configuration

registry:
  local_path: "/Users/me/.xybrid/registry"
  remote:
    base_url: "http://localhost:8080"
    timeout_ms: 30000
    retry_attempts: 3
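
As an illustration of how the pieces compose, the full registry form can be sketched in place of the simple URL at the top of a pipeline file (placement assumed from the examples above):

# Full registry configuration used inside a pipeline file (sketch)
name: "Voice Assistant"
registry:
  local_path: "/Users/me/.xybrid/registry"   # local registry directory
  remote:
    base_url: "http://localhost:8080"        # remote registry endpoint
    timeout_ms: 30000
    retry_attempts: 3

input:
  kind: "AudioRaw"

stages:
  - whisper-tiny@1.0
  - kokoro-82m@0.1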

Integration Providers

Supported providers for cloud LLM stages:

Provider  | Models              | Notes
--------- | ------------------- | --------------------------
openai    | gpt-4o, gpt-4o-mini | Requires OPENAI_API_KEY
anthropic | claude-3-5-sonnet   | Requires ANTHROPIC_API_KEY
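
For example, an Anthropic-backed stage follows the same shape as the OpenAI integration stage shown earlier; the model name comes from the table above, and the options are assumed to carry over unchanged:

stages:
  - target: integration
    provider: anthropic
    model: claude-3-5-sonnet
    options:
      system_prompt: "You are a helpful voice assistant."
      max_tokens: 150
      temperature: 0.7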

API Key Management

Pass your Xybrid API key when initializing the SDK:

Xybrid.simple(apiKey: "your-xybrid-api-key")

Example Pipelines

Voice Assistant (ASR → LLM → TTS)

name: "Voice Assistant"
registry: "http://localhost:8080"

input:
  kind: "AudioRaw"

stages:
  # Speech recognition (on-device)
  - whisper-tiny@1.0

  # Language model (cloud)
  - target: integration
    provider: openai
    model: gpt-4o-mini
    options:
      system_prompt: "You are a helpful voice assistant. Keep responses brief."
      max_tokens: 150

  # Text-to-speech (on-device)
  - kokoro-82m@0.1

Speech-to-Text Only

name: "Transcription"
registry: "http://localhost:8080"

input:
  kind: "AudioRaw"

stages:
  - wav2vec2-base-960h@1.0

Text-to-Speech Only

name: "TTS"
registry: "http://localhost:8080"

input:
  kind: "Text"

stages:
  - kitten-tts-nano@1.0

Running Pipelines

Flutter SDK

final pipeline = await Xybrid.pipeline('assets/pipelines/voice-assistant.yaml');

// Check input type
if (pipeline.inputType.isAudio()) {
  final result = await pipeline.run(
    envelope: Envelope.audio(bytes: audioBytes),
  );
  print(result.text);
}

Rust SDK

use xybrid_sdk::PipelineLoader;

let pipeline = PipelineLoader::from_yaml(yaml_content)?
    .load()?;

let result = pipeline.run(&input_envelope)?;

Pipeline Metadata

Query pipeline properties:

final pipeline = await Xybrid.pipeline('pipeline.yaml');

pipeline.name;        // "Voice Assistant"
pipeline.inputType;   // FfiPipelineInputType.audio
pipeline.stageCount;  // 3
pipeline.stageNames;  // ["whisper-tiny@1.0", "gpt-4o-mini", "kokoro-82m@0.1"]
pipeline.isLoaded;    // true/false

Lifecycle

// Load
final pipeline = await Xybrid.pipeline('pipeline.yaml');

// Run
final result = await pipeline.run(envelope: input);

// Unload when done
pipeline.unload();
