Data Flow & Execution

Xybrid orchestrates ML inference across device and cloud. This page explains the data flow and how components interact.

Data Flow

Every inference follows this path:

Input → Envelope → Orchestrator → Pipeline → Output
  • The Envelope wraps your data (see the sketch below).
  • The Orchestrator decides where to run it.
  • The Pipeline chains models together.
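
For intuition, the Envelope can be pictured as a small wrapper: the payload plus the metadata downstream stages need. The shape below is a hypothetical sketch; the field names are illustrative, not the actual Xybrid schema.

  # Hypothetical Envelope shape, rendered as YAML for illustration only.
  # None of these field names are confirmed Xybrid types.
  envelope:
    kind: audio                # could also be text or embeddings
    metadata:
      sample_rate: 16000
      language: en-US
    payload: "<audio bytes>"   # the wrapped data itself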

Voice Assistant Example

Here's a complete voice assistant flow: capture audio, transcribe it to text, send the transcript to a language model, and synthesize the reply back into speech.
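
As a rough sketch, assume a speech-to-text → language-model → text-to-speech chain and an illustrative pipeline schema; the stage ids, model names, and field names below are assumptions, not confirmed Xybrid syntax.

  # Hypothetical voice-assistant pipeline; ids, models, and fields are illustrative.
  pipeline: voice-assistant
  stages:
    - id: transcribe        # speech-to-text close to the microphone
      model: asr-small
      target: device
    - id: respond           # language model behind a third-party API
      model: gpt-4o-mini
      target: integration
    - id: speak             # text-to-speech for playback
      model: tts-local
      target: device

The target values anticipate the execution targets described below.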

Component Roles

Component          What It Does
Envelope           Wraps data (audio, text, embeddings) flowing through the system
Orchestrator       Evaluates policies, decides routing, coordinates execution
Pipeline           Defines multi-stage workflows in YAML
TemplateExecutor   Runs preprocessing → model → postprocessing for each stage
StreamSession      Handles real-time chunked audio processing

Execution Targets

Each stage can run in different locations:

Target        Where It Runs                    When to Use
device        On the user's device             Privacy-sensitive, low latency
integration   Third-party API (OpenAI, etc.)   Large models, cloud capability
auto          Orchestrator decides             Let Xybrid optimize
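
For example, assuming targets are set per stage in the pipeline YAML (the target values come from the table above; the surrounding fields are illustrative):

  # Hypothetical stage list; only the target values are taken from the table.
  stages:
    - id: transcribe
      target: device        # privacy-sensitive, low latency
    - id: respond
      target: integration   # large model behind a third-party API
    - id: speak
      target: auto          # let the Orchestrator optimize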

When a stage uses target: auto, the Orchestrator considers:

  1. Is a local bundle available?
  2. Does the device have sufficient capability?
  3. Is network available for cloud routing?
  4. What are the privacy constraints?
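
As a purely hypothetical sketch of how those checks might surface in configuration, the policy block below maps each question to a setting; none of these field names are confirmed Xybrid schema.

  # Illustrative policy knobs only; not the real Xybrid configuration format.
  policy:
    require_local_bundle: false   # 1. if no bundle is installed, cloud routing stays on the table
    min_device_memory_mb: 2048    # 2. a capability threshold for running on-device
    allow_cloud_fallback: true    # 3. cloud routing is only used when the network is available
    privacy: prefer_device        # 4. privacy constraints can pin a stage to the device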

Next: Core Components
