# Data Flow & Execution
Xybrid orchestrates ML inference across device and cloud. This page explains the data flow and how components interact.
## Data Flow
Every inference follows this path:
Input → Envelope → Orchestrator → Pipeline → Output

- The Envelope wraps your data.
- The Orchestrator decides where to run it.
- The Pipeline chains models together.
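The path above can be sketched in code. This is a minimal illustration of the roles, not Xybrid's actual API — every type and function name here (`Envelope`, `orchestrate`, `run_pipeline`, the `privacy_sensitive` flag) is a hypothetical stand-in:

```python
from dataclasses import dataclass, field

@dataclass
class Envelope:
    """Wraps data (audio, text, embeddings) flowing through the system."""
    payload: bytes
    kind: str                       # e.g. "audio", "text", "embedding"
    metadata: dict = field(default_factory=dict)

def orchestrate(env: Envelope) -> str:
    """Decide where to run: privacy-sensitive data stays on-device."""
    return "device" if env.metadata.get("privacy_sensitive") else "integration"

def run_pipeline(env: Envelope, target: str) -> Envelope:
    """Chain model stages; a single pass-through stage for illustration."""
    env.metadata["ran_on"] = target
    return env

# Input → Envelope → Orchestrator → Pipeline → Output
env = Envelope(payload=b"hello", kind="text",
               metadata={"privacy_sensitive": True})
out = run_pipeline(env, orchestrate(env))
print(out.metadata["ran_on"])  # prints "device"
```

The point of the shape: the Envelope carries data *and* the metadata the Orchestrator needs to make its routing decision, so routing never has to inspect the payload itself.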
## Voice Assistant Example
A typical voice assistant chains speech-to-text, a language model, and text-to-speech, with each stage routed to the location that suits it best.
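Such a flow could be expressed as a pipeline definition along these lines. The stage names and field layout are illustrative assumptions, not Xybrid's exact YAML schema:

```yaml
# Hypothetical pipeline sketch; field names are assumptions.
pipeline: voice-assistant
stages:
  - id: transcribe        # speech-to-text
    target: device        # keep raw audio on-device for privacy
  - id: generate          # LLM produces the response text
    target: integration   # large model via a third-party API
  - id: synthesize        # text-to-speech
    target: auto          # let the Orchestrator decide
```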
## Component Roles
| Component | What It Does |
|---|---|
| Envelope | Wraps data (audio, text, embeddings) flowing through the system |
| Orchestrator | Evaluates policies, decides routing, coordinates execution |
| Pipeline | Defines multi-stage workflows in YAML |
| TemplateExecutor | Runs preprocessing → model → postprocessing for each stage |
| StreamSession | Handles real-time chunked audio processing |
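Of these, StreamSession is the odd one out: it processes audio incrementally rather than waiting for a complete input. A minimal sketch of that chunked pattern, with all names hypothetical rather than Xybrid's real interface:

```python
class StreamSessionSketch:
    """Illustrative chunked-audio session: buffer bytes, run inference
    per complete chunk. Not Xybrid's actual StreamSession API."""

    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.buffer = bytearray()
        self.results = []

    def push(self, chunk: bytes) -> None:
        """Accumulate incoming audio; process every full chunk."""
        self.buffer.extend(chunk)
        while len(self.buffer) >= self.chunk_size:
            frame = bytes(self.buffer[:self.chunk_size])
            del self.buffer[:self.chunk_size]
            self.results.append(self._infer(frame))

    def _infer(self, frame: bytes) -> int:
        """Stand-in for per-chunk model inference."""
        return len(frame)

session = StreamSessionSketch(chunk_size=4)
session.push(b"abcdef")   # one full chunk processed, 2 bytes buffered
session.push(b"gh")       # buffer reaches 4, second chunk processed
print(session.results)    # prints [4, 4]
```

The buffering means callers can push audio in whatever sizes the microphone delivers; inference only fires on complete frames.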
## Execution Targets
Each stage can run in different locations:
| Target | Where It Runs | When to Use |
|---|---|---|
| `device` | On the user's device | Privacy-sensitive, low latency |
| `integration` | Third-party API (OpenAI, etc.) | Large models, cloud capability |
| `auto` | Orchestrator decides | Let Xybrid optimize |
With `target: auto`, the Orchestrator considers:
- Is a local bundle available?
- Does the device have sufficient capability?
- Is network available for cloud routing?
- What are the privacy constraints?
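The four checks above could combine into routing logic roughly like this. The function name, parameter names, and the precedence order (privacy first, then local preference, then cloud fallback) are assumptions for illustration:

```python
def resolve_auto(bundle_available: bool,
                 device_capable: bool,
                 network_available: bool,
                 privacy_sensitive: bool) -> str:
    """Hypothetical `target: auto` resolution from the four checks."""
    can_run_locally = bundle_available and device_capable
    if privacy_sensitive:
        # Privacy constraints rule out cloud routing entirely.
        if can_run_locally:
            return "device"
        raise RuntimeError("privacy policy forbids cloud; device cannot run model")
    if can_run_locally:
        return "device"          # prefer local when possible
    if network_available:
        return "integration"     # fall back to cloud
    raise RuntimeError("no viable execution target")
```

Under this sketch, a capable device always wins when privacy is at stake, and the cloud is only used when local execution is impossible and the data is not privacy-sensitive.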