Xybrid
Command Reference

xybrid run

Execute a hybrid pipeline

Execute a hybrid pipeline on the current device.

Usage

xybrid run --config <path>
xybrid run --pipeline <name>
xybrid run --pipeline <name> --dry-run

Options

Option       Short  Description
--config     -c     Load a pipeline YAML file directly
--pipeline   -p     Look up <name>.yml under examples/
--dry-run           Simulate routing without execution
--policy            Load an additional policy bundle

Behavior

  1. Parses YAML into PipelineConfig
  2. Runs policy → routing → execution loop
  3. Prints stage latency and routing summaries

In dry-run mode, the CLI prints routing decisions and simulated outputs without invoking adapters.
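The policy → routing → execution loop and its dry-run behavior can be sketched as follows. This is an illustrative model only, not xybrid's actual internals: the names `route_stage` and `run_pipeline`, and the toy routing rule, are assumptions for the sketch.

```python
# Hypothetical sketch of the run loop described above.
# route_stage / run_pipeline are illustrative names, not xybrid APIs.

def route_stage(stage_name: str) -> str:
    """Toy routing policy: cloud-hosted models route to an integration,
    everything else stays local. The real policy engine is richer."""
    return "integration" if stage_name.startswith("gpt-") else "local"

def run_pipeline(stage_names, dry_run=False):
    """Route each stage in order; in dry-run mode, report the decision
    without invoking any adapter."""
    lines = []
    for name in stage_names:
        target = route_stage(name)
        if dry_run:
            lines.append(f"Stage: {name} -> would route to: {target}")
        else:
            lines.append(f"Stage: {name} -> {target}")
    return lines
```

With `dry_run=True`, the output mirrors the dry-run transcript shown later in this page: each stage is routed but never executed.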

Examples

Run from config file

xybrid run --config ./pipelines/voice-assistant.yaml

Run named pipeline

xybrid run --pipeline hiiipe

Looks for hiiipe.yml or hiiipe.yaml in:

  • xybrid-cli/examples/
  • ./examples/
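The lookup described above amounts to checking each candidate directory for a `.yml` or `.yaml` file with the given name. A minimal sketch (the function name `find_pipeline` and the exact search order are assumptions; the directories mirror the list above):

```python
# Illustrative name-to-file resolution for `xybrid run --pipeline <name>`.
from pathlib import Path

def find_pipeline(name, search_dirs=("xybrid-cli/examples", "examples")):
    """Return the first <name>.yml or <name>.yaml found in search_dirs,
    or None if no candidate exists."""
    for d in search_dirs:
        for ext in (".yml", ".yaml"):
            candidate = Path(d) / f"{name}{ext}"
            if candidate.is_file():
                return candidate
    return None
```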

Dry run (simulate only)

xybrid run --pipeline hiiipe --dry-run

Output shows routing decisions without actual execution:

▶️ Stage: whisper-tiny@1.2 → would route to: local
▶️ Stage: gpt-4o-mini → would route to: integration
▶️ Stage: kokoro-82m@0.1 → would route to: local

With custom policy

xybrid run --pipeline hiiipe --policy ./policies/strict-privacy.yaml

Sample Output

▶️ Stage: whisper-tiny@1.2 → local
🎯 Routing: local (fast path)
⚙️ Execution complete (52ms)

▶️ Stage: gpt-4o-mini → integration
🎯 Routing: integration (OpenAI)
⚙️ Execution complete (340ms)

▶️ Stage: kokoro-82m@0.1 → local
🎯 Routing: local (fast path)
⚙️ Execution complete (89ms)

🎉 Pipeline complete (total 481ms)

Pipeline Format

The run command expects YAML pipelines:

name: "voice-assistant"
stages:
  - whisper-tiny@1.0
  - target: integration
    provider: openai
    model: gpt-4o-mini
  - kokoro-82m@0.1

See Pipelines for full DSL reference.
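In the YAML above, a stage is either a bare string (shorthand for a local model stage) or a mapping that spells out `target`, `provider`, and `model`. A hedged sketch of how such a stage list could be normalized into one uniform shape (the normalization logic is illustrative; only the field names come from the example):

```python
# Normalize the two stage forms from the YAML example into one shape.
# String entries are treated as local model stages; mapping entries
# are taken as-is. This is a sketch, not xybrid's actual parser.

def normalize_stage(stage):
    if isinstance(stage, str):                  # e.g. "whisper-tiny@1.0"
        return {"target": "local", "model": stage}
    return dict(stage)                          # explicit mapping form

# The pipeline from the example, as it would look after YAML parsing:
pipeline = {
    "name": "voice-assistant",
    "stages": [
        "whisper-tiny@1.0",
        {"target": "integration", "provider": "openai", "model": "gpt-4o-mini"},
        "kokoro-82m@0.1",
    ],
}
stages = [normalize_stage(s) for s in pipeline["stages"]]
```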
