How It Works
From flow.yaml to executed pipeline — here’s what happens under the hood.
Radhflow is a compiled agent platform. Traditional AI agents call an LLM on every execution. Radhflow calls AI once — at creation time — and runs deterministic code from that point forward.
The compilation model
Three phases: describe, generate, execute.
```
┌─────────────────────────────────────────────────────────┐
│                  CREATION TIME (once)                   │
│                                                         │
│  You ──describe──▶ Conductor ──plan──▶ Code Agents      │
│                      (AI)               (AI)            │
│                                          │              │
│                                generates │              │
│                                          ▼              │
│            ┌──────────────────────────────────┐         │
│            │ flow.yaml                        │         │
│            │ node.yaml (per node)             │         │
│            │ main.py / main.sql / main.js     │         │
│            └──────────────────────────────────┘         │
└────────────────────────────┬────────────────────────────┘
                             │ committed to Git
                             ▼
┌─────────────────────────────────────────────────────────┐
│                EXECUTION TIME (every run)               │
│                                                         │
│   ┌──────┐   ┌──────┐   ┌──────┐   ┌──────┐             │
│   │ Node │──▶│ Node │──▶│ Node │──▶│ Node │             │
│   └──────┘   └──────┘   └──────┘   └──────┘             │
│                                                         │
│   DuckDB SQL · nix-shell CLI · HTTP fetch               │
│   No LLM. No tokens. No surprises.                      │
└─────────────────────────────────────────────────────────┘
```
AI phase: what happens at creation time
You describe a pipeline in natural language or YAML. Plain English works. Structured YAML works. A mix of both works.
```
"Read leads from a Google Sheet, score them by engagement, filter the top 20%, push to HubSpot."
```
The conductor agent decomposes this into a typed graph: nodes, edges, schemas. Then code agents write the implementation for each node. Each node gets a node.yaml defining its contract — typed input ports, typed output ports, parameters — followed by the implementation code: SQL transforms, Python scripts, API calls.
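To make the per-node contract concrete, a node.yaml for the scoring node might look roughly like this. The field names and structure here are illustrative assumptions, not Radhflow's documented format:

```yaml
# Hypothetical node.yaml for the "score" node — field names are
# illustrative, not Radhflow's actual schema.
id: score
runtime: sql                # a DuckDB SQL transform
inputs:
  leads:                    # typed input port
    schema: schemas/leads.schema.json
outputs:
  scored:                   # typed output port
    schema: schemas/scored.schema.json
params:
  weight_opens: 0.4
  weight_clicks: 0.6
```

The point is that the contract (ports, schemas, parameters) is separate from the implementation, so the runtime can validate data at the boundary without reading the code.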
Every generated artifact is a file in your Git repository. You can read it, edit it, diff it, review it in a PR.
Execution phase: what runs
The runtime walks the graph. It resolves edges, validates schemas, and runs each node in topological order. Every node executes real code — SQL queries via DuckDB, shell commands via nix-shell, HTTP calls via fetch. No LLM in the loop.
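The "topological order" walk can be sketched with Python's standard `graphlib`. The node names mirror the example pipeline; the real executor reads its graph from flow.yaml and does far more (schema checks, state tracking), so this is just the ordering idea:

```python
from graphlib import TopologicalSorter

# Toy dependency graph: each node maps to the set of nodes it
# depends on. In Radhflow these edges come from flow.yaml.
deps = {
    "read-leads": set(),
    "score": {"read-leads"},
    "filter-top": {"score"},
    "push-crm": {"filter-top"},
}

def run_order(deps):
    """Return a valid execution order: dependencies always run first."""
    return list(TopologicalSorter(deps).static_order())

print(run_order(deps))
# ['read-leads', 'score', 'filter-top', 'push-crm']
```

`TopologicalSorter` also exposes `get_ready()` for batching independent nodes, which is how a runtime can run branches of the graph in parallel.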
```
[read-leads]  Read 1,204 rows from Google Sheet
[score]       Computed engagement scores via SQL
[filter-top]  242 rows matched (top 20%)
[push-crm]    Pushed 242 contacts to HubSpot

Pipeline completed. 4 nodes executed in 2.1s. 0 tokens used.
```
File system layout after generation
```
my-pipeline/
├── flow.yaml
├── .rf/
│   ├── state.db
│   └── runs/
└── nodes/
    ├── read-csv/
    │   ├── node.yaml
    │   ├── main.py
    │   └── schemas/
    ├── filter-leads/
    │   ├── node.yaml
    │   ├── main.py
    │   └── schemas/
    └── write-json/
        ├── node.yaml
        ├── main.py
        └── schemas/
```
flow.yaml is the graph topology — nodes, edges, and configuration. Each node directory contains the contract (node.yaml), the implementation (main.py, main.sql, main.js, or run.sh), and schema files for its ports.
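As a rough sketch of what "graph topology" means in flow.yaml terms — the key names below are assumptions for illustration, not the documented format:

```yaml
# Hypothetical flow.yaml — key names are illustrative.
nodes:
  - read-csv
  - filter-leads
  - write-json
edges:
  - from: read-csv.rows        # upstream output port
    to: filter-leads.rows      # downstream input port
  - from: filter-leads.kept
    to: write-json.rows
```

Edges connect typed ports rather than bare nodes, which is what lets the runtime validate schemas at every boundary.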
Runtime architecture
The executor reads flow.yaml, resolves the topological order, and runs each node in sequence (or in parallel where the graph allows). Data passes between nodes as NDJSON files — one JSON object per line, with a companion .schema.json for type information.
```
┌──────────┐    NDJSON     ┌──────────┐     NDJSON     ┌──────────┐
│ read-csv │──────────────▶│  filter  │───────────────▶│write-json│
│          │ leads.ndjson  │          │filtered.ndjson │          │
└──────────┘               └──────────┘                └──────────┘
     │                          │                           │
     ▼                          ▼                           ▼
  schemas/                   schemas/                    schemas/
leads.schema.json    filtered.schema.json      output.schema.json
```
Each node reads its input NDJSON files, processes the data, and writes output NDJSON files. The executor validates schemas at each boundary — before a node runs, the runtime checks that upstream outputs match the node’s declared input schemas.
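NDJSON itself is deliberately simple: one JSON object per line. A minimal sketch of the read/write/validate cycle, assuming a toy type-only schema check (the real .schema.json files are presumably richer than this):

```python
import json

def write_ndjson(path, rows):
    # One JSON object per line — the wire format between nodes.
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def read_ndjson(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def check_schema(rows, schema):
    # Toy boundary check: every declared field is present with the
    # expected JSON type. Stands in for validating .schema.json.
    types = {"string": str, "number": (int, float), "boolean": bool}
    for row in rows:
        for field, tname in schema.items():
            assert isinstance(row.get(field), types[tname]), field

rows = [{"email": "a@x.com", "score": 0.9},
        {"email": "b@x.com", "score": 0.4}]
write_ndjson("leads.ndjson", rows)
loaded = read_ndjson("leads.ndjson")
check_schema(loaded, {"email": "string", "score": "number"})
print(len(loaded))  # 2
```

Because the intermediate files are plain text, any run can be inspected or replayed with standard tools (`head`, `jq`, DuckDB's `read_ndjson`).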
Why this matters
Zero runtime cost. AI runs once at creation time. Every subsequent execution uses zero tokens. A pipeline that runs hourly for a year costs the same as one that runs once.
Reproducible. Same input produces identical output every time. No temperature variance, no prompt drift, no model version changes mid-pipeline.
Auditable. Every line of generated code is in Git. You can diff what changed between versions. You can review AI-generated code before it runs — just like a pull request from a junior engineer.
No hallucination at runtime. LLM hallucination is a creation-time concern, caught by schema validation and code review. At runtime, there is nothing to hallucinate. It is SQL and scripts.
Compared to traditional AI agents
| Property | Traditional Agent | Radhflow |
|---|---|---|
| LLM calls per run | Every execution | Zero |
| Token cost over time | Linear (grows with runs) | Fixed (creation only) |
| Output determinism | Non-deterministic | Deterministic |
| Auditability | Prompt logs | Git-versioned code |
| Failure mode | Hallucination, drift | Standard code bugs |
| Latency | LLM round-trip per step | Native code execution |
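The "linear vs. fixed" row is plain arithmetic. With made-up prices — the per-run and creation-time token costs below are assumptions for illustration only:

```python
# Illustrative numbers only: assume a traditional agent spends ~$0.02
# of tokens per run, while one-time compilation costs ~$1.00.
runs_per_year = 24 * 365             # hourly pipeline
cost_per_run_cents = 2               # traditional agent, every run
creation_cost_cents = 100            # compiled pipeline, paid once

traditional = cost_per_run_cents * runs_per_year  # grows with every run
compiled = creation_cost_cents                    # fixed, however often it runs

print(runs_per_year)  # 8760
print(traditional)    # 17520 (cents) — linear in the number of runs
print(compiled)       # 100 (cents) — same for 1 run or 10,000
```

Whatever the real prices, the shape is the point: one cost curve grows with run count and the other does not.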
Traditional agents are good for open-ended reasoning. Radhflow is for pipelines where the logic is known — transforms, filters, API calls, file operations — and should execute identically every time.