# How It Works
Radhflow is a compiled agent platform. Traditional AI agents call an LLM on every execution. Radhflow calls AI once — at creation time — and runs deterministic code from that point forward.
## Three steps

### 1. Describe
You describe a pipeline in natural language or YAML. Plain English works. Structured YAML works. A mix of both works.
> "Read leads from a Google Sheet, score them by engagement, filter the top 20%, push to HubSpot."

The conductor agent decomposes this into a typed graph: nodes, edges, schemas.
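For the example above, the decomposed graph might look something like the following sketch. The field names and layout are illustrative assumptions, not Radhflow's exact `pipeline.rf.yaml` format:

```yaml
# Hypothetical sketch of a decomposed typed graph; key names are assumptions.
nodes:
  - id: read-leads
  - id: score
  - id: filter-top
  - id: push-crm
edges:
  - { from: read-leads, to: score }
  - { from: score, to: filter-top }
  - { from: filter-top, to: push-crm }
```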
### 2. Generate
AI code agents write the implementation. Each node gets a node-spec.yaml defining its contract — typed input ports, typed output ports, parameters. Then code agents generate the node logic: SQL transforms, CLI scripts, API calls.
Every generated artifact is a file in your Git repository. You can read it, edit it, diff it, review it in a PR.
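A node contract for the `score` node could be sketched as follows. The schema keys and parameter names here are assumptions for illustration, not Radhflow's documented spec format:

```yaml
# Hypothetical node-spec.yaml for the score node; field names are assumptions.
id: score
inputs:
  leads:
    type: table
    schema: { email: string, opens: number, clicks: number }
outputs:
  scored:
    type: table
    schema: { email: string, score: number }
params:
  weight_opens: 0.4
  weight_clicks: 0.6
```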
```
pipeline/
  pipeline.rf.yaml          # graph topology
  nodes/
    read-leads/
      node-spec.yaml        # contract: inputs, outputs, params
      main.sql              # generated SQL
    score/
      node-spec.yaml
      main.sql
    push-crm/
      node-spec.yaml
      main.ts               # generated API call
```

### 3. Execute
The runtime walks the graph. It resolves edges, validates schemas, and runs each node in topological order. Every node executes real code — SQL queries via DuckDB, shell commands via nix-shell, HTTP calls via fetch. No LLM in the loop.
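The graph walk boils down to a topological sort over nodes and edges. A minimal sketch in TypeScript (the `topoOrder` helper and node names are illustrative, not Radhflow's actual runtime API):

```typescript
// Minimal sketch of ordering nodes for execution; not Radhflow's real API.
type Edge = [from: string, to: string];

function topoOrder(nodes: string[], edges: Edge[]): string[] {
  // Count incoming edges per node, then repeatedly run nodes with none left.
  const indegree = new Map(nodes.map((n) => [n, 0]));
  for (const [, to] of edges) indegree.set(to, (indegree.get(to) ?? 0) + 1);

  const ready = nodes.filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (ready.length > 0) {
    const n = ready.shift()!;
    order.push(n);
    for (const [from, to] of edges) {
      if (from !== n) continue;
      indegree.set(to, indegree.get(to)! - 1);
      if (indegree.get(to) === 0) ready.push(to);
    }
  }
  if (order.length !== nodes.length) throw new Error("cycle in pipeline graph");
  return order;
}

// The example pipeline: read-leads -> score -> filter-top -> push-crm
const order = topoOrder(
  ["push-crm", "read-leads", "filter-top", "score"],
  [["read-leads", "score"], ["score", "filter-top"], ["filter-top", "push-crm"]],
);
```

Because the order depends only on the graph, every run of the same pipeline executes nodes in the same sequence.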
```
[read-leads]  Read 1,204 rows from Google Sheet
[score]       Computed engagement scores via SQL
[filter-top]  242 rows matched (top 20%)
[push-crm]    Pushed 242 contacts to HubSpot

Pipeline completed. 4 nodes executed in 2.1s. 0 tokens used.
```

## Flow diagram
```
┌─────────────────────────────────────────────────────┐
│ CREATION TIME (once)                                │
│                                                     │
│ You ──describe──▶ Conductor ──plan──▶ Code Agents   │
│                     (AI)                 (AI)       │
│                                           │         │
│                                 generates │         │
│                                           ▼         │
│          ┌─────────────────────────────┐            │
│          │ pipeline.rf.yaml            │            │
│          │ node-spec.yaml (per node)   │            │
│          │ main.sql / main.ts / ...    │            │
│          └─────────────────────────────┘            │
└──────────────────────────────┬──────────────────────┘
                               │ committed to Git
                               ▼
┌─────────────────────────────────────────────────────┐
│ EXECUTION TIME (every run)                          │
│                                                     │
│ ┌──────┐   ┌──────┐   ┌──────┐   ┌──────┐           │
│ │ Node │──▶│ Node │──▶│ Node │──▶│ Node │           │
│ └──────┘   └──────┘   └──────┘   └──────┘           │
│                                                     │
│ DuckDB SQL · nix-shell CLI · HTTP fetch             │
│ No LLM. No tokens. No surprises.                    │
└─────────────────────────────────────────────────────┘
```

## Why this matters
**Zero runtime cost.** AI runs once, at creation time. Every subsequent execution uses zero tokens. A pipeline that runs hourly for a year costs the same as one that runs once.

**Reproducible.** Same input produces identical output every time. No temperature variance, no prompt drift, no model version changes mid-pipeline.

**Auditable.** Every line of generated code is in Git. You can diff what changed between versions, and you can review AI-generated code before it runs, just like a pull request from a junior engineer.

**No hallucination at runtime.** LLM hallucination is a creation-time concern, caught by schema validation and code review. At runtime, there is nothing to hallucinate. It is SQL and scripts.
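The schema-validation idea can be sketched as a port-compatibility check between two nodes. The `Schema` shape and field names below are illustrative assumptions, not Radhflow's actual contract format:

```typescript
// Sketch of checking that an upstream output port satisfies a downstream
// input port. The Schema type is an assumption for illustration.
type Schema = Record<string, "string" | "number">;

function compatible(out: Schema, inp: Schema): boolean {
  // Every field the downstream input requires must be produced upstream
  // with the same declared type.
  return Object.entries(inp).every(([field, t]) => out[field] === t);
}

// score's output feeding push-crm's input (example pipeline):
const scoreOutput: Schema = { email: "string", score: "number" };
const pushCrmInput: Schema = { email: "string", score: "number" };
```

A mismatch here fails at creation time, before any generated code ever runs.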
## Compared to traditional AI agents
| Property | Traditional Agent | Radhflow |
|---|---|---|
| LLM calls per run | Every execution | Zero |
| Token cost over time | Linear (grows with runs) | Fixed (creation only) |
| Output determinism | Non-deterministic | Deterministic |
| Auditability | Prompt logs | Git-versioned code |
| Failure mode | Hallucination, drift | Standard code bugs |
| Latency | LLM round-trip per step | Native code execution |
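The token-cost row can be made concrete with a back-of-the-envelope calculation. All numbers below are illustrative assumptions, not Radhflow benchmarks:

```typescript
// Illustrative cost model for an hourly pipeline over one year.
// Every constant here is an assumption, not a measured figure.
const runsPerYear = 24 * 365;   // hourly schedule -> 8,760 runs
const tokensPerRun = 5_000;     // assumed per-run budget for a traditional agent
const creationTokens = 50_000;  // assumed one-time cost to generate the pipeline

const traditionalTokens = runsPerYear * tokensPerRun; // grows linearly with runs
const compiledTokens = creationTokens;                // fixed, paid once
```

Under these assumptions the traditional agent spends 43.8M tokens in a year while the compiled pipeline's spend stays at its creation cost, which is the "linear vs. fixed" row of the table.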
Traditional agents are powerful for open-ended reasoning. Radhflow is for pipelines where the logic is known — transforms, filters, API calls, file operations — and should execute identically every time.