# Quick Start
This guide walks you through installing Radhflow, creating a three-node pipeline, and running it. You will read a CSV, filter rows with SQL, and write the results as JSON.
## 1. Prerequisites

- Node.js 20+ — check with `node --version`
- Docker (optional) — needed for sandboxed CLI nodes, not required for basic pipelines
## 2. Install

```sh
npm install -g radhflow
```

Or run without installing:

```sh
npx radhflow
```

Verify the installation:

```sh
rf --version
```

```
radhflow v0.1.0
```

## 3. Create a project
```sh
rf init my-pipeline
cd my-pipeline
```

This creates a Git-backed project:

```
my-pipeline/
  flow.yaml     # pipeline definition
  nodes/        # generated node code
  .rf/          # runtime state (gitignored)
  .gitignore
```

## 4. Define the pipeline
Open `flow.yaml`. Replace its contents with this three-node pipeline that reads leads from a CSV, filters by score, and writes the results as JSON.
```yaml
nodes:
  read-leads:
    type: source
    op: file.read_csv
    params:
      path: leads.csv
    outputs:
      leads:
        type: Table
        schema:
          name: { type: string }
          email: { type: string }
          score: { type: number }

  filter-top:
    type: deterministic
    op: sql.query
    params:
      query: "SELECT * FROM leads WHERE score >= 80"
    inputs:
      leads:
        type: Table
        from: ref(read-leads.leads)
    outputs:
      qualified:
        type: Table

  write-output:
    type: deterministic
    op: file.write_json
    params:
      path: qualified-leads.json
    inputs:
      qualified:
        type: Table
        from: ref(filter-top.qualified)
```

## 5. Add test data
Create a `leads.csv` file in the project root:
```csv
name,email,score
Alice,alice@example.com,92
Bob,bob@example.com,45
Carol,carol@example.com,88
Dave,dave@example.com,71
Eve,eve@example.com,95
```

## 6. Run the pipeline
```sh
rf run
```

Expected output:
```
[read-leads]   ✔ Read 5 rows from leads.csv
[filter-top]   ✔ 3 rows matched (score >= 80)
[write-output] ✔ Wrote qualified-leads.json

Pipeline completed. 3 nodes executed in 0.4s.
```

## 7. Check the results
```sh
cat qualified-leads.json
```

```json
[
  { "name": "Alice", "email": "alice@example.com", "score": 92 },
  { "name": "Carol", "email": "carol@example.com", "score": 88 },
  { "name": "Eve", "email": "eve@example.com", "score": 95 }
]
```

## 8. Inspect the pipeline
Use `rf inspect` to see the graph structure and execution status:
```sh
rf inspect
```

```
Pipeline: my-pipeline (3 nodes, 2 edges)

read-leads [source]
  └─ leads (Table: 5 rows) ──> filter-top.leads

filter-top [deterministic]
  └─ qualified (Table: 3 rows) ──> write-output.qualified

write-output [deterministic]
  └─ qualified-leads.json (written)

Last run: 0.4s, all nodes succeeded.
```

## What just happened
- `read-leads` read the CSV and produced a typed Table with 5 rows.
- `filter-top` ran a DuckDB SQL query against that Table, selecting rows with `score >= 80`.
- `write-output` wrote the 3 matching rows as JSON.
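For intuition, the three steps above can be sketched in plain TypeScript. This is an illustrative hand-rolled equivalent, not Radhflow's implementation (Radhflow runs the filter step as SQL in DuckDB):

```typescript
interface Lead {
  name: string;
  email: string;
  score: number;
}

// read-leads: parse CSV text into typed rows.
function readLeads(csv: string): Lead[] {
  const rows = csv.trim().split("\n").slice(1); // skip header
  return rows.map((row) => {
    const [name, email, score] = row.split(",");
    return { name, email, score: Number(score) };
  });
}

// filter-top: SELECT * FROM leads WHERE score >= 80
function filterTop(leads: Lead[]): Lead[] {
  return leads.filter((lead) => lead.score >= 80);
}

// write-output: serialize the qualified rows as JSON.
function writeOutput(qualified: Lead[]): string {
  return JSON.stringify(qualified, null, 2);
}

const csv = `name,email,score
Alice,alice@example.com,92
Bob,bob@example.com,45
Carol,carol@example.com,88
Dave,dave@example.com,71
Eve,eve@example.com,95`;

const qualified = filterTop(readLeads(csv));
console.log(writeOutput(qualified)); // Alice, Carol, Eve
```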
No LLM ran. No tokens burned. Same input produces identical output every time.
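You can check that determinism claim for any pure transform yourself: run it twice on the same input and compare content hashes. A minimal sketch using Node's built-in `crypto` module (the pipeline function here is a stand-in, not a Radhflow API):

```typescript
import { createHash } from "node:crypto";

// SHA-256 of a string, hex-encoded.
function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

// A deterministic stand-in "pipeline": filter rows, serialize as JSON.
function runPipeline(rows: { name: string; score: number }[]): string {
  return JSON.stringify(rows.filter((r) => r.score >= 80));
}

const input = [
  { name: "Alice", score: 92 },
  { name: "Bob", score: 45 },
];

// Two runs over the same input produce byte-identical output,
// so the hashes match.
const first = sha256(runPipeline(input));
const second = sha256(runPipeline(input));
console.log(first === second); // true
```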
## Next steps
- Key Concepts — understand pipelines, nodes, ports, edges, and data types
- Pipeline YAML — full reference for `flow.yaml` syntax
- Data Operations — filter, map, sort, join, group, and more
- Troubleshooting — common issues and how to fix them