
Quick Start

This guide walks you through installing Radhflow, creating a three-node pipeline, and running it. You will read a CSV, filter rows with SQL, and write the results as JSON.

Before you begin, you need:

  • Node.js 20+ (check with node --version)
  • Docker (optional; only needed for sandboxed CLI nodes, not for basic pipelines)
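You can confirm both from a terminal; Docker only matters if you plan to use sandboxed CLI nodes:

node --version
docker --version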
Install the CLI globally:
npm install -g radhflow

Or run without installing:

npx radhflow

Verify the installation:

rf --version
radhflow v0.1.0
Create a new project:
rf init my-pipeline
cd my-pipeline

This creates a Git-backed project:

my-pipeline/
├── flow.yaml     # pipeline definition
├── nodes/        # generated node code
├── .rf/          # runtime state (gitignored)
└── .gitignore

Open flow.yaml. Replace its contents with this three-node pipeline that reads leads from a CSV, filters by score, and writes the results as JSON.

flow.yaml
nodes:
  read-leads:
    type: source
    op: file.read_csv
    params:
      path: leads.csv
    outputs:
      leads:
        type: Table
        schema:
          name: { type: string }
          email: { type: string }
          score: { type: number }

  filter-top:
    type: deterministic
    op: sql.query
    params:
      query: "SELECT * FROM leads WHERE score >= 80"
    inputs:
      leads:
        type: Table
        from: ref(read-leads.leads)
    outputs:
      qualified:
        type: Table

  write-output:
    type: deterministic
    op: file.write_json
    params:
      path: qualified-leads.json
    inputs:
      qualified:
        type: Table
        from: ref(filter-top.qualified)

Create a leads.csv file in the project root:

leads.csv
name,email,score
Alice,alice@example.com,92
Bob,bob@example.com,45
Carol,carol@example.com,88
Dave,dave@example.com,71
Eve,eve@example.com,95
Run the pipeline:
rf run

Expected output:

[read-leads] ✔ Read 5 rows from leads.csv
[filter-top] ✔ 3 rows matched (score >= 80)
[write-output] ✔ Wrote qualified-leads.json
Pipeline completed. 3 nodes executed in 0.4s.
Check the output file:
cat qualified-leads.json
[
{ "name": "Alice", "email": "alice@example.com", "score": 92 },
{ "name": "Carol", "email": "carol@example.com", "score": 88 },
{ "name": "Eve", "email": "eve@example.com", "score": 95 }
]
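As an optional sanity check, you can count the objects in the file with jq. jq is not part of Radhflow, just a convenient JSON tool, so skip this if you don't have it installed:

jq length qualified-leads.json
3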

Use rf inspect to see the graph structure and execution status:

rf inspect
Pipeline: my-pipeline (3 nodes, 2 edges)
read-leads [source]
  └─ leads (Table: 5 rows) ──> filter-top.leads
filter-top [deterministic]
  └─ qualified (Table: 3 rows) ──> write-output.qualified
write-output [deterministic]
  └─ qualified-leads.json (written)
Last run: 0.4s, all nodes succeeded.
Here is what each node did:

  1. read-leads read the CSV and produced a typed Table with 5 rows.
  2. filter-top ran a DuckDB SQL query against that Table, selecting rows with score >= 80 (you can reproduce this query yourself, as shown below).
  3. write-output wrote the 3 matching rows as JSON.
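If you have the DuckDB CLI installed, you can reproduce the filter step directly against the CSV as a cross-check. This runs outside Radhflow and assumes a reasonably recent DuckDB; it should return the same three rows (Alice, Carol, Eve):

duckdb -c "SELECT * FROM read_csv_auto('leads.csv') WHERE score >= 80"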

No LLM ran. No tokens burned. Same input produces identical output every time.
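A quick way to check that claim yourself, assuming file.write_json emits byte-identical files for the same input:

cp qualified-leads.json first-run.json
rf run
diff first-run.json qualified-leads.json && echo "identical output"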