
Quick Start

Install Radhflow, create a pipeline, and run it in 60 seconds.

npm install -g radhflow

Verify the installation:

radhflow --version

Create a project:

mkdir my-pipeline && cd my-pipeline
radhflow init

This creates a Git-backed workspace:

my-pipeline/
  gain.yaml    # pipeline definition
  nodes/       # generated node code
  .gitignore

Open gain.yaml and replace its contents:

nodes:
  read-leads:
    type: source
    op: file.read_csv
    params:
      path: leads.csv
    outputs:
      leads:
        type: Table
        schema:
          name: { type: string }
          email: { type: string }
          score: { type: number }
  filter-top:
    type: deterministic
    op: sql.query
    params:
      query: "SELECT * FROM leads WHERE score >= 80"
    inputs:
      leads:
        type: Table
        from: ref(read-leads.leads)
    outputs:
      qualified:
        type: Table
  write-output:
    type: deterministic
    op: file.write_json
    params:
      path: qualified-leads.json
    inputs:
      qualified:
        type: Table
        from: ref(filter-top.qualified)

Three nodes. CSV in, SQL filter, JSON out. Every connection is typed.
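To make the data flow concrete, here is a plain-Python sketch of the same three steps, run over the sample data from this guide. This is illustrative only, not Radhflow's generated node code; it just shows the logic each node performs.

```python
import csv
import io
import json

# Sample input, matching the leads.csv used in this guide.
SAMPLE_CSV = """\
name,email,score
Alice,alice@example.com,92
Bob,bob@example.com,45
Carol,carol@example.com,88
Dave,dave@example.com,71
Eve,eve@example.com,95
"""

def read_leads(text):
    # read-leads: parse CSV text into typed rows
    return [
        {"name": r["name"], "email": r["email"], "score": float(r["score"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

def filter_top(rows, threshold=80):
    # filter-top: SELECT * FROM leads WHERE score >= 80
    return [r for r in rows if r["score"] >= threshold]

# write-output: serialize the qualified rows as JSON
qualified = filter_top(read_leads(SAMPLE_CSV))
print(json.dumps(qualified, indent=2))
```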

Create leads.csv:

name,email,score
Alice,alice@example.com,92
Bob,bob@example.com,45
Carol,carol@example.com,88
Dave,dave@example.com,71
Eve,eve@example.com,95

Run the pipeline:

radhflow run

Output:

[read-leads] ✔ Read 5 rows from leads.csv
[filter-top] ✔ 3 rows matched (score >= 80)
[write-output] ✔ Wrote qualified-leads.json
Pipeline completed. 3 nodes executed in 0.4s.

Check the result:

cat qualified-leads.json
[
{ "name": "Alice", "email": "alice@example.com", "score": 92 },
{ "name": "Carol", "email": "carol@example.com", "score": 88 },
{ "name": "Eve", "email": "eve@example.com", "score": 95 }
]

What happened:
  1. read-leads read the CSV and produced a typed Table.
  2. filter-top ran a DuckDB SQL query against that Table.
  3. write-output wrote the filtered rows as JSON.
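
The `sql.query` op runs against an embedded DuckDB engine, as step 2 notes. You can try the same query with Python's built-in `sqlite3` as a stand-in engine; the SQL itself is identical for a filter this simple.

```python
import sqlite3

# Stand-in demo of the filter-top query using sqlite3 in place of the
# DuckDB engine Radhflow uses; the SQL is the same for this filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leads (name TEXT, email TEXT, score REAL)")
conn.executemany(
    "INSERT INTO leads VALUES (?, ?, ?)",
    [
        ("Alice", "alice@example.com", 92),
        ("Bob", "bob@example.com", 45),
        ("Carol", "carol@example.com", 88),
        ("Dave", "dave@example.com", 71),
        ("Eve", "eve@example.com", 95),
    ],
)
qualified = conn.execute(
    "SELECT * FROM leads WHERE score >= 80"
).fetchall()
# three rows pass the filter: Alice, Carol, Eve
```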

No LLM ran. No tokens burned. Same input produces identical output every time.
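
That determinism is easy to verify for a pipeline of pure nodes: the same input run through the same filter serializes to byte-identical JSON every time. A minimal check, using the same sample rows:

```python
import json

# The sample rows from leads.csv.
rows = [
    {"name": "Alice", "email": "alice@example.com", "score": 92},
    {"name": "Bob", "email": "bob@example.com", "score": 45},
    {"name": "Carol", "email": "carol@example.com", "score": 88},
    {"name": "Dave", "email": "dave@example.com", "score": 71},
    {"name": "Eve", "email": "eve@example.com", "score": 95},
]

def run_pipeline(data):
    # Same filter as filter-top, serialized with a stable key order.
    return json.dumps(
        [r for r in data if r["score"] >= 80], sort_keys=True
    )

# Two runs over the same input produce identical bytes.
assert run_pipeline(rows) == run_pipeline(rows)
```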