# Data Operations
Radhflow ships twelve declarative data operations. Each is a node type you drop into `pipeline.rf.yaml` and configure with YAML. No code runs: Radhflow translates the config into DuckDB SQL at execution time.
The philosophy: built-in ops for known problems, custom code only for novel logic. Filtering, sorting, joining, grouping — these are solved patterns. You configure them. When your problem is genuinely unique, you write a custom node.
## All operations

| Operation | Description | SQL equivalent |
|---|---|---|
| `data.filter` | Keep rows matching conditions | `WHERE` |
| `data.map` | Compute, rename, or transform fields | `SELECT expr AS name` |
| `data.sort` | Order rows by one or more fields | `ORDER BY` |
| `data.limit` | Take first N rows, optional offset | `LIMIT` / `OFFSET` |
| `data.dedup` | Remove duplicates on key fields | `ROW_NUMBER() OVER (PARTITION BY ...)` |
| `data.join` | Combine two tables on matching keys | `JOIN` |
| `data.group` | Group rows and aggregate | `GROUP BY` |
| `data.select` | Keep only named fields | `SELECT col1, col2` |
| `data.concat` | Stack multiple tables vertically | `UNION ALL BY NAME` |
| `data.partition` | Split rows into two groups | `WHERE` / `WHERE NOT` |
| `data.pull` | Extract a single field from the first row as a Value | First-row field access |
| `data.collect` | Gather multiple Value inputs into a list | Array aggregation |
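Only the `data.filter`, `data.sort`, and `data.limit` config shapes appear on this page. As a rough illustration of the declarative style, other operations plausibly follow the same pattern; the key names below (`keys`, `aggregations`, `fn`, `as`) are assumptions, not documented Radhflow syntax:

```yaml
# Hypothetical config sketches -- key names are illustrative guesses,
# only filter/sort/limit shapes are confirmed on this page.
nodes:
  dedup-leads:
    type: data.dedup
    config:
      keys: [email]          # PARTITION BY email, keep first row

  score-by-region:
    type: data.group
    config:
      by: [region]
      aggregations:
        - field: score
          fn: max
          as: top_score
```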
## How they work

Every data operation follows the same lifecycle:
1. Parse. Radhflow reads your YAML config and validates it.
2. Translate. The config becomes a DuckDB SQL query.
3. Execute. DuckDB runs the query against NDJSON input files.
4. Write. Results go to an NDJSON output file with a companion `.schema.json`.
All operations handle nulls, type coercion, and large datasets automatically. DuckDB processes data in memory with columnar compression, so millions of rows complete in seconds.
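The Parse and Translate steps can be sketched in a few lines of Python. This is an illustrative model only, not Radhflow's actual translator; the operator names and config shape mirror the `data.filter` examples on this page:

```python
# Illustrative sketch of config -> SQL translation for data.filter.
# The operator table and the "input" source name are assumptions,
# not Radhflow internals.

OPS = {
    "equals": lambda f, v: f"{f} = {v!r}",
    "is_not_null": lambda f, v: f"{f} IS NOT NULL",
}

def translate_filter(config, source="input"):
    # Parse: walk the validated conditions; Translate: emit a WHERE clause.
    clauses = [
        OPS[c["op"]](c["field"], c.get("value"))
        for c in config["conditions"]["all"]
    ]
    return f"SELECT * FROM {source} WHERE " + " AND ".join(clauses)

query = translate_filter(
    {"conditions": {"all": [
        {"field": "email", "op": "is_not_null"},
        {"field": "status", "op": "equals", "value": "active"},
    ]}}
)
print(query)
# SELECT * FROM input WHERE email IS NOT NULL AND status = 'active'
```

Execution would then hand a query like this to DuckDB against the NDJSON inputs.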
## Common patterns

Most operations take a single Table input and produce a single Table output:
```yaml
nodes:
  clean-emails:
    type: data.filter
    config:
      conditions:
        all:
          - field: email
            op: is_not_null
```

Five exceptions:

- `data.join` takes two Table inputs (`left` and `right` ports) and produces one Table output.
- `data.pull` takes a Table input and produces a Value output (a single scalar).
- `data.collect` takes multiple Value inputs and produces a single list-typed Value.
- `data.concat` takes multiple Table inputs and produces one Table output.
- `data.partition` takes one Table input and produces two Table outputs (`matching` and `not_matching` ports).
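To make the port names concrete, here is a hypothetical edge wiring for the multi-port ops. The node names are invented; the port names (`left`, `right`, `matching`, `not_matching`) come from the list above, and the edge syntax matches the chaining example later on this page:

```yaml
# Hypothetical wiring -- node names are invented for illustration.
edges:
  - customers.output -> enrich.left          # data.join: two Table inputs
  - orders.output -> enrich.right
  - enrich.output -> split.input
  - split.matching -> active-path.input      # data.partition: two Table outputs
  - split.not_matching -> review-path.input
```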
## Chaining operations

Operations compose naturally. Connect the output of one to the input of the next:
```yaml
nodes:
  load-csv:
    type: file.csv
    config:
      path: leads.csv

  active-only:
    type: data.filter
    config:
      conditions:
        all:
          - field: status
            op: equals
            value: active

  by-score:
    type: data.sort
    config:
      by:
        - field: score
          direction: desc

  top-100:
    type: data.limit
    config:
      count: 100

edges:
  - load-csv.output -> active-only.input
  - active-only.output -> by-score.input
  - by-score.output -> top-100.input
```

This pipeline loads a CSV, keeps active leads, sorts by score descending, and takes the top 100. Four nodes, zero code.
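Conceptually, the whole chain collapses into one query. A hand-written DuckDB equivalent is shown below; the SQL Radhflow actually generates is not specified on this page:

```sql
-- Hand-written equivalent of the four-node pipeline above;
-- Radhflow's generated SQL may differ.
SELECT *
FROM read_csv_auto('leads.csv')
WHERE status = 'active'
ORDER BY score DESC
LIMIT 100;
```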
## Schema propagation

Data operations propagate schemas automatically. If the input table has fields `name`, `email`, `score`, the output schema reflects exactly what the operation produces: the same fields for filter and sort, new fields for map, aggregation columns for group. The type checker validates field references at parse time, before any data flows.
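A toy model of schema propagation, assuming schemas are maps from field name to type; the representation and rules here are illustrative, not Radhflow internals:

```python
# Toy schema propagation: a schema is a dict of field name -> type.
# Both the representation and the per-op rules are assumptions.

def propagate(op, schema, config):
    if op in ("data.filter", "data.sort", "data.limit", "data.dedup"):
        return dict(schema)  # row-preserving ops keep the shape unchanged
    if op == "data.select":
        # Raises KeyError on a bad field reference -- the parse-time check.
        return {f: schema[f] for f in config["fields"]}
    raise NotImplementedError(op)

schema = {"name": "VARCHAR", "email": "VARCHAR", "score": "DOUBLE"}
print(propagate("data.select", schema, {"fields": ["name", "score"]}))
# {'name': 'VARCHAR', 'score': 'DOUBLE'}
```

A real type checker would replace the `KeyError` with a named diagnostic pointing at the offending node and field.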
## When to use a custom node instead

Use built-in ops when the transformation maps to standard SQL. Use a custom node when:
- The logic requires external API calls or side effects.
- The transformation needs a library not available in DuckDB (e.g., ML inference).
- The operation is domain-specific and not generalizable.
For complex SQL that combines multiple operations in a single query, see SQL Transforms.