Data Operations

Radhflow ships twelve declarative data operations. Each is a node type you drop into pipeline.rf.yaml and configure with YAML. No code runs: Radhflow translates the config into DuckDB SQL at execution time.

The philosophy: built-in ops for known problems, custom code only for novel logic. Filtering, sorting, joining, grouping — these are solved patterns. You configure them. When your problem is genuinely unique, you write a custom node.

| Operation | Description | SQL equivalent |
| --- | --- | --- |
| `data.filter` | Keep rows matching conditions | `WHERE` |
| `data.map` | Compute, rename, or transform fields | `SELECT expr AS name` |
| `data.sort` | Order rows by one or more fields | `ORDER BY` |
| `data.limit` | Take first N rows, optional offset | `LIMIT` / `OFFSET` |
| `data.dedup` | Remove duplicates on key fields | `ROW_NUMBER() OVER (PARTITION BY ...)` |
| `data.join` | Combine two tables on matching keys | `JOIN` |
| `data.group` | Group rows and aggregate | `GROUP BY` |
| `data.select` | Keep only named fields | `SELECT col1, col2` |
| `data.concat` | Stack multiple tables vertically | `UNION ALL BY NAME` |
| `data.partition` | Split rows into two groups | `WHERE` / `WHERE NOT` |
| `data.pull` | Extract a single field from the first row as a Value | first-row field access |
| `data.collect` | Gather multiple Value inputs into a list | array aggregation |
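As a sketch of the configuration style, a `data.dedup` node might look like the fragment below. Note the `keys` option name is an assumption for illustration; this page documents condition syntax only for `data.filter`.

```yaml
# Hypothetical sketch: the 'keys' config option is assumed, not documented here.
nodes:
  one-per-email:
    type: data.dedup
    config:
      keys:
        - email
# Per the table above, this maps to something like
# ROW_NUMBER() OVER (PARTITION BY email), keeping the first row per key.
```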

Every data operation follows the same lifecycle:

  1. Parse. Radhflow reads your YAML config and validates it.
  2. Translate. The config becomes a DuckDB SQL query.
  3. Execute. DuckDB runs the query against NDJSON input files.
  4. Write. Results go to an NDJSON output file with a companion .schema.json.

All operations handle nulls, type coercion, and large datasets automatically. DuckDB processes data in memory with columnar compression; datasets of millions of rows typically complete in seconds.
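To make the lifecycle concrete, here is an annotated walk-through for a small filter node. The SQL in the comments is illustrative only: the table above gives rough SQL equivalents, and the exact query Radhflow emits is internal.

```yaml
# Illustrative lifecycle for a data.filter node (the SQL is a sketch,
# not the exact query Radhflow generates).
nodes:
  active-only:
    type: data.filter
    config:
      conditions:
        all:
          - field: status
            op: equals
            value: active
# 1. Parse:     the YAML above is validated against data.filter's schema.
# 2. Translate: the conditions become a WHERE clause, roughly
#                 SELECT * FROM read_ndjson_auto('input.ndjson')
#                 WHERE status = 'active'
# 3. Execute:   DuckDB runs the query in memory.
# 4. Write:     results land in output.ndjson plus output.schema.json.
```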

Most operations take a single Table input and produce a single Table output:

```yaml
nodes:
  clean-emails:
    type: data.filter
    config:
      conditions:
        all:
          - field: email
            op: is_not_null
```

Five node types break this pattern:

  • data.join takes two Table inputs (left and right ports) and produces one Table output.
  • data.pull takes a Table input and produces a Value output (a single scalar).
  • data.collect takes multiple Value inputs and produces a single list-typed Value.
  • data.concat takes multiple Table inputs and produces one Table output.
  • data.partition takes one Table input and produces two Table outputs (matching and not_matching).
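For example, wiring a `data.join` uses the left and right input ports named above. In the sketch below, the `on` config key and the upstream node names are assumptions for illustration; only the port names and edge syntax come from this page.

```yaml
# Hypothetical sketch: the 'on' config key and the load-users/load-orders
# upstream nodes are assumed. The left/right ports are documented above.
nodes:
  users-with-orders:
    type: data.join
    config:
      on:
        - left: user_id
          right: id
edges:
  - load-users.output -> users-with-orders.left
  - load-orders.output -> users-with-orders.right
```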

Operations compose naturally. Connect the output of one to the input of the next:

```yaml
nodes:
  load-csv:
    type: file.csv
    config:
      path: leads.csv
  active-only:
    type: data.filter
    config:
      conditions:
        all:
          - field: status
            op: equals
            value: active
  by-score:
    type: data.sort
    config:
      by:
        - field: score
          direction: desc
  top-100:
    type: data.limit
    config:
      count: 100
edges:
  - load-csv.output -> active-only.input
  - active-only.output -> by-score.input
  - by-score.output -> top-100.input
```

This pipeline loads a CSV, keeps active leads, sorts by score descending, and takes the top 100. Four nodes, zero code.

Data operations propagate schemas automatically. If the input table has fields name, email, score, the output schema reflects exactly what the operation produces — same fields for filter and sort, new fields for map, aggregation columns for group. The type checker validates field references at parse time, before any data flows.
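As a sketch of how schema narrowing works, assume a `data.select` node whose field list lives under a `fields` key. That key name is an assumption; this page does not document `data.select`'s config.

```yaml
# Hypothetical sketch: the 'fields' config key is assumed.
nodes:
  contact-info:
    type: data.select
    config:
      fields: [name, email]
# Input schema:  name, email, score
# Output schema: name, email
# Referencing a field absent from the input (say, phone) would fail
# type checking at parse time, before any data flows.
```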

Use built-in ops when the transformation maps to standard SQL. Use a custom node when:

  • The logic requires external API calls or side effects.
  • The transformation needs a library not available in DuckDB (e.g., ML inference).
  • The operation is domain-specific and not generalizable.

For complex SQL that combines multiple operations in a single query, see SQL Transforms.