File I/O

Read and write local files — CSV, JSON, NDJSON, Parquet.

| Format | Extensions | Read | Write | Notes |
|---|---|---|---|---|
| CSV | `.csv`, `.tsv` | Yes | Yes | Configurable delimiter, headers, encoding. |
| JSON | `.json` | Yes | Yes | Array of objects or a single object. |
| NDJSON | `.ndjson`, `.jsonl` | Yes | Yes | One JSON object per line. Native interchange format. |
| Parquet | `.parquet` | Yes | No | Columnar binary format. Read via DuckDB. |
| Field | Required | Default | Description |
|---|---|---|---|
| `path` | Yes | — | File path, relative to the project root. |
| `format` | No | Inferred from extension | One of `csv`, `json`, `ndjson`, `parquet`. |
| `encoding` | No | `utf-8` | File encoding. Applies to CSV and JSON. |
| `headers` | No | `true` | Whether the first row contains column names. CSV only. |
| `options.delimiter` | No | `,` | Column separator for CSV/TSV. |

Paths are resolved relative to the project root (the directory containing flow.yaml). Absolute paths are rejected.

flow.yaml

```yaml
read-leads:
  type: source
  op: file.read
  params:
    path: data/leads.csv
    format: csv
    encoding: utf-8
    headers: true
    options:
      delimiter: ","
  outputs:
    rows:
      type: Table
      schema:
        name: { type: string }
        email: { type: string }
        score: { type: number }

read-config:
  type: source
  op: file.read
  params:
    path: config/settings.json
    format: json
  outputs:
    settings: { type: Record }

read-events:
  type: source
  op: file.read
  params:
    path: logs/events.ndjson
  outputs:
    events: { type: Table }

read-warehouse:
  type: source
  op: file.read
  params:
    path: warehouse/orders.parquet
  outputs:
    orders: { type: Table }

write-results:
  type: deterministic
  op: file.write
  params:
    path: output/qualified.csv
    format: csv
    headers: true
    options: { delimiter: "," }
  inputs:
    data: { type: Table, from: ref(filter-leads.qualified) }

write-json:
  type: deterministic
  op: file.write
  params:
    path: output/report.json
    format: json
  inputs:
    data: { type: Table, from: ref(score-leads.scored) }
```

You can also write NDJSON by setting `format: ndjson`. The field reference above applies to both reads and writes; format-specific fields such as `headers` and `options.delimiter` only take effect for the formats noted in the table.
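For example, an NDJSON write step might look like the sketch below (the `export-events` step name and `output/events.ndjson` path are illustrative, not part of any shipped example):

```yaml
# Hypothetical step: write a Table as NDJSON, one JSON object per line.
export-events:
  type: deterministic
  op: file.write
  params:
    path: output/events.ndjson
    format: ndjson
  inputs:
    data: { type: Table, from: ref(read-events.events) }
```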

All paths are relative to the project root — the directory that contains your flow.yaml. Given this project structure:

```
my-project/
├── flow.yaml
├── data/
│   └── leads.csv
└── output/
```

A path of data/leads.csv resolves to my-project/data/leads.csv. Radhflow creates output directories automatically if they don’t exist.

File not found. Check that the path is relative to the project root, not your current working directory. Run rf validate to catch path issues before execution.

Encoding issues. If you see garbled characters, set encoding explicitly. Common values: utf-8, utf-16, latin-1, ascii.
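For instance, a legacy Latin-1 export could be read with an explicit encoding like this (the step name and path are illustrative):

```yaml
# Hypothetical read of a Latin-1 encoded CSV export.
read-legacy:
  type: source
  op: file.read
  params:
    path: data/legacy-export.csv
    format: csv
    encoding: latin-1
  outputs:
    rows: { type: Table }
```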

Permission errors. Radhflow runs in a sandbox. The file must be inside the project directory or a path explicitly mounted into the runtime. Files outside the project root are inaccessible by design.

CSV delimiter mismatch. If your CSV uses tabs, semicolons, or pipes as separators, set options.delimiter to the correct character. TSV files (.tsv) default to tab.
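As a sketch, a semicolon-delimited CSV (common in European locales) could be read like this (the step name and path are illustrative):

```yaml
# Hypothetical read of a semicolon-delimited CSV.
read-eu-report:
  type: source
  op: file.read
  params:
    path: data/report.csv
    format: csv
    options:
      delimiter: ";"
  outputs:
    rows: { type: Table }
```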