File I/O
Read and write local files — CSV, JSON, NDJSON, Parquet.
Supported formats
| Format | Extensions | Read | Write | Notes |
|---|---|---|---|---|
| CSV | .csv, .tsv | Yes | Yes | Configurable delimiter, headers, encoding. |
| JSON | .json | Yes | Yes | Array of objects or single object. |
| NDJSON | .ndjson, .jsonl | Yes | Yes | One JSON object per line. Native interchange format. |
| Parquet | .parquet | Yes | No | Columnar binary format. Read via DuckDB. |
Config fields

| Field | Required | Default | Description |
|---|---|---|---|
| path | Yes | — | File path, relative to project root. |
| format | No | Inferred from extension | One of csv, json, ndjson, parquet. |
| encoding | No | utf-8 | File encoding. Applies to CSV and JSON. |
| headers | No | true | Whether the first row contains column names. CSV only. |
| options.delimiter | No | , | Column separator for CSV/TSV. |
Paths are resolved relative to the project root (the directory containing flow.yaml). Absolute paths are rejected.
Reading files
CSV with headers
```yaml
read-leads:
  type: source
  op: file.read
  params:
    path: data/leads.csv
    format: csv
    options:
      delimiter: ","
      header: true
    encoding: utf-8
  outputs:
    rows:
      type: Table
      schema:
        name: { type: string }
        email: { type: string }
        score: { type: number }
```

JSON array
```yaml
read-config:
  type: source
  op: file.read
  params:
    path: config/settings.json
    format: json
  outputs:
    settings: { type: Record }
```

NDJSON
```yaml
read-events:
  type: source
  op: file.read
  params:
    path: logs/events.ndjson
  outputs:
    events: { type: Table }
```

Parquet
```yaml
read-warehouse:
  type: source
  op: file.read
  params:
    path: warehouse/orders.parquet
  outputs:
    orders: { type: Table }
```

Writing files
Output to CSV
```yaml
write-results:
  type: deterministic
  op: file.write
  params:
    path: output/qualified.csv
    format: csv
    options: { delimiter: ",", header: true }
  inputs:
    data: { type: Table, from: ref(filter-leads.qualified) }
```

Output to JSON
```yaml
write-json:
  type: deterministic
  op: file.write
  params:
    path: output/report.json
    format: json
  inputs:
    data: { type: Table, from: ref(score-leads.scored) }
```

You can also write NDJSON by setting format: ndjson. The same options fields apply across all formats.
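For instance, an NDJSON write might look like the following sketch (the node name and the upstream ref are illustrative, not part of any shipped flow):

```yaml
# Hypothetical example: stream a table out as one JSON object per line.
write-events:
  type: deterministic
  op: file.write
  params:
    path: output/events.ndjson
    format: ndjson
  inputs:
    data: { type: Table, from: ref(read-events.events) }
```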
File path resolution
All paths are relative to the project root — the directory that contains your flow.yaml. Given this project structure:
```
my-project/
  flow.yaml
  data/
    leads.csv
  output/
```

A path of data/leads.csv resolves to my-project/data/leads.csv. Radhflow creates output directories automatically if they don’t exist.
Troubleshooting
File not found. Check that the path is relative to the project root, not your current working directory. Run rf validate to catch path issues before execution.
Encoding issues. If you see garbled characters, set encoding explicitly. Common values: utf-8, utf-16, latin-1, ascii.
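As a sketch, reading a legacy Latin-1 export might look like this (the node name and file path are made up for illustration):

```yaml
# Hypothetical example: override the utf-8 default for a legacy CSV export.
read-legacy:
  type: source
  op: file.read
  params:
    path: data/legacy_export.csv
    format: csv
    encoding: latin-1
  outputs:
    rows: { type: Table }
```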
Permission errors. Radhflow runs in a sandbox. The file must be inside the project directory or a path explicitly mounted into the runtime. Files outside the project root are inaccessible by design.
CSV delimiter mismatch. If your CSV uses tabs, semicolons, or pipes as separators, set options.delimiter to the correct character. TSV files (.tsv) default to tab.
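For example, a pipe-delimited file could be read with a sketch like this (node name and path are illustrative):

```yaml
# Hypothetical example: a CSV that uses "|" as its column separator.
read-accounts:
  type: source
  op: file.read
  params:
    path: data/accounts.csv
    format: csv
    options:
      delimiter: "|"
  outputs:
    rows: { type: Table }
```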