# Data Types
Radhflow has four primitive port types. Every input and output port declares one of these types. The type system enforces compatibility at edge validation time, before any code runs.
## The four types

### Value

A single typed value: string, number, or boolean.

```
42
"enterprise"
true
```

Use for constants, thresholds, feature flags, and single extracted values. Produced by value.literal nodes and data.pull (which extracts a single value from a table row).
### Record

A single JSON object with named, typed fields.

```
{
  "name": "Acme Corp",
  "score": 87.5,
  "tier": "enterprise",
  "active": true
}
```

Use for single entities, configuration objects, and API response payloads. A Record is one row of data with named fields.
### Table

An ordered collection of Records, stored as NDJSON (one JSON object per line). This is the primary data interchange format.

```
{"email":"a@example.com","score":92,"tier":"high"}
{"email":"b@example.com","score":45,"tier":"low"}
{"email":"c@example.com","score":78,"tier":"medium"}
```

Use for datasets, query results, CSV imports — any multi-row data. Most nodes consume and produce Tables.
### Stream

A Table that arrives incrementally, row by row, rather than all at once. Semantically identical to Table.

```
{"event":"click","user":"u1","ts":"2025-01-15T10:00:00Z"}
{"event":"open","user":"u2","ts":"2025-01-15T10:00:01Z"}
```

Use for real-time feeds, event logs, and webhook payloads.
## Type compatibility matrix

| Source port | Dest port | Compatible | Notes |
|---|---|---|---|
| Table | Table | Yes | Standard connection |
| Stream | Table | Yes | Interchangeable |
| Table | Stream | Yes | Interchangeable |
| Table | Value | Yes | Auto-pull from first row |
| Value | Value | Yes | Standard connection |
| Record | Record | Yes | Standard connection |
| Value | Table | No | Type mismatch |
| Record | Table | No | Type mismatch |
Table and Stream are fully interchangeable — a Table output can connect to a Stream input and vice versa. Value-to-Table connections are not allowed; use data.collect to gather Values into a Table.
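As an illustration of how a checker might encode this matrix, here is a minimal Python sketch. The function name and the string representation of port types are assumptions for the example, not Radhflow's actual API:

```python
# Minimal sketch of the port-level compatibility matrix above.
# Not Radhflow's implementation; port types are modeled as plain strings.
COMPATIBLE = {
    ("Table", "Table"), ("Value", "Value"), ("Record", "Record"),
    ("Stream", "Table"), ("Table", "Stream"),  # fully interchangeable
    ("Stream", "Stream"),                      # implied by interchangeability
    ("Table", "Value"),                        # auto-pull from first row
}

def ports_compatible(source: str, dest: str) -> bool:
    return (source, dest) in COMPATIBLE

assert ports_compatible("Table", "Stream")
assert not ports_compatible("Value", "Table")  # use data.collect instead
```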
## NDJSON format

Tables are stored as NDJSON — Newline-Delimited JSON. Each line is a complete, valid JSON object.

```
{"email":"a@example.com","name":"Alice","score":92}
{"email":"b@example.com","name":"Bob","score":45}
{"email":"c@example.com","name":"Carol","score":78}
```

Rules:
- Each line is a complete, valid JSON object.
- Lines are separated by `\n`.
- No blank lines between records.
- Field order does not matter.
- All records in a file share the same schema.
NDJSON is line-oriented, so it streams well and concatenates trivially. You can inspect files with standard Unix tools (head, wc -l, jq).
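For example, the line-oriented layout means a Table can be consumed lazily. A short Python sketch; the helper name read_ndjson is ours, not part of Radhflow:

```python
import json
from typing import Iterator

def read_ndjson(path: str) -> Iterator[dict]:
    """Yield one Record at a time; no need to load the whole Table."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # a well-formed file has no blank lines
                yield json.loads(line)

# Concatenation is trivial: appending b.ndjson to a.ndjson yields another
# valid NDJSON file, provided both share the same schema.
```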
## Schema companions

Every NDJSON data file has a companion .schema.json file. For output.ndjson, the schema is output.schema.json.

```
{
  "email": { "type": "string", "required": true },
  "name":  { "type": "string" },
  "score": { "type": "number" }
}
```

Schema companions travel with the data. They enable type checking across node boundaries without reading the data itself. See Schemas for the full reference.
## Field types

Each field in a schema declares one of these types:
| Type | JSON representation | Example |
|---|---|---|
| string | string | "hello" |
| number | number (int or float) | 42, 3.14 |
| boolean | boolean | true |
| timestamp | ISO 8601 string | "2025-01-15T10:00:00Z" |
| null | null | null |
| list | array | [1, 2, 3] |
| record | object | {"a": 1} |
## Nested types

```
schema:
  tags:
    type: list
    items:
      type: string
  address:
    type: record
    schema:
      city:
        type: string
      zip:
        type: string
```

## Field modifiers

```
schema:
  email:
    type: string
    required: true   # default: true
  score:
    type: number
    default: 0       # used when field is missing
    nullable: true   # allows null values
  status:
    type: string
    enum: [active, inactive, pending]
```
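To make the modifier semantics concrete, here is an illustrative Python sketch of how the three modifiers could be applied to one incoming record. The function is ours, not Radhflow's validator:

```python
def apply_modifiers(record: dict, schema: dict) -> dict:
    """Illustrative application of required, default, nullable, and enum."""
    out = dict(record)
    for name, spec in schema.items():
        if name not in out:
            if "default" in spec:
                out[name] = spec["default"]   # used when field is missing
            elif spec.get("required", True):  # required defaults to true
                raise ValueError(f"missing required field: {name}")
            else:
                continue
        if out[name] is None:
            if not spec.get("nullable", False):
                raise ValueError(f"field {name} may not be null")
            continue
        if "enum" in spec and out[name] not in spec["enum"]:
            raise ValueError(f"{name}={out[name]!r} is not a valid enum value")
    return out
```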
## Type coercion rules

The type checker enforces these rules at edge validation time:
| Source | Destination | Result | Reason |
|---|---|---|---|
| number | number | OK | int and float are both number |
| string | timestamp | OK | timestamps are stored as strings |
| timestamp | string | OK | reverse also holds |
| string | number | Error | no implicit parsing |
| number | string | Error | no implicit conversion |
| boolean | string | Error | no implicit conversion |
| enum subset | enum | OK | source values are all valid |
| enum superset | enum | Warning | source may produce unexpected values |
| missing required field | — | Error | contract violation |
| extra fields | — | OK | consumer ignores extras |
No implicit type conversions happen at runtime. If you need to convert a string to a number, use data.map with the to_number filter.
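A compact sketch of the field-level rule set from the table, again in illustrative Python with assumed names:

```python
# Field-level check from the coercion table; not Radhflow's type checker.
ALLOWED_CROSS = {
    ("string", "timestamp"),  # timestamps are stored as strings
    ("timestamp", "string"),  # reverse also holds
}

def field_types_ok(source: str, dest: str) -> bool:
    """Identical types always pass; only the timestamp/string pair crosses."""
    return source == dest or (source, dest) in ALLOWED_CROSS

assert field_types_ok("timestamp", "string")
assert not field_types_ok("string", "number")  # no implicit parsing
```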
## How types flow through edges

Each edge connects a source port to a destination port. The type checker validates that:
- The port types are compatible (see matrix above).
- Every required field in the destination schema exists in the source schema.
- Field types match (using coercion rules).
- Enum constraints are satisfied.
Extra fields in the source are ignored — the destination only sees what it declares. This makes pipelines resilient to upstream schema additions.
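Putting the pieces together, a hedged end-to-end sketch of these checks. The schema shape and helper names are assumptions; field_types_ok is the sketch from the coercion section above:

```python
def validate_edge(src_schema: dict, dst_schema: dict) -> list[str]:
    """Illustrative edge validation: required fields, field types, enums."""
    errors = []
    for name, spec in dst_schema.items():
        src = src_schema.get(name)
        if src is None:
            if spec.get("required", True):
                errors.append(f"missing required field: {name}")
            continue
        if not field_types_ok(src["type"], spec["type"]):
            errors.append(f"type mismatch on field: {name}")
        if "enum" in spec and "enum" in src:
            extra = set(src["enum"]) - set(spec["enum"])
            if extra:  # enum superset: a warning in the rules table
                errors.append(f"warning: {name} may produce {sorted(extra)}")
    # Extra source fields are ignored: only dst_schema is iterated.
    return errors
```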