
Data Types

Radhflow has four primitive port types. Every input and output port declares one of these types. The type system enforces compatibility at edge validation time, before any code runs.

Value

A single typed value: string, number, or boolean.

42
"enterprise"
true

Use for constants, thresholds, feature flags, and single extracted values. Produced by value.literal nodes and data.pull (which extracts a single value from a table row).

Record

A single JSON object with named, typed fields.

{
  "name": "Acme Corp",
  "score": 87.5,
  "tier": "enterprise",
  "active": true
}

Use for single entities, configuration objects, and API response payloads. A Record is one row of data with named fields.

Table

An ordered collection of Records, stored as NDJSON (one JSON object per line). This is the primary data interchange format.

{"email":"a@example.com","score":92,"tier":"high"}
{"email":"b@example.com","score":45,"tier":"low"}
{"email":"c@example.com","score":78,"tier":"medium"}

Use for datasets, query results, CSV imports — any multi-row data. Most nodes consume and produce Tables.
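Because each line is a standalone JSON object, a Table can be read with ordinary JSON tooling. A minimal sketch (illustrative Python, not a Radhflow API):

```python
import json

ndjson = '''{"email":"a@example.com","score":92,"tier":"high"}
{"email":"b@example.com","score":45,"tier":"low"}
{"email":"c@example.com","score":78,"tier":"medium"}'''

# Each non-blank line is one Record (a JSON object).
rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]

high_scores = [r["email"] for r in rows if r["score"] >= 75]
# high_scores == ["a@example.com", "c@example.com"]
```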

Stream

A Table that arrives incrementally, row by row, rather than all at once. Semantically identical to Table.

{"event":"click","user":"u1","ts":"2025-01-15T10:00:00Z"}
{"event":"open","user":"u2","ts":"2025-01-15T10:00:01Z"}

Use for real-time feeds, event logs, and webhook payloads.

| Source port | Dest port | Compatible | Notes |
| --- | --- | --- | --- |
| Table | Table | Yes | Standard connection |
| Stream | Table | Yes | Interchangeable |
| Table | Stream | Yes | Interchangeable |
| Table | Value | Yes | Auto-pull from first row |
| Value | Value | Yes | Standard connection |
| Record | Record | Yes | Standard connection |
| Value | Table | No | Type mismatch |
| Record | Table | No | Type mismatch |

Table and Stream are fully interchangeable — a Table output can connect to a Stream input and vice versa. Value-to-Table connections are not allowed; use data.collect to gather Values into a Table.
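The matrix above can be sketched as a small predicate (illustrative Python, not Radhflow internals):

```python
def ports_compatible(src: str, dst: str) -> bool:
    """Port-level compatibility per the matrix above."""
    # Table and Stream are fully interchangeable.
    if {src, dst} <= {"Table", "Stream"}:
        return True
    # A Table can feed a Value port (auto-pull from the first row).
    if src == "Table" and dst == "Value":
        return True
    # Otherwise only identical types connect.
    return src == dst
```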

Tables are stored as NDJSON — Newline-Delimited JSON. Each line is a complete, valid JSON object.

{"email":"a@example.com","name":"Alice","score":92}
{"email":"b@example.com","name":"Bob","score":45}
{"email":"c@example.com","name":"Carol","score":78}

Rules:

  • Each line is a complete, valid JSON object.
  • Lines are separated by \n.
  • No blank lines between records.
  • Field order does not matter.
  • All records in a file share the same schema.

NDJSON is line-oriented, so it streams well and concatenates trivially. You can inspect files with standard Unix tools (head, wc -l, jq).
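"Concatenates trivially" means that appending one NDJSON file to another of the same schema yields a valid merged Table with no parsing or re-serialization. A small sketch:

```python
import json

part1 = '{"email":"a@example.com","score":92}\n'
part2 = '{"email":"b@example.com","score":45}\n'

# Concatenating NDJSON content yields another valid NDJSON file.
merged = part1 + part2
rows = [json.loads(line) for line in merged.splitlines()]
# len(rows) == 2
```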

Every NDJSON data file has a companion .schema.json file. For output.ndjson, the schema is output.schema.json.

{
  "email": {
    "type": "string",
    "required": true
  },
  "name": {
    "type": "string"
  },
  "score": {
    "type": "number"
  }
}

Schema companions travel with the data. They enable type checking across node boundaries without reading the data itself. See Schemas for the full reference.
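For example, a consumer can answer "which fields are required?" from the companion file alone, without reading the data. A sketch (the schema text mirrors the example above; per the field-options reference below, required defaults to true):

```python
import json

# The companion schema for output.ndjson (output.schema.json), as text.
schema_text = '''{
  "email": {"type": "string", "required": true},
  "name":  {"type": "string"},
  "score": {"type": "number"}
}'''

schema = json.loads(schema_text)

# Fields are required unless marked otherwise (required defaults to true).
required = [f for f, spec in schema.items() if spec.get("required", True)]
# required == ["email", "name", "score"]
```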

Each field in a schema declares one of these types:

| Type | JSON representation | Example |
| --- | --- | --- |
| string | string | "hello" |
| number | number (int or float) | 42, 3.14 |
| boolean | boolean | true |
| timestamp | ISO 8601 string | "2025-01-15T10:00:00Z" |
| null | null | null |
| list | array | [1, 2, 3] |
| record | object | {"a": 1} |
list and record fields nest via items and schema:

schema:
  tags:
    type: list
    items:
      type: string
  address:
    type: record
    schema:
      city:
        type: string
      zip:
        type: string

Fields also accept required, default, nullable, and enum options:

schema:
  email:
    type: string
    required: true # default: true
  score:
    type: number
    default: 0 # used when field is missing
    nullable: true # allows null values
  status:
    type: string
    enum: [active, inactive, pending]
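The field options above suggest a per-record check along these lines (an illustrative sketch, not the engine's validator; the field specs mirror the example):

```python
FIELDS = {
    "email":  {"type": "string", "required": True},
    "score":  {"type": "number", "default": 0, "nullable": True},
    "status": {"type": "string", "enum": ["active", "inactive", "pending"]},
}

def validate(record: dict) -> dict:
    """Apply required/default/nullable/enum rules to one record."""
    out = dict(record)
    for name, spec in FIELDS.items():
        if name not in out:
            if "default" in spec:
                out[name] = spec["default"]   # fill missing field from default
            elif spec.get("required", True):  # required defaults to true
                raise ValueError(f"missing required field: {name}")
            continue
        value = out[name]
        if value is None:
            if not spec.get("nullable", False):
                raise ValueError(f"{name} may not be null")
            continue
        if "enum" in spec and value not in spec["enum"]:
            raise ValueError(f"{name}: {value!r} not in enum")
    return out
```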

The type checker enforces these rules at edge validation time:

| Source | Destination | Result | Reason |
| --- | --- | --- | --- |
| number | number | OK | int and float are both number |
| string | timestamp | OK | timestamps are stored as strings |
| timestamp | string | OK | reverse also holds |
| string | number | Error | no implicit parsing |
| number | string | Error | no implicit conversion |
| boolean | string | Error | no implicit conversion |
| enum subset | enum | OK | source values are all valid |
| enum superset | enum | Warning | source may produce unexpected values |
| missing required field | | Error | contract violation |
| extra fields | | OK | consumer ignores extras |

No implicit type conversions happen at runtime. If you need to convert a string to a number, use data.map with the to_number filter.

Each edge connects a source port to a destination port. The type checker validates that:

  1. The port types are compatible (see matrix above).
  2. Every required field in the destination schema exists in the source schema.
  3. Field types match (using coercion rules).
  4. Enum constraints are satisfied.

Extra fields in the source are ignored — the destination only sees what it declares. This makes pipelines resilient to upstream schema additions.
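The schema-level checks (steps 2–4) together with the coercion table might be sketched as follows (illustrative Python; the real checker also distinguishes warnings, e.g. for enum supersets, which this boolean version simplifies to failure):

```python
# Coercion per the rules above: string<->timestamp is allowed;
# everything else must match exactly (no implicit conversions).
COERCIBLE = {("string", "timestamp"), ("timestamp", "string")}

def field_ok(src: dict, dst: dict) -> bool:
    """Check one field of the destination schema against the source."""
    if src["type"] != dst["type"] and (src["type"], dst["type"]) not in COERCIBLE:
        return False
    # Enum check: every value the source can produce must be valid downstream.
    if "enum" in dst and "enum" in src:
        return set(src["enum"]) <= set(dst["enum"])
    return True

def edge_ok(src_schema: dict, dst_schema: dict) -> bool:
    """Validate an edge: required fields present, field types compatible."""
    for name, dst_field in dst_schema.items():
        if name not in src_schema:
            if dst_field.get("required", True):
                return False  # step 2: contract violation
            continue
        if not field_ok(src_schema[name], dst_field):
            return False  # steps 3-4: type or enum mismatch
    return True  # extra source fields are simply ignored
```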