Data Types

Radhflow has four primitive port types. Every input and output port declares one of these types. The type system enforces compatibility at edge validation time, before any code runs.

Value

A single primitive: string, number, or boolean. Used for configuration parameters, thresholds, flags, and scalar results.

42
"enterprise"
true

When to use: constants, thresholds, feature flags, single extracted values. Produced by value.literal nodes and by data.pull, which extracts a single value from a table row.

Port declaration:

outputs:
  threshold:
    type: value
    schema:
      threshold:
        type: number

Record

A single key-value object. Fields have typed values.

{
  "name": "Acme Corp",
  "score": 87.5,
  "tier": "enterprise",
  "active": true
}

When to use: single entities, configuration objects, API response payloads. A Record is one row of data with named fields.

Port declaration:

outputs:
  profile:
    type: record
    schema:
      name:
        type: string
        required: true
      score:
        type: number
      tier:
        type: string
        enum: [enterprise, startup, smb]

Table

An ordered list of Records, stored as NDJSON (one JSON object per line). This is the primary data interchange format.

{"email":"a@example.com","score":92,"tier":"high"}
{"email":"b@example.com","score":45,"tier":"low"}
{"email":"c@example.com","score":78,"tier":"medium"}

When to use: datasets, query results, CSV imports, any multi-row data. Most nodes consume and produce Tables.

Port declaration:

inputs:
  records:
    type: table
    schema:
      email:
        type: string
        required: true
      score:
        type: number
      tier:
        type: string
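Because a Table is just NDJSON, reading one is a few lines of code. A minimal sketch in Python, assuming each row is a plain JSON object (read_table is an illustrative helper, not part of Radhflow's API):

```python
import json

def read_table(ndjson_text):
    """Parse NDJSON text into a list of Record dicts, one per non-empty line."""
    return [json.loads(line) for line in ndjson_text.splitlines() if line.strip()]

rows = read_table(
    '{"email":"a@example.com","score":92,"tier":"high"}\n'
    '{"email":"b@example.com","score":45,"tier":"low"}\n'
)
# rows[0] is a plain dict with named, typed fields
```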

Stream

Semantically identical to Table but signals incremental processing. A Stream is a Table that arrives row-by-row rather than all at once.

{"event":"click","user":"u1","ts":"2025-01-15T10:00:00Z"}
{"event":"open","user":"u2","ts":"2025-01-15T10:00:01Z"}

When to use: real-time feeds, event logs, webhook payloads. Table and Stream are interchangeable for compatibility checking — a Table output can connect to a Stream input and vice versa.

Port declaration:

inputs:
  events:
    type: stream
    schema:
      event:
        type: string
      user:
        type: string
      ts:
        type: timestamp
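The interchangeability follows from the shared shape: any consumer that iterates over rows works on both types. A sketch assuming rows arrive as NDJSON lines (stream_rows is a hypothetical helper):

```python
import json

def stream_rows(lines):
    """Yield one Record at a time; a Stream is a Table consumed lazily."""
    for line in lines:
        if line.strip():
            yield json.loads(line)

events = [
    '{"event":"click","user":"u1","ts":"2025-01-15T10:00:00Z"}',
    '{"event":"open","user":"u2","ts":"2025-01-15T10:00:01Z"}',
]
# A consumer that accepts any iterable of rows handles both Table and Stream:
first = next(stream_rows(events))
```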

Field types

Each field in a schema declares one of these types:

| Type | JSON representation | Example |
| --- | --- | --- |
| string | string | "hello" |
| number | number (int or float) | 42, 3.14 |
| boolean | boolean | true |
| timestamp | ISO 8601 string | "2025-01-15T10:00:00Z" |
| null | null | null |
| list | array | [1, 2, 3] |
| record | object | {"a": 1} |

Nested types:

schema:
  tags:
    type: list
    items:
      type: string
  address:
    type: record
    schema:
      city:
        type: string
      zip:
        type: string

Field attributes:

schema:
  email:
    type: string
    required: true   # default: true
  score:
    type: number
    default: 0       # used when field is missing
    nullable: true   # allows null values
  status:
    type: string
    enum: [active, inactive, pending]
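These attributes compose into a per-row check. A minimal sketch of how required, default, nullable, and enum might be applied (apply_field_rules and the dict-based schema encoding are assumptions, not the engine's API):

```python
def apply_field_rules(row, schema):
    """Validate one Record against field attributes and fill defaults."""
    out = {}
    for name, spec in schema.items():
        if name not in row:
            if "default" in spec:
                out[name] = spec["default"]   # used when the field is missing
            elif spec.get("required", True):  # required defaults to true
                raise ValueError(f"missing required field: {name}")
            continue
        value = row[name]
        if value is None and not spec.get("nullable", False):
            raise ValueError(f"field {name} is not nullable")
        if "enum" in spec and value not in spec["enum"]:
            raise ValueError(f"{value!r} is not a valid {name}")
        out[name] = value
    return out

schema = {
    "email": {"type": "string", "required": True},
    "score": {"type": "number", "default": 0, "nullable": True},
    "status": {"type": "string", "enum": ["active", "inactive", "pending"]},
}
filled = apply_field_rules({"email": "a@example.com", "status": "active"}, schema)
```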

Coercion rules

The type checker enforces these rules at edge validation time:

| Source | Destination | Result | Reason |
| --- | --- | --- | --- |
| number | number | OK | int and float are both number |
| string | timestamp | OK | timestamps are stored as strings |
| timestamp | string | OK | reverse also holds |
| string | number | Error | no implicit parsing |
| number | string | Error | no implicit conversion |
| boolean | string | Error | no implicit conversion |
| enum subset | enum | OK | source values are all valid |
| enum superset | enum | Warning | source may produce unexpected values |
| missing required field | | Error | contract violation |
| extra fields | | OK | consumer ignores extras |
Port compatibility

| Source port | Dest port | Compatible | Notes |
| --- | --- | --- | --- |
| table | table | Yes | Standard connection |
| stream | table | Yes | Interchangeable |
| table | stream | Yes | Interchangeable |
| table | value | Yes | Auto-pull from first row |
| value | value | Yes | Standard connection |
| record | record | Yes | Standard connection |
| value | table | No | Type mismatch |
| record | table | No | Type mismatch |
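The compatibility table is small enough to encode as a set of allowed pairs. A sketch (the stream-to-stream pair is not listed above but is implied by interchangeability; the names here are illustrative):

```python
# Allowed (source, destination) port-type pairs from the compatibility table.
# stream -> stream is implied by Table/Stream interchangeability.
COMPATIBLE = {
    ("table", "table"), ("stream", "table"), ("table", "stream"),
    ("stream", "stream"),
    ("table", "value"),    # auto-pull from first row
    ("value", "value"), ("record", "record"),
}

def ports_compatible(src_type, dst_type):
    """Check one edge's port types against the allowed pairs."""
    return (src_type, dst_type) in COMPATIBLE
```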

Edge validation

Each edge connects a source port to a destination port. The type checker validates that:

  1. The port types are compatible (see table above).
  2. Every required field in the destination schema exists in the source schema.
  3. Field types match (using coercion rules).
  4. Enum constraints are satisfied.

Extra fields in the source are ignored — the destination only sees what it declares. This makes pipelines resilient to upstream schema additions.
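Checks 2 through 4 can be sketched as a single pass over the destination schema. The coercion set and function shape below are illustrative assumptions, and port-type compatibility (check 1) is taken to have passed already:

```python
# string/timestamp coerce both ways; all other mismatches are errors.
COERCIBLE = {("string", "timestamp"), ("timestamp", "string")}

def validate_edge(src_fields, dst_fields):
    """Return a list of problem strings for one edge (empty list = valid)."""
    errors = []
    for name, dst_spec in dst_fields.items():
        src_spec = src_fields.get(name)
        if src_spec is None:
            if dst_spec.get("required", True):
                errors.append(f"missing required field: {name}")
            continue
        s, d = src_spec["type"], dst_spec["type"]
        if s != d and (s, d) not in COERCIBLE:
            errors.append(f"{name}: cannot coerce {s} to {d}")
        if "enum" in dst_spec and "enum" in src_spec:
            if set(src_spec["enum"]) - set(dst_spec["enum"]):
                # A Warning in the engine's table; flagged here for simplicity.
                errors.append(f"{name}: source enum is a superset")
    # Extra source fields are ignored, so upstream additions never break an edge.
    return errors

src = {"email": {"type": "string"}, "score": {"type": "number"},
       "extra": {"type": "string"}}
dst = {"email": {"type": "string", "required": True},
       "score": {"type": "string"}}
problems = validate_edge(src, dst)
```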