Pipelines that run without AI
One file. A complete pipeline.
```yaml
name: lead-scoring
nodes:
  ingest-leads:
    type: source
    connector: csv
    config:
      path: ./data/leads.csv
    outputs:
      leads: { type: table, schema: ./schemas/lead.schema.json }

  enrich-company:
    type: deterministic
    runtime: python
    inputs:
      leads: { from: ingest-leads.leads }
    outputs:
      enriched: { type: table, schema: ./schemas/enriched.schema.json }

  score:
    type: deterministic
    runtime: python
    inputs:
      enriched: { from: enrich-company.enriched }
    outputs:
      scored: { type: table, schema: ./schemas/scored.schema.json }

  push-to-crm:
    type: deterministic
    runtime: python
    inputs:
      scored: { from: score.scored }
    config:
      filter: "score >= 80"
      endpoint: https://api.crm.example/leads
```

This is a real pipeline: four nodes, each with typed inputs and outputs, connected by edges that enforce schema contracts.
Every node runs deterministic code — Python scripts, SQL transforms, API calls. No LLM is invoked at runtime. AI wrote the code once; now it executes the same way every time.
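As an illustration (not Radhflow's actual generated code), a scoring node such as `nodes/score/main.py` could be a plain script that applies fixed rules to each record; the field names and thresholds below are hypothetical:

```python
"""Hypothetical sketch of a deterministic scoring node.

Assumes each enriched lead is a dict with "employee_count" and
"industry" fields; the real generated code may differ.
"""

def score_lead(lead: dict) -> dict:
    """Pure function: the same input dict always yields the same score."""
    score = 0
    if lead.get("employee_count", 0) >= 100:
        score += 50
    if lead.get("industry") == "software":
        score += 40
    return {**lead, "score": score}

def run(enriched: list[dict]) -> list[dict]:
    # Map over the input table; no network, no randomness, no LLM.
    return [score_lead(lead) for lead in enriched]

if __name__ == "__main__":
    sample = [{"name": "Acme", "employee_count": 250, "industry": "software"}]
    print(run(sample))  # → [{'name': 'Acme', 'employee_count': 250, 'industry': 'software', 'score': 90}]
```

Because the node is a pure function of its input, rerunning it on the same table always produces the same output.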
The schemas in ./schemas/ define exactly what data flows between nodes. A type mismatch between enrich-company.enriched and score.enriched is caught before anything runs.
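A pre-run contract check of this kind can be sketched in a few lines of Python. The flat `{"fields": {name: type}}` schema shape and the `check_contract` helper are illustrative inventions; Radhflow's real contract files live under `.rf/contracts/`:

```python
def check_contract(producer_schema: dict, consumer_schema: dict) -> list[str]:
    """Compare field names and types; return a list of mismatch messages.

    Uses a simplified schema shape {"fields": {name: type}} for
    illustration -- the real schema files are JSON documents.
    """
    errors = []
    produced = producer_schema["fields"]
    for name, expected_type in consumer_schema["fields"].items():
        if name not in produced:
            errors.append(f"consumer expects field '{name}' that producer never emits")
        elif produced[name] != expected_type:
            errors.append(
                f"field '{name}': producer emits {produced[name]}, "
                f"consumer expects {expected_type}"
            )
    return errors

producer = {"fields": {"name": "string", "employee_count": "integer"}}
consumer = {"fields": {"name": "string", "employee_count": "string", "score": "number"}}
for err in check_contract(producer, consumer):
    print("contract error:", err)
```

If the returned list is non-empty, the runner refuses to start: the mismatch surfaces before any node executes.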
You check this file into Git. You diff it, review it, branch it. It is yours.
What makes Radhflow different
Deterministic
Same input, same output, every time. No LLM in the execution loop. AI writes the code at creation time. After that, your pipeline is ordinary scripts running on ordinary infrastructure.
Four data types: Value, Record, Table, Stream. Schemas enforce contracts between every node. If a producer emits a field the consumer does not expect, the pipeline will not start. Mismatches are caught before anything runs.
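One way to picture the four data types is through hypothetical Python analogues (not Radhflow's internal representation):

```python
from typing import Any, Iterator

# Hypothetical Python renderings of Radhflow's four data types.
Value = Any                   # a single scalar: string, number, bool
Record = dict[str, Value]     # one schema-conforming object
Table = list[Record]          # a finite, ordered collection of records
Stream = Iterator[Record]     # records that arrive over time

def head(stream: Stream, n: int) -> Table:
    """Materialize the first n records of a stream into a table."""
    return [record for record, _ in zip(stream, range(n))]

rows: Table = [{"name": "Acme", "score": 90}, {"name": "Globex", "score": 40}]
print(head(iter(rows), 1))  # → [{'name': 'Acme', 'score': 90}]
```

Every edge in a pipeline carries exactly one of these types, which is what makes the schema contracts checkable up front.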
flow.yaml + scripts in a Git repo. No vendor lock-in. No opaque runtime.
Run locally, deploy to the cloud, or move to another tool entirely.
You own every line of code.
How it works
1. Describe
Initialize a project and define your pipeline.
```sh
rf init lead-scoring
cd lead-scoring
```

```yaml
# flow.yaml — you write this (or AI generates it from a prompt)
name: lead-scoring
nodes:
  ingest-leads:
    type: source
    connector: csv
    config:
      path: ./data/leads.csv
    outputs:
      leads: { type: table }

  score:
    type: deterministic
    runtime: python
    inputs:
      leads: { from: ingest-leads.leads }
    outputs:
      scored: { type: table }
```

2. Generate
Preview what Radhflow will create. Schemas, node implementations, edge contracts.
```
$ rf run --dry-run
[dry-run] Would generate:
  nodes/ingest-leads/main.py     (source: CSV reader)
  nodes/score/main.py            (deterministic: scoring logic)
  schemas/lead.schema.json       (table schema: 6 fields)
  schemas/scored.schema.json     (table schema: 7 fields)
  .rf/contracts/ingest-leads→score.json
No files written. Run `rf run` to execute.
```

3. Execute
Run the pipeline. Deterministic code, no tokens burned.
```
$ rf run
[run] ingest-leads ............. ok  142 records
[run] score .................... ok  142 records
[run] Pipeline complete. 2/2 nodes succeeded.
Output: .rf/artifacts/scored/output.ndjson
```

Built for two audiences
For humans
- Visual canvas with React Flow — drag nodes, draw edges, see data flow
- Edit `flow.yaml` directly or use the canvas; they stay in sync
- Git-native workflow: branch, diff, review, merge
- Run locally with `rf run` or deploy to managed infrastructure
For AI agents
- MCP server exposes every pipeline operation as a tool
- Structured specs — `flow.yaml` and `node-spec.yaml` are machine-readable
- Deterministic execution means agents can predict outcomes
- No ambiguity: schemas define every input and output exactly
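Because execution order is fully determined by the edges in `flow.yaml`, an agent can compute it ahead of time. A minimal sketch using Python's standard library (the dependency map below is hand-derived from the lead-scoring flow's `from:` references, not parsed by any real Radhflow API):

```python
from graphlib import TopologicalSorter

# Simplified view of the lead-scoring flow: node -> set of upstream nodes,
# derived from each input's `from:` reference in flow.yaml.
dependencies = {
    "ingest-leads": set(),
    "enrich-company": {"ingest-leads"},
    "score": {"enrich-company"},
    "push-to-crm": {"score"},
}

# TopologicalSorter yields a valid execution order for the DAG; with no
# LLM in the loop, both this order and each node's behavior are predictable.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # → ['ingest-leads', 'enrich-company', 'score', 'push-to-crm']
```

An agent that can predict the full execution order and the schema of every edge can reason about a pipeline without running it.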