Pipelines that run without AI

Describe what you need. AI writes typed, deterministic code. That code runs forever — no tokens, no drift, no surprises.

One file. A complete pipeline.

flow.yaml
name: lead-scoring
nodes:
  ingest-leads:
    type: source
    connector: csv
    config:
      path: ./data/leads.csv
    outputs:
      leads: { type: table, schema: ./schemas/lead.schema.json }
  enrich-company:
    type: deterministic
    runtime: python
    inputs:
      leads: { from: ingest-leads.leads }
    outputs:
      enriched: { type: table, schema: ./schemas/enriched.schema.json }
  score:
    type: deterministic
    runtime: python
    inputs:
      enriched: { from: enrich-company.enriched }
    outputs:
      scored: { type: table, schema: ./schemas/scored.schema.json }
  push-to-crm:
    type: deterministic
    runtime: python
    inputs:
      scored: { from: score.scored }
    config:
      filter: "score >= 80"
      endpoint: https://api.crm.example/leads

This is a real pipeline. Four nodes, each with typed inputs and outputs, connected by edges that enforce schema contracts.

Every node runs deterministic code — Python scripts, SQL transforms, API calls. No LLM is invoked at runtime. AI wrote the code once; now it executes the same way every time.
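
To make "deterministic" concrete, here is a sketch of what a generated node might look like. This is illustrative only: the lead fields and the NDJSON-over-stdio convention are assumptions, not Radhflow's documented output.

# nodes/score/main.py: hypothetical sketch of a generated scoring node.
# Field names and the stdin/stdout NDJSON convention are assumptions.
import json
import sys

def score(lead: dict) -> dict:
    """Pure function: the same lead always gets the same score."""
    points = 0
    if lead.get("employee_count", 0) >= 50:
        points += 40
    if lead.get("industry") in {"software", "fintech"}:
        points += 30
    if lead.get("has_budget"):
        points += 30
    return {**lead, "score": points}

if __name__ == "__main__":
    # One NDJSON record per line in, one scored record per line out.
    for line in sys.stdin:
        print(json.dumps(score(json.loads(line))))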

The schemas in ./schemas/ define exactly what data flows between nodes. A type mismatch between the enriched output of enrich-company and the enriched input that score declares is caught before anything runs.
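
If the .schema.json files are standard JSON Schema (an assumption; the exact dialect is Radhflow's), that check is ordinary validation. A minimal sketch with the jsonschema package, schema inlined and fields invented for illustration:

# Hypothetical stand-in for schemas/lead.schema.json, inlined for brevity.
import jsonschema

LEAD_SCHEMA = {
    "type": "object",
    "required": ["email", "company"],
    "properties": {
        "email": {"type": "string"},
        "company": {"type": "string"},
        "employee_count": {"type": "integer"},
    },
}

# Raises jsonschema.ValidationError if a record breaks the contract.
jsonschema.validate({"email": "a@b.co", "company": "Acme"}, LEAD_SCHEMA)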

You check this file into Git. You diff it, review it, branch it. It is yours.

What makes Radhflow different

Same input, same output, every time. No LLM in the execution loop. AI writes the code at creation time. After that, your pipeline is ordinary scripts running on ordinary infrastructure.

Four data types: Value, Record, Table, Stream. Schemas enforce contracts between every node. If a producer emits a field the consumer does not expect, the pipeline will not start. Mismatches are caught before anything runs.
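
That pre-run check boils down to set arithmetic on field names. A minimal sketch of the idea, assuming simple flat schemas (not Radhflow's actual implementation):

# Refuse to start if a producer emits fields its consumer never declared.
def check_edge(producer_fields: set[str], consumer_fields: set[str]) -> None:
    unexpected = producer_fields - consumer_fields
    if unexpected:
        raise SystemExit(f"contract violation: unexpected fields {sorted(unexpected)}")

check_edge({"email", "company", "score"}, {"email", "company", "score"})  # ok
# check_edge({"email", "score", "notes"}, {"email", "score"})  # aborts before the run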

flow.yaml + scripts in a Git repo. No vendor lock-in. No opaque runtime. Run locally, deploy to the cloud, or move to another tool entirely. You own every line of code.

How it works

Initialize a project and define your pipeline.

$ rf init lead-scoring
$ cd lead-scoring
# flow.yaml — you write this (or AI generates it from a prompt)
name: lead-scoring
nodes:
  ingest-leads:
    type: source
    connector: csv
    config:
      path: ./data/leads.csv
    outputs:
      leads: { type: table }
  score:
    type: deterministic
    runtime: python
    inputs:
      leads: { from: ingest-leads.leads }
    outputs:
      scored: { type: table }

Preview what Radhflow will create. Schemas, node implementations, edge contracts.

$ rf run --dry-run
[dry-run] Would generate:
  nodes/ingest-leads/main.py   (source: CSV reader)
  nodes/score/main.py          (deterministic: scoring logic)
  schemas/lead.schema.json     (table schema: 6 fields)
  schemas/scored.schema.json   (table schema: 7 fields)
  .rf/contracts/ingest-leads→score.json
No files written. Run `rf run` to execute.

Run the pipeline. Deterministic code, no tokens burned.

$ rf run
[run] ingest-leads ............. ok 142 records
[run] score .................... ok 142 records
[run] Pipeline complete. 2/2 nodes succeeded.
Output: .rf/artifacts/scored/output.ndjson
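
The artifact is plain NDJSON: one JSON object per line, readable with nothing but the standard library. For example, reusing the pipeline's score >= 80 threshold (the score field name is assumed from the flow above):

# Consume the run's output artifact downstream of Radhflow.
import json

with open(".rf/artifacts/scored/output.ndjson") as f:
    scored = [json.loads(line) for line in f]

hot = [r for r in scored if r.get("score", 0) >= 80]
print(f"{len(hot)} of {len(scored)} leads scored 80 or higher")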

Built for two audiences

For developers:

  • Visual canvas with React Flow — drag nodes, draw edges, see data flow
  • Edit flow.yaml directly or use the canvas; they stay in sync
  • Git-native workflow: branch, diff, review, merge
  • Run locally with rf run or deploy to managed infrastructure

For AI agents:

  • MCP server exposes every pipeline operation as a tool
  • Structured specs — flow.yaml and node-spec.yaml are machine-readable; see the sketch below
  • Deterministic execution means agents can predict outcomes
  • No ambiguity: schemas define every input and output exactly
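
Machine-readable means exactly that: any script or agent can walk the pipeline graph without special tooling. A sketch using PyYAML against the flow.yaml shown above (not an official Radhflow API):

# List each node and where its inputs come from.
import yaml

with open("flow.yaml") as f:
    flow = yaml.safe_load(f)

for name, node in flow["nodes"].items():
    sources = [spec["from"] for spec in node.get("inputs", {}).values()]
    print(f"{name} ({node['type']}) <- {sources or 'source connector'}")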