# What is Radhflow?
You have a CSV of leads. You want to score them, filter the top 20%, and push them to your CRM. Every week, automatically.
You could write a Python script. You could wire up Zapier. You could ask an LLM to do it on every run, burning tokens and getting slightly different results each time.
Or you could describe what you want, let AI write the pipeline once, and run deterministic code forever.
That is Radhflow.
## The compiled agent model

Radhflow is a compiled agent platform. You describe a data pipeline in plain language or YAML. AI generates typed, deterministic code: SQL transforms, API connectors, shell scripts. That code runs without any LLM in the loop. Zero token cost at runtime. Same input, same output, every time.
```
You ──describe──>  +------------+     +----------------+
                   |  Conductor |---->|  Code Agents   |
                   |    (AI)    |     | (AI, one-time) |
                   +------------+     +-------+--------+
                                              |
                                              | generates
                                              v
                           +-------------------------------------+
                           |   Pipeline (flow.yaml + scripts)    |
                           |                                     |
                           |  +------+   +------+   +------+     |
                           |  | Node |-->| Node |-->| Node |     |
                           |  +------+   +------+   +------+     |
                           +-------------------------------------+
                                              |
                                              | executes
                                              v
                           +-------------------------------------+
                           |        Deterministic Runtime        |
                           |     DuckDB . SQL . HTTP . Shell     |
                           |  No LLM. No tokens. No surprises.   |
                           +-------------------------------------+
```

AI touches your pipeline once, at creation time. After that, it is plain code.
## How it works

A pipeline is a `flow.yaml` file. It describes a directed graph of nodes connected by edges. Data flows from sources through transforms to outputs, and every connection is typed.
```yaml
nodes:
  read-leads:
    type: source
    op: file.read_csv
    params:
      path: leads.csv
    outputs:
      leads:
        type: Table

  filter-top:
    type: deterministic
    op: sql.query
    params:
      query: "SELECT * FROM leads WHERE score >= 80"
    inputs:
      leads:
        type: Table
        from: ref(read-leads.leads)
    outputs:
      qualified:
        type: Table

  write-output:
    type: deterministic
    op: file.write_json
    params:
      path: qualified-leads.json
    inputs:
      qualified:
        type: Table
        from: ref(filter-top.qualified)
```

Three nodes. CSV in, SQL filter, JSON out. Every connection carries a type contract. Schema mismatches are caught before any code runs.
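The opening scenario asked for the top 20% of leads rather than a fixed score cutoff. As a rough sketch (not generated output), the same filter-top node could express that with a window function; PERCENT_RANK and the EXCLUDE star modifier are standard DuckDB SQL, and everything else mirrors the node shape above:

```yaml
# Hypothetical variant of the filter-top node: keep the top 20% of leads by score.
# It slots under nodes:; only the query changes, the inputs and outputs stay the same.
filter-top:
  type: deterministic
  op: sql.query
  params:
    query: |
      SELECT * EXCLUDE (pct)
      FROM (
        SELECT *, PERCENT_RANK() OVER (ORDER BY score DESC) AS pct
        FROM leads
      )
      WHERE pct <= 0.20
  inputs:
    leads:
      type: Table
      from: ref(read-leads.leads)
  outputs:
    qualified:
      type: Table
```

Because the node still takes a Table in and emits a Table out, the type contract with write-output is unchanged.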
## Core properties

- Deterministic execution. No LLM at runtime. Pipelines produce identical output for identical input.
- Four data types. Value (scalar), Record (single object), Table (rows), Stream (unbounded rows). Nothing else needed.
- NDJSON interchange. Tables are `.ndjson` files with `.schema.json` companions. Human-readable, diffable, universal (see the sketch after this list).
- DuckDB transforms. SQL is the default language for filtering, joining, aggregating. No custom DSL.
- Git-native. Branch a pipeline. Diff it. Review it in a PR. Merge it.
- Local-first. Runs on your laptop. Cloud deployment optional. EU infrastructure when you need it.
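To make the NDJSON point concrete (the sketch referenced in the list above): a small leads table on disk would be one JSON object per line, with a schema companion next to it. The rows here are made up for illustration, and the exact layout of the `.schema.json` file is an assumption, not something this page specifies.

```
leads.ndjson
{"name": "Ada Lovelace", "email": "ada@example.com", "score": 91}
{"name": "Grace Hopper", "email": "grace@example.com", "score": 88}
{"name": "Alan Turing", "email": "alan@example.com", "score": 67}

leads.schema.json        <-- hypothetical layout
{
  "fields": [
    { "name": "name",  "type": "string"  },
    { "name": "email", "type": "string"  },
    { "name": "score", "type": "integer" }
  ]
}
```

Because rows are plain text, two runs of a pipeline can be compared with an ordinary line diff.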
## Who it’s for

| Role | Use case |
|---|---|
| Data engineer | Typed, reproducible ETL pipelines versioned in Git. |
| Marketing coordinator | Lead scoring, campaign reporting, CRM sync — no engineering tickets. |
| Ops team | Automate invoicing, inventory, reporting — own the pipeline forever. |
| AI agents | Structured execution backend via MCP server. |
| Freelancer | Client data workflows that run locally without cloud subscriptions. |
## What Radhflow is NOT

Not a chatbot. There is no conversational AI at runtime. AI generates code once. That code is static and deterministic.
Not an agent framework. There is no autonomous agent loop running on every execution. The AI’s job ends when the pipeline is compiled.
Not a no-code tool. Radhflow generates real code. You can read, edit, and extend every file. The visual canvas is a view into YAML, not a replacement for it.