
Deployment

Run locally, deploy to Docker, or ship to the cloud.

Local execution

The fastest way to run a pipeline. No containers, no infrastructure.

rf run

This reads flow.yaml in the current directory, validates schemas, and executes every node in topological order. Results land in each node’s artifacts/ directory. State is tracked in .rf/state.db.
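
The topological ordering described above can be sketched with Kahn's algorithm. The `deps` mapping and node names below are purely illustrative, not the actual flow.yaml schema:

```python
from collections import deque

def topological_order(deps: dict[str, list[str]]) -> list[str]:
    """Return nodes so that every node runs after its dependencies.

    `deps` maps a node name to the nodes it depends on (all of which
    must themselves be keys). Raises ValueError on a cycle, which is
    the kind of error `rf validate` reports before a run.
    """
    # Count unresolved dependencies per node.
    remaining = {node: len(parents) for node, parents in deps.items()}
    # Build the reverse edges: node -> nodes that consume its output.
    dependents: dict[str, list[str]] = {node: [] for node in deps}
    for node, parents in deps.items():
        for parent in parents:
            dependents[parent].append(node)

    ready = deque(n for n, count in remaining.items() if count == 0)
    order: list[str] = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for child in dependents[node]:
            remaining[child] -= 1
            if remaining[child] == 0:
                ready.append(child)

    if len(order) != len(deps):
        raise ValueError("cycle detected in pipeline graph")
    return order
```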

For iterative development, validate before running:

rf validate   # check for schema errors, type mismatches, cycles
rf run        # execute the pipeline
rf inspect    # view results and execution history

Local mode is best for development, personal automation, and sensitive data that shouldn’t leave your machine.

Docker

For repeatable execution on any machine or server.

FROM ghcr.io/radh-io/radhflow:latest
COPY flow.yaml /rf/project/flow.yaml
COPY nodes/ /rf/project/nodes/
COPY data/ /rf/project/data/
WORKDIR /rf/project
CMD ["rf", "run"]
Or orchestrate it with Docker Compose:

version: "3.9"
services:
  radhflow:
    image: ghcr.io/radh-io/radhflow:latest
    ports:
      - "8080:80"
    volumes:
      - ./flow.yaml:/rf/project/flow.yaml
      - ./nodes:/rf/project/nodes
      - ./data:/rf/project/data
      - rf-state:/rf/project/.rf
    environment:
      RF_MODE: local
      RF_LOG_LEVEL: info
    restart: unless-stopped
volumes:
  rf-state:
docker compose up -d
Mount      Purpose
flow.yaml  Pipeline definition
nodes/     Node specs and implementations
data/      Input data files
rf-state   Persistent state across runs (.rf/ directory)
Variable            Default      Description
RF_MODE             local        Deployment mode: local, saas, enterprise
RF_PORT             80           HTTP port inside the container
RF_PROJECT_PATH     /rf/project  Pipeline workspace root
RF_CONFIG_PATH      /rf/config   Config and database directory
RF_LOG_LEVEL        info         Log verbosity: debug, info, warn, error
RF_CREDENTIALS_KEY  (none)       Encryption key for credential vault

Radhflow Cloud

Radhflow Cloud provides managed infrastructure, scheduling, monitoring, and a credential vault. You deploy with a single command.

Terminal window
rf deploy

This pushes your pipeline to Radhflow Cloud, which handles:

  • Scheduled execution (cron-based or event-triggered)
  • Secret storage and injection
  • Execution monitoring and alerting
  • Artifact storage with versioning
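
A receiver for the webhook alerts might look like the sketch below. The payload fields (`pipeline`, `node`, `error`, `level`) are assumptions for illustration, since the alert schema isn't documented here:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_alert(event: dict) -> str:
    """Render a one-line summary of a failure alert.

    The field names (pipeline, node, error, level) are hypothetical,
    not a documented Radhflow Cloud schema.
    """
    return "[{level}] {pipeline}/{node}: {error}".format(
        level=event.get("level", "error"),
        pipeline=event.get("pipeline", "?"),
        node=event.get("node", "?"),
        error=event.get("error", "unknown failure"),
    )

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body posted by the alerting webhook.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print(format_alert(json.loads(body)))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 9000), AlertHandler).serve_forever()
```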

Radhflow Cloud runs on EU infrastructure (Hetzner and Scaleway). Data stays in EU jurisdiction. No US hyperscaler in the data path.

Component           Infrastructure
Compute             Hetzner Cloud (Nuremberg / Helsinki)
Object storage      Scaleway S3 (Paris / Amsterdam)
Container registry  GitHub Container Registry
CI/CD

name: Pipeline CI/CD
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Radhflow
        run: npm install -g @radh/flow-cli
      - name: Validate pipeline
        run: rf validate --strict
  run:
    needs: validate
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Radhflow
        run: npm install -g @radh/flow-cli
      - name: Run pipeline
        run: rf run
        env:
          API_KEY: ${{ secrets.API_KEY }}

Use rf validate in CI to catch errors on every pull request. Use rf run in CD to execute the pipeline on merge to main.

Monitoring

Every rf run writes logs to .rf/runs/<run-id>/. Each run directory contains:

  • log.ndjson — timestamped execution events
  • summary.json — node statuses, durations, row counts
  • Per-node artifacts in nodes/<slug>/artifacts/
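
A script can mine summary.json for failures. The nested `nodes`/`status` layout assumed here is inferred from the description above and may differ from the real schema:

```python
import json
from pathlib import Path

def failed_nodes(run_dir: Path) -> list[str]:
    """List node slugs whose status is 'error' in a run's summary.json.

    Assumes summary.json contains a 'nodes' object mapping each slug
    to an object with a 'status' field; the exact schema is an
    assumption, not documented here.
    """
    summary = json.loads((run_dir / "summary.json").read_text())
    return [
        slug
        for slug, info in summary.get("nodes", {}).items()
        if info.get("status") == "error"
    ]
```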

In Docker or cloud deployments, the Radhflow container exposes a health endpoint:

curl http://localhost:8080/health
# {"status":"ok","version":"0.3.0"}

Failure handling

When a node fails, the executor:

  1. Logs the error with stack trace to .rf/runs/<run-id>/log.ndjson
  2. Marks the node status as error in summary.json
  3. Halts downstream nodes that depend on the failed node
  4. On Radhflow Cloud: sends an alert via configured webhook or email
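
Step 3 amounts to a reachability walk over the dependency graph. A minimal sketch, assuming a dependents mapping that is illustrative rather than Radhflow's internal representation:

```python
from collections import deque

def nodes_to_halt(dependents: dict[str, list[str]], failed: str) -> set[str]:
    """All transitive dependents of a failed node, i.e. the nodes to skip.

    `dependents` maps each node to the nodes that consume its output;
    the structure and names are hypothetical, for illustration only.
    """
    halted: set[str] = set()
    queue = deque(dependents.get(failed, []))
    while queue:
        node = queue.popleft()
        if node in halted:
            continue  # already scheduled for skipping
        halted.add(node)
        queue.extend(dependents.get(node, []))
    return halted
```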

Check execution status programmatically:

rf inspect --run latest --format json