
Deployment

Radhflow ships as a single Docker image. The same image runs on your laptop, on EU cloud infrastructure, or on a customer’s on-premises server. No code forks. No edition-specific features. Configuration determines the mode.

Your machine. Your data. Your Git repo.

docker run -p 8080:80 \
  -v ~/.radhflow/projects:/rf/project \
  -v ~/.radhflow/config:/rf/config \
  ghcr.io/radh-io/radhflow:latest

Data stays on disk. Pipelines live in a local Git repository. No cloud account required. No telemetry. Credentials stored in local SQLite.

Best for: development, personal automation, sensitive data processing.

EU container infrastructure. Managed scheduling. Credential vault.

Radhflow SaaS runs on Hetzner and Scaleway — EU-sovereign providers. Data stays in EU jurisdiction. Pipeline artifacts stored in customer-scoped S3 buckets on EU endpoints.

| Component | Infrastructure |
| --- | --- |
| Compute | Hetzner Cloud (Nuremberg / Helsinki) |
| Object storage | Scaleway S3 (Paris / Amsterdam) |
| Container registry | GitHub Container Registry |
| DNS + CDN | Cloudflare (EU-only regions) |

No US hyperscaler in the data path. Customer code is not retained after execution.

Best for: teams that need scheduling, shared credentials, and managed infrastructure without US cloud dependency.

Customer VPS. Customer data. Customer Git.

The same Docker image runs on the customer’s infrastructure. Radhflow provides the image; the customer provides compute, storage, and Git hosting. No data leaves the customer’s network.

# Customer's infrastructure
docker compose up -d

Configuration points to the customer’s Git server, storage backend, and credential store. Radhflow has no access to the customer’s instance.

Best for: regulated industries, air-gapped environments, organizations with existing infrastructure.
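As a sketch, an enterprise Compose file might point the relevant environment variables at customer-controlled endpoints. All hostnames, paths, and bucket names below are placeholders, not defaults:

```yaml
# Hypothetical enterprise configuration; every endpoint here is illustrative.
services:
  radhflow:
    image: ghcr.io/radh-io/radhflow:latest
    ports:
      - "8080:80"
    volumes:
      - /srv/radhflow/project:/rf/project
      - /srv/radhflow/config:/rf/config
    environment:
      RF_MODE: enterprise
      RF_GIT_REMOTE: git@git.internal.example.com:ops/pipelines.git   # customer Git server
      RF_S3_ENDPOINT: https://s3.internal.example.com                 # customer object storage
      RF_S3_BUCKET: radhflow-artifacts
    restart: unless-stopped
```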

| Variable | Default | Description |
| --- | --- | --- |
| RF_PORT | 80 | HTTP port inside the container |
| RF_PROJECT_PATH | /rf/project | Pipeline workspace root |
| RF_CONFIG_PATH | /rf/config | Config and database directory |
| RF_MODE | local | Deployment mode: local, saas, enterprise |
| RF_LOG_LEVEL | info | Log verbosity: debug, info, warn, error |
| RF_S3_ENDPOINT | (unset) | S3-compatible endpoint for artifact storage |
| RF_S3_BUCKET | (unset) | Bucket name for pipeline artifacts |
| RF_S3_REGION | eu-central-1 | S3 region |
| RF_GIT_REMOTE | (unset) | Remote Git URL for pipeline sync |
| RF_SCHEDULER | false | Enable cron-based pipeline scheduling |
| RF_CREDENTIALS_KEY | (unset) | Encryption key for credential vault |
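RF_CREDENTIALS_KEY has no default and must be supplied for any deployment that uses the credential vault. One way to generate a key is with openssl (assumption: the vault accepts a 64-character hex string, matching the command used in the Fly.io example below; confirm the required format for your version):

```shell
# Generate a 32-byte (64 hex character) key for the credential vault.
# Assumption: any 64-char hex string is acceptable; confirm the exact format.
RF_CREDENTIALS_KEY=$(openssl rand -hex 32)
echo "${#RF_CREDENTIALS_KEY}"   # prints 64
```

Store the key outside the container (a secrets manager or an env file excluded from Git); losing it makes vaulted credentials unrecoverable.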

Full local setup with persistent volumes:

version: "3.9"
services:
  radhflow:
    image: ghcr.io/radh-io/radhflow:latest
    ports:
      - "8080:80"
    volumes:
      - ~/.radhflow/projects/default:/rf/project
      - ~/.radhflow/config:/rf/config
    environment:
      RF_MODE: local
      RF_LOG_LEVEL: info
    restart: unless-stopped
docker compose up -d

The web UI is available at http://localhost:8080. The API is at http://localhost:8080/api.

For hosted or team deployments, Fly.io provides EU regions with persistent volumes:

fly launch --image ghcr.io/radh-io/radhflow:latest \
  --region ams \
  --vm-size shared-cpu-1x
fly secrets set RF_MODE=saas \
  RF_S3_ENDPOINT=https://s3.fr-par.scw.cloud \
  RF_S3_BUCKET=my-team-artifacts \
  RF_CREDENTIALS_KEY=$(openssl rand -hex 32)
fly volumes create rf_data --region ams --size 10
fly deploy

The container listens on port 80 internally. Map it to any host port:

# Development
docker run -p 8080:80 ghcr.io/radh-io/radhflow:latest
# Production (behind reverse proxy)
docker run -p 127.0.0.1:3000:80 ghcr.io/radh-io/radhflow:latest
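For the production mapping above, the reverse proxy terminates TLS and forwards to the loopback port. A minimal sketch with nginx (one common choice; the domain and certificate setup are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name radhflow.example.com;            # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000;        # host port from the docker run above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```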

| Path | Contents | Backup strategy |
| --- | --- | --- |
| /rf/project | Pipeline YAML, node specs, generated code | Git push |
| /rf/config/rf.db | Credentials, events, preferences | SQLite backup or volume snapshot |
| S3 bucket | Pipeline artifacts, NDJSON outputs | S3 versioning |

Git is the primary backup mechanism for pipeline definitions. The SQLite database stores runtime state and credentials. For hosted deployments, S3 stores pipeline artifacts with versioning enabled.
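For the volume-snapshot strategy, a simple cold backup is to archive the config directory. A minimal sketch, assuming the local setup with config under ~/.radhflow (the RF_HOME override is illustrative); stop the container first so the SQLite file is not mid-write:

```shell
# Cold backup of the config directory (holds /rf/config/rf.db inside the container).
# Stop the container first (docker compose stop) so SQLite is not being written.
SRC="${RF_HOME:-$HOME/.radhflow}"        # RF_HOME is a hypothetical override
mkdir -p "$SRC/config"                   # ensure the path exists (no-op normally)
tar -czf "rf-config-$(date +%F).tar.gz" -C "$SRC" config
```

Restore by extracting the archive back into the volume path and restarting the container.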