# Deployment
Radhflow ships as a single Docker image. The same image runs on your laptop, on EU cloud infrastructure, or on a customer’s on-premises server. No code forks. No edition-specific features. Configuration alone determines the mode.
## Three modes

### Local

Your machine. Your data. Your Git repo.
```sh
docker run -p 8080:80 \
  -v ~/.radhflow/projects:/rf/project \
  -v ~/.radhflow/config:/rf/config \
  ghcr.io/radh-io/radhflow:latest
```

Data stays on disk. Pipelines live in a local Git repository. No cloud account required. No telemetry. Credentials are stored in local SQLite.
Best for: development, personal automation, sensitive data processing.
### SaaS (hosted)

EU container infrastructure. Managed scheduling. Credential vault.
Radhflow SaaS runs on Hetzner and Scaleway — EU-sovereign providers. Data stays in EU jurisdiction. Pipeline artifacts stored in customer-scoped S3 buckets on EU endpoints.
| Component | Infrastructure |
|---|---|
| Compute | Hetzner Cloud (Nuremberg / Helsinki) |
| Object storage | Scaleway S3 (Paris / Amsterdam) |
| Container registry | GitHub Container Registry |
| DNS + CDN | Cloudflare (EU-only regions) |
No US hyperscaler in the data path. No customer code stored beyond execution.
Best for: teams that need scheduling, shared credentials, and managed infrastructure without US cloud dependency.
### Enterprise

Customer VPS. Customer data. Customer Git.
The same Docker image runs on the customer’s infrastructure. Radhflow provides the image; the customer provides compute, storage, and Git hosting. No data leaves the customer’s network.
```sh
# Customer's infrastructure
docker compose up -d
```

Configuration points to the customer’s Git server, storage backend, and credential store. Radhflow has no access to the customer’s instance.
Best for: regulated industries, air-gapped environments, organizations with existing infrastructure.
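A minimal Compose sketch of what an enterprise configuration might look like. The hostnames, paths, and bucket name (`git.corp.example`, `minio.corp.example`, `/srv/radhflow`, `radhflow-artifacts`) are placeholders, not product defaults:

```yaml
# Hypothetical enterprise deployment on customer infrastructure.
# All hostnames, paths, and bucket names below are placeholders.
services:
  radhflow:
    image: ghcr.io/radh-io/radhflow:latest
    ports:
      - "127.0.0.1:3000:80"   # bind locally; TLS terminates at the customer's proxy
    volumes:
      - /srv/radhflow/project:/rf/project
      - /srv/radhflow/config:/rf/config
    environment:
      RF_MODE: enterprise
      RF_GIT_REMOTE: ssh://git@git.corp.example/automation/pipelines.git
      RF_S3_ENDPOINT: https://minio.corp.example   # any S3-compatible store
      RF_S3_BUCKET: radhflow-artifacts
    restart: unless-stopped
```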
## Environment variables

| Variable | Default | Description |
|---|---|---|
| `RF_PORT` | `80` | HTTP port inside the container |
| `RF_PROJECT_PATH` | `/rf/project` | Pipeline workspace root |
| `RF_CONFIG_PATH` | `/rf/config` | Config and database directory |
| `RF_MODE` | `local` | Deployment mode: `local`, `saas`, `enterprise` |
| `RF_LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |
| `RF_S3_ENDPOINT` | — | S3-compatible endpoint for artifact storage |
| `RF_S3_BUCKET` | — | Bucket name for pipeline artifacts |
| `RF_S3_REGION` | `eu-central-1` | S3 region |
| `RF_GIT_REMOTE` | — | Remote Git URL for pipeline sync |
| `RF_SCHEDULER` | `false` | Enable cron-based pipeline scheduling |
| `RF_CREDENTIALS_KEY` | — | Encryption key for credential vault |
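A hypothetical invocation combining several of these variables; the debug level, scheduler flag, and freshly generated key are illustrative values, not recommendations:

```shell
# Illustrative only: enable the scheduler, raise log verbosity, and
# supply a freshly generated vault key. Persist RF_CREDENTIALS_KEY
# somewhere safe -- stored credentials cannot be decrypted without it.
docker run -p 8080:80 \
  -v ~/.radhflow/projects:/rf/project \
  -v ~/.radhflow/config:/rf/config \
  -e RF_SCHEDULER=true \
  -e RF_LOG_LEVEL=debug \
  -e RF_CREDENTIALS_KEY="$(openssl rand -hex 32)" \
  ghcr.io/radh-io/radhflow:latest
```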
## Docker Compose

Full local setup with persistent volumes:
```yaml
version: "3.9"

services:
  radhflow:
    image: ghcr.io/radh-io/radhflow:latest
    ports:
      - "8080:80"
    volumes:
      - ~/.radhflow/projects/default:/rf/project
      - ~/.radhflow/config:/rf/config
    environment:
      RF_MODE: local
      RF_LOG_LEVEL: info
    restart: unless-stopped
```

```sh
docker compose up -d
```

The web UI is available at http://localhost:8080. The API is at http://localhost:8080/api.
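Once the container is up, a quick smoke test (assuming `curl` is installed on the host) confirms it is answering; the exact status code and body are not specified here:

```sh
# Print the HTTP status code from the web UI; any 2xx/3xx response
# means the container is up and listening on the mapped port.
curl -fsS -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```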
## Fly.io deployment

For hosted or team deployments, Fly.io provides EU regions with persistent volumes:
```sh
fly launch --image ghcr.io/radh-io/radhflow:latest \
  --region ams \
  --vm-size shared-cpu-1x
```
```sh
fly secrets set RF_MODE=saas \
  RF_S3_ENDPOINT=https://s3.fr-par.scw.cloud \
  RF_S3_BUCKET=my-team-artifacts \
  RF_CREDENTIALS_KEY=$(openssl rand -hex 32)
```
```sh
fly volumes create rf_data --region ams --size 10
fly deploy
```

## Port mapping

The container listens on port 80 internally. Map it to any host port:
```sh
# Development
docker run -p 8080:80 ghcr.io/radh-io/radhflow:latest

# Production (behind reverse proxy)
docker run -p 127.0.0.1:3000:80 ghcr.io/radh-io/radhflow:latest
```

## Data persistence

| Path | Contents | Backup strategy |
|---|---|---|
| `/rf/project` | Pipeline YAML, node specs, generated code | Git push |
| `/rf/config/rf.db` | Credentials, events, preferences | SQLite backup or volume snapshot |
| S3 bucket | Pipeline artifacts, NDJSON outputs | S3 versioning |
Git is the primary backup mechanism for pipeline definitions. The SQLite database stores runtime state and credentials. For hosted deployments, S3 stores pipeline artifacts with versioning enabled.
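The three mechanisms above can be combined in a scheduled job. A hedged sketch, assuming the volume layout from the Docker Compose example and that `git` and `sqlite3` are installed on the host; the backup directory and branch name are assumptions:

```shell
#!/bin/sh
# Backup sketch -- paths assume the local-mode volume mounts above.
set -eu

# 1. Pipeline definitions: Git is the primary backup mechanism.
git -C "$HOME/.radhflow/projects/default" push origin main

# 2. Runtime state: sqlite3's .backup command takes a consistent
#    snapshot even while the database is in use, unlike a plain copy.
mkdir -p "$HOME/.radhflow/backups"
sqlite3 "$HOME/.radhflow/config/rf.db" \
  ".backup '$HOME/.radhflow/backups/rf-$(date +%F).db'"

# 3. Artifacts: covered by S3 versioning on the bucket; nothing to do here.
```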