# MCP Server
Radhflow exposes an MCP server so AI agents can create, modify, and run pipelines programmatically.
## What is MCP

The Model Context Protocol (MCP) is a standard for AI agents to interact with external tools. It provides typed tool definitions, structured inputs, and machine-readable responses. Instead of parsing CLI output, an agent calls a tool with structured arguments and gets back JSON.
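Concretely, a tool call is a JSON-RPC 2.0 message naming a tool and carrying its arguments as a structured object. A minimal sketch in Python (the envelope follows MCP's `tools/call` method; the tool name and arguments mirror the Radhflow tools documented below):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request as a single line of JSON."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The agent sends this line to the server instead of shelling out to a CLI.
line = make_tool_call(1, "run_pipeline", {"pipeline": "lead-scoring", "dry_run": True})
```

The response comes back as JSON too, so the agent never scrapes terminal output.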
## Starting the MCP server

```shell
rf mcp serve
```

Add to your MCP client config (Claude Code `.claude/mcp.json`, Cursor, or any MCP-compatible client):

```json
{
  "mcpServers": {
    "radhflow": {
      "command": "npx",
      "args": ["@radh/flow-mcp"],
      "env": {
        "RF_PROJECT_PATH": "/path/to/your/project"
      }
    }
  }
}
```

The server speaks standard MCP over stdio.
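Over stdio, each JSON-RPC message travels as a single newline-delimited line of JSON. A minimal client sketch, omitting the MCP initialize handshake for brevity (`StdioMCPClient` and the wiring comment are illustrative, not part of Radhflow):

```python
import json

class StdioMCPClient:
    """Minimal newline-delimited JSON-RPC client over a pair of streams."""

    def __init__(self, reader, writer):
        self.reader = reader   # file-like object: the server's stdout
        self.writer = writer   # file-like object: the server's stdin
        self._next_id = 0

    def call_tool(self, name: str, arguments: dict) -> dict:
        self._next_id += 1
        request = {
            "jsonrpc": "2.0",
            "id": self._next_id,
            "method": "tools/call",
            "params": {"name": name, "arguments": arguments},
        }
        # One JSON object per line, flushed immediately.
        self.writer.write(json.dumps(request) + "\n")
        self.writer.flush()
        return json.loads(self.reader.readline())

# Illustrative wiring (requires npx on PATH):
#   import subprocess
#   proc = subprocess.Popen(["npx", "@radh/flow-mcp"], stdin=subprocess.PIPE,
#                           stdout=subprocess.PIPE, text=True)
#   client = StdioMCPClient(proc.stdout, proc.stdin)
```

In practice most agents use an off-the-shelf MCP client; the sketch only shows what crosses the pipe.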
## Available MCP tools

### create_pipeline
Creates a new pipeline with a flow.yaml.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Pipeline name. Lowercase, hyphens, no spaces. |
| yaml | string | yes | Complete flow.yaml content as a YAML string. |
Example request:
```json
{
  "tool": "create_pipeline",
  "arguments": {
    "name": "lead-scoring",
    "yaml": "name: lead-scoring\nversion: 1\nnodes:\n  read-leads:\n    type: file.source\n    path: leads.csv\n    format: csv\n  score:\n    type: data.sql\n    query: \"SELECT *, clicks * 0.3 + opens * 0.5 AS score FROM input\"\nedges:\n  - \"read-leads.data -> score.input\""
  }
}
```

Example response:

```json
{
  "status": "created",
  "pipeline": "lead-scoring",
  "nodes": 2,
  "edges": 1,
  "path": "/rf/project/lead-scoring/flow.yaml"
}
```
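Hand-escaping the newlines and indentation inside the `yaml` string is easy to get wrong; a client can assemble the flow.yaml text as plain lines and let a JSON serializer do the escaping. A minimal sketch with an abbreviated pipeline (the helper is illustrative, not a Radhflow API):

```python
import json

# Assemble flow.yaml content as plain lines; indentation stays readable.
flow_yaml = "\n".join([
    "name: lead-scoring",
    "version: 1",
    "nodes:",
    "  read-leads:",
    "    type: file.source",
    "    path: leads.csv",
    "    format: csv",
    "edges: []",
])

# json.dumps escapes the newlines for the MCP request body.
payload = json.dumps({
    "tool": "create_pipeline",
    "arguments": {"name": "lead-scoring", "yaml": flow_yaml},
})
```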
### add_node

Adds a node to an existing pipeline.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipeline | string | yes | Pipeline name. |
| node_id | string | yes | Node ID. Lowercase, hyphens. |
| type | string | yes | Node type (e.g., data.sql, file.source). |
| config | object | yes | Type-specific configuration fields. |
Example request:
```json
{
  "tool": "add_node",
  "arguments": {
    "pipeline": "lead-scoring",
    "node_id": "filter-active",
    "type": "data.filter",
    "config": {
      "expression": "status = 'active' AND email IS NOT NULL"
    }
  }
}
```

Example response:

```json
{
  "status": "added",
  "pipeline": "lead-scoring",
  "node_id": "filter-active",
  "total_nodes": 3
}
```

### connect_nodes

Creates an edge between two ports.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipeline | string | yes | Pipeline name. |
| source | string | yes | Source in nodeId.portName format. |
| target | string | yes | Target in nodeId.portName format. |
Example request:
```json
{
  "tool": "connect_nodes",
  "arguments": {
    "pipeline": "lead-scoring",
    "source": "read-leads.data",
    "target": "filter-active.input"
  }
}
```

Example response:

```json
{
  "status": "connected",
  "edge": "read-leads.data -> filter-active.input",
  "total_edges": 2
}
```

### run_pipeline

Executes the pipeline and returns results.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipeline | string | yes | Pipeline name. |
| dry_run | boolean | no | If true, validate without executing. Default: false. |
Example request:
```json
{
  "tool": "run_pipeline",
  "arguments": {
    "pipeline": "lead-scoring",
    "dry_run": false
  }
}
```

Example response:

```json
{
  "status": "completed",
  "duration_ms": 1240,
  "nodes_executed": 3,
  "results": {
    "score": {
      "output_rows": 847,
      "artifact": "nodes/score/artifacts/output.ndjson"
    }
  }
}
```

### inspect_pipeline

Returns pipeline state and results.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipeline | string | yes | Pipeline name. |
| run_id | string | no | Specific run ID. Default: latest run. |
Example request:
```json
{
  "tool": "inspect_pipeline",
  "arguments": {
    "pipeline": "lead-scoring"
  }
}
```

Example response:

```json
{
  "pipeline": "lead-scoring",
  "last_run": {
    "id": "run-20260304-001",
    "status": "completed",
    "duration_ms": 1240,
    "nodes": {
      "read-leads": { "status": "success", "output_rows": 1204 },
      "filter-active": { "status": "success", "output_rows": 962 },
      "score": { "status": "success", "output_rows": 962 }
    }
  }
}
```

### validate_pipeline

Checks for errors without executing.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| pipeline | string | yes | Pipeline name. |
| strict | boolean | no | Enable strict mode (flags security issues). Default: false. |
Example request:
```json
{
  "tool": "validate_pipeline",
  "arguments": {
    "pipeline": "lead-scoring"
  }
}
```

Response (valid):

```json
{
  "valid": true,
  "nodes": 3,
  "edges": 2,
  "warnings": []
}
```

Response (invalid):

```json
{
  "valid": false,
  "errors": [
    {
      "type": "schema_mismatch",
      "edge": "read-leads.data -> score.input",
      "message": "Field 'clicks' required by score.input but not in read-leads.data schema"
    }
  ]
}
```

## Error handling

All tools return errors in a consistent format:
```json
{
  "error": true,
  "code": "PIPELINE_NOT_FOUND",
  "message": "Pipeline 'lead-scoring' does not exist"
}
```

Error codes:
| Code | Description |
|---|---|
| PIPELINE_NOT_FOUND | Named pipeline does not exist. |
| PIPELINE_EXISTS | Pipeline already exists (on create). |
| INVALID_YAML | YAML syntax error in pipeline definition. |
| VALIDATION_FAILED | Schema or type validation failed. See errors array. |
| NODE_NOT_FOUND | Referenced node ID does not exist. |
| PORT_NOT_FOUND | Referenced port name does not exist on the node. |
| CYCLE_DETECTED | Edge creates a cycle in the graph. |
| EXECUTION_FAILED | Pipeline execution failed. See errors array. |
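CYCLE_DETECTED means an edge would make the pipeline graph cyclic. An agent can pre-check its planned edges locally before calling `connect_nodes`; a sketch using depth-first search over edges in the `nodeId.portName -> nodeId.portName` format from the examples above (the helper is illustrative, not a Radhflow API):

```python
def has_cycle(edges: list[str]) -> bool:
    """Detect a cycle given edges like 'read-leads.data -> score.input'."""
    # Build a node-level adjacency list, dropping the port names.
    graph: dict[str, list[str]] = {}
    for edge in edges:
        src, dst = (end.strip().split(".")[0] for end in edge.split("->"))
        graph.setdefault(src, []).append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    state: dict[str, int] = {}

    def visit(node: str) -> bool:
        state[node] = GRAY
        for nxt in graph.get(node, []):
            if state.get(nxt, WHITE) == GRAY:
                return True        # back edge onto the current path: cycle
            if state.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        state[node] = BLACK
        return False

    return any(state.get(n, WHITE) == WHITE and visit(n) for n in graph)
```

Checking client-side avoids a round trip, but the server's CYCLE_DETECTED remains the authoritative check.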
## Agent workflow

A typical AI agent workflow:

1. Call `validate_pipeline` or `inspect_pipeline` to understand current state.
2. Read the Pipeline Spec and Node Spec for schema details.
3. Call `create_pipeline` with generated YAML (or `add_node`/`connect_nodes` to modify).
4. Call `validate_pipeline` to check for errors.
5. If errors, fix the YAML and re-validate.
6. Call `run_pipeline` to execute.
7. Call `inspect_pipeline` to check results and iterate if needed.
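The validate-fix-run portion of the workflow can be sketched as a driver loop; `call_tool` and `apply_fix` are hypothetical stand-ins for the agent's MCP client and its repair logic, and the retry cap is illustrative:

```python
def run_with_validation(call_tool, pipeline: str, apply_fix, max_attempts: int = 3) -> dict:
    """Validate, fix on failure, then run and inspect a pipeline.

    call_tool(name, arguments) -> dict   stand-in for the MCP client.
    apply_fix(errors)                    issues corrective tool calls
                                         (e.g. add_node / connect_nodes).
    """
    for _ in range(max_attempts):
        report = call_tool("validate_pipeline", {"pipeline": pipeline})
        if report.get("valid"):
            call_tool("run_pipeline", {"pipeline": pipeline})
            return call_tool("inspect_pipeline", {"pipeline": pipeline})
        apply_fix(report.get("errors", []))
    raise RuntimeError(f"'{pipeline}' still invalid after {max_attempts} attempts")
```

Bounding the retries keeps a confused agent from looping forever on YAML it cannot repair.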
Every generated file is committed to Git for review.