
MCP Server

@radh/flow-mcp is a Model Context Protocol server that exposes Radhflow to AI coding agents. Claude Code, Cursor, Copilot, and any MCP-compatible client can create, validate, and execute pipelines programmatically.

AI coding agents work best when they can interact with tools structurally. MCP gives them typed tool definitions, structured inputs, and machine-readable responses. Instead of scraping CLI output, an agent calls create_pipeline with a YAML string and gets back a validated graph or a list of errors.

This closes the loop: an AI agent can generate a pipeline, validate it, fix errors, and run it — all without human intervention.

create_pipeline

Create a new pipeline from a YAML definition.

{
  "tool": "create_pipeline",
  "arguments": {
    "name": "lead-scoring",
    "yaml": "name: lead-scoring\nversion: 1\nnodes:\n  read-leads:\n    type: file.source\n    path: leads.csv\n    format: csv\n  score:\n    type: data.sql\n    query: \"SELECT *, clicks * 0.3 + opens * 0.5 AS score FROM input\"\nedges:\n  - \"read-leads.data -> score.input\""
  }
}

Response:

{
  "status": "created",
  "pipeline": "lead-scoring",
  "nodes": 2,
  "edges": 1,
  "path": "/rf/project/lead-scoring/pipeline.rf.yaml"
}
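The multi-line YAML must arrive as a single JSON-escaped string, which is error-prone to write by hand. A minimal Python sketch of building that argument programmatically (the variable names here are illustrative, not part of the server):

```python
import json
import textwrap

# The pipeline definition from the example above, written as a readable
# block instead of a hand-escaped string.
pipeline_yaml = textwrap.dedent("""\
    name: lead-scoring
    version: 1
    nodes:
      read-leads:
        type: file.source
        path: leads.csv
        format: csv
      score:
        type: data.sql
        query: "SELECT *, clicks * 0.3 + opens * 0.5 AS score FROM input"
    edges:
      - "read-leads.data -> score.input"
    """)

# json.dumps produces the newline and quote escaping shown in the request body.
request = {
    "tool": "create_pipeline",
    "arguments": {"name": "lead-scoring", "yaml": pipeline_yaml},
}
payload = json.dumps(request)
```

Letting `json.dumps` do the escaping avoids the subtle indentation bugs that hand-written `\n` sequences tend to introduce.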

validate_pipeline

Check a pipeline for schema errors, type mismatches, and invalid edges.

{
  "tool": "validate_pipeline",
  "arguments": {
    "pipeline": "lead-scoring"
  }
}

Response (valid):

{
  "valid": true,
  "nodes": 2,
  "edges": 1,
  "warnings": []
}

Response (invalid):

{
  "valid": false,
  "errors": [
    {
      "type": "schema_mismatch",
      "edge": "read-leads.data -> score.input",
      "message": "Field 'clicks' required by score.input but not in read-leads.data schema"
    }
  ]
}
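An agent consuming these responses needs only the `valid` flag and the `errors` list. A small sketch of extracting the messages to feed back into the next generation attempt (`validation_errors` is a hypothetical helper, not part of @radh/flow-mcp):

```python
def validation_errors(response: dict) -> list[str]:
    """Return human-readable error messages from a validate_pipeline response.
    An empty list means the pipeline is valid."""
    if response.get("valid"):
        return []
    return [f'{e["type"]}: {e["message"]}' for e in response.get("errors", [])]

# The two sample responses shown above:
ok = {"valid": True, "nodes": 2, "edges": 1, "warnings": []}
bad = {
    "valid": False,
    "errors": [{
        "type": "schema_mismatch",
        "edge": "read-leads.data -> score.input",
        "message": "Field 'clicks' required by score.input "
                   "but not in read-leads.data schema",
    }],
}
```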

run_pipeline

Execute a pipeline and return results.

{
  "tool": "run_pipeline",
  "arguments": {
    "pipeline": "lead-scoring",
    "dry_run": false
  }
}

Response:

{
  "status": "completed",
  "duration_ms": 1240,
  "nodes_executed": 2,
  "results": {
    "score": {
      "output_rows": 847,
      "artifact": "nodes/score/artifacts/output.ndjson"
    }
  }
}
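A sketch of condensing this response into a single line an agent can log and reason over (`summarize_run` is an illustrative helper, not part of the server):

```python
def summarize_run(response: dict) -> str:
    """One-line summary of a run_pipeline response."""
    # Total output rows across all node results in the response.
    rows = sum(r.get("output_rows", 0) for r in response.get("results", {}).values())
    return (f'{response["status"]}: {response["nodes_executed"]} nodes '
            f'in {response["duration_ms"]}ms, {rows} output rows')

# The sample response shown above:
run = {
    "status": "completed",
    "duration_ms": 1240,
    "nodes_executed": 2,
    "results": {"score": {"output_rows": 847,
                          "artifact": "nodes/score/artifacts/output.ndjson"}},
}
```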

list_connectors

List available node types and connectors.

{
  "tool": "list_connectors",
  "arguments": {}
}

Response:

{
  "connectors": [
    { "type": "file.source", "description": "Read CSV, JSON, or NDJSON files" },
    { "type": "data.sql", "description": "SQL transform via DuckDB" },
    { "type": "data.filter", "description": "Filter rows by expression" },
    { "type": "http.request", "description": "HTTP GET/POST to REST APIs" },
    { "type": "google.sheets", "description": "Read/write Google Sheets" },
    { "type": "browser.extract", "description": "Extract data from web pages" }
  ]
}
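Before generating YAML, an agent can use this response to confirm that every node type in its plan actually exists. A sketch (`known_types` is a hypothetical helper):

```python
def known_types(response: dict) -> set[str]:
    """Set of connector type identifiers from a list_connectors response."""
    return {c["type"] for c in response.get("connectors", [])}

# Abbreviated version of the catalog shown above:
catalog = {"connectors": [
    {"type": "file.source", "description": "Read CSV, JSON, or NDJSON files"},
    {"type": "data.sql", "description": "SQL transform via DuckDB"},
]}
```

Checking the planned node types against this set before calling create_pipeline turns an "unknown node type" failure into a cheap lookup.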

introspect_cli

Get the current Radhflow CLI version, available commands, and environment info.

{
  "tool": "introspect_cli",
  "arguments": {}
}

Response:

{
  "version": "0.3.0",
  "commands": ["init", "run", "validate", "inspect"],
  "mode": "local",
  "project_path": "/rf/project"
}

Configuration

Add the server to your MCP client configuration (.claude/mcp.json for Claude Code, or the equivalent in Cursor and other MCP clients):

{
  "mcpServers": {
    "radhflow": {
      "command": "npx",
      "args": ["@radh/flow-mcp"],
      "env": { "RF_PROJECT_PATH": "/path/to/your/project" }
    }
  }
}

The server speaks standard MCP over stdio. Any client that implements the protocol can connect.
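Under the hood, a tool call is an ordinary JSON-RPC 2.0 request with method "tools/call", and the stdio transport sends one JSON object per line. A sketch of the framing that an MCP SDK would normally handle for you (`tools_call` is illustrative, not a real API):

```python
import itertools
import json

# Monotonically increasing request IDs, as JSON-RPC requires each
# request to carry a unique id.
_ids = itertools.count(1)

def tools_call(name: str, arguments: dict) -> str:
    """Frame an MCP tools/call request as a newline-delimited
    JSON-RPC 2.0 message, ready to write to the server's stdin."""
    msg = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(msg) + "\n"
```

In practice you would use an MCP client library rather than framing messages yourself; the sketch only shows the wire shape a client produces.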

A typical AI agent workflow with Radhflow:

  1. Agent calls list_connectors to discover available node types.
  2. Agent reads the Pipeline Spec and Node Spec for schema details.
  3. Agent calls create_pipeline with generated YAML.
  4. Agent calls validate_pipeline to check for errors.
  5. If errors, agent fixes the YAML and re-validates.
  6. Agent calls run_pipeline to execute.
  7. Agent inspects results and iterates if needed.

No human in the loop required — but every generated file is committed to Git for review.
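The workflow above can be sketched as a single generate-validate-fix-run loop. Here `call_tool` stands in for whatever MCP client the agent uses, and `fix_yaml` for the model revising its own YAML; both are placeholders, not Radhflow APIs:

```python
def agent_build_and_run(call_tool, name: str, yaml_text: str,
                        fix_yaml, max_attempts: int = 3):
    """Create a pipeline, validate it, let the agent fix errors,
    and run it once validation passes."""
    for _ in range(max_attempts):
        call_tool("create_pipeline", {"name": name, "yaml": yaml_text})
        report = call_tool("validate_pipeline", {"pipeline": name})
        if report.get("valid"):
            return call_tool("run_pipeline", {"pipeline": name, "dry_run": False})
        # Feed the structured errors back to the model and retry.
        yaml_text = fix_yaml(yaml_text, report["errors"])
    raise RuntimeError(f"pipeline {name!r} still invalid after {max_attempts} attempts")
```

The bound on attempts matters: structured error responses make each retry cheap, but an agent should still give up and surface the errors rather than loop forever.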