Radar OSS · MCP server · Beta

Give Claude, Cursor, and Copilot your cluster. Safely.

Not a dumb pipe for kubectl. Radar ships a built-in MCP server with purpose-built tools that hand the model a pre-built topology, problem-correlated timelines, deduplicated events, and error-filtered logs — already minified, with secrets redacted. 12 read tools, 5 non-destructive write tools. Local-only by default, RBAC-aware, enabled on first install.

Apache 2.0 · Enabled by default · Disable with --no-mcp
claude · connected to radar · mcp/http
You
why is the payments pod in crashloop
Claude · calling radar tools
get_resource(kind="Pod", name="payments-7d9f")
get_events(namespace="prod", since="1h")
get_pod_logs(name="payments-7d9f", level="error")
Response
The pod payments-7d9f is in CrashLoopBackOff. The kubelet reported OOMKilled at 14:01:47 — the container exceeded its 512Mi limit. A ConfigMap change 90 seconds earlier set cache.preload_on_startup: true, which matches the timing. Revert the ConfigMap or raise the memory limit.
3 tools, 1,847 tokens. kubectl get pod -o yaml would have been ~12k.
The problem

Raw kubectl is a bad MCP tool.

Two problems kill the naive “let Claude run kubectl” setup.

Token waste. kubectl get pod -o yaml returns hundreds of lines of managedFields, status conditions, and metadata noise. Three of those and you've burned the model's context window on data you didn't need.

Write-access risk. Giving an agent shell access to your cluster means a wrong inference can kubectl delete -f something important. You need write capabilities that are scoped, annotated, and non-destructive.

Radar's MCP server fixes both. Reads are minified and enriched with Radar's already-computed topology and health. Writes are confined to five explicit tools, annotated so the AI client knows they mutate state, and scoped to non-destructive operations (restart, scale, sync, cordon — no delete).
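A sketch of how that split can look in practice. This is illustrative Python, not Radar's source; the hint fields (`readOnlyHint`, `destructiveHint`) come from the MCP tool-annotations spec, and clients use them to decide when to prompt before executing a tool.

```python
# Illustrative sketch, not Radar's source: MCP tool annotations let a
# server flag which tools mutate state, so clients can confirm first.
READ_TOOLS = {"get_dashboard", "get_resource", "get_events", "get_pod_logs"}
WRITE_TOOLS = {"apply_resource", "manage_workload", "manage_cronjob",
               "manage_gitops", "manage_node"}

def tool_annotations(name: str) -> dict:
    """Return MCP-style annotation hints for a tool name."""
    if name in READ_TOOLS:
        return {"readOnlyHint": True}
    if name in WRITE_TOOLS:
        # Mutating, but scoped to non-destructive operations (no delete).
        return {"readOnlyHint": False, "destructiveHint": False}
    raise KeyError(name)

print(tool_annotations("get_events"))       # read: safe to auto-run
print(tool_annotations("manage_workload"))  # write: client should confirm
```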

The tool surface

12 read tools. 5 write tools. Every one purpose-built.

Each tool does work a raw kubectl wrapper can't — composing related calls into one, correlating timelines with problems, collapsing duplicate events, redacting secrets, and emitting minified output sized for a model's context window.
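To make the token arithmetic concrete, here is a minimal sketch of the minification idea: keep only the fields a model needs to reason about a Pod, drop managedFields and other verbose metadata. The field selection is hypothetical; Radar's actual summaries will differ.

```python
# Illustrative minification sketch (not Radar's actual field list).
NOISE = {"managedFields", "resourceVersion", "uid", "generation", "annotations"}

def minify_pod(raw: dict) -> dict:
    """Reduce a raw Pod object to a context-window-friendly summary."""
    meta = {k: v for k, v in raw.get("metadata", {}).items() if k not in NOISE}
    status = raw.get("status", {})
    return {
        "metadata": meta,
        "phase": status.get("phase"),
        "restarts": sum(cs.get("restartCount", 0)
                        for cs in status.get("containerStatuses", [])),
    }

raw = {"metadata": {"name": "payments-7d9f", "managedFields": ["..."], "uid": "x"},
       "status": {"phase": "Running",
                  "containerStatuses": [{"restartCount": 7}]}}
print(minify_pod(raw))
```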

Read tools · 12 · read-only
get_dashboard

Cluster health overview — resource counts, problems, warning events, Helm status. Auto-correlates the recent changes that touched any broken resource.

list_resources

Lists resources with minified summaries (pods, deployments, services, CRDs, etc.).

get_resource

Detailed view of a single resource, with optional events + relationships + metrics + logs in one call — one tool instead of four.

get_topology

Pre-built topology graph (nodes + edges) or an LLM-friendly text summary of resource chains and problems.

get_events

Kubernetes events, deduplicated (same reason+message collapsed) and sorted by recency.

get_changes

Resource changes (creates, updates, deletes) from the cluster timeline, with computed diffs.

get_cluster_audit

Best-practice findings with remediation — security, reliability, efficiency. Filter by namespace, category, severity.

get_pod_logs

Pod logs filtered to errors/warnings, with secret redaction. Falls back to the last 20 lines if nothing matches.

get_workload_logs

Logs from every pod of a workload, deduplicated and error-prioritized across replicas.

list_namespaces

List all namespaces with status.

list_helm_releases

List all Helm releases with status and health.

get_helm_release

Helm release info with optional values, revision history, and manifest diff across revisions.
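The get_events behavior above — collapse events sharing the same reason and message, then sort by recency — can be sketched like this. An illustrative reimplementation, assuming each event carries a reason, a message, and a lastSeen timestamp:

```python
def dedupe_events(events):
    """Collapse events by (reason, message); keep newest timestamp and a count."""
    buckets = {}
    for ev in events:
        key = (ev["reason"], ev["message"])
        if key not in buckets:
            buckets[key] = {**ev, "count": 1}
        else:
            b = buckets[key]
            b["count"] += 1
            b["lastSeen"] = max(b["lastSeen"], ev["lastSeen"])
    return sorted(buckets.values(), key=lambda e: e["lastSeen"], reverse=True)

events = [
    {"reason": "BackOff", "message": "restarting failed container", "lastSeen": "14:02"},
    {"reason": "BackOff", "message": "restarting failed container", "lastSeen": "14:05"},
    {"reason": "OOMKilled", "message": "memory limit exceeded", "lastSeen": "14:01"},
]
out = dedupe_events(events)
print(out[0]["reason"], out[0]["count"])  # → BackOff 2
```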

Write tools · 5 · annotated, non-destructive
apply_resource

Create or update a Kubernetes resource from YAML.

manage_workload

Restart, scale, or rollback a Deployment, StatefulSet, or DaemonSet.

manage_cronjob

Trigger, suspend, or resume a CronJob.

manage_gitops

Manage ArgoCD and FluxCD resources — sync, reconcile, suspend, resume.

manage_node

Cordon, uncordon, or drain a Kubernetes node.

Note: `scale` is not supported for DaemonSets. No delete, no namespace mutation, no arbitrary shell.
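The DaemonSet restriction is easy to picture as a validation step run before anything touches the cluster. A hypothetical sketch — `validate_workload_action` is not Radar's API, just the shape of the rule:

```python
def validate_workload_action(kind: str, action: str) -> None:
    """Reject unsupported (kind, action) pairs before touching the cluster."""
    allowed = {
        "Deployment":  {"restart", "scale", "rollback"},
        "StatefulSet": {"restart", "scale", "rollback"},
        # DaemonSets run one pod per node: there is no replica count to scale.
        "DaemonSet":   {"restart", "rollback"},
    }
    if action not in allowed.get(kind, set()):
        raise ValueError(f"{action} is not supported for {kind}")

validate_workload_action("Deployment", "scale")  # fine
try:
    validate_workload_action("DaemonSet", "scale")
except ValueError as e:
    print(e)  # → scale is not supported for DaemonSet
```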
cluster:// resources · MCP resource URIs
cluster://health

Cluster health summary (same data as get_dashboard).

cluster://topology

Full cluster topology graph.

cluster://events

Recent warning events (up to 50).

Wire it up

Five MCP clients. Five copy-paste blocks. All verbatim.

Install Radar. Paste one of these into your MCP client's config. That's it.

Claude Code · One-liner CLI
bash
claude mcp add radar --transport http http://localhost:9280/mcp
Claude Desktop
json
{
  "mcpServers": {
    "radar": {
      "type": "http",
      "url": "http://localhost:9280/mcp"
    }
  }
}
Cursor
json
{
  "mcpServers": {
    "radar": {
      "url": "http://localhost:9280/mcp"
    }
  }
}
VS Code Copilot
json
{
  "servers": {
    "radar": {
      "type": "http",
      "url": "http://localhost:9280/mcp"
    }
  }
}
Cline
json
{
  "mcpServers": {
    "radar": {
      "url": "http://localhost:9280/mcp",
      "type": "streamableHttp"
    }
  }
}

Also supported: Windsurf, JetBrains, Codex, Gemini CLI. See the full MCP docs for all 9 clients.

What you can ask

Real questions, the tools they'll resolve to.

What's wrong with my cluster right now?
get_dashboard
Resource counts, problems by kind, recent warnings, Helm release health
Why is the payments pod failing?
get_pod_logs + get_events + get_resource
Scrubbed logs filtered to errors/warnings, recent events for the Pod, full resource detail with related context
What changed in the last hour that could explain this?
get_changes
All creates, updates, and deletes from the cluster timeline within the window
Diff my last two Helm revisions for the api chart.
get_helm_release (with include=diff)
Values diff, manifest diff, and history for the release
Show me the network topology around the ingress namespace.
get_topology
LLM-friendly structured graph of nodes, edges, and ownership for the requested scope
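One plausible shape for that LLM-friendly text summary, built from (source, relationship, target) edges. This is a hypothetical rendering — Radar's actual output format may differ:

```python
def summarize_topology(edges):
    """Render topology edges as compact, LLM-readable lines."""
    return "\n".join(f"{src} --{rel}--> {dst}" for src, rel, dst in edges)

edges = [
    ("Ingress/web", "routes-to", "Service/payments"),
    ("Service/payments", "selects", "Pod/payments-7d9f"),
]
print(summarize_topology(edges))
```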
Safe by design

The guardrails aren't optional.

Local-only

The MCP server runs on localhost alongside Radar — bound to 127.0.0.1, not your public interface. AI clients connect to your machine, not the cluster's control plane.

RBAC-aware

Respects your kubeconfig's RBAC permissions. If your ServiceAccount can't list secrets in the payments namespace, neither can the MCP-backed agent. Returns 403 for unauthorized resources.

Three layers of redaction

Secret data never exposed — only key names. Env var values scrubbed for API keys, tokens, and base64 blobs. Pod log output scrubbed for secret patterns before returning.
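A minimal sketch of the log-scrubbing layer, assuming regex-based pattern matching. The patterns are illustrative, not Radar's actual redaction rules:

```python
import re

# Illustrative patterns only: key=value secrets and long base64-looking blobs.
SECRET_PATTERNS = [
    re.compile(r"(api[_-]?key|token|password)\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"\b[A-Za-z0-9+/]{40,}={0,2}"),
]

def redact(line: str) -> str:
    """Scrub secret-looking substrings from a log line before returning it."""
    for pat in SECRET_PATTERNS:
        line = pat.sub("[REDACTED]", line)
    return line

print(redact("connecting with api_key=sk-live-abc123"))  # → connecting with [REDACTED]
print(redact("cache warmed in 312ms"))                   # → cache warmed in 312ms
```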

Read tools are strictly read

The 12 read tools have no write path and no escalation route from a read to a write. Tool hints annotate the write tools so the AI client's UI can prompt before executing them.

Non-destructive write ops only

Write tools restart, scale, sync, cordon, and drain. They never delete pods or namespaces, and the only path to creating or updating resources is apply_resource, with YAML you write and review.

Disable with one flag

Pass `--no-mcp` when starting Radar to turn the server off entirely. On by default, off when you need it off — no stale config surface area.

Open source

Apache 2.0. Yours to inspect, fork, or self-host.

Radar's source is on GitHub. Every feature on this page is in the binary you install with brew install. No telemetry, no mandatory login, no phone-home. If we ever change that, you'll see it in a diff first.

skyhook-io/radar
1.3k★ GitHub stars
Apache 2.0 · Actively maintained

Your AI's new favorite kubectl.

Install Radar, paste a config block, and Claude can reason about your cluster in under a minute.

Apache 2.0 OSS · Unlimited clusters self-hosted · Hosted free tier for up to 3 clusters