Give Claude, Cursor, and Copilot your cluster. Safely.
Not a dumb pipe for kubectl. Radar ships a built-in MCP server with purpose-built tools that hand the model a pre-built topology, problem-correlated timelines, deduplicated events, and error-filtered logs — already minified, with secrets redacted. 12 read tools, 5 non-destructive write tools. Local-only by default, RBAC-aware, enabled on first install.
Raw kubectl is a bad MCP tool.
Two problems kill the naive “let Claude run kubectl” setup.
Token waste. kubectl get pod -o yaml returns hundreds of lines of managedFields, status conditions, and metadata noise. Three of those and you've burned the model's context window on data you didn't need.
Write-access risk. Giving an agent shell access to your cluster means a wrong inference can kubectl delete -f something important. You need write capabilities that are scoped, annotated, and non-destructive.
Radar's MCP server fixes both. Reads are minified and enriched with Radar's already-computed topology and health. Writes are confined to five explicit tools, annotated so the AI client knows they mutate state, and scoped to non-destructive operations (restart, scale, sync, cordon — no delete).
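As a toy illustration of the read-side idea (not Radar's actual implementation), stripping just one piece of server-side bookkeeping from a pod manifest already shows the shape of the problem:

```shell
# Minimal sketch: drop the managedFields block from a manifest
# before handing it to a model. Radar's real minification goes
# much further; this only illustrates the noise being removed.
cat > /tmp/pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  managedFields:
    - manager: kubectl
      operation: Update
status:
  phase: Running
spec:
  containers:
    - name: web
      image: nginx:1.27
EOF
# Delete the managedFields line and everything indented under it.
sed '/managedFields:/,/^[^ ]/{/managedFields:/d;/^[ ]/d}' /tmp/pod.yaml
```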
12 read tools. 5 write tools. Every one purpose-built.
Each tool does work a raw kubectl wrapper can't — composing related calls into one, correlating timelines with problems, collapsing duplicate events, redacting secrets, and emitting minified output sized for a model's context window.
get_dashboard: Cluster health overview — resource counts, problems, warning events, Helm status. Auto-correlates the recent changes that touched any broken resource.
list_resources: Lists resources with minified summaries (pods, deployments, services, CRDs, etc.).
get_resource: Detailed view of a single resource, with optional events + relationships + metrics + logs in one call — one tool instead of four.
get_topology: Pre-built topology graph (nodes + edges) or an LLM-friendly text summary of resource chains and problems.
get_events: Kubernetes events, deduplicated (same reason+message collapsed) and sorted by recency.
get_changes: Resource changes (creates, updates, deletes) from the cluster timeline, with computed diffs.
get_cluster_audit: Best-practice findings with remediation — security, reliability, efficiency. Filter by namespace, category, severity.
get_pod_logs: Pod logs filtered to errors/warnings, with secret redaction. Falls back to the last 20 lines if nothing matches.
get_workload_logs: Logs from every pod of a workload, deduplicated and error-prioritized across replicas.
list_namespaces: List all namespaces with status.
list_helm_releases: List all Helm releases with status and health.
get_helm_release: Helm release info with optional values, revision history, and manifest diff across revisions.
apply_resource: Create or update a Kubernetes resource from YAML.
manage_workload: Restart, scale, or roll back a Deployment, StatefulSet, or DaemonSet.
manage_cronjob: Trigger, suspend, or resume a CronJob.
manage_gitops: Manage ArgoCD and FluxCD resources — sync, reconcile, suspend, resume.
manage_node: Cordon, uncordon, or drain a Kubernetes node.
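A write tool is invoked like any other MCP tool, via a standard tools/call request. Here is a sketch of what a client might send to scale a deployment; the method name comes from the MCP spec, but the argument field names are assumptions, since the exact schema is defined by the server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "manage_workload",
    "arguments": {
      "action": "scale",
      "kind": "Deployment",
      "namespace": "default",
      "name": "web",
      "replicas": 3
    }
  }
}
```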
cluster:// resources: MCP resource URIs.
cluster://health: Cluster health summary (same data as get_dashboard).
cluster://topology: Full cluster topology graph.
cluster://events: Recent warning events (up to 50).
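These are plain MCP resources, so any client can fetch one with a standard resources/read request (method and shape per the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "cluster://health" }
}
```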
Five MCP clients. Five copy-paste blocks. All verbatim.
Install Radar. Paste one of these into your MCP client's config. That's it.
claude mcp add radar --transport http http://localhost:9280/mcp

{
  "mcpServers": {
    "radar": {
      "type": "http",
      "url": "http://localhost:9280/mcp"
    }
  }
}

{
  "mcpServers": {
    "radar": {
      "url": "http://localhost:9280/mcp"
    }
  }
}

{
  "servers": {
    "radar": {
      "type": "http",
      "url": "http://localhost:9280/mcp"
    }
  }
}

{
  "mcpServers": {
    "radar": {
      "url": "http://localhost:9280/mcp",
      "type": "streamableHttp"
    }
  }
}

Also supported: Windsurf, JetBrains, Codex, Gemini CLI. See the full MCP docs for all 9 clients.
Real questions, the tools they'll resolve to.
get_dashboard
get_pod_logs + get_events + get_resource
get_changes
get_helm_release (with include=diff)
get_topology

The guardrails aren't optional.
Local-only
The MCP server runs on localhost alongside Radar — bound to 127.0.0.1, not your public interface. AI clients connect to your machine, not the cluster's control plane.
RBAC-aware
Respects your kubeconfig's RBAC permissions. If your ServiceAccount can't list secrets in the payments namespace, neither can the MCP-backed agent. Returns 403 for unauthorized resources.
Three layers of redaction
Secret data never exposed — only key names. Env var values scrubbed for API keys, tokens, and base64 blobs. Pod log output scrubbed for secret patterns before returning.
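The pattern-scrubbing idea is straightforward; a hypothetical single rule (not Radar's actual rule set) might mask token-like values before a log line leaves the server:

```shell
# One illustrative redaction rule: mask anything that follows
# "token=" so the value never reaches the model.
echo 'GET /api 200 token=sk-abc123DEF456 user=alice' \
  | sed -E 's/(token=)[A-Za-z0-9_-]+/\1[REDACTED]/'
# -> GET /api 200 token=[REDACTED] user=alice
```

Radar applies this kind of scrubbing at three layers (secret values, env vars, log output), each with its own patterns.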
Read tools are strictly read
The 12 read tools have no write path. No privilege-upgrade path from a read to a write. The tool hints annotate write tools so the AI client UI can prompt before executing them.
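Those hints are the MCP spec's standard tool annotations. A write tool's definition can advertise them like this (field names from the spec; the values shown are illustrative, not Radar's actual metadata):

```json
{
  "name": "manage_workload",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": false
  }
}
```

A client that respects these annotations can require confirmation before any non-read-only tool runs.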
Non-destructive write ops only
Write tools are limited to restart, scale, sync, cordon, and drain. They never delete pods or namespaces, and the only path to creating resources is apply_resource, whose YAML you review before it is applied.
Disable with one flag
Pass `--no-mcp` when starting Radar to turn the server off entirely. On by default, off when you need it off — no stale config surface area.
Apache 2.0. Yours to inspect, fork, or self-host.
Radar's source is on GitHub. Every feature on this page is in the binary you install with brew install. No telemetry, no mandatory login, no phone-home. If we ever change that, you'll see it in a diff first.
Four more things Radar does in the same binary.
Live topology graph
Every resource and connection, laid out by ELK.js, updated via SSE.
Event timeline
Every K8s event and resource change, retained past the 1-hour TTL.
Image filesystem viewer
Browse any container image tree without kubectl exec or docker pull.
Cluster audit
31 best-practice checks across security, reliability, and efficiency.
Your AI's new favorite kubectl.
Install Radar, paste a config block, and Claude can reason about your cluster in under a minute.
Apache 2.0 OSS · Unlimited clusters self-hosted · Hosted free tier for up to 3 clusters