Documentation Index

Fetch the complete documentation index at: https://radarhq.io/docs/llms.txt

Use this file to discover all available pages before exploring further.

Radar Cloud is a hosted control plane that sits in front of your in-cluster Radars. The Radar binary still runs in each of your clusters - the same OSS engine, configured to dial out over a long-lived WebSocket. Skyhook hosts the team layer (the app.radarhq.io UI and the api.radarhq.io control plane); your data plane stays where the cluster is.

What we host: auth, orgs, billing, audit, the cluster registry, and the WebSocket multiplexer that fans your browser's requests out to the right in-cluster Radar. What we don't host: the cluster-side binary itself, the K8s API client, your manifests, your logs, or your traffic data. Those never leave the cluster.

What you get on top of OSS

  • Multi-cluster from one URL. Each cluster runs its own in-cluster Radar. Cloud federates them - they appear in a switcher, and fleet views aggregate across them.
  • Team workflow. Organizations, role-based access (owner / member / viewer), email + SSO invitations, audit log.
  • WorkOS auth. Email magic-link, Google, passkeys, plus customer-side SAML / OIDC via the WorkOS Admin Portal.
  • Personal access tokens for AI / CI clients, scoped per user, with rotation.
  • In-app inbox + webhook deliveries for cluster / org / billing events.
  • Stripe billing with a real Free tier, per-cluster Team plan, and Enterprise.

The cluster engine itself is identical - same topology, timeline, audit, MCP, integrations.

Architecture

Browser (app.radarhq.io)
  │  session cookie (WorkOS sealed JWT)

Hub (api.radarhq.io) - Go service
  ├─ owns: users, orgs, clusters, members, billing, audit, PATs, prefs
  ├─ stores: Postgres (durable state)

  ├─ per cluster: long-lived WebSocket (yamux)
  │     wss:443 (TLS, outbound from your cluster)
  └─→ Customer Kubernetes cluster
        └─ Radar pod (the OSS binary, in cloud-mode)
           ├─ does NOT store browser session secrets
           ├─ does NOT phone home except on the WS
           ├─ Hub injects X-Forwarded-User / X-Forwarded-Groups on every request
           └─ Radar (--auth-mode=proxy) impersonates those on K8s calls
When you open a cluster in the UI, the browser hits api.radarhq.io/c/{cluster_id}/... over HTTPS. The Hub validates your session, opens a fresh stream over the cluster’s existing WebSocket, and reverse-proxies the request to the Radar pod inside the cluster. SSE, pod-exec WebSockets, and MCP all flow through the same yamux multiplex.

What stays in your cluster

| Data | Lives in | Why |
| --- | --- | --- |
| Kubernetes resources, logs, metrics, events | Cluster | Live K8s state is authoritative; copying it to SaaS would just stale-cache it. |
| Topology, audit findings, traffic | Cluster | Computed at request time from live K8s state. |
| Helm releases, GitOps state | Cluster | Read straight from the cluster's own secrets / CRDs. |

What lives in the Cloud

| Data | Why |
| --- | --- |
| User identity, org membership, roles | Tenancy boundary - Hub enforces access. |
| Cluster registry (id, name, env, labels, install-token hash, last-connected) | So the org has a shared list of clusters. |
| Billing + subscription state | Stripe lives here. |
| Audit log + PATs | Cross-cluster audit trail and AI tokens. |
| User preferences (theme, pinned kinds, notify-on-* toggles) | Follow you across clusters. |

The cluster bearer token is one-time-reveal: the Hub stores only a SHA-256 hash. If you lose the token, rotate it.

Auth boundary

The cluster’s bearer token (Bearer rhc_...) is validated by the Hub on the WebSocket handshake. Once the tunnel is open, in-cluster requests are implicitly trusted - the tunnel is the boundary. There’s no per-request token exchange. For requests through the tunnel, the Hub strips inbound X-Forwarded-* (anti-spoof) and injects X-Forwarded-User: <user_id> + X-Forwarded-Groups: cloud:<role>,cloud:org:<id>,cloud:user:<id>. The in-cluster Radar runs --auth-mode=proxy under cloud-mode and impersonates those headers on K8s calls. See Cloud RBAC for how the cloud groups map to K8s ClusterRoles.

Where to next