Radar Cloud is a hosted control plane that sits in front of your in-cluster Radars. The Radar binary still runs in each of your clusters - the same OSS engine, configured to dial out over a long-lived WebSocket. Skyhook hosts the team layer (app.radarhq.io UI, api.radarhq.io control plane), and your data plane stays where the cluster is.
What we host: auth, orgs, billing, audit, the cluster registry, and the WebSocket multiplexer that fans your browser’s requests out to the right in-cluster Radar. What we don’t host: the cluster-side binary itself, the K8s API client, your manifests, your logs, your traffic data. Those never leave the cluster.
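To make that split concrete, here is a minimal sketch of the cluster-side half of the tunnel. It assumes a Go agent built on nhooyr.io/websocket and hashicorp/yamux; the wss://api.radarhq.io/tunnel path and the handler stand-in are hypothetical, while the outbound dial, the one-time bearer handshake, and the yamux multiplexing (see Architecture below) are from this page:

```go
package main

import (
	"context"
	"net/http"

	"github.com/hashicorp/yamux"
	"nhooyr.io/websocket"
)

func main() {
	ctx := context.Background()

	// One long-lived, outbound WebSocket. The cluster token is presented
	// exactly once, on this handshake; after that the tunnel itself is
	// the auth boundary (see "Auth boundary" below).
	c, _, err := websocket.Dial(ctx, "wss://api.radarhq.io/tunnel", &websocket.DialOptions{
		HTTPHeader: http.Header{"Authorization": {"Bearer rhc_..."}},
	})
	if err != nil {
		panic(err)
	}

	// Treat the WebSocket as a raw byte stream and multiplex it with yamux.
	conn := websocket.NetConn(ctx, c, websocket.MessageBinary)
	session, err := yamux.Client(conn, nil)
	if err != nil {
		panic(err)
	}

	// yamux.Session satisfies net.Listener: every stream the Hub opens is
	// one proxied request, served by the same OSS HTTP mux.
	if err := http.Serve(session, radarMux()); err != nil {
		panic(err)
	}
}

// radarMux is a stand-in for the real Radar HTTP handler.
func radarMux() http.Handler {
	return http.NewServeMux()
}
```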
What you get on top of OSS
- Multi-cluster from one URL. Each cluster runs its own in-cluster Radar. Cloud federates them - they appear in a switcher, and fleet views aggregate across them.
- Team workflow. Organizations, role-based access (owner / member / viewer), email + SSO invitations, audit log.
- WorkOS auth. Email magic-link, Google, passkeys, plus customer-side SAML / OIDC via the WorkOS Admin Portal.
- Personal access tokens for AI / CI clients, scoped per user, with rotation (see the sketch after this list).
- In-app inbox + webhook deliveries for cluster / org / billing events.
- Stripe billing with a real Free tier, per-cluster Team plan, and Enterprise.
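For the personal-access-token item above, a hedged sketch of what a CI or AI client call might look like. The /c/{cluster_id} prefix is the documented route (see Architecture below); the trailing path and the RADAR_* environment variables are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical env vars a CI job might carry.
	clusterID := os.Getenv("RADAR_CLUSTER_ID")
	pat := os.Getenv("RADAR_PAT")

	// A PAT travels as a plain Authorization header, routed through the
	// Hub to one cluster's Radar. The path after /c/{cluster_id} is
	// illustrative.
	req, err := http.NewRequest("GET",
		"https://api.radarhq.io/c/"+clusterID+"/api/pods", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+pat)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```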
Architecture
Every request from your browser or API client hits api.radarhq.io/c/{cluster_id}/... over HTTPS. The Hub validates your session, opens a fresh stream over the cluster's existing WebSocket, and reverse-proxies the request to the Radar pod inside the cluster. SSE, pod-exec WebSockets, and MCP all flow through the same yamux multiplex.
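A hedged sketch of that hop, assuming the Hub is Go and uses net/http/httputil. The handler shape and the "radar" host placeholder are invented; the per-request yamux stream and the header handling described under Auth boundary are from this page:

```go
package hub

import (
	"context"
	"net"
	"net/http"
	"net/http/httputil"
	"strings"

	"github.com/hashicorp/yamux"
)

// proxyFor returns a handler that forwards one authenticated user's
// requests into a cluster's existing yamux session.
func proxyFor(session *yamux.Session, userID, groups string) http.Handler {
	return &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			r.URL.Scheme = "http"
			r.URL.Host = "radar" // placeholder; real routing is the tunnel itself

			// Anti-spoof: drop any client-supplied forwarding headers,
			// then inject the authenticated identity (see Auth boundary).
			for h := range r.Header {
				if strings.HasPrefix(h, "X-Forwarded-") {
					r.Header.Del(h)
				}
			}
			r.Header.Set("X-Forwarded-User", userID)
			r.Header.Set("X-Forwarded-Groups", groups)
		},
		Transport: &http.Transport{
			// Every outbound "dial" becomes a fresh stream on the
			// cluster's long-lived session; SSE and upgraded
			// WebSockets ride the same path.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return session.Open()
			},
		},
	}
}
```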
What stays in your cluster
| Data | Lives in | Why |
|---|---|---|
| Kubernetes resources, logs, metrics, events | Cluster | Live K8s state is authoritative; copying it into the SaaS would only produce a stale cache. |
| Topology, audit findings, traffic | Cluster | Computed at request time from live K8s state. |
| Helm releases, GitOps state | Cluster | Read straight from the cluster’s own secrets / CRDs. |
What lives in the Cloud
| Data | Why |
|---|---|
| User identity, org membership, roles | Tenancy boundary - Hub enforces access. |
| Cluster registry (id, name, env, labels, install-token hash, last-connected) | So the org has a shared list of clusters. |
| Billing + subscription state | Stripe lives here. |
| Audit log + PATs | Cross-cluster audit trail and AI tokens. |
| User preferences (theme, pinned kinds, notify-on-* toggles) | Follows you across clusters. |
Auth boundary
The cluster's bearer token (Bearer rhc_...) is validated by the Hub on the WebSocket handshake. Once the tunnel is open, in-cluster requests are implicitly trusted - the tunnel is the boundary. There's no per-request token exchange.
For requests through the tunnel, the Hub strips inbound X-Forwarded-* (anti-spoof) and injects X-Forwarded-User: <user_id> + X-Forwarded-Groups: cloud:<role>,cloud:org:<id>,cloud:user:<id>. The in-cluster Radar runs --auth-mode=proxy in cloud mode and uses those headers for Kubernetes impersonation on API calls. See Cloud RBAC for how the cloud groups map to K8s ClusterRoles.
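On the cluster side, --auth-mode=proxy behaves roughly like this minimal client-go sketch. The header names and the impersonation behavior are from this page; the function shape and wiring are an assumption about the implementation:

```go
package radar

import (
	"net/http"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientFor builds a Kubernetes client that impersonates the identity the
// Hub injected, so RBAC is enforced per cloud user and role rather than
// per service account.
func clientFor(r *http.Request, base *rest.Config) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: r.Header.Get("X-Forwarded-User"),
		// e.g. "cloud:member,cloud:org:<id>,cloud:user:<id>"
		Groups: strings.Split(r.Header.Get("X-Forwarded-Groups"), ","),
	}
	return kubernetes.NewForConfig(cfg)
}
```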
Where to next
- Connecting a cluster - the install command and what each Helm value does.
- Organizations & roles - the permission model.
- SSO - SAML / OIDC self-serve.
- Billing & plans - free tier, Team, Enterprise.
- Audit log - what’s recorded and how to export.