Announcements · December 1, 2025 · 6 min read

What We're Building in 2026: Radar OSS and Radar Cloud

Two shipping dates, one product line. Radar OSS ships in January 2026; Radar Cloud follows in February. Here's the scope, the split, and what won't make it into v1.

Eyal Dulberg
CTO, Skyhook

Two shipping dates, one product line, and a clear split between what's free and what costs money.

In January 2026 we're releasing Radar as open source under Apache 2.0. In February 2026 we're releasing Radar Cloud, the hosted multi-cluster version. This post is the engineering view of what lands in each, what doesn't, and why we drew the line where we did.


Why two products instead of one

We've spent the better part of a year talking to teams running Kubernetes at the messy middle of the curve: past the first production cluster, not yet big enough to have a platform team. The consistent pattern: they have three to twelve clusters, no good way to see them together, and a debugging experience that still leans on kubectl plus tabs.

Radar (the OSS tool we built for ourselves) solves the single-cluster debugging problem. It's a Go binary, the React UI is embedded with go:embed, it runs against your kubeconfig, and it shows you resources, topology, a timeline, Helm releases, and traffic - live, via SharedInformers on top of the Watch API. No agent. No cloud account. No data leaving your laptop.

That covers one engineer debugging one cluster. It does not cover:

  • Multiple clusters visible in one place
  • History that survives restarts and outlives your current kubeconfig session
  • Alerts when something breaks at 3am and nobody's looking
  • RBAC beyond whatever your kubeconfig already grants
  • A shared URL to a specific resource at a specific time for your incident channel

Those are team problems. They need a server, persistence, auth, and a notification pipeline. Trying to solve them inside the OSS binary would make Radar slow, opinionated, and hard to run on a laptop. So we split.

January 2026: Radar OSS

Radar ships as a single Go binary. The React frontend is compiled in. On startup it reads your kubeconfig, opens your browser, and connects directly to the API server.

curl -fsSL https://raw.githubusercontent.com/skyhook-io/radar/main/install.sh | bash
kubectl radar

Also available via Homebrew, Krew, and an in-cluster Helm chart for teams that want a shared instance.

What's in the v1 binary:

  • Resources view - browse every resource type, YAML edit, logs, events, exec, port-forward, image filesystem browsing
  • Topology view - structured DAG of ownership, service routing, ingress paths, config dependencies
  • Timeline view - live stream of Kubernetes events and resource changes, in-memory by default, SQLite-backed if you pass --timeline-storage sqlite
  • Helm view - list, inspect, upgrade, rollback, uninstall; uses the Helm Go SDK directly, not shelling out
  • Traffic view - auto-detects Cilium/Hubble or Caretta and draws a live flow graph

GitOps gets first-class treatment. Argo CD Applications and Flux resources (Kustomizations, HelmReleases, GitRepositories) show up in topology with sync status, and you can trigger sync/reconcile from the UI. CRDs are auto-discovered via dynamic informers, so Argo Rollouts, Istio VirtualServices, and your own CRDs appear without plugin work.
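In the real tool that auto-discovery is client-go's dynamic informer machinery. As a rough, stdlib-only sketch of the shape it takes (discover resource types, start an informer exactly once per type), with plain strings standing in for GroupVersionResources and a stub where the informer goroutine would spawn:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// registry tracks which resource types already have a running informer.
// This is an illustrative stand-in, not Radar's actual code: the discovery
// source, GVR strings, and sync method are all assumptions.
type registry struct {
	mu      sync.Mutex
	watched map[string]bool // GVR string -> informer already running
}

func newRegistry() *registry { return &registry{watched: map[string]bool{}} }

// sync starts a watcher for each newly discovered GroupVersionResource
// and returns the ones it just started, sorted for stable output.
func (r *registry) sync(discovered []string) []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	var started []string
	for _, gvr := range discovered {
		if r.watched[gvr] {
			continue // informer already running for this resource type
		}
		r.watched[gvr] = true
		started = append(started, gvr) // real code: spawn a dynamic informer here
	}
	sort.Strings(started)
	return started
}

func main() {
	r := newRegistry()
	fmt.Println(r.sync([]string{"apps/v1/deployments", "argoproj.io/v1alpha1/rollouts"}))
	// A later discovery pass only picks up the genuinely new CRD.
	fmt.Println(r.sync([]string{"apps/v1/deployments", "example.com/v1/databases"}))
}
```

Running discovery on a loop against this kind of register-once set is what lets a freshly installed CRD show up in the UI without a restart or a plugin.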

Source lands at github.com/skyhook-io/radar in January. Apache 2.0. We want PRs.


February 2026: Radar Cloud

Radar Cloud is a hosted backend and an agent you run in each cluster. The agent is a small Go binary (~32MB RSS steady state) that uses the same SharedInformer pattern as the OSS tool, but instead of serving a local UI it ships state changes, Kubernetes events, Helm release data, and coarse pod metrics up to agents.radarhq.io:443 over mutual TLS.

The connection is outbound-only. No inbound ports. No NodePort, no LoadBalancer, no Ingress. If your cluster can reach the public internet, the agent works. If it can't, you stay on OSS.

helm repo add skyhook https://skyhook-io.github.io/helm-charts
helm repo update
 
helm install radar-cloud-agent skyhook/radar-cloud-agent \
  --namespace radar-cloud --create-namespace \
  --set token=$RADAR_HUB_TOKEN

The enrollment token is issued from the Radar dashboard. The agent runs with a scoped Kubernetes ServiceAccount - read-only by default, read/write if you opt in for features like rollback from the UI. Logs, exec streams, and port-forwards all go through the agent on demand and are never stored at rest.
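Conceptually, the read-only default amounts to a verb gate in front of every request the agent relays. A minimal sketch, with hypothetical names (the actual gating lives in the Kubernetes RBAC granted to the ServiceAccount, not in agent code):

```go
package main

import "fmt"

// Illustrative verb sets; the real permissions are expressed as a
// ClusterRole bound to the agent's ServiceAccount.
var readVerbs = map[string]bool{"get": true, "list": true, "watch": true}
var writeVerbs = map[string]bool{"create": true, "update": true, "patch": true, "delete": true}

// agentMode models the opt-in: read-only by default, writes only if the
// cluster admin enabled them at install time.
type agentMode struct{ allowWrites bool }

func (m agentMode) permits(verb string) bool {
	if readVerbs[verb] {
		return true
	}
	return m.allowWrites && writeVerbs[verb]
}

func main() {
	def := agentMode{}                    // default install: read-only
	optIn := agentMode{allowWrites: true} // opt-in, e.g. to allow rollback from the UI
	fmt.Println(def.permits("list"), def.permits("delete"), optIn.permits("delete"))
}
```

The point of the split is that features like Helm rollback from the UI fail closed: if you never opted in, the credential the agent holds simply cannot perform the write.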

In the February GA release:

  • Fleet view across every connected cluster
  • Persistent event timeline, 24 hours on Free, 30 days on Team, 1 year+ on Enterprise
  • SSO: Okta, Google Workspace, Entra ID, generic SAML/OIDC
  • Scoped RBAC: per-cluster and per-namespace, with Admin/Operator/Viewer plus Custom roles
  • Alerts to Slack, PagerDuty, Opsgenie, MS Teams, and generic webhooks
  • Smart event correlation: dedup, grouping into incidents, suppression windows
  • Shareable deep links that preserve filters and time range
  • REST and GraphQL API for automation
  • SOC 2 Type II compliance
  • Data residency in US (us-east-1) or EU (eu-west-1)

SAML/OIDC SSO ships on Team and up; SCIM 2.0 provisioning is Enterprise-only. Audit logs ship on every tier (retention grows with tier: 7 days on Free, 30 days on Team, 1 year on Enterprise). The backend is Go microservices, PostgreSQL for metadata, ClickHouse for timeline events at scale.

What's not in v1

Being specific about gaps is more useful than listing features.

  • No multi-cluster topology yet. Fleet view gives you the aggregate dashboard and cross-cluster timeline in February. A true cross-cluster topology graph (tracing a service mesh request from cluster A to cluster B) is a Q2 target, not a launch feature. The rendering and the data model both need more work.
  • No Windows node support. The agent ships as a Linux container only. We don't have Windows nodes in our test matrix, and we'd rather not claim support we haven't exercised.
  • Limited CRD coverage in alerts. Alert rules in v1 target core resources, Kubernetes events, Helm release status, and the GitOps CRDs from Argo CD and Flux. Alerting on arbitrary CRDs (say, a custom Database resource) needs a rule-authoring UI we haven't finished.
  • Self-hosted below the Enterprise tier. More on this below.

The self-hosted question

We've been asked this enough that it deserves a paragraph.

Radar Cloud is a multi-tenant hosted service by default: ClickHouse clusters, Postgres with tenant isolation, a control plane that manages enrollment tokens and rolling agent upgrades. Packaging that into a single-tenant BYOC / on-prem install is a real project, which is why we only support it on the Enterprise tier. Free and Team are SaaS-only.

This matters for two groups: teams in air-gapped environments (defense, some finance, some regulated health workloads) and teams whose security policy prohibits outbound telemetry to a shared multi-tenant SaaS. If you need to keep data inside your own infrastructure, Enterprise BYOC is the path; if you don't need the Radar Cloud backend at all, Radar OSS is the right answer, and it will stay the right answer. We are not going to cripple the OSS version to push you toward Radar Cloud.

When to use which

Use Radar OSS when you're a single engineer debugging your own cluster, when you're working in an air-gapped environment, when you need a fast local tool for incident response without logging in to anything, or when you want to see what a cluster is doing in under ten seconds flat. It's a better kubectl for most day-to-day visibility work.

Use Radar Cloud when you're a team. When you want the timeline to still be there on Monday morning for the incident that happened Saturday. When you want Slack to tell you a deployment is crash-looping before a customer does. When "who changed that ConfigMap" needs an answer that survives pod restarts. When your engineers log in with Okta and you don't want to hand out kubeconfigs as the unit of access control.

|                  | Radar OSS                                | Radar Cloud                                                      |
|------------------|------------------------------------------|------------------------------------------------------------------|
| Install location | Your laptop or a shared in-cluster pod   | Hosted; agent per cluster                                        |
| Multi-cluster    | One context at a time                    | Fleet view across all connected clusters                         |
| Event retention  | In-memory, SQLite optional               | 24 hours Free / 30 days Team / 1 year+ Enterprise                |
| Audit log        | -                                        | 7 days Free / 30 days Team / 1 year Enterprise                   |
| Auth             | Your kubeconfig                          | Google/GitHub + SAML/OIDC SSO on Team; + SCIM 2.0 on Enterprise  |
| RBAC             | Whatever your kubeconfig grants          | Per-cluster and per-namespace; Admin/Operator/Viewer/Custom      |
| Alerts           | None                                     | Slack/PagerDuty/MS Teams on Team; + webhooks on Enterprise       |
| Data residency   | Your machine                             | US (us-east-1) or EU (eu-west-1); BYOC on Enterprise             |
| SLA              | None (it's a binary you run)             | 99.5% on Team, 99.9% on Enterprise                               |
| Pricing          | Free, Apache 2.0                         | Free (3 clusters), $99/cluster/month Team, Enterprise custom     |

The rest of 2026

Q2: cross-cluster topology, alert rules for custom CRDs, a first pass at cost visibility using the metrics the agent already ships. Q3: deeper GitOps workflows (drift detection history, rollback orchestration across clusters), expanded Windows support if the demand signal is there. Q4: the self-hosted story.

If you want to follow along, the OSS repo goes public in January at github.com/skyhook-io/radar and the Radar waitlist is already open at radarhq.io. We'll post install-day walkthroughs for both. The next post in this series gets into the agent architecture: what it actually sends, what it deliberately doesn't, and why the ~32MB number is the one we kept optimizing toward.

