Help when you need it. Humans, not tickets.
Four severity levels. Four tiers. One honest matrix of what to expect when you file. Plus the self-serve stuff you probably wanted before opening a ticket in the first place.
Start here — most questions answer themselves.
Four levels. No gotchas.
You pick the severity when you file; we review it on first response. We default to your call when it's close.
S1 — Critical
Radar is unavailable or unresponsive across a workspace, or a security issue that materially impacts your use of the product.
Examples: dashboard returns 5xx globally; SSO blocks every user from logging in; suspected data exposure.
S2 — High
A core feature is broken for most users in a workspace. Workarounds exist, but they're disruptive.
Examples: the event timeline stops ingesting for one cluster; alert routing to Slack fails; a Helm rollback returns an error in the UI.
S3 — Normal
A non-critical feature is impaired, or the product behaves inconsistently but an acceptable workaround exists.
Examples: specific resource types don't render correctly in the topology; a filter in the Resources view misses some matches.
S4 — Question / Request
How-to questions, configuration guidance, feature requests, and feedback.
Examples: How do I wire SCIM with Azure Entra ID? Can you add a webhook for deploy events?
What to expect from each tier.
Times are from ticket filed to first meaningful human response. Resolution depends on the issue.
| Tier | S1 Critical | S2 High | S3 Normal | S4 Question / Request |
|---|---|---|---|---|
| Free · $0 · GitHub Issues, community | Best-effort | Best-effort | Best-effort | Best-effort |
| Team (most popular) · $99 / cluster / mo · Email + in-app chat + GitHub | 8 business hours | 1 business day | 3 business days | 5 business days |
| Enterprise · Custom · Dedicated CSM, phone, private Slack Connect, email | 30 minutes (24×7) | 2 hours (24×7) | 1 business day | 2 business days |
“Business hours” means 8am–8pm Mon–Fri in your workspace's region (US-East or EU-Central). 24×7 is exactly what it sounds like.
| Tier | Coverage hours | Uptime SLA | Dedicated |
|---|---|---|---|
| Free | Business hours, best-effort | No formal SLA | — |
| Team | Mon–Fri 9am–6pm in your workspace region | 99.5% target (no service credits) | — |
| Enterprise | S1/S2 24×7; S3/S4 Mon–Fri 8am–8pm in your workspace region | 99.9% with service credits | Named CSM + solutions engineer |
Four channels. Pick the one that fits.
GitHub Issues
Best for: bugs in Radar OSS, Radar agent bugs, feature requests, questions the community might benefit from. Maintainers respond within 48 hours on weekdays.
Email support
Best for: paid-tier incidents, account questions, anything that needs a private conversation. Team tickets land in our support queue; Enterprise tickets route straight to your CSM.
In-app chat
Best for: quick questions while you're inside the Radar dashboard. Triaged to the same engineers as email, faster on average.
Dedicated CSM + Slack Connect
Best for: Enterprise incident coordination, ongoing design reviews, architecture calls. You get named humans plus a shared Slack channel between your team and ours.
A good ticket shaves hours off the response.
We've boiled it down to five fields. The more of these we see up front, the faster you get a meaningful reply instead of a follow-up question.
1. The severity you're filing at: S1–S4. We'll adjust if we see it differently, but having your call on the ticket helps us route.
2. What you saw vs. what you expected: one sentence each. If it's a UI bug, a screenshot or short screen recording is worth ten paragraphs.
3. A link back into Radar: workspace, cluster, resource, timeline slice — whatever's relevant. Deep-links preserve filter + time state, so paste the URL from your browser directly.
4. The cluster + Radar agent version: available in Workspace → Settings → Agents. We often need this to reproduce.
5. Whether it's reproducible and how: one-off, intermittent, or "every time I click X." If it's blocking a production workflow, say so — we'll treat that as an S1 modifier.
Severity: S2
Saw: Event timeline stopped updating on prod-us-east around 14:20 UTC.
Expected: Events continue streaming live.
Workspace: app.radarhq.io/w/acme
Cluster: prod-us-east (agent v2026.4.1)
Reproducible: Reload fixes it for ~2 minutes, then it pauses again.
Blocking: Yes — we're mid-incident and need live events back.
Don't file security issues in public GitHub.
Suspected vulnerabilities go to security@skyhook.io. You'll get an acknowledgement within 1 business day. We follow responsible-disclosure conventions and credit researchers in our release notes with your permission.
Questions we get a lot.
- How do I file a ticket?
- Who decides the severity?
- What do response times cover?
- What about weekends?
- Do you offer service credits when you miss an SLA?
- Can I get phone support on Team?
- What if the issue is with Radar OSS, not hosted Radar?
- Can I pay for faster response than Team provides?
- How do you define downtime?
- Is there a separate security-disclosure channel?
Still stuck? Say hi.
radar@skyhook.io lands in the founders' inbox. If the response-time matrix says you should have heard from us by now and you haven't — tell us. We'd rather hear it.
Apache 2.0 OSS or hosted free for 3 clusters. Support humans for everyone else.
Start on Free to try the product. Upgrade to Team for real response times, or Enterprise for 24×7 S1/S2 coverage.
Apache 2.0 OSS · Unlimited clusters self-hosted · Hosted free tier for up to 3 clusters