Alerts & Notifications
Configure alerts across 14 notification channels so the right people know the moment something goes wrong.
Ionhour sends alerts when your checks change status — going down, running late, or recovering. You configure alert channels at the workspace level, and all checks in the workspace use those channels.
Alert Channels
An alert channel is a destination where Ionhour sends notifications. Each workspace can have multiple channels, and each channel can be independently enabled or disabled.
Supported Channel Types
Ionhour supports 14 notification channel types across five categories:
Chat
| Type | How it works |
|---|---|
| Slack | Posts messages to a Slack channel via OAuth integration |
| Discord | Sends messages to a Discord channel via webhook |
| Microsoft Teams | Sends messages to a Teams channel via incoming webhook |
| Telegram | Sends messages to a Telegram chat via bot API |
| Google Chat | Sends messages to a Google Chat space via webhook |
Email
| Type | How it works |
|---|---|
| Email | Sends emails via Postmark to configured addresses |
Phone
| Type | How it works |
|---|---|
| SMS | Sends text messages to configured phone numbers |
| Phone Call | Places voice calls to configured phone numbers |
| WhatsApp | Sends messages via WhatsApp Business API |
Webhook
| Type | How it works |
|---|---|
| Webhook | Sends a JSON payload to a custom URL (ideal for building your own integrations) |
Incident Management & Issue Trackers
| Type | How it works |
|---|---|
| PagerDuty | Creates incidents in PagerDuty via Events API |
| OpsGenie | Creates alerts in OpsGenie via Alert API |
| Jira | Creates issues in a Jira project |
| YouTrack | Creates issues in a YouTrack project |
Email Alerts
Email is the simplest notification channel. You configure a list of recipient email addresses, and Ionhour sends an email whenever an alert is triggered.
Setting Up Email
Per-User Preferences
Individual users can opt out of email notifications in their profile settings. When a user disables email notifications:
- They are removed from the recipient list for email channels.
- Shared/external email addresses (not tied to a user account) are unaffected.
This lets teams use a shared inbox (e.g., [email protected]) that always receives alerts, while individual team members can control their own notification volume.
Slack Alerts
Slack alerts post messages directly to a Slack channel using an incoming webhook. The integration uses Slack's OAuth flow — no manual webhook URL configuration required.
Setting Up Slack
Once the OAuth flow completes, the Slack channel is connected and will receive alerts immediately. You can verify the connection by clicking Test on the channel to send a test message.
Webhook Alerts
Webhook channels send a JSON payload to any URL you configure. This is useful for building custom integrations with internal tools, triggering automation workflows, or forwarding alerts to services that Ionhour doesn't have a native integration for.
Setting Up a Webhook
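A webhook endpoint is just an HTTP handler that accepts a JSON POST. Here is a minimal receiver sketch using Python's standard library; the payload field names (`check`, `status`) and the port are assumptions — send a test alert with the Test button to inspect the real schema:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def summarize(payload: dict) -> str:
    # Field names here are assumptions, not Ionhour's documented schema.
    return f"{payload.get('check')} -> {payload.get('status')}"

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("alert:", summarize(payload))
        self.send_response(204)  # respond 2xx so the sender treats delivery as successful
        self.end_headers()

def run(port: int = 8080) -> None:
    """Call run() to start accepting alert POSTs on the given port."""
    HTTPServer(("", port), AlertHandler).serve_forever()
```

From here you can trigger whatever automation you need inside `do_POST` — restart a service, page a custom system, or forward the payload elsewhere.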
Incident Management Integrations
PagerDuty, OpsGenie, Jira, and YouTrack channels create incidents or issues directly in your existing incident management and issue tracking tools. Each integration has its own setup drawer — navigate to Settings > Alerts, click Add Channel, and select the service to see the required configuration fields.
What Triggers Alerts
Ionhour sends alerts for these events:
Check Status Changes
| Event | Subject Line | When |
|---|---|---|
| Check goes DOWN | [ionhour] &lt;name&gt; is DOWN | Check has been down long enough to trigger an incident |
| Check is LATE | [ionhour] &lt;name&gt; is LATE | A ping is overdue but still within the failure threshold |
| Check recovers | [ionhour] &lt;name&gt; is back UP | Check returns to OK after being down; includes downtime duration |
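The three events above correspond to transitions in a simple per-check state machine: a ping overdue within the failure threshold is LATE, and one overdue past it is DOWN. A rough sketch — the state names, grace period, and threshold parameters are illustrative, not Ionhour internals:

```python
from enum import Enum

class Status(Enum):
    UP = "up"
    LATE = "late"    # ping overdue, still within the failure threshold
    DOWN = "down"    # overdue long enough to open an incident

def next_status(seconds_overdue: float, grace: float, failure: float) -> Status:
    """Map how overdue a ping is onto a status, given a grace
    period and a failure threshold (both illustrative)."""
    if seconds_overdue <= grace:
        return Status.UP
    if seconds_overdue <= failure:
        return Status.LATE   # triggers the "is LATE" alert
    return Status.DOWN       # triggers the "is DOWN" alert

print(next_status(90, grace=60, failure=300))  # Status.LATE
```

A later ping that arrives after a DOWN transition corresponds to the "back UP" recovery alert.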
Dependency Impact
| Event | Subject Line | When |
|---|---|---|
| Dependency down | [ionhour] &lt;name&gt; impacted by dependency | A check's dependency is reported as unavailable |
Outbound Check Alerts
| Event | Subject Line | When |
|---|---|---|
| Latency breach | [Ionhour] &lt;name&gt; latency breach (&lt;actual&gt;ms &gt; &lt;threshold&gt;ms) | Response time exceeds the configured latency warning threshold |
| SSL expiring | [Ionhour] &lt;name&gt; SSL certificate expires in N day(s) | SSL certificate is within the expiry warning window |
Alert Content
Each alert includes contextual information to help you respond quickly:
Down alerts include:
- Time of the last successful signal
- When the next signal was expected
- 24-hour uptime percentage (when available)
- Link to the incidents page
Recovery alerts include:
- Total downtime duration (from incident start to resolution)
- Time of recovery
- Current uptime percentage
Latency alerts include:
- Measured latency vs. configured threshold
- How much the latency exceeds the limit
SSL alerts include:
- Days remaining until expiry
- Exact expiry date
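Recovery alerts report total downtime from incident start to resolution; computing and formatting that duration is straightforward. A small sketch (the "Xh Ym" format is illustrative, not necessarily Ionhour's exact wording):

```python
from datetime import datetime

def downtime(started: datetime, resolved: datetime) -> str:
    """Format an incident-start-to-resolution duration as 'Xh Ym'."""
    minutes = int((resolved - started).total_seconds() // 60)
    return f"{minutes // 60}h {minutes % 60}m"

start = datetime(2024, 5, 1, 9, 0)
end = datetime(2024, 5, 1, 11, 25)
print(downtime(start, end))  # 2h 25m
```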
Testing Channels
After setting up a channel, use the Test button to send a test notification. This verifies that the channel is correctly configured and can deliver alerts. We recommend testing after any configuration change.
Managing Channels
Enable / Disable
You can temporarily disable a channel without deleting it. Disabled channels don't receive any alerts. This is useful during maintenance windows or when you want to silence a specific channel without losing its configuration.
Muting Individual Checks
If a specific check is noisy, you can mute notifications on that check instead of disabling an entire channel. Muted checks won't trigger alerts regardless of their status, but incidents are still created and tracked.
Muting is configured per-check in the check settings, not at the channel level.
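The behavior described here — incidents still tracked, notifications suppressed — amounts to a guard at dispatch time. A minimal sketch, where the function and field names are hypothetical:

```python
def handle_status_change(check: dict, incidents: list, channels: list) -> list:
    """Record the incident regardless of muting, but only return
    dispatch targets when the check isn't muted and the channel is
    enabled (muting is per-check; enable/disable is per-channel)."""
    incidents.append({"check": check["name"], "status": check["status"]})
    if check.get("muted"):
        return []  # incident tracked, no alerts sent
    return [c for c in channels if c.get("enabled")]

incidents: list = []
targets = handle_status_change(
    {"name": "db-backup", "status": "down", "muted": True},
    incidents,
    [{"type": "slack", "enabled": True}],
)
print(len(incidents), len(targets))  # 1 0
```

Note that disabling a channel silences it for every check, while muting silences every channel for one check — two independent switches.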
Best Practices
- Use multiple channel types. Email provides a reliable, searchable record. Chat (Slack, Teams, Discord) provides real-time team visibility. Incident management tools (PagerDuty, OpsGenie) provide structured on-call routing. They complement each other.
- Set up a shared inbox. Use a team email address (e.g., [email protected]) that always receives alerts, even if individual team members have notifications disabled.
- Test after setup. Always send a test notification after configuring a channel. A misconfigured channel that silently drops alerts is worse than no channel at all.
- Don't mute — fix. If you're muting a check because it's too noisy, the check's schedule or grace period probably needs tuning. Muting should be temporary, not a permanent workaround.
- Review alert fatigue. If your team is ignoring alerts, you have too many. Tighten your thresholds, increase grace periods, or raise the consecutive failure count on outbound checks.
- Use webhooks for custom workflows. If you need to trigger automation (restart a service, page a custom system), use a webhook channel to forward alerts to your own endpoint.