Two types of alerts
Issue alerts
Triggered by activity on an issue — when a new issue is created, an existing one regresses, or event frequency crosses a threshold. Best for reacting to specific error patterns.
Metric alerts
Triggered when an aggregate metric (error rate, transaction duration, failure rate) crosses a threshold over a time window. Best for monitoring the overall health of a service.
Issue alerts
Issue alerts fire when specific conditions are met on an issue. You define conditions, optionally add filters, and specify actions to take when the rule fires.
Conditions
Conditions determine when the rule is evaluated. Available conditions:

| Condition | Description |
|---|---|
| A new issue is created | Fires the first time Sentry sees a new fingerprint |
| The issue changes state from resolved to unresolved | Fires on regressions |
| An issue escalates | Fires when Sentry detects a significant increase in event volume for an existing issue |
| Sentry marks an existing issue as high priority | Fires when an existing issue is escalated to high priority automatically |
| The issue is seen more than {X} times in {interval} | Fires when event count exceeds a threshold within a time window (1 minute to 30 days) |
| The issue is seen by more than {X} users in {interval} | Fires when unique user count exceeds a threshold |
| The issue affects more than {X} percent of sessions in {interval} | Fires when an issue impacts a percentage of total sessions |
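The frequency-based conditions above can be modeled as a sliding-window counter. The sketch below is illustrative only, not Sentry's implementation; the class name and fields are hypothetical.

```python
from collections import deque


class FrequencyCondition:
    """Hypothetical model of a "seen more than N times in
    {interval}" condition: count events in a rolling window."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now):
        """Record one event at time `now` (seconds) and report
        whether the rule should fire."""
        self.timestamps.append(now)
        # Evict events that have fallen out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.threshold
```

For example, with a threshold of 3 events in 60 seconds, the fourth event inside the window fires the rule; once the older events age out, it stops firing.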
Filters
Filters narrow down which issues the rule applies to, without changing when it triggers. Examples:

- Issue is assigned to a specific user or team
- Issue has a specific tag value (e.g. `browser:Chrome`)
- Issue level is error, warning, etc.
- Issue age is older or newer than a duration
- The event's attribute matches a value (e.g. `server_name` contains `prod`)
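Conceptually, filters are predicates applied after a condition matches: every filter must pass for the rule to proceed. The dict shape and field names below are illustrative, not Sentry's internal representation.

```python
def matches_filters(issue, filters):
    """Hypothetical filter check: return True only if the issue
    passes every filter attached to the rule."""
    for f in filters:
        if f["type"] == "tag" and issue["tags"].get(f["key"]) != f["value"]:
            return False
        if f["type"] == "level" and issue["level"] != f["value"]:
            return False
    return True
```

A `browser:Chrome` tag filter would pass for a Chrome-tagged issue but block the rule for any other browser.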
Actions
When an alert fires, Sentry executes one or more actions:

| Action | Description |
|---|---|
| Send an email | Notify a member, team, or the issue’s assignee/owner |
| Send a Slack message | Post to a Slack channel or DM |
| Send a PagerDuty notification | Page on-call via PagerDuty |
| Send an OpsGenie alert | Trigger an OpsGenie alert |
| Send a notification via a webhook | POST a JSON payload to a custom URL |
| Create a ticket | Open a ticket in Jira, GitHub Issues, or Azure DevOps |
| Send a notification via an integration | Route through a configured Sentry App |
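For the webhook action, your endpoint receives a JSON POST. A minimal handler sketch; the payload fields used here (`project`, `message`) are assumptions for illustration, not Sentry's exact schema:

```python
import json


def handle_webhook(body: bytes) -> str:
    """Sketch of a webhook endpoint handler. Parses the JSON body
    and builds a one-line summary to forward elsewhere (e.g. an
    internal chat tool). Field names are illustrative."""
    payload = json.loads(body)
    project = payload.get("project", "unknown")
    message = payload.get("message", "no message")
    return f"[{project}] {message}"
```

In production you would also verify the request's signature before trusting the payload.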
Creating an issue alert
Configure conditions
Choose one or more conditions that determine when the rule fires. Select whether all conditions must match or any single condition is sufficient.
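The all-vs-any setting amounts to combining each condition's boolean result; a one-line sketch:

```python
def rule_fires(condition_results, match="all"):
    """Combine per-condition results: with match="all" every
    condition must hold; with match="any" one suffices."""
    return all(condition_results) if match == "all" else any(condition_results)
```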
Metric alerts
Metric alerts monitor aggregate data over a rolling time window. Instead of reacting to a single issue, they watch your error rates, transaction performance, or crash-free session percentages.
Alert thresholds
A metric alert has two threshold levels (critical is required, warning is optional) plus a resolved state:

- Critical — fires when the metric crosses this threshold; highest severity
- Warning — fires at a lower threshold to give early warning before critical is reached
- Resolved — the alert returns to this state when the metric drops back below the thresholds
Time windows
Metric alerts evaluate data over a configurable window: 1 minute, 5 minutes, 10 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, 1 day, or 2 days.
Creating a metric alert
Define the metric
Select the metric to monitor: error count, error rate, transaction duration (p50, p75, p95, p99), failure rate, Apdex, or crash-free session/user rate.
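Percentile metrics such as p95 summarize a distribution of transaction durations. A nearest-rank sketch (Sentry's exact aggregation may differ):

```python
def percentile(durations, p):
    """Nearest-rank percentile: the smallest value with at least
    p percent of the samples at or below it."""
    ordered = sorted(durations)
    # ceil(n * p / 100) - 1, clamped to a valid index.
    k = max(0, -(-len(ordered) * p // 100) - 1)
    return ordered[k]
```

For a service with 100 transaction durations, p95 is the 95th-smallest value, so a handful of slow outliers raise p99 long before they move p50.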
Set thresholds
Enter a critical threshold and optionally a warning threshold. Choose whether the alert fires when the value is above or below the threshold.
Alert status
| Status | Meaning |
|---|---|
| Critical | The metric crossed the critical threshold |
| Warning | The metric crossed the warning threshold but not critical |
| Resolved | The metric returned below all thresholds |
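The statuses in the table can be modeled as a pure function of the current metric value and the configured thresholds. The function below is a sketch; parameter names are illustrative.

```python
def alert_status(value, critical, warning=None, direction="above"):
    """Map a metric value to an alert status. `direction` is
    whether the alert fires when the value is above or below
    the thresholds (e.g. "below" for crash-free rate)."""
    exceeds = (lambda t: value > t) if direction == "above" else (lambda t: value < t)
    if exceeds(critical):
        return "critical"
    if warning is not None and exceeds(warning):
        return "warning"
    return "resolved"
```

Note that a "below" direction is needed for metrics like crash-free session rate, where lower values are worse.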
Metric alerts require that your project is sending transactions or session data to Sentry. Error rate alerts work with error events only.