Overview
The Warrn Alerts API allows you to programmatically create, update, and manage alerts within your organization. This is perfect for:
- CI/CD Integration: Automatically create alerts when deployments fail
- Monitoring Systems: Send alerts from your existing monitoring tools
- Custom Applications: Integrate alert creation into your applications
- Third-party Tools: Connect external services to Warrn
All alerts created via the API are automatically assigned the integration source type for better analytics and tracking.
Prerequisites
Before using the Alerts API, you’ll need:
- API Key: Create an organization API key with write or admin permissions
- Service & Team IDs: Optional UUIDs for associating alerts with services and teams
Base URL
All API requests should be made to:
https://api.warrn.io/api/
Authentication
Include your API key in the X-API-Key header:
curl -H "X-API-Key: warrn_your_api_key_here" \
  -H "Content-Type: application/json" \
  https://api.warrn.io/api/alerts/
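The same header works from code. Below is a minimal Python sketch using the requests library; the placeholder key mirrors the curl example above.
import requests

headers = {
    "X-API-Key": "warrn_your_api_key_here",  # same placeholder key as the curl example
    "Content-Type": "application/json"
}

# List alerts to confirm the key authenticates successfully
response = requests.get("https://api.warrn.io/api/alerts/", headers=headers, timeout=10)
print(response.status_code)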
Creating Alerts
Basic Alert Creation
curl -X POST "https://api.warrn.io/api/alerts/" \
  -H "X-API-Key: warrn_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Database Connection Failed",
    "description": "Unable to connect to primary database server",
    "severity": "critical"
  }'
Advanced Alert Creation
Create alerts with full metadata, tags, and associations:
curl -X POST "https://api.warrn.io/api/alerts/" \
  -H "X-API-Key: warrn_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "High Memory Usage",
    "description": "Server memory usage exceeded 85% threshold",
    "severity": "medium",
    "service_id": "550e8400-e29b-41d4-a716-446655440000",
    "team_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    "tags": ["performance", "memory", "server-01"],
    "metadata": {
      "server": "web-server-01",
      "memory_usage": "87%",
      "threshold": "85%",
      "region": "us-east-1"
    }
  }'
Alert Fields
Required Fields
| Field | Type | Description |
|---|---|---|
| name | string | Alert title (max 255 characters) |
| description | string | Detailed description of the alert |
| severity | string | Alert severity: low, medium, high, critical |
Optional Fields
| Field | Type | Description |
|---|---|---|
| service_id | UUID | Associate alert with a specific service |
| team_id | UUID | Associate alert with a specific team |
| tags | array | List of tags for categorization and filtering |
| metadata | object | Custom key-value data (useful for context) |
The source field is automatically set to integration for API-created alerts and should not be included in requests.
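Putting the two tables together, a request body can be assembled like this in Python. The values are illustrative, optional keys can simply be left out, and source is never set by the client.
# Required fields
alert = {
    "name": "High Memory Usage",  # max 255 characters
    "description": "Server memory usage exceeded 85% threshold",
    "severity": "medium"          # one of: low, medium, high, critical
}

# Optional fields: include only the ones you actually have values for
alert.update({
    "service_id": "550e8400-e29b-41d4-a716-446655440000",
    "team_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    "tags": ["performance", "memory"],
    "metadata": {"server": "web-server-01", "region": "us-east-1"}
})

# Note: "source" is omitted on purpose; the API sets it to "integration" automatically.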
Listing Alerts
Retrieve alerts with filtering and pagination:
# Get all alerts
curl -H "X-API-Key: warrn_your_api_key_here" \
  "https://api.warrn.io/api/alerts/"

# Filter by status
curl -H "X-API-Key: warrn_your_api_key_here" \
  "https://api.warrn.io/api/alerts/?status=open"

# Filter by severity
curl -H "X-API-Key: warrn_your_api_key_here" \
  "https://api.warrn.io/api/alerts/?severity=critical"

# Multiple filters with pagination
curl -H "X-API-Key: warrn_your_api_key_here" \
  "https://api.warrn.io/api/alerts/?status=open&severity=high&limit=10&offset=0"
Available Filters
| Parameter | Type | Description | Values |
|---|---|---|---|
| status | string | Filter by alert status | open, acknowledged, resolved |
| severity | string | Filter by severity level | low, medium, high, critical |
| service_id | UUID | Filter by service | Valid service UUID |
| team_id | UUID | Filter by team | Valid team UUID |
| limit | integer | Number of results (max 100) | Default: 20 |
| offset | integer | Pagination offset | Default: 0 |
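These filters map directly onto query parameters in code. The sketch below pages through open, high-severity alerts using limit and offset, stopping when an empty page comes back; that stopping condition is an assumption based on the array response shown below, so verify it against your account.
import requests

BASE_URL = "https://api.warrn.io/api/alerts/"
HEADERS = {"X-API-Key": "warrn_your_api_key_here"}

def fetch_open_high_alerts(page_size=20):
    offset = 0
    while True:
        params = {"status": "open", "severity": "high", "limit": page_size, "offset": offset}
        page = requests.get(BASE_URL, headers=HEADERS, params=params, timeout=10).json()
        if not page:
            break  # an empty page means everything has been read
        yield from page
        offset += page_size

for alert in fetch_open_high_alerts():
    print(alert["id"], alert["name"])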
Successful Alert Creation
{
  "id": "123e4567-e89b-12d3-a456-426614174000",
  "name": "Database Connection Failed",
  "description": "Unable to connect to primary database server",
  "severity": "critical",
  "status": "open",
  "source": "integration",
  "service_id": "550e8400-e29b-41d4-a716-446655440000",
  "team_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
  "tags": ["database", "connection"],
  "metadata": {
    "server": "db-primary",
    "error_code": "CONNECTION_TIMEOUT"
  },
  "organization_id": "789e4567-e89b-12d3-a456-426614174999",
  "created_at": "2024-01-15T10:30:00Z",
  "updated_at": "2024-01-15T10:30:00Z",
  "warrn_status": "agent_not_called",
  "is_deleted": false
}
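Because the create response echoes the full alert, you can capture the generated id (plus the server-set status and source) for correlation in your own systems. A short sketch reusing the basic creation payload:
import requests

response = requests.post(
    "https://api.warrn.io/api/alerts/",
    headers={"X-API-Key": "warrn_your_api_key_here", "Content-Type": "application/json"},
    json={
        "name": "Database Connection Failed",
        "description": "Unable to connect to primary database server",
        "severity": "critical"
    },
    timeout=10
)
response.raise_for_status()

created = response.json()
# Store the generated id if you need to reference this alert later
print(created["id"], created["status"], created["source"])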
Alert List Response
[
  {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "name": "Database Connection Failed",
    "severity": "critical",
    "status": "open",
    "service_id": "550e8400-e29b-41d4-a716-446655440000",
    "team_id": "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    "warrn_status": "agent_not_called",
    "created_at": "2024-01-15T10:30:00Z",
    "updated_at": "2024-01-15T10:30:00Z",
    "alert_alias": null,
    "occurrence_count": 1,
    "first_occurred_at": null,
    "last_occurred_at": null
  }
]
Common Use Cases
CI/CD Pipeline Integration
name: Deploy and Monitor

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Application
        run: ./deploy.sh

      - name: Create Alert on Failure
        if: failure()
        run: |
          curl -X POST "https://api.warrn.io/api/alerts/" \
            -H "X-API-Key: ${{ secrets.WARRN_API_KEY }}" \
            -H "Content-Type: application/json" \
            -d '{
              "name": "Deployment Failed",
              "description": "Production deployment failed for commit ${{ github.sha }}",
              "severity": "critical",
              "tags": ["deployment", "production", "ci-cd"],
              "metadata": {
                "commit": "${{ github.sha }}",
                "branch": "${{ github.ref_name }}",
                "workflow": "${{ github.workflow }}",
                "run_id": "${{ github.run_id }}"
              }
            }'
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    try {
                        sh './deploy.sh'
                    } catch (Exception e) {
                        // Create alert on deployment failure
                        sh """
                            curl -X POST "https://api.warrn.io/api/alerts/" \\
                              -H "X-API-Key: ${env.WARRN_API_KEY}" \\
                              -H "Content-Type: application/json" \\
                              -d '{
                                "name": "Jenkins Deployment Failed",
                                "description": "Deployment failed in Jenkins job ${env.JOB_NAME}",
                                "severity": "critical",
                                "tags": ["jenkins", "deployment", "failure"],
                                "metadata": {
                                  "job_name": "${env.JOB_NAME}",
                                  "build_number": "${env.BUILD_NUMBER}",
                                  "build_url": "${env.BUILD_URL}"
                                }
                              }'
                        """
                        throw e
                    }
                }
            }
        }
    }
}
Monitoring Integration
Prometheus Alertmanager Webhook
# alertmanager_webhook.py
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

WARRN_API_KEY = "warrn_your_api_key_here"
WARRN_API_URL = "https://api.warrn.io/api/alerts/"

@app.route('/webhook', methods=['POST'])
def alertmanager_webhook():
    data = request.json
    for alert in data.get('alerts', []):
        # Convert Prometheus alert to Warrn alert
        warrn_alert = {
            "name": alert.get('labels', {}).get('alertname', 'Unknown Alert'),
            "description": alert.get('annotations', {}).get('description', ''),
            "severity": map_prometheus_severity(alert.get('labels', {}).get('severity', 'warning')),
            "tags": [
                alert.get('labels', {}).get('job', ''),
                alert.get('labels', {}).get('instance', ''),
                "prometheus"
            ],
            "metadata": {
                "prometheus_labels": alert.get('labels', {}),
                "prometheus_annotations": alert.get('annotations', {}),
                "generator_url": alert.get('generatorURL', '')
            }
        }

        # Send to Warrn
        response = requests.post(
            WARRN_API_URL,
            headers={
                "X-API-Key": WARRN_API_KEY,
                "Content-Type": "application/json"
            },
            json=warrn_alert
        )
        print(f"Alert sent to Warrn: {response.status_code}")

    return jsonify({"status": "ok"})

def map_prometheus_severity(prom_severity):
    mapping = {
        "critical": "critical",
        "warning": "medium",
        "info": "low"
    }
    return mapping.get(prom_severity, "medium")

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Custom Application Integration
# error_tracker.py
import requests
import traceback
from functools import wraps

class WarrnErrorTracker:
    def __init__(self, api_key, service_id=None, team_id=None):
        self.api_key = api_key
        self.service_id = service_id
        self.team_id = team_id
        self.api_url = "https://api.warrn.io/api/alerts/"

    def track_error(self, error, context=None):
        alert_data = {
            "name": f"Application Error: {type(error).__name__}",
            "description": str(error),
            "severity": self._determine_severity(error),
            "tags": ["error", "application", type(error).__name__.lower()],
            "metadata": {
                "error_type": type(error).__name__,
                "traceback": traceback.format_exc(),
                "context": context or {}
            }
        }
        if self.service_id:
            alert_data["service_id"] = self.service_id
        if self.team_id:
            alert_data["team_id"] = self.team_id

        try:
            response = requests.post(
                self.api_url,
                headers={
                    "X-API-Key": self.api_key,
                    "Content-Type": "application/json"
                },
                json=alert_data,
                timeout=5
            )
            return response.status_code == 200
        except Exception as e:
            print(f"Failed to send error to Warrn: {e}")
            return False

    def _determine_severity(self, error):
        # Customize based on error types
        critical_errors = (SystemExit, KeyboardInterrupt, MemoryError)
        high_errors = (ConnectionError, TimeoutError, FileNotFoundError)
        if isinstance(error, critical_errors):
            return "critical"
        elif isinstance(error, high_errors):
            return "high"
        else:
            return "medium"

# Usage decorator
def track_errors(tracker):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                tracker.track_error(e, {
                    "function": func.__name__,
                    "args": str(args),
                    "kwargs": str(kwargs)
                })
                raise
        return wrapper
    return decorator

# Example usage
tracker = WarrnErrorTracker(
    api_key="warrn_your_api_key_here",
    service_id="550e8400-e29b-41d4-a716-446655440000"
)

@track_errors(tracker)
def risky_function():
    # Your application code
    raise ValueError("Something went wrong!")
Error Handling
Common HTTP Status Codes
| Code | Description | Solution |
|---|---|---|
| 200 | Success | Alert created successfully |
| 400 | Bad Request | Check request format and required fields |
| 401 | Unauthorized | Verify API key is correct and active |
| 403 | Forbidden | Check API key has write or admin permissions |
| 422 | Validation Error | Review field values (severity, UUIDs, etc.) |
| 429 | Rate Limited | Wait and retry, or contact support |
| 500 | Server Error | Contact support if persistent |
Example Error Response
{
  "detail": [
    {
      "type": "value_error",
      "loc": ["body", "severity"],
      "msg": "Invalid severity level",
      "ctx": {
        "error": "Must be one of: low, medium, high, critical"
      }
    }
  ]
}
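Since the detail array names the failing field in loc and the reason in msg, a 422 can be turned into actionable feedback. A hedged sketch follows; it assumes every 422 body follows the shape above.
import requests

response = requests.post(
    "https://api.warrn.io/api/alerts/",
    headers={"X-API-Key": "warrn_your_api_key_here", "Content-Type": "application/json"},
    json={"name": "Bad Alert", "description": "Demo", "severity": "urgent"},  # invalid severity
    timeout=10
)

if response.status_code == 422:
    # Report which field failed validation and why
    for error in response.json().get("detail", []):
        field = ".".join(str(part) for part in error.get("loc", []))
        reason = error.get("ctx", {}).get("error", "")
        print(f"{field}: {error.get('msg')} ({reason})")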
Retry Logic Example
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_alert_with_retry(api_key, alert_data, max_retries=3):
    session = requests.Session()

    # Configure retry strategy
    retry_strategy = Retry(
        total=max_retries,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["POST"],  # POST is not retried by default, so opt in explicitly
        backoff_factor=1
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)

    headers = {
        "X-API-Key": api_key,
        "Content-Type": "application/json"
    }

    try:
        response = session.post(
            "https://api.warrn.io/api/alerts/",
            headers=headers,
            json=alert_data,
            timeout=10
        )
        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Failed to create alert after {max_retries} retries: {e}")
        raise
Best Practices
- Descriptive Names: Use clear, specific alert names
- Rich Context: Include relevant metadata and tags
- Appropriate Severity: Use severity levels consistently
- Deduplication: Use consistent naming for similar alerts (see the sketch after this list)
- Environment Variables: Store API keys securely
- Key Rotation: Regularly rotate API keys
- Least Privilege: Use minimum required permissions
- IP Restrictions: Limit API key access when possible
- Log API Usage: Track alert creation in your application logs
- Monitor Rate Limits: Watch for 429 responses
- Health Checks: Verify API connectivity regularly
- Alert on Failures: Alert when alert creation fails (meta!)
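A small sketch tying several of these practices together follows; the build_alert_name helper and the logging setup are illustrative conventions, not part of the API.
import logging
import os
import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("warrn-alerts")

API_KEY = os.environ["WARRN_API_KEY"]  # keep the key out of source control

def build_alert_name(component, event):
    # A consistent "<component>: <event>" pattern keeps similar alerts easy to group
    return f"{component}: {event}"

def send_alert(payload):
    # Log every attempt so a failed alert creation is still visible in your own logs
    logger.info("Creating Warrn alert: %s", payload["name"])
    try:
        response = requests.post(
            "https://api.warrn.io/api/alerts/",
            headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
            json=payload,
            timeout=10
        )
        response.raise_for_status()
        return True
    except requests.exceptions.RequestException as exc:
        logger.error("Failed to create Warrn alert %r: %s", payload["name"], exc)
        return False

send_alert({
    "name": build_alert_name("payments-db", "Connection Failed"),
    "description": "Unable to connect to primary database server",
    "severity": "critical",
    "tags": ["database", "payments"]
})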
Rate Limits & Quotas
- Default Limit: 1,000 requests per hour per API key
- Burst Allowance: Short bursts up to 100 requests per minute
- Rate Limit Headers: Check X-RateLimit-* headers in responses (see the sketch below)
- Upgrade Options: Contact support for higher limits
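To act on these limits programmatically, inspect the response headers after each call. The exact header names below (X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After) are assumptions based on the X-RateLimit-* convention noted above, so confirm them against a real response.
import requests

response = requests.get(
    "https://api.warrn.io/api/alerts/",
    headers={"X-API-Key": "warrn_your_api_key_here"},
    timeout=10
)

# Header names assumed from the X-RateLimit-* convention; adjust to what the API actually returns
remaining = response.headers.get("X-RateLimit-Remaining")
limit = response.headers.get("X-RateLimit-Limit")
if remaining is not None:
    print(f"{remaining} of {limit} requests remaining in the current window")

if response.status_code == 429:
    # Back off before retrying; Retry-After is a common, but not guaranteed, hint
    print("Rate limited, retry after:", response.headers.get("Retry-After"))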
Next Steps