API Health Checker
Pricing
from $10.00 / 1,000 results
Monitor API endpoints with health scoring (0-100), performance ratings, and actionable insights. Checks availability, latency, and response validation. Retry logic with exponential backoff. Get recommendations for slow or failing endpoints. Perfect for uptime monitoring and SLA verification.
Developer: Varun Chopra
Comprehensive API health assessment engine with scoring, multi-dimensional evaluation, and actionable insights. Perfect for uptime monitoring, SLA verification, and automated health checks.
Now with GenAI/LLM API monitoring – supports OpenAI, Anthropic, Google AI, and Azure OpenAI.
Features
- Health Scoring (0-100) – Quantitative assessment of each endpoint's health
- Multi-Dimensional Evaluation – Availability, performance, validation, reliability
- Actionable Insights – Deterministic recommendations for operators
- Performance Rating – fast / acceptable / slow classification
- Configurable Scoring – Custom thresholds and weights
- Retry Logic – Exponential backoff for transient failures
- Fault Tolerant – Individual failures don't crash the run
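The retry behavior listed above can be sketched as follows. This is a minimal illustration of exponential backoff with the documented 30-second delay cap, not the Actor's actual implementation; the function name and parameters are illustrative.

```javascript
// Retry an async operation with exponential backoff.
// Delays double each attempt and are capped at 30 s (see Limitations).
async function withRetry(fn, retryCount = 3, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retryCount) throw err; // retries exhausted
      const delayMs = Math.min(baseDelayMs * 2 ** attempt, 30000);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```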
GenAI/LLM Features
- Multi-Provider Support – OpenAI, Anthropic, Google AI, Azure OpenAI, Custom
- Token Usage Tracking – Input/output/total tokens per request
- Rate Limit Detection – Automatic detection and retry on 429 errors
- Content Filtering – Detects safety filter activations
- Time-to-First-Token – Streaming latency measurement (coming soon)
Health Scoring System
Each endpoint receives a health score from 0-100 based on four dimensions:
| Dimension | Weight | Description |
|---|---|---|
| Availability | 40% | Endpoint reachable + correct status code |
| Performance | 30% | Latency rating (fast=100%, acceptable=67%, slow=33%) |
| Validation | 20% | Response validation passed |
| Reliability | 10% | Retry behavior (0 retries=100%, 1=70%, 2+=30%) |
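The weighted model above can be expressed as a small function. This is a hypothetical sketch of the documented weights and ratings, not the Actor's source code; the input shape is illustrative.

```javascript
// Combine the four documented dimensions into a 0-100 health score.
// Weights: availability 40%, performance 30%, validation 20%, reliability 10%.
function healthScore({ available, perfRating, validationPassed, retries }) {
  const availability = available ? 100 : 0;
  const performance = { fast: 100, acceptable: 67, slow: 33 }[perfRating] ?? 0;
  const validation = validationPassed ? 100 : 0;
  const reliability = retries === 0 ? 100 : retries === 1 ? 70 : 30;
  return Math.round(
    availability * 0.4 + performance * 0.3 + validation * 0.2 + reliability * 0.1
  );
}
```

For example, a reachable but slow endpoint that failed validation and needed two retries scores 40 + 9.9 + 0 + 3 ≈ 53, i.e. degraded.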
Status Classification
| Status | Condition | Description |
|---|---|---|
| healthy | Score ≥ 80 | Endpoint performing well |
| degraded | Score 50-79 | Endpoint has issues but functional |
| unhealthy | Score < 50 | Endpoint failing or critical issues |
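The thresholds above map directly to a classifier. A minimal sketch (function name illustrative):

```javascript
// Map a 0-100 health score to the documented status buckets.
function classifyStatus(score) {
  if (score >= 80) return 'healthy';
  if (score >= 50) return 'degraded';
  return 'unhealthy';
}
```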
Input Example
Standard REST Endpoints
```json
{
  "endpoints": [
    {
      "name": "Production API",
      "url": "https://api.example.com/health",
      "method": "GET",
      "expectedStatus": 200,
      "timeoutMs": 10000,
      "maxLatencyMs": 2000,
      "validateJsonKeys": ["status"]
    }
  ],
  "retryCount": 3,
  "notifyOnFailure": true
}
```
GenAI Endpoints
```json
{
  "endpoints": [],
  "genaiEndpoints": [
    {
      "name": "OpenAI GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiKeyEnvVar": "OPENAI_API_KEY",
      "testPrompt": "Hi, respond with one word.",
      "maxTokens": 50,
      "timeoutMs": 30000
    },
    {
      "name": "Claude 3 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-sonnet-20240229",
      "apiKeyEnvVar": "ANTHROPIC_API_KEY"
    },
    {
      "name": "Gemini Pro",
      "provider": "google",
      "model": "gemini-pro",
      "apiKeyEnvVar": "GOOGLE_AI_KEY"
    }
  ]
}
```
Output Example
```json
{
  "summary": {
    "totalEndpoints": 3,
    "healthyEndpoints": 2,
    "degradedEndpoints": 1,
    "unhealthyEndpoints": 0,
    "averageLatencyMs": 450,
    "overallHealthScore": 85,
    "overallStatus": "degraded",
    "durationMs": 1523,
    "timestamp": "2024-01-15T10:00:00.000Z",
    "genai": {
      "totalGenAIEndpoints": 2,
      "totalTokensUsed": 150,
      "averageTTFTMs": null,
      "rateLimitedEndpoints": 0,
      "contentFilteredEndpoints": 0
    }
  },
  "results": [
    {
      "name": "OpenAI GPT-4",
      "url": "openai://gpt-4",
      "status": "healthy",
      "healthScore": 95,
      "latencyMs": 850,
      "isGenAI": true,
      "genaiMetrics": {
        "provider": "openai",
        "model": "gpt-4",
        "tokenUsage": { "inputTokens": 10, "outputTokens": 5, "totalTokens": 15 },
        "isRateLimited": false,
        "isContentFiltered": false,
        "responsePreview": "Hello!"
      }
    }
  ],
  "insights": { "recommendedActions": [] }
}
```
GenAI Provider Configuration
| Provider | Required Fields | Notes |
|---|---|---|
| openai | model, apiKeyEnvVar | Uses chat/completions API |
| anthropic | model, apiKeyEnvVar | Uses messages API |
| google | model, apiKeyEnvVar | Uses generateContent API |
| azure | model, apiKeyEnvVar, baseUrl | OpenAI-compatible format |
| custom | model, apiKeyEnvVar, baseUrl | For self-hosted or other LLMs |
GenAI Endpoint Fields
| Field | Type | Required | Default |
|---|---|---|---|
| name | String | ✓ | – |
| provider | String | ✓ | – |
| model | String | ✓ | – |
| apiKeyEnvVar | String | ✓ | – |
| baseUrl | String | * | Provider default |
| testPrompt | String | | "Hi, respond with a single word." |
| maxTokens | Integer | | 50 |
| timeoutMs | Integer | | 30000 |
| streaming | Boolean | | false |
* Required for azure and custom providers
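Applying the defaults and required-field rules from the table might look like this. A hypothetical sketch, not the Actor's validation code; names are illustrative.

```javascript
// Documented defaults for optional GenAI endpoint fields.
const GENAI_DEFAULTS = {
  testPrompt: 'Hi, respond with a single word.',
  maxTokens: 50,
  timeoutMs: 30000,
  streaming: false,
};

// Fill defaults and enforce that azure/custom providers supply baseUrl.
function normalizeGenAIEndpoint(endpoint) {
  for (const field of ['name', 'provider', 'model', 'apiKeyEnvVar']) {
    if (!endpoint[field]) throw new Error(`Missing required field: ${field}`);
  }
  if (['azure', 'custom'].includes(endpoint.provider) && !endpoint.baseUrl) {
    throw new Error(`baseUrl is required for provider "${endpoint.provider}"`);
  }
  return { ...GENAI_DEFAULTS, ...endpoint };
}
```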
Insights & Recommendations
The Actor generates deterministic, rule-based recommendations:
| Condition | Recommendation |
|---|---|
| Endpoint unreachable | Investigate - endpoint is unreachable |
| Wrong status code | Check - returning X instead of expected Y |
| Slow performance | Optimize - latency exceeds threshold |
| GenAI rate limited | Implement backoff or increase quota |
| Content filtered | Review test prompt or safety settings |
| High token usage | Reduce test prompt complexity |
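The rule table above is deterministic, so it can be sketched as a plain function over a per-endpoint result. This is an illustration of the rule style, not the Actor's implementation; the result fields are assumptions.

```javascript
// Derive rule-based recommendations from a single endpoint result.
function recommend(result) {
  const actions = [];
  if (!result.reachable) {
    actions.push(`Investigate ${result.name} - endpoint is unreachable`);
  } else if (result.statusCode !== result.expectedStatus) {
    actions.push(
      `Check ${result.name} - returning ${result.statusCode} instead of expected ${result.expectedStatus}`
    );
  }
  if (result.performanceRating === 'slow') {
    actions.push(`Optimize ${result.name} - latency exceeds threshold`);
  }
  if (result.genaiMetrics?.isRateLimited) {
    actions.push(`Implement backoff or increase quota for ${result.name}`);
  }
  return actions;
}
```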
Environment Variables
Set API keys as environment variables:
```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_KEY=AI...
```
Use {{VAR_NAME}} in REST endpoint headers:
```json
{
  "headers": {
    "Authorization": "Bearer {{API_TOKEN}}"
  }
}
```
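The {{VAR_NAME}} substitution can be sketched as a simple template pass over the header values. A hypothetical illustration, not the Actor's implementation; unresolved variables are replaced with an empty string here, which may differ from the Actor's behavior.

```javascript
// Replace {{VAR_NAME}} placeholders in header values with environment variables.
function substituteEnvVars(headers, env = process.env) {
  const resolved = {};
  for (const [key, value] of Object.entries(headers)) {
    resolved[key] = value.replace(/\{\{(\w+)\}\}/g, (_, name) => env[name] ?? '');
  }
  return resolved;
}
```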
Programmatic Integration
```javascript
const result = await apify.actor('your-actor').call(input);
const output = await result.dataset().listItems();
const report = output[0];

if (report.summary.overallStatus === 'unhealthy') {
  // Trigger alert
}

// Check GenAI health
if (report.summary.genai?.rateLimitedEndpoints > 0) {
  // Handle rate limiting
}
```
Limitations
- JSON key validation: top-level keys only
- Maximum timeout: 5 minutes per endpoint
- GenAI streaming TTFT: Coming in future release
- Retry delays capped at 30 seconds
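The top-level-only limitation on JSON key validation can be illustrated as follows. A minimal sketch of what that restriction implies, with an illustrative function name:

```javascript
// Check that every expected key exists at the TOP level of the response body.
// Nested keys (e.g. data.status) are NOT matched - see Limitations.
function validateJsonKeys(body, keys) {
  return (
    body !== null &&
    typeof body === 'object' &&
    keys.every((key) => Object.prototype.hasOwnProperty.call(body, key))
  );
}
```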
Technical Specifications
| Spec | Value |
|---|---|
| Max timeout | 300,000 ms (5 min) |
| Max retry delay | 30,000 ms |
| Min timeout | 100 ms |
| Score range | 0-100 |
| GenAI max tokens | 1-1000 |
Deployment
```bash
npm install
npm run build
apify push
```
