API Health Checker

Monitor API endpoints with health scoring (0-100), performance ratings, and actionable insights. Checks availability, latency, and response validation. Retry logic with exponential backoff. Get recommendations for slow or failing endpoints. Perfect for uptime monitoring and SLA verification.

Pricing

from $10.00 / 1,000 results

Developer

Varun Chopra

Maintained by Community

Comprehensive API health assessment engine with scoring, multi-dimensional evaluation, and actionable insights. Perfect for uptime monitoring, SLA verification, and automated health checks.

Now with GenAI/LLM API monitoring – supports OpenAI, Anthropic, Google AI, and Azure OpenAI.

Features

  • Health Scoring (0-100) – Quantitative assessment of each endpoint's health
  • Multi-Dimensional Evaluation – Availability, performance, validation, reliability
  • Actionable Insights – Deterministic recommendations for operators
  • Performance Rating – fast / acceptable / slow classification
  • Configurable Scoring – Custom thresholds and weights
  • Retry Logic – Exponential backoff for transient failures (see the sketch after this list)
  • Fault Tolerant – Individual failures don't crash the run
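
The Actor's retry implementation is not published here; the sketch below only illustrates exponential backoff with the 30-second cap noted under Limitations. The 1-second base delay and the helper name are assumptions.

async function callWithRetry(fn, retryCount) {
  // Retry transient failures with exponentially growing delays.
  for (let attempt = 0; attempt <= retryCount; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retryCount) throw err;                 // out of retries
      const delayMs = Math.min(1000 * 2 ** attempt, 30000);  // 1 s, 2 s, 4 s, ... capped at 30 s (base delay assumed)
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}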

GenAI/LLM Features

  • Multi-Provider Support – OpenAI, Anthropic, Google AI, Azure OpenAI, Custom
  • Token Usage Tracking – Input/output/total tokens per request
  • Rate Limit Detection – Automatic detection and retry on 429 errors
  • Content Filtering – Detects safety filter activations
  • Time-to-First-Token – Streaming latency measurement (coming soon)

Health Scoring System

Each endpoint receives a health score from 0-100 based on four dimensions:

Dimension      Weight   Description
Availability   40%      Endpoint reachable + correct status code
Performance    30%      Latency rating (fast=100%, acceptable=67%, slow=33%)
Validation     20%      Response validation passed
Reliability    10%      Retry behavior (0 retries=100%, 1=70%, 2+=30%)

Status Classification

Status      Condition     Description
healthy     Score ≥ 80    Endpoint performing well
degraded    Score 50-79   Endpoint has issues but functional
unhealthy   Score < 50    Endpoint failing or critical issues
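
As a worked illustration of the scoring and classification above, the sketch below combines the four dimension scores with the default weights and maps the result to a status. It is not the Actor's source code; the helper names are hypothetical.

// Hypothetical sketch of the weighted health score and status mapping described above.
const WEIGHTS = { availability: 0.4, performance: 0.3, validation: 0.2, reliability: 0.1 };
const PERFORMANCE_SCORES = { fast: 100, acceptable: 67, slow: 33 };

function healthScore({ reachable, latencyRating, validationPassed, retries }) {
  const availability = reachable ? 100 : 0;
  const performance = PERFORMANCE_SCORES[latencyRating] ?? 0;
  const validation = validationPassed ? 100 : 0;
  const reliability = retries === 0 ? 100 : retries === 1 ? 70 : 30;
  return Math.round(
    availability * WEIGHTS.availability +
    performance * WEIGHTS.performance +
    validation * WEIGHTS.validation +
    reliability * WEIGHTS.reliability
  );
}

function classify(score) {
  return score >= 80 ? 'healthy' : score >= 50 ? 'degraded' : 'unhealthy';
}

// Reachable, "acceptable" latency, validation passed, one retry:
// 40 + 20.1 + 20 + 7 = 87 => "healthy"
const score = healthScore({ reachable: true, latencyRating: 'acceptable', validationPassed: true, retries: 1 });
console.log(score, classify(score)); // 87 healthy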

Input Example

Standard REST Endpoints

{
  "endpoints": [
    {
      "name": "Production API",
      "url": "https://api.example.com/health",
      "method": "GET",
      "expectedStatus": 200,
      "timeoutMs": 10000,
      "maxLatencyMs": 2000,
      "validateJsonKeys": ["status"]
    }
  ],
  "retryCount": 3,
  "notifyOnFailure": true
}

GenAI Endpoints

{
  "endpoints": [],
  "genaiEndpoints": [
    {
      "name": "OpenAI GPT-4",
      "provider": "openai",
      "model": "gpt-4",
      "apiKeyEnvVar": "OPENAI_API_KEY",
      "testPrompt": "Hi, respond with one word.",
      "maxTokens": 50,
      "timeoutMs": 30000
    },
    {
      "name": "Claude 3 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-sonnet-20240229",
      "apiKeyEnvVar": "ANTHROPIC_API_KEY"
    },
    {
      "name": "Gemini Pro",
      "provider": "google",
      "model": "gemini-pro",
      "apiKeyEnvVar": "GOOGLE_AI_KEY"
    }
  ]
}

Output Example

{
  "summary": {
    "totalEndpoints": 3,
    "healthyEndpoints": 2,
    "degradedEndpoints": 1,
    "unhealthyEndpoints": 0,
    "averageLatencyMs": 450,
    "overallHealthScore": 85,
    "overallStatus": "degraded",
    "durationMs": 1523,
    "timestamp": "2024-01-15T10:00:00.000Z",
    "genai": {
      "totalGenAIEndpoints": 2,
      "totalTokensUsed": 150,
      "averageTTFTMs": null,
      "rateLimitedEndpoints": 0,
      "contentFilteredEndpoints": 0
    }
  },
  "results": [
    {
      "name": "OpenAI GPT-4",
      "url": "openai://gpt-4",
      "status": "healthy",
      "healthScore": 95,
      "latencyMs": 850,
      "isGenAI": true,
      "genaiMetrics": {
        "provider": "openai",
        "model": "gpt-4",
        "tokenUsage": {
          "inputTokens": 10,
          "outputTokens": 5,
          "totalTokens": 15
        },
        "isRateLimited": false,
        "isContentFiltered": false,
        "responsePreview": "Hello!"
      }
    }
  ],
  "insights": {
    "recommendedActions": []
  }
}

GenAI Provider Configuration

Provider    Required Fields                 Notes
openai      model, apiKeyEnvVar             Uses chat/completions API
anthropic   model, apiKeyEnvVar             Uses messages API
google      model, apiKeyEnvVar             Uses generateContent API
azure       model, apiKeyEnvVar, baseUrl    OpenAI-compatible format
custom      model, apiKeyEnvVar, baseUrl    For self-hosted or other LLMs

GenAI Endpoint Fields

Field          Type      Required   Default
name           String    Yes        –
provider       String    Yes        –
model          String    Yes        –
apiKeyEnvVar   String    Yes        –
baseUrl        String    *          Provider default
testPrompt     String    No         "Hi, respond with a single word."
maxTokens      Integer   No         50
timeoutMs      Integer   No         30000
streaming      Boolean   No         false

* Required for azure and custom providers
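
For the azure and custom providers, baseUrl must point at your own deployment. A hedged input sketch; every value below is a placeholder, not a real endpoint, deployment, or key:

{
  "genaiEndpoints": [
    {
      "name": "Azure OpenAI (example)",
      "provider": "azure",
      "model": "gpt-4o",
      "apiKeyEnvVar": "AZURE_OPENAI_KEY",
      "baseUrl": "https://<your-resource>.openai.azure.com",
      "maxTokens": 50,
      "timeoutMs": 30000
    }
  ]
}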

Insights & Recommendations

The Actor generates deterministic, rule-based recommendations:

Condition              Recommendation
Endpoint unreachable   Investigate - endpoint is unreachable
Wrong status code      Check - returning X instead of expected Y
Slow performance       Optimize - latency exceeds threshold
GenAI rate limited     Implement backoff or increase quota
Content filtered       Review test prompt or safety settings
High token usage       Reduce test prompt complexity

Environment Variables

Set API keys as environment variables:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_KEY=AI...

Use {{VAR_NAME}} in REST endpoint headers:

{
  "headers": {
    "Authorization": "Bearer {{API_TOKEN}}"
  }
}

Programmatic Integration

// Uses the Apify JavaScript client (apify-client); replace 'your-actor' with the Actor ID.
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Run the Actor, wait for it to finish, then read the report from its default dataset.
const run = await client.actor('your-actor').call(input);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
const report = items[0];

if (report.summary.overallStatus === 'unhealthy') {
  // Trigger alert
}

// Check GenAI health
if (report.summary.genai?.rateLimitedEndpoints > 0) {
  // Handle rate limiting
}
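
The same report also exposes the rule-based recommendations described under Insights & Recommendations. Their entry format is not documented above, so this sketch simply logs whatever is present:

// Surface recommendedActions from the report (entry format not specified above).
for (const action of report.insights?.recommendedActions ?? []) {
  console.log('Recommended action:', action);
}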

Limitations

  • JSON key validation: top-level keys only (see the example after this list)
  • Maximum timeout: 5 minutes per endpoint
  • GenAI streaming TTFT: Coming in future release
  • Retry delays capped at 30 seconds
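
For example, with "validateJsonKeys": ["status", "data"], the illustrative response below passes validation because both keys exist at the top level; a nested key such as data.region cannot be checked:

{
  "status": "ok",
  "data": { "region": "eu-west-1" }
}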

Technical Specifications

Spec               Value
Max timeout        300,000 ms (5 min)
Max retry delay    30,000 ms
Min timeout        100 ms
Score range        0-100
GenAI max tokens   1-1000

Deployment

npm install
npm run build
apify push

Support