API Rate Limit Orchestrator
Never hit rate limits again. Intelligent request queuing, auto-retry, and parallel execution for rate-limited APIs.
What It Does
API Rate Limit Orchestrator manages hundreds or thousands of API requests while respecting rate limits. Smart queuing, exponential backoff retries, batch processing, and real-time stats ensure maximum throughput without hitting limits.
Perfect for:
- Data Engineers: Bulk data fetching from rate-limited APIs
- Integration Developers: Managing multi-API workflows
- DevOps Teams: Orchestrating API-heavy operations
- Researchers: Large-scale API data collection
Key Features
Intelligent Rate Limiting
- Token Bucket Algorithm: Smooth request distribution
- Sliding Window: Precise rate control
- Fixed Window: Simple time-based limits
- Multi-Level Limits: Per-second, per-minute, per-hour
- Concurrent Control: Max parallel requests
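To make the token-bucket behavior concrete, here is a minimal sketch in Node.js assuming a simple continuous-refill design. It is illustrative only, not the Actor's internal code (which uses Bottleneck, per Technical Details below).

```javascript
// Minimal token-bucket sketch: tokens refill continuously at `ratePerSec`;
// each request consumes one token, which smooths traffic while still
// allowing short bursts up to `capacity`.
class TokenBucket {
  constructor(ratePerSec, capacity) {
    this.ratePerSec = ratePerSec;
    this.capacity = capacity;
    this.tokens = capacity;
    this.last = Date.now();
  }

  tryRemoveToken() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at bucket capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec,
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Because tokens accumulate up to `capacity`, short bursts are admitted while the long-run rate stays at `ratePerSec`.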
Smart Retry Logic
- Exponential Backoff: Automatic retry delays
- Retry-After Header: Respects API guidance
- 429 Handling: Auto-retry on rate limit errors
- 5xx Retry: Handles temporary server errors
- Timeout Retry: Recovers from network issues
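A minimal sketch of this retry flow, using the retryConfig field names documented under Input Configuration below; `doRequest` is a hypothetical stand-in that returns an object with `status` and `headers`.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry on 429 and 5xx with exponential backoff, preferring the API's own
// Retry-After guidance when the header is present.
async function withRetries(doRequest, cfg = {}) {
  const {
    maxRetries = 3,
    initialDelayMs = 1000,
    maxDelayMs = 30000,
    backoffMultiplier = 2,
  } = cfg;
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    const retriable = res.status === 429 || res.status >= 500;
    if (!retriable || attempt >= maxRetries) return res;
    // delay = initialDelayMs × backoffMultiplier^attempt, capped at maxDelayMs.
    const retryAfterMs = Number(res.headers?.['retry-after']) * 1000;
    const backoffMs = initialDelayMs * backoffMultiplier ** attempt;
    await sleep(Math.min(retryAfterMs || backoffMs, maxDelayMs));
  }
}
```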
Batch Processing
- Configurable Batch Size: Group requests efficiently
- Batch Delays: Prevent burst rate limit hits
- Priority Queuing: High-priority requests first
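A rough sketch of how batching with inter-batch delays can work; `execute` is a hypothetical per-request function, and the helper is illustrative rather than the Actor's internal implementation.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Split requests into fixed-size batches and pause between them so a large
// job never produces a burst that trips the API's rate limit.
async function runInBatches(requests, { batchSize = 100, batchDelayMs = 500 }, execute) {
  const results = [];
  for (let i = 0; i < requests.length; i += batchSize) {
    const batch = requests.slice(i, i + batchSize);
    // Run one batch in parallel, then wait before starting the next.
    results.push(...(await Promise.all(batch.map((r) => execute(r)))));
    if (i + batchSize < requests.length) await sleep(batchDelayMs);
  }
  return results;
}
```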
Real-Time Tracking
- Success/Failure Rates: Monitor request outcomes
- Response Times: P50, P95, P99 latency metrics
- Retry Statistics: Track retry patterns
- Usage Metrics: Request throughput over time
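For reference, the P50/P95/P99 figures can be derived from recorded response times with the nearest-rank method, roughly like this (sample values illustrative):

```javascript
// Nearest-rank percentile over an ascending-sorted array of latencies (ms).
function percentile(sortedMs, p) {
  const idx = Math.max(0, Math.ceil((p / 100) * sortedMs.length) - 1);
  return sortedMs[idx];
}

const times = [89, 120, 210, 234, 450, 890]; // already sorted ascending
console.log({
  p50ResponseTime: percentile(times, 50), // 210
  p95ResponseTime: percentile(times, 95), // 890
  p99ResponseTime: percentile(times, 99), // 890
});
```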
API Presets
- GitHub: 5,000 requests/hour
- Stripe: 100 requests/second
- OpenAI: 3,500 requests/minute
- Twitter: 20 requests/minute
- Shopify: 2 requests/second
- Custom: Define your own limits
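For APIs not covered by a preset, the custom option pairs with an explicit rateLimitConfig, for example (values illustrative):

```json
{
  "apiPreset": "custom",
  "rateLimitConfig": {
    "requestsPerSecond": 4,
    "requestsPerMinute": 200,
    "concurrentRequests": 2,
    "algorithm": "token-bucket"
  }
}
```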
Use Cases
Use Case 1: Bulk GitHub API Fetching
Problem: Need to fetch data for 10,000 repositories without hitting rate limits
{"apiPreset": "github","requests": [{ "id": "1", "url": "https://api.github.com/repos/facebook/react", "method": "GET", "headers": { "Authorization": "token ghp_..." } },{ "id": "2", "url": "https://api.github.com/repos/microsoft/vscode", "method": "GET", "headers": { "Authorization": "token ghp_..." } }// ... 9,998 more],"retryConfig": {"maxRetries": 3,"retryOn429": true,"retryOn5xx": true},"batchConfig": {"enabled": true,"batchSize": 100,"batchDelayMs": 1000}}
Result: All 10,000 requests executed respecting GitHub's 5k/hour limit with auto-retries
Use Case 2: Stripe Payment Processing
Problem: Process 5,000 payment records via Stripe API
{"apiPreset": "stripe","requests": [{ "id": "payment-1", "url": "https://api.stripe.com/v1/charges", "method": "POST", "headers": { "Authorization": "Bearer sk_..." }, "body": { "amount": 1000, "currency": "usd" } }// ... more payments],"rateLimitConfig": {"requestsPerSecond": 100,"concurrentRequests": 25},"retryConfig": {"maxRetries": 5,"initialDelayMs": 2000,"backoffMultiplier": 2}}
Use Case 3: OpenAI Batch Completions
Problem: Generate AI completions for 1,000 prompts
{"apiPreset": "openai","requests": [{ "id": "prompt-1", "url": "https://api.openai.com/v1/chat/completions", "method": "POST", "headers": { "Authorization": "Bearer sk-..." }, "body": { "model": "gpt-4", "messages": [...] }, "priority": 1 }// ... more prompts],"rateLimitConfig": {"requestsPerMinute": 3500,"concurrentRequests": 5},"trackingConfig": {"saveResponses": true,"calculateStats": true}}
Input Configuration
Required Fields
- requests (array): API requests to orchestrate

```json
{
  "id": "unique-id",
  "url": "https://api.example.com/endpoint",
  "method": "GET|POST|PUT|DELETE|PATCH",
  "headers": { "Authorization": "Bearer token" },
  "body": { ... },   // For POST/PUT/PATCH
  "priority": 1      // Optional: 1 = highest
}
```

- rateLimitConfig (object): Rate limiting rules

```json
{
  "requestsPerSecond": 10,      // 0 = no limit
  "requestsPerMinute": 600,
  "requestsPerHour": 5000,
  "concurrentRequests": 5,
  "algorithm": "token-bucket"   // or "sliding-window", "fixed-window"
}
```
Optional Fields
- retryConfig (object): Retry behavior

```json
{
  "maxRetries": 3,
  "initialDelayMs": 1000,
  "maxDelayMs": 30000,
  "backoffMultiplier": 2,
  "retryOn429": true,
  "retryOn5xx": true,
  "retryOnTimeout": true
}
```

- batchConfig (object): Batch processing

```json
{
  "enabled": false,
  "batchSize": 100,
  "batchDelayMs": 500
}
```

- trackingConfig (object): Usage tracking

```json
{
  "enabled": true,
  "logSuccesses": true,
  "logFailures": true,
  "saveResponses": false,
  "calculateStats": true
}
```

- apiPreset (string): Use predefined limits
  - Options: custom, github, stripe, openai, twitter, shopify
Output Data
Individual Request Results
{"requestId": "req-123","url": "https://api.example.com/endpoint","method": "GET","status": "success","statusCode": 200,"responseTime": 234,"retryCount": 0,"timestamp": "2025-11-24T15:30:00.000Z","response": { ... } // If saveResponses: true}
Orchestration Statistics
Stored in the key-value store as orchestration_stats:
{"totalRequests": 1000,"successful": 987,"failed": 13,"successRate": "98.70%","totalRetries": 45,"avgResponseTime": 234,"p50ResponseTime": 210,"p95ResponseTime": 450,"p99ResponseTime": 890,"minResponseTime": 89,"maxResponseTime": 2340,"totalTime": 300000,"totalTimeFriendly": "5 minutes"}
Rate Limiting Algorithms
Token Bucket (Recommended)
- Smooth request distribution
- Allows burst traffic
- Refills tokens continuously
Sliding Window
- Precise rate control
- No burst allowed
- Tracks exact window
Fixed Window
- Simple implementation
- Resets at window boundaries
- Can allow bursts at boundaries
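To contrast with the token-bucket sketch above, here is a minimal sliding-window sketch: because admission is judged against a trailing window rather than fixed boundaries, boundary bursts cannot occur. Illustrative only, not the Actor's internal code.

```javascript
// Keep timestamps of recent requests and admit a new one only if fewer than
// `limit` fall inside the trailing window of `windowMs` milliseconds.
class SlidingWindow {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  tryAcquire() {
    const now = Date.now();
    // Drop timestamps that have fallen out of the trailing window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length < this.limit) {
      this.timestamps.push(now);
      return true;
    }
    return false;
  }
}
```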
Best Practices
Choosing Limits
- Start Conservative: Begin with lower limits, increase gradually
- Monitor Headers: Check API response headers for actual limits (see the snippet after this list)
- Concurrent vs Rate: Balance parallelism with rate limits
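For example, GitHub advertises its limits via X-RateLimit-* response headers, which can be checked before tuning (Node 18+ global fetch; header names differ across providers, so check each API's docs):

```javascript
// Query GitHub's rate-limit status endpoint and print the advertised limits.
async function checkGitHubLimits(token) {
  const res = await fetch('https://api.github.com/rate_limit', {
    headers: { Authorization: `token ${token}` }, // placeholder token
  });
  console.log(
    'limit:', res.headers.get('x-ratelimit-limit'),
    'remaining:', res.headers.get('x-ratelimit-remaining'),
  );
}
```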
Retry Strategy
- 429 Errors: Always retry with exponential backoff
- 5xx Errors: Retry server errors; they're usually temporary
- Timeouts: Retry network timeouts with longer delays
Batch Processing
- Large Jobs: Enable batching for 1000+ requests
- Batch Size: 50-200 requests per batch typically optimal
- Batch Delay: 500-2000ms between batches
Priority Queuing
- Critical Requests: Priority 1
- Normal Requests: Priority 5 (default)
- Background Jobs: Priority 10
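A mixed workload might assign priorities like this (IDs and URLs illustrative):

```json
{
  "requests": [
    { "id": "checkout-1", "url": "https://api.example.com/charge", "method": "POST", "priority": 1 },
    { "id": "sync-42", "url": "https://api.example.com/products", "method": "GET", "priority": 5 },
    { "id": "report-7", "url": "https://api.example.com/analytics", "method": "GET", "priority": 10 }
  ]
}
```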
Technical Details
Dependencies
- Bottleneck: Token bucket rate limiting
- Axios: HTTP client with interceptors
- date-fns: Duration formatting
Rate Limiting
- Reservoir pattern for token bucket
- Automatic reservoir refresh
- Dynamic concurrency control
Retry Logic
- Exponential backoff: delay = initial × multiplier^(retryCount)
- Respects Retry-After headers
- Max delay cap to prevent infinite waits
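As a worked example, with initialDelayMs = 1000, backoffMultiplier = 2, and maxDelayMs = 30000, successive retry delays grow until the cap:

```javascript
// delay = initial × multiplier^(retryCount), capped at maxDelayMs.
const delays = [0, 1, 2, 3, 4, 5].map((n) => Math.min(1000 * 2 ** n, 30000));
console.log(delays); // [1000, 2000, 4000, 8000, 16000, 30000]
```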
Performance
- Parallel Execution: Up to concurrentRequests in parallel
- Memory Efficient: Streams results to dataset
- Throughput: Depends on limits, typically 100-1000 req/min
Monitoring & Debugging
Success Metrics
- Success Rate: Should be >95% for stable APIs
- Avg Response Time: Baseline for API performance
- P95/P99: Identify outliers and slow requests
Failure Analysis
- Failed Requests: Review errors in dataset
- Retry Count: High retries indicate API instability
- Status Codes: Pattern analysis (429s, 5xxs, timeouts)
Optimization
- Increase Concurrency: If response times are good
- Decrease Rate: If hitting 429s frequently
- Adjust Retries: Balance success rate vs time
Integration Examples
CI/CD Pipeline
```yaml
# Orchestrate API calls in GitHub Actions
- name: Bulk API Operation
  run: |
    apify call YOUR_ACTOR_ID --input '{
      "apiPreset": "github",
      "requests": [...],
      "retryConfig": { "maxRetries": 5 }
    }'
```
Data Pipeline
```javascript
// Node.js integration
const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('YOUR_ACTOR_ID').call({
  apiPreset: 'stripe',
  requests: generateRequests(),
  trackingConfig: { saveResponses: true },
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
```
Troubleshooting
High Failure Rate
- Check API credentials in request headers
- Verify rate limit configuration matches API limits
- Enable retryOn429 and retryOn5xx
- Increase initialDelayMs for retry backoff
Slow Execution
- Increase concurrentRequests if API allows
- Reduce batchDelayMs if using batches
- Check if rate limits are too conservative
429 Errors
- Reduce requestsPerSecond/Minute/Hour
- Increase retry delays (initialDelayMs, maxDelayMs)
- Enable batch processing with delays
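Putting these together, a dialed-down configuration for a 429-prone API might look like this (values illustrative, not tuned to any particular API):

```json
{
  "rateLimitConfig": { "requestsPerSecond": 2, "concurrentRequests": 2 },
  "retryConfig": { "maxRetries": 5, "initialDelayMs": 2000, "maxDelayMs": 60000 },
  "batchConfig": { "enabled": true, "batchSize": 50, "batchDelayMs": 2000 }
}
```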
Performance Tips
Maximum Throughput
- Set concurrentRequests to API's concurrent limit
- Use token-bucket algorithm for bursts
- Disable saveResponses unless needed
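Combined, a throughput-oriented input might look like this (values illustrative; match them to the target API's real limits):

```json
{
  "rateLimitConfig": { "requestsPerSecond": 10, "concurrentRequests": 10, "algorithm": "token-bucket" },
  "trackingConfig": { "saveResponses": false }
}
```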
Reliability
- Enable all retry options
- Set maxRetries to 5+
- Use exponential backoff (multiplier: 2-3)
Cost Optimization
- Batch similar requests together
- Prioritize critical requests
- Monitor stats to tune limits
License
MIT License - use freely!
Apify $1M Challenge
Built to solve real rate limiting pain. Help us improve:
- Test with your favorite APIs
- Report edge cases or bugs
- Suggest new API presets
- Share success stories!
Orchestrate with confidence!