LLM Radar - AI Model Pricing, Benchmarks & Status Actor API
Real-time pricing for 115+ AI models across 11 providers, live LMSYS Arena ELO scores, and provider operational status. One API call.

Pricing: Pay per usage
Developer: DataHQ (Maintained by Community)
Last modified: 10 days ago

🎯 LLM Radar - AI Model Pricing & Status Hub

The most comprehensive LLM intelligence API - Get real-time pricing for 115 AI models and live operational status from 11 providers in a single API call.

✨ Why LLM Radar?

Building AI applications? You need to know:

  • 💰 How much does each model cost? (input/output tokens, cached, batch pricing)
  • 🚨 Is the API up right now? (real-time operational status)
  • 🏆 How good is the model? (Live LMSYS Arena ELO, Rank, & Vote counts)
  • 📊 Which model is best for my task? (capability scores from Costbase)

LLM Radar gives you all of this in one structured JSON response.

🚀 Features

  • Live LMSYS Benchmarks: Automatically fetches the latest ELO ratings, ranks, and vote counts from LMSYS Chatbot Arena.
  • 115+ Models: Covers OpenAI, Anthropic, Google, Amazon Bedrock, Mistral, and more.
  • Unified Pricing: Normalized pricing data (per 1M tokens) across all providers.
  • Provider Status: Real-time status checks for 8 major API providers.
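
Providers quote prices in different units (often per 1K tokens); LLM Radar normalizes everything to per 1M tokens. A minimal sketch of that convention (`per1M` is a hypothetical helper for illustration, not part of the API):

```javascript
// Convert a price quoted per 1K tokens to the per-1M-token convention.
function per1M(pricePer1K) {
  return pricePer1K * 1000;
}

// e.g. $0.03 per 1K input tokens ≈ $30 per 1M
console.log(per1M(0.03));
```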

🏢 Supported Providers & Models

| Provider | Models | Highlights |
|---|---|---|
| OpenAI | 30 | GPT-5.1, GPT-5, GPT-4.1, o1, o3-mini, DALL-E, Whisper |
| Mistral | 17 | Large, Medium, Small, Codestral, Pixtral, Ministral |
| Google | 11 | Gemini 2.5, 2.0, 1.5 Flash/Pro, Imagen, Veo |
| Groq | 11 | Llama 3.3/3.2/3.1, Mixtral, Gemma (ultra-fast) |
| Cohere | 10 | Command R+, R, Embeddings, Rerank |
| Amazon | 8 | Titan, Nova Pro/Lite/Micro, Bedrock Embeddings |
| xAI | 8 | Grok 3, Grok 2, Grok Vision |
| Anthropic | 7 | Claude Opus/Sonnet/Haiku 4.5, 4, 3.5 |
| Together | 6 | Llama 3.1 405B/70B, Qwen 2.5, DeepSeek R1 |
| Fireworks | 5 | Llama 3.3, Qwen 2.5, DeepSeek V3 (serverless) |
| DeepSeek | 2 | DeepSeek V3, Reasoner R1 |

Total: 115 models with detailed pricing & capability scores


📊 What You Get

1. Comprehensive Pricing Data

```json
{
  "models": [
    {
      "model_id": "gpt-4.1",
      "provider": "openai",
      "display_name": "GPT-4.1",
      "model_type": "text",
      "tier": "flagship",
      "pricing": {
        "text": {
          "input_per_million": 30.00,
          "output_per_million": 60.00,
          "cached_input_per_million": 15.00
        },
        "context_window": 128000,
        "currency": "USD"
      },
      "benchmarks": {
        "arena_elo": 1287,
        "rank": 24,
        "votes": 24834,
        "coding": 0.94,
        "math": 0.92,
        "source": "LMSYS Chatbot Arena"
      }
    }
  ]
}
```

2. Real-Time Provider Status

```json
{
  "status": [
    {
      "provider": "openai",
      "status": "operational",
      "description": "All Systems Operational",
      "latency_p50_ms": 145
    },
    {
      "provider": "anthropic",
      "status": "degraded",
      "description": "Increased API Latency"
    }
  ]
}
```

3. Summary Statistics

```json
{
  "summary": {
    "total_models": 115,
    "providers": {
      "openai": 30,
      "mistral": 17,
      "google": 11,
      "groq": 11,
      "cohere": 10,
      "xai": 8,
      "anthropic": 7,
      "amazon": 8,
      "together": 6,
      "fireworks": 5,
      "deepseek": 2
    },
    "status_summary": {
      "operational": ["openai", "google", "groq"],
      "degraded": ["anthropic"],
      "outage": []
    }
  }
}
```
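
The summary does not ship pre-computed counts, but they are easy to derive from `status_summary`. A small sketch, assuming the summary shape shown above:

```javascript
// Count providers per status bucket from the summary payload.
const summary = {
  status_summary: {
    operational: ['openai', 'google', 'groq'],
    degraded: ['anthropic'],
    outage: []
  }
};

const counts = Object.fromEntries(
  Object.entries(summary.status_summary).map(([state, providers]) => [state, providers.length])
);
console.log(counts); // → { operational: 3, degraded: 1, outage: 0 }
```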

💰 Data Fields Included

| Category | Fields | Description |
|---|---|---|
| Pricing | `input_per_million`, `output_per_million`, `cached_input_per_million` | Token costs in USD |
| Capability Scores | `coding`, `creative`, `analysis`, `translation`, `math`, `speed` | 0-1 scale performance ratings |
| Model Info | `tier`, `release_date`, `context_window`, `max_output_tokens` | Model specifications |
| Status | `operational`, `degraded`, `outage`, `unknown` | Real-time API health |
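
The pricing fields combine into a straightforward per-request cost estimate. A minimal sketch (`estimateCost` is a hypothetical helper; the `pricing` shape follows the example response above and the token counts are illustrative):

```javascript
// Estimate the USD cost of one request from per-1M-token prices.
// Cached tokens are billed at the cached input rate when available.
function estimateCost(pricing, { inputTokens = 0, outputTokens = 0, cachedTokens = 0 }) {
  const { input_per_million, output_per_million, cached_input_per_million } = pricing.text;
  const cachedRate = cached_input_per_million ?? input_per_million;
  return (
    (inputTokens / 1e6) * input_per_million +
    (outputTokens / 1e6) * output_per_million +
    (cachedTokens / 1e6) * cachedRate
  );
}

// Using the GPT-4.1 example pricing from above:
const pricing = { text: { input_per_million: 30, output_per_million: 60, cached_input_per_million: 15 } };
console.log(estimateCost(pricing, { inputTokens: 10_000, outputTokens: 2_000, cachedTokens: 50_000 }));
// ≈ 0.3 input + 0.12 output + 0.75 cached ≈ 1.17 USD
```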

🎯 Model Tiers

| Tier | Description | Example Models |
|---|---|---|
| `flagship` | Best performance, highest cost | GPT-5.1, Claude Opus 4.5, Grok 3 |
| `standard` | Balanced performance/cost | GPT-5-mini, Claude Sonnet, Gemini 2.0 |
| `budget` | Cost-optimized | GPT-5-nano, Claude Haiku, Ministral 3B |
| `premium` | Specialized (reasoning, etc.) | o1, o3-mini, DeepSeek Reasoner |

🔧 Input Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `providers` | array | `["all"]` | Which providers to include |
| `dataTypes` | array | `["pricing", "status"]` | What data to fetch |
| `forceRefresh` | boolean | `false` | Bypass cache for fresh pricing |

Example Input

```json
{
  "providers": ["openai", "anthropic", "groq"],
  "dataTypes": ["pricing", "status"],
  "forceRefresh": false
}
```

🚀 Use Cases

1. Cost Calculator

```javascript
const { models } = await llmRadar.getData();
const gpt5 = models.find(m => m.model_id === 'gpt-5.1');
// tokens = your prompt's input token count
const cost = (tokens / 1_000_000) * gpt5.pricing.text.input_per_million;
```

2. Find Best Model for Task

```javascript
const { models } = await llmRadar.getData();
const bestForCoding = models
  .filter(m => m.benchmarks?.coding)
  .sort((a, b) => b.benchmarks.coding - a.benchmarks.coding)[0];
console.log(`Best for coding: ${bestForCoding.display_name}`);
```

3. Smart Model Router

```javascript
const { models, status } = await llmRadar.getData();
const operational = status.filter(s => s.status === 'operational').map(s => s.provider);
const cheapest = models
  .filter(m => operational.includes(m.provider) && m.tier === 'budget')
  .sort((a, b) => a.pricing.text.input_per_million - b.pricing.text.input_per_million)[0];
```

4. Status Dashboard

```javascript
const { status } = await llmRadar.getData();
const operationalCount = status.filter(s => s.status === 'operational').length;
console.log(`Operational: ${operationalCount}/${status.length}`);
status.forEach(s => showBadge(s.provider, s.status));
```

⚡ Performance

  • Fast execution: ~5 seconds per run
  • Pricing cached: 30 days (prices rarely change)
  • Status fresh: Always real-time from provider APIs
  • Minimal compute: Low resource usage per run

📡 API Access

After running, access your data via the Apify Dataset API:

```bash
curl "https://api.apify.com/v2/datasets/{DATASET_ID}/items" \
  -H "Authorization: Bearer YOUR_TOKEN"
```

Or use the Apify SDK:

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });
const run = await client.actor('llm-radar').call();
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0].summary); // { total_models: 115, ... }
```

🔄 Data Freshness

| Data Type | Cache Duration | Notes |
|---|---|---|
| Pricing & Models | 30 days | From curated database |
| Status | Always fresh | Fetched from provider APIs each run |

Use `forceRefresh: true` to bypass the pricing cache.


⚡ Optimized Usage

1. 🚨 Status Monitor (Save Costs)

Use Case: Check if APIs are down every 15 minutes. Config:

```json
{
  "dataTypes": ["status"],
  "providers": ["all"]
}
```

Output: Only returns status array. Minimal data transfer.

2. 💰 Pricing & Benchmarks (Full Data)

Use Case: Update your internal database with latest prices and ELO scores. Config:

```json
{
  "dataTypes": ["pricing"],
  "forceRefresh": true
}
```

📈 Historical Data

The actor automatically snapshots data daily:

  • Storage: Key-Value Store
  • Key Format: HISTORY_YYYY_MM_DD
  • Content: Full pricing & status snapshot
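
Since snapshot keys follow the `HISTORY_YYYY_MM_DD` format, a given day's snapshot can be looked up by building the key from a date. A sketch (`historyKey` is a hypothetical helper; reading the record via `apify-client` assumes you know the Actor's Key-Value Store ID):

```javascript
// Build the snapshot key for a given UTC date, matching HISTORY_YYYY_MM_DD.
function historyKey(date) {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, '0');
  const d = String(date.getUTCDate()).padStart(2, '0');
  return `HISTORY_${y}_${m}_${d}`;
}

console.log(historyKey(new Date(Date.UTC(2025, 0, 31)))); // → HISTORY_2025_01_31

// Reading a snapshot (assumes a configured ApifyClient and a known store ID):
// const record = await client.keyValueStore('STORE_ID').getRecord(historyKey(someDate));
```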

🏷️ Tags

llm ai-pricing openai anthropic claude gpt-5 gemini groq mistral cohere deepseek grok benchmarks lmsys elo status-monitor cost-calculator model-comparison


📄 License

ISC

👤 Author

Built by Raihan K.