LLM Benchmark Aggregator
Pricing: from $0.75 / 1,000 results
Scrape LLM benchmark sites (MMLU, HumanEval, MATH). Aggregate scores across models for comparison tables.
You can access the LLM Benchmark Aggregator programmatically from your own applications by using the Apify API. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations settings in Apify Console.
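As a minimal sketch of programmatic access, the snippet below builds a request against the Apify API v2 `run-sync-get-dataset-items` endpoint, which runs an Actor and returns its dataset items in a single call. The Actor path uses Apify's `username~actor-name` format; the input fields shown are illustrative assumptions, not this Actor's documented input schema.

```python
# Sketch: run the Actor via the Apify API v2 and fetch its results.
# Assumes only the Python standard library; swap in your HTTP client of choice.
import json
import urllib.request

APIFY_BASE = "https://api.apify.com/v2"
ACTOR_ID = "consummate_mandala~llm-benchmark-aggregator"  # user~actor format


def build_run_request(token: str, actor_input: dict) -> urllib.request.Request:
    """Build the POST request that runs the Actor and returns its dataset items."""
    url = f"{APIFY_BASE}/acts/{ACTOR_ID}/run-sync-get-dataset-items?token={token}"
    body = json.dumps(actor_input).encode("utf-8")
    # Passing `data` makes urllib issue a POST request.
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )


# Usage (requires a valid API token and network access):
#   req = build_run_request(
#       "<YOUR_API_TOKEN>",
#       {"benchmarks": ["MMLU", "HumanEval", "MATH"]},  # hypothetical input fields
#   )
#   with urllib.request.urlopen(req) as resp:
#       results = json.load(resp)
```

The synchronous endpoint is convenient for short runs; for long-running scrapes you would instead start a run with `POST /v2/acts/{actorId}/runs` and poll for completion before reading the dataset.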
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.apify.com/?tools=consummate_mandala/llm-benchmark-aggregator",
        "--header",
        "Authorization: Bearer <YOUR_API_TOKEN>"
      ]
    }
  }
}

Get a ready-to-use configuration for your MCP client with the LLM Benchmark Aggregator - AI Model Rankings Actor preconfigured at mcp.apify.com?tools=consummate_mandala/llm-benchmark-aggregator.
You can connect to the Apify MCP Server using clients like Tester MCP Client, or any other MCP client of your choice.
If you want to learn more about our Apify MCP implementation, check out our MCP documentation. To learn more about the Model Context Protocol in general, refer to the official MCP documentation or read our blog post.