Cloud GPU Pricing Aggregator

Scrapes normalized GPU cloud pricing from Runpod and Vast.ai. Returns per-GPU-hour prices with hardware specs and availability across community and secure tiers. Useful for AI training cost comparison, arbitrage monitoring, and GPU price trend analysis.

Pricing: Pay per event
Developer: BowTiedRaccoon (Maintained by Community)
Last modified: 6 days ago

Pulls normalized GPU cloud pricing from Runpod and Vast.ai in a single run. Returns per-GPU-hour prices, hardware specs, and availability across community and secure rental tiers — all in clean JSON, so you can actually compare them without building a spreadsheet by hand.


Cloud GPU Pricing Aggregator Features

  • Queries Runpod GraphQL and Vast.ai REST APIs — no auth, no browser, no proxies required
  • Normalizes per-GPU-hour price across providers so you can compare apples to apples — Runpod quotes per GPU, Vast.ai quotes per machine, the aggregator handles the math
  • Returns both secure and community tiers from Runpod separately, since the price difference is usually significant
  • Captures 40+ GPU types from Runpod (RTX 3070 through H100 SXM, B200, and beyond) and 60+ live marketplace offers from Vast.ai
  • Includes hardware specs: VRAM, vCPU count, system RAM, storage, and interconnect type
  • Timestamps every snapshot so you can build a price-history series by scheduling runs
  • Filters by provider (runpod, vastai) and GPU model substring — run just the H100s if that's all you need
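
The per-GPU-hour normalization mentioned above comes down to one division. Here is a minimal sketch of that math (the function name and rounding precision are illustrative, not the actor's internal code):

```python
def per_gpu_hour(price_per_hour_usd: float, gpu_count: int) -> float:
    """Normalize a whole-machine hourly price to a per-GPU-hour price.

    Runpod already quotes per GPU (gpu_count == 1), so the division is a
    no-op there; Vast.ai quotes per machine, so we divide by GPU count.
    """
    if gpu_count < 1:
        raise ValueError("gpu_count must be >= 1")
    return round(price_per_hour_usd / gpu_count, 4)

# A hypothetical 4x RTX 4090 Vast.ai machine at $1.60/hr:
# per_gpu_hour(1.60, 4) -> 0.4
```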

Who Uses GPU Pricing Data?

  • AI founders and ML engineers — compare cloud GPU costs before committing to a training run budget
  • Cloud cost analysts — track H100/A100 price trends over weeks and months without refreshing a browser
  • Arbitrage-minded users — spot when community cloud prices undercut hyperscaler rates for the same GPU class
  • Infrastructure researchers — build datasets on GPU commodity pricing for market analysis or academic work
  • Startups evaluating providers — pick the right cloud tier based on actual prices, not marketing copy

How Cloud GPU Pricing Aggregator Works

  1. The actor seeds two queries — one for Runpod, one for Vast.ai — and runs them in parallel using a single JSON API crawler
  2. For Runpod, it posts a GraphQL query that returns all available GPU types with secure and community prices; each GPU type generates up to two output records (one per tier)
  3. For Vast.ai, it hits the public bundles endpoint and returns all live marketplace offers with full hardware specs
  4. Records are normalized to a common schema and saved — every field uses the same units, the same naming, and the same price-per-GPU-hour calculation regardless of source
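
Step 4 can be sketched as a mapping from a raw marketplace offer onto the common schema. The raw Vast.ai field names below (num_gpus, gpu_name, dph_total) are assumptions for illustration, not a documented contract:

```python
from datetime import datetime, timezone

def normalize_vastai_offer(offer: dict) -> dict:
    """Map a raw Vast.ai marketplace offer onto the actor's common schema.

    Input field names are hypothetical; the output keys match the
    documented output schema of the actor.
    """
    gpus = offer["num_gpus"]
    total = offer["dph_total"]  # assumed per-machine dollars-per-hour
    return {
        "provider": "vastai",
        "gpu_model": offer["gpu_name"],
        "gpu_count": gpus,
        "tier": "community",
        "price_per_hour_usd": total,
        "price_per_gpu_hour_usd": round(total / gpus, 4),
        "snapshotted_at": datetime.now(timezone.utc).isoformat(),
    }
```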

Input

{
  "maxItems": 15,
  "providers": [],
  "gpuFilter": ""
}
Field     | Type    | Default | Description
maxItems  | integer | 15      | Maximum records to return across all providers. Set to 0 for no limit.
providers | array   | []      | Providers to query. Options: runpod, vastai. Empty array queries all.
gpuFilter | string  | ""      | Substring filter on GPU model name (case-insensitive). Use "H100" to filter to H100 variants only.

Runpod H100s only

{
  "providers": ["runpod"],
  "gpuFilter": "H100",
  "maxItems": 0
}

Vast.ai RTX 4090 spot market

{
  "providers": ["vastai"],
  "gpuFilter": "RTX 4090",
  "maxItems": 50
}
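
Applied locally, the three inputs behave roughly like this. This is a sketch of the filter semantics described above, not the actor's actual source:

```python
def apply_filters(records, providers=(), gpu_filter="", max_items=15):
    """Replicate the input semantics: a provider whitelist, a
    case-insensitive substring match on gpu_model, and a record
    cap where 0 means unlimited."""
    out = []
    for r in records:
        if providers and r["provider"] not in providers:
            continue
        if gpu_filter and gpu_filter.lower() not in r["gpu_model"].lower():
            continue
        out.append(r)
    return out if max_items == 0 else out[:max_items]
```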

Cloud GPU Pricing Aggregator Output Fields

{
  "provider": "runpod",
  "sku": "runpod-secure-NVIDIA H100 80GB HBM3",
  "gpu_model": "H100 80GB HBM3",
  "gpu_count": 1,
  "vram_gb": 80,
  "vcpu": 0,
  "ram_gb": 0,
  "storage_gb": 0,
  "region": "global",
  "tier": "secure",
  "price_per_hour_usd": 2.99,
  "price_per_gpu_hour_usd": 2.99,
  "availability": "available",
  "min_rental_duration_hours": 0,
  "interconnect": "none",
  "snapshotted_at": "2026-05-09T08:22:57.613Z",
  "source_url": "https://api.runpod.io/graphql"
}
Field                     | Type   | Description
provider                  | string | Source provider: runpod or vastai
sku                       | string | Provider-specific offer identifier
gpu_model                 | string | Normalized GPU model name (e.g., H100 80GB HBM3, RTX 4090)
gpu_count                 | number | Number of GPUs in this SKU
vram_gb                   | number | Total VRAM in GB across all GPUs in the SKU
vcpu                      | number | Virtual CPU count (Vast.ai only; 0 for Runpod list prices)
ram_gb                    | number | System RAM in GB (Vast.ai only; 0 for Runpod list prices)
storage_gb                | number | Local storage in GB (Vast.ai only)
region                    | string | Geographic location: global for Runpod list prices, geolocation string for Vast.ai
tier                      | string | Rental tier: secure or community (Runpod); community (Vast.ai marketplace)
price_per_hour_usd        | number | Total cost per hour in USD for the full SKU
price_per_gpu_hour_usd    | number | Cost per GPU per hour — the normalized comparison metric
availability              | string | Offer status: available
min_rental_duration_hours | number | Minimum rental commitment in hours (0 = no minimum)
interconnect              | string | GPU interconnect type: nvlink, pcie, or none
snapshotted_at            | string | ISO 8601 timestamp when the record was fetched
source_url                | string | API endpoint URL that provided this data
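
Since price_per_gpu_hour_usd is the comparison metric, a typical post-processing step is to reduce a run's output to the cheapest offer per GPU model. A minimal sketch:

```python
def cheapest_per_model(records):
    """Reduce a run's output records to the lowest per-GPU-hour
    price seen for each GPU model."""
    best = {}
    for r in records:
        model = r["gpu_model"]
        price = r["price_per_gpu_hour_usd"]
        if model not in best or price < best[model]:
            best[model] = price
    return best
```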

🔍 FAQ

How do I get just H100 prices across all providers?

Cloud GPU Pricing Aggregator accepts a gpuFilter input. Set it to "H100" and leave providers empty — you'll get every H100 variant from both Runpod and Vast.ai in one run.

How much does the Cloud GPU Pricing Aggregator cost to run?

Each run costs $0.10 to start plus $0.001 per record. A full run across both providers returns roughly 100–120 records — total cost around $0.21, which is considerably less than the GPU time you're trying to compare.
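
The pricing above works out to a simple linear formula, sketched here for estimating a run's cost before launching it:

```python
def run_cost_usd(records: int, start_fee: float = 0.10,
                 per_record: float = 0.001) -> float:
    """Estimated run cost: flat start fee plus a per-record charge."""
    return round(start_fee + per_record * records, 3)

# A full ~110-record run across both providers:
# run_cost_usd(110) -> 0.21
```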

Does the Cloud GPU Pricing Aggregator need proxies or a browser?

No proxies. No browser. Both Runpod and Vast.ai expose public APIs with no authentication required — this actor is a straight HTTP client hitting JSON endpoints.

Can I track GPU price history over time?

Yes. Schedule it with Apify's built-in scheduler and each run adds a new snapshot with snapshotted_at timestamps. Feed the output into a spreadsheet or time-series store and you have a price-history dataset automatically.
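
Building that series from repeated runs is a small reduction over the snapshots. A sketch, assuming each scheduled run's output is kept as one list of records:

```python
def price_series(snapshots, gpu_model):
    """Build a (timestamp, min per-GPU-hour price) series for one GPU
    model from repeated runs; `snapshots` is a list of runs, each run
    being the list of output records from that run."""
    series = []
    for records in snapshots:
        prices = [r["price_per_gpu_hour_usd"] for r in records
                  if r["gpu_model"] == gpu_model]
        if prices:
            series.append((records[0]["snapshotted_at"], min(prices)))
    return series
```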

Why are Runpod records missing vCPU and RAM values?

Runpod's public GPU types API returns per-GPU-hour prices but doesn't include host machine specs in the same response. The vcpu, ram_gb, and storage_gb fields are 0 for Runpod records. Vast.ai machine offers include the full host spec.


Need More Features?

Need additional providers, more granular Runpod pod data, or different output formats? File an issue or get in touch.

Why Use Cloud GPU Pricing Aggregator?

  • No API keys or proxies — both sources are fully public, so the actor runs anywhere without credentials or extra setup
  • Normalized per-GPU-hour pricing — the only comparison metric that matters across providers is calculated for every record, which is more useful than raw pricing that requires mental arithmetic to interpret
  • Timestamp-first design — every record carries a snapshotted_at field, making it trivial to build price-trend datasets by scheduling regular runs