Website Screenshot — Full Pages, Any Resolution, PNG, No Limits

Website screenshots as PNG/JPG/PDF in 2 min — full-page, desktop + mobile, custom viewport, bulk URL input. No headless-Chrome ops, no rate limits, no login walls. Built for design QA, SEO audits, archives. Custom pipeline — spinov001@gmail.com · blog.spinov.online · t.me/scraping_ai

Website Screenshot Scraper — Full-Page PNG/JPEG Capture with Custom Viewport (No Puppeteer Setup)

Capture YOUR batch of webpage screenshots — full-page, mobile or desktop viewport, PNG or JPEG — to an Apify key-value store in under 30 seconds per URL, with ZERO local browser install or Puppeteer/Playwright boilerplate. Feed in 1 to 1,000 URLs and get back a signed image URL per capture plus metadata (viewport, dimensions, timestamp). No API key juggling, no serverless-browser quota games — pay only for actor-seconds consumed.

Who buys this actor

  • Visual-regression QA engineers running nightly screenshot diffs against staging + prod to catch CSS regressions before users do.
  • Competitive-intel teams archiving weekly snapshots of competitor landing pages (pricing, feature lists, hero copy) for deal-review decks.
  • Content archival / journalism preserving webpage state for takedown resilience (source of truth when page later changes or 404s).
  • Link-preview / OG-image fallback services generating thumbnail cards for social feeds when the upstream page lacks proper og:image tags.
  • Brand/trademark monitoring capturing how your logo/copy is displayed on partner, affiliate, and unauthorized resale sites.
  • Growth/SEO teams tracking the above-the-fold visual of SERP-listed pages (the LCP element captured as an image, not just a Core Web Vitals number).

Why this over the obvious alternatives

Concern you have → how this actor handles it:

  • "Urlbox / ScreenshotAPI / ApiFlash charge $0.005–$0.015 per capture — where's the savings?" → Apify PPR: ~$0.0005–$0.001/capture at actor-second granularity for standard pages. 5–30× cheaper at batch scale, no monthly plan, no per-seat ceiling.
  • "I could just run my own Puppeteer on Lambda — why pay anything?" → Self-hosted Puppeteer needs a Chromium layer (~80MB), GPU shim hacks for full-page, request-timeout tuning, and a retry/rotate-IP strategy. This actor is zero-setup: URL in, PNG URL out.
  • "How does this handle JS-heavy / SPA pages that paint after load?" → Built-in networkidle2 wait + configurable extra delay. For stubborn SPAs (client-rendered dashboards), increase waitForSelector or waitForTimeout in advanced input.
  • "Full-page on infinite-scroll pages — does it grab everything or give up at 20K pixels?" → Auto-scroll pass before capture with lazy-load trigger. Hard ceiling at 60K px height (browser limit). If a page is taller, we cap and flag it in metadata so you don't silently miss content.
  • "Can I get JPEG at quality 80 for thumbnails instead of 4MB PNGs?" → Yes — format: "jpeg" input. PNG is the default for pixel-perfect diffs; JPEG for bandwidth-efficient thumbnails/OG images.
  • "What about authenticated pages or pages behind cookie consent banners?" → Supply cookies via advanced input. Consent banners: we don't auto-dismiss them (legal risk — you must consent legitimately or run without them). For internal dashboards, pair with your auth session cookie.
  • "Where do the images live — am I copying them out to S3 every time?" → Apify key-value store holds them with a signed URL valid for the run retention window (default 14 days). For permanent storage, pipe run output into S3/R2 via an Apify webhook or your own post-run script.

Input

{
  "urls": [
    "https://stripe.com",
    "https://vercel.com/pricing",
    "https://linear.app"
  ],
  "fullPage": true,
  "width": 1440,
  "height": 900,
  "format": "png"
}
  • urls (array, required) — list of 1 to 1,000 URLs per run.
  • fullPage (boolean, default false) — true for entire scroll height; false for viewport-only.
  • width (integer, default 1280, min 320, max 3840) — browser viewport width in px.
  • height (integer, default 720, min 240, max 2160) — browser viewport height in px.
  • format (enum png|jpeg, default png) — image format.
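
The same schema covers mobile-viewport thumbnail runs. A minimal input sketch for a viewport-only JPEG capture at phone dimensions (URL and values are illustrative, all within the documented ranges):

{
  "urls": ["https://stripe.com"],
  "fullPage": false,
  "width": 390,
  "height": 844,
  "format": "jpeg"
}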

Output schema (per URL)

{
  "url": "https://stripe.com",
  "screenshotUrl": "https://api.apify.com/v2/key-value-stores/<STORE>/records/screenshot_0.png",
  "width": 1440,
  "height": 900,
  "fullPage": true,
  "format": "png",
  "byteSize": 428390,
  "pageHeight": 4210,
  "capturedAt": "2026-04-23T02:15:00.000Z",
  "status": "success",
  "loadTimeMs": 1840,
  "httpStatus": 200
}

Batch-failure aware: each URL gets its own object even if some fail. Failed captures carry status: "error" + error: "<reason>" (DNS, timeout, 4xx/5xx, blocked) so your downstream pipeline can retry selectively.
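
A minimal selective-retry sketch built on that status field, assuming the apify-client Python SDK and the actor ID used elsewhere on this page:

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")
ACTOR = "knotless_cadence/website-screenshot-scraper"
urls = ["https://stripe.com", "https://vercel.com/pricing", "https://linear.app"]

run = client.actor(ACTOR).call(run_input={"urls": urls, "fullPage": True})
items = client.dataset(run["defaultDatasetId"]).list_items().items

# Re-run only the URLs whose capture failed, instead of the whole batch.
failed = [i["url"] for i in items if i.get("status") == "error"]
if failed:
    retry_run = client.actor(ACTOR).call(run_input={"urls": failed, "fullPage": True})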

Python copy-paste — nightly visual-regression diff

Capture the same 10 pages twice (staging + prod) and flag any byte-size delta >5% for manual review. A 30-second job that catches 90% of accidental CSS regressions before QA even wakes up.

from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

pages = [
    "/", "/pricing", "/docs", "/blog", "/login",
    "/signup", "/about", "/careers", "/contact", "/privacy",
]

def capture(base_url: str) -> dict[str, dict]:
    run = client.actor("knotless_cadence/website-screenshot-scraper").call(run_input={
        "urls": [base_url + p for p in pages],
        "fullPage": True,
        "width": 1440,
        "height": 900,
        "format": "png",
    })
    items = client.dataset(run["defaultDatasetId"]).list_items().items
    return {i["url"].replace(base_url, ""): i for i in items}

staging = capture("https://staging.example.com")
prod = capture("https://www.example.com")

for path in pages:
    s = staging.get(path, {}).get("byteSize", 0)
    p = prod.get(path, {}).get("byteSize", 0)
    if p and abs(s - p) / p > 0.05:
        print(f"⚠ {path} diff {((s - p) / p) * 100:+.1f}% staging={s}B prod={p}B")
        print(f"  staging: {staging.get(path, {}).get('screenshotUrl', 'missing')}")
        print(f"  prod:    {prod.get(path, {}).get('screenshotUrl', 'missing')}")

Pipe output to Slack webhook for morning standup. No SaaS screenshot-diff tool needed at this price point.
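
One way to wire that up: a sketch assuming the requests package and a Slack incoming-webhook URL of your own (the URL below is a placeholder, not something the actor provides).

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_slack(lines: list[str]) -> None:
    # Slack incoming webhooks accept a plain {"text": ...} payload.
    if lines:
        requests.post(SLACK_WEBHOOK_URL, json={"text": "\n".join(lines)}, timeout=10)

# Collect the warning strings from the diff loop above into a list instead of
# printing them, then call notify_slack(warnings) at the end of the job.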

MCP / LLM-agent usage

If you run an LLM agent that occasionally needs to "see" a webpage (because structured HTML won't capture visual layout issues), this actor slots directly in.

tools = [{
    "name": "capture_webpage",
    "description": "Take a screenshot of a webpage and return the image URL.",
    "input_schema": {
        "type": "object",
        "properties": {
            "url": {"type": "string"},
            "fullPage": {"type": "boolean", "default": False},
        },
        "required": ["url"],
    },
}]

# Tool handler:
def capture_webpage(url: str, fullPage: bool = False):
    run = client.actor("knotless_cadence/website-screenshot-scraper").call(run_input={
        "urls": [url], "fullPage": fullPage, "format": "png",
    })
    return list(client.dataset(run["defaultDatasetId"]).iterate_items())[0]["screenshotUrl"]

Pair with Claude Vision / GPT-4o to have the model describe visual state — useful for accessibility audits, brand-compliance checks, and end-to-end QA that tests "looks right" not just "DOM matches".
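
A minimal pairing sketch, assuming the OpenAI Python SDK; the model name and prompt are illustrative, and the screenshot URL comes from the capture_webpage handler above (the signed URL is publicly reachable during the retention window, so it can be passed straight to a vision model):

from openai import OpenAI

shot_url = capture_webpage("https://stripe.com", fullPage=True)

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe any layout or visual problems on this page."},
            {"type": "image_url", "image_url": {"url": shot_url}},
        ],
    }],
)
print(resp.choices[0].message.content)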

Frequent questions (real ones buyers ask)

1. "Can I capture a specific element instead of the whole page?" Not in this actor (would need CSS-selector cropping). Workaround: capture full page, crop locally with Pillow / sharp using the element's bounding box from a companion DOM query. If there is demand (≥5 paying users asking), element-crop mode will ship.

2. "How do I get screenshots at multiple breakpoints (320px, 768px, 1440px) in one run?" Call the actor 3 times with different width, or script it as a 3-run loop — the overhead per run is ~2 seconds. Native multi-viewport input is on the roadmap for v2.

3. "What about pages behind a login wall?" Advanced input accepts a cookie string (same format as document.cookie). Put your session cookie there and the browser loads as that user. Never share production cookies across teammates — rotate frequently.

4. "Does this bypass Cloudflare / captcha walls?" No. The actor uses standard Chromium + default fingerprint. If the site blocks headless browsers, combine with Apify proxy (residential) — set useApifyProxy: true in advanced input. Still not a captcha bypass (which we don't do on principle — it's against Cloudflare ToS and yours).

5. "Can I schedule this as a nightly cron?" Yes — Apify has native scheduling. Set it to run every 24h, pipe output dataset to your webhook, done. Or trigger on-demand from GitHub Actions cron / n8n / Zapier.

6. "Data retention — how long do screenshots stay?" Default 14 days in the Apify key-value store. For longer retention, enable "unlimited" on the store (Apify Platinum plan), or post-process the run and copy PNGs to your own S3/R2/Backblaze bucket.

Visual & Monitoring Toolkit (companion actors)

Tool → purpose:

  • Website Screenshot Scraper (this) → Capture any page visually
  • Website Uptime Checker → Monitor availability / latency
  • Broken Links Checker → Find 404s on your site
  • PageSpeed Insights Scraper → Lighthouse / Core Web Vitals
  • HTTP Headers Checker → Security headers audit
  • Webpage Text Extractor → Clean article text from HTML
  • URL Expander → Resolve shortlink chains

All 78 scrapers and utilities: apify.com/knotless_cadence.


Part of 78 data-extraction actors by knotless_cadence.

Questions, bug reports, or a missing feature? Email spinov001@gmail.com or open an issue — happy to hear what you'd use next.