Image Transform — WebP / JPEG / PNG Batch Converter

Pricing

from $2.00 / 1,000 images transformed (XS — up to 500 KB)


Shrink images to WebP, JPEG, or PNG with lossy or lossless encoding, web-ready resize, EXIF stripping. Drop in URLs, upload via KVS, base64, or pipe a scraper's dataset. Photos go ~85% smaller, pages load faster. Tiered pricing $0.002–$0.060/image, $0.005 typical. Sharp-powered, no subscription.


Rating: 0.0 (0)

Developer: Marielise (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 3 days ago


Image Transform — Batch WebP / JPEG / PNG Converter

Drop in image URLs (or chain a scraper's dataset). Get back compressed, web-ready files in WebP, JPEG, or PNG. Powered by sharp. Tiered PPE pricing from $0.002 (XS) to $0.060 (XXL) per image — you only pay for what your images actually weigh.

💸 Tiered pricing by source size: XS $0.002 → XXL $0.060. A typical phone photo (M tier, 2-10 MB) is $0.005. Logos/icons (XS) are $0.002. Failures, broken URLs, HEIC, and oversize (>100 MB) images are NEVER charged. See full table in Pricing.

What it does

  • Accepts images from public URLs, uploaded files (Apify KVS), inline base64, or an upstream dataset — all four can be mixed in one run
  • Re-encodes to WebP (recommended), JPEG (universal), or PNG (lossless)
  • Web-ready resize with aspect-preserving cap (inside, cover, or contain fit)
  • Strips EXIF / ICC / XMP metadata for smaller, privacy-safe output (EXIF orientation honored first)
  • Returns a metadata row per image to your dataset, plus the encoded file in the run's key-value store

Best for

  • E-commerce image prep — bulk-convert product photos to WebP at quality 80, capped at 2000px
  • Blog / CMS optimization — shrink a folder of giant 4000px CMS uploads to 1600px WebP without touching the originals
  • Post-scrape pipelines — pipe a scraper's image-URL dataset directly into this actor (no glue code)
  • Asset migration — convert legacy JPEG / PNG archives to modern formats with predictable bytes-saved metrics
  • Archival masters — produce lossless WebP from PNG to shave 20-40% off storage with zero quality loss

How to feed it images

Pick whichever source you have. All four can be combined in a single run — they merge into one queue and process together.

1. Public URL (easiest, works with anything online)

Paste any direct image link. Examples that all work as-is:

| Service | URL pattern |
| --- | --- |
| Your own CDN / S3 (public) | https://cdn.example.com/photo.jpg |
| Cloudflare R2 (public bucket) | https://pub-xxxxx.r2.dev/photo.jpg |
| AWS S3 (public) | https://bucket.s3.amazonaws.com/photo.jpg |
| Google Drive ("Anyone with link") | https://drive.google.com/uc?id=FILE_ID&export=download |
| Dropbox shared link | take the share link and replace ?dl=0 with ?dl=1 |
| Imgur direct | https://i.imgur.com/xxxxxx.jpg |
| GitHub raw | https://raw.githubusercontent.com/user/repo/main/img.png |

→ Set imageUrls to a list of { "url": "..." } entries.
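The share-link rules in the table above are easy to automate. A minimal Python sketch (the helper names are mine, not part of the actor):

```python
import re


def dropbox_direct(share_url: str) -> str:
    # Dropbox share links download directly once ?dl=0 becomes ?dl=1
    return re.sub(r"([?&])dl=0\b", r"\1dl=1", share_url)


def drive_direct(file_id: str) -> str:
    # "Anyone with link" Drive files are fetchable via the uc endpoint
    return f"https://drive.google.com/uc?id={file_id}&export=download"


def to_image_urls(urls):
    # Wrap plain URL strings into the { "url": ... } entries imageUrls expects
    return [{"url": u} for u in urls]
```

Feed the result of to_image_urls straight into the actor's imageUrls field.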

2. Upload local files via Apify KVS (no public URL needed)

For images you can't or don't want to host publicly:

  1. In Apify Console, open any Key-Value Store (Storage → Key-value stores → "Create new" or use the run's input store).
  2. Drag your files onto the store. Each file becomes a record with the filename as the key.
  3. Note the store's ID or name, and the list of keys (filenames you uploaded).
  4. In this actor's input:
    • inputKvsId: the store ID (leave blank to use the run's default INPUT store)
    • inputKvsKeys: ["photo1.jpg", "photo2.jpg", "logo.png"]

The actor reads each key as binary and feeds it through the same pipeline.

3. Inline base64 (tiny tests + API integrations)

For one-off tests or programmatic API calls where the image is already in memory:

{
  "base64Images": [
    { "name": "screenshot.png", "base64": "iVBORw0KGgo..." }
  ]
}

name controls the output filename. data:image/...;base64, prefixes are stripped automatically. Don't use this for batches above ~10 images — JSON gets huge.
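For programmatic callers, shaping a base64Images payload is a few lines of stdlib Python (function names are illustrative; the prefix-stripping mirrors the documented behavior):

```python
import base64
import re


def base64_entry(name: str, data: bytes) -> dict:
    # One base64Images element; name controls the output filename
    return {"name": name, "base64": base64.b64encode(data).decode("ascii")}


def strip_data_uri(b64: str) -> str:
    # Mirrors the actor's documented handling of data:image/...;base64, prefixes
    return re.sub(r"^data:image/[\w.+-]+;base64,", "", b64)
```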

4. Chain from another actor (dataset)

Already running a scraper that emits image URLs? Point at its dataset:

  • inputDatasetId: the upstream dataset ID
  • urlField: which field holds the URL (default url)

The actor will pull every item, read the URL field, and queue it. Great for "scrape product → optimize images" pipelines.
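Conceptually the chaining step reduces to this (a sketch of the documented behavior, not the actor's source):

```python
def extract_urls(items, url_field="url"):
    # Read url_field off every dataset item; items without it are skipped
    return [{"url": item[url_field]} for item in items if item.get(url_field)]
```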

What this actor cannot accept

  • ❌ Local file paths from your machine (the actor runs on Apify cloud — no access to your disk). Upload via KVS instead.
  • ❌ HEIC files (Linux can't decode Apple's HEVC codec — pre-convert to JPEG first using macOS sips or any HEIC→JPEG tool).
  • ❌ OAuth-gated cloud storage (Drive private folders, Dropbox without share link). Use a public link instead.

Input fields

All fields are optional (sane defaults). Provide images via at least one of: imageUrls, inputKvsKeys, base64Images, or inputDatasetId.

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| imageUrls | Array<{url}> | | Public image URLs (any service: R2, S3, Drive direct, Dropbox ?dl=1, etc.). JPEG / PNG / WebP / GIF / TIFF / AVIF supported. HEIC is skipped. |
| inputKvsId | string | run's default | Apify KVS holding uploaded source files. Leave blank to use the run's INPUT store. |
| inputKvsKeys | string[] | | Keys (filenames) to read from inputKvsId. |
| base64Images | Array<{name,base64}> | | Inline base64 images for tiny tests / API calls. |
| inputDatasetId | string | | Pull URLs from an existing Apify dataset. Use to chain after a scraper. |
| urlField | string | url | Which field on each dataset item holds the image URL. |
| format | webp / jpeg / png | webp | Output format. PNG is always lossless, JPEG always lossy; WebP honors mode. |
| mode | lossy / lossless | lossy | WebP only. Ignored for JPEG / PNG. |
| quality | 1–100 | 82 | Encoder quality (JPEG always; WebP lossy only; PNG ignored). 75–85 is the sweet spot for web photos. |
| maxWidth | int | 2000 | Cap output width. 0 = no limit. Aspect ratio preserved (no upscaling). |
| maxHeight | int | 2000 | Cap output height. 0 = no limit. |
| fit | inside / cover / contain | inside | How the image fits the box. inside = preserve aspect, no crop (recommended). |
| stripMetadata | bool | true | Drop EXIF / ICC / XMP for smaller, privacy-safe files. Orientation honored first. |
| effort | 0–9 | 4 | Encoder effort. Higher = smaller files, slower encode. |
| outputKvsKeyTemplate | string | {slug}.{format} | Filename template. Tokens: {slug} {format} {index} {originalName}. |
| slugifyFilenames | bool | true | Slug-clean source filenames before substitution. |
| concurrency | 0–32 | 0 (auto) | Parallel images. 0 matches CPU count. sharp is CPU-bound. |
| useApifyProxy | bool | false | Route source-URL fetches through the Apify Proxy datacenter group. |

Tokens for outputKvsKeyTemplate

| Token | Example value | Notes |
| --- | --- | --- |
| {slug} | photo_1015 | Slugified source filename. NFD-normalized, diacritic-free, [^A-Za-z0-9._-] replaced with _. |
| {originalName} | Photo 1015 (final) | Raw source filename, no slugification. Useful only if your source URLs are already file-safe. |
| {format} | webp | The output format extension. |
| {index} | 001 | Zero-padded position in the input list. Width matches input count. |
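Under those token rules, filename rendering can be approximated as follows. This is my reconstruction from the table; the actor may differ in details the table leaves open (e.g. whether {slug} is also lowercased):

```python
import re
import unicodedata


def slugify(name: str) -> str:
    # NFD-normalize, drop diacritics, replace anything outside [A-Za-z0-9._-] with _
    ascii_name = unicodedata.normalize("NFD", name).encode("ascii", "ignore").decode()
    return re.sub(r"[^A-Za-z0-9._-]", "_", ascii_name)


def render_key(template: str, original_name: str, fmt: str, index: int, total: int) -> str:
    # {index} is zero-padded to the width of the input count
    stem = original_name.rsplit(".", 1)[0]
    return (template
            .replace("{slug}", slugify(stem))
            .replace("{originalName}", stem)
            .replace("{format}", fmt)
            .replace("{index}", str(index).zfill(len(str(total)))))
```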

Output

Per-image dataset record

{
  "kvsKey": "photo_1015.webp",
  "kvsUrl": "https://api.apify.com/v2/key-value-stores/<id>/records/photo_1015.webp",
  "format": "webp",
  "mode": "lossy",
  "width": 2000,
  "height": 1333,
  "bytes": 184273,
  "originalBytes": 1842739,
  "originalFormat": "jpeg",
  "originalWidth": 4000,
  "originalHeight": 2666,
  "compressionRatio": 0.1,
  "compressionPct": 10,
  "sourceUrl": "https://picsum.photos/id/1015/4000/2666.jpg",
  "tier": "image-s",
  "chargedUsd": 0.003,
  "error": null,
  "skipped": false
}
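Reading the record: compressionRatio is output bytes over original bytes, so 0.1 means the file shrank to 10% of its source size. A sketch of that derivation (the exact rounding is my assumption):

```python
def compression_fields(original_bytes: int, out_bytes: int) -> dict:
    # output/original: a ratio of 0.1 means the output is 10% of the source size
    ratio = round(out_bytes / original_bytes, 2)
    return {"compressionRatio": ratio, "compressionPct": round(ratio * 100)}
```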

Encoded files

Each successful transform writes the encoded buffer to the run's default key-value store. Download via kvsUrl (one click from the dataset table view) or programmatically through the Apify API. KVS retention follows your account's default policy (named stores never expire).

Failures

Failed rows still appear in the dataset, with error populated and skipped: false. HEIC inputs appear with error: "HEIC not supported on Linux" and skipped: true. Neither pattern is charged.

Lossy vs lossless

⚠️ Default is lossy (WebP at quality 82) — best for photos. Switch to lossless if you're converting logos, icons, screenshots, line art, or UI assets — lossy compression makes hard edges fuzzy and can put visible ringing around text. For pure photographs, lossy is essentially indistinguishable from the source and 5–10× smaller. When in doubt: photos = lossy, anything with crisp edges = lossless. PNG is forced lossless regardless of mode, so picking format: png is a one-flag way to be safe.

| Goal | Pick | Why |
| --- | --- | --- |
| Smallest possible files for the web (photos) | webp + lossy + quality: 78–85 | 5–10× smaller than source JPEG, near-imperceptible loss |
| Pixel-perfect compression for masters | webp + lossless | 20–40% smaller than PNG, zero loss |
| Universal compatibility (old browsers, email) | jpeg + quality: 82 | Always lossy, decoded everywhere |
| Logos, icons, screenshots, transparency, line art | png | Always lossless, supports alpha |
| UI assets / brand assets you'll re-edit later | webp + lossless OR png | Re-encoding lossy compounds quality loss; lossless is stable forever |

Format compatibility matrix

| Format | Lossy supported? | Lossless supported? | Transparency? |
| --- | --- | --- | --- |
| WebP | ✅ (default) | ✅ (set mode: "lossless") | ✅ |
| JPEG | ✅ (always — mode ignored) | ❌ | ❌ |
| PNG | ❌ | ✅ (always — mode ignored) | ✅ |

Examples

1. Bulk web optimization (default)

{
  "imageUrls": [
    { "url": "https://example.com/hero.jpg" },
    { "url": "https://example.com/banner.png" }
  ],
  "format": "webp",
  "mode": "lossy",
  "quality": 82,
  "maxWidth": 2000,
  "maxHeight": 2000,
  "fit": "inside",
  "stripMetadata": true
}

Two source images → two WebPs in your KVS, capped at 2000px, no metadata, ~85% smaller on average.

2. Archival masters (lossless WebP)

{
  "imageUrls": [{ "url": "https://example.com/master.png" }],
  "format": "webp",
  "mode": "lossless",
  "maxWidth": 0,
  "maxHeight": 0,
  "stripMetadata": false,
  "outputKvsKeyTemplate": "{originalName}.{format}"
}

No resize, no metadata stripping, lossless WebP. Typical 25-35% smaller than the source PNG.

3. Format conversion to JPEG

{
  "imageUrls": [{ "url": "https://example.com/photo.png" }],
  "format": "jpeg",
  "quality": 90,
  "maxWidth": 2400,
  "fit": "inside"
}

PNG → JPEG, no transparency (PNG alpha is composited onto white).

4. Chain after a scraper (dataset input)

{
  "inputDatasetId": "<dataset-id-from-upstream-scraper>",
  "urlField": "imageUrl",
  "format": "webp",
  "quality": 80,
  "maxWidth": 1600,
  "concurrency": 8,
  "useApifyProxy": true
}

Reads imageUrl from each item in the upstream dataset, fetches via Apify Proxy, transforms, and pushes a metadata row + KVS file per image. Drop this directly into your scraping schedule.

HEIC limitation

HEIC / HEIF inputs are not supported on Linux (sharp does not ship libheif support in its prebuilt linux-x64 binaries — adding it would require a custom build chain that breaks on the Apify base image).

When the actor sees a HEIC URL or image/heic content type, it pushes a row with error: "HEIC not supported on Linux" and skipped: true, and does not charge the PPE event.

Workaround: pre-convert HEIC sources to JPEG before submission. Local options:

  • macOS: sips -s format jpeg input.heic --out output.jpg
  • Linux: heif-convert input.heic output.jpg (requires libheif-examples)
  • Web: any HEIC→JPEG service / Cloud Convert API

Then submit the converted JPEGs to this actor.

Pricing

Tiered Pay-Per-Event by source byte size. You pay only for successfully transformed images, priced according to how heavy the original file was — small images cost less, big images cost more.

| Tier | Source size | Price per image | What it covers | Apify event |
| --- | --- | --- | --- | --- |
| XS | ≤ 500 KB | $0.002 | Icons, favicons, simple logos, thumbnails | image-xs |
| S | 500 KB – 2 MB | $0.003 | Web thumbnails, screenshots, small photos | image-s |
| M | 2 – 10 MB | $0.005 | Phone photos, product shots, web hero images (default workhorse) | image-m |
| L | 10 – 25 MB | $0.012 | DSLR JPEG, drone photos, large uploads | image-l |
| XL | 25 – 50 MB | $0.025 | High-res photography, archive jobs | image-xl |
| XXL | 50 – 100 MB | $0.060 | Huge originals near platform limit | image-xxl |
| reject | > 100 MB | | Returns error in dataset row, NOT charged. Pre-resize and retry. | |

The dataset row for each successful image includes the tier and chargedUsd fields so you can audit exactly what each image cost.
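Auditing a run from its dataset rows takes only a few lines (stdlib sketch; field names per the record schema above):

```python
from collections import Counter


def audit_charges(rows):
    # Per-tier counts and total spend; failed/skipped rows carry no chargedUsd
    charged = [r for r in rows if r.get("chargedUsd")]
    counts = Counter(r["tier"] for r in charged)
    total = round(sum(r["chargedUsd"] for r in charged), 6)
    return counts, total
```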

Quick cost estimates

| Run | Mix | You pay |
| --- | --- | --- |
| 100 phone photos (3 MB each) | all M | $0.50 |
| 1,000 phone photos | all M | $5.00 |
| 1,000 e-commerce thumbnails (200 KB) | all XS | $2.00 |
| 1,000 logo conversions (50 KB) | all XS | $2.00 |
| 100 DSLR photos (15 MB) | all L | $1.20 |
| Mixed: 700 M + 200 S + 100 XS | typical blog | $4.30 |
| 80,000 archive migration (~3 MB avg) | mostly M | ~$400 |
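Those estimates follow directly from the tier table. A back-of-envelope estimator (assuming 1 KB = 1024 bytes, which the actor may or may not use for tier boundaries):

```python
# (tier cap in bytes, price per image) taken straight from the pricing table
TIERS = [
    (500 * 1024, 0.002),      # XS
    (2 * 1024**2, 0.003),     # S
    (10 * 1024**2, 0.005),    # M
    (25 * 1024**2, 0.012),    # L
    (50 * 1024**2, 0.025),    # XL
    (100 * 1024**2, 0.060),   # XXL
]


def price_for(size_bytes: int):
    # None = rejected (> 100 MB), which is never charged
    for cap, price in TIERS:
        if size_bytes <= cap:
            return price
    return None


def estimate(sizes):
    # Total batch cost, skipping rejected oversize files
    return round(sum(p for p in map(price_for, sizes) if p is not None), 3)
```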

What is NOT charged

  • Failed fetches (404, timeout, broken host) → free
  • Decode errors (corrupt files, unsupported format) → free
  • HEIC inputs (skipped row) → free
  • Oversize inputs > 100 MB (rejected with error) → free
  • Empty / aborted runs → free

You only pay for images that successfully reach your key-value store.

Why tiered?

Encode cost scales with image size. A 100 MB scan needs 30× more compute than a 500 KB logo, and uses 20× more memory. Flat pricing would either rip off small-image users or lose money on huge ones. Tiered pricing keeps it fair: pay-for-what-you-use.

There is no subscription, no minimum, no per-run fee. Mixed-size batches are charged per-image at the appropriate tier — the run summary log breaks down exactly what was charged at which tier.

FAQ

Why WebP by default? Best size/quality trade-off for the modern web. Supported by every browser shipped after ~2020, ~25–35% smaller than equivalent JPEG, ~20–40% smaller than equivalent PNG. If you need universal compatibility (email clients, ancient browsers), pick JPEG.

Can I keep the originals? Yes — the actor only fetches; it never writes back to the source URL. Your originals are untouched. The transformed copies live in this run's key-value store, downloadable individually or via the Apify API.

Does it strip my watermark? No. Pixel content is preserved; only the EXIF / ICC / XMP metadata blocks are stripped (when stripMetadata: true). A watermark burned into the pixels stays.

How big can input files be? 100 MB hard ceiling. Files above 100 MB are rejected with a clear error in the dataset row (and NOT charged) to protect your run from out-of-memory crashes that could fail in-flight images alongside the giant one. The actor's memory is configured 1024–8192 MB, which comfortably handles XXL inputs up to the 100 MB cap. For larger inputs (gigapixel maps, archive TIFFs), pre-resize locally before submitting.

Is concurrency tuned per memory tier? Default concurrency: 0 matches CPU count, which Apify scales 1 core per 4 GB. So at 4096 MB you get 1 core → concurrency 1; at 16384 MB you get 4 cores → concurrency 4. Override manually if you need more throughput on a smaller tier.
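That rule of thumb can be written down as a tiny helper (a sketch; the 1-core-per-4-GB scaling is the FAQ's approximation, not a guaranteed contract):

```python
def effective_concurrency(memory_mb: int, concurrency: int = 0) -> int:
    # concurrency=0 means "match CPU count"; Apify allocates roughly 1 core per 4 GB
    if concurrency > 0:
        return concurrency
    return max(1, memory_mb // 4096)
```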

Does it handle animated GIF or animated WebP? The first frame only. sharp's animation support is incomplete; multi-frame round-tripping is out of scope for this actor. If you need full animation preservation, run sharp directly with animated: true in custom code.

Does it support AVIF? AVIF is supported as input (decoded transparently). AVIF as output is intentionally not exposed yet — encode times are 5–20× WebP for marginal size wins, which doesn't fit this actor's price point. If demand picks up we'll add it as an opt-in format.

Changelog

v1.0.0 — Launch

  • WebP, JPEG, PNG output
  • Lossy / lossless mode for WebP
  • Web-ready resize with inside / cover / contain fit
  • EXIF / ICC / XMP metadata stripping
  • Four input sources: direct URLs, Apify KVS uploads, inline base64, upstream dataset
  • Apify Proxy support for source fetches
  • Configurable concurrency (auto-matches CPU count)
  • 6-tier PPE pricing by source byte size (XS $0.002 → XXL $0.060)
  • 100 MB hard ceiling — oversize sources rejected without charge
  • HEIC inputs gracefully skipped (not charged)