Reddit Bulk Scrape 10000 IDs V2 — Posts, Comments, Subs, Users

Pricing

from $1.99 / 1,000 results

Bulk-scrape Reddit posts, comments, subreddits, and users in a single call. Pick one of 5 endpoints and paste up to 10000 inputs — IDs, stripped IDs, URLs, or usernames (depending on endpoint). Returns full GQL metadata as one dataset record per item. No Reddit account or proxy required.

Developer

Red Crawler (Maintained by Community)

Reddit Bulk Scrape V2

Endpoints · Auth · Proxy · Pricing

Hydrate large lists of Reddit IDs in a single run — posts, comments, subreddits, and users. 5 bulk-by-ID endpoints in one actor. No Reddit account, OAuth, or proxy required.

Paste up to 10000 IDs / usernames per run and get one full record per item back in the dataset.

Need feeds, comment trees, or single-record lookups? They live in the companion actor Reddit Scraper V2 — 11 endpoints covering post comments, profile feeds, subreddit feeds, and detailed comment lookups.


Endpoints at a glance

| # | Endpoint | Input | Cap per run | Best for |
|---|----------|-------|-------------|----------|
| 1 | Bulk Posts by ID | post IDs (raw / t3_ / URLs) | 10000 | post-list enrichment, hydrating stored IDs |
| 2 | Bulk Comments by ID | comment IDs (raw / t1_ / URLs) | 10000 | comment-list hydration, archival pipelines |
| 3 | Bulk Communities by ID | subreddit IDs (stripped or t5_) | 10000 | community-list enrichment by ID |
| 4 | Bulk Profiles by ID | user IDs (stripped or t2_) | 10000 | user-list enrichment by ID |
| 5 | Bulk Profiles by Name | usernames / u/name / profile URLs | 10000 | user-list enrichment by username |

Inputs accept the most-permissive format Reddit uses for each entity:

| Entity | Accepted formats |
|--------|------------------|
| post | full URL · prefixed t3_1s4a4j6 · stripped ID 1s4a4j6 |
| comment | full URL · prefixed t1_lwbnv0t · stripped ID lwbnv0t |
| subreddit (by ID) | prefixed t5_2qh1i · stripped ID 2qh1i |
| user (by ID) | prefixed t2_1w72 · stripped ID 1w72 |
| user (by name) | username spez · prefixed u/spez · profile URL https://reddit.com/user/spez |

Separate inputs with commas or newlines — both work. Mix prefixed and stripped freely; duplicates are removed automatically.
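
The separator and dedup rules above can be sketched as a small client-side pre-processor. This is a hypothetical helper for tidying your own lists before pasting them in, not part of the actor; note it dedupes at the string level only, while the actor also treats prefixed and stripped forms of the same ID as duplicates.

```python
import re

def normalize_inputs(raw: str) -> list:
    """Split on commas or newlines, trim whitespace, drop exact duplicates (order kept)."""
    items = [s.strip() for s in re.split(r"[,\n]+", raw) if s.strip()]
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(normalize_inputs("t3_1s4a4j6, lwbnv0t\n t3_1s4a4j6,,2qh1i"))
# ['t3_1s4a4j6', 'lwbnv0t', '2qh1i']
```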


What you can fetch

1. Bulk Posts by ID

Hydrate a list of post IDs to full post records in one call.

Inputs

| Field | Notes |
|-------|-------|
| bulk_posts_ids | Comma- or newline-separated post inputs. Up to 10000. |

Accepted formats — full IDs (t3_1s4a4j6), stripped IDs (1s4a4j6), and full URLs (https://www.reddit.com/r/Wordpress/comments/1s4a4j6/). Mix freely.

Returns per post — title, body, score, comment count, awards, flair, media (images / video / gallery), all post flags, subreddit, author, created timestamp.

Use it when — you have a list of post IDs (from your DB, a previous scrape, or a CSV) and want full post payloads back in one run.
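
A minimal sketch of driving this endpoint from the apify-client Python package. The bulk_posts_ids field name comes from the table above; the endpoint-selector key (what_to_fetch here) and the actor ID placeholder are assumptions, so check the actor's input schema before running.

```python
# pip install apify-client
# from apify_client import ApifyClient

run_input = {
    "what_to_fetch": "bulk_posts_by_id",  # assumed selector key -- verify in the input schema
    # Mixed formats are fine: prefixed IDs, stripped IDs, full URLs.
    "bulk_posts_ids": "t3_1s4a4j6, 1s4a4j6, https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
}

# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("<username>/<actor-name>").call(run_input=run_input)
# for record in client.dataset(run["defaultDatasetId"]).iterate_items():
#     print(record["id"], record.get("title"))
```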


2. Bulk Comments by ID

Hydrate a list of comment IDs to full comment records in one call.

Inputs

| Field | Notes |
|-------|-------|
| bulk_comments_ids | Comma- or newline-separated comment inputs. Up to 10000. |

Accepted formats — full IDs (t1_lwbnv0t), stripped IDs (lwbnv0t), and full URLs (https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/). Mix freely.

Returns per comment — body (markdown + HTML), score, author, depth, all comment flags, parent post / parent comment IDs, awards, created + edited timestamps, permalink.

Use it when — comment-list hydration, archival pipelines, refreshing a stored set of comment IDs.
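
If your stored data holds full permalinks rather than IDs, both the post ID and the comment ID can be recovered from the URL shape shown above. A hypothetical helper, assuming the /comments/&lt;post&gt;/&lt;segment&gt;/&lt;comment&gt; path layout:

```python
import re

# Matches .../comments/<post_id>/<slug-or-"comment">/<comment_id>...
PERMALINK = re.compile(r"/comments/(\w+)/[^/]+/(\w+)")

def split_permalink(url):
    """Return (post_id, comment_id) for a comment URL, or None if it doesn't match."""
    m = PERMALINK.search(url)
    return (m.group(1), m.group(2)) if m else None

print(split_permalink(
    "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/"
))  # ('1s4a4j6', 'lwbnv0t')
```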


3. Bulk Communities by ID

Hydrate a list of subreddit t5_ IDs to full community records.

Inputs

| Field | Notes |
|-------|-------|
| bulk_communities_ids | Comma- or newline-separated subreddit IDs. Up to 10000. |

Accepted formats — full IDs (t5_2qh1i) and stripped IDs (2qh1i). Mix prefixed and stripped freely.

ID-only endpoint. Reddit's bulk lookup for communities works by ID only; there is no by-name variant of the operation. To look up subreddits by name (AskReddit), r/name, or URL, use the V1 actor Reddit Bulk Scrape.

Returns per subreddit — subscriber count, public + full description, theme (banner, icon, colors), allowed submission types, NSFW flag, type (public / private / restricted), created timestamp.

Use it when — community-list enrichment, sidebar / theme audits, hydrating a list of subreddits stored by ID.
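
The endpoint accepts both prefixed and stripped IDs, so no conversion is needed on input; still, a tiny normalizer (hypothetical, for keeping your own records in one canonical form) can be handy:

```python
def to_t5(raw: str) -> str:
    """Ensure a subreddit ID carries the t5_ prefix."""
    raw = raw.strip()
    return raw if raw.startswith("t5_") else "t5_" + raw

print(to_t5("2qh1i"), to_t5("t5_2qh1i"))  # t5_2qh1i t5_2qh1i
```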


4. Bulk Profiles by ID

Hydrate a list of user t2_ IDs to full Redditor records.

Inputs

| Field | Notes |
|-------|-------|
| bulk_profiles_by_id_ids | Comma- or newline-separated user IDs. Up to 10000. |

Accepted formats — full IDs (t2_1w72) and stripped IDs (1w72). Mix prefixed and stripped freely.

ID-only endpoint. To look up users by username, u/name, or profile URL, use Bulk Profiles by Name below.

Returns per user — karma split into post / comment / award / awardee, account creation date, snoovatar, banner, accepted-DMs flag, mod info, employee / verified flags, premium status.

Use it when — you have a list of stable t2_ IDs (which never change, even after a username rename) and want full profile records back.
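
Reddit thing IDs (the part after t2_) are base36-encoded integers, which makes them easy to decode for compact storage or stable numeric sorting. A sketch of the round trip:

```python
def t2_to_int(t2: str) -> int:
    """Decode a t2_ fullname's base36 payload to an integer."""
    return int(t2.removeprefix("t2_"), 36)

def int_to_t2(n: int) -> str:
    """Encode an integer back to a t2_ fullname."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = digits[r] + out
    return "t2_" + (out or "0")

print(t2_to_int("t2_1w72"))    # 88382
print(int_to_t2(88382))        # t2_1w72
```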


5. Bulk Profiles by Name

Hydrate a list of usernames to full Redditor records.

Inputs

| Field | Notes |
|-------|-------|
| bulk_profiles_names | Comma- or newline-separated user inputs. Up to 10000. |

Accepted formats — usernames (spez), prefixed names (u/spez), and profile URLs (https://reddit.com/user/spez). Mix freely.

Returns per user — same rich profile record as Bulk Profiles by ID.

Use it when — you have a list of usernames (from comments, mentions, a CSV) and want full profiles in one run.
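
When usernames come from mixed sources (mentions, URLs, CSV columns), reducing them to the bare form keeps your lists tidy. A hypothetical normalizer covering the three accepted shapes above:

```python
import re

def to_username(raw: str) -> str:
    """Reduce u/name or a profile URL to the bare username; pass bare names through."""
    raw = raw.strip().rstrip("/")
    m = re.search(r"/u(?:ser)?/([^/]+)$", raw)  # matches /user/<name> or /u/<name>
    if m:
        return m.group(1)
    return raw.removeprefix("u/")

print([to_username(x) for x in ["spez", "u/spez", "https://reddit.com/user/spez"]])
# ['spez', 'spez', 'spez']
```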


How to run

  1. Pick an endpoint in the "What to fetch" dropdown.
  2. Open the matching section and paste your IDs / usernames (comma- or newline-separated). Each section is independent.
  3. Click Start.

Default endpoint is Bulk Posts by ID with a small prefilled list so the actor runs out of the box.


Output

Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

| Endpoint | Rows pushed |
|----------|-------------|
| Bulk Posts by ID | one record per ID (up to 10000) |
| Bulk Comments by ID | one record per ID (up to 10000) |
| Bulk Communities by ID | one record per ID (up to 10000) |
| Bulk Profiles by ID | one record per ID (up to 10000) |
| Bulk Profiles by Name | one record per username (up to 10000) |

Every record carries an endpoint field. Most useful columns (id, title / name, score / karma, created date) are placed first. You only ever pay per record pushed to the dataset (see Pricing below).
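
Because every record carries an endpoint field, datasets exported from several runs can be merged and split back apart downstream. A sketch with assumed field values (the exact endpoint strings and columns may differ from what the actor emits):

```python
from collections import defaultdict

# Sample records mimicking the dataset shape described above (field values assumed).
records = [
    {"endpoint": "bulk_posts_by_id", "id": "t3_1s4a4j6", "title": "Example", "score": 42},
    {"endpoint": "bulk_profiles_by_name", "id": "t2_1w72", "name": "spez"},
    {"endpoint": "bulk_posts_by_id", "id": "t3_abc123", "title": "Another", "score": 7},
]

by_endpoint = defaultdict(list)
for rec in records:
    by_endpoint[rec["endpoint"]].append(rec)

print({k: len(v) for k, v in by_endpoint.items()})
# {'bulk_posts_by_id': 2, 'bulk_profiles_by_name': 1}
```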


Status & error reference

Run status (Apify-side, shown on the run page)

| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | SUCCEEDED | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | FAILED | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | TIMED-OUT | "The Actor timed out…" | Run exceeded its timeout. | Re-run with a smaller batch. |
| red square outline | ABORTED | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |

Common in-run conditions (visible in run log)

| Condition | Cause | Result |
|---|---|---|
| Empty result set | None of the IDs / names matched a live entity. | Run SUCCEEDED, 0 records, no charge. |
| Missing IDs in output | Some IDs were deleted, banned, or never existed. | Run SUCCEEDED; only resolvable IDs are returned. |
| Suspended account | Username / t2_ is suspended. | Run SUCCEEDED, mostly-null record for that user. |
| Input list too long | More than 10000 IDs / usernames. | Run FAILED with a clear validation error. No charge. |
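
If your list exceeds the 10000-item cap, split it client-side and submit one run per batch rather than letting the run fail validation. A minimal sketch:

```python
def chunks(items, size=10000):
    """Yield successive batches no larger than the per-run cap."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

ids = ["id%d" % i for i in range(25000)]
print([len(batch) for batch in chunks(ids)])  # [10000, 10000, 5000]
```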

Common edge cases

  • Deleted / removed posts and comments — partial metadata returned with removed_by_category populated.
  • Suspended / deleted accounts — minimal data; expect most fields to be null.
  • Banned subreddits — return zero records for that ID.
  • ID format flexibility — raw IDs, prefixed (t1_, t3_, t5_, t2_), and full Reddit URLs are all accepted on post / comment endpoints.
  • Username rename — t2_ IDs are stable; usernames are not. Use Bulk Profiles by ID if you need long-term-stable references.
  • Single-record + feed lookups live in the companion actor Reddit Scraper V2 — use it for post comments, profile feeds, subreddit feeds, single-record lookups, and linked-comment context.
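
The removed_by_category field mentioned above makes it straightforward to separate live posts from deleted or removed ones downstream. A sketch with sample records (the specific category values are assumptions, not an exhaustive list):

```python
def live_posts(records):
    """Keep only records where removed_by_category is absent or null."""
    return [r for r in records if not r.get("removed_by_category")]

sample = [
    {"id": "t3_aaa", "removed_by_category": None},
    {"id": "t3_bbb", "removed_by_category": "moderator"},  # assumed value
    {"id": "t3_ccc"},
]
print([r["id"] for r in live_posts(sample)])  # ['t3_aaa', 't3_ccc']
```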

Why this actor is fast

  • Speed — a full 10000-item run completes in around 75 seconds. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per item.
  • Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
  • Footprint — runs at 512 MB with ~4× headroom on full-size runs.

| Run profile | Peak memory | Avg memory | Avg CPU | Peak CPU |
|---|---|---|---|---|
| Bulk by ID, 10000 items | ~95 MB (~18% of 512 MB) | ~91 MB | ~10% | ~57% |

Leave the Memory field at its default and you have plenty of headroom for spiky inputs, slow networks, or large lists. There's no benefit to bumping it higher.


Pricing

Pay-per-result. You're only charged for records actually pushed to the dataset.

| Outcome | Charged? |
|---|---|
| SUCCEEDED with results | Yes — per record pushed. |
| SUCCEEDED with zero records | No. |
| FAILED (validation / upstream) | No. |
| ABORTED | Only for records already pushed before you stopped. |

See the actor's Pricing tab for the current per-result rate.
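
Since billing is per record pushed, a rough budget estimate is just records × rate. A sketch using the listed $1.99 / 1,000 figure; the rate on the Pricing tab is authoritative and may change:

```python
def estimated_cost(records, rate_per_1000=1.99):
    """Rough pay-per-result cost in USD; check the Pricing tab for the current rate."""
    return round(records / 1000 * rate_per_1000, 2)

print(estimated_cost(10000))  # 19.9
```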