Reddit Bulk Scrape 10000 IDs V2 — Posts, Comments, Subs, Users
Pricing
from $1.99 / 1,000 results
Bulk-scrape Reddit posts, comments, subreddits, and users in a single call. Pick one of 5 endpoints and paste up to 10000 inputs — IDs, stripped IDs, URLs, or usernames (depending on endpoint). Returns full GQL metadata as one dataset record per item. No Reddit account or proxy required.
Developer: Red Crawler
Reddit Bulk Scrape V2
Hydrate large lists of Reddit IDs in a single run — posts, comments, subreddits, and users. 5 bulk-by-ID endpoints in one actor. No Reddit account, OAuth, or proxy required.
Paste up to 10000 IDs / usernames per run and get one full record per item back in the dataset.
Need feeds, comment trees, or single-record lookups? They live in the companion actor Reddit Scraper V2 — 11 endpoints covering post comments, profile feeds, subreddit feeds, and detailed comment lookups.
Endpoints at a glance
| # | Endpoint | Input | Cap per run | Best for |
|---|---|---|---|---|
| 1 | Bulk Posts by ID | post IDs (raw / t3_ / URLs) | 10000 | post-list enrichment, hydrating stored IDs |
| 2 | Bulk Comments by ID | comment IDs (raw / t1_ / URLs) | 10000 | comment-list hydration, archival pipelines |
| 3 | Bulk Communities by ID | subreddit IDs (stripped or t5_) | 10000 | community-list enrichment by ID |
| 4 | Bulk Profiles by ID | user IDs (stripped or t2_) | 10000 | user-list enrichment by ID |
| 5 | Bulk Profiles by Name | usernames / u/name / profile URLs | 10000 | user-list enrichment by username |
Each endpoint accepts the full range of formats Reddit uses for its entity:
| Entity | Accepted formats |
|---|---|
| post | full URL · prefixed t3_1s4a4j6 · stripped ID 1s4a4j6 |
| comment | full URL · prefixed t1_lwbnv0t · stripped ID lwbnv0t |
| subreddit (by ID) | prefixed t5_2qh1i · stripped ID 2qh1i |
| user (by ID) | prefixed t2_1w72 · stripped ID 1w72 |
| user (by name) | username spez · prefixed u/spez · profile URL https://reddit.com/user/spez |
Separate inputs with commas or newlines — both work. Mix prefixed and stripped freely; duplicates are removed automatically.
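As a rough sketch of that normalization in Python (the actor does this server-side; `normalize_post_inputs` and the URL pattern here are illustrative, not the actor's code):

```python
import re

# Matches the post-ID segment of a Reddit post URL, e.g.
# https://www.reddit.com/r/Wordpress/comments/1s4a4j6/
POST_URL = re.compile(r"reddit\.com/r/[^/]+/comments/([0-9a-z]+)")

def normalize_post_inputs(raw: str) -> list[str]:
    """Turn a comma/newline-separated blob of post inputs into
    unique stripped IDs, preserving first-seen order."""
    seen, out = set(), []
    for token in re.split(r"[,\n]", raw):
        token = token.strip()
        if not token:
            continue
        m = POST_URL.search(token)
        if m:
            token = m.group(1)       # full URL -> stripped ID
        elif token.startswith("t3_"):
            token = token[3:]        # prefixed -> stripped
        if token not in seen:
            seen.add(token)
            out.append(token)
    return out

blob = "t3_1s4a4j6, 1s4a4j6\nhttps://www.reddit.com/r/Wordpress/comments/1s4a4j6/"
print(normalize_post_inputs(blob))  # all three forms collapse to one ID
```

Running this locally is a cheap sanity check before pasting a large list, since duplicates never reach the run.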
What you can fetch
1. Bulk Posts by ID
Hydrate a list of post IDs to full post records in one call.
Inputs
| Field | Notes |
|---|---|
| `bulk_posts_ids` | Comma- or newline-separated post inputs. Up to 10000. |
Accepted formats — full IDs (t3_1s4a4j6), stripped IDs (1s4a4j6), and full URLs (https://www.reddit.com/r/Wordpress/comments/1s4a4j6/). Mix freely.
Returns per post — title, body, score, comment count, awards, flair, media (images / video / gallery), all post flags, subreddit, author, created timestamp.
Use it when — you have a list of post IDs (from your DB, a previous scrape, or a CSV) and want full post payloads back in one run.
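If you assemble the run input programmatically, a small guard keeps an oversized list from failing validation mid-pipeline. `build_bulk_posts_input` is a hypothetical helper; only the `bulk_posts_ids` field name comes from the table above:

```python
MAX_IDS = 10000  # documented per-run cap

def build_bulk_posts_input(post_ids: list[str]) -> dict:
    """Hypothetical helper: assemble the run input for Bulk Posts by ID,
    failing fast instead of letting the run FAIL on validation."""
    unique = list(dict.fromkeys(post_ids))  # dedupe, keep first-seen order
    if len(unique) > MAX_IDS:
        raise ValueError(f"{len(unique)} IDs exceeds the {MAX_IDS}-per-run cap")
    # Comma-separated works; newline-separated would too.
    return {"bulk_posts_ids": ",".join(unique)}

print(build_bulk_posts_input(["1s4a4j6", "t3_1s4a4j6", "1s4a4j6"]))
```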
2. Bulk Comments by ID
Hydrate a list of comment IDs to full comment records in one call.
Inputs
| Field | Notes |
|---|---|
| `bulk_comments_ids` | Comma- or newline-separated comment inputs. Up to 10000. |
Accepted formats — full IDs (t1_lwbnv0t), stripped IDs (lwbnv0t), and full URLs (https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/). Mix freely.
Returns per comment — body (markdown + HTML), score, author, depth, all comment flags, parent post / parent comment IDs, awards, created + edited timestamps, permalink.
Use it when — comment-list hydration, archival pipelines, refreshing a stored set of comment IDs.
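If your comment IDs arrive as permalinks, a sketch like this (hypothetical helper, matching the URL shape shown above) converts them to compact `t1_` fullnames before pasting:

```python
import re

# Matches the /comments/<post>/comment/<comment> permalink shape.
COMMENT_URL = re.compile(r"/comments/[0-9a-z]+/(?:[^/]+/)?comment/([0-9a-z]+)")

def comment_fullname(link: str) -> str:
    """Reduce a permalink, prefixed ID, or stripped ID to a t1_ fullname."""
    m = COMMENT_URL.search(link)
    stripped = m.group(1) if m else link.removeprefix("t1_")
    return f"t1_{stripped}"

print(comment_fullname(
    "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/comment/lwbnv0t/"
))  # t1_lwbnv0t
```

This is purely optional; the endpoint accepts the raw URLs directly.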
3. Bulk Communities by ID
Hydrate a list of subreddit t5_ IDs to full community records.
Inputs
| Field | Notes |
|---|---|
| `bulk_communities_ids` | Comma- or newline-separated subreddit IDs. Up to 10000. |
Accepted formats — full IDs (t5_2qh1i) and stripped IDs (2qh1i). Mix prefixed and stripped freely.
ID-only endpoint — Reddit's bulk lookup exists only by ID, not by name. To look up subreddits by name (`AskReddit`), `r/name`, or URL, use the V1 actor Reddit Bulk Scrape.
Returns per subreddit — subscriber count, public + full description, theme (banner, icon, colors), allowed submission types, NSFW flag, type (public / private / restricted), created timestamp.
Use it when — community-list enrichment, sidebar / theme audits, hydrating a list of subreddits stored by ID.
4. Bulk Profiles by ID
Hydrate a list of user t2_ IDs to full Redditor records.
Inputs
| Field | Notes |
|---|---|
| `bulk_profiles_by_id_ids` | Comma- or newline-separated user IDs. Up to 10000. |
Accepted formats — full IDs (t2_1w72) and stripped IDs (1w72). Mix prefixed and stripped freely.
ID-only endpoint. To look up users by username, `u/name`, or profile URL, use Bulk Profiles by Name below.
Returns per user — karma split into post / comment / award / awardee, account creation date, snoovatar, banner, accepted-DMs flag, mod info, employee / verified flags, premium status.
Use it when — you have a list of stable t2_ IDs (which never change, even after a username rename) and want full profile records back.
5. Bulk Profiles by Name
Hydrate a list of usernames to full Redditor records.
Inputs
| Field | Notes |
|---|---|
| `bulk_profiles_names` | Comma- or newline-separated user inputs. Up to 10000. |
Accepted formats — usernames (spez), prefixed names (u/spez), and profile URLs (https://reddit.com/user/spez). Mix freely.
Returns per user — same rich profile record as Bulk Profiles by ID.
Use it when — you have a list of usernames (from comments, mentions, a CSV) and want full profiles in one run.
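A minimal sketch of reducing the three accepted formats to bare usernames client-side (illustrative only; the endpoint accepts all three as-is):

```python
import re

def normalize_username(raw: str) -> str:
    """Reduce 'spez', 'u/spez', or a profile URL to the bare username."""
    raw = raw.strip()
    m = re.search(r"reddit\.com/u(?:ser)?/([^/?#]+)", raw)
    if m:
        return m.group(1)
    return raw.removeprefix("u/")

for s in ("spez", "u/spez", "https://reddit.com/user/spez"):
    print(normalize_username(s))  # spez each time
```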
How to run
- Pick an endpoint in the "What to fetch" dropdown.
- Open the matching section and paste your IDs / usernames (comma- or newline-separated). Each section is independent.
- Click Start.
Default endpoint is Bulk Posts by ID with a small prefilled list so the actor runs out of the box.
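You can also start runs from code. A sketch using the official `apify-client` Python package (`pip install apify-client`); the actor ID below is a placeholder you would replace with the real one from the actor's API tab, and `bulk_posts_ids` is the documented input field:

```python
import os

# Input for the default endpoint (Bulk Posts by ID). Other endpoints use
# their own fields, e.g. bulk_profiles_names.
run_input = {
    "bulk_posts_ids": "t3_1s4a4j6, 1s4a4j6",
}

token = os.environ.get("APIFY_TOKEN")
if token:
    # Imported lazily so the snippet is inspectable without the package.
    from apify_client import ApifyClient

    client = ApifyClient(token)
    # "YOUR_USERNAME/reddit-bulk-scrape-v2" is a placeholder actor ID.
    run = client.actor("YOUR_USERNAME/reddit-bulk-scrape-v2").call(run_input=run_input)
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item.get("id"), item.get("title"))
```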
Output
Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.
| Endpoint | Rows pushed |
|---|---|
| Bulk Posts by ID | one record per ID (up to 10000) |
| Bulk Comments by ID | one record per ID (up to 10000) |
| Bulk Communities by ID | one record per ID (up to 10000) |
| Bulk Profiles by ID | one record per ID (up to 10000) |
| Bulk Profiles by Name | one record per username (up to 10000) |
Every record carries an endpoint field. Most useful columns (id, title / name, score / karma, created date) are placed first. You only ever pay per record pushed to the dataset (see Pricing below).
Status & error reference
Run status (Apify-side, shown on the run page)
| Apify UI cue | Status | Apify message | Meaning | What to do |
|---|---|---|---|---|
| green check | SUCCEEDED | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | FAILED | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | TIMED-OUT | "The Actor timed out…" | Run exceeded its timeout. | Re-run with a smaller batch. |
| red square outline | ABORTED | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |
Common in-run conditions (visible in run log)
| Condition | Cause | Result |
|---|---|---|
| Empty result set | None of the IDs / names matched a live entity. | Run SUCCEEDED, 0 records, no charge. |
| Missing IDs in output | Some IDs were deleted, banned, or never existed. | Run SUCCEEDED; only resolvable IDs are returned. |
| Suspended account | Username / t2_ is suspended. | Run SUCCEEDED, mostly-null record for that user. |
| Input list too long | More than 10000 IDs / usernames. | Run FAILED with a clear validation error. No charge. |
Common edge cases
- Deleted / removed posts and comments — partial metadata returned with `removed_by_category` populated.
- Suspended / deleted accounts — minimal data; expect most fields to be null.
- Banned subreddits — return zero records for that ID.
- ID format flexibility — raw IDs, prefixed IDs (`t1_`, `t3_`, `t5_`, `t2_`), and full Reddit URLs are all accepted on post / comment endpoints.
- Username rename — `t2_` IDs are stable; usernames are not. Use Bulk Profiles by ID if you need long-term-stable references.
- Single-record + feed lookups live in the companion actor Reddit Scraper V2 — use it for post comments, profile feeds, subreddit feeds, single-record lookups, and linked-comment context.
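Because unresolvable IDs are skipped silently, a quick diff of your inputs against the dataset shows what was dropped. A sketch assuming each record carries its fullname in the documented `id` column (the prefix handling is illustrative):

```python
def missing_inputs(requested: list[str], records: list[dict]) -> set[str]:
    """Compare requested IDs (prefixed or stripped) against the 'id'
    field of returned records; the difference is the deleted / banned /
    nonexistent inputs."""
    strip = lambda s: str(s).split("_")[-1]  # t3_abc -> abc, abc -> abc
    returned = {strip(rec.get("id", "")) for rec in records}
    return {strip(rid) for rid in requested} - returned

records = [{"id": "t3_1s4a4j6"}]
print(missing_inputs(["1s4a4j6", "t3_deadbeef"], records))  # {'deadbeef'}
```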
Why this actor is fast
- Speed — a full 10000-item run completes in around 75 seconds. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per item.
- Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- Footprint — runs at 512 MB with ~4× headroom on full-size runs.
| Run profile | Peak memory | Avg memory | Avg CPU | Peak CPU |
|---|---|---|---|---|
| Bulk by ID, 10000 items | ~95 MB (~18% of 512 MB) | ~91 MB | ~10% | ~57% |
Leave the Memory field at its default and you have plenty of headroom for spiky inputs, slow networks, or large lists. There's no benefit to bumping it higher.
Pricing
Pay-per-result. You're only charged for records actually pushed to the dataset.
| Outcome | Charged? |
|---|---|
| SUCCEEDED with results | Yes — per record pushed. |
| SUCCEEDED with zero records | No. |
| FAILED (validation / upstream) | No. |
| ABORTED | Only for records already pushed before you stopped. |
See the actor's Pricing tab for the current per-result rate.