Reddit Scraper V2 — Posts, Comments, Users & Subreddits (11)

Pricing

from $1.99 / 1,000 results

Developer: Red Crawler · Maintained by Community


Reddit Scraper V2


Scrape Reddit at scale — single posts, comment trees, user profiles, subreddit feeds, and detailed comment lookups. 11 self-contained endpoints in one actor. No Reddit account, OAuth, or proxy required.

Pick the endpoint, fill the matching section, hit Start.

Looking for bulk-by-ID lookups? They live in the companion actor Reddit Bulk Scrape V2 — paste up to 500 IDs/usernames (5000 for comments) per run and get one full record per item.


Endpoints at a glance

| # | Endpoint | Records returned | Best for |
|---|----------|------------------|----------|
| 1 | Post Comments | up to 1500 (or all) | sentiment, debate threads, archives, training data |
| 2 | Post by ID | 1 record | single-post deep dive |
| 3 | Profile (Full) | 1 record | full-profile dashboards, lead enrichment |
| 4 | Profile (Details) | 1 record | moderation tooling, contributor audits |
| 5 | Profile Posts | up to 1250 | author monitoring, content audits |
| 6 | Profile Comments | up to 1250 | brand-mention tracking, reputation monitoring |
| 7 | User Info | 1 record | filling field gaps from Profile (Full) |
| 8 | Community Info | 1 record | community discovery, audience sizing |
| 9 | Community Feed | up to 1250 | content scraping, trending posts |
| 10 | Get Comment by ID | 1 record | quoting, refreshing one comment |
| 11 | Linked Comment Info | 1 record | comment with parent post + author context |

Inputs accept the most-permissive format Reddit uses for each entity:

| Entity | Accepted |
|--------|----------|
| post | URL · t3_ fullname · raw ID |
| comment | URL · t1_ fullname · raw ID |
| user | username · u/name |
| subreddit | name · r/name · subreddit URL |
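If you assemble inputs programmatically, it can help to normalize identifiers to one canonical form first. A minimal sketch in Python; this is purely optional, since the actor accepts every form above, and the helper names and regexes are illustrative, not part of the actor:

```python
import re

def normalize_post_id(value: str) -> str:
    """Reduce a post URL, t3_ fullname, or raw ID to the bare base-36 ID."""
    value = value.strip()
    m = re.search(r"/comments/([a-z0-9]+)", value)  # full Reddit URL
    if m:
        return m.group(1)
    if value.startswith("t3_"):  # fullname prefix
        return value[3:]
    return value  # assume it is already a raw ID

def normalize_username(value: str) -> str:
    """Strip an optional u/ prefix from a username."""
    return value.strip().removeprefix("u/")

def normalize_subreddit(value: str) -> str:
    """Reduce a subreddit URL or r/name to the bare name."""
    m = re.search(r"(?:^|/)r/([A-Za-z0-9_]+)", value)
    return m.group(1) if m else value.strip()
```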

What you can fetch

1. Post Comments

Every comment on a single post, with control over how the tree is traversed.

Inputs

| Field | Type | Default | Notes |
|-------|------|---------|-------|
| post | string | (required) | URL or post ID. |
| sort | enum | best | best / confidence / top / new / controversial / old / qa. |
| mode | enum | custom | custom (capped) / top_level / all (uncapped). |
| limit | int | 100 | 1 – 1500. Used by custom mode. |

Returns per comment — ID, fullname, parent comment / parent post IDs, author, body (markdown + HTML), score, depth, OP flag, all comment flags, subreddit, awards, created + edited timestamps, permalink.

Use it when — sentiment, debate threads, support-ticket mining, comment archives, training data.
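These controls map directly onto the run input. A sketch of building and sanity-checking that input in Python, assuming the field names from the table above; the "endpoint" selector key is a placeholder, so check the actor's input schema for the real key names:

```python
VALID_SORTS = {"best", "confidence", "top", "new", "controversial", "old", "qa"}
VALID_MODES = {"custom", "top_level", "all"}

def build_post_comments_input(post, sort="best", mode="custom", limit=100):
    """Build a run-input dict for the Post Comments endpoint.

    Field names mirror the inputs table; the "endpoint" key is an
    assumption -- confirm it against the actor's input schema.
    """
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    if mode == "custom" and not 1 <= limit <= 1500:
        raise ValueError("limit must be 1-1500 in custom mode")
    return {"endpoint": "post_comments", "post": post,
            "sort": sort, "mode": mode, "limit": limit}
```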


2. Post by ID

Full payload of a single post.

Inputs

| Field | Notes |
|-------|-------|
| post | URL or post ID. |

Returns — title, body, score, comment count, awards, flair, media (images / video / gallery), all post flags, subreddit, author, created timestamp.

Use it when — single-post deep dive, refreshing a stored post, importing one thread into your DB.


3. Profile (Full)

Full Redditor identity with the richest set of profile fields.

Inputs

| Field | Notes |
|-------|-------|
| username | Raw or u/name. |

Returns — karma split into post / comment / award / awardee, account creation date, snoovatar, banner, social links, accepted-DMs flag, accepted-chats flag, accepted-followers flag, mod info, employee / verified flags, premium status, trophy-case totals.

Use it when — full-profile dashboards, lead enrichment, account-quality scoring, brand-monitor profile cards.


4. Profile (Details)

Profile-as-subreddit settings (every Reddit profile is also a subreddit u_username).

Inputs

| Field | Notes |
|-------|-------|
| username | Raw or u/name. |

Returns — post permissions, flair settings, mod permissions, contributor / subscriber state, whitelist status, NSFW flag, the user's authorFlair on their own profile.

Use it when — moderation tooling, contributor / whitelist audits, profile-page gating logic.


5. Profile Posts

The user's submitted posts.

Inputs

| Field | Type | Default | Notes |
|-------|------|---------|-------|
| username | string | (required) | Raw or u/name. |
| sort | enum | new | hot / new / top / controversial. |
| time | enum | (none) | Used with top / controversial. |
| limit | int | 25 | 1 – 1250. |

Returns per post — same rich post record as Post by ID.

Use it when — author monitoring, content audits, building feeds of a creator's submissions.


6. Profile Comments

The user's comment history.

Inputs — same controls as Profile Posts (sort / time / limit 1–1250).

Returns per comment — same record as Post Comments.

Use it when — brand-mention tracking, reputation monitoring, conversation mining on a single user.
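Profile Posts and Profile Comments share the same sort / time / limit controls, so one input builder covers both. A sketch assuming the values from the Profile Posts table; the time window strings are Reddit's standard top/controversial filters and are an assumption here:

```python
PROFILE_SORTS = {"hot", "new", "top", "controversial"}
# Assumed time windows (Reddit's standard "t" filters); the actor's input
# schema lists the strings it actually accepts.
TIME_FILTERS = {"hour", "day", "week", "month", "year", "all"}

def build_profile_feed_input(username, sort="new", time=None, limit=25):
    """Validate the shared sort/time/limit controls for the profile feeds."""
    if sort not in PROFILE_SORTS:
        raise ValueError(f"sort must be one of {sorted(PROFILE_SORTS)}")
    if time is not None:
        if sort not in {"top", "controversial"}:
            raise ValueError("time only applies to top / controversial")
        if time not in TIME_FILTERS:
            raise ValueError(f"time must be one of {sorted(TIME_FILTERS)}")
    if not 1 <= limit <= 1250:
        raise ValueError("limit must be 1-1250")
    payload = {"username": username.removeprefix("u/"),
               "sort": sort, "limit": limit}
    if time is not None:
        payload["time"] = time
    return payload
```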


7. User Info

Alternate user-info read with a different field set than Profile (Full) — useful for filling gaps.

Inputs

| Field | Notes |
|-------|-------|
| username | Raw or u/name. |

Returns — complementary profile fields.

Use it when — Profile (Full) is missing fields you need; combining both gives the most complete record.


8. Community Info

Subreddit metadata.

Inputs

| Field | Notes |
|-------|-------|
| subreddit | Name / r/name / URL. |

Returns — subscriber count, public + full description, rules summary, theme (banner, icon, colors), allowed submission types, NSFW flag, type (public / private / restricted), created timestamp.

Use it when — community discovery, sizing audiences, sidebar / theme audits.


9. Community Feed

A subreddit's post feed with all 6 sort modes.

Inputs

| Field | Type | Default | Notes |
|-------|------|---------|-------|
| subreddit | string | (required) | Name / r/name / URL. |
| sort | enum | hot | best / hot / new / top / rising / controversial. |
| time | enum | (none) | Used with top / controversial. |
| limit | int | 25 | 1 – 1250. |

Returns per post — same rich post record as Post by ID.

Use it when — content scraping, trending-post tracking, building feeds of a niche community.


10. Get Comment by ID

Full payload of a single comment.

Inputs

| Field | Notes |
|-------|-------|
| comment | URL, t1_ fullname, or raw ID. |

Returns — body, score, author, flair, awards, parent post / comment IDs, created timestamp, permalink.

Use it when — quoting a comment, refreshing one stored row, importing a single comment into your DB.


11. Linked Comment Info

Comment payload plus the parent post and the comment author profile in a single record — handy when you need full conversation context without firing three separate calls.

Inputs

| Field | Notes |
|-------|-------|
| comment | URL, t1_ fullname, or raw ID. |

Returns — full comment record + parent post (title, subreddit, created, score, flair) + author profile snapshot (karma, account age, flags).

Use it when — building rich comment cards, audit trails, moderation tooling, or any flow that needs the comment alongside its post and author in one row.


How to run

  1. Pick an endpoint in the "What to fetch" dropdown.
  2. Open the matching section and fill its fields. Each section is independent.
  3. Click Start.

Default endpoint is Community Feed on r/python so the actor runs out of the box.
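The same run can be started over the Apify API. A hedged sketch with the official apify-client Python package; the input key names and the actor ID are illustrative placeholders, so copy the real values from the actor's input schema and API tab:

```python
# Run input replicating the out-of-the-box default: Community Feed on r/python.
# The key names are assumptions -- confirm them against the actor's input schema.
DEFAULT_INPUT = {
    "endpoint": "community_feed",
    "subreddit": "python",
    "sort": "hot",
    "limit": 25,
}

def run_actor(token: str, actor_id: str = "red-crawler/reddit-scraper-v2"):
    """Start a run and yield dataset items. Needs `pip install apify-client`.

    The actor ID above is illustrative -- copy the real one from the
    actor's API tab.
    """
    from apify_client import ApifyClient  # imported lazily; network call below

    client = ApifyClient(token)
    run = client.actor(actor_id).call(run_input=DEFAULT_INPUT)
    yield from client.dataset(run["defaultDatasetId"]).iterate_items()
```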


Output

Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

| Endpoint kind | Rows pushed |
|---------------|-------------|
| Single-record (Post by ID, Profile (Full), Community Info, Get Comment by ID, Linked Comment Info, etc.) | 1 record |
| Feed (Post Comments, Profile Posts / Comments, Community Feed) | up to your limit |

Every record carries an endpoint field. Most useful columns (id, title / name, score / karma, created date) are placed first.
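If you post-process the downloaded JSON yourself, you can reproduce that column ordering. A sketch, assuming the column names listed above; the actual dataset field names may differ slightly:

```python
# Columns to surface first, mirroring the dataset view. The exact names
# are assumptions based on the README's "most useful columns" note.
PREFERRED = ["id", "title", "name", "score", "karma", "created"]

def order_columns(record: dict) -> dict:
    """Return a copy of the record with the preferred columns first."""
    front = {k: record[k] for k in PREFERRED if k in record}
    rest = {k: v for k, v in record.items() if k not in front}
    return {**front, **rest}
```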


Status & error reference

Run status (Apify-side, shown on the run page)

| Apify UI cue | Status | Apify message | Meaning | What to do |
|--------------|--------|---------------|---------|------------|
| green check | SUCCEEDED | "Actor succeeded with N results in the dataset" | Run finished. Some or zero results pushed. | Open the dataset. |
| red exclamation | FAILED | "The Actor process failed…" | Validation error or upstream Reddit fault. | Check the run log. You are NOT charged. |
| red clock | TIMED-OUT | "The Actor timed out…" | Run exceeded its timeout. | Re-run with a smaller limit or a less popular thread / feed. |
| red square outline | ABORTED | "The Actor process was aborted…" | You stopped the run manually. | No charge for unpushed results. |

Common in-run conditions (visible in run log)

| Condition | Cause | Result |
|-----------|-------|--------|
| Empty result set | Username / post / subreddit doesn't exist or is banned. | Run SUCCEEDED, 0 records, no charge. |
| Removed post stub | Post was removed; partial metadata returned. | Run SUCCEEDED, includes removed_by_category. |
| Suspended account | Username is suspended. | Run SUCCEEDED, mostly-null record. |

Common edge cases

  • Removed / banned subreddits return zero records.
  • Suspended / deleted accounts return minimal data; expect most fields to be null.
  • Long Post Comments threads — all mode (uncapped) on huge threads can return tens of thousands of records.
  • ID format flexibility — raw IDs, prefixed (t1_, t3_), and full Reddit URLs are all accepted.
  • Bulk-by-ID lookups live in the companion actor Reddit Bulk Scrape V2 — use it when you have a list of post / comment / subreddit / user IDs to hydrate in a single call.
  • NSFW content — fully supported; the over_18 flag tells you if a post is age-gated.
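Downstream, these edge cases are easy to screen for. A sketch of a defensive filter using the removed_by_category and over_18 fields named above; everything else is handled with defensive defaults:

```python
def is_usable_post(post: dict, allow_nsfw: bool = False) -> bool:
    """Drop removed and (optionally) age-gated posts from a result set.

    removed_by_category and over_18 are the fields this README documents;
    missing fields are treated as "not removed" / "not NSFW".
    """
    if post.get("removed_by_category"):  # removed post stub
        return False
    if not allow_nsfw and post.get("over_18"):  # age-gated content
        return False
    return True
```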

Why this actor is fast

  • Speed — 1–3 seconds per call, end-to-end. Pure HTTP to Reddit's API. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per call.
  • Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
  • Footprint — see the memory profile below.

Runs in Apify's lowest 128 MB tier — typically peaks around 45 MB (~35% of the allocation).

The actor is a thin async dispatcher: one HTTP call out, one push_data in. Most of the heavy lifting (Reddit auth, proxy rotation, retry, GraphQL persisted-query handling) is done off-actor on our backend, so the actor itself stays small.

| Run profile | Peak memory observed |
|-------------|----------------------|
| Single post / comment / profile lookup | ~45 MB |
| Linked Comment Info (comment + post + author in one row) | ~46 MB |
| Subreddit feed (up to 1250 posts) | ~48 MB |

That gives ~64% headroom inside 128 MB. You can leave the Memory field at the default and never think about it. If you want extra margin (e.g. unusually large all-mode comment threads), bumping to 256 MB is supported and costs more compute units per second on Apify's side — most users won't need it.


Pricing

Pay-per-result. You're only charged for records actually pushed to the dataset.

| Outcome | Charged? |
|---------|----------|
| SUCCEEDED with results | Yes — per record pushed. |
| SUCCEEDED with zero records | No. |
| FAILED (validation / upstream) | No. |
| ABORTED | Only for records already pushed before you stopped. |

See the actor's Pricing tab for the current per-result rate.
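For budgeting, the per-result math is simple. A sketch using the listed "from $1.99 / 1,000 results" rate; your actual rate is whatever the Pricing tab shows:

```python
def estimated_cost_usd(records: int, rate_per_1000: float = 1.99) -> float:
    """Estimate the pay-per-result charge for a run.

    Defaults to the listed "from" rate; pass the rate from the actor's
    Pricing tab for your plan.
    """
    return round(records * rate_per_1000 / 1000, 2)
```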