Reddit Feeds V2 — 11 sitewide, subreddit & account feeds
Pricing
from $1.99 / 1,000 results
Fetch any of Reddit's 11 feeds at scale: Popular, News, r/All, Watch (videos), Games, Explore, Topic, Recommended Media, any subreddit's posts feed, plus your account's Latest and Saved Posts feeds. 11 self-contained endpoints — 9 anonymous, 2 require a Reddit account.
Developer: Red Crawler
Last modified: 10 hours ago
Reddit Feeds V2
Pull posts from any of Reddit's eleven post feeds at scale. Nine of them — sitewide tabs (Popular, News, r/all), discovery feeds (Explore, Games, Watch, Topic, Recommended Media), and any individual subreddit's feed — work anonymously with no account required. Two of them — your account's Latest Feed and Saved Posts Feed — read your personal Reddit account and need a Token V2 (either pasted directly or stored in the reddit-vault actor).
Eleven self-contained endpoints. Each call returns a paginated batch of post records, one row per post. Pick the endpoint, fill the matching section, hit Start.
What you can fetch
The first nine feeds accept the same three controls — Sort, Time filter, and Limit — so once you learn one, you know them all. The Subreddit, Topic, and Recommended Media feeds add one extra input each (the subreddit name, topic ID, or seed IDs respectively). The two account feeds at the end add the credentials section at the bottom of the form.
1. Subreddit Feed — posts from one subreddit
The classic "scrape r/python" use case. Returns posts from a single subreddit.
Inputs:
- Subreddit — bare name (`python`), prefixed (`r/python`), or full URL (`https://reddit.com/r/python`)
- Sort — `best`, `hot`, `new`, `top`, `controversial`, `rising`
- Time filter — `hour`, `day`, `week`, `month`, `year`, `all` (only applies when sort is `top` or `controversial` — ignored otherwise)
- Limit — 1 to 500
Use it when: you want the standard subreddit-feed scrape — recent posts, top-of-week roundups, monitoring a niche community.
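When scripting runs, the three accepted Subreddit formats can be reduced to a bare name up front. A minimal sketch (illustrative only; the actor itself accepts all three forms directly):

```python
import re

def normalize_subreddit(value: str) -> str:
    """Reduce any accepted Subreddit input to a bare name.

    Accepts a bare name ('python'), a prefixed name ('r/python'),
    or a full URL ('https://reddit.com/r/python').
    """
    value = value.strip().rstrip("/")
    # Prefixed name or URL: keep everything after the final 'r/' path segment
    match = re.search(r"(?:^|/)r/([^/]+)$", value)
    if match:
        return match.group(1)
    return value

print(normalize_subreddit("https://reddit.com/r/python"))  # python
```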
2. Popular Feed — Reddit's sitewide Popular tab
Returns whatever Reddit currently shows on its main Popular page — the homepage of logged-out Reddit.
Inputs: Sort, Time filter, Limit (1–500) — same controls as Subreddit Feed.
Use it when: trend-watching, daily front-page snapshots, training models on what's currently going viral.
3. News Feed — Reddit's News tab
Reddit's curated News feed — pulls posts from news-oriented subreddits the way Reddit's News tab does.
Inputs: Sort (with one extra option, `awarded`), Time filter, Limit (1–500).
Use it when: news monitoring, link aggregation, headline scraping.
4. All Feed — r/all
Posts from r/all — the firehose of every public subreddit on Reddit (minus the few opted out).
Inputs: Sort, Time filter, Limit (1–500).
Use it when: maximum breadth — high-volume scraping across all of Reddit, training data, dataset bootstrapping.
5. Explore Feed — Reddit's discovery / Communities feed
Pulls Reddit's "Explore" tab content — discovery-style posts plus surfaced communities.
Inputs: Sort, Time filter, Limit.
Use it when: finding new subreddits Reddit is recommending, mapping community discovery surfaces, market-sizing the long tail.
6. Games Feed — Reddit's Games-focused feed
A vertical feed weighted toward gaming communities and content.
Inputs: Sort, Time filter, Limit.
Use it when: gaming-niche scraping, esports / launch-day monitoring, game-marketing intelligence.
7. Watch Feed — video-focused content
Reddit's video-first feed — favors posts with embedded videos / GIFs / short clips over self-text.
Inputs: Sort, Time filter, Limit.
Use it when: video / clip mining, social-media short-form content sourcing, media-asset collection.
8. Topic Feed — interest-topic feed
Returns posts for a specific Reddit topic ID. Topic IDs look like `tx1_2unn29s`, and you can pull them from the Explore feed (each Explore item carries its topic ID).
Inputs:
- Topic ID — the `tx1_...` ID from Explore (default `tx1_2unn29s` is a working example)
- Scheme name (optional) — Reddit's internal topic scheme name; usually leave blank
- Sort, Time filter, Limit
Use it when: scraping a specific interest topic at scale (e.g., a particular hobby, fandom, or category Reddit has clustered).
9. Recommended Media — image / video recommendations seeded by subreddits
Reddit returns image/video posts recommended off a list of seed subreddit IDs. Useful for visual content discovery anchored to communities you care about.
Inputs:
- Seed subreddit IDs — comma-separated `t5_...` IDs (default `t5_2qh33,t5_2cneq` is r/funny + r/aww)
- Sort, Time filter, Limit
Tip: Use the Reddit Subreddits V2 actor's ID endpoint to convert a subreddit name (e.g. `funny`) into its `t5_` ID.
Use it when: building image / meme datasets seeded by topic, content recommendation experiments, visual-trend scraping.
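Because the seed list is a plain comma-separated string of `t5_...` IDs, it is easy to validate and trim before a run. A sketch, treating the 2–10-seed guidance above as a soft cap (not a documented hard limit):

```python
def prepare_seeds(raw: str, max_seeds: int = 10) -> str:
    """Validate a comma-separated list of t5_ subreddit IDs and cap its length."""
    ids = [s.strip() for s in raw.split(",") if s.strip()]
    bad = [s for s in ids if not s.startswith("t5_")]
    if bad:
        raise ValueError(f"Not t5_ IDs: {bad}")
    return ",".join(ids[:max_seeds])

print(prepare_seeds("t5_2qh33, t5_2cneq"))  # t5_2qh33,t5_2cneq
```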
10. Latest Feed — your account's Latest feed (bearer required)
The personalised "Latest" feed your Reddit account sees — chronologically newest posts from the subreddits and users you follow, plus recommendations Reddit attaches to the account.
Inputs: Sort (defaults to `new`), Time filter, Limit, plus credentials (saved account name from the vault, or pasted Token V2 + proxy).
Use it when: monitoring brand-new posts in your followed communities, watching a niche-feed of accounts you've subscribed to, building a fresh-content notifier.
11. Saved Posts Feed — your account's saved posts (bearer required)
Pulls every post you've saved on your Reddit account. Useful for exporting bookmarks, building a personal archive, or auditing a moderator-account's saved review queue.
Inputs: Sort, Time filter, Limit, plus credentials (saved account name from the vault, or pasted Token V2 + proxy).
Use it when: exporting your saved-posts archive to JSON / CSV, syncing a personal bookmark database, auditing moderator review history.
How to run
- Pick a feed in the "What to fetch" dropdown at the top.
- Open the matching section below it and fill in the fields (subreddit name, topic ID, or seed IDs if needed — sort / time / limit always show up).
- For the Latest Feed or Saved Posts Feed: also fill the credentials section at the bottom of the form. Two ways to authenticate:
- Saved account (recommended) — store your account once in the reddit-vault actor, then enter that name in "Saved account name" here. Token V2 + matching proxy are loaded automatically.
- Manual — paste your `token_v2` cookie (`eyJ...`) and a matching proxy in `ip:port:user:pass` format. Reddit IP-binds Token V2, so the proxy MUST be the same IP that minted the cookie.
- Click Start.
Each endpoint section is independent — fields outside your chosen section are ignored, so you can leave them as-is between runs. The default endpoint is `popular` so the actor runs out of the box without any credentials.
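For scripted runs, the form fields map to a plain input object. The sketch below assembles one for the Subreddit Feed; the key names (`endpoint`, `subreddit`, `sort`, `timeFilter`, `limit`) are assumptions inferred from the form labels, not the actor's published schema, so check the actor's Input tab for the real keys:

```python
# Hypothetical input object for a Subreddit Feed run.
# Key names are guesses from the form labels, not the actor's real schema.
run_input = {
    "endpoint": "subreddit",   # which of the 11 feeds to hit
    "subreddit": "python",     # bare name, r/ prefix, or full URL
    "sort": "top",
    "timeFilter": "week",      # only honored for top / controversial
    "limit": 100,              # 1-500
}

# With the apify-client package this would be passed to the actor, e.g.:
#   from apify_client import ApifyClient
#   run = ApifyClient("<APIFY_TOKEN>").actor("<actor-id>").call(run_input=run_input)
print(run_input["endpoint"])
```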
Time filter gotcha: the Time filter is only applied when Sort is `top` or `controversial`. Reddit ignores it for `hot`, `new`, `best`, `rising`. The actor passes whatever you set — it's just a no-op for the other sorts.
Output
Results are pushed to the actor's default dataset. View them as a table or download as JSON / CSV / Excel / XML.
Every feed pushes one record per post (up to your limit). Each record carries:
- Identity — `id`, `__typename`, `endpoint`
- Headline — `postTitle`, `score`, `commentCount`, `createdAt`, `url`, `domain`
- State flags — `isNsfw`, `isSpoiler`, `isLocked`, `isStickied`, `isArchived`, `voteState`
- Content — `content` (selftext / body), `thumbnail`, `media`, `flair`, `authorInfo`
The most useful columns are placed first so the dataset Table view is readable without horizontal scrolling.
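Once exported as JSON, the flat one-row-per-post shape makes post-processing trivial. A sketch using the documented field names on made-up sample records:

```python
posts = [  # sample records shaped like the documented output (values invented)
    {"id": "t3_a", "postTitle": "Release notes", "score": 120, "commentCount": 8,  "isNsfw": False},
    {"id": "t3_b", "postTitle": "Meme dump",     "score": 950, "commentCount": 40, "isNsfw": True},
    {"id": "t3_c", "postTitle": "Weekly thread", "score": 430, "commentCount": 77, "isNsfw": False},
]

# Drop NSFW rows, then rank the rest by score, highest first
safe = [p for p in posts if not p["isNsfw"]]
top = sorted(safe, key=lambda p: p["score"], reverse=True)
print([p["id"] for p in top])  # ['t3_c', 't3_a']
```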
Common edge cases
- Empty pages — if a sort/time combination has no matching posts (e.g. `top` of `hour` on a quiet subreddit), the actor returns an empty dataset. This is normal Reddit behavior, not an error.
- Quarantined or NSFW subreddits — Subreddit Feed works on most NSFW subreddits. Quarantined ones may return no posts unless the feed is logged-in-only (those are out of scope for this actor).
- Topic IDs change — Reddit occasionally renames or retires topic IDs. If your saved Topic ID stops returning posts, pull a fresh one from the Explore feed.
- Recommended Media seed limits — Reddit caps the number of seed IDs it'll consider; pass your most relevant 2–10 subreddit IDs rather than dozens.
- Time filter on wrong sorts — passing `time=week` with `sort=hot` is silently ignored by Reddit. Use `top` or `controversial` if you want time-window filtering.
- Latest / Saved Posts return nothing — usually means the Token V2 expired (~24 h lifetime) or the proxy IP doesn't match the IP that minted it. Refresh the cookie in your browser, save it again in reddit-vault, and re-run.
- Empty Saved Posts Feed — if your Reddit account has never saved a post, this feed legitimately returns nothing. Save one post via Reddit's UI to confirm the pipeline works.
Why this actor is fast
- Speed — 1–4 seconds per call, end-to-end. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per call.
- Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
- Footprint — under 100 MB RAM per run. Most browser-based scrapers need 1–4 GB. Reddit auth, IP rotation, and retry are all handled for you — the actor itself is a thin client.
Pricing
Pay-per-result. You're only charged for posts actually pushed to the dataset — failed runs cost nothing. See the actor's pricing tab for the current per-result rate.
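Since billing is per result pushed, the cost of a run is simple arithmetic. A sketch using the listed floor rate of $1.99 per 1,000 results (the live rate on the pricing tab may differ):

```python
RATE_PER_1000 = 1.99  # USD; the listed "from" price, not necessarily your live rate

def run_cost(results: int) -> float:
    """Estimated charge for a run that pushes `results` rows to the dataset."""
    return results * RATE_PER_1000 / 1000.0

print(run_cost(2000))  # roughly $3.98 for 2,000 posts
```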
Need a different shape of data?
- For search (find posts by keyword across Reddit), see Reddit Search V2.
- For single-post / comment-tree scraping (one post + its full comment tree), see Reddit Scraper V2.
- For user-feed scraping (posts and comments by a specific user), see Reddit Users V2.
- For subreddit metadata (rules, flairs, wiki, style, etc., not posts), see Reddit Subreddits V2.
- For bulk lookups (up to 1500 posts / comments / users / subreddits in one run), see Reddit Bulk Scrape.
- For storing Reddit accounts so the Latest / Saved Posts feeds (and other authed actors) can re-use them, see Reddit Vault.