Reddit Subreddit Posts Scraper. No Login
Pricing
from $1.50 / 1,000 reddit subreddit posts
Developer: Andrew
Reddit Subreddit Posts Scraper — No Login
Scrape posts from any public subreddit — title, author, score, comment count, body text, link, flair, and timestamp. Filter by Hot, New, Top, Rising, or Controversial. No login, no cookies.
What you get
- Post ID, title, full body text (and HTML), author, and permalink
- Engagement: score, upvote ratio, comment count, awards count
- Listing sort: Hot, New, Top, Rising, Controversial — with Top/Controversial supporting Past Hour through All Time
- Flair, post hint, domain, thumbnail, NSFW / spoiler / stickied / locked flags
- Cursor-based pagination — fetch unlimited posts across multiple runs
- Direct export to JSON, CSV, Excel, or Google Sheets
Use cases
- Track a community's most-discussed topics for content research
- Build a dataset for sentiment, trend, or topic analysis
- Monitor mentions of a brand, product, or competitor across subreddits
- Archive a subreddit's top posts for historical research
- Feed downstream LLM pipelines with current Reddit content
How to use
- Enter a Subreddit name (e.g. programming, r/programming, or a full URL)
- Choose a Sort: Hot, New, Top, Rising, or Controversial
- If using Top or Controversial, pick a Time Filter (Past Day, Past Week, etc.)
- Set Max Posts (default 100; 0 for unlimited)
- Run the actor — one post per row in the Dataset tab
- To fetch the next page, open the Key-value store tab → copy the NEXT_PAGE_ID value → paste it into Page ID on your next run
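The Subreddit field accepts any of the three forms listed in step one. As a minimal sketch of how that flexibility works, the hypothetical helper below (not part of the actor, which handles this internally) reduces each accepted form to the bare subreddit name:

```python
import re

def normalize_subreddit(value: str) -> str:
    """Reduce 'programming', 'r/programming', or a full URL to the bare name.

    Hypothetical helper -- shown only to illustrate the input forms
    the actor accepts.
    """
    value = value.strip()
    # A full URL such as https://www.reddit.com/r/programming/
    m = re.search(r"reddit\.com/r/([^/?#]+)", value)
    if m:
        return m.group(1)
    # An optional leading "/" and "r/" prefix
    return value.lstrip("/").removeprefix("r/")

print(normalize_subreddit("https://www.reddit.com/r/programming/"))  # programming
print(normalize_subreddit("r/programming"))  # programming
```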
Output format
One post per dataset row — perfect for direct CSV, Excel, or Google Sheets export:
```json
{
  "id": "1abc234",
  "name": "t3_1abc234",
  "subreddit": "programming",
  "title": "Show HN: My new project",
  "author": "exampleuser",
  "permalink": "https://www.reddit.com/r/programming/comments/1abc234/show_hn_my_new_project/",
  "url": "https://example.com/project",
  "isSelf": false,
  "selftext": "",
  "score": 1234,
  "upvoteRatio": 0.94,
  "numComments": 87,
  "createdUtc": 1762900000,
  "createdAt": "2026-05-12T10:00:00.000Z",
  "linkFlairText": "Show HN",
  "postHint": "link",
  "domain": "example.com",
  "over18": false,
  "stickied": false,
  "locked": false,
  "isVideo": false
}
```
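Because each row is a flat record, exported JSON is easy to post-process. A small sketch, assuming the field names shown above, that filters for high-engagement SFW posts:

```python
# Example rows in the shape the actor exports (field names from the sample above).
sample_rows = [
    {"title": "Show HN: My new project", "score": 1234, "numComments": 87,
     "over18": False, "isSelf": False},
    {"title": "Low-traffic selfpost", "score": 3, "numComments": 0,
     "over18": False, "isSelf": True},
]

def high_engagement(rows, min_score=100, min_comments=10):
    """Keep SFW posts that clear both engagement thresholds."""
    return [r for r in rows
            if not r["over18"]
            and r["score"] >= min_score
            and r["numComments"] >= min_comments]

hits = high_engagement(sample_rows)
print([r["title"] for r in hits])  # ['Show HN: My new project']
```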
Pagination
If the listing has more posts than Max Posts allows, the actor saves a resume cursor to the default Key-value store under the key NEXT_PAGE_ID.
- Open the Key-value store tab on the run page
- Copy the value of NEXT_PAGE_ID
- Start a new run and paste it into Page ID
When NEXT_PAGE_ID is null, the listing has been fully scraped.
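The resume flow above can be scripted as a simple loop. In this sketch, `run_actor` is a stand-in for whatever triggers a run (e.g. the Apify API), stubbed here to simulate a listing that spans two pages; a real implementation would start a run and read NEXT_PAGE_ID from the run's default key-value store:

```python
def run_actor(run_input):
    """Stand-in for a real actor run (stubbed to simulate two pages)."""
    if run_input.get("pageId") is None:
        # First page: more posts remain, so a resume cursor is returned.
        return {"items": ["post-1", "post-2"], "NEXT_PAGE_ID": "t3_cursor"}
    # Final page: NEXT_PAGE_ID is null once the listing is fully scraped.
    return {"items": ["post-3"], "NEXT_PAGE_ID": None}

def scrape_all(subreddit):
    """Re-run with the previous cursor until NEXT_PAGE_ID comes back null."""
    posts, page_id = [], None
    while True:
        result = run_actor({"subreddit": subreddit, "pageId": page_id})
        posts.extend(result["items"])
        page_id = result["NEXT_PAGE_ID"]
        if page_id is None:
            return posts

print(scrape_all("programming"))  # ['post-1', 'post-2', 'post-3']
```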
Input options
| Field | Type | Description |
|---|---|---|
| Subreddit | string | Subreddit name, with or without r/ (required) |
| Sort | enum | Hot, New, Top, Rising, Controversial — default Hot |
| Time Filter | enum | Past Hour through All Time — only used by Top and Controversial |
| Max Posts | integer | Cap per run — default 100, 0 for unlimited |
| Page ID | string | NEXT_PAGE_ID from the previous run, to resume pagination |
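The fields above combine into a run input along these lines. The JSON key names here are an assumption (camel-cased versions of the labels in the table); check the actor's Input tab for the exact names, and include `pageId` only when resuming a previous run:

```json
{
  "subreddit": "programming",
  "sort": "top",
  "timeFilter": "week",
  "maxPosts": 100,
  "pageId": "<NEXT_PAGE_ID from the previous run>"
}
```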