Reddit Subreddit Posts Scraper. No Login

Pricing: from $1.50 / 1,000 Reddit subreddit posts

Rating: 0.0 (0 reviews)

Developer: Andrew (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user

Last modified: 3 days ago

Reddit Subreddit Posts Scraper — No Login

Scrape posts from any public subreddit — title, author, score, comment count, body text, link, flair, and timestamp. Filter by Hot, New, Top, Rising, or Controversial. No login, no cookies.

What you get

  • Post ID, title, full body text (and HTML), author, and permalink
  • Engagement: score, upvote ratio, comment count, awards count
  • Listing sort: Hot, New, Top, Rising, Controversial — with Top/Controversial supporting Past Hour through All Time
  • Flair, post hint, domain, thumbnail, NSFW / spoiler / stickied / locked flags
  • Cursor-based pagination — fetch unlimited posts across multiple runs
  • Direct export to JSON, CSV, Excel, or Google Sheets

Use cases

  • Track a community's most-discussed topics for content research
  • Build a dataset for sentiment, trend, or topic analysis
  • Monitor mentions of a brand, product, or competitor across subreddits
  • Archive a subreddit's top posts for historical research
  • Feed downstream LLM pipelines with current Reddit content

How to use

  1. Enter a Subreddit name (e.g. programming, r/programming, or a full URL)
  2. Choose a Sort — Hot, New, Top, Rising, or Controversial
  3. If using Top or Controversial, pick a Time Filter (Past Day, Past Week, etc.)
  4. Set Max Posts (default 100; 0 for unlimited)
  5. Run the actor — one post per row in the Dataset tab
  6. To fetch the next page, open the Key-value store tab → copy the NEXT_PAGE_ID value → paste it into Page ID on your next run
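Step 1 accepts all three subreddit input styles. The actor's own parsing isn't published, but a minimal sketch of how a bare name, an r/-prefixed name, or a full URL could be normalized to the same subreddit name (normalize_subreddit is a hypothetical helper, not part of the actor):

```python
import re

def normalize_subreddit(value: str) -> str:
    """Accept 'programming', 'r/programming', or a full URL; return the bare name."""
    value = value.strip().rstrip("/")
    # Match 'r/<name>' at the start of the string or after a slash (as in a URL).
    m = re.search(r"(?:^|/)r/([A-Za-z0-9_]+)", value)
    return m.group(1) if m else value

# All three accepted input styles resolve to the same name:
print(normalize_subreddit("programming"))                            # programming
print(normalize_subreddit("r/programming"))                          # programming
print(normalize_subreddit("https://www.reddit.com/r/programming/"))  # programming
```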

Output format

One post per dataset row — perfect for direct CSV, Excel, or Google Sheets export:

{
  "id": "1abc234",
  "name": "t3_1abc234",
  "subreddit": "programming",
  "title": "Show HN: My new project",
  "author": "exampleuser",
  "permalink": "https://www.reddit.com/r/programming/comments/1abc234/show_hn_my_new_project/",
  "url": "https://example.com/project",
  "isSelf": false,
  "selftext": "",
  "score": 1234,
  "upvoteRatio": 0.94,
  "numComments": 87,
  "createdUtc": 1762900000,
  "createdAt": "2025-11-11T22:26:40.000Z",
  "linkFlairText": "Show HN",
  "postHint": "link",
  "domain": "example.com",
  "over18": false,
  "stickied": false,
  "locked": false,
  "isVideo": false
}
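The export buttons on the run page handle CSV, Excel, and Google Sheets directly. If you would rather post-process the JSON yourself, the one-post-per-row shape maps straight onto Python's csv module (the items list below is made-up sample data standing in for exported dataset rows):

```python
import csv
import io

# Each dataset row is one post; a flat dict converts directly to a CSV row.
items = [
    {"id": "1abc234", "title": "Show HN: My new project", "score": 1234, "numComments": 87},
    {"id": "1abc235", "title": "Another post", "score": 56, "numComments": 4},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=items[0].keys())
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```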

Pagination

If the listing has more posts than Max Posts allows, the actor saves a resume cursor to the default Key-value store under the key NEXT_PAGE_ID.

  1. Open the Key-value store tab on the run page
  2. Copy the value of NEXT_PAGE_ID
  3. Start a new run and paste it into Page ID

When NEXT_PAGE_ID is null, the listing has been fully scraped.
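The resume workflow is a loop: each run consumes the previous run's cursor and emits the next one, until the cursor comes back null. A self-contained simulation of that loop (no network; LISTING, PAGE_SIZE, run_actor, and the cursor values are all stand-ins, and real NEXT_PAGE_ID values are Reddit fullnames such as t3_1abc234):

```python
# Simulated resume loop: each "run" returns one page of posts plus the
# NEXT_PAGE_ID cursor; None means the listing is fully scraped.
LISTING = [f"t3_post{i}" for i in range(7)]  # pretend subreddit listing
PAGE_SIZE = 3                                # stands in for Max Posts

def run_actor(page_id=None):
    """One actor run: resume after page_id, return (posts, next cursor)."""
    start = 0 if page_id is None else LISTING.index(page_id) + 1
    page = LISTING[start:start + PAGE_SIZE]
    next_cursor = page[-1] if start + PAGE_SIZE < len(LISTING) else None
    return page, next_cursor

posts, cursor = [], None
while True:
    page, cursor = run_actor(cursor)   # paste cursor into Page ID on the next run
    posts.extend(page)
    if cursor is None:                 # NEXT_PAGE_ID is null: listing exhausted
        break

print(len(posts))  # 7: the whole listing, collected across three "runs"
```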

Input options

| Field | Type | Description |
|-------|------|-------------|
| Subreddit | string | Subreddit name, with or without r/ (required) |
| Sort | enum | Hot, New, Top, Rising, Controversial (default: Hot) |
| Time Filter | enum | Past Hour through All Time; only used by Top and Controversial |
| Max Posts | integer | Cap per run (default: 100; 0 for unlimited) |
| Page ID | string | NEXT_PAGE_ID from the previous run, to resume pagination |
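A complete input object combining the fields above might look like the following. The key names are assumptions inferred from the field names; check the actor's Input tab for the exact schema:

```json
{
  "subreddit": "programming",
  "sort": "top",
  "timeFilter": "week",
  "maxPosts": 100,
  "pageId": ""
}
```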