Reddit Scraper ✅ Posts, Comments, Users, Communities | NO LOGIN

Enjoy 2 weeks free. Reddit scraper for posts, comments, users, listings, communities, and more. No login required.

Pricing: Pay per usage
Rating: 4.5 (6)
Developer: Peaky Dev
Maintained by Community

Actor stats

  • Bookmarked: 6
  • Total users: 126
  • Monthly active users: 1
  • Last modified: a day ago


Reddit Scraper

Extract posts, comments, communities, and user profiles from Reddit: structured, clean, and ready to use.


What It Does

This scraper lets you pull data from any public Reddit URL or search term. Whether you're tracking discussions, researching communities, or building datasets, it returns organised JSON you can pipe straight into your workflow.

Supported content types:

  • Posts and listing pages
  • Comment threads
  • User profiles
  • Subreddit community pages

Getting Started

  1. Paste one or more Reddit URLs or enter a search term
  2. Choose a scrape type — post, comments, user, listing, or community
  3. Set your result limits
  4. Run and collect your dataset
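The steps above can be sketched as a run input dict, assuming the Apify Python client (`apify-client` package); `<APIFY_TOKEN>` and `<ACTOR_ID>` are placeholders you would fill in from your Apify console.

```python
# Steps 1-3: pick URLs, a scrape type, and result limits.
run_input = {
    "url": ["https://www.reddit.com/r/worldnews/"],  # step 1: URL(s)
    "scrapeType": "listing",                         # step 2: scrape type
    "maxListingPosts": 100,                          # step 3: result limit
}

# Step 4: run the actor and collect the dataset, using the apify-client
# package (placeholders shown; requires an Apify API token):
#
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")
#   run = client.actor("<ACTOR_ID>").call(run_input=run_input)
#   items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
```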

Input URL Examples

  • Subreddit listing: https://www.reddit.com/r/worldnews/
  • Post + comments: https://www.reddit.com/r/learnprogramming/comments/lp1hi4/...
  • User profile: https://www.reddit.com/user/lukaskrivka
  • Search (posts): https://www.reddit.com/search/?q=news
  • Search (users & communities): https://www.reddit.com/search/?q=news&type=sr%2Cuser
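Search URLs like the examples above can be built with the Python standard library; note how the `type=sr,user` filter is percent-encoded to `sr%2Cuser`.

```python
from urllib.parse import urlencode

def reddit_search_url(query, types=None):
    """Build a Reddit search URL; `types` restricts results,
    e.g. ["sr", "user"] for communities and users."""
    params = {"q": query}
    if types:
        params["type"] = ",".join(types)  # the comma is encoded as %2C
    return "https://www.reddit.com/search/?" + urlencode(params)

print(reddit_search_url("news"))
print(reddit_search_url("news", ["sr", "user"]))
```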

Input Parameters

  • url (array): One or more Reddit URLs to scrape
  • searchTerm (string): Keyword to search Reddit. Default: ""
  • searchSort (string): Sort order: relevance, hot, top, new, comments. Default: relevance
  • searchTime (string): Time filter: all, hour, day, week, month, year. Default: all
  • searchSubreddit (string): Limit search to a specific subreddit (no r/ prefix). Default: ""
  • scrapeType (string): What to scrape: post, listing, comments, user, community. Default: post
  • maxPosts (integer): Max posts to return. Default: 1000
  • maxComments (integer): Max comments to return. Default: 1000
  • maxListingPosts (integer): Max posts from a listing page. Default: 1000
  • maxCommunities (integer): Max communities to return. Default: 1000
  • maxUsers (integer): Max user profiles to return. Default: 1000

Use either url or searchTerm, not both at the same time.
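A minimal sketch of input validation against the defaults above; `with_defaults` is a hypothetical helper, not part of the actor, and simply mirrors the parameter table and the url/searchTerm exclusivity rule.

```python
# Defaults copied from the Input Parameters list above.
DEFAULTS = {
    "searchTerm": "", "searchSort": "relevance", "searchTime": "all",
    "searchSubreddit": "", "scrapeType": "post",
    "maxPosts": 1000, "maxComments": 1000, "maxListingPosts": 1000,
    "maxCommunities": 1000, "maxUsers": 1000,
}

def with_defaults(user_input):
    """Merge a partial input with the defaults, enforcing the
    url/searchTerm exclusivity rule and a known scrapeType."""
    if user_input.get("url") and user_input.get("searchTerm"):
        raise ValueError("Use url or searchTerm, not both")
    merged = {**DEFAULTS, **user_input}
    if merged["scrapeType"] not in {"post", "listing", "comments", "user", "community"}:
        raise ValueError(f"unknown scrapeType: {merged['scrapeType']}")
    return merged
```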


Sample Input

{
  "url": [
    "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn_as_a_beginner/"
  ],
  "scrapeType": "post",
  "maxPosts": 1000,
  "maxComments": 1000
}

Sample Output

[
  {
    "contentType": "post",
    "id": "lp1hi4",
    "parseId": "lp1hi4",
    "createdAt": "2021-02-21T17:08:34.853Z",
    "scrapedAt": "2025-11-06T20:39:27.038Z",
    "communityName": "learnprogramming",
    "author": "SadFrodo401",
    "title": "Is Web-Scraping a good skill to learn as a Beginner?",
    "body": "I'm a python beginner...",
    "upvotes": 4,
    "noOfcomments": 12,
    "url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/...",
    "thumbnailUrl": null,
    "isVideo": false,
    "isAd": false,
    "isPinned": false,
    "isOver18": false
  },
  {
    "contentType": "comments",
    "id": "nnegxe8",
    "parseId": "nnegxe8",
    "createdAt": "2025-11-06T11:33:27.000Z",
    "scrapedAt": "2025-11-06T20:46:20.020Z",
    "communityName": "ContagiousLaughter",
    "author": "AutoModerator",
    "body": "Please report this post if...",
    "upvotes": 1,
    "noOfcomments": 0,
    "noOfreplies": 0,
    "url": "https://www.reddit.com/r/ContagiousLaughter/comments/1opwizl/.../nnegxe8/",
    "isVideo": false,
    "isAd": false,
    "isPinned": true,
    "isOver18": false,
    "depth": 0,
    "parentId": "1opwizl",
    "postUrl": "https://www.reddit.com/r/ContagiousLaughter/comments/1opwizl/eddie_murphys_uncle/"
  },
  {
    "id": "worldnews",
    "name": "t5_worldnews",
    "title": "r/worldnews",
    "headerImage": "https://external-preview.redd.it/...",
    "description": "Anyone can view, post, and comment to this community",
    "over18": false,
    "createdAt": "2008-01-24T23:00:00.000Z",
    "scrapedAt": "2025-11-06T21:00:17.080Z",
    "numberOfMembers": 46854095,
    "url": "https://www.reddit.com/r/worldnews/",
    "dataType": "community"
  },
  {
    "id": "4y22fmn1",
    "url": "https://www.reddit.com/user/lukaskrivka/",
    "username": "lukaskrivka",
    "userIcon": "https://styles.redditmedia.com/...",
    "postKarma": 44,
    "commentKarma": 270,
    "over18": false,
    "createdAt": "2020-07-22T10:04:07.000Z",
    "scrapedAt": "2025-11-06T20:39:27.038Z",
    "dataType": "user"
  }
]
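Since one dataset can mix record types, a downloaded run is easy to split by type: posts and comments carry a "contentType" field, while communities and users carry "dataType", as in the sample above. A minimal sketch:

```python
from collections import defaultdict

def group_items(items):
    """Group dataset records by their type marker."""
    groups = defaultdict(list)
    for item in items:
        kind = item.get("contentType") or item.get("dataType") or "unknown"
        groups[kind].append(item)
    return dict(groups)

# Abbreviated records from the sample output:
sample = [
    {"contentType": "post", "id": "lp1hi4"},
    {"contentType": "comments", "id": "nnegxe8"},
    {"dataType": "community", "id": "worldnews"},
    {"dataType": "user", "id": "4y22fmn1"},
]
print({kind: len(records) for kind, records in group_items(sample).items()})
# {'post': 1, 'comments': 1, 'community': 1, 'user': 1}
```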

Resumable Runs

If a run is interrupted — whether by a timeout, error, or manual stop — you can restart it and it will pick up from where it left off rather than starting over.

Progress is tracked at the URL level. Once a URL has been fully scraped, it is recorded as complete. On restart, those URLs are skipped entirely and the scraper picks up on any that were not yet finished.
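The URL-level bookkeeping described above can be illustrated with a small sketch; this is not the actor's internal code, and a local JSON file stands in for whatever persistent store the actor uses.

```python
import json
import os

STATE_FILE = "scrape_progress.json"  # stand-in for a persistent store

def load_completed():
    """Load the set of URLs already fully scraped."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def mark_completed(url, completed):
    """Record a URL as done, persisting after every success."""
    completed.add(url)
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(completed), f)

def run(urls, scrape_url):
    completed = load_completed()
    for url in urls:
        if url in completed:
            continue                    # already fully scraped: skip on restart
        scrape_url(url)                 # may be interrupted mid-run
        mark_completed(url, completed)  # recorded only after full success
```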


Use Cases

  • Market research — monitor brand mentions, product feedback, and community sentiment
  • Academic research — collect discussion data for topic modelling, NLP, or social studies
  • Trend analysis — track what's gaining traction across subreddits over time
  • App development — feed live Reddit data into dashboards, bots, or recommendation engines
  • Content strategy — find high-engagement threads and understand what resonates with an audience

Pricing

  • Free plan: 100 items per month at no cost

Support

Open a ticket on the Issues tab or contact us directly at peakydev00@gmail.com.