Reddit Scraper

Pricing: $19.99/month + usage
Developer: SilentFlow (Maintained by Community)

Scrape Reddit posts, comments, communities, and users without login. Extract data from any subreddit, search result, or user profile.

✨ Why use this scraper?

  • 🔓 No login required: Scrape all public Reddit data without authentication
  • 📦 4 data types: Posts, comments, communities, and users with full metadata
  • 🔍 Search & URL modes: Scrape specific URLs or search Reddit by keywords
  • 🎛️ Smart filtering: Sort by hot/new/top, filter by date, include/exclude NSFW
  • ⚡ High reliability: Automatic retries and residential proxy support

🎯 Use cases

| Industry | Application |
| --- | --- |
| Market research | Monitor brand mentions and sentiment across subreddits |
| Content analysis | Analyze trending topics and community discussions |
| Academic research | Study online communities, opinions, and user behavior |
| Competitive intelligence | Track competitor discussions and product feedback |
| Trend monitoring | Identify emerging trends before they hit mainstream |

📥 Input parameters

URL scraping

| Parameter | Type | Description |
| --- | --- | --- |
| startUrls | array | Reddit URL(s) to scrape (subreddits, posts, users, search pages) |

Supported URL types:

  • Subreddits: https://www.reddit.com/r/programming/
  • Subreddit sort: https://www.reddit.com/r/programming/hot
  • Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
  • Users: https://www.reddit.com/user/username
  • User comments: https://www.reddit.com/user/username/comments/
  • Search: https://www.reddit.com/search/?q=keyword
  • Popular: https://www.reddit.com/r/popular/
  • Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
Keyword search

| Parameter | Type | Description |
| --- | --- | --- |
| searches | array | Keywords to search on Reddit |
| searchCommunityName | string | Restrict search to a specific subreddit (e.g. programming) |
| searchTypes | array | Types of results: posts, communities, users (default: posts) |

Sorting & filtering

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| sort | string | new | Sort by: relevance, hot, top, new, rising, comments |
| time | string | all | Time filter: all, hour, day, week, month, year |
| includeNSFW | boolean | true | Include adult/NSFW content |
| postDateLimit | string | - | Only return posts created after this date (YYYY-MM-DD) |
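Since postDateLimit takes a plain YYYY-MM-DD string, it can be derived from a rolling window when you monitor recent content. A minimal sketch using only the Python standard library; the recent_post_date_limit helper and the run_input dict are illustrative, not part of the Actor:

```python
from datetime import date, timedelta

def recent_post_date_limit(days_back):
    """Return the YYYY-MM-DD date `days_back` days before today,
    in the format the postDateLimit parameter expects."""
    return (date.today() - timedelta(days=days_back)).isoformat()

# Example input for "posts from the last 30 days", newest first
run_input = {
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "sort": "new",
    "postDateLimit": recent_post_date_limit(30),
}
```

Pairing a rolling postDateLimit with sort "new" keeps repeated monitoring runs small and cheap.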

Options & limits

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| includeComments | boolean | true | Also scrape comments when visiting posts |
| maxItems | integer | 50 | Maximum total items to return |

📊 Output data

Post example

```json
{
  "id": "t3_abc123",
  "parsedId": "abc123",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/",
  "username": "dev_user",
  "userId": "t2_abc123",
  "title": "Example Post Title",
  "communityName": "r/programming",
  "parsedCommunityName": "programming",
  "body": "Post body text...",
  "html": null,
  "numberOfComments": 42,
  "upVotes": 256,
  "upVoteRatio": 0.95,
  "isVideo": false,
  "isAd": false,
  "over18": false,
  "flair": "Discussion",
  "link": "https://example.com/article",
  "thumbnailUrl": "https://b.thumbs.redditmedia.com/...",
  "videoUrl": "",
  "imageUrls": ["https://i.redd.it/abc123.jpg"],
  "createdAt": "2024-06-01T12:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "post"
}
```

Comment example

```json
{
  "id": "t1_xyz789",
  "parsedId": "xyz789",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/",
  "parentId": "t3_abc123",
  "postId": "abc123",
  "username": "commenter",
  "userId": "t2_xyz789",
  "category": "programming",
  "communityName": "r/programming",
  "body": "Great post!",
  "html": "<div class=\"md\"><p>Great post!</p></div>",
  "createdAt": "2024-06-01T13:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "upVotes": 15,
  "numberOfreplies": 3,
  "dataType": "comment"
}
```

Community example

```json
{
  "id": "2fwo",
  "name": "t5_2fwo",
  "title": "Programming",
  "url": "https://www.reddit.com/r/programming/",
  "description": "Computer programming",
  "over18": false,
  "numberOfMembers": 5800000,
  "createdAt": "2006-01-25T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "community"
}
```

User example

```json
{
  "id": "abc123",
  "url": "https://www.reddit.com/user/dev_user/",
  "username": "dev_user",
  "description": "Software engineer and open source enthusiast",
  "postKarma": 15000,
  "commentKarma": 42000,
  "over18": false,
  "createdAt": "2020-01-15T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "user"
}
```

🗂️ Data fields

| Category | Fields |
| --- | --- |
| Identity | id, parsedId, url, username, userId |
| Content | title, body, html, flair |
| Community | communityName, parsedCommunityName, category |
| Engagement | upVotes, upVoteRatio, numberOfComments, numberOfreplies |
| Media | imageUrls, videoUrl, thumbnailUrl, link |
| Flags | isVideo, isAd, over18 |
| Meta | createdAt, scrapedAt, dataType |
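Because every item carries a dataType field, a single run that mixes posts, comments, communities, and users can be split apart client-side. A small sketch; the group_by_data_type helper is illustrative, not part of the Apify client:

```python
from collections import defaultdict

def group_by_data_type(items):
    """Bucket dataset items by their dataType field
    (post, comment, community, or user)."""
    buckets = defaultdict(list)
    for item in items:
        buckets[item.get("dataType", "unknown")].append(item)
    return dict(buckets)

# Sample items shaped like the output examples above
sample = [
    {"dataType": "post", "title": "Example Post Title", "upVotes": 256},
    {"dataType": "comment", "body": "Great post!", "upVotes": 15},
    {"dataType": "post", "title": "Another Post", "upVotes": 10},
]
grouped = group_by_data_type(sample)
```

This keeps downstream processing simple: each bucket has a uniform schema, so you can feed posts and comments into separate pipelines.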

🚀 Examples

Scrape a subreddit

```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
  "maxItems": 50,
  "sort": "hot"
}
```

Search for a keyword

```json
{
  "searches": ["machine learning"],
  "searchTypes": ["posts", "communities"],
  "sort": "top",
  "time": "month",
  "maxItems": 100
}
```

Scrape a post with comments

```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],
  "includeComments": true,
  "maxItems": 100
}
```

Search within a community

```json
{
  "searches": ["python"],
  "searchCommunityName": "programming",
  "sort": "new",
  "maxItems": 50
}
```

Get recent posts only

```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/technology/"}],
  "postDateLimit": "2026-03-01",
  "includeComments": false,
  "maxItems": 200
}
```
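postDateLimit filters at scrape time; if you want to re-check dates on items you already have, the createdAt timestamps can be filtered client-side. A sketch assuming the ISO format shown in the output examples; the posts_created_after helper is hypothetical, not part of the Actor:

```python
from datetime import datetime, timezone

def posts_created_after(items, cutoff_iso):
    """Keep post items whose createdAt (e.g. "2024-06-01T12:00:00Z")
    falls strictly after the given YYYY-MM-DD cutoff."""
    cutoff = datetime.fromisoformat(cutoff_iso).replace(tzinfo=timezone.utc)
    return [
        item for item in items
        if item.get("dataType") == "post"
        and datetime.fromisoformat(item["createdAt"].replace("Z", "+00:00")) > cutoff
    ]

sample = [
    {"dataType": "post", "createdAt": "2024-06-01T12:00:00Z", "title": "fresh"},
    {"dataType": "post", "createdAt": "2024-01-01T12:00:00Z", "title": "stale"},
    {"dataType": "comment", "createdAt": "2024-06-01T13:00:00Z"},
]
recent = posts_created_after(sample, "2024-03-01")
```

Note that comments are dropped here because only posts carry a meaningful publication-date filter in this workflow; adjust the dataType check if you need comments too.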

💻 Integrations

Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("silentflow/reddit-scraper").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "sort": "hot",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
    elif item["dataType"] == "comment":
        print(f"  > {item['body'][:80]}")
```

JavaScript

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('silentflow/reddit-scraper').call({
    searches: ['web scraping'],
    searchTypes: ['posts'],
    sort: 'top',
    time: 'week',
    maxItems: 100,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});
```

📈 Performance & limits

| Metric | Value |
| --- | --- |
| Items per request | up to 100 |
| Average speed | ~50 items/second |
| Max items per run | 10,000 |
| Supported content | Posts, Comments, Communities, Users |

💡 Tips for best results

  1. Target specific subreddits: Focused scraping gives cleaner, more relevant data
  2. Start small: Test with maxItems: 10 before running large scrapes
  3. Use date filters: Combine postDateLimit with sort "new" for recent content monitoring
  4. Disable comments when not needed: Set includeComments: false to speed up subreddit scraping
  5. Combine search types: Use searchTypes: ["posts", "communities"] to find both discussions and relevant subreddits
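When tips 2 and 3 are combined into repeated monitoring runs, the same post can show up in several datasets. A simple merge-and-dedupe sketch keyed on the id field from the output schema; the dedupe_by_id helper is illustrative, not part of the client library:

```python
def dedupe_by_id(items):
    """Keep the first occurrence of each item, keyed on its
    Reddit fullname id (e.g. "t3_abc123"), dropping repeats
    collected across overlapping runs."""
    seen = set()
    unique = []
    for item in items:
        if item["id"] not in seen:
            seen.add(item["id"])
            unique.append(item)
    return unique

merged = dedupe_by_id([
    {"id": "t3_abc123", "title": "Example Post Title"},
    {"id": "t3_def456", "title": "Another Post"},
    {"id": "t3_abc123", "title": "Example Post Title"},  # repeat from a later run
])
```

Keeping the first occurrence preserves the earliest scrapedAt snapshot; reverse the input if you prefer the freshest vote counts.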

โ“ FAQ

Q: Can I scrape private subreddits? A: No, this scraper only accesses publicly available data.

Q: Why are some posts missing? A: Reddit may filter certain posts. NSFW content is included by default but can be toggled with includeNSFW.

Q: How often can I run the scraper? A: No limits on run frequency. The scraper handles rate limiting automatically.

Q: What happens if Reddit is temporarily unavailable? A: The scraper automatically retries. If all attempts fail, try again later.

📬 Support

We're building this scraper for you, and your feedback makes it better for everyone!

  • 💡 Need a feature? Tell us what's missing and we'll prioritize it
  • ⚙️ Custom solutions: Contact us for enterprise integrations or high-volume needs

Check out our other scrapers: SilentFlow on Apify