Reddit Scraper PPR
Pricing: from $2.30 / 1,000 results
Reddit scraper. Only pay for results returned: no compute costs, no proxy fees. Scrape posts, comments, communities, and users without logging in. No charge for failed runs or empty results. Predictable pricing, guaranteed data.
Developer: SilentFlow
Reddit Scraper - Pay Per Result
Pay only for the data you get! Proxies included, no compute costs.
✨ Why use this scraper?
- 💰 Pay per result: No compute costs, only pay for the data you actually get
- 🌐 Proxies included: No need to configure or pay for proxies separately
- 🔓 No login required: Scrape all public Reddit data without authentication
- 📦 4 data types: Posts, comments, communities, and users with full metadata
- 🔍 Search & URL modes: Scrape specific URLs or search Reddit by keywords
🎯 Use cases
| Industry | Application |
|---|---|
| Market research | Monitor brand mentions and sentiment across subreddits |
| Content analysis | Analyze trending topics and community discussions |
| Academic research | Study online communities, opinions, and user behavior |
| Competitive intelligence | Track competitor discussions and product feedback |
| Trend monitoring | Identify emerging trends before they hit mainstream |
📥 Input parameters
URL scraping
| Parameter | Type | Description |
|---|---|---|
| startUrls | array | Reddit URL(s) to scrape (subreddits, posts, users, search pages) |
Supported URL types:
- Subreddits: https://www.reddit.com/r/programming/
- Subreddit sort: https://www.reddit.com/r/programming/hot
- Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
- Users: https://www.reddit.com/user/username
- User comments: https://www.reddit.com/user/username/comments/
- Search: https://www.reddit.com/search/?q=keyword
- Popular: https://www.reddit.com/r/popular/
- Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
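If you build start-URL lists programmatically, it can help to sanity-check which of the supported types each URL falls into before submitting a run. A minimal sketch (the `classify_reddit_url` helper is hypothetical, not part of the actor) covering a few of the types listed above:

```python
import re

# Patterns are checked in order, so the more specific ones come first.
URL_PATTERNS = [
    ("post", re.compile(r"reddit\.com/r/[^/]+/comments/")),
    ("user_comments", re.compile(r"reddit\.com/user/[^/]+/comments/")),
    ("user", re.compile(r"reddit\.com/user/[^/]+/?$")),
    ("search", re.compile(r"reddit\.com/search/")),
    ("leaderboard", re.compile(r"reddit\.com/subreddits/leaderboard/")),
    ("subreddit_sort", re.compile(r"reddit\.com/r/[^/]+/(hot|new|top|rising)/?$")),
    ("subreddit", re.compile(r"reddit\.com/r/[^/]+/?$")),
]

def classify_reddit_url(url: str) -> str:
    """Return a label for the first URL type that matches, or 'unknown'."""
    for label, pattern in URL_PATTERNS:
        if pattern.search(url):
            return label
    return "unknown"
```

This is only a rough pre-flight check; the actor itself decides how to handle each URL.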
Search
| Parameter | Type | Description |
|---|---|---|
| searches | array | Keywords to search on Reddit |
| searchCommunityName | string | Restrict search to a specific subreddit (e.g. programming) |
| searchTypes | array | Types of results: posts, communities, users (default: posts) |
Sorting & filtering
| Parameter | Type | Default | Description |
|---|---|---|---|
| sort | string | new | Sort by: relevance, hot, top, new, rising, comments |
| time | string | all | Time filter: all, hour, day, week, month, year |
| includeNSFW | boolean | true | Include adult/NSFW content |
| postDateLimit | string | - | Return only posts created after this date (YYYY-MM-DD) |
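The actor applies `postDateLimit` during the run, but the same cut can be reproduced client-side on the `createdAt` field of the output if you need to re-filter already-scraped items. A minimal sketch (assumes the ISO-8601 `createdAt` format shown in the output examples, and treats the limit as inclusive):

```python
from datetime import datetime, timezone

def filter_by_date(items, date_limit: str):
    """Keep items whose createdAt falls on or after date_limit (YYYY-MM-DD, UTC)."""
    cutoff = datetime.strptime(date_limit, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    kept = []
    for item in items:
        # createdAt uses a trailing "Z"; normalize it for fromisoformat.
        created = datetime.fromisoformat(item["createdAt"].replace("Z", "+00:00"))
        if created >= cutoff:
            kept.append(item)
    return kept
```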
Options & limits
| Parameter | Type | Default | Description |
|---|---|---|---|
| includeComments | boolean | true | Also scrape comments when visiting posts |
| maxItems | integer | 50 | Maximum total items to return |
📊 Output data
Post example
```json
{
  "id": "t3_abc123",
  "parsedId": "abc123",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/",
  "username": "dev_user",
  "userId": "t2_abc123",
  "title": "Example Post Title",
  "communityName": "r/programming",
  "parsedCommunityName": "programming",
  "body": "Post body text...",
  "html": null,
  "numberOfComments": 42,
  "upVotes": 256,
  "upVoteRatio": 0.95,
  "isVideo": false,
  "isAd": false,
  "over18": false,
  "flair": "Discussion",
  "link": "https://example.com/article",
  "thumbnailUrl": "https://b.thumbs.redditmedia.com/...",
  "videoUrl": "",
  "imageUrls": ["https://i.redd.it/abc123.jpg"],
  "createdAt": "2024-06-01T12:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "post"
}
```
Comment example
```json
{
  "id": "t1_xyz789",
  "parsedId": "xyz789",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/",
  "parentId": "t3_abc123",
  "postId": "abc123",
  "username": "commenter",
  "userId": "t2_xyz789",
  "category": "programming",
  "communityName": "r/programming",
  "body": "Great post!",
  "html": "<div class=\"md\"><p>Great post!</p></div>",
  "createdAt": "2024-06-01T13:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "upVotes": 15,
  "numberOfreplies": 3,
  "dataType": "comment"
}
```
Community example
```json
{
  "id": "2fwo",
  "name": "t5_2fwo",
  "title": "Programming",
  "url": "https://www.reddit.com/r/programming/",
  "description": "Computer programming",
  "over18": false,
  "numberOfMembers": 5800000,
  "createdAt": "2006-01-25T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "community"
}
```
User example
```json
{
  "id": "abc123",
  "url": "https://www.reddit.com/user/dev_user/",
  "username": "dev_user",
  "description": "Software engineer and open source enthusiast",
  "postKarma": 15000,
  "commentKarma": 42000,
  "over18": false,
  "createdAt": "2020-01-15T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "user"
}
```
🗂️ Data fields
| Category | Fields |
|---|---|
| Identity | id, parsedId, url, username, userId |
| Content | title, body, html, flair |
| Community | communityName, parsedCommunityName, category |
| Engagement | upVotes, upVoteRatio, numberOfComments, numberOfreplies |
| Media | imageUrls, videoUrl, thumbnailUrl, link |
| Flags | isVideo, isAd, over18 |
| Meta | createdAt, scrapedAt, dataType |
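Because every item carries a `dataType` field, a mixed dataset (posts plus their comments, say) can be split into per-type buckets before analysis. A small sketch of that post-processing step (the `group_by_data_type` helper is illustrative, not part of the actor):

```python
from collections import defaultdict

def group_by_data_type(items):
    """Bucket dataset items by their dataType field (post, comment, community, user)."""
    groups = defaultdict(list)
    for item in items:
        groups[item.get("dataType", "unknown")].append(item)
    return dict(groups)
```

From the returned dict you can, for example, average `upVotes` over `groups["post"]` without comments skewing the numbers.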
🚀 Examples
Scrape a subreddit
```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
  "maxItems": 50,
  "sort": "hot"
}
```
Search for a keyword
```json
{
  "searches": ["machine learning"],
  "searchTypes": ["posts", "communities"],
  "sort": "top",
  "time": "month",
  "maxItems": 100
}
```
Scrape a post with comments
```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],
  "includeComments": true,
  "maxItems": 100
}
```
Search within a community
```json
{
  "searches": ["python"],
  "searchCommunityName": "programming",
  "sort": "new",
  "maxItems": 50
}
```
Get recent posts only
```json
{
  "startUrls": [{"url": "https://www.reddit.com/r/technology/"}],
  "postDateLimit": "2026-03-01",
  "includeComments": false,
  "maxItems": 200
}
```
💻 Integrations
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("silentflow/reddit-scraper-ppr").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "sort": "hot",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
    elif item["dataType"] == "comment":
        print(f"  > {item['body'][:80]}")
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('silentflow/reddit-scraper-ppr').call({
    searches: ['web scraping'],
    searchTypes: ['posts'],
    sort: 'top',
    time: 'week',
    maxItems: 100,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});
```
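Once a run has finished, the items are often easiest to share as a spreadsheet. A minimal sketch (the `posts_to_csv` helper is hypothetical; column names assume the post fields shown in the output examples above) that writes post items to a CSV file:

```python
import csv

def posts_to_csv(items, path):
    """Write post items to a CSV with a few commonly used columns."""
    fields = ["title", "communityName", "upVotes", "numberOfComments", "url"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        # extrasaction="ignore" drops fields not listed in `fields`.
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        for item in items:
            if item.get("dataType") == "post":
                writer.writerow(item)
    return path
```

The Apify platform can also export datasets to CSV directly; this is only useful when you want custom columns or local post-processing.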
📈 Performance & limits
| Metric | Value |
|---|---|
| Items per request | up to 100 |
| Average speed | ~50 items/second |
| Max items per run | 10,000 |
| Supported content | Posts, Comments, Communities, Users |
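Given the listed price (from $2.30 per 1,000 results) and the approximate throughput (~50 items/second), a run's cost and duration can be estimated up front from `maxItems`. A rough back-of-the-envelope sketch (the helper is illustrative; actual speed varies with content type and Reddit's responsiveness):

```python
PRICE_PER_1000 = 2.30   # USD, from the listing's "from $2.30 / 1,000 results"
ITEMS_PER_SECOND = 50   # approximate throughput from the performance table

def estimate_run(max_items: int):
    """Rough (cost in USD, duration in seconds) for a run returning max_items results."""
    cost = max_items / 1000 * PRICE_PER_1000
    duration = max_items / ITEMS_PER_SECOND
    return round(cost, 2), round(duration, 1)
```

For example, a full 10,000-item run would come to roughly $23 and a few minutes of runtime.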
💡 Tips for best results
- Use `maxItems` wisely: Only request what you need, since you pay per result
- Target specific subreddits: Focused scraping gives cleaner, more relevant data
- Disable comments when not needed: Set `includeComments: false` to reduce result count
- Test first: Try with `maxItems: 10` to verify your setup before large scrapes
- Combine search types: Use `searchTypes: ["posts", "communities"]` to find both discussions and relevant subreddits
❓ FAQ
Q: Can I scrape private subreddits?
A: No, this scraper only accesses publicly available data.

Q: What's the difference from the standard version?
A: The standard version charges based on Apify platform compute usage. Pay Per Result charges per item instead, with proxies included.

Q: Can I set a budget limit?
A: Yes, use `maxItems` to cap both the number of results and your maximum cost per run.

Q: What if the run finds no results?
A: You pay nothing. No results means no charge.

Q: What happens if Reddit is temporarily unavailable?
A: The scraper automatically retries. If all attempts fail, try again later.
📬 Support
We're building this scraper for you, and your feedback makes it better for everyone!
- 💡 Need a feature? Tell us what's missing and we'll prioritize it
- ⚙️ Custom solutions: Contact us for enterprise integrations or high-volume needs
Check out our other scrapers: SilentFlow on Apify
