Reddit Scraper - Pay Per Result
by SilentFlow
Only pay for what you get. No compute costs, no proxy fees, no surprises. Just results.
Why Pay Per Result?
| Traditional Scraper | Pay Per Result |
|---|---|
| Pay for compute time | Pay only for data |
| Proxy costs extra | Proxies included |
| Failed runs cost money | No charge if it fails |
| Unpredictable costs | Know exactly what you'll pay |
What you get
Every item includes full metadata:
Post
{"id": "t3_abc123","parsedId": "abc123","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/","username": "dev_user","userId": "t2_abc123","title": "Example Post Title","communityName": "r/programming","parsedCommunityName": "programming","body": "Post body text...","html": null,"numberOfComments": 42,"upVotes": 256,"upVoteRatio": 0.95,"isVideo": false,"isAd": false,"over18": false,"flair": "Discussion","link": "https://example.com/article","thumbnailUrl": "https://b.thumbs.redditmedia.com/...","videoUrl": "","imageUrls": ["https://i.redd.it/abc123.jpg"],"createdAt": "2024-06-01T12:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "post"}
Comment
{"id": "t1_xyz789","parsedId": "xyz789","url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/","parentId": "t3_abc123","postId": "abc123","username": "commenter","userId": "t2_xyz789","category": "programming","communityName": "r/programming","body": "Great post!","html": "<div class=\"md\"><p>Great post!</p></div>","createdAt": "2024-06-01T13:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","upVotes": 15,"numberOfreplies": 3,"dataType": "comment"}
Community
{"id": "2fwo","name": "t5_2fwo","title": "Programming","url": "https://www.reddit.com/r/programming/","description": "Computer programming","over18": false,"numberOfMembers": 5800000,"createdAt": "2006-01-25T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "community"}
User
{"id": "abc123","url": "https://www.reddit.com/user/dev_user/","username": "dev_user","description": "Software engineer and open source enthusiast","postKarma": 15000,"commentKarma": 42000,"over18": false,"createdAt": "2020-01-15T00:00:00Z","scrapedAt": "2024-06-02T10:30:00Z","dataType": "user"}
Input parameters
URL scraping
| Parameter | Type | Description |
|---|---|---|
startUrls | array | Reddit URL(s) to scrape (subreddits, posts, users, search pages) |
Supported URL types:
- Subreddits: https://www.reddit.com/r/programming/
- Subreddit channels: https://www.reddit.com/r/programming/hot
- Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
- Users: https://www.reddit.com/user/username
- User comments: https://www.reddit.com/user/username/comments/
- Search: https://www.reddit.com/search/?q=keyword
- Popular: https://www.reddit.com/r/popular/
- Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
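A single run can mix several URL types. Below is a minimal input sketch, written as the `run_input` dict you would pass to the client shown in the Integrations section; the subreddit, username, and limit values are placeholders, not recommendations:

```python
# Hypothetical mix of start URLs: one subreddit feed plus one user profile.
# All values are placeholders; tune the limits to control how many items you pay for.
run_input = {
    "startUrls": [
        {"url": "https://www.reddit.com/r/programming/"},  # subreddit feed
        {"url": "https://www.reddit.com/user/dev_user"},    # user profile
    ],
    "maxItems": 30,      # overall cap on saved (billed) items
    "maxPostCount": 10,  # posts per page/subreddit
    "maxComments": 5,    # comments per post
}
```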
Keyword search
| Parameter | Type | Description |
|---|---|---|
searches | array | Keywords to search on Reddit |
searchCommunityName | string | Restrict search to a specific community |
searchPosts | boolean | Search for posts (default: true) |
searchComments | boolean | Search for comments (default: false) |
searchCommunities | boolean | Search for communities (default: false) |
searchUsers | boolean | Search for users (default: false) |
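To search within a single community, combine `searches` with `searchCommunityName`. A hedged sketch (the keyword and community name are placeholders, and whether the `r/` prefix is expected is not specified here, so verify with a small test run):

```python
# Hypothetical keyword search restricted to one subreddit.
run_input = {
    "searches": ["web scraping"],          # keywords to search
    "searchCommunityName": "programming",  # restrict results to r/programming (format assumed)
    "searchPosts": True,                   # posts on (default)
    "searchComments": False,               # comments off (default)
    "maxItems": 25,
}
```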
Sorting & filtering
| Parameter | Type | Default | Options / Description |
|---|---|---|---|
sort | string | new | relevance, hot, top, new, rising, comments |
time | string | all | all, hour, day, week, month, year |
includeNSFW | boolean | true | Include NSFW content |
postDateLimit | string | - | Only posts after this date (YYYY-MM-DD) |
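These options apply to either input mode. A sketch that keeps only recent, top-voted, SFW posts (the keyword and date are placeholders):

```python
# Hypothetical filter setup: top posts from the last month, no NSFW content,
# and nothing published before the given date.
run_input = {
    "searches": ["machine learning"],
    "sort": "top",
    "time": "month",
    "includeNSFW": False,
    "postDateLimit": "2024-05-01",  # YYYY-MM-DD, placeholder date
    "maxItems": 50,
}
```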
Limits
| Parameter | Type | Default | Description |
|---|---|---|---|
maxItems | integer | 10 | Maximum total items to save |
maxPostCount | integer | 10 | Maximum posts per page/subreddit |
maxComments | integer | 10 | Maximum comments per post |
maxCommunitiesCount | integer | 2 | Maximum communities to scrape |
maxUserCount | integer | 2 | Maximum users to scrape |
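Because billing is per saved item, `maxItems` is effectively your spending cap, while the other limits shape where those items come from. A rough sketch of how the limits combine (the exact precedence is an assumption, so test with small values first):

```python
# Hypothetical limit combination: at most 100 billed items in total,
# drawn from up to 20 posts per subreddit with up to 4 comments each.
run_input = {
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 100,     # hard cap on saved items => max cost 100 * $0.003 = $0.30
    "maxPostCount": 20,
    "maxComments": 4,
}
```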
Skip options
| Parameter | Type | Default | Description |
|---|---|---|---|
skipComments | boolean | false | Skip comment extraction |
skipUserPosts | boolean | false | Skip user post extraction |
skipCommunity | boolean | false | Skip community info extraction |
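If you only want post records, the skip flags keep comment and community items out of the dataset (and off your bill). A minimal sketch using the parameters above:

```python
# Hypothetical posts-only run: skip comments and community records entirely.
run_input = {
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "skipComments": True,
    "skipCommunity": True,
    "maxItems": 20,
}
```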
Advanced
| Parameter | Type | Default | Description |
|---|---|---|---|
scrollTimeout | integer | 40 | Request timeout in seconds |
debugMode | boolean | false | Enable detailed logging |
proxy | object | residential | Proxy configuration (useApifyProxy, apifyProxyGroups) |
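The proxy object uses the keys listed above (`useApifyProxy`, `apifyProxyGroups`). An explicit residential setup, which is already the default, might look like the sketch below; the group names actually available depend on your Apify plan:

```python
# Hypothetical advanced configuration with an explicit proxy object.
run_input = {
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "proxy": {
        "useApifyProxy": True,
        "apifyProxyGroups": ["RESIDENTIAL"],  # assumed group name; check your plan
    },
    "debugMode": True,    # verbose logs while troubleshooting
    "scrollTimeout": 60,  # seconds
}
```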
Pricing
| Item | Cost |
|---|---|
| Per result (post, comment, community, or user) | $0.003 |
| Failed runs (no results) | Free |
| Platform compute | Included |
$3 per 1,000 results. Volume discounts available for higher tiers.
Example costs:
- 100 posts = $0.30
- 50 posts + 200 comments = $0.75
- 1,000 posts + 5,000 comments = $18.00
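To budget a run up front, multiply the expected item count by the per-result price. A trivial estimator, assuming the $0.003 rate quoted above:

```python
# Estimate the cost of a run before launching it, at $0.003 per saved item.
PRICE_PER_RESULT = 0.003

def estimate_cost(posts: int, comments: int = 0, communities: int = 0, users: int = 0) -> float:
    """Every saved item is billed the same, regardless of its type."""
    return (posts + comments + communities + users) * PRICE_PER_RESULT

print(f"${estimate_cost(100):.2f}")                       # $0.30
print(f"${estimate_cost(50, comments=200):.2f}")          # $0.75
print(f"${estimate_cost(1000, comments=5000):.2f}")       # $18.00
```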
Examples
Scrape a subreddit
{"startUrls": [{"url": "https://www.reddit.com/r/programming/"}],"maxItems": 50,"maxPostCount": 20,"maxComments": 10,"sort": "hot"}
Search for a keyword
{"searches": ["machine learning"],"searchPosts": true,"searchCommunities": true,"sort": "top","time": "month","maxItems": 100}
Scrape a specific post with comments
{"startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],"maxComments": 50}
Integrations
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("silentflow/reddit-scraper-ppr").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "maxPostCount": 20,
    "maxComments": 10,
    "sort": "hot",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
```
JavaScript
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('silentflow/reddit-scraper-ppr').call({
    searches: ['web scraping'],
    searchPosts: true,
    sort: 'top',
    time: 'week',
    maxItems: 100,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});
```
Tips for best results
- Use `maxItems` wisely: Only request what you need - you pay per result
- Apply filters: Use `skipComments` or `searchCommunityName` to reduce unnecessary results
- Be specific: Target specific subreddits instead of broad searches
- Test first: Try with `maxItems: 5` to verify your setup before large scrapes
FAQ
Q: How does Pay Per Result billing work? A: You are charged $0.003 for each item (post, comment, community, or user) that is successfully scraped and saved to the dataset. If the scraper finds no results, you are not charged.
Q: Is there a minimum charge? A: No. You only pay for actual results. A run with zero results costs nothing.
Q: Can I set a budget limit? A: Yes, use the maxItems parameter to control exactly how many results (and thus your maximum cost) per run.
Q: What's the difference from the standard version? A: The standard version charges based on Apify platform compute usage. PPR charges per result instead, making costs more predictable.
Q: What if the run fails? A: You pay nothing. Failed runs don't charge you.
Q: Are proxy costs included? A: Yes! Residential proxies are included in the per-result price.
When to use PPR vs Standard
| Use PPR when... | Use Standard when... |
|---|---|
| You need predictable costs | You're scraping thousands of items |
| You want guaranteed results | You have Apify compute credits |
| You're testing or prototyping | You're running scheduled jobs |
| Budget is per-result focused | Budget is time-based |
Data fields
| Category | Fields |
|---|---|
| Identity | id, parsedId, url, username, userId |
| Content | title, body, html, flair |
| Community | communityName, parsedCommunityName, category |
| Engagement | upVotes, upVoteRatio, numberOfComments, numberOfreplies |
| Media | imageUrls, videoUrl, thumbnailUrl, link |
| Flags | isVideo, isAd, over18 |
| Meta | createdAt, scrapedAt, dataType |
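Because every item carries a dataType field, one dataset can hold posts, comments, communities, and users together, and you can split them back apart when you read the results. A short sketch reusing the client calls from the Integrations example (the input values are placeholders):

```python
from collections import defaultdict
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("silentflow/reddit-scraper-ppr").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],  # placeholder subreddit
    "maxItems": 20,
})

# Group every dataset item by its dataType field (post / comment / community / user).
items_by_type = defaultdict(list)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    items_by_type[item["dataType"]].append(item)

# Print a quick count per item type, e.g. {'post': 15, 'comment': 5}.
print({data_type: len(items) for data_type, items in items_by_type.items()})
```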
Support
Need help? We're here for you:
- Bug reports: Open an issue on the actor page
- Questions: Message us via Apify console
- Feature requests: Let us know what you need
- Custom solutions: Contact us for enterprise integrations or high-volume needs
Check out our other scrapers: SilentFlow on Apify
Remember: You only pay for results. No results = no charge.