Reddit Scraper Ppr

Pricing

from $2.30 / 1,000 results

Reddit scraper. Only pay for results returned - no compute costs, no proxy fees. Scrape posts, comments, communities, and users without login. No charge for failed runs or empty results. Predictable pricing, guaranteed data.


Developer

SilentFlow (Maintained by Community)


Reddit Scraper - Pay Per Result

Pay only for the data you get! Proxies included, no compute costs.

✨ Why use this scraper?

  • 💰 Pay per result: No compute costs, only pay for the data you actually get
  • 🌐 Proxies included: No need to configure or pay for proxies separately
  • 🔓 No login required: Scrape all public Reddit data without authentication
  • 📦 4 data types: Posts, comments, communities, and users with full metadata
  • 🔍 Search & URL modes: Scrape specific URLs or search Reddit by keywords

🎯 Use cases

  • Market research: Monitor brand mentions and sentiment across subreddits
  • Content analysis: Analyze trending topics and community discussions
  • Academic research: Study online communities, opinions, and user behavior
  • Competitive intelligence: Track competitor discussions and product feedback
  • Trend monitoring: Identify emerging trends before they hit mainstream

📥 Input parameters

URL scraping

  • startUrls (array): Reddit URL(s) to scrape (subreddits, posts, users, search pages)

Supported URL types:

  • Subreddits: https://www.reddit.com/r/programming/
  • Subreddit sort: https://www.reddit.com/r/programming/hot
  • Posts: https://www.reddit.com/r/learnprogramming/comments/abc123/...
  • Users: https://www.reddit.com/user/username
  • User comments: https://www.reddit.com/user/username/comments/
  • Search: https://www.reddit.com/search/?q=keyword
  • Popular: https://www.reddit.com/r/popular/
  • Leaderboards: https://www.reddit.com/subreddits/leaderboard/crypto/
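As a quick sanity check before building startUrls, the URL shapes listed above can be matched with a small helper. This is a hypothetical input-prep sketch, not part of the Actor, and it covers only a few of the supported types:

```python
import re

# Hypothetical helper (not part of the Actor): classify a URL against a
# few of the supported shapes listed above before adding it to startUrls.
PATTERNS = {
    "subreddit": r"^https://www\.reddit\.com/r/[^/]+/?$",
    "post": r"^https://www\.reddit\.com/r/[^/]+/comments/",
    "user": r"^https://www\.reddit\.com/user/[^/]+",
    "search": r"^https://www\.reddit\.com/search/",
}

def classify(url: str) -> str:
    """Return the first matching URL type, or 'unknown'."""
    for kind, pattern in PATTERNS.items():
        if re.match(pattern, url):
            return kind
    return "unknown"

print(classify("https://www.reddit.com/r/programming/"))  # subreddit
print(classify("https://www.reddit.com/user/username"))   # user
```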
Keyword search

  • searches (array): Keywords to search on Reddit
  • searchCommunityName (string): Restrict search to a specific subreddit (e.g. programming)
  • searchTypes (array): Types of results to return: posts, communities, users (default: posts)

Sorting & filtering

  • sort (string, default: new): Sort by relevance, hot, top, new, rising, or comments
  • time (string, default: all): Time filter: all, hour, day, week, month, year
  • includeNSFW (boolean, default: true): Include adult/NSFW content
  • postDateLimit (string, no default): Only return posts after this date (YYYY-MM-DD)

Options & limits

  • includeComments (boolean, default: true): Also scrape comments when visiting posts
  • maxItems (integer, default: 50): Maximum total items to return
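Because billing is per result and maxItems caps the result count, the worst-case cost of a run is simple arithmetic. A sketch using the listed rate of $2.30 per 1,000 results:

```python
# Sketch: upper-bound the cost of a run from maxItems at the listed rate
# of $2.30 per 1,000 results. You pay only for results actually returned,
# so the real cost can be lower (and zero for an empty run).
def max_run_cost(max_items: int, price_per_1000: float = 2.30) -> float:
    return max_items * price_per_1000 / 1000

print(f"${max_run_cost(200):.2f}")  # a 200-item cap costs at most $0.46
```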

📊 Output data

Post example

{
  "id": "t3_abc123",
  "parsedId": "abc123",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/",
  "username": "dev_user",
  "userId": "t2_abc123",
  "title": "Example Post Title",
  "communityName": "r/programming",
  "parsedCommunityName": "programming",
  "body": "Post body text...",
  "html": null,
  "numberOfComments": 42,
  "upVotes": 256,
  "upVoteRatio": 0.95,
  "isVideo": false,
  "isAd": false,
  "over18": false,
  "flair": "Discussion",
  "link": "https://example.com/article",
  "thumbnailUrl": "https://b.thumbs.redditmedia.com/...",
  "videoUrl": "",
  "imageUrls": ["https://i.redd.it/abc123.jpg"],
  "createdAt": "2024-06-01T12:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "post"
}

Comment example

{
  "id": "t1_xyz789",
  "parsedId": "xyz789",
  "url": "https://www.reddit.com/r/programming/comments/abc123/example_post/xyz789/",
  "parentId": "t3_abc123",
  "postId": "abc123",
  "username": "commenter",
  "userId": "t2_xyz789",
  "category": "programming",
  "communityName": "r/programming",
  "body": "Great post!",
  "html": "<div class=\"md\"><p>Great post!</p></div>",
  "createdAt": "2024-06-01T13:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "upVotes": 15,
  "numberOfreplies": 3,
  "dataType": "comment"
}

Community example

{
  "id": "2fwo",
  "name": "t5_2fwo",
  "title": "Programming",
  "url": "https://www.reddit.com/r/programming/",
  "description": "Computer programming",
  "over18": false,
  "numberOfMembers": 5800000,
  "createdAt": "2006-01-25T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "community"
}

User example

{
  "id": "abc123",
  "url": "https://www.reddit.com/user/dev_user/",
  "username": "dev_user",
  "description": "Software engineer and open source enthusiast",
  "postKarma": 15000,
  "commentKarma": 42000,
  "over18": false,
  "createdAt": "2020-01-15T00:00:00Z",
  "scrapedAt": "2024-06-02T10:30:00Z",
  "dataType": "user"
}

🗂️ Data fields

  • Identity: id, parsedId, url, username, userId
  • Content: title, body, html, flair
  • Community: communityName, parsedCommunityName, category
  • Engagement: upVotes, upVoteRatio, numberOfComments, numberOfreplies
  • Media: imageUrls, videoUrl, thumbnailUrl, link
  • Flags: isVideo, isAd, over18
  • Meta: createdAt, scrapedAt, dataType
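Because every record carries a dataType field, a mixed dataset can be split into per-type buckets after download. A minimal sketch using stand-in records shaped like the examples above:

```python
from collections import defaultdict

# Minimal sketch: group mixed dataset records by their dataType field.
# The dicts below are stand-ins for items returned by the Actor.
items = [
    {"dataType": "post", "title": "Example Post Title", "upVotes": 256},
    {"dataType": "comment", "body": "Great post!", "upVotes": 15},
    {"dataType": "post", "title": "Another post", "upVotes": 12},
]

buckets = defaultdict(list)
for item in items:
    buckets[item["dataType"]].append(item)

print({kind: len(records) for kind, records in buckets.items()})
# {'post': 2, 'comment': 1}
```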

🚀 Examples

Scrape a subreddit

{
  "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
  "maxItems": 50,
  "sort": "hot"
}

Search for a keyword

{
  "searches": ["machine learning"],
  "searchTypes": ["posts", "communities"],
  "sort": "top",
  "time": "month",
  "maxItems": 100
}

Scrape a post with comments

{
  "startUrls": [{"url": "https://www.reddit.com/r/learnprogramming/comments/lp1hi4/is_webscraping_a_good_skill_to_learn/"}],
  "includeComments": true,
  "maxItems": 100
}

Search within a community

{
  "searches": ["python"],
  "searchCommunityName": "programming",
  "sort": "new",
  "maxItems": 50
}

Get recent posts only

{
  "startUrls": [{"url": "https://www.reddit.com/r/technology/"}],
  "postDateLimit": "2026-03-01",
  "includeComments": false,
  "maxItems": 200
}

💻 Integrations

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Start the Actor and wait for the run to finish
run = client.actor("silentflow/reddit-scraper-ppr").call(run_input={
    "startUrls": [{"url": "https://www.reddit.com/r/programming/"}],
    "maxItems": 50,
    "sort": "hot"
})

# Iterate over the results in the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    if item["dataType"] == "post":
        print(f"[{item['upVotes']}] {item['title']}")
    elif item["dataType"] == "comment":
        print(f"  > {item['body'][:80]}")
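A common next step after iterating over results is saving posts to CSV. A sketch using Python's standard csv module with a few field names taken from the post output example (the records below are stand-ins, not live data):

```python
import csv

# Sketch: write post records shaped like the output example above to CSV.
# Field names come from the post example; the records are stand-ins.
posts = [
    {"title": "Example Post Title", "upVotes": 256,
     "communityName": "r/programming", "createdAt": "2024-06-01T12:00:00Z"},
]

fields = ["title", "upVotes", "communityName", "createdAt"]
with open("posts.csv", "w", newline="", encoding="utf-8") as f:
    # extrasaction="ignore" skips any extra fields a record may carry
    writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(posts)
```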

JavaScript

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

// Start the Actor and wait for the run to finish
const run = await client.actor('silentflow/reddit-scraper-ppr').call({
    searches: ['web scraping'],
    searchTypes: ['posts'],
    sort: 'top',
    time: 'week',
    maxItems: 100
});

// List the results from the run's default dataset
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => {
    if (item.dataType === 'post') {
        console.log(`[${item.upVotes}] ${item.title}`);
    }
});

📈 Performance & limits

  • Items per request: up to 100
  • Average speed: ~50 items/second
  • Max items per run: 10,000
  • Supported content: posts, comments, communities, users
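At the quoted average of ~50 items/second, the wall-clock time of a run can be roughly estimated from its item cap. A back-of-envelope sketch; actual throughput varies with content type and load:

```python
# Back-of-envelope runtime estimate from the quoted ~50 items/second
# average speed; actual throughput varies.
def est_seconds(max_items: int, items_per_second: float = 50.0) -> float:
    return max_items / items_per_second

print(est_seconds(10_000))  # 200.0 seconds for a maximum-size run
```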

💡 Tips for best results

  1. Use maxItems wisely: Only request what you need; you pay per result
  2. Target specific subreddits: Focused scraping gives cleaner, more relevant data
  3. Disable comments when not needed: Set includeComments: false to reduce result count
  4. Test first: Try with maxItems: 10 to verify your setup before large scrapes
  5. Combine search types: Use searchTypes: ["posts", "communities"] to find both discussions and relevant subreddits

❓ FAQ

Q: Can I scrape private subreddits? A: No, this scraper only accesses publicly available data.

Q: What's the difference from the standard version? A: The standard version charges based on Apify platform compute usage. Pay Per Result charges per item instead, with proxies included.

Q: Can I set a budget limit? A: Yes, use maxItems to control exactly how many results (and your maximum cost) per run.

Q: What if the run finds no results? A: You pay nothing. No results means no charge.

Q: What happens if Reddit is temporarily unavailable? A: The scraper automatically retries. If all attempts fail, try again later.

📬 Support

We're building this scraper for you; your feedback makes it better for everyone!

  • 💡 Need a feature? Tell us what's missing and we'll prioritize it
  • ⚙️ Custom solutions: Contact us for enterprise integrations or high-volume needs

Check out our other scrapers: SilentFlow on Apify