Reddit Profile Crawler Pro

Scrape Reddit user profiles with split karma (post/comment/awarder/awardee), account age, admin/employee/moderator badges, trophies, moderated subreddits, and recent comments. Per-post: awards, gilded count, upvote ratio, media, crosspost parent. Filter by score, age, NSFW. No login.

Pricing: from $1.00 / 1,000 results
Rating: 5.0 (12)
Developer: Crawler Bros
Actor stats: 13 bookmarked, 2 total users, 1 monthly active user, last modified 4 days ago
Pro-tier version of the Reddit profile crawler. Same browser-automation core, with these additions:
- Accepts full Reddit URLs (https://www.reddit.com/user/spez) in addition to plain usernames or `u/spez`-prefixed forms.
- Trophy case extraction (toggle).
- Moderated subreddits list (toggle).
- Recent comments scraping (toggle): visits the user's /comments page in addition to /submitted.
- Split karma fields: `post_karma`, `comment_karma`, `awarder_karma`, `awardee_karma`, `total_karma`, plus a `verified` flag.
- Per-post enrichment: `awards[]`, `total_awards_received`, `gilded_count`, `upvote_ratio`, `is_video`, `media_metadata` (`type`, `thumbnail_*`, `domain`), `crosspost_parent`.
- Client-side filters: `minPostScore`, `maxPostAgeDays`, `excludeNsfwPosts`.
- Sentinel errors so a daily test run never sees an empty dataset.
- 190+ pytest cases covering every helper, parser, and filter combination.
No authentication, no API keys, no proxy required.
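The URL/prefix handling described above can be sketched as a small normalizer. This is an illustrative assumption, not the actor's actual code; the function name and regex are invented for the example:

```python
import re

def normalize_username(raw: str) -> str:
    """Accept 'spez', 'u/spez', or a full profile URL and return the bare username."""
    raw = raw.strip()
    # Full URL form: https://www.reddit.com/user/spez (optionally with a trailing slash)
    m = re.search(r"reddit\.com/(?:user|u)/([^/?#]+)", raw)
    if m:
        return m.group(1)
    # 'u/spez' or '/u/spez' prefixed form; plain usernames pass through unchanged
    return re.sub(r"^/?u/", "", raw)

print(normalize_username("https://www.reddit.com/user/spez"))  # spez
```

All three input forms from the `usernames` field collapse to the same canonical name before scraping.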
Input
| Field | Type | Default | Description |
|---|---|---|---|
| `usernames` | array of strings (required) | `["spez"]` | Reddit usernames. Accepts plain (`spez`), `u/spez`, or full URLs (`https://www.reddit.com/user/spez`). |
| `maxPosts` | integer | 100 (range 1-1000) | Max posts per user. |
| `section` | enum | `submitted` | `submitted` / `overview` / `gilded`. |
| `sort` | enum | `new` | `hot` / `new` / `top` / `controversial`. |
| `includeComments` | boolean | `false` | Also scrape the user's /comments page. Adds `recent_comments[]`. |
| `includeTrophies` | boolean | `false` | Pull trophy case. Adds `trophies[]`. |
| `includeModeratedSubreddits` | boolean | `false` | Pull moderated subs. Adds `moderated_subreddits[]`. |
| `minPostScore` | integer (optional) | (none) | Drop posts below this score. |
| `maxPostAgeDays` | integer (optional) | (none) | Drop posts older than N days. |
| `excludeNsfwPosts` | boolean | `false` | Drop NSFW posts. |
Example input
```json
{
  "usernames": ["spez", "u/example", "https://www.reddit.com/user/another"],
  "maxPosts": 50,
  "section": "submitted",
  "sort": "new",
  "includeTrophies": true,
  "includeModeratedSubreddits": true,
  "minPostScore": 10,
  "excludeNsfwPosts": true
}
```
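The three client-side filters behave like a simple predicate over the scraped post list. A minimal sketch of the plausible semantics (function name and exact comparison rules are assumptions, not the actor's code):

```python
from datetime import datetime, timedelta, timezone

def apply_filters(posts, min_post_score=None, max_post_age_days=None, exclude_nsfw=False):
    """Drop posts below the score floor, older than the age cap, or marked NSFW."""
    now = datetime.now(timezone.utc)
    kept = []
    for post in posts:
        if min_post_score is not None and post["score"] < min_post_score:
            continue
        if max_post_age_days is not None:
            created = datetime.fromisoformat(post["created_at"])
            if now - created > timedelta(days=max_post_age_days):
                continue
        if exclude_nsfw and post.get("is_nsfw"):
            continue
        kept.append(post)
    return kept
```

Filters combine with AND semantics: a post must pass every enabled filter to be emitted.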
Output
One record per user. Empty fields are omitted (no nulls).
```json
{
  "username": "spez",
  "post_karma": 100000,
  "comment_karma": 200000,
  "awarder_karma": 50,
  "awardee_karma": 150,
  "total_karma": 350000,
  "verified": true,
  "account_created": "2005-06-06T00:00:00+00:00",
  "posts": [
    {
      "post_id": "abc123",
      "title": "...",
      "subreddit": "r/announcements",
      "score": 5000,
      "num_comments": 240,
      "url": "https://www.reddit.com/r/...",
      "old_reddit_url": "https://old.reddit.com/r/...",
      "content": "...",
      "thumbnail_image": "https://...",
      "created_at": "2024-06-15T12:00:00+00:00",
      "is_stickied": false,
      "is_nsfw": false,
      "is_video": false,
      "awards": [{"name": "Gold", "count": 1}],
      "total_awards_received": 1,
      "gilded_count": 1,
      "upvote_ratio": 0.95,
      "media_metadata": {"type": "image", "thumbnail_url": "https://..."}
    }
  ],
  "post_count": 50,
  "trophies": [{"name": "10-Year Club"}],
  "trophy_count": 1,
  "moderated_subreddits": ["r/announcements"],
  "moderated_subreddit_count": 1,
  "recent_comments": [
    {"comment_id": "c1", "body": "...", "score": 25, "subreddit": "r/python", "permalink": "https://...", "created_at": "..."}
  ],
  "recent_comment_count": 1,
  "scraped_at": "2024-12-16T14:23:11+00:00"
}
```
Output fields
- `username` – the Reddit username.
- `post_karma` / `comment_karma` / `awarder_karma` / `awardee_karma` / `total_karma` – split karma values.
- `verified` – Reddit's verified-user flag (when present).
- `account_created` – ISO-8601 cake-day timestamp.
- `posts[]` – array of post records (see post schema below), filtered by `minPostScore`, `maxPostAgeDays`, and `excludeNsfwPosts`.
- `post_count` – count of posts emitted (after filtering).
- `filtered_post_count` – only present when filters dropped one or more posts (diagnostic).
- `trophies[]` / `trophy_count` – only when `includeTrophies: true`.
- `moderated_subreddits[]` / `moderated_subreddit_count` – only when `includeModeratedSubreddits: true`.
- `recent_comments[]` / `recent_comment_count` – only when `includeComments: true`.
- `scraped_at` – ISO-8601 UTC scrape timestamp.
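The "empty fields are omitted (no nulls)" rule can be mimicked when building records of your own. A hypothetical one-liner, not the actor's implementation:

```python
def compact(record: dict) -> dict:
    """Drop keys whose value is None or an empty list/dict; keep 0, False, and ''."""
    return {k: v for k, v in record.items() if v is not None and v != [] and v != {}}
```

Note that falsy-but-meaningful values like `post_count: 0` or `is_nsfw: false` survive; only missing data disappears.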
Post schema (per row in posts[])
post_id, post_name, title, subreddit, score, num_comments, url, old_reddit_url, external_url, content, thumbnail_image, link_flair, created_utc, created_at, is_stickied, is_nsfw, is_promoted, is_spoiler, is_video, awards[], total_awards_received, gilded_count, upvote_ratio, media_metadata, crosspost_parent.
Use cases
- User-activity analysis: pull posting cadence, top subreddits, and karma split.
- Account-age verification: `account_created` + `verified` + `total_karma` build a quick "is this a bot?" signal.
- Influence mapping: sort users by `total_awards_received` across their post history.
- Moderator audit: pull every subreddit a user moderates with one input.
- Content datasets for ML: bulk-fetch comments and posts with structured metadata.
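The account-age-verification use case above can be sketched as a simple heuristic over the output record. The thresholds and function name are illustrative assumptions, not part of the actor:

```python
from datetime import datetime, timezone

def looks_like_bot(profile: dict, min_age_days: int = 30, min_total_karma: int = 100) -> bool:
    """Flag young, low-karma, unverified accounts as likely bots (crude heuristic)."""
    if profile.get("verified"):
        return False
    created = datetime.fromisoformat(profile["account_created"])
    age_days = (datetime.now(timezone.utc) - created).days
    return age_days < min_age_days and profile.get("total_karma", 0) < min_total_karma
```

Tune the thresholds to your tolerance for false positives; a real bot detector would also look at posting cadence from `posts[]`.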
FAQ
Does it need cookies, login, or proxy?
No. The actor connects directly using a Chrome User-Agent and the public old.reddit.com endpoint.
Why use old.reddit.com?
Stable HTML structure that works across years; the new Reddit UI is React-rendered and hostile to scraping.
What happens if a user doesn't exist?
The actor emits a sentinel record `{"type": "reddit_profile_crawler_pro_error", "reason": "user_not_found", "username": "..."}` and continues with the rest of the input list.
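Downstream code can partition dataset items into real profiles and sentinel errors; a minimal sketch using the type string from the answer above:

```python
def split_records(items):
    """Partition dataset items into (profiles, errors) based on the sentinel type."""
    profiles, errors = [], []
    for item in items:
        if item.get("type") == "reddit_profile_crawler_pro_error":
            errors.append(item)
        else:
            profiles.append(item)
    return profiles, errors
```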
What if all posts get filtered out?
The user record is still emitted, with post_count: 0 and filtered_post_count: N, so you can tell that the user exists but the filters eliminated every post.
How big can the result get?
With includeComments, includeTrophies, and includeModeratedSubreddits all enabled and maxPosts: 1000, expect roughly 5-10 KB per user. Runtime and dataset size scale linearly with the number of users and maxPosts.
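With the ~5-10 KB per-user figure above, a back-of-envelope dataset-size estimate is one multiplication (the helper is purely illustrative):

```python
def estimated_dataset_mb(n_users: int, kb_per_user: float = 10.0) -> float:
    """Upper-bound dataset size in MB, assuming the worst-case ~10 KB per user."""
    return n_users * kb_per_user / 1024

print(estimated_dataset_mb(5000))  # 48.828125 -> ~49 MB for 5,000 users
```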