Reddit User Posts & Comments Scraper | Bulk Export

Scrape all posts and comments from any Reddit user profile. Supports sorting, time filters, and bulk username input. Up to 1,000 items per user.

Pricing: from $1.99 / 1,000 results
Developer: ClearPath (Maintained by Community)
Actor stats: 0 bookmarks · 2 total users · 1 monthly active user · last modified 8 hours ago

Reddit User Content Scraper | Posts & Comments History (2026)

5,000 posts and comments in under 60 seconds — bulk username support, sorting, and time filters.

Pass one username or a thousand. Get back every post, comment, or both, sorted by new, top, hot, or controversial.

Clearpath Reddit Suite • Search, analyze, and monitor Reddit at scale
➤ User Content Scraper — posts & comments history (you are here)
  Reddit Profile Scraper — bulk profile & karma lookup
  Reddit MCP Server — search posts & comments for AI agents

Copy to your AI assistant

Copy this block into ChatGPT, Claude, Cursor, or any LLM to start building with this data.

Reddit User Content Scraper (clearpath/reddit-user-content-scraper) on Apify scrapes posts and comments from Reddit user profiles in bulk. Returns 100+ fields per item including title, body text, score, subreddit, permalink, timestamps, upvote ratio, awards, flair, and author metadata. Supports sorting (new, hot, top, controversial) and time filters (hour, day, week, month, year, all). Input: single username, array of usernames, file URL, or uploaded CSV/TXT file. Accepts any format: plain username, u/name, @name, or profile URLs. Content type: posts only, comments only, or both. Max ~1,000 items per user (Reddit server limit). Output: JSON array, one object per post/comment. Pricing: $1.99 per 1,000 items (PPE). No Reddit API key or login required. Apify token required.

Key Features

  • Full post & comment history — scrape everything a user has ever posted or commented on
  • Sorting & time filters — new, hot, top, controversial + time range (hour to all time)
  • Bulk username support — process hundreds of users in parallel
  • 100+ fields per item — title, body, score, subreddit, permalink, flair, awards, author metadata
  • No Reddit login needed — uses public data, no API keys required

How to Scrape Reddit User History

Single user, all content

{
  "username": "thisisbillgates"
}

Top posts from a user

{
  "username": "GovSchwarzenegger",
  "contentType": "posts",
  "sort": "top",
  "timeFilter": "all",
  "maxItemsPerUser": 50
}

Comments from multiple users

{
  "usernames": ["thisisbillgates", "GovSchwarzenegger", "kn0thing"],
  "contentType": "comments",
  "sort": "new",
  "maxItemsPerUser": 100
}

Bulk from file URL

{
  "usernamesFileUrl": "https://example.com/my-usernames.csv"
}

You can also upload a file directly using the Upload file field in the Apify Console.
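The input objects above can also be sent from code. Below is a minimal sketch using only the Python standard library and Apify's public run-sync-get-dataset-items API endpoint; the actor ID is taken from this page, and APIFY_TOKEN is a placeholder you must replace with your own token.

```python
# Sketch: call the actor over Apify's HTTP API with only the standard library.
# Input keys (usernames, contentType, sort, maxItemsPerUser) come from the
# input schema below; APIFY_TOKEN is a placeholder, not a real credential.
import json
import urllib.parse
import urllib.request

ACTOR_ID = "clearpath~reddit-user-content-scraper"  # "/" is written "~" in API paths

def build_request(token: str, run_input: dict) -> urllib.request.Request:
    """Build a POST request that runs the actor and returns its dataset items."""
    url = (
        f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items"
        f"?{urllib.parse.urlencode({'token': token})}"
    )
    return urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )

run_input = {
    "usernames": ["thisisbillgates", "GovSchwarzenegger"],
    "contentType": "comments",
    "sort": "new",
    "maxItemsPerUser": 100,
}

req = build_request("APIFY_TOKEN", run_input)
# To actually run it: items = json.load(urllib.request.urlopen(req))
```

The same input works unchanged with the official apify-client packages if you prefer an SDK over raw HTTP.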

Input Parameters

Parameter          Type       Default    Description
username           string     (none)     A single Reddit username
usernames          string[]   []         List of usernames, u/-prefixed names, or profile URLs
contentType        select     overview   overview (both), posts, or comments
sort               select     new        new, hot, top, controversial
timeFilter         select     all        hour, day, week, month, year, all (applies to top/controversial)
maxItemsPerUser    integer    100        Max items per user (1-1000)
usernamesFileUrl   string     (none)     URL to a hosted .txt or .csv file
usernamesFile      string     (none)     Upload a .txt or .csv file via the Apify Console

What Data Can You Extract from Reddit User History?

Post fields

{
  "_username": "thisisbillgates",
  "_status": "found",
  "_content_type": "post",
  "title": "With all of the negative headlines dominating the news these days, it can be difficult to spot signs of progress. What makes you optimistic about the future?",
  "selftext": "",
  "subreddit": "AskReddit",
  "score": 139503,
  "num_comments": 20875,
  "created_utc": 1519761227.0,
  "permalink": "/r/AskReddit/comments/80phz7/with_all_of_the_negative_headlines_dominating_the/",
  "url": "https://www.reddit.com/r/AskReddit/comments/80phz7/...",
  "author": "thisisbillgates",
  "domain": "self.AskReddit",
  "is_self": true,
  "over_18": false,
  "upvote_ratio": 0.92,
  "id": "80phz7",
  "name": "t3_80phz7"
}

Comment fields

{
  "_username": "thisisbillgates",
  "_status": "found",
  "_content_type": "comment",
  "body": "I would be glad to pass along your thoughts on this to the right person at Microsoft...",
  "subreddit": "IAmA",
  "score": 37256,
  "created_utc": 1519758516.0,
  "permalink": "/r/IAmA/comments/80ow6w/.../dux4be8/",
  "author": "thisisbillgates",
  "link_title": "I'm Bill Gates, co-chair of the Bill & Melinda Gates Foundation. Ask Me Anything.",
  "parent_id": "t1_dux2k81",
  "id": "dux4be8",
  "name": "t1_dux4be8"
}

Each item includes 100+ fields. The examples above show the most commonly used ones. All public fields Reddit returns are included.
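Because the output is a flat JSON array, downstream analysis takes only a few lines. Here is a sketch that splits items by _content_type and totals scores per subreddit, using sample items abridged from the examples above:

```python
# Sketch: post-process the actor's JSON output using only fields shown above
# (_content_type, subreddit, score). Sample items are abridged from the README.
from collections import defaultdict

items = [
    {"_content_type": "post", "subreddit": "AskReddit", "score": 139503},
    {"_content_type": "comment", "subreddit": "IAmA", "score": 37256},
    {"_content_type": "comment", "subreddit": "IAmA", "score": 1200},
]

def score_by_subreddit(items: list[dict]) -> dict[str, int]:
    """Sum scores per subreddit across posts and comments."""
    totals: dict[str, int] = defaultdict(int)
    for item in items:
        totals[item["subreddit"]] += item["score"]
    return dict(totals)

posts = [i for i in items if i["_content_type"] == "post"]
comments = [i for i in items if i["_content_type"] == "comment"]
totals = score_by_subreddit(items)
```

In "overview" mode the same split works on mixed output, since every item carries _content_type.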

Speed

Users   Items/user   Time
1       100          ~1 second
1       1,000        ~10 seconds
10      100 each     ~5 seconds
100     100 each     ~30 seconds

Bulk speed comes from running multiple users in parallel. Rate limits are handled automatically with proxy rotation and retries.

Pricing — Pay Per Event (PPE)

$1.99 per 1,000 items
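With a flat pay-per-event rate, run cost is simple arithmetic. A small sketch estimating cost at the listed rate of $1.99 per 1,000 billable items:

```python
# Sketch: estimate PPE cost at the listed rate of $1.99 per 1,000 items.
# "Items" means billable (found) results; not-found usernames are free.
RATE_PER_1000 = 1.99

def estimated_cost(total_items: int) -> float:
    """Dollar cost at $1.99 per 1,000 items, rounded to cents."""
    return round(total_items / 1000 * RATE_PER_1000, 2)

# e.g. 100 users at 100 items each = 10,000 items
cost = estimated_cost(10_000)
```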

FAQ

How much content can I scrape per user? Up to ~1,000 posts and ~1,000 comments per user. This is a Reddit server-side limit, not an actor limit.

Do I need a Reddit account or API key? No. The actor uses publicly available data. No login, no API key, no OAuth.

What happens with deleted or suspended accounts? They're included in the output with "_status": "not_found" so you can see exactly which usernames didn't resolve. You're only charged for found content.

What input formats are supported? Plain username (thisisbillgates), prefixed (u/name, /u/name, @name), full URLs (reddit.com/user/name), and uploaded CSV/TXT files.
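If you build username lists programmatically, the accepted formats can be normalized to a bare username before constructing the input. The helper below is purely illustrative (it is not the actor's own parser) and assumes only the formats listed above:

```python
# Sketch: normalize the accepted input formats (plain, u/name, /u/name, @name,
# profile URLs) down to a bare username. Illustrative only, not the actor's code.
import re

def normalize_username(raw: str) -> str:
    """Reduce any accepted form to the bare username."""
    s = raw.strip()
    # Profile URLs: keep the path segment after /user/ or /u/
    m = re.search(r"reddit\.com/(?:user|u)/([^/?#\s]+)", s)
    if m:
        return m.group(1)
    # Prefixes: /u/name, u/name, @name
    return re.sub(r"^(/?u/|@)", "", s)
```

The actor accepts all of these forms directly, so normalization is optional; it mainly helps when deduplicating mixed-format lists before a run.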

What's the difference between "overview" and separate posts/comments? Overview returns posts and comments mixed in chronological order, which is how Reddit's profile page works. Separate modes give you only posts or only comments.

How is the data structured? One JSON object per post or comment. Posts have title, selftext, num_comments. Comments have body, link_title, parent_id. Both share score, subreddit, permalink, created_utc, and 90+ more fields.

Support

Extracts publicly available data. Users must comply with Reddit's terms of service and applicable data protection regulations (GDPR, CCPA).


Bulk Reddit user history at scale. Posts, comments, or both. No login, no API key.