Reddit Posts V1 — Best Posts, Duplicates & Stickied (3 ops)
Pricing
from $1.99 / 1,000 results
Pull Reddit's best/front-page feed, find duplicate/crosspost listings of any post, and grab the stickied/pinned posts of any subreddit. No Reddit account or OAuth required.
Reddit Posts V1
Three Reddit post lookups in one actor — Reddit's front-page feed by sort, duplicates / crossposts of any post, and a subreddit's stickied / pinned posts. All endpoints are anonymous — no Reddit account, no proxy to configure.
What you can do
Post fields accept: full URL (https://reddit.com/r/sub/comments/1sys4r2/...), stripped ID (1sys4r2), or t3_ fullname (t3_1sys4r2). Subreddit fields accept a bare name (AskReddit) — strip the leading r/ if you pasted one.
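For illustration, a client-side helper that performs the same normalization before building the input. The actor already accepts all three forms, so this is optional; the function names here are hypothetical, not part of the actor.

```python
import re

def normalize_post_id(value: str) -> str:
    """Reduce any accepted post reference to the bare ID (e.g. '1sys4r2')."""
    value = value.strip()
    if value.startswith("t3_"):                      # t3_ fullname
        return value[3:]
    m = re.search(r"/comments/([a-z0-9]+)", value)   # full post URL
    if m:
        return m.group(1)
    return value                                     # already a bare ID

def normalize_subreddit(value: str) -> str:
    """Strip a leading 'r/' if one was pasted along with the name."""
    value = value.strip().lstrip("/")
    if value.lower().startswith("r/"):
        return value[2:]
    return value
```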
All endpoints (no credentials needed)
1. Best Posts — Reddit's front-page feed by sort
Pulls posts from Reddit's site-wide front page in your chosen sort order — up to 500 posts per run.
Returns: array of post records with t3_* fullnames. Each record has post-shaped columns front-loaded (id, title, author, subreddit, score, ups, downs, upvote_ratio, num_comments, created_utc, permalink, url, selftext, thumbnail, flair, etc.).
Use it when: sampling what's trending site-wide, training feeds, building daily Reddit digests, monitoring viral posts before they cool down.
Optional fields: best_sort (best / hot / new / top / rising / controversial), best_time_filter (hour / day / week / month / year / all — only for top / controversial), best_limit (1–500).
Example
Input
```json
{
  "endpoint": "best_posts",
  "best_sort": "best",
  "best_limit": 25
}
```
Output (one record per post)
```json
{
  "endpoint": "best_posts",
  "id": "1tclobu",
  "name": "t3_1tclobu",
  "title": "What is the worst way anyone you know has died?",
  "author": "[deleted]",
  "author_fullname": "t2_v6r2z6g4",
  "subreddit": "AskReddit",
  "subreddit_name_prefixed": "r/AskReddit",
  "score": 3253,
  "ups": 3253,
  "downs": 0,
  "upvote_ratio": 0.86,
  "num_comments": 0,
  "created_utc": 1778727951.0,
  "permalink": "/r/AskReddit/comments/1tclobu/what_is_the_worst_way_anyone_you_know_has_died/",
  "url": "https://www.reddit.com/r/AskReddit/comments/1tclobu/what_is_the_worst_way_anyone_you_know_has_died/",
  "is_self": true,
  "over_18": false,
  "spoiler": false,
  "locked": false,
  "selftext": "",
  "thumbnail": "self"
}
```
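Dataset rows are plain post objects, so building a daily digest is a few lines of post-processing. A minimal sketch, assuming only the sampled fields above (`subreddit_name_prefixed`, `title`, `score`); the function name and thresholds are illustrative:

```python
def build_digest(records, min_score=1000, top_n=10):
    """Turn Best Posts dataset rows into a ranked digest list."""
    hot = [r for r in records if r.get("score", 0) >= min_score]
    hot.sort(key=lambda r: r["score"], reverse=True)
    return [
        f'{r["subreddit_name_prefixed"]}: {r["title"]} ({r["score"]} points)'
        for r in hot[:top_n]
    ]
```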
2. Duplicate Posts — crossposts / duplicates of one post
Finds every duplicate or crosspost of a given post — including the same link reposted to other subreddits.
Returns: array of post records (same shape as Best Posts) — each row is a duplicate / crosspost of the input post.
Use it when: tracking how a story spreads across communities, finding the originals of trending memes, anti-repost moderation, content de-duplication.
Optional fields: duplicates_limit (1–100).
Example
Input
```json
{
  "endpoint": "duplicate_posts",
  "duplicates_post": "1s4a4j6",
  "duplicates_limit": 25
}
```
Output (one record per duplicate)
```json
{
  "endpoint": "duplicate_posts",
  "id": "1s4d8gh",
  "name": "t3_1s4d8gh",
  "title": "WordPress 6.6 migration tips — what I wish I knew",
  "author": "another_user",
  "author_fullname": "t2_a1b2c3d4",
  "subreddit": "ProWordPress",
  "subreddit_name_prefixed": "r/ProWordPress",
  "score": 87,
  "ups": 87,
  "downs": 0,
  "upvote_ratio": 0.94,
  "num_comments": 12,
  "num_crossposts": 0,
  "created_utc": 1778500120.0,
  "permalink": "/r/ProWordPress/comments/1s4d8gh/wordpress_66_migration_tips_what_i_wish_i_knew/",
  "url": "https://www.reddit.com/r/Wordpress/comments/1s4a4j6/",
  "is_self": false,
  "domain": "reddit.com",
  "thumbnail": "default"
}
```
If the post has no duplicates, the run completes with zero rows pushed — that's normal. Reddit only surfaces duplicates for posts that have actually been reposted / crossposted.
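Because duplicate rows share the Best Posts shape, tracking spread is plain dict work. A sketch (field name as in the sample output above; the helper itself is illustrative) that tallies which communities a story was reposted to:

```python
from collections import Counter

def spread_by_subreddit(dup_rows):
    """Summarize where a post's duplicates/crossposts landed.

    Returns (subreddit, count) pairs, most-reposted communities first.
    """
    return Counter(r["subreddit"] for r in dup_rows).most_common()
```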
3. Stickied Post — pinned posts of a subreddit
Returns the subreddit's pinned / stickied posts (Reddit exposes at most 2 per subreddit).
Returns: array of up to 2 post records — the subreddit's pinned posts.
Use it when: scraping the rules / AMA / weekly thread of a community, building subreddit dashboards, monitoring mod announcements.
Optional fields: sticky_num — all (default, returns both), 1 (top sticky only), 2 (second sticky only).
Example
Input
```json
{
  "endpoint": "stickied_post",
  "sticky_subreddit": "wordpress",
  "sticky_num": "all"
}
```
Output (one record per stickied post)
```json
{
  "endpoint": "stickied_post",
  "id": "1r80vkz",
  "name": "t3_1r80vkz",
  "title": "Monthly AMA - Suggestions wanted!",
  "author": "[deleted]",
  "author_fullname": "t2_6wkumqdb",
  "subreddit": "Wordpress",
  "subreddit_name_prefixed": "r/Wordpress",
  "score": 29,
  "ups": 29,
  "downs": 0,
  "upvote_ratio": 1.0,
  "num_comments": 0,
  "created_utc": 1777140000.0,
  "permalink": "/r/Wordpress/comments/1r80vkz/monthly_ama_suggestions_wanted/",
  "url": "https://www.reddit.com/r/Wordpress/comments/1r80vkz/monthly_ama_suggestions_wanted/",
  "is_self": true,
  "stickied": true,
  "pinned": false,
  "locked": false,
  "over_18": false,
  "selftext": "We're launching a monthly AMA series featuring people from across the WordPress ecosystem..."
}
```
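The `sticky_num` semantics are easy to mirror client-side if you fetch with `all` and slice afterwards. A sketch, assuming dataset rows arrive in sticky order (top sticky first); the helper name is ours, not the actor's:

```python
def select_sticky(stickies, sticky_num="all"):
    """Mirror sticky_num: 'all', '1' (top sticky), or '2' (second sticky)."""
    if sticky_num == "all":
        return stickies
    index = int(sticky_num) - 1
    return stickies[index:index + 1]   # empty list if that slot is unpinned
```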
Credentials
None required. All 3 endpoints are fully anonymous — no Reddit account, no proxy. Just pick the endpoint and run.
How to run
- What to fetch → pick Best Posts, Duplicate Posts, or Stickied Post.
- Fill in the matching section's fields (only the chosen endpoint's section matters — others are ignored).
- Hit Start. Each post returned = one row in the dataset.
Output
Records are pushed to the run's default dataset. The shape is the same across all 3 endpoints — Reddit's standard post object with post-shaped columns front-loaded.
| Front-loaded column | Meaning |
|---|---|
| endpoint | Which lookup produced the row (best_posts / duplicate_posts / stickied_post) |
| id / name | Bare post ID and t3_ fullname |
| title / author / author_fullname | Post title + author handle + t2_ ID |
| subreddit / subreddit_name_prefixed / subreddit_id | Subreddit name, r/-prefixed name, t5_ ID |
| score / ups / downs / upvote_ratio | Vote counts |
| num_comments / num_crossposts | Engagement counters |
| created_utc / created / edited | Timestamps |
| permalink / url / domain | Links and link domain |
| is_self / is_video / over_18 / spoiler / locked / stickied / pinned / archived | Status booleans |
| link_flair_text / author_flair_text | Flairs (when set) |
| selftext / thumbnail | Self-post body + thumbnail URL |
| total_awards_received / all_awardings | Award info |
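Front-loading is just key ordering. If you post-process records yourself, a sketch that mimics it; the column subset below is illustrative, not the actor's exact list:

```python
FRONT = ["endpoint", "id", "name", "title", "author", "subreddit",
         "score", "num_comments", "created_utc", "permalink", "url"]

def front_load(record):
    """Reorder a post record so FRONT columns come first (Python dicts
    preserve insertion order); remaining keys follow in original order."""
    ordered = {k: record[k] for k in FRONT if k in record}
    ordered.update({k: v for k, v in record.items() if k not in ordered})
    return ordered
```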
Common edge cases
| Edge case | Cause | How it surfaces |
|---|---|---|
| Subreddit private / banned / quarantined | Reddit returns 404 / "not found" | Stickied Post: error row with Reddit returned 404. Best Posts: not applicable. |
| Post has no duplicates | Original post was never reposted | Duplicate Posts: 0 rows pushed. Normal — not an error. |
| Subreddit has no stickied posts | Mods haven't pinned anything | Stickied Post: 0 rows pushed. Normal — not an error. |
| Removed / deleted post | Post target was removed by mods or deleted by author | Duplicate Posts: 0 rows. |
| Empty input | Forgot to paste the post URL / subreddit | Run FAILED immediately, no row pushed, no charge. |
Why this actor is fast
- Speed — 1–3 seconds per call (250 posts) — Best Posts can return 500 in one call. Pure HTTP. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based actors typically take 15–60 seconds per call.
- Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages.
- Footprint — under 100 MB RAM per run. Most browser-based actors need 1–4 GB.
- No proxy to configure. Anonymous — the backend rotates infrastructure behind the scenes.
Status & error reference
Run status (Apify-side, shown on the run page)
| Apify message | Meaning | What to do |
|---|---|---|
| "Actor succeeded with N results in the dataset" | Run finished, records pushed. | Open the dataset to view the results. |
| "The Actor process failed…" | Validation error or missing required input. | Check the run log. You are NOT charged for failed runs. |
| "The Actor timed out. You can resurrect it with a longer timeout to continue where you left off." | Run exceeded timeout (rare — lookups are fast). | Re-run; check that Reddit is reachable. |
| "The Actor process was aborted. You can resurrect it to continue where you left off." | You stopped the run manually. | No charge for unpushed results. |
Common in-run conditions (visible in run log and output record)
| Condition | Cause | Result |
|---|---|---|
| 404 from Reddit | Subreddit / post not found, banned, or deleted. | Run SUCCEEDED, single row with error: "Reddit returned 404 ...". |
| Validation error: missing post / subreddit | Required input not provided. | Run FAILED immediately, no charge. |
Pricing
Pay-per-result. You're only charged for records actually pushed to the dataset — failed runs and validation errors cost nothing.
| Event | Trigger | Price (per 1,000) |
|---|---|---|
result | Each post row pushed to the dataset | $0.99 |
A 500-post Best Posts run = 500 rows. A 25-duplicate Duplicate Posts run = 25 rows. A Stickied Post run = 1–2 rows.
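A quick cost estimate from the $0.99-per-1,000 rate in the table above (illustrative arithmetic only, not a billing API):

```python
PRICE_PER_1000 = 0.99  # USD per 1,000 result rows, per the pricing table

def estimated_cost(rows_pushed: int) -> float:
    """Pay-per-result: only rows actually pushed to the dataset are billed."""
    return round(rows_pushed * PRICE_PER_1000 / 1000, 4)
```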
Need a different shape of data?
- Reddit Subreddits — subreddit info, browse, join / leave, create, post listings (12 ops)
- Reddit Posts & Feeds V2 — home feed + post state controls (auth)
- Reddit Search V2 — search posts / comments / subreddits / users with full filters
- Reddit Bulk Scrape V2 — bulk fetch posts / comments / users by ID (up to 1500 per run)
- Reddit Users V1 — user profile, posts, comments, friends, follow / block
- Reddit Posting V2 — create text / link / image / gallery / video / GIF / crosspost / poll posts
Support and feedback
Found a bug, want a feature, or hit a Reddit error code we don't translate clearly? Open an issue via the actor's Apify Console feedback link, or reach out at the RedCrawler support channel.
Reddit Posts V1 is part of the RedCrawler family of Reddit actors. RedCrawler is independent — not affiliated with, endorsed by, or sponsored by Reddit, Inc. Use it within Reddit's API terms.