Reddit Subreddits V1 — Info, Browse, Join, Create (12 ops) avatar

Pricing

from $1.99 / 1,000 results

Reddit subreddit toolkit — 8 anonymous lookups (info, rules, sidebar, browse, autocomplete, search names, popular feed, post listings) + 4 auth ops (my subreddits, join/leave, post requirements, create new). Use Reddit Vault or paste Token V2 + proxy.

Developer

Red Crawler

Maintained by Community

Reddit Subreddits — Info, Rules & Browse

Look up Reddit subreddits at scale — about / metadata, rules, full sidebar, browse Reddit's curated directories, autocomplete subreddit names, and pull a feed of r/popular. Six self-contained endpoints. No Reddit account, OAuth, or proxy required.

Pick the endpoint, fill the matching section, hit Start.


What you can fetch

Subreddit names accept AskReddit, r/AskReddit, /r/AskReddit, or the full subreddit URL — paste whichever you have.
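If you want to deduplicate names before queueing runs, the same lenient matching is easy to reproduce client-side. A minimal sketch (the actor already accepts all four forms as-is, so this is a convenience, not a requirement):

```python
import re

def normalize_subreddit(name: str) -> str:
    """Reduce any accepted form (AskReddit, r/AskReddit, /r/AskReddit,
    or a full subreddit URL) to the bare community name."""
    name = name.strip()
    # Strip a full URL down to its path component
    name = re.sub(r"^https?://(www\.)?reddit\.com", "", name)
    # Drop the optional /r/ or r/ prefix and any trailing slash
    name = re.sub(r"^/?r/", "", name).rstrip("/")
    return name
```

All four forms above collapse to `AskReddit`, so a queue of mixed inputs can be deduplicated before you pay for duplicate lookups.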

1. Subreddit Info — about / metadata

The standard "about" payload Reddit exposes for any community.

Input: subreddit name.

Returns: Reddit ID, fullname, display name (raw + prefixed), title, subscriber count, active user count, public + full description, created timestamp, language, type (public / private / restricted), NSFW flag, quarantine flag, URL, header / icon / banner / community-icon images, primary + key + banner-background colors, submit text, allowed submission types (videos / images / polls / galleries), spoilers-enabled flag, wiki-enabled flag.

Use it when: profiling a community in a single call, sizing audiences, importing a subreddit's settings into your own DB.

2. Subreddit Rules

The community's posted rules.

Input: subreddit name.

Returns per rule: subreddit, priority, short name, full description, violation reason, kind (link / comment / both), created timestamp.

Use it when: building rule-aware moderation pipelines, posting bots that need to respect each subreddit's rules, compliance audits, content classifiers.
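A rule-aware posting bot usually only cares about the rules that apply to its submission kind. A sketch over hypothetical rule records (the field names mirror the list above but may differ from the actual dataset columns):

```python
# Hypothetical rule records shaped like the fields listed above
rules = [
    {"shortName": "Be civil", "kind": "both"},
    {"shortName": "No self-promotion links", "kind": "link"},
    {"shortName": "No joke answers", "kind": "comment"},
]

def rules_for(kind: str, rules: list[dict]) -> list[str]:
    """Rules that apply to a given submission kind ('link' or 'comment')."""
    return [r["shortName"] for r in rules if r["kind"] in (kind, "both")]
```

`rules_for("link", rules)` keeps only the rules a link submitter must respect, which is the check a posting pipeline would run before submitting.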

3. Subreddit Sidebar — full about payload

The full sidebar payload (a richer superset of Subreddit Info — includes the same metadata plus theme, banner styling, public description, submit guidelines, allowed post types, contributor flags, more).

Input: subreddit name.

Returns: complete subreddit settings record. Effectively a one-shot way to mirror a community's identity (theme, branding, posting permissions, allowed content types).

Use it when: theme audits, building branded clones / mirrors, capturing every public knob a subreddit has set.

4. Browse Subreddits — curated directories

Subreddit listings from Reddit's curated directories.

Inputs:

  • Directory — popular (default), new, or default.
  • Limit — 1 to 100 (default 25).

Returns: one Subreddit Info record per community, in the order Reddit ranks them.

Use it when: discovery dashboards, leaderboards (popular vs new vs Reddit's default seed list), competitive audits.
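For programmatic runs, the two inputs above can be validated before the call. A sketch with illustrative key names (check the actor's input schema for the real ones):

```python
VALID_DIRECTORIES = {"popular", "new", "default"}

def browse_input(directory: str = "popular", limit: int = 25) -> dict:
    """Build a Browse Subreddits run input, enforcing the documented bounds.
    Key names here are assumptions, not the actor's actual schema."""
    if directory not in VALID_DIRECTORIES:
        raise ValueError(f"directory must be one of {sorted(VALID_DIRECTORIES)}")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    return {"endpoint": "browseSubreddits", "directory": directory, "limit": limit}
```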

5. Autocomplete — match names by prefix

Subreddit name suggestions for a prefix string.

Inputs:

  • Query — prefix to match (e.g. ask).
  • Limit — 1 to 10 (default 10).
  • Include NSFW results — off by default. Tick to include NSFW communities.

Returns: up to 10 Subreddit Info records ranked by relevance.

Use it when: building search-as-you-type pickers, validating that a subreddit exists, surfacing "did you mean..." suggestions.
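The existence-check use case reduces to scanning the returned suggestions for an exact match. A sketch over hypothetical records (`displayName` is assumed to be present, as in the Subreddit Info payload; actual column names may differ):

```python
# Hypothetical autocomplete records, relevance-ranked as Reddit returns them
suggestions = [
    {"displayName": "AskReddit"},
    {"displayName": "AskScience"},
]

def exists_exact(records: list[dict], name: str) -> bool:
    """Cheap existence check: did autocomplete return an exact
    (case-insensitive) match for the name the user typed?"""
    return any(r["displayName"].lower() == name.lower() for r in records)
```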

6. Popular Posts — feed from r/popular

Posts directly from r/popular. Supports sort, time window, and country filter.

Inputs:

  • Sort — hot (default), new, top, rising, best, controversial.
  • Time filter (optional) — hour / day / week / month / year / all. Only used with top or controversial.
  • Country (optional) — GLOBAL or any of 50 country codes (US, GB, CA, AU, DE, FR, JP, IN, BR, etc.). Only used with best or hot.
  • Limit — 1 to 100 (default 25).

Returns per post: the same rich post record other actors return — ID, fullname, title, body, author, subreddit, score, ups / downs, comment count, created timestamp, permalink, URL, domain, all flags, flair, media, awards.

Use it when: trending feeds, geo-targeted listicles ("what's hot in Japan / Germany / Brazil right now"), regional newsfeeds, content syndication.


How to run

  1. Pick an endpoint in the "What to fetch" dropdown.
  2. Open the matching section and fill its fields. Each section is independent — fields outside your chosen section are ignored.
  3. Click Start.
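The same three steps work over the Apify API instead of the console. A minimal sketch with the official Python client; the actor ID, token, and input keys below are placeholders, so copy the real ones from this page's API tab and input schema:

```python
def build_input(endpoint: str, **fields) -> dict:
    """Assemble a run input: the endpoint selector plus only the fields
    of the matching section (other sections are ignored by the actor)."""
    return {"endpoint": endpoint, **fields}

# With the official client (pip install apify-client), a run would look like:
#
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")
#   run = client.actor("<username>/<actor-name>").call(
#       run_input=build_input("subredditInfo", subreddit="AskReddit"),
#   )
#   items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
```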

Output

Results are pushed to the actor's default dataset. View as a table or download as JSON / CSV / Excel / XML.

  • Subreddit Info / Subreddit Sidebar push one record per run.
  • Subreddit Rules pushes one record per rule (typically 5–15).
  • Browse Subreddits / Autocomplete / Popular Posts push one record per item (up to your limit).

Every record is tagged with endpoint so you can tell rows apart at a glance. The most useful columns are placed first so the dataset Table view is readable without scrolling.
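Because every record carries that endpoint tag, a mixed export splits cleanly back into per-endpoint groups. A sketch over hypothetical rows (field names other than endpoint are illustrative):

```python
from collections import defaultdict

# Sample rows as they might appear in a downloaded dataset
rows = [
    {"endpoint": "subredditInfo", "displayName": "AskReddit"},
    {"endpoint": "subredditRules", "shortName": "Be civil"},
    {"endpoint": "subredditRules", "shortName": "No polls"},
]

by_endpoint: dict[str, list[dict]] = defaultdict(list)
for row in rows:
    by_endpoint[row["endpoint"]].append(row)

# by_endpoint now separates info records from rule records
```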


Common edge cases

  • Private subreddits — not accessible. Reddit hides their about / rules / sidebar from anonymous calls.
  • Quarantined subreddits — return Reddit's quarantine notice rather than full data.
  • Banned subreddits — return an error stub.
  • Lenient name matching — AskReddit, r/AskReddit, /r/AskReddit, and https://www.reddit.com/r/AskReddit/ all resolve to the same community.
  • Autocomplete NSFW filter — by default NSFW results are excluded. Tick Include NSFW results to include them.
  • Popular Posts country filter — only honored when sort is best or hot. With other sorts, Reddit returns the global feed regardless.
  • Empty results — return zero records. The actor reports an empty result rather than failing.

Why this actor is fast

  • Speed — 1–3 seconds per call, end-to-end. Pure HTTP to Reddit's API. No browser to boot, no Playwright / Selenium / Puppeteer overhead. Competing browser-based scrapers typically take 15–60 seconds per call.
  • Reliability — zero browser flakiness. No headless-Chromium crashes. No JS-render timeouts. No captcha pages. No surprise mid-run failures from a browser quirk.
  • Footprint — under 100 MB RAM per run. Most browser-based scrapers need 1–4 GB. We're a thin async dispatcher — Reddit auth, proxy rotation, retry, and GraphQL handling all happen off-actor on our backend.

Pricing

Pay-per-result. You're only charged for records actually pushed to the dataset — failed runs, validation errors, and empty results cost nothing. See the actor's pricing tab for the current per-result rate.


Need a different shape of data?

  • Reddit Scraper V2 — 15 single & bulk reads for posts, comments, profiles, communities (triangular_triangle/reddit-scrape-v2).
  • Reddit Scraper — pull a subreddit's feed, a post's comments, or a single post / comment by URL (triangular_triangle/reddit-content-fetcher).
  • Reddit Bulk Scrape — paste up to 1500 IDs / names / URLs in a single run (triangular_triangle/reddit-bulk-scrape).
  • Reddit Search / Reddit Search V2 — search Reddit by query (triangular_triangle/reddit-search, triangular_triangle/reddit-search-v2).
  • Reddit Users / Reddit Users V2 — single-user lookups (triangular_triangle/reddit-users, triangular_triangle/reddit-users-v2).
  • Reddit Posts — front-page feed, crosspost duplicates, pinned posts (triangular_triangle/reddit-posts).
  • Reddit Wiki, Emojis & Widgets — wiki pages, custom emojis, sidebar widgets (triangular_triangle/reddit-wiki-emojis-widgets).