Free Reddit Scraper

  • trudax/free-reddit-scraper
  • Users 1.4k
  • Runs 14.9k
  • Created by Gustavo Rudiger

Free Reddit web scraper to crawl posts, comments, communities, and users without login. Limit the scrape by number of posts or items and export all data as a dataset in multiple formats.

What does Free Reddit Scraper do?

Free Reddit Scraper enables you to extract data from Reddit such as posts, comments, and limited user info without requiring you to log in. It is built on top of the Apify SDK, and you can run it both locally and on the Apify platform.

Free Reddit Scraper enables you to:

  • scrape subreddits to extract the most popular posts and community details (URL, number of members, category, etc.).
  • scrape Reddit posts for the author's username, post title, text, post comments, and the number of votes.
  • get Reddit comments, including the time of the comments, points received, author usernames, original post, and relevant URLs.
  • scrape user details, comment history, and recent posts.
  • scrape data by specifying a keyword or URL(s).

The output of Free Reddit Scraper is limited to 10 posts, 10 comments, 2 communities, and 2 user items.

Need more Reddit data?

Use our powerful Unlimited Reddit Scraper if you want to scrape a lot of Reddit data. Just enter the Reddit URLs or keywords you want to scrape and get your data in a few minutes. The unlimited version of our Reddit scraper gives you complete freedom to scrape all the data available on Reddit without limits.

How much does it cost to scrape Reddit?

Running Free Reddit Scraper on the Apify platform will give you 100 results for less than $0.40 in platform usage credits. That means you can run the scraper lots of times even with just the free $5 in monthly credits you get on every Apify Free plan.

But if you need to get more data regularly from Reddit, you should grab an Apify subscription.

How to scrape Reddit

Free Reddit Scraper does not require any programming skills or experience. If you need help getting started, follow our step-by-step guide or watch our short video tutorial. These tutorial steps also apply to the Unlimited Reddit Scraper.

How to use scraped Reddit data

  • Keep track of genuine discussions across Reddit communities and stay up to date with your topics of interest.
  • Research. Get a sample of comments spanning a wide range of opinions.
  • Monitor debates over high stakes subjects such as finance, politics, technology, and news in general.
  • Keep up with the latest trends. Follow shifts in the attitude and mentality of communities.

Input parameters

If you choose to run this actor on the Apify platform (recommended), our interface will guide you through the configuration of all relevant parameters. To get started, you simply have to enter a search term or the start URLs you want to scrape from Reddit.

  1. Using the Start URLs field - extracts all details from the chosen Reddit URL, regardless of whether it's a post, user, or community.
  2. Using the Search Term field - extracts all data associated with the selected keyword from Reddit across Communities, Posts, and Users.
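As a rough sketch, the two input modes above can be expressed as JSON run input. The field names below are taken from the input examples later in this document; the URL is purely illustrative, and whether startUrls takes plain strings or {"url": …} objects should be confirmed in the Input Schema tab.

```python
import json

# Mode 1: scrape specific URLs via startUrls.
# The URL is illustrative; check the Input Schema tab for the exact shape.
input_by_url = {
    "startUrls": [{"url": "https://www.reddit.com/r/pizza/"}],
    "maxPostCount": 10,
    "proxy": {"useApifyProxy": True},
}

# Mode 2: scrape by search term via searches + dataType
# (same fields as the input example further below).
input_by_search = {
    "searches": ["parrots"],
    "dataType": "communities_and_users",
    "sort": "new",
    "proxy": {"useApifyProxy": True},
}

print(json.dumps(input_by_search, indent=2))
```

Only one of the two modes should be used at a time; as noted below, searches is ignored when startUrls is set.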

How to scrape Reddit by URLs

Almost any Reddit URL will return a result. However, if a URL is not supported, you will see a warning message before scraping the page.

Input examples:

Need a hand to get started? Here are some examples of URLs that you can use to test the scraper.

Note: using a search URL as a startUrls parameter will only extract data from posts. If you want to broaden your search and include data from communities and users, use the search field or specify a particular URL instead.

How to scrape Reddit by search term

  • Search Term or searches - insert the keywords you want to search via Reddit's search engine. You can search for as many terms as you want. Don't use this field if you're using the startUrls parameter, as it will be ignored in favor of the URLs.
  • Search type or type - choose which section of Reddit you want to scrape: "Posts" or "Communities and users".
  • Sort search or sort - use it to sort your search results by Relevance, Hot, Top, New, or number of Comments.
  • Filter by date or time - filters the search by the last hour, day, week, month, or year. This field is only available if you're scraping Posts.

To see the full list of parameters, their default values, and learn how to set your own values, check the Input Schema tab.
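A small client-side sanity check for the search fields described above can catch common mistakes before a run. The allowed values here are inferred from the parameter descriptions; the Input Schema tab is the authoritative source.

```python
# Allowed values inferred from the parameter descriptions above;
# the Input Schema tab is authoritative.
ALLOWED_SORT = {"relevance", "hot", "top", "new", "comments"}
ALLOWED_TIME = {"hour", "day", "week", "month", "year", "all"}

def check_search_input(run_input: dict) -> list[str]:
    """Return a list of problems found in a search-style input."""
    problems = []
    # searches is ignored when startUrls is present (see the note above).
    if "startUrls" in run_input and "searches" in run_input:
        problems.append("searches is ignored when startUrls is set")
    if run_input.get("sort") not in ALLOWED_SORT | {None}:
        problems.append(f"unexpected sort value: {run_input.get('sort')}")
    if run_input.get("time") not in ALLOWED_TIME | {None}:
        problems.append(f"unexpected time value: {run_input.get('time')}")
    return problems

print(check_search_input({"searches": ["parrots"], "sort": "new", "time": "all"}))  # → []
```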

Input example:

In the example below, you can see what the input would look like if you were scraping all Reddit communities and users containing the keyword parrots. Note that the results are sorted newest first.

{
  "maxItems": 10,
  "maxPostCount": 10,
  "maxComments": 10,
  "maxCommunitiesCount": 10,
  "scrollTimeout": 40,
  "proxy": { "useApifyProxy": true },
  "debugMode": false,
  "searches": ["parrots"],
  "dataType": "communities_and_users",
  "sort": "new",
  "time": "all"
}


Output

Every time you scrape Reddit, the output data is stored in a dataset. Each post, comment, user, or community is saved as an item inside the dataset. At the end of each actor run, you can download the extracted data onto your computer or export it to any web app in various data formats (JSON, CSV, XML, RSS, HTML Table). Below, you can find a few examples of the outputs you can get for different types of inputs:
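Since every item carries a dataType field ("post", "comment", "user", or "community", as in the examples below), a JSON export can be grouped by type with a few lines of stdlib Python. This is a hedged sketch: the inline export here is a trimmed-down stand-in for a real dataset download.

```python
import json

# Stand-in for a real dataset export: a JSON array of items,
# each with a "dataType" field as in the examples below.
export = json.loads("""[
  {"id": "ss5c25", "dataType": "post"},
  {"username": "Acct-404", "dataType": "comment"},
  {"title": "Pizza", "dataType": "community"}
]""")

# Group items by their dataType.
by_type: dict[str, list] = {}
for item in export:
    by_type.setdefault(item["dataType"], []).append(item)

print({k: len(v) for k, v in by_type.items()})  # → {'post': 1, 'comment': 1, 'community': 1}
```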

Example Reddit post

{
  "id": "ss5c25",
  "title": "Weekly Questions Thread / Open Discussion",
  "description": "For any questions regarding dough, sauce, baking methods, tools, and more, comment below.You can also post any art, tattoos, comics, etc here. Keep it SFW, though.As always, our wiki has a few sauce recipes and recipes for dough.Feel free to check out threads from weeks ago.This post comes out every Monday and is sorted by 'new'.",
  "numberOfVotes": "4",
  "createdAt": "3 days ago",
  "scrapedAt": "2022-01-09T22:52:48.489Z",
  "username": "u/AutoModerator",
  "numberOfComments": "19",
  "mediaElements": [],
  "tag": "HELP",
  "dataType": "post"
}

Example Reddit comment

{
  "url": "",
  "username": "Acct-404",
  "createdAt": "9 h ago",
  "scrapedAt": "2022-03-09T12:52:48.547Z",
  "description": "Raises handUhhhh can I get some cheese on my pizza please?",
  "numberOfVotes": "3",
  "postUrl": "",
  "postId": "sud2hm",
  "dataType": "comment"
}

Example Reddit user

{
  "id": "orzoy0j1",
  "url": "",
  "username": "PizzaPizzaPizzaz",
  "userIcon": "",
  "description": "",
  "over18": false,
  "createdAt": "2022-06-20T05:37:33.000Z",
  "scrapedAt": "2023-05-17T03:32:58.817Z",
  "dataType": "user"
}

Example Reddit community

{
  "title": "Pizza",
  "alternativeTitle": "r/Pizza",
  "createdAt": "Created Aug 26, 2008",
  "scrapedAt": "2022-03-09T12:54:42.721Z",
  "members": 366000,
  "moderatos": ["6745408", "AutoModerator", "BotTerminator", "DuplicateDestroyer"],
  "url": "",
  "dataType": "community",
  "categories": ["hot", "new", "top", "rising"]
}
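Note that in the sample outputs above, counts such as numberOfVotes and numberOfComments are strings, while members is already an integer. A small helper can normalize them for analysis. The abbreviated "1.2k" form is an assumption (Reddit renders abbreviated counts in its UI, so scraped values may use it); adjust to what your actual export contains.

```python
def to_count(value) -> int:
    """Normalize a scraped count to an int.

    Handles ints as-is, plain numeric strings ("4"), and abbreviated
    forms ("1.2k", "2m") — the abbreviated forms are an assumption
    about possible scraped values, not a documented guarantee.
    """
    if isinstance(value, int):
        return value
    text = str(value).strip().lower()
    if text.endswith("k"):
        return int(float(text[:-1]) * 1_000)
    if text.endswith("m"):
        return int(float(text[:-1]) * 1_000_000)
    return int(text)

print(to_count("4"), to_count("1.2k"), to_count(366000))  # → 4 1200 366000
```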

Notes for developers

Limiting results with maxItems

If you need to limit the scope of your search, you can do that by setting the max number of posts you want to scrape inside a community or user.

However, with Free Reddit Scraper, you are only able to scrape a maximum of 10 results for each specified field. If you need to overcome this limit, consider using our Unlimited Reddit Scraper.

{
  "maxPostCount": 50,
  "maxComments": 10,
  "maxCommunitiesCount": 5,
  "maxUserCount": 2
}
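Since the free actor returns at most 10 results per field, values above 10 (like maxPostCount: 50 in the input above) are effectively capped. A minimal sketch of that cap, applied client-side so your input reflects what you will actually get:

```python
# Free Reddit Scraper caps each limit at 10 results (see above);
# this clamps the limit fields so the input matches the effective behavior.
FREE_CAP = 10
LIMIT_FIELDS = ("maxPostCount", "maxComments", "maxCommunitiesCount", "maxUserCount")

def clamp_limits(run_input: dict) -> dict:
    clamped = dict(run_input)
    for field in LIMIT_FIELDS:
        if field in clamped:
            clamped[field] = min(clamped[field], FREE_CAP)
    return clamped

print(clamp_limits({"maxPostCount": 50, "maxComments": 10,
                    "maxCommunitiesCount": 5, "maxUserCount": 2}))
# → {'maxPostCount': 10, 'maxComments': 10, 'maxCommunitiesCount': 5, 'maxUserCount': 2}
```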

See the Input Schema tab for the full list of parameters you can use to restrict Free Reddit Scraper, including:

maxItems, maxPostCount, maxComments, maxCommunitiesCount