Reddit Scraper For Posts & Comments

1 day trial then $25.00/month - No credit card required now

creative_tablecloth/reddit-scraper-for-posts

Access Reddit data without authentication. Quickly extract detailed information from Reddit posts and comments, efficiently and cost-effectively (approximately $0.015 per 1,000 results).

Overview

This is an unofficial Reddit API that provides seamless, unrestricted access to Reddit data without the need for authentication. With its straightforward setup and minimal input requirements, you can easily extract detailed information from specific Reddit posts and their associated comments.

Key Features

  • No Authentication Needed: Access a wealth of data without the need to log in.
  • Super Simple Setup: Start scraping with just a single required field: the post URL.
  • Cost-Effective: Approximately $0.015 per 1,000 results.
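As a rough illustration of the pricing above, a small helper (our own sketch, not part of the Actor) can estimate the cost of a run from the advertised rate of about $0.015 per 1,000 results:

```python
def estimate_cost(num_results: int, rate_per_thousand: float = 0.015) -> float:
    """Estimate the run cost in USD at the advertised per-result rate."""
    return num_results / 1000 * rate_per_thousand

# For example, scraping 50,000 posts and comments:
print(f"${estimate_cost(50_000):.2f}")  # → $0.75
```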

Use Case

This API is specially designed to extract detailed data from a specific Reddit post URL, focusing on the posts and associated comments.
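Since the only required input is a post URL, a run's input might look like the sketch below. The exact field name in the input schema is not shown on this page, so `url` here is an assumption for illustration; the URL itself is taken from the sample output further down.

```json
{
  "url": "https://reddit.com/r/MadeMeSmile/comments/1dawsz6/dad_surprised_daughter_in_the_airplane/"
}
```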

Data Extraction Details

The API extracts a variety of user-friendly data points from Reddit posts and comments:

  • Posts: Includes title, content, creation time, upvotes, total number of comments, URL to the post, and media details (if any).
  • Comments: Details include the comment text, timestamp, upvotes, and direct URL to the comment.
  • User Info: Usernames and user IDs of the post and comment authors.
  • Community Insights: Name of the subreddit, number of members, and subreddit URL.
  • Media Information: Links to any associated images, videos, and whether the post contains media.
  • Miscellaneous Info: Information about whether the post is an advertisement, pinned, for adults only, etc.
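Post and comment records share one output list and can be told apart by their `contentType` field. A minimal sketch (field names taken from the sample output below; the helper name is ours):

```python
def split_records(records):
    """Partition scraper output into post records and comment records."""
    posts = [r for r in records if r.get("contentType") == "post"]
    comments = [r for r in records if r.get("contentType") == "comment"]
    return posts, comments

# Abbreviated records from the sample output below:
sample = [
    {"contentType": "post", "id": "t3_1dawsz6", "upvotes": 9578},
    {"contentType": "comment", "id": "t1_l7n8cbx", "upvotes": 1808},
]
posts, comments = split_records(sample)
```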

Data Output Formats

  • JSON
  • XML
  • CSV
  • Excel
  • HTML
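The formats above are exported by the platform itself, but if you download results as JSON you can also flatten them to CSV locally. A minimal sketch using only the Python standard library (the helper name is ours, not part of the Actor):

```python
import csv
import io

def records_to_csv(records):
    """Render a list of flat dicts as CSV text, using the union of keys as the header."""
    fieldnames = sorted({key for record in records for key in record})
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)  # missing keys become ""
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

Posts and comments have slightly different fields (e.g. `noOfcomments` vs `noOfreplies`), so taking the union of keys keeps both record types in one table.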

Example Applications

  • Brand Monitoring: Keep tabs on how your brand or products are discussed across various Reddit communities.
  • Market Research: Gather and analyze discussions and sentiments on a wide array of topics from different subreddits.
  • Trend Analysis: Detect emerging trends and shifts in public opinion in real-time.

Data Output Example

Below is a sample of the JSON output provided by the API, demonstrating the data structure for posts and comments:

[
  {
    "contentType": "post",
    "createdAt": "2024-06-07T18:10:58Z",
    "id": "t3_1dawsz6",
    "parseId": "1dawsz6",
    "subreddit": "MadeMeSmile",
    "communityName": "r/MadeMeSmile",
    "authorId": "t2_g19v0vw3u",
    "author": "somethingdeido",
    "title": "Dad surprised daughter in the airplane",
    "body": "",
    "upvotes": 9578,
    "noOfcomments": 124,
    "communityMembers": 9547058,
    "url": "https://reddit.com/r/MadeMeSmile/comments/1dawsz6/dad_surprised_daughter_in_the_airplane/",
    "thumbnailUrl": "https://external-preview.redd.it/OGRpcXhydHZvYTVkMfEOqfXk-XD_VCqP5Tc4o87o6R-9nDdJPvSBaXDLmdsR.png",
    "videoUrl": "https://v.redd.it/eaa45s5woa5d1/DASH_480.mp4",
    "isVideo": true,
    "isAd": false,
    "isPinned": false,
    "isOver18": false
  },
  {
    "contentType": "comment",
    "createdAt": "2024-06-07T19:18:36Z",
    "id": "t1_l7n8cbx",
    "parseId": "l7n8cbx",
    "parentId": "t3_1dawsz6",
    "postId": "t3_1dawsz6",
    "subreddit": "MadeMeSmile",
    "communityName": "r/MadeMeSmile",
    "authorId": "t2_74rjf",
    "author": "erayachi",
    "body": "I friggin' love her going to confront this weirdo, then the look of recognition. This is so damn sweet, my heart's gonna pop.",
    "upvotes": 1808,
    "noOfreplies": 6,
    "url": "https://reddit.com/r/MadeMeSmile/comments/1dawsz6/dad_surprised_daughter_in_the_airplane/l7n8cbx/"
  }
]

Developer
Maintained by Community
Actor metrics
  • 10 monthly users
  • 1 star
  • 70.8% runs succeeded
  • Created in Jun 2024
  • Modified 14 days ago