Reddit Scraper
3-day trial, then $40.00/month - No credit card required now
Tap into the wealth of Reddit's data with our Reddit Scraper. Extract posts, subreddits, comments, and user data effortlessly, and gain valuable insights from the diverse Reddit community with our user-friendly, efficient tool.
Actor - Reddit Scraper
Reddit, the popular social media platform, is home to a vast array of discussions, communities, and user-generated content. From thought-provoking discussions to entertaining memes, Reddit offers a wealth of information and engagement. However, extracting data from Reddit programmatically can be a complex task due to the platform's structure and limitations.
Introducing our Reddit scraper, a versatile tool designed to extract valuable data from Reddit effortlessly. Our scraper can fetch posts, subreddits, comments, and user information, enabling you to access and analyze the content and interactions happening on Reddit.
Since Reddit's official API can be prohibitively expensive to use for many individuals and organizations, our super-fast scraper provides an affordable solution for fetching Reddit data without breaking the bank.
The Reddit data scraper supports the following features:
- Fetch anything - Our scraper allows you to retrieve data from any Reddit URL. Whether it's a search query, a specific subreddit, a post, a user profile, or even the Reddit homepage itself, you can utilize our scraper to scrape the desired data. Simply input the URL, and our scraper will gather the relevant information for your analysis.
- Search - With our scraper, you can perform advanced searches on Reddit using various parameters. You have the flexibility to specify keywords, search modes, sorting options, and time frames to refine your search results. This feature enables you to extract specific content from Reddit based on your criteria (see the sketch after this list).
- Scrape homepage - Our scraper can extract data from Reddit's homepage, allowing you to gather the latest and most popular content from across the platform. Stay up to date with trending topics, viral posts, and engaging discussions by scraping Reddit's homepage effortlessly.
- Scrape popular posts - If you're interested in fetching popular posts, our scraper provides the functionality to extract the most upvoted and widely discussed content on Reddit. By leveraging this feature, you can gather a curated collection of highly engaging posts for further analysis or exploration.
- Scrape subreddits - Our scraper enables you to fetch entire subreddits along with all the posts they contain. This capability allows you to gather comprehensive data from specific communities on Reddit, providing insights into their discussions, trends, and user interactions.
- Scrape users - Extracting user information is made easy with our scraper. You can retrieve data related to Reddit users, including their comments and posts. Analyze user behavior, track their contributions, and gain a deeper understanding of the Reddit community through comprehensive user data scraping.
- Scrape any post - Our scraper gives you the flexibility to fetch any specific post of your choice. Simply provide the URL of the desired post, and our scraper will extract the content, comments, and other associated data for your analysis.
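For example, a keyword search using the Search feature above boils down to a handful of input fields. The sketch below expresses such an input as a Python dictionary; the keyword and limits are purely illustrative, and the field names follow the Input Parameters section further down.

```python
# Hedged sketch: a keyword-search input expressed as a Python dictionary.
# The keyword and limits are examples only; field names are documented
# in the Input Parameters section below.
run_input = {
    "search": "data engineering",    # keyword to search on Reddit
    "searchMode": "Posts",           # Posts, Communities, or People
    "sort": "Top",                   # Relevance, Hot, Top, New, Most Comments
    "time": "Past Week",             # time range applied to the sort
    "maxItems": 100,                 # stop after 100 scraped items
    "proxy": {"useApifyProxy": True},
}
```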
Bugs, fixes, updates, and changelog
This scraper is under active development. If you have any feature requests, you can create an issue here.
Input Parameters
The input of this scraper should be JSON containing the list of Reddit pages that should be visited. The fields are:
- `search`: (Optional) (String) Keyword that you want to search on Reddit.
- `startUrls`: (Optional) (Array) List of Reddit URLs. You should only provide list pages or detail page URLs.
- `searchMode`: (Optional) (String) Type of search you want to make on Reddit. (`Posts`, `Communities`, `People`)
- `sort`: (Optional) (String) Specifies how you want to sort your results. (`Relevance`, `Hot`, `Top`, `New`, `Most Comments`)
- `time`: (Optional) (String) Specifies the time range for sorting your results. (`All Time`, `Past Year`, `Past Month`, `Past Week`, `Past 24 Hours`, `Past Hour`)
- `includeComments`: (Optional) (Boolean) Specifies whether to include comments within the posts.
- `endPage`: (Optional) (Number) Final page number that you want to scrape. The default is `Infinite`. This applies to all `search` requests and to each of the `startUrls` individually.
- `maxItems`: (Optional) (Number) Limits the number of scraped items. This is useful when you scrape big lists or search results.
- `proxy`: (Required) (Proxy Object) Proxy configuration.
- `extendOutputFunction`: (Optional) (String) Function that takes a jQuery handle (`$`) as an argument and returns an object with data.
- `customMapFunction`: (Optional) (String) Function that takes each scraped object as an argument and returns the object after the function has been applied.
This solution requires the use of proxy servers, either your own or Apify Proxy.
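Putting the parameters together, the sketch below shows one possible way to start a run from Python with the apify-client package. The actor identifier "epctex/reddit-scraper" is an assumption here (replace it with the actual actor name or ID from your Apify console), and the two function bodies are illustrative only.

```python
# Hedged sketch: starting a run of this Reddit Scraper with the Apify Python client.
# Requires: pip install apify-client
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")  # your personal Apify API token

run_input = {
    "startUrls": ["https://www.reddit.com/r/popular/top"],
    "includeComments": False,
    "endPage": 2,
    "maxItems": 200,
    "proxy": {"useApifyProxy": True},  # proxy configuration is required
    # Both functions are passed as strings and evaluated by the scraper;
    # the bodies below are illustrative only.
    "extendOutputFunction": "($) => { return { pageTitle: $('title').text() } }",
    "customMapFunction": "(object) => { return { ...object } }",
}

# "epctex/reddit-scraper" is an assumed actor identifier -- check your console.
run = client.actor("epctex/reddit-scraper").call(run_input=run_input)
print("Run finished, dataset ID:", run["defaultDatasetId"])
```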
Tip
When you want to scrape a specific list URL, just copy and paste the link as one of the `startUrls`.
If you would like to scrape only the first page of a list, provide the link to that page and set `endPage` to 1.
With this approach you can also fetch any interval of pages. If you provide the 5th page of a list and set the `endPage` parameter to 6, you'll get the 5th and 6th pages only.
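A minimal sketch of the page-interval tip above, again as a Python input dictionary; the start URL is a placeholder for whatever 5th-page link you copied from Reddit:

```python
# Hedged sketch: scraping only pages 5 and 6 of a listing.
run_input = {
    "startUrls": [
        # paste the exact link of the 5th page of the list here;
        # the placeholder below is not a real pagination URL
        "<link-to-the-5th-page-of-the-list>",
    ],
    "endPage": 6,  # with the 5th page as the start, only pages 5 and 6 are scraped
    "proxy": {"useApifyProxy": True},
}
```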
Compute Unit Consumption
The actor is optimized to run blazing fast and scrape as many items as possible. Therefore, it front-loads all the detail requests. If the actor is not blocked too often, it will scrape 100 listings in about 2 minutes using ~0.03-0.05 compute units.
Reddit Scraper Input example
1{ 2 "startUrls": [ 3 "https://www.reddit.com/", 4 "https://www.reddit.com/hot", 5 "https://www.reddit.com/best", 6 "https://www.reddit.com/rising", 7 "https://www.reddit.com/r/popular", 8 "https://www.reddit.com/r/popular/hot", 9 "https://www.reddit.com/r/popular/new", 10 "https://www.reddit.com/r/popular/top", 11 "https://www.reddit.com/r/popular/rising", 12 "https://www.reddit.com/new", 13 "https://www.reddit.com/top/?t=hour", 14 "https://www.reddit.com/r/crypto", 15 "https://www.reddit.com/r/crypto/hot", 16 "https://www.reddit.com/r/crypto/new", 17 "https://www.reddit.com/r/crypto/top/?t=week", 18 "https://www.reddit.com/subreddits", 19 "https://www.reddit.com/subreddits/new", 20 "https://www.reddit.com/users", 21 "https://www.reddit.com/user/nationalgeographic", 22 "https://www.reddit.com/user/lukaskrivka/comments", 23 "https://www.reddit.com/user/lukaskrivka/comments/?sort=new", 24 "https://www.reddit.com/user/lukaskrivka/comments/?sort=hot", 25 "https://www.reddit.com/user/lukaskrivka/comments/?sort=top&t=day", 26 "https://www.reddit.com/user/lukaskrivka/submitted", 27 "https://www.reddit.com/user/lukaskrivka/submitted/?sort=new", 28 "https://www.reddit.com/user/lukaskrivka/submitted/?sort=hot", 29 "https://www.reddit.com/user/lukaskrivka/submitted/?sort=top&t=week", 30 "https://www.reddit.com/r/redditisfun/comments/13wxepd/rif_dev_here_reddits_api_changes_will_likely_kill" 31 ], 32 "search": "", 33 "searchMode": "user", 34 "sort": "top", 35 "time": "year", 36 "endPage": 2000, 37 "maxItems": 10000, 38 "includeComments": true, 39 "proxy": { 40 "useApifyProxy": true 41 } 42}
During the Run
During the run, the actor will output messages letting you know what is going on. Each message always contains a short label specifying which page from the provided list is currently being processed. When items are loaded from a page, you should see a message with the loaded item count and the total item count for that page.
If you provide incorrect input to the actor, it will immediately stop with a failure state and output an explanation of what is wrong.
Reddit Export
During the run, the actor stores results into a dataset. Each scraped object is stored as a separate item in the dataset.
You can manage the results in any language (Python, PHP, Node JS/NPM). See the FAQ or our API reference to learn more about getting results from this Reddit actor.
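For example, with the Apify Python client you can page through the dataset of a finished run as sketched below; the token and dataset ID are placeholders you take from your Apify console (or from the run object's `defaultDatasetId`).

```python
# Hedged sketch: downloading scraped Reddit items from a run's dataset.
# Requires: pip install apify-client
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Replace with the dataset ID of your run (visible in the Apify console,
# or available as run["defaultDatasetId"] after calling the actor).
dataset_id = "YOUR_DATASET_ID"

for item in client.dataset(dataset_id).iterate_items():
    # every item carries a "type" field: community, post, comment, or user
    print(item["type"], item.get("url"))
```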
Scraped Reddit Properties
The structure of each item on Reddit looks like this:
Community
1{ 2 "type": "community", 3 "id": "2qh13", 4 "url": "https://www.reddit.com/r/worldnews/", 5 "title": "World News", 6 "icon": "https://styles.redditmedia.com/t5_2qh13/styles/communityIcon_pldiwqvsyns91.png?width=256&v=enabled&s=3088f291b089bb5bc15599d429a759b258c6cbd5", 7 "subscribers": 32018938, 8 "bannerImage": "https://styles.redditmedia.com/t5_2qh13/styles/bannerBackgroundImage_5q0f5lsk6pu01.png?width=4000&v=enabled&s=bdfa7d8903b848b0f7885868e8228652a1d3ee9d", 9 "subredditType": "public", 10 "headerTitle": "News from Planet Earth", 11 "publicDescription": "A place for major news from around the world, excluding US-internal news.", 12 "publicDescriptionHTML": "<!-- SC_OFF --><div class=\"md\"><p>A place for major news from around the world, excluding US-internal news.</p>\n</div><!-- SC_ON -->", 13 "description": ">>> - **Other Subs:**\n\n>>> - [Related](http://goo.gl/ztbbza)\n>>> - /r/News\n>>> - /r/PoliticalDiscussion\n>>> - /r/WorldEvents\n>>> - /r/GeoPolitics\n>>> - /r/IntheNews\n>>> - /r/GlobalTalk \n>>> - /r/Breakingnews \n>>> - /r/Business\n>>> - /r/Economics\n>>> - /r/Environment\n>>> - /r/History\n>>> - /r/HumanRights\n>>> - /r/Features\n>>> - /r/UpliftingNews\n>>> - /r/NewsOfTheWeird\n>>> - /r/FakeNews\n>>> - /r/ID_News \n\n>>> - [N. America](https://goo.gl/dkfVnB)\n>>> - /r/Politics\n>>> - /r/USA\n>>> - /r/USANews\n>>> - /r/Canada\n>>> - /r/CanadaPolitics\n>>> - /r/OnGuardForThee\n>>> - /r/Cuba\n>>> - /r/Mexico\n>>> - /r/PuertoRico\n\n>>> - [S. America](https://goo.gl/DDaqmY)\n>>> - /r/Argentina\n>>> - /r/Brasil\n>>> - /r/Chile\n>>> - /r/Colombia\n>>> - /r/Ecuador\n>>> - /r/Guyana\n>>> - /r/Nicaragua\n>>> - /r/PanAmerica \n>>> - /r/Suriname \n>>> - /r/Uruguay\n>>> - [/r/Venezuela](/r/vzla)\n\n>>> - [Europe](https://goo.gl/ZF5rou)\n>>> - /r/Armenia\n>>> - /r/Austria\n>>> - /r/Azerbaijan\n>>> - /r/Belgium\n>>> - [/r/Bosnia](/r/BiH)\n>>> - /r/Bulgaria\n>>> - /r/Croatia\n>>> - /r/Denmark\n>>> - /r/Europe\n>>> - /r/Finland\n>>> - /r/France\n>>> - [/r/Georgia](/r/sakartvelo)\n>>> - /r/Germany\n>>> - /r/Greece\n>>> - /r/Hungary\n>>> - /r/Ireland\n>>> - /r/Italy\n>>> - /r/Moldova\n>>> - /r/TheNetherlands\n>>> - /r/Poland\n>>> - /r/Polska\n>>> - /r/Portugal\n>>> - /r/Romania\n>>> - /r/Scotland\n>>> - /r/Serbia\n>>> - /r/Spain\n>>> - /r/Sweden\n>>> - /r/Switzerland\n>>> - /r/Turkey\n>>> - /r/UnitedKingdom\n>>> - /r/UKPolitics\n>>> - /r/Ukraina\n>>> - /r/Ukraine\n>>> - /r/UkrainianConflict\n\n>>> - [Asia](https://goo.gl/az3Ygk)\n>>> - /r/Afghanistan\n>>> - /r/Bangladesh\n>>> - /r/China\n>>> - /r/India\n>>> - /r/Kazakhstan \n>>> - /r/Malaysia\n>>> - /r/Myanmar\n>>> - /r/Nepal\n>>> - /r/NorthKoreaNews\n>>> - /r/Pakistan\n>>> - /r/Philippines\n>>> - /r/Singapore\n\n>>> - /r/Thailand\n>>> - /r/Turkey\n\n>>> - [Middle East](https://goo.gl/Ut3syV)\n>>> - /r/Assyria\n>>> - /r/Iran\n>>> - /r/Iranian\n>>> - /r/Iraq\n>>> - /r/Israel\n>>> - /r/Kurdistan\n>>> - /r/LevantineWar\n>>> - /r/MiddleEastNews\n>>> - /r/MideastPeace\n>>> - /r/Palestine\n>>> - /r/Syria\n>>> - /r/Yemen\n>>> - /r/YemeniCrisis\n\n>>> - [Africa](https://goo.gl/FgJ4Na)\n>>> - /r/Africa\n>>> - /r/Namibia \n>>> - /r/SouthAfrica\n\n>>> - [Oceania](https://goo.gl/5pz9uS)\n>>> - /r/Australia\n>>> - /r/Fijian\n>>> - /r/NewZealand\n>>> - /r/Oceania\n>>> - /r/Westpapua\n\n## **Filter out dominant topics:**\n\n[Display COVID-19 submissions](http://www.reddit.com/r/worldnews#nc)\n\n[Filter COVID-19](http://nc.reddit.com/r/worldnews#www)\n\n[Display Russia/Ukraine submissions](http://www.reddit.com/r/worldnews#nr)\n\n[Filter 
Russia/Ukraine](http://nr.reddit.com/r/worldnews#www)\n\n[Display North Korea submissions](http://www.reddit.com/r/worldnews#nk)\n\n[Filter North Korea](http://nk.reddit.com/r/worldnews#www)\n\n[Display Afghanistan submissions](http://www.reddit.com/r/worldnews#nh)\n\n[Display Israel/Palestine submissions](http://www.reddit.com/r/worldnews#ni)\n\n[Filter Israel / Palestine](http://ni.reddit.com/r/worldnews#www)\n\n[Display all submissions](http://www.reddit.com/r/worldnews#iu)\n\n[Filter all dominant topics](http://iu.reddit.com/r/worldnews#www)\n\n#### [](#h4-green)\n># Welcome!\n>\n> /r/worldnews is for major news from around the world except US-internal news / US politics\n\n> [See all of our AMA events here] (http://www.reddit.com/r/worldnews/wiki/ama)\n\n###### [](#h6-red)\n>#Worldnews Rules\n>\n>\n>### **Disallowed submissions**\n>\n> * US internal news/US politics\n> * Editorialized titles\n> * Misleading titles\n> * Editorials, opinion, analysis\n> * Feature stories\n> * Non-English articles\n> * Images, videos or audio clips\n> * Petitions, advocacy, surveys\n> * All caps words in titles\n> * Blogspam (if stolen content/direct copy)\n> * Twitter, Facebook, Tumblr\n> * Old news (≥1 week old) articles\n\n>[See the wiki](/r/worldnews/wiki/rules#wiki_disallowed_submissions) for details on each rule\n\n>### **Disallowed comments**\n>\n> * Bigotry / Other offensive content\n> * Personal attacks on other users\n> * Memes/GIFs\n> * Unlabeled NSFW images/videos\n> * URL shorteners\n> * Celebrating death/Advocating violence\n> * Genocide denial/downplaying genocide\n> * Disinformation/misinformation\n> * Health disinformation/misinformation\n\n>[See the wiki](/r/worldnews/wiki/rules#wiki_disallowed_comments) for details on each rule\n\n>[Guidelines for the media](/r/worldnews/wiki/media)\n\n> Violation of our rules may result in a ban from this subreddit. Untimed bans may be lifted when the moderators are confident that you will not continue to infringe on the community rules.\n\n>\n>----\n>\n>**Please don't ever feed the trolls.** \n>Downvote, report and move on.\n>\n>----\n>\n>* [**What moderators do and can't do**](/r/worldnews/wiki/whatmodsdo)\n>* [**Message the moderators**](http://www.reddit.com/message/compose?to=%2Fr%2Fworldnews)\n\n#### [](#h4-green)\n># Sticky Posts\n\n> • [A list of all recent stickied posts.](/r/worldnews/search?q=author%3AWorldNewsMods+OR+title%3A%22worldnews+live+thread%22+OR+self%3Ayes&restrict_sr=on&sort=new&t=all)\n\n> • **[Daily Live Threads](https://www.reddit.com/user/WorldNewsMods/submitted/?sort=new)**\n>", 14 "submitText": "###### [](#h6-red)\n># DISALLOWED SUBMISSIONS\n>\n> * US internal news/US politics\n> * Editorialized titles\n> * Feature stories\n> * Editorials, opinion, analysis\n> * Non-English articles\n> * Raw images, videos or audio clips\n> * Petitions, advocacy, surveys\n> * No all caps words in titles\n> * Blogspam (if stolen content/direct copy)\n> * Twitter, Facebook, Tumblr\n> * Old news (≥1 week old) articles", 15 "createdAt": 1201231119, 16 "submissionType": "link", 17 "allowedMediaInComments": [ 18 "expression" 19 ], 20 "hideCommentScoreForMins": 90 21}
Post
1{ 2 "type": "post", 3 "id": "t3_13wxepd", 4 "url": "https://www.reddit.com/r/redditisfun/comments/13wxepd/rif_dev_here_reddits_api_changes_will_likely_kill/", 5 "title": "RIF dev here - Reddit's API changes will likely kill RIF and other apps, on July 1, 2023", 6 "flair": null, 7 "flairs": [], 8 "subreddit": "redditisfun", 9 "subredditId": "t5_2rfi7", 10 "author": "talklittle", 11 "authorId": "t2_39mle", 12 "authorFlair": "RIF Dev", 13 "text": "I need more time to get all my thoughts together, but posting this quick post since so many users have been asking, and it's been making rounds on news sites.\n\nSummary of what Reddit Inc has announced so far, specifically the parts that will kill many third-party apps:\n\n1. The Reddit API will cost money, and the pricing announced today will cost apps like [Apollo $20 million per year to run](https://old.reddit.com/r/apolloapp/comments/13ws4w3/had_a_call_with_reddit_to_discuss_pricing_bad/). RIF may differ but it would be in the same ballpark. And no, RIF does not earn anywhere remotely near this number.\n\n2. As part of this they are blocking ads in third-party apps, which make up the majority of RIF's revenue. So they want to force a paid subscription model onto RIF's users. **Meanwhile Reddit's official app still continues to make the vast majority of its money from ads.**\n\n3. Removal of sexually explicit material from third-party apps **while keeping said content in the official app**. Some people have speculated that NSFW is going to leave Reddit entirely, but then why would Reddit Inc have recently [*expanded* NSFW upload support on their desktop site](https://old.reddit.com/r/modnews/comments/13evueo/bringing_image_uploads_to_parity/)?\n\nTheir recent moves smell a lot like they want third-party apps gone, RIF included.\n\nI know some users will chime in saying they are willing to pay a monthly subscription to keep RIF going, but trust me that you would be in the minority. There is very little value in paying a high subscription **for less content** (in this case, NSFW). Honestly if I were a user of RIF and not the dev, I'd have a hard time justifying paying the high prices being forced by Reddit Inc, despite how much RIF obviously means to me.\n\nThere is a lot more I want to say, and I kind of scrambled to write this since I didn't expect news reports today. I'll probably write more follow-up posts that are better thought out. But this is the gist of what's been going on with Reddit third-party apps in 2023.", 14 "textHTML": "<!-- SC_OFF --><div class=\"md\"><p>I need more time to get all my thoughts together, but posting this quick post since so many users have been asking, and it&#39;s been making rounds on news sites.</p>\n\n<p>Summary of what Reddit Inc has announced so far, specifically the parts that will kill many third-party apps:</p>\n\n<ol>\n<li><p>The Reddit API will cost money, and the pricing announced today will cost apps like <a href=\"https://old.reddit.com/r/apolloapp/comments/13ws4w3/had_a_call_with_reddit_to_discuss_pricing_bad/\">Apollo $20 million per year to run</a>. RIF may differ but it would be in the same ballpark. And no, RIF does not earn anywhere remotely near this number.</p></li>\n<li><p>As part of this they are blocking ads in third-party apps, which make up the majority of RIF&#39;s revenue. So they want to force a paid subscription model onto RIF&#39;s users. 
<strong>Meanwhile Reddit&#39;s official app still continues to make the vast majority of its money from ads.</strong></p></li>\n<li><p>Removal of sexually explicit material from third-party apps <strong>while keeping said content in the official app</strong>. Some people have speculated that NSFW is going to leave Reddit entirely, but then why would Reddit Inc have recently <a href=\"https://old.reddit.com/r/modnews/comments/13evueo/bringing_image_uploads_to_parity/\"><em>expanded</em> NSFW upload support on their desktop site</a>?</p></li>\n</ol>\n\n<p>Their recent moves smell a lot like they want third-party apps gone, RIF included.</p>\n\n<p>I know some users will chime in saying they are willing to pay a monthly subscription to keep RIF going, but trust me that you would be in the minority. There is very little value in paying a high subscription <strong>for less content</strong> (in this case, NSFW). Honestly if I were a user of RIF and not the dev, I&#39;d have a hard time justifying paying the high prices being forced by Reddit Inc, despite how much RIF obviously means to me.</p>\n\n<p>There is a lot more I want to say, and I kind of scrambled to write this since I didn&#39;t expect news reports today. I&#39;ll probably write more follow-up posts that are better thought out. But this is the gist of what&#39;s been going on with Reddit third-party apps in 2023.</p>\n</div><!-- SC_ON -->", 15 "gallery": [], 16 "score": 33998, 17 "upvoteRatio": 1, 18 "isOriginal": false, 19 "createdAt": 1685565699, 20 "editedAt": 1685565933, 21 "isOver18": false, 22 "removedByCategory": null, 23 "commentCount": 6265, 24 "awards": [ 25 { 26 "id": "award_3dd248bc-3438-4c5b-98d4-24421fd6d670", 27 "name": "Coin Gift", 28 "description": "Give the gift of %{coin_symbol}250 Reddit Coins.", 29 "icon": "https://i.redd.it/award_images/t5_22cerq/cr1mq4yysv541_CoinGift.png", 30 "count": 1 31 }, 32 { 33 "id": "award_19860e30-3331-4bac-b3d1-bd28de0c7974", 34 "name": "Heartwarming", 35 "description": "I needed this today", 36 "icon": "https://i.redd.it/award_images/t5_22cerq/v1mxw8i6wnf51_Heartwarming.png", 37 "count": 1 38 }, 39 { 40 "id": "award_88fdcafc-57a0-48db-99cc-76276bfaf28b", 41 "name": "Press F", 42 "description": "To pay respects.", 43 "icon": "https://i.redd.it/award_images/t5_22cerq/tcofsbf92md41_PressF.png", 44 "count": 1 45 }, 46 { 47 "id": "award_b92370bb-b7de-4fb3-9608-c5b4a22f714a", 48 "name": "Tree Hug", 49 "description": "Show nature some love.", 50 "icon": "https://i.redd.it/award_images/t5_22cerq/fukjtec638u41_TreeHug.png", 51 "count": 2 52 }, 53 { 54 "id": "award_2ae56630-cfe0-424e-b810-4945b9145358", 55 "name": "Helpful (Pro)", 56 "description": "Thank you stranger. Gives %{coin_symbol}100 Coins to both the author and the community.", 57 "icon": "https://www.redditstatic.com/gold/awards/icon/Animated_Helpful_512.png", 58 "count": 2 59 }, 60 { 61 "id": "award_4ca5a4e6-8873-4ac5-99b9-71b1d5161a91", 62 "name": "Argentium", 63 "description": "Latin for distinguished, this award shimmers like silver and is stronger than steel. It’s for those who deserve outsized recognition. 
Gives 2,500 Reddit Coins and three months of r/lounge access and ad-free browsing.", 64 "icon": "https://www.redditstatic.com/gold/awards/icon/Mithril_512.png", 65 "count": 1 66 }, 67 { 68 "id": "gid_2", 69 "name": "Gold", 70 "description": "Gives 100 Reddit Coins and a week of r/lounge access and ad-free browsing.", 71 "icon": "https://www.redditstatic.com/gold/awards/icon/gold_512.png", 72 "count": 12 73 }, 74 { 75 "id": "gid_3", 76 "name": "Platinum", 77 "description": "Gives 700 Reddit Coins and a month of r/lounge access and ad-free browsing.", 78 "icon": "https://www.redditstatic.com/gold/awards/icon/platinum_512.png", 79 "count": 4 80 }, 81 { 82 "id": "award_02d9ab2c-162e-4c01-8438-317a016ed3d9", 83 "name": "Take My Energy", 84 "description": "I'm in this with you.", 85 "icon": "https://i.redd.it/award_images/t5_q0gj4/p4yzxkaed5f61_oldtakemyenergy.png", 86 "count": 3 87 }, 88 { 89 "id": "award_58ef8551-8c27-4f03-afa5-748432194e3d", 90 "name": "Defeated", 91 "description": "The process of taking a painful L", 92 "icon": "https://i.redd.it/award_images/t5_22cerq/ooo0r2cq7q161_Defeated.png", 93 "count": 3 94 }, 95 { 96 "id": "award_9583d210-a7d0-4f3c-b0c7-369ad579d3d4", 97 "name": "Mind Blown", 98 "description": "When a thing immediately combusts your brain. Gives %{coin_symbol}100 Coins to both the author and the community.", 99 "icon": "https://i.redd.it/award_images/t5_22cerq/wa987k0p4v541_MindBlown.png", 100 "count": 1 101 }, 102 { 103 "id": "award_43c43a35-15c5-4f73-91ef-fe538426435a", 104 "name": "Bless Up (Pro)", 105 "description": "Prayers up for the blessed. Gives %{coin_symbol}100 Coins to both the author and the community.", 106 "icon": "https://i.redd.it/award_images/t5_22cerq/xe5mw55w5v541_BlessUp.png", 107 "count": 1 108 }, 109 { 110 "id": "award_abcdefe4-c92f-4c66-880f-425962d17098", 111 "name": "Burning Cash", 112 "description": "I don't need it, I don't even necessarily want it, but I've got some cash to burn so I'm gonna get it.", 113 "icon": "https://i.redd.it/award_images/t5_22cerq/kqr00h8b7q161_BurningCash.png", 114 "count": 1 115 }, 116 { 117 "id": "award_31260000-2f4a-4b40-ad20-f5aa46a577bf", 118 "name": "Timeless Beauty", 119 "description": "Beauty that's forever. Gives %{coin_symbol}100 Coins each to the author and the community.", 120 "icon": "https://www.redditstatic.com/gold/awards/icon/Timeless_512.png", 121 "count": 1 122 }, 123 { 124 "id": "award_28e8196b-d4e9-45bc-b612-cd4c7d3ed4b3", 125 "name": "Rocket Like", 126 "description": "When an upvote just isn't enough, smash the Rocket Like.", 127 "icon": "https://i.redd.it/award_images/t5_q0gj4/35d17tf5e5f61_oldrocketlike.png", 128 "count": 1 129 }, 130 { 131 "id": "award_8352bdff-3e03-4189-8a08-82501dd8f835", 132 "name": "Hugz", 133 "description": "Everything is better with a good hug", 134 "icon": "https://i.redd.it/award_images/t5_q0gj4/ks45ij6w05f61_oldHugz.png", 135 "count": 1 136 }, 137 { 138 "id": "award_b4ff447e-05a5-42dc-9002-63568807cfe6", 139 "name": "All-Seeing Upvote", 140 "description": "A glowing commendation for all to see", 141 "icon": "https://www.redditstatic.com/gold/awards/icon/Illuminati_512.png", 142 "count": 10 143 } 144 ] 145}
Comment
1{ 2 "type": "comment", 3 "id": "j2hu3o9", 4 "url": "https://www.reddit.com/r/keto/comments/zzzv8m/expectations_too_high/j2hu3o9/", 5 "author": "lukaskrivka", 6 "authorId": "t2_4y22fmn1", 7 "authorFlair": null, 8 "isSubmitter": false, 9 "body": "It has to be individual because when I tried keto 7 years ago, my anxiety and depression almost disappeared after like 1 week, it was totally magical and unexpected. I guess it depends on your body and what is the main cause of your condition.\n\nI want to try it again this year as my mood got worse again over years.", 10 "bodyHTML": "<div class=\"md\"><p>It has to be individual because when I tried keto 7 years ago, my anxiety and depression almost disappeared after like 1 week, it was totally magical and unexpected. I guess it depends on your body and what is the main cause of your condition.</p>\n\n<p>I want to try it again this year as my mood got worse again over years.</p>\n</div>", 11 "createdAt": 1672575589, 12 "editedAt": false, 13 "score": 2, 14 "parentId": "t3_zzzv8m", 15 "postUrl": "https://www.reddit.com/r/keto/comments/zzzv8m/expectations_too_high/", 16 "postTitle": "Expectations too high?", 17 "postId": "t3_zzzv8m", 18 "subreddit": "keto", 19 "subredditId": "t5_2rske", 20 "awards": [] 21}
User
1{ 2 "type": "user", 3 "id": "129jlh", 4 "url": "https://www.reddit.com/user/MikeMikeGaming", 5 "name": "MikeMikeGaming", 6 "title": "Opanic", 7 "publicDescription": "", 8 "icon": "https://styles.redditmedia.com/t5_bqe06/styles/profileIcon_snoo-nftv2_bmZ0X2VpcDE1NToxMzdfYTMzOTZhZjIwY2U1MmJkM2M3YWI2ZDcwNDZiZTYxNzI1N2Y2MGViOV80NjA0_rare_e94a979b-b6ad-489a-8b25-87226fc848bd-headshot.png?width=256&height=256&crop=256:256,smart&v=enabled&s=41b559dbf2ba43f05b8c483d54b220266f5dd873", 9 "commentKarma": 16543, 10 "linkKarma": 16132, 11 "createdAt": 1476985743, 12 "isGold": false, 13 "isMod": true, 14 "isOver18": false, 15 "isVerified": true, 16 "hasVerifiedEmail": true, 17 "isEmployee": false, 18 "acceptsFollowers": true 19}
Contact
Please visit us at epctex.com to see all the products that are available for you. If you are looking for a custom integration, please reach out to us through the chat box on epctex.com. In need of support? devops@epctex.com is at your service.