
YouTube Comments Scraper
Pricing
from $1.30 / 1,000 comments

This alternative to the YouTube Data API has no limits or quotas. Extract YouTube comments data from one or multiple YouTube videos: full comment text, posting date, author username, video title, and videoId. Download YouTube comments in JSON, CSV, or Excel format.
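For anyone driving the scraper programmatically rather than through the UI, the run input is just a plain object passed when starting an actor run. A minimal sketch follows; the field names `videoUrls` and `maxResults`, the placeholder token, and the actor ID are assumptions based on this listing and the thread below, not a confirmed input schema:

```python
def build_run_input(video_urls, max_results=100):
    """Assemble the run input for the comments scraper.

    Field names here are illustrative assumptions, not the
    actor's confirmed input schema.
    """
    return {
        "videoUrls": list(video_urls),  # one or multiple YouTube videos
        "maxResults": max_results,      # cap on comments per video
    }

# Hypothetical usage with the Apify Python client (requires an API token):
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# run = client.actor("<ACTOR_ID>").call(run_input=build_run_input([url]))
# for item in client.dataset(run["defaultDatasetId"]).iterate_items():
#     print(item)  # comment text, posting date, author, video title, videoId
```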
4.8 (8)
Total users: 6.3K
Monthly users: 1.1K
Runs succeeded: 99%
Issues response: 20 hours
Last modified: 4 days ago
Constant "I encountered an issue retrieving comments from the video X"
Closed
The GPT's task is simple: scrape comments from the 5 latest videos (I also connected the YT channel scraper), but I constantly run into this issue.
According to the GPT itself:
The issue seems to be a connection or timeout error with the YouTube comments retrieval tool, which can occasionally happen if the video has a high volume of comments or if there’s a temporary interruption in the service.
The videos have between 10 and 30 comments, so volume isn't the problem. I did place a 100-comment limit on the scraper, but that still shouldn't matter, because the GPT trips up on the first video in the queue.

Hi, thanks for opening this issue!
What sort of GPT tool are you using? It seems the problem is that it isn't able to wait long enough for the scraper to finish. Could you link the run where this has been an issue? Thanks!
JakubKazK
Hey, I'm using my own GPT (GPT-4 OpenAI model), and currently it's on hold (I'm also on the cheapest plan). It could all be a noob error in prompting or some other issue stemming from my inexperience. Here's the link to the GPT: https://chatgpt.com/g/g-673359535a00819089eec36569a26ccd-kaz-co-creator-prototype

Hmm, I'm not really experienced with these new custom GPT applications, but there is in fact one run on your profile that was started from what seems to be OpenAI's API. https://console.apify.com/view/runs/2r61fatQeR7ZlMyJA
It took more than 10 minutes before you aborted it, so it makes sense that it timed out on OpenAI's end. They probably have some limit there, e.g. 5 minutes, but I don't know.
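One common way around a hard timeout on the caller's side is to start the run asynchronously and poll its status with short requests instead of blocking on one long call. A generic sketch of such a poller; the terminal status strings mirror Apify's run states, but treat the exact values and the polling parameters as assumptions:

```python
import time

# Assumed terminal run states, modeled on Apify's documented statuses.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "TIMED-OUT", "ABORTED"}

def wait_for_run(get_status, timeout_s=300, poll_s=5, sleep=time.sleep):
    """Poll get_status() until the run reaches a terminal state or the
    caller-side timeout expires. get_status is any zero-arg callable
    returning the run's current status string.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            # Give up on our side; the run itself may keep going.
            return "CLIENT-TIMEOUT"
        sleep(poll_s)
```

Because each poll is an individual short request, no single call risks hitting the chat platform's per-request limit, and the results can be fetched from the dataset once the run reports `SUCCEEDED`.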
More importantly though, you've clearly set the maxResults limit to 5, which the crawler ignored. We will investigate this, as I believe that's a bug.
I will keep you updated here, thanks!
I will keep you updated here, thanks!
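Until such a fix lands, a defensive client-side cap keeps downstream logic correct even when the crawler returns more items than configured. A small hypothetical helper (the name and signature are illustrative, not part of the actor):

```python
def clamp_results(comments, max_results):
    """Client-side workaround: enforce maxResults on the returned items
    in case the crawler ignores the configured limit.
    """
    items = list(comments)
    if max_results is None:
        return items  # no cap requested
    return items[: max(0, max_results)]
```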

Hello again. We fixed the maxResults limit bug, so we are assuming the issue has been solved. If the problem persists, please provide us with more information so that we can help you find a solution. Thank you!