YouTube Comments Scraper

Developer: Stas Persiianenko · Pricing: pay per event

Scrape YouTube video comments with full metadata including text, author info, likes, reply counts, and timestamps. Uses the InnerTube API directly — no browser or Playwright needed. Fast, reliable, and cost-efficient with pay-per-comment pricing. Export to JSON, CSV, or Excel.
Scrape YouTube video comments with full metadata — comment text, author info, likes, reply counts, and timestamps. Powered by YouTube's internal InnerTube API: no browser, no Selenium, no Playwright. Just fast, reliable HTTP calls that mirror what YouTube's own web client does.
Paste in one or more video URLs, set a comment limit, and get structured JSON in seconds. Works with regular videos, live streams, and any public YouTube video.
What Does It Do?
YouTube Comments Scraper fetches top-level comments from any public YouTube video. For each comment it returns:
- Comment text — the full, untruncated comment body
- Author metadata — display name, channel ID, channel URL, and avatar image URL
- Engagement metrics — like count (parsed as a number, e.g. 203,000 not "203K") and reply thread count
- Publish time — relative timestamp as shown on YouTube (e.g. "11 months ago")
- Author flags — whether the commenter is verified (isVerified) or the video's creator (isCreator)
- Video context — video ID, video title, and channel name are included on every comment row
When includeReplies is enabled, reply threads are expanded and individual replies are included alongside top-level comments.
Who Is It For?
- Brand managers monitoring what audiences say about their videos or competitor videos
- Market researchers analyzing viewer sentiment and feedback at scale
- Content creators studying what resonates with audiences in their niche
- Data scientists building NLP training sets, sentiment classifiers, or social listening pipelines
- Journalists and investigators surfacing notable public reactions to news events or viral content
- Community managers identifying top commenters and engaged fans
- Developers integrating YouTube comment data into dashboards, CRMs, or analytics apps
Why Use This Scraper?
- No browser required — pure HTTP via InnerTube API, 256 MB memory, fast cold starts
- Handles pagination automatically — fetches up to 10,000 comments per video with continuation tokens
- Accepts any URL format — full watch URLs, short youtu.be links, live stream URLs, or plain 11-character video IDs
- Likes as real numbers — parses "203K" → 203000 and "1.2M" → 1200000 so you can sort and filter
- Creator flag included — instantly identify when the video author replied in their own comment section
- Batch processing — scrape multiple videos in a single run, each with its own comment limit
- Cost-efficient pricing — pay only for comments actually extracted, not for compute time or page loads
How Much Does It Cost to Scrape YouTube Comments?
This actor uses pay-per-event (PPE) pricing — you only pay for what you extract:
| Event | Price | Description |
|---|---|---|
| Run start | $0.005 | One-time charge when the actor starts |
| Comment scraped | $0.002 | Per comment successfully extracted |
Example costs:
- 100 comments from 1 video: $0.005 + (100 × $0.002) = $0.205
- 500 comments from 5 videos: $0.005 + (500 × $0.002) = $1.005
- 1,000 comments from 1 viral video: $0.005 + (1,000 × $0.002) = $2.005
- 50 comments each from 10 videos: $0.005 + (500 × $0.002) = $1.005
Compute costs (memory/CPU time) are minimal — no browser means near-zero infrastructure overhead.
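The pricing model above reduces to simple arithmetic, which you can sketch like this to budget a run (the fee constants mirror the table; adjust if pricing changes):

```python
RUN_START_FEE = 0.005    # one-time charge when the actor starts, in USD
PER_COMMENT_FEE = 0.002  # charged per comment successfully extracted, in USD

def estimate_cost(total_comments: int) -> float:
    """Estimated USD cost of one run that extracts `total_comments` comments."""
    return round(RUN_START_FEE + total_comments * PER_COMMENT_FEE, 3)

print(estimate_cost(100))    # 0.205
print(estimate_cost(1000))   # 2.005
```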
How to Use
- Open the actor on Apify Store
- Paste video URLs into the "Video URLs or IDs" field — one per line. Accepts full YouTube URLs, short links, or plain video IDs.
- Set max comments — choose how many top-level comments to fetch per video (default: 100, max: 10,000)
- Toggle replies — enable "Include replies" if you want reply threads expanded in the output
- Click Start and wait — most runs complete in seconds to a few minutes depending on comment volume
- Download results as JSON, CSV, or Excel from the dataset tab
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| videoUrls | array | required | Video URLs or IDs to scrape. Accepts multiple formats (see below). |
| maxCommentsPerVideo | integer | 100 | Maximum top-level comments per video. Min: 1, Max: 10,000. |
| includeReplies | boolean | false | Expand reply threads and include individual replies in the output. |
Accepted Video Input Formats
All of the following are valid entries in the videoUrls array:
- https://www.youtube.com/watch?v=dQw4w9WgXcQ
- https://youtu.be/dQw4w9WgXcQ
- https://www.youtube.com/live/dQw4w9WgXcQ
- dQw4w9WgXcQ
Data Fields
Every item in the output dataset is a comment with the following fields:
| Field | Type | Description |
|---|---|---|
| type | string | Always "comment" |
| videoId | string | YouTube video ID (e.g. dQw4w9WgXcQ) |
| videoTitle | string | Title of the video this comment belongs to |
| videoChannelName | string | Name of the channel that posted the video |
| commentId | string | Unique YouTube comment ID |
| text | string | Full comment text |
| authorName | string | Display name of the commenter (e.g. @YouTube) |
| authorChannelId | string | YouTube channel ID of the commenter |
| authorChannelUrl | string | Full URL to the commenter's channel |
| authorAvatarUrl | string | URL to the commenter's profile picture |
| isVerified | boolean | Whether the commenter has a verified badge |
| isCreator | boolean | Whether the commenter is the video's own creator |
| likes | number | Number of likes on the comment (null if unavailable) |
| replyCount | number | Number of replies in the comment's thread |
| publishedTime | string | Relative publish time as shown on YouTube (e.g. "11 months ago") |
Output Example
```json
{
  "type": "comment",
  "videoId": "dQw4w9WgXcQ",
  "videoTitle": "Rick Astley - Never Gonna Give You Up (Official Video) (4K Remaster)",
  "videoChannelName": "Rick Astley",
  "commentId": "Ugzge340dBgB75hWBm54AaABAg",
  "text": "can confirm: he never gave us up",
  "authorName": "@YouTube",
  "authorChannelId": "UCBR8-60-B28hp2BmDPdntcQ",
  "authorChannelUrl": "https://www.youtube.com/channel/UCBR8-60-B28hp2BmDPdntcQ",
  "authorAvatarUrl": "https://yt3.ggpht.com/...",
  "isVerified": true,
  "isCreator": false,
  "likes": 203000,
  "replyCount": 960,
  "publishedTime": "11 months ago"
}
```
Tips and Best Practices
Getting comments from multiple videos in one run
- Add all video URLs to the videoUrls array — the actor processes them sequentially in a single run with no extra overhead.
- Each video respects its own maxCommentsPerVideo limit, so costs scale predictably.
Choosing the right comment limit
- For sentiment analysis or NLP tasks, 200–500 top comments per video is usually enough to capture the main themes.
- For comprehensive community research, set maxCommentsPerVideo to 1,000–5,000 to surface long-tail reactions.
- YouTube sorts comments by popularity by default (most-liked first), so lower limits still capture the most impactful comments.
Using the isCreator flag
- Filter for isCreator: true to quickly find all videos where the creator personally responded in the comments — a useful signal for creator engagement analysis.
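Over a downloaded dataset, that filter is a one-line comprehension in Python (the sample items below are made up; field names follow the Data Fields table):

```python
# Two hypothetical dataset items, shaped like the actor's output
comments = [
    {"commentId": "a1", "authorName": "@fan", "isCreator": False},
    {"commentId": "b2", "authorName": "@RickAstley", "isCreator": True},
]

# Keep only comments written by the video's own creator
creator_replies = [c for c in comments if c.get("isCreator")]
print(len(creator_replies))  # 1
```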
Replies vs. top-level only
- Leave includeReplies: false (default) for most tasks — top-level comments capture the bulk of sentiment with lower cost and faster runs.
- Enable includeReplies: true when you need full conversation threads, Q&A exchanges, or creator responses buried in replies.
When likes is null
- A small number of comments return no like count from the API. This is a YouTube platform behavior, not a scraper bug. Filter for likes != null before sorting by engagement.
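In Python, null becomes None, so a safe sort-by-engagement over the dataset items might look like this (sample data is illustrative):

```python
# Hypothetical dataset items; "likes" is None when YouTube returns no count
comments = [
    {"text": "first!", "likes": 12},
    {"text": "no likes returned", "likes": None},
    {"text": "top comment", "likes": 203000},
]

# Drop null-like items first, then sort descending by like count
ranked = sorted(
    (c for c in comments if c["likes"] is not None),
    key=lambda c: c["likes"],
    reverse=True,
)
print([c["text"] for c in ranked])  # ['top comment', 'first!']
```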
Handling private or restricted videos
- The actor skips videos that are private, age-restricted, or have comments disabled. It logs a warning per skipped video and continues processing the rest of the list.
Integrations
Google Sheets — live comment dashboard
Push scraped comments directly into a Google Sheet using the Apify → Google Sheets integration. Go to the Integrations tab after a run → connect Google Sheets → map dataset fields to columns. Schedule the actor to run daily to keep your sheet refreshed with new comments.
Zapier / Make — comment moderation alerts
Use a webhook trigger to forward comments matching keywords (e.g. negative sentiment, competitor mentions) to Slack, email, or a moderation queue. Set up a Zapier Zap: Apify dataset item → filter by text contains → send Slack message.
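If you would rather run the keyword filter in code than in a Zap, the matching step can be sketched like this (ALERT_KEYWORDS and needs_review are illustrative names, not part of any integration):

```python
# Example watchlist — substitute your own moderation keywords
ALERT_KEYWORDS = {"refund", "scam", "broken"}

def needs_review(comment: dict) -> bool:
    """Flag a comment whose text contains any watchlist keyword."""
    text = comment["text"].lower()
    return any(kw in text for kw in ALERT_KEYWORDS)

comments = [
    {"text": "Great video!"},
    {"text": "I want a refund, this is broken"},
]
flagged = [c for c in comments if needs_review(c)]
print(len(flagged))  # 1
```

The flagged items would then be forwarded to Slack, email, or your moderation queue.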
n8n — sentiment analysis pipeline
Connect the actor to an n8n workflow: run the scraper → pass each comment through an OpenAI sentiment node → write results to a database or spreadsheet. Full no-code pipeline, no server required.
Scheduled monitoring
Use Apify's built-in scheduler to run this actor daily or weekly on a fixed set of video URLs. Track comment volume and engagement trends over time to spot virality or reputation issues early.
Export formats
All results are available for download as JSON, CSV, Excel (XLSX), and XML from the dataset tab. Use CSV or Excel for spreadsheet tools, JSON for programmatic processing.
API Usage
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/youtube-comments-scraper').call({
    videoUrls: [
        'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
        'https://www.youtube.com/watch?v=9bZkp7q19f0',
    ],
    maxCommentsPerVideo: 200,
    includeReplies: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} comments`);
console.log(items[0]);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("automation-lab/youtube-comments-scraper").call(
    run_input={
        "videoUrls": [
            "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
            "https://www.youtube.com/watch?v=9bZkp7q19f0",
        ],
        "maxCommentsPerVideo": 200,
        "includeReplies": False,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["authorName"], ":", item["text"][:80])
```
cURL
```shell
curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~youtube-comments-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"videoUrls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"], "maxCommentsPerVideo": 100}'
```
Use with Claude AI (MCP)
This actor is available as a tool in Claude AI through the Model Context Protocol (MCP). Add it to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}
```
Example prompts
- "Scrape the top 100 comments from this YouTube video and summarize the overall sentiment: [URL]"
- "Get 500 comments from these 3 videos and identify the most common complaints people mention."
- "Pull 200 comments from this viral video and find all replies made by the video's creator."
Learn more in the Apify MCP documentation.
Legal and Ethical Use
This actor accesses publicly available YouTube comment data using the same InnerTube API that YouTube's own web client uses. All data extracted is publicly visible to any visitor on YouTube without logging in.
Important notes:
- Only comments on public videos are accessible — private videos, age-restricted videos, and videos with comments disabled are skipped
- This actor does not bypass any authentication, CAPTCHA, or access control
- Do not use this tool for harassment campaigns, doxxing, or bulk-tracking individual users without legitimate research or business purpose
- Respect YouTube's Terms of Service and applicable privacy laws in your jurisdiction
- This actor collects publicly posted comments only — it does not access private messages or account data
FAQ
Q: Can I scrape comments from live streams?
A: Yes — the actor accepts live stream URLs (youtube.com/live/ID) in addition to regular video URLs. Note: YouTube Shorts currently use a different internal API for comments, so Shorts are not supported in v0.1.
Q: Why does publishedTime say "11 months ago" instead of an exact date?
A: YouTube's InnerTube API returns relative timestamps for comments, not absolute dates. This matches what you see on YouTube itself. For most use cases (sorting, recency analysis) relative time is sufficient. Exact timestamps are not available from the public API.
Q: The actor skipped one of my videos — what happened?
A: The actor logs a warning and continues when a video cannot be processed. Common causes: the video is private, comments are disabled, the video was deleted, or the URL format was not recognized. Check the actor log for the specific error message per video.
Q: How are comments ordered?
A: Comments are returned in YouTube's default sort order, which is typically "Top comments" (most-liked first). This is the same order you see when you open a video on YouTube without changing the sort setting. There is currently no option to sort by "Newest first" — this is a YouTube API constraint.
Q: Can I get the exact like count or is it approximate?
A: The actor parses YouTube's like count strings (e.g. "203K") into exact integers (203,000). For very high-engagement comments the number may be slightly rounded by YouTube itself (e.g. "1.2M" becomes 1,200,000), but it matches exactly what YouTube displays.
Q: Does enabling includeReplies significantly increase cost?
A: Yes — replies can multiply the number of scraped items by 5–20x on popular videos. Each reply counts as one comment-scraped charge event ($0.002). Disable replies unless you specifically need conversation thread data.
Q: What happens if a video has no comments?
A: The actor logs an info message and moves on to the next video. No comment-scraped charges are generated for videos with zero comments.
Related Scrapers
- YouTube Channel Scraper — Extract channel metadata and full video listings from any YouTube channel
- YouTube Transcript Scraper — Extract full video transcripts and captions with timestamps
- YouTube Shorts Scraper — Scrape YouTube Shorts with engagement metrics and metadata
- Google News Scraper — Collect news articles and headlines from Google News
- Google Trends Scraper — Track keyword search trends and interest over time