YouTube Comments Scraper

Scrape YouTube video comments with full metadata including text, author info, likes, reply counts, and timestamps. Uses the InnerTube API directly — no browser or Playwright needed. Fast, reliable, and cost-efficient with pay-per-comment pricing. Export to JSON, CSV, or Excel.

Pricing: Pay per event
Developer: Stas Persiianenko (Maintained by Community)

Scrape YouTube video comments with full metadata — comment text, author info, likes, reply counts, and timestamps. Powered by YouTube's internal InnerTube API: no browser, no Selenium, no Playwright. Just fast, reliable HTTP calls that mirror what YouTube's own web client does.

Paste in one or more video URLs, set a comment limit, and get structured JSON in seconds. Works with regular videos, live streams, and any public YouTube video.


What Does It Do?

YouTube Comments Scraper fetches top-level comments from any public YouTube video. For each comment it returns:

  • Comment text — the full, untruncated comment body
  • Author metadata — display name, channel ID, channel URL, and avatar image URL
  • Engagement metrics — like count (parsed as a number, e.g. 203,000 not "203K") and reply thread count
  • Publish time — relative timestamp as shown on YouTube (e.g. "11 months ago")
  • Author flags — whether the commenter is verified (isVerified) or the video's creator (isCreator)
  • Video context — video ID, video title, and channel name are included on every comment row

When includeReplies is enabled, reply threads are expanded and individual replies are included alongside top-level comments.


Who Is It For?

  • Brand managers monitoring what audiences say about their videos or competitor videos
  • Market researchers analyzing viewer sentiment and feedback at scale
  • Content creators studying what resonates with audiences in their niche
  • Data scientists building NLP training sets, sentiment classifiers, or social listening pipelines
  • Journalists and investigators surfacing notable public reactions to news events or viral content
  • Community managers identifying top commenters and engaged fans
  • Developers integrating YouTube comment data into dashboards, CRMs, or analytics apps

Why Use This Scraper?

  • No browser required — pure HTTP via InnerTube API, 256 MB memory, fast cold starts
  • Handles pagination automatically — fetches up to 10,000 comments per video with continuation tokens
  • Accepts any URL format — full watch URLs, short youtu.be links, live stream URLs, or plain 11-character video IDs
  • Likes as real numbers — parses "203K" → 203000 and "1.2M" → 1200000 so you can sort and filter
  • Creator flag included — instantly identify when the video author replied to their own comment section
  • Batch processing — scrape multiple videos in a single run, each with its own comment limit
  • Cost-efficient pricing — pay only for comments actually extracted, not for compute time or page loads
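The "likes as real numbers" conversion mentioned above can be sketched in a few lines of Python. This is an illustrative helper, not the actor's internal code:

```python
def parse_like_count(raw):
    """Convert a YouTube-style count string like '203K' or '1.2M' to an int."""
    if raw is None or raw == "":
        return None  # like count unavailable
    multipliers = {"K": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    raw = raw.strip()
    suffix = raw[-1].upper()
    if suffix in multipliers:
        return int(float(raw[:-1]) * multipliers[suffix])
    return int(raw.replace(",", ""))

print(parse_like_count("203K"))   # 203000
print(parse_like_count("1.2M"))   # 1200000
print(parse_like_count("47"))     # 47
```

With counts as integers instead of strings, sorting and threshold filters work directly in any downstream tool.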

How Much Does It Cost to Scrape YouTube Comments?

This actor uses pay-per-event (PPE) pricing — you only pay for what you extract:

| Event           | Price  | Description                           |
|-----------------|--------|---------------------------------------|
| Run start       | $0.005 | One-time charge when the actor starts |
| Comment scraped | $0.002 | Per comment successfully extracted    |

Example costs:

  • 100 comments from 1 video: $0.005 + (100 × $0.002) = $0.205
  • 500 comments from 5 videos: $0.005 + (500 × $0.002) = $1.005
  • 1,000 comments from 1 viral video: $0.005 + (1,000 × $0.002) = $2.005
  • 50 comments from 10 videos: $0.005 + (500 × $0.002) = $1.005

Compute costs (memory/CPU time) are minimal — no browser means near-zero infrastructure overhead.
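The examples above all follow the same formula: run-start fee plus comments times the per-comment fee. A quick sketch for budgeting your own runs:

```python
RUN_START_FEE = 0.005    # one-time charge per run, in USD
PER_COMMENT_FEE = 0.002  # charge per comment extracted, in USD

def estimate_cost(total_comments):
    """Estimate the PPE cost of a run that extracts `total_comments` comments."""
    return RUN_START_FEE + total_comments * PER_COMMENT_FEE

print(f"${estimate_cost(100):.3f}")    # $0.205
print(f"${estimate_cost(1000):.3f}")   # $2.005
```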


How to Use

  1. Open the actor on Apify Store
  2. Paste video URLs into the "Video URLs or IDs" field — one per line. Accepts full YouTube URLs, short links, or plain video IDs.
  3. Set max comments — choose how many top-level comments to fetch per video (default: 100, max: 10,000)
  4. Toggle replies — enable "Include replies" if you want reply threads expanded in the output
  5. Click Start and wait — most runs complete in seconds to a few minutes depending on comment volume
  6. Download results as JSON, CSV, or Excel from the dataset tab

Input Parameters

| Parameter           | Type    | Default  | Description                                                        |
|---------------------|---------|----------|--------------------------------------------------------------------|
| videoUrls           | array   | required | Video URLs or IDs to scrape. Accepts multiple formats (see below). |
| maxCommentsPerVideo | integer | 100      | Maximum top-level comments per video. Min: 1, Max: 10,000.         |
| includeReplies      | boolean | false    | Expand reply threads and include individual replies in the output. |

Accepted Video Input Formats

All of the following are valid entries in the videoUrls array:

https://www.youtube.com/watch?v=dQw4w9WgXcQ
https://youtu.be/dQw4w9WgXcQ
https://www.youtube.com/live/dQw4w9WgXcQ
dQw4w9WgXcQ
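Extracting the 11-character video ID from any of these formats can be sketched with a single regex. This is an illustrative helper under the assumption that IDs are always 11 characters of letters, digits, `-`, and `_`; the actor's own parsing may differ:

```python
import re

# Optional URL prefix (watch, live, or short link), then the 11-character ID.
_VIDEO_ID = re.compile(
    r"(?:youtube\.com/(?:watch\?v=|live/)|youtu\.be/)?([A-Za-z0-9_-]{11})"
)

def extract_video_id(entry):
    """Return the 11-character video ID from a URL or bare ID, or None."""
    match = _VIDEO_ID.search(entry.strip())
    return match.group(1) if match else None

for entry in [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://youtu.be/dQw4w9WgXcQ",
    "https://www.youtube.com/live/dQw4w9WgXcQ",
    "dQw4w9WgXcQ",
]:
    print(extract_video_id(entry))  # dQw4w9WgXcQ each time
```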

Data Fields

Every item in the output dataset is a comment with the following fields:

| Field            | Type    | Description                                                      |
|------------------|---------|------------------------------------------------------------------|
| type             | string  | Always "comment"                                                 |
| videoId          | string  | YouTube video ID (e.g. dQw4w9WgXcQ)                              |
| videoTitle       | string  | Title of the video this comment belongs to                       |
| videoChannelName | string  | Name of the channel that posted the video                        |
| commentId        | string  | Unique YouTube comment ID                                        |
| text             | string  | Full comment text                                                |
| authorName       | string  | Display name of the commenter (e.g. @YouTube)                    |
| authorChannelId  | string  | YouTube channel ID of the commenter                              |
| authorChannelUrl | string  | Full URL to the commenter's channel                              |
| authorAvatarUrl  | string  | URL to the commenter's profile picture                           |
| isVerified       | boolean | Whether the commenter has a verified badge                       |
| isCreator        | boolean | Whether the commenter is the video's own creator                 |
| likes            | number  | Number of likes on the comment (null if unavailable)             |
| replyCount       | number  | Number of replies in the comment's thread                        |
| publishedTime    | string  | Relative publish time as shown on YouTube (e.g. "11 months ago") |

Output Example

{
  "type": "comment",
  "videoId": "dQw4w9WgXcQ",
  "videoTitle": "Rick Astley - Never Gonna Give You Up (Official Video) (4K Remaster)",
  "videoChannelName": "Rick Astley",
  "commentId": "Ugzge340dBgB75hWBm54AaABAg",
  "text": "can confirm: he never gave us up",
  "authorName": "@YouTube",
  "authorChannelId": "UCBR8-60-B28hp2BmDPdntcQ",
  "authorChannelUrl": "https://www.youtube.com/channel/UCBR8-60-B28hp2BmDPdntcQ",
  "authorAvatarUrl": "https://yt3.ggpht.com/...",
  "isVerified": true,
  "isCreator": false,
  "likes": 203000,
  "replyCount": 960,
  "publishedTime": "11 months ago"
}

Tips and Best Practices

Getting comments from multiple videos in one run

  • Add all video URLs to the videoUrls array — the actor processes them sequentially in a single run with no extra overhead.
  • Each video respects its own maxCommentsPerVideo limit, so costs scale predictably.

Choosing the right comment limit

  • For sentiment analysis or NLP tasks, 200–500 top comments per video is usually enough to capture the main themes.
  • For comprehensive community research, set maxCommentsPerVideo to 1,000–5,000 to surface long-tail reactions.
  • YouTube sorts comments by popularity by default (most-liked first), so lower limits still capture the most impactful comments.

Using the isCreator flag

  • Filter for isCreator: true to quickly find all videos where the creator personally responded in the comments — a useful signal for creator engagement analysis.
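With the dataset loaded as a list of dicts (e.g. via apify-client), that filter is one line. A minimal sketch on sample data:

```python
# Sample dataset items, trimmed to the fields used here.
comments = [
    {"videoId": "dQw4w9WgXcQ", "authorName": "@RickAstley", "isCreator": True,  "text": "Thanks everyone!"},
    {"videoId": "dQw4w9WgXcQ", "authorName": "@fan123",     "isCreator": False, "text": "legend"},
]

# Comments posted by the video's own creator.
creator_replies = [c for c in comments if c["isCreator"]]

# Which videos show creator engagement at all.
videos_with_creator_activity = {c["videoId"] for c in creator_replies}

print(len(creator_replies), videos_with_creator_activity)
```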

Replies vs. top-level only

  • Leave includeReplies: false (default) for most tasks — top-level comments capture the bulk of sentiment with lower cost and faster runs.
  • Enable includeReplies: true when you need full conversation threads, Q&A exchanges, or creator responses buried in replies.

When likes is null

  • A small number of comments return no like count from the API. This is a YouTube platform behavior, not a scraper bug. Filter for likes != null before sorting by engagement.
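A null-safe engagement sort can be sketched like this, dropping the null-like rows before ordering:

```python
comments = [
    {"text": "a", "likes": 203000},
    {"text": "b", "likes": None},   # like count unavailable from the API
    {"text": "c", "likes": 47},
]

# Filter out null like counts, then sort most-liked first.
by_likes = sorted(
    (c for c in comments if c["likes"] is not None),
    key=lambda c: c["likes"],
    reverse=True,
)

print([c["text"] for c in by_likes])  # ['a', 'c']
```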

Handling private or restricted videos

  • The actor skips videos that are private, age-restricted, or have comments disabled. It logs a warning per skipped video and continues processing the rest of the list.

Integrations

Google Sheets — live comment dashboard

Push scraped comments directly into a Google Sheet using the Apify → Google Sheets integration. Go to the Integrations tab after a run → connect Google Sheets → map dataset fields to columns. Schedule the actor to run daily to keep your sheet refreshed with new comments.

Zapier / Make — comment moderation alerts

Use a webhook trigger to forward comments matching keywords (e.g. negative sentiment, competitor mentions) to Slack, email, or a moderation queue. Set up a Zapier Zap: Apify dataset item → filter by text contains → send Slack message.

n8n — sentiment analysis pipeline

Connect the actor to an n8n workflow: run the scraper → pass each comment through an OpenAI sentiment node → write results to a database or spreadsheet. Full no-code pipeline, no server required.

Scheduled monitoring

Use Apify's built-in scheduler to run this actor daily or weekly on a fixed set of video URLs. Track comment volume and engagement trends over time to spot virality or reputation issues early.

Export formats

All results are available for download as JSON, CSV, Excel (XLSX), and XML from the dataset tab. Use CSV or Excel for spreadsheet tools, JSON for programmatic processing.
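Exports can also be fetched programmatically: Apify's dataset items endpoint accepts a `format` query parameter. A sketch that only builds the download URL (substitute a real dataset ID and token):

```python
from urllib.parse import urlencode

def dataset_export_url(dataset_id, fmt="csv", token="YOUR_API_TOKEN"):
    """Build the Apify dataset items URL for a given export format."""
    query = urlencode({"format": fmt, "token": token})
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?{query}"

# "abc123" is a placeholder dataset ID for illustration.
print(dataset_export_url("abc123", fmt="xlsx"))
```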


API Usage

Node.js

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/youtube-comments-scraper').call({
  videoUrls: [
    'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
    'https://www.youtube.com/watch?v=9bZkp7q19f0',
  ],
  maxCommentsPerVideo: 200,
  includeReplies: false,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} comments`);
console.log(items[0]);

Python

from apify_client import ApifyClient

client = ApifyClient(token="YOUR_API_TOKEN")

run = client.actor("automation-lab/youtube-comments-scraper").call(run_input={
    "videoUrls": [
        "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
        "https://www.youtube.com/watch?v=9bZkp7q19f0",
    ],
    "maxCommentsPerVideo": 200,
    "includeReplies": False,
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["authorName"], ":", item["text"][:80])

cURL

curl -X POST \
  "https://api.apify.com/v2/acts/automation-lab~youtube-comments-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "videoUrls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"],
    "maxCommentsPerVideo": 100
  }'

Use with Claude AI (MCP)

This actor is available as a tool in Claude AI through the Model Context Protocol (MCP). Add it to Claude Desktop, Cursor, Windsurf, or any MCP-compatible client.

Setup for Claude Code

claude mcp add --transport http apify "https://mcp.apify.com"

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com"
    }
  }
}

Example prompts

  • "Scrape the top 100 comments from this YouTube video and summarize the overall sentiment: [URL]"
  • "Get 500 comments from these 3 videos and identify the most common complaints people mention."
  • "Pull 200 comments from this viral video and find all replies made by the video's creator."

Learn more in the Apify MCP documentation.


Is It Legal to Scrape YouTube Comments?

This actor accesses publicly available YouTube comment data using the same InnerTube API that YouTube's own web client uses. All data extracted is publicly visible to any visitor on YouTube without logging in.

Important notes:

  • Only comments on public videos are accessible — private videos, age-restricted videos, and videos with comments disabled are skipped
  • This actor does not bypass any authentication, CAPTCHA, or access control
  • Do not use this tool for harassment campaigns, doxxing, or bulk-tracking individual users without legitimate research or business purpose
  • Respect YouTube's Terms of Service and applicable privacy laws in your jurisdiction
  • This actor collects publicly posted comments only — it does not access private messages or account data

FAQ

Q: Can I scrape comments from live streams? A: Yes — the actor accepts live stream URLs (youtube.com/live/ID) in addition to regular video URLs. Note: YouTube Shorts currently use a different internal API for comments, so Shorts are not supported in v0.1.

Q: Why does publishedTime say "11 months ago" instead of an exact date? A: YouTube's InnerTube API returns relative timestamps for comments, not absolute dates. This matches what you see on YouTube itself. For most use cases (sorting, recency analysis) relative time is sufficient. Exact timestamps are not available from the public API.

Q: The actor skipped one of my videos — what happened? A: The actor logs a warning and continues when a video cannot be processed. Common causes: the video is private, comments are disabled, the video was deleted, or the URL format was not recognized. Check the actor log for the specific error message per video.

Q: How are comments ordered? A: Comments are returned in YouTube's default sort order, which is typically "Top comments" (most-liked first). This is the same order you see when you open a video on YouTube without changing the sort setting. There is currently no option to sort by "Newest first" — this is a YouTube API constraint.

Q: Can I get the exact like count or is it approximate? A: The actor parses YouTube's like count strings (e.g. "203K") into exact integers (203,000). For very high-engagement comments the number may be slightly rounded by YouTube itself (e.g. "1.2M" becomes 1,200,000), but it matches exactly what YouTube displays.

Q: Does enabling includeReplies significantly increase cost? A: Yes — replies can multiply the number of scraped items by 5–20x on popular videos. Each reply counts as one comment-scraped charge event ($0.002). Disable replies unless you specifically need conversation thread data.

Q: What happens if a video has no comments? A: The actor logs an info message and moves on to the next video. No comment-scraped charges are generated for videos with zero comments.