YouTube Comments Scraper

Extract comprehensive YouTube comments data including text, authors, timestamps, likes, replies, and engagement metrics. Perfect for sentiment analysis, competitor research, and content strategy optimization.

Pricing: $2.50/month + usage
Developer: Akash Kumar Naik (Maintained by Community)

YouTube Comments Scraper – Extract YouTube comment data

The YouTube Comments Scraper extracts comment data from YouTube videos, including text, author information, timestamps, likes, replies, and engagement metrics.

This YouTube scraper collects publicly available comments from YouTube videos, making it useful for content analysis, audience research, and sentiment analysis. The scraper works by navigating to video pages and extracting comment information using browser automation.


Why scrape YouTube comments?

YouTube comments provide valuable insights into audience sentiment, content reception, and community engagement. By collecting YouTube comment data, you can:

  • Analyze audience reactions to understand what works in your content
  • Monitor competitor videos to see what viewers respond to positively
  • Track brand mentions and user feedback across videos
  • Conduct sentiment analysis at scale
  • Identify trending topics and audience pain points
  • Research audience demographics and behavior patterns

The data is ideal for digital marketers, content creators, researchers, and businesses looking to understand YouTube audience engagement.


How to scrape YouTube comments

The YouTube Comments Scraper works by:

  1. Accepting YouTube video URLs as input
  2. Navigating to each video page using a browser
  3. Extracting comment data from the page
  4. Handling pagination to load more comments
  5. Saving the data in a structured dataset

You can scrape comments from individual videos or multiple videos at once by providing a list of URLs.

Getting started

  1. Create a free Apify account
  2. Add YouTube video URLs in the input
  3. Configure your settings (optional)
  4. Run the scraper
  5. Download your data in JSON, CSV, or Excel format

Pricing

The YouTube Comments Scraper can be monetized using Apify's Pay-Per-Event pricing model. This model allows you to charge users for actual results rather than compute time.

How pay-per-event pricing works

With Pay-Per-Event (PPE), you configure pricing for specific events that occur during scraping. This scraper uses a single event type:

  • comment event: Charged for each item saved to the dataset (comments, replies, or processing results)

The code automatically charges this event using Actor.pushData(data, 'comment'). Each item pushed triggers one charge.

What users are charged for

Users are charged for:

  • Every comment extracted from YouTube videos
  • Every reply (when includeReplies: true)
  • Every error item created when processing fails
  • Every item when a video returns no comments

This transparent billing means users pay for all work performed, not just successful results.

Setting up Pay-Per-Event pricing

Pricing is configured in the Apify Console, not in the project code. To set up PPE:

  1. Go to your Actor in Apify Console
  2. Navigate to the Monetization tab
  3. Select Pay-Per-Event pricing model
  4. Add a new event:
    • Event name: comment
    • Price per event: $0.001 (or your preferred price)
  5. Optional: Enable synthetic apify-actor-start event for startup costs
  6. Set a default spending limit for users

The code is already prepared for PPE. You just need to configure the pricing in the Console.

Important features implemented

  • Automatic charging: each push to the dataset triggers one charge
  • Spending limits: users set a maximum charge, and the scraper stops when it is reached
  • Cost control: users can limit the number of comments per video
  • Graceful stop: scraping stops automatically when the spending limit is reached, preserving all collected data
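The charge-and-stop behavior described above can be sketched as follows. This is a minimal illustration, not the Actor's actual code; `run_with_limit` and its parameters are hypothetical names standing in for the real charging logic.

```python
def run_with_limit(items, price_per_event, spending_limit):
    """Charge one event per item; stop gracefully at the spending limit.
    Amounts are tracked in integer micro-dollars to avoid float drift."""
    price = round(price_per_event * 1_000_000)
    limit = round(spending_limit * 1_000_000)
    dataset, charged = [], 0
    for item in items:
        if charged + price > limit:
            break  # graceful stop: already-collected data is preserved
        dataset.append(item)    # stands in for Actor.pushData(item, "comment")
        charged += price        # one charge per pushed item
    return dataset, charged / 1_000_000

data, cost = run_with_limit(range(1000), 0.001, 0.05)
print(len(data), cost)  # prints: 50 0.05
```

Exactly 50 of the 1000 items fit under a $0.05 limit at $0.001 per event; the rest are never charged, and everything pushed before the limit stays in the dataset.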

For Actor creators

To monetize this Actor:

  1. Configure the comment event pricing in Apify Console
  2. Set a competitive price that covers your costs (proxies, compute time)
  3. Consider enabling apify-actor-start for the synthetic 5-second free compute
  4. Test with small runs to ensure pricing is reasonable

For Actor users

To control your costs:

  • Use maxComments to limit results per video
  • Set includeReplies: false to reduce charges
  • Test with small batches first
  • Set a spending limit in the input before running
  • Monitor progress; scraping stops automatically at your limit

Cost estimation

The number of charges equals the number of items in the dataset:

Scenario                            Items in dataset               Charges
1 video, 100 comments               ~100                           ~100 × event price
1 video, 0 comments                 1                              1 × event price
1 video, 50 comments + 20 replies   70 (with replies on)           70 × event price
5 videos, some errors               all successful + error items   total items × event price

Note: The actual price per event depends on the pricing you configure in Apify Console.
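The scenarios above reduce to simple arithmetic: every dataset item (comment, reply, error item, or no-comment marker) triggers one charge. A small sketch, using a hypothetical helper and an example event price:

```python
def estimate_charges(comments, replies=0, errors=0, empty_videos=0):
    """Each dataset item (comment, reply, error item, or no-comment
    marker) triggers one 'comment' event charge."""
    return comments + replies + errors + empty_videos

EVENT_PRICE = 0.001  # example only; the real price is set in Apify Console

# the scenarios from the table above
assert estimate_charges(100) == 100              # 1 video, 100 comments
assert estimate_charges(0, empty_videos=1) == 1  # 1 video, no comments
assert estimate_charges(50, replies=20) == 70    # comments + replies
```

Multiplying the item count by your configured event price gives the estimated run cost.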


This scraper extracts publicly available data from YouTube videos. We believe this kind of scraping is ethical when done for legitimate purposes. However, you should:

  • Only scrape data you have a legitimate reason to collect
  • Respect YouTube's Terms of Service
  • Be aware that results may contain personal data protected by GDPR or similar regulations
  • Consult legal counsel if unsure about your intended use

We strongly recommend reviewing YouTube's Terms of Service and any applicable data protection regulations in your jurisdiction.


Input

The YouTube Comments Scraper accepts the following input options. Click on the Input tab for detailed information about each field.

Required fields

  • videoUrls: YouTube video URLs to scrape comments from (single or multiple)

Optional fields

  • maxComments: Maximum comments per video (set to 0 for unlimited, default: 100)
  • sortBy: Comment sorting order - top, new, or old (default: top)
  • includeReplies: Whether to extract nested replies (default: true)
  • maxRetryAttempts: Maximum retry attempts for failed requests (default: 3)
  • delayBetweenRequests: Delay in milliseconds between requests (default: 1500)
  • proxyConfig: Proxy settings for avoiding rate limits
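The maxRetryAttempts and delayBetweenRequests options can be pictured as a standard retry loop. A sketch under assumed semantics (the actual implementation may differ; `fetch_with_retries` and `fetch` are hypothetical names):

```python
import time

def fetch_with_retries(fetch, max_retry_attempts=3, delay_between_requests=1.5):
    """Retry a flaky zero-argument `fetch` up to max_retry_attempts times,
    pausing between tries. Defaults mirror the input defaults
    (maxRetryAttempts: 3, delayBetweenRequests: 1500 ms)."""
    last_error = None
    for attempt in range(1, max_retry_attempts + 1):
        try:
            return fetch()
        except Exception as err:
            last_error = err
            if attempt < max_retry_attempts:
                time.sleep(delay_between_requests)
    raise last_error

# a fetch that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return "comments page"

print(fetch_with_retries(flaky, delay_between_requests=0))  # prints "comments page"
```

Raising the delay reduces the chance of rate limiting at the cost of a slower run.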

Input example

{
  "videoUrls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "maxComments": 100,
  "sortBy": "top",
  "includeReplies": true
}

Bulk scraping

To scrape multiple videos, provide URLs separated by commas or new lines:

{
  "videoUrls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ\nhttps://www.youtube.com/watch?v=jNQXAC9IVRw\nhttps://www.youtube.com/watch?v=video-id",
  "maxComments": 500,
  "sortBy": "new",
  "includeReplies": true,
  "proxyConfig": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}

Or use comma-separated URLs:

{
  "videoUrls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ,https://www.youtube.com/watch?v=jNQXAC9IVRw",
  "maxComments": 200,
  "sortBy": "top",
  "includeReplies": false
}

Output

You can download the dataset extracted by the YouTube Comments Scraper in various formats such as JSON, HTML, CSV, or Excel from the Storage tab.

Each comment item includes the following fields:

Field            Type     Description
commentText      string   The text content of the comment
author           string   Display name of the comment author
authorUrl        string   Link to the author's YouTube channel
authorId         string   Unique identifier for the author
authorThumbnail  string   URL of the author's profile picture
publishedTime    string   ISO 8601 formatted timestamp
likeCount        number   Number of likes on the comment
replyCount       number   Number of replies to the comment
isHearted        boolean  Whether the comment was hearted by the creator
isPinned         boolean  Whether the comment is pinned
commentId        string   Unique identifier for the comment
videoId          string   YouTube video ID
parentId         string   ID of the parent comment (null for top-level comments)
replies          array    Array of reply objects (if included)

Output example

{
  "commentText": "This video was incredibly helpful! Thank you for sharing this.",
  "author": "Sarah Johnson",
  "authorUrl": "https://www.youtube.com/@sarahcreates",
  "authorId": "UCX1234567890",
  "authorThumbnail": "https://yt3.ggpht.com/ytc/example",
  "publishedTime": "2025-01-15T14:30:00Z",
  "likeCount": 247,
  "replyCount": 12,
  "isHearted": true,
  "isPinned": false,
  "commentId": "Ugz1234567890",
  "videoId": "dQw4w9WgXcQ",
  "parentId": null,
  "replies": [
    {
      "commentText": "Great tip!",
      "author": "John Doe",
      "publishedTime": "2025-01-15T14:45:00Z",
      "likeCount": 15
    }
  ]
}
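Once downloaded as JSON, items with this shape are easy to post-process. A sketch that ranks top-level comments by likes, using the field names from the table above (`top_comments` is an illustrative helper, not part of the Actor):

```python
def top_comments(items, n=3):
    """Return the n most-liked top-level comments (parentId is null)."""
    top_level = [c for c in items if c.get("parentId") is None]
    return sorted(top_level, key=lambda c: c.get("likeCount", 0), reverse=True)[:n]

items = [
    {"commentText": "Great video!", "likeCount": 247, "parentId": None},
    {"commentText": "First!", "likeCount": 3, "parentId": None},
    {"commentText": "Agreed", "likeCount": 500, "parentId": "Ugz123"},  # a reply
]
best = top_comments(items, n=1)
print(best[0]["commentText"])  # prints "Great video!"
```

The reply with 500 likes is excluded because its parentId is set; only top-level comments are ranked.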

Error handling

The scraper includes error items for videos that could not be processed:

{
  "#error": true,
  "url": "https://www.youtube.com/watch?v=invalid",
  "error": "INVALID_URL",
  "errorMessage": "Invalid YouTube URL provided",
  "videoId": null,
  "status": "FAILED"
}

Tips & Best Practices

Getting more results

  • Use maxComments: 0 to extract all available comments (costs increase with comment count)
  • Enable proxies (proxyConfig.useApifyProxy: true) for larger scraping tasks
  • Use residential proxies (RESIDENTIAL proxy group) for better reliability on large scrapes

Reducing costs

  • Set a strict maxComments limit to control comment volume
  • Set includeReplies: false if you don't need reply data (replies count as separate items)
  • Test with a small batch first (1-2 videos, low comment limit) to estimate costs
  • Use sortBy: "top" to get the most relevant comments first
  • Set a spending limit in the Apify Console before running

Improving reliability

  • Enable proxies when scraping multiple videos to avoid rate limiting
  • The scraper automatically handles retries with a delay between requests
  • The scraper will stop automatically when your spending limit is reached
  • Check error items in your dataset to understand any processing issues

Common issues

  • Rate limiting: Comments may stop loading. Enable proxies to improve success rate.
  • No comments found: Some videos have comments disabled or restricted access.
  • Duplicate videos: The scraper automatically skips duplicate video IDs.
  • Failed requests: Error items in the dataset indicate videos that could not be processed. You are charged for these processing attempts.
  • Spending limit reached: The scraper stops automatically when your configured limit is reached.
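The duplicate-skipping mentioned above relies on the video ID, not the full URL, since the same video can appear with different query parameters. A sketch of how this could work (`video_id` and `dedupe` are illustrative helpers; youtu.be short links are not handled here):

```python
from urllib.parse import urlparse, parse_qs

def video_id(url: str):
    """Extract the YouTube video ID from a standard watch URL."""
    query = parse_qs(urlparse(url).query)
    return query.get("v", [None])[0]

def dedupe(urls):
    """Keep only the first URL seen for each video ID."""
    seen, unique = set(), []
    for url in urls:
        vid = video_id(url)
        if vid and vid not in seen:
            seen.add(vid)
            unique.append(url)
    return unique

urls = [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=30s",  # same video
]
print(len(dedupe(urls)))  # prints 1
```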

Limitations

  • Comment availability: Not all videos have comments enabled
  • Rate limits: YouTube may limit access; use proxies and delays
  • Video structure: Changes to YouTube's layout may affect scraping
  • Processing time: Large comment counts take longer to extract
  • Live data: Scraper fetches data at runtime; may not reflect real-time changes

Use Cases

Sentiment Analysis

Extract comments to analyze audience sentiment using natural language processing tools. Identify positive, negative, and neutral reactions to content.
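As a starting point, sentiment can be approximated with a word-list scorer over the commentText field. This is a deliberately naive sketch; a real analysis would use an NLP library, but it shows the shape of the pipeline (the word lists and `sentiment` helper are illustrative):

```python
POSITIVE = {"helpful", "great", "love", "amazing", "thank"}
NEGATIVE = {"bad", "boring", "hate", "worst", "confusing"}

def sentiment(comment_text: str) -> str:
    """Classify a comment as positive, negative, or neutral by counting
    matches against tiny example word lists."""
    words = set(comment_text.lower().replace("!", "").replace(".", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("This video was incredibly helpful! Thank you."))  # prints "positive"
```

Running this over every commentText in the dataset gives a rough positive/negative/neutral breakdown per video.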

Competitive Research

Analyze comments on competitor videos to understand what audiences respond to, identify content gaps, and discover trending topics.

Content Strategy

Review audience feedback to inform video topics, improve engagement, and tailor content to audience preferences.

Brand Monitoring

Track brand mentions and user feedback across YouTube videos to measure reputation and identify emerging issues.

Academic Research

Use comment data for social media research, community behavior studies, or digital communication analysis.


FAQ

What data can I extract?

You can extract public comment data including text, author information, timestamps, engagement metrics (likes, replies), and metadata. The scraper does not extract private user data or information that requires login.

How does pay-per-event pricing work?

You're charged for each item pushed to the dataset under the single comment event:

  • Each comment (and each reply) saved to the dataset
  • Each processing result per video URL (including error items and no-comment markers)

Pricing per event is configured in the Apify Console. Set a spending limit to control your maximum cost per run.

How many comments can I scrape?

There's no technical limit, but practical limits apply based on YouTube's rate limiting and your budget. Set maxComments to control volume and costs. Testing with small batches first is recommended.

Can I scrape comments from any video?

The scraper works on most publicly available YouTube videos with comments enabled. Videos with comments disabled, age-gated content, or private access restrictions may not work.

Does it scrape replies?

Yes, set includeReplies: true to extract nested replies. Each reply counts as a separate charged item, which significantly increases your comment count and cost.

What if a video has no comments?

The scraper returns an item with status: "NO_COMMENTS" to indicate processing completed successfully. You are charged 1 event for this processing result.

What if processing fails?

Failed processing returns an error item in the dataset with #error: true and details about the failure. You are charged 1 event for this processing attempt. Review error items to understand what went wrong.

How do I control costs?

  • Set maxComments to limit comments per video
  • Set includeReplies: false to avoid reply charges
  • Configure a spending limit in the Apify Console
  • Test with small batches first to estimate costs
  • Monitor your spending in real-time during the run

Can I export the data?

Yes, download your dataset in JSON, CSV, Excel, or HTML format from the Storage tab after the run completes.

Will I be charged if the scraper stops early?

You are only charged for events that actually occurred (comments extracted, videos processed). If the scraper stops due to reaching your spending limit, you are billed only for the events up to that point. All collected data is preserved.

How is this different from pay-per-result?

Pay-per-event provides more flexible pricing than pay-per-result. With pay-per-event:

  • You control exactly what events cost money
  • You can charge for processing attempts, not just successful results
  • Pricing is configured per event type in Apify Console
  • You can set spending limits for automatic cost control

Need help?

If you encounter issues or have questions:

  • Check the Logs tab in the run for detailed error messages
  • Review error items in the dataset to identify failed URLs
  • Adjust input settings based on the tips above
  • Open an issue if you find a bug or have feature requests

Disclaimer

This scraper is provided as-is for educational and research purposes. Users are responsible for ensuring their use complies with YouTube's Terms of Service and applicable laws. The scraper may not work if YouTube changes its website structure. We are not affiliated with or endorsed by YouTube.