YouTube Comments Scraper – Extract YouTube comment data
The YouTube Comments Scraper extracts comment data from YouTube videos, including text, author information, timestamps, likes, replies, and engagement metrics.
This YouTube scraper collects publicly available comments from YouTube videos, making it useful for content analysis, audience research, and sentiment analysis. The scraper works by navigating to video pages and extracting comment information using browser automation.
Why scrape YouTube comments?
YouTube comments provide valuable insights into audience sentiment, content reception, and community engagement. By collecting YouTube comment data, you can:
- Analyze audience reactions to understand what works in your content
- Monitor competitor videos to see what viewers respond to positively
- Track brand mentions and user feedback across videos
- Conduct sentiment analysis at scale
- Identify trending topics and audience pain points
- Research audience demographics and behavior patterns
The data is ideal for digital marketers, content creators, researchers, and businesses looking to understand YouTube audience engagement.
How to scrape YouTube comments
The YouTube Comments Scraper works by:
- Accepting YouTube video URLs as input
- Navigating to each video page using a browser
- Extracting comment data from the page
- Handling pagination to load more comments
- Saving the data in a structured dataset
You can scrape comments from individual videos or multiple videos at once by providing a list of URLs.
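As an illustration only, the flow above could be sketched with the Apify SDK and Crawlee roughly like this. This is not the Actor's actual implementation; the DOM selectors, scrolling logic, and input handling are assumptions for the sketch:

```javascript
import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init();

// Input shape follows the fields documented in the Input section (videoUrls, maxComments).
const { videoUrls = '', maxComments = 100 } = (await Actor.getInput()) ?? {};
const urls = videoUrls.split(/[\n,]+/).map((u) => u.trim()).filter(Boolean);

const crawler = new PlaywrightCrawler({
    async requestHandler({ page, request, log }) {
        log.info(`Processing ${request.url}`);
        let comments = [];
        let previousCount = -1;
        // Scroll to trigger YouTube's lazy-loaded comment pagination until
        // enough comments are rendered or no new ones appear.
        while (comments.length < maxComments && comments.length !== previousCount) {
            previousCount = comments.length;
            await page.mouse.wheel(0, 5000);
            await page.waitForTimeout(1500);
            comments = await page.$$eval('ytd-comment-thread-renderer', (els) =>
                els.map((el) => ({
                    commentText: el.querySelector('#content-text')?.textContent?.trim(),
                    author: el.querySelector('#author-text')?.textContent?.trim(),
                })),
            );
        }
        // Each pushed item becomes one dataset item (and one charged event, see Pricing below).
        await Actor.pushData(comments.slice(0, maxComments), 'comment');
    },
});

await crawler.run(urls);
await Actor.exit();
```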
Getting started
- Create a free Apify account
- Add YouTube video URLs in the input
- Configure your settings (optional)
- Run the scraper
- Download your data in JSON, CSV, or Excel format
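If you prefer to run the scraper programmatically instead of from the Console, a minimal sketch with the apify-client package could look like this. The Actor slug and the token environment variable are placeholders you would replace with your own values:

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Replace with the Actor's ID or "username/actor-name" slug from the Console.
const run = await client.actor('YOUR_USERNAME/youtube-comments-scraper').call({
    videoUrls: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ',
    maxComments: 100,
    sortBy: 'top',
    includeReplies: true,
});

// Fetch the scraped comments from the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Scraped ${items.length} items`);
```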
Pricing
The YouTube Comments Scraper can be monetized using Apify's Pay-Per-Event pricing model. This model allows you to charge users for actual results rather than compute time.
How pay-per-event pricing works
With Pay-Per-Event (PPE), you configure pricing for specific events that occur during scraping. This scraper uses a single event type:
- `comment` event: Charged for each item saved to the dataset (comments, replies, or processing results)
The code automatically charges this event using `Actor.pushData(data, 'comment')`. Each item pushed triggers one charge.
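In other words, inside the Actor code the dataset write and the charge are the same call. A simplified sketch, assuming the Apify JS SDK call referenced above (the example objects are illustrative):

```javascript
import { Actor } from 'apify';

await Actor.init();

// One object pushed -> one dataset item -> one charged 'comment' event.
await Actor.pushData(
    { commentText: 'Great video!', author: 'Example User' },
    'comment',
);

// Error and no-comment items go through the same call, so they are billed too.
await Actor.pushData(
    { '#error': true, url: 'https://www.youtube.com/watch?v=invalid', error: 'INVALID_URL' },
    'comment',
);

await Actor.exit();
```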
What users are charged for
Users are charged for:
- Every comment extracted from YouTube videos
- Every reply (when `includeReplies: true`)
- Every error item created when processing fails
- Every item when a video returns no comments
This transparent billing means users pay for all work performed, not just successful results.
Setting up Pay-Per-Event pricing
Pricing is configured in the Apify Console, not in the project code. To set up PPE:
- Go to your Actor in Apify Console
- Navigate to the Monetization tab
- Select Pay-Per-Event pricing model
- Add a new event:
  - Event name: `comment`
  - Price per event: $0.001 (or your preferred price)
- Optional: Enable the synthetic `apify-actor-start` event for startup costs
- Set a default spending limit for users
The code is already prepared for PPE. You just need to configure the pricing in the Console.
Important features implemented
✓ Automatic charging: Each push to dataset triggers a charge
✓ Spending limits: Users set max charge; scraper stops when reached
✓ Cost control: Users can limit comments per video
✓ Graceful stop: Stops automatically when the spending limit is reached, preserving collected data
For Actor creators
To monetize this Actor:
- Configure the `comment` event pricing in Apify Console
- Set a competitive price that covers your costs (proxies, compute time)
- Consider enabling `apify-actor-start` for the synthetic 5-second free compute
- Test with small runs to ensure pricing is reasonable
For Actor users
To control your costs:
- Use `maxComments` to limit results per video
- Set `includeReplies: false` to reduce charges
- Test with small batches first
- Set a spending limit in the input before running
- Monitor progress; scraping stops automatically at your limit
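Put together, a cost-conscious run might use an input like the following (field names match the Input section below; the values are only an example):

```json
{
  "videoUrls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "maxComments": 50,
  "includeReplies": false
}
```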
Cost estimation
The number of charges equals the number of items in the dataset:
| Scenario | Items in Dataset | Charges |
|---|---|---|
| 1 video, 100 comments | ~100 | ~100 × event price |
| 1 video, 0 comments | 1 | 1 × event price |
| 1 video, 50 comments + 20 replies | 70 (with replies on) | 70 × event price |
| 5 videos, some errors | All successful + error items | Total items × event price |
Note: The actual price per event depends on the pricing you configure in Apify Console.
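For example, at the illustrative $0.001 per event used earlier, scraping 5 videos with roughly 200 comments each (replies off) produces about 1,000 dataset items, and therefore about $1.00 in event charges for the run.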
Is it legal to scrape YouTube comments?
This scraper extracts publicly available data from YouTube videos. We believe that collecting publicly available data for legitimate purposes is ethical. However, you should:
- Only scrape data you have a legitimate reason to collect
- Respect YouTube's Terms of Service
- Be aware that results may contain personal data protected by GDPR or similar regulations
- Consult legal counsel if unsure about your intended use
We strongly recommend reviewing YouTube's Terms of Service and any applicable data protection regulations in your jurisdiction.
Input
The YouTube Comments Scraper accepts the following input options. Click on the Input tab for detailed information about each field.
Required fields
- videoUrls: YouTube video URLs to scrape comments from (single or multiple)
Optional fields
- maxComments: Maximum comments per video (set to 0 for unlimited, default: 100)
- sortBy: Comment sorting order: `top`, `new`, or `old` (default: `top`)
- includeReplies: Whether to extract nested replies (default: `true`)
- maxRetryAttempts: Maximum retry attempts for failed requests (default: 3)
- delayBetweenRequests: Delay in milliseconds between requests (default: 1500)
- proxyConfig: Proxy settings for avoiding rate limits
Input example
{"videourls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ","maxcomments": 100,"sortby": "top","includereplies": true}
Bulk scraping
To scrape multiple videos, provide URLs separated by commas or new lines:
{"videourls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ\nhttps://www.youtube.com/watch?v=jNQXAC9IVRw\nhttps://www.youtube.com/watch?v=video-id","maxcomments": 500,"sortby": "new","includereplies": true,"proxyConfig": {"useApifyProxy": true,"apifyProxyGroups": ["RESIDENTIAL"]}}
Or use comma-separated URLs:
{"videourls": "https://www.youtube.com/watch?v=dQw4w9WgXcQ,https://www.youtube.com/watch?v=jNQXAC9IVRw","maxcomments": 200,"sortby": "top","includereplies": false}
Output
You can download the dataset extracted by the YouTube Comments Scraper in various formats such as JSON, HTML, CSV, or Excel from the Storage tab.
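If you want to pull the results into a script instead of downloading them manually, the same data is available through the Apify API's dataset items endpoint. A minimal sketch; the dataset ID and token are placeholders:

```javascript
// Dataset items endpoint; format can be json, csv, xlsx, or html.
const datasetId = 'YOUR_DATASET_ID';
const url = `https://api.apify.com/v2/datasets/${datasetId}/items?format=csv&token=${process.env.APIFY_TOKEN}`;

const response = await fetch(url);
const csv = await response.text();
console.log(csv.slice(0, 500)); // preview the first rows
```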
Each comment item includes the following fields:
| Field | Type | Description |
|---|---|---|
| commentText | string | The text content of the comment |
| author | string | Display name of the comment author |
| authorUrl | string | Link to the author's YouTube channel |
| authorId | string | Unique identifier for the author |
| authorThumbnail | string | URL of the author's profile picture |
| publishedTime | string | ISO 8601 formatted timestamp |
| likeCount | number | Number of likes on the comment |
| replyCount | number | Number of replies to the comment |
| isHearted | boolean | Whether the comment was hearted by the creator |
| isPinned | boolean | Whether the comment is pinned |
| commentId | string | Unique identifier for the comment |
| videoId | string | YouTube video ID |
| parentId | string | ID of the parent comment, or null for top-level comments |
| replies | array | Array of reply objects (if included) |
Output example
{"commentText": "This video was incredibly helpful! Thank you for sharing this.","author": "Sarah Johnson","authorUrl": "https://www.youtube.com/@sarahcreates","authorId": "UCX1234567890","authorThumbnail": "https://yt3.ggpht.com/ytc/example","publishedTime": "2025-01-15T14:30:00Z","likeCount": 247,"replyCount": 12,"isHearted": true,"isPinned": false,"commentId": "Ugz1234567890","videoId": "dQw4w9WgXcQ","parentId": null,"replies": [{"commentText": "Great tip!","author": "John Doe","publishedTime": "2025-01-15T14:45:00Z","likeCount": 15}]}
Error handling
The scraper includes error items for videos that could not be processed:
{"#error": true,"url": "https://www.youtube.com/watch?v=invalid","error": "INVALID_URL","errorMessage": "Invalid Youtube URL provided","videoId": null,"status": "FAILED"}
Tips & Best Practices
Getting more results
- Use `maxComments: 0` to extract all available comments (costs increase with comment count)
- Enable proxies (`proxyConfig.useApifyProxy: true`) for larger scraping tasks
- Use residential proxies (the `RESIDENTIAL` proxy group) for better reliability on large scrapes
Reducing costs
- Set a strict `maxComments` limit to control comment volume
- Set `includeReplies: false` if you don't need reply data (replies count as separate items)
- Test with a small batch first (1-2 videos, low comment limit) to estimate costs
- Use `sortBy: "top"` to get the most relevant comments first
- Set a spending limit in the Apify Console before running
Improving reliability
- Enable proxies when scraping multiple videos to avoid rate limiting
- The scraper automatically handles retries with a delay between requests
- The scraper will stop automatically when your spending limit is reached
- Check error items in your dataset to understand any processing issues
Common issues
- Rate limiting: Comments may stop loading. Enable proxies to improve success rate.
- No comments found: Some videos have comments disabled or restricted access.
- Duplicate videos: The scraper automatically skips duplicate video IDs.
- Failed requests: Error items in the dataset indicate videos that could not be processed. You are charged for these processing attempts.
- Spending limit reached: The scraper stops automatically when your configured limit is reached.
Limitations
- Comment availability: Not all videos have comments enabled
- Rate limits: YouTube may limit access; use proxies and delays
- Video structure: Changes to YouTube's layout may affect scraping
- Processing time: Large comment counts take longer to extract
- Live data: Scraper fetches data at runtime; may not reflect real-time changes
Use Cases
Sentiment Analysis
Extract comments to analyze audience sentiment using natural language processing tools. Identify positive, negative, and neutral reactions to content.
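As a rough illustration, the exported JSON can be fed straight into an off-the-shelf library. The sketch below uses the `sentiment` npm package as one possible choice, not something bundled with this Actor; `dataset.json` is assumed to be the file exported from the Storage tab:

```javascript
import { readFileSync } from 'node:fs';
import Sentiment from 'sentiment';

// Load the comments exported from the run's dataset.
const comments = JSON.parse(readFileSync('dataset.json', 'utf8'));
const sentiment = new Sentiment();

// Score each comment's text; positive scores lean positive, negative scores lean negative.
const scored = comments
    .filter((item) => item.commentText)
    .map((item) => ({
        text: item.commentText,
        score: sentiment.analyze(item.commentText).score,
    }));

const average = scored.reduce((sum, c) => sum + c.score, 0) / scored.length;
console.log(`Average sentiment score across ${scored.length} comments: ${average.toFixed(2)}`);
```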
Competitive Research
Analyze comments on competitor videos to understand what audiences respond to, identify content gaps, and discover trending topics.
Content Strategy
Review audience feedback to inform video topics, improve engagement, and tailor content to audience preferences.
Brand Monitoring
Track brand mentions and user feedback across YouTube videos to measure reputation and identify emerging issues.
Academic Research
Use comment data for social media research, community behavior studies, or digital communication analysis.
FAQ
What data can I extract?
You can extract public comment data including text, author information, timestamps, engagement metrics (likes, replies), and metadata. The scraper does not extract private user data or information that requires login.
How does pay-per-event pricing work?
You're charged for each event that occurs during scraping. The main events are:
- Each comment saved to the dataset
- Each video URL processed (including errors or no-comment cases)
Pricing per event is configured in the Apify Console. Set a spending limit to control your maximum cost per run.
How many comments can I scrape?
There's no technical limit, but practical limits apply based on YouTube's rate limiting and your budget. Set `maxComments` to control volume and costs. Testing with small batches first is recommended.
Can I scrape comments from any video?
The scraper works on most publicly available YouTube videos with comments enabled. Videos with comments disabled, age-gated content, or private access restrictions may not work.
Does it scrape replies?
Yes, set includeReplies: true to extract nested replies. Each reply counts as a separate charged item, which significantly increases your comment count and cost.
What if a video has no comments?
The scraper returns an item with status: "NO_COMMENTS" to indicate processing completed successfully. You are charged 1 event for this processing result.
What if processing fails?
Failed processing returns an error item in the dataset with #error: true and details about the failure. You are charged 1 event for this processing attempt. Review error items to understand what went wrong.
How do I control costs?
- Set `maxComments` to limit comments per video
- Set `includeReplies: false` to avoid reply charges
- Configure a spending limit in the Apify Console
- Test with small batches first to estimate costs
- Monitor your spending in real-time during the run
Can I export the data?
Yes, download your dataset in JSON, CSV, Excel, or HTML format from the Storage tab after the run completes.
Will I be charged if the scraper stops early?
You are only charged for events that actually occurred (comments extracted, videos processed). If the scraper stops due to reaching your spending limit, you are billed only for the events up to that point. All collected data is preserved.
How is this different from pay-per-result?
Pay-per-event provides more flexible pricing than pay-per-result. With pay-per-event:
- You control exactly what events cost money
- You can charge for processing attempts, not just successful results
- Pricing is configured per event type in Apify Console
- You can set spending limits for automatic cost control
Need help?
If you encounter issues or have questions:
- Check the Logs tab in the run for detailed error messages
- Review error items in the dataset to identify failed URLs
- Adjust input settings based on the tips above
- Open an issue if you find a bug or have feature requests
Disclaimer
This scraper is provided as-is for educational and research purposes. Users are responsible for ensuring their use complies with YouTube's Terms of Service and applicable laws. The scraper may not work if YouTube changes its website structure. We are not affiliated with or endorsed by YouTube.