Get YouTube videos from multiple @channels in one fast run

Get the video IDs, titles, and links from a YouTube page. Multiple YouTube URLs in one fast run.

Pricing: $29.00/month + usage
Developer: scraping automation
YouTube Channel Video Scraper
A powerful Apify actor that extracts video information from YouTube channels. This tool lets you scrape video IDs, titles, and URLs from multiple YouTube channels in a single run. If you focus on a single channel and need more features, please see https://apify.com/runtime/youtube-channel-scraper
Features
- Extract video data from multiple YouTube channels simultaneously
- Get video IDs, titles, and direct URLs
- Extract view counts (e.g., "12K views", "1.2M views")
- Extract publication dates (e.g., "2 weeks ago", "1 month ago")
- Convert views to integers (e.g., "2.5K views" → 2500)
- Convert dates to days (e.g., "1 month ago" → 30)
- Filter videos by publication date (last 1, 7, 15, 30, 60, or 90 days)
- Filter videos by minimum views (e.g., only videos with 1000+ views)
- Memory-optimized for large-scale crawling (100+ channels)
- Progress tracking with detailed logging for each channel
- Enhanced debugging with metadata extraction summaries
- Handles infinite scrolling and lazy loading
- Automatic early stopping when filter criteria are met
- Robust error handling - continues processing even if individual channels fail
- JSON output format for easy integration
- No API key required
Use Cases
- Content research and analysis
- Video cataloging
- Channel monitoring
- Data collection for analytics
- Content aggregation
- Channel Activity Tracking: Use `filterByDays` to follow a channel's recent activity and update your database periodically (e.g., daily or weekly), tracking only the latest content without re-scraping the entire channel history
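The update-your-database workflow above boils down to merging each run's results into a store keyed by video `id`. A minimal in-memory sketch (the helper name is hypothetical, not part of the actor; it assumes the `id` and `title` fields the actor returns):

```javascript
// Sketch of the periodic-update pattern: merge a new run's videos into an
// existing store, keeping only videos not seen before (keyed by video id).
// Hypothetical helper, not part of the actor.
function mergeNewVideos(store, scrapedVideos) {
  const fresh = scrapedVideos.filter((v) => !store.has(v.id));
  for (const v of fresh) store.set(v.id, v);
  return fresh; // the newly discovered videos from this run
}

const store = new Map([["abc", { id: "abc", title: "already known" }]]);
const run = [
  { id: "abc", title: "already known" },
  { id: "xyz", title: "new upload" },
];
console.log(mergeNewVideos(store, run).map((v) => v.id)); // [ 'xyz' ]
```

Running this daily or weekly with a small `filterByDays` keeps the store current without re-processing old videos.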
Input Format
Provide an array of YouTube channel URLs in the following format:
https://www.youtube.com/@channelname
⚠️ Memory Note: If you have 50+ channels in your input list, you must increase the Actor's memory to 4-8 GB (or more) to prevent OOM (out-of-memory) errors. See the Memory Requirements section below.
Basic Input Example:
```json
{
  "urls": [
    "https://www.youtube.com/@channel1",
    "https://www.youtube.com/@channel2"
  ]
}
```
Advanced Input Options:
```json
{
  "urls": ["https://www.youtube.com/@channel1"],
  "convertViewsToInteger": true,
  "convertDateToDays": true,
  "filterByDays": 30,
  "minViews": 1000
}
```
Input Parameters:
- `urls` (required): Array of YouTube channel URLs
  - Supported formats: `@channelname`, `/c/channelname`, `/channel/channelname`
  - Automatically normalized to the `/videos` endpoint
- `convertViewsToInteger` (optional, default: `false`):
  - If `true`, converts view strings to integers
  - Example: `"2.5K views"` → `2500`, `"1.2M views"` → `1200000`
  - Supports: K (thousands), M (millions), B (billions)
- `convertDateToDays` (optional, default: `false`):
  - If `true`, converts date strings to a number of days
  - Example: `"1 month ago"` → `30`, `"2 weeks ago"` → `14`
  - Approximations: 30 days/month, 365 days/year
- `filterByDays` (optional, default: `60`):
  - Filters videos published within the last N days
  - Options: `0` (all videos), `1`, `7`, `15`, `30`, `60`, `90`
  - Default: `60` (returns videos from the last 60 days)
  - Example: `30` returns only videos from the last 30 days
  - Note: The scraper automatically stops scrolling early once enough recent videos are found, optimizing performance
  - Use Case: Ideal for tracking channel activity and keeping your database updated periodically. Instead of scraping the entire channel history each time, run the scraper daily or weekly with `filterByDays: 7` to capture only the latest content
- `minViews` (optional, default: `0`):
  - Filters videos with at least this number of views (integer)
  - Set to `0` or leave empty to disable this filter
  - Example: `1000` returns only videos with at least 1000 views
  - Note: Works with both the string format (`"2.5K views"`) and the integer format (if `convertViewsToInteger` is enabled); view strings are converted to integers for comparison
  - Use Case: Useful for finding popular content, filtering out low-engagement videos, or focusing on videos above a certain view threshold
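For reference, the two conversion options behave roughly like the sketch below. These are illustrative helpers (hypothetical names, not the actor's internal code) implementing the mappings and approximations described above:

```javascript
// Illustrative sketch of the view and date conversions described above.
// Hypothetical helper names, not the actor's internal implementation.

// "2.5K views" -> 2500, "1.2M views" -> 1200000, "500 views" -> 500
function parseViews(viewString) {
  const match = /([\d.]+)\s*([KMB])?/i.exec(viewString);
  if (!match) return null;
  const multipliers = { K: 1e3, M: 1e6, B: 1e9 };
  const factor = match[2] ? multipliers[match[2].toUpperCase()] : 1;
  return Math.round(parseFloat(match[1]) * factor);
}

// "2 weeks ago" -> 14, "1 month ago" -> 30 (30 days/month, 365 days/year)
function parseDateToDays(dateString) {
  const match = /(\d+)\s*(hour|day|week|month|year)/i.exec(dateString);
  if (!match) return null;
  const daysPerUnit = { hour: 0, day: 1, week: 7, month: 30, year: 365 };
  return parseInt(match[1], 10) * daysPerUnit[match[2].toLowerCase()];
}

console.log(parseViews("2.5K views"));       // 2500
console.log(parseDateToDays("2 weeks ago")); // 14
```

This can be useful if you keep the default string formats and want to do the conversion downstream instead.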
Output Format
The actor returns a structured JSON array containing channel information and their respective videos:
```json
[
  {
    "title": "Channel1 - YouTube",
    "videos": [
      {
        "id": "RfTgYhUU791",
        "url": "https://www.youtube.com/watch?v=RfTgYhUU791",
        "title": "Video one of page",
        "views": "12K views",
        "publishedDate": "2 weeks ago"
      },
      {
        "id": "RfTgYhUU792",
        "url": "https://www.youtube.com/watch?v=RfTgYhUU792",
        "title": "Video two of page",
        "views": "1.2M views",
        "publishedDate": "1 month ago"
      }
      // ... more videos
    ]
  }
]
```
Data Fields Explained
Each video object contains the following fields:
- `id`: Unique YouTube video identifier
- `url`: Direct link to the video
- `title`: Video title as displayed on YouTube
- `views`: View count (format depends on the `convertViewsToInteger` option)
  - Default: string format (e.g., "12K views", "1.2M views", "500 views")
  - With `convertViewsToInteger: true`: integer (e.g., `12000`, `1200000`, `500`)
- `publishedDate`: Publication date (format depends on the `convertDateToDays` option)
  - Default: string format (e.g., "2 weeks ago", "1 month ago", "3 days ago")
  - With `convertDateToDays: true`: number of days (e.g., `14`, `30`, `3`)
Note: View counts and publication dates are extracted from YouTube's metadata and may not be available for all videos depending on YouTube's display settings.
Example Output with Conversions:
With convertViewsToInteger: true and convertDateToDays: true:
```json
{
  "id": "RfTgYhUU791",
  "url": "https://www.youtube.com/watch?v=RfTgYhUU791",
  "title": "Video Title",
  "views": 2500,
  "publishedDate": 14
}
```
With filterByDays: 30:
- Only videos published within the last 30 days are included in the output
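Because the output groups videos by channel, downstream code often flattens and filters it. A minimal post-processing sketch, assuming `convertViewsToInteger: true` and `convertDateToDays: true` were used so that `views` and `publishedDate` are numbers (the helper name is hypothetical):

```javascript
// Post-processing sketch for the output shape shown above. Assumes views
// were converted to integers and dates to days by the actor's options.
function flattenAndFilter(channels, { maxDays = 30, minViews = 1000 } = {}) {
  return channels.flatMap((channel) =>
    channel.videos
      .filter((v) => v.publishedDate <= maxDays && v.views >= minViews)
      .map((v) => ({ channel: channel.title, ...v }))
  );
}

const sample = [
  {
    title: "Channel1 - YouTube",
    videos: [
      { id: "a", url: "https://www.youtube.com/watch?v=a", title: "recent hit", views: 2500, publishedDate: 14 },
      { id: "b", url: "https://www.youtube.com/watch?v=b", title: "old video", views: 90000, publishedDate: 400 },
    ],
  },
];

console.log(flattenAndFilter(sample).map((v) => v.id)); // [ 'a' ]
```

Note that the actor's own `filterByDays` and `minViews` options achieve the same result server-side, so this is only needed for extra, custom filtering.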
Advanced Usage
For more advanced features and channel-specific scraping, check out our dedicated YouTube Channel Scraper.
Performance & Memory Optimization
The scraper is optimized for large-scale crawling with 100+ channels:
- Memory-efficient: Uses optimized browser settings and sequential processing
- Progress tracking: Shows `[X/Total]` progress for each channel being processed
- Early stopping: Automatically stops scrolling when filter criteria are met (e.g., enough recent videos found)
- Resource cleanup: Cleans up memory between channels to prevent OOM errors
- Error recovery: Continues processing remaining channels even if some fail
Large-Scale Crawling
The scraper can handle 100+ channels in a single run:
- Processes channels sequentially to minimize memory usage
- Estimated time: ~2-5 minutes per channel
- Automatic garbage collection between channels
- Detailed logging for monitoring progress
Real-World Performance Statistics
Based on a production run with 113 YouTube channels:
| Metric | Value |
|---|---|
| Total Channels Processed | 113 |
| Total Videos Extracted | 13,101 |
| Channels with Videos | 112 (99.1%) |
| Channels without Videos | 1 (0.9%) |
| Average Videos per Channel | 115.9 |
| Average (channels with videos) | 117.0 |
| Min Videos per Channel | 4 |
| Max Videos per Channel | 508 |
| Processing Time | 52.7 minutes (3,161.8 seconds) |
| Average Time per Channel | 28.0 seconds |
| Success Rate | 99.1% |
Performance Notes:
- Processing speed: 28 seconds per channel (4-10x faster than initial estimates)
- Efficient filtering: Early stopping when filter criteria are met reduces processing time
- Memory optimized: Successfully processed 113 channels without memory issues
- High success rate: 99.1% of channels processed successfully
⚠️ Memory Requirements for Long Input Lists
IMPORTANT: If you have a long list of URLs (50+ channels), you MUST increase the Actor's memory allocation to prevent OOM (Out Of Memory) errors.
Memory recommendations by number of channels:
| Number of Channels | Recommended Memory | Notes |
|---|---|---|
| 1-20 channels | Default (2 GB) | Usually sufficient |
| 21-50 channels | 4 GB | Recommended for medium batches |
| 51-100 channels | 8 GB | Required for large batches |
| 101-200 channels | 16 GB | Required for very large batches |
| 200+ channels | Split into smaller batches | Process 50-100 at a time |
How to increase memory in Apify:
- Go to your Actor settings in Apify Console
- Navigate to Settings → Resources
- Increase Memory to the recommended value based on your channel count
- Save and restart the Actor
If you get "Exit Code 137" or "OOM" error:
- This means the Actor ran out of memory
- Solution: Increase memory allocation and restart the Actor
- The Actor will continue where it left off if you use the same run
- Alternatively, split your URL list into smaller batches (50-100 channels per run)
Debugging & Logging
The scraper provides comprehensive logging to help diagnose issues:
- Extraction summary: Shows total videos found and metadata extraction stats
- Sample videos: Displays first 3 videos with their metadata for verification
- Filtering details: Shows which videos are filtered and why
- Warnings: Alerts when videos are filtered out due to missing metadata
Example log output:
```
=== Video Extraction Summary ===
Total video links extracted: 45
Metadata extraction:
  Videos with date: 43 (2 without date)
  Videos with views: 45 (0 without views)
Filtered videos by days: 45 total, 12 within last 35 days
Filtered videos by minViews (900): 12 before, 8 after
```
Troubleshooting
Getting 0 videos from a channel?
If you get 0 videos from a channel, check the logs for:
- No videos found: The channel may have no videos, or the page structure is different
  - Check: `Total video links extracted: 0`
  - Solution: Verify the channel URL manually in a browser
- Videos filtered out by date: No videos match the `filterByDays` criteria
  - Check: `Filtered videos by days: X total, 0 within last N days`
  - Solution: Increase `filterByDays` or set it to `0` to get all videos
- Videos filtered out by views: No videos match the `minViews` criteria
  - Check: `Filtered videos by minViews: X before, 0 after`
  - Solution: Lower `minViews` or set it to `0` to disable the filter
- Missing metadata: Videos found but no date/views extracted
  - Check: `Videos with date: 0` or `Videos with views: 0`
  - Solution: The channel's page structure may be different; check the logs for sample videos
Memory Issues (Exit Code 137)
If the crawler is killed with exit code 137 (OOM - Out Of Memory):
This error means the Actor ran out of memory. Here's how to fix it:
- Increase Actor Memory (Recommended):
  - Go to Actor Settings → Resources → Memory
  - Increase to 4-8 GB for 50-100 channels
  - Increase to 16 GB for 100+ channels
  - Restart the Actor - it will continue where it left off
- Split into Smaller Batches (Alternative):
  - Process channels in batches of 50-100 at a time
  - This avoids memory issues but requires multiple runs
- Check Your Input List Size:
  - 1-20 channels: default memory (2 GB) should be sufficient
  - 21-50 channels: requires 4 GB minimum
  - 51-100 channels: requires 8 GB minimum
  - 100+ channels: requires 16 GB, or split into batches
Note: Even with memory optimizations, Playwright browsers require significant memory. Long input lists (50+ URLs) will require increased memory allocation to complete successfully.
Fatal JavaScript Error (Exit Code 132)
If you see "Fatal JavaScript invalid size error" or "Illegal instruction (core dumped)" with exit code 132:
This error occurs when JavaScript tries to create an array that's too large for memory.
Solutions:
- Increase Actor Memory (Primary solution):
  - This error typically occurs with large datasets
  - Increase memory to 8-16 GB depending on your channel count
  - The scraper now automatically chunks large datasets to prevent this
- The scraper now includes automatic protection:
  - Videos are automatically limited to 10,000 per channel (safety limit)
  - Large datasets are sent in chunks of 1,000 videos to prevent memory overflow
  - This prevents the "invalid size error" from occurring
- If the error persists:
  - Split your URL list into smaller batches (25-50 channels per run)
  - Use filters (`filterByDays`, `minViews`) to reduce the number of videos per channel
  - Increase memory allocation to 16 GB or more
Note: This error usually happens after all crawls complete, when writing the final dataset. The chunking optimization should prevent this issue.
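The chunking protection can be pictured as below. This is a sketch of the idea, not the actor's actual code; in the actor, each resulting chunk would then be written with the Apify SDK's `Actor.pushData`:

```javascript
// Sketch of the chunking protection described above: cap per-channel
// results at a safety limit, then split them into chunks of 1,000 so no
// single oversized array is ever written at once. Illustrative only.
function chunkVideos(videos, { maxVideos = 10000, chunkSize = 1000 } = {}) {
  const capped = videos.slice(0, maxVideos); // per-channel safety limit
  const chunks = [];
  for (let i = 0; i < capped.length; i += chunkSize) {
    chunks.push(capped.slice(i, i + chunkSize));
  }
  return chunks;
}

const videos = Array.from({ length: 2500 }, (_, i) => ({ id: `v${i}` }));
console.log(chunkVideos(videos).map((c) => c.length)); // [ 1000, 1000, 500 ]
```

Keeping each write small bounds peak memory during the final dataset flush, which is where this error typically appears.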
Slow Processing
Processing is sequential (one channel at a time) to prevent memory issues:
- Normal speed: ~2-5 minutes per channel
- For 100 channels: expect 3-8 hours total
- This is intentional to prevent memory exhaustion
Timeout Issues
If you encounter timeout errors, you may need to increase the timeout values in the code.
Current timeout settings:
- Request handler timeout: 300 seconds (5 minutes) - Main timeout for entire page processing
- Initial selector wait: 10 seconds - Wait for videos to appear
- Consent button click: 3 seconds - Wait for cookie consent dialog
- Scroll wait: 3 seconds - Wait after each scroll
- New videos detection: 5 seconds - Wait for new videos to load
- Network idle wait: 1.5 seconds - Wait for network to be idle
- Final wait: 1 second - Final wait before extraction
How to increase timeouts:
- Open `src/main.js` in your editor
- Increase the main request handler timeout (line ~145):

```javascript
const crawler = new PlaywrightCrawler({
  requestHandlerTimeoutSecs: 300, // Current: 5 minutes
```

Change to:

```javascript
  requestHandlerTimeoutSecs: 600, // 10 minutes for large channels
```

- Increase the initial selector timeout (line ~187):

```javascript
await page.waitForSelector("ytd-rich-item-renderer...", {
  timeout: 10000 // Current: 10 seconds
})
```

Change to:

```javascript
await page.waitForSelector("ytd-rich-item-renderer...", {
  timeout: 30000 // 30 seconds for slow loading
})
```

- Increase scroll wait times (line ~375):

```javascript
await page.waitForTimeout(3000); // Current: 3 seconds
```

Change to:

```javascript
await page.waitForTimeout(5000); // 5 seconds for slow connections
```

- Increase the new-videos detection timeout (line ~384):

```javascript
await page.waitForFunction(..., {
  timeout: 5000 // Current: 5 seconds
})
```

Change to:

```javascript
await page.waitForFunction(..., {
  timeout: 10000 // 10 seconds for slow loading
})
```

- Increase the network idle timeout (line ~397):

```javascript
await page.waitForLoadState('networkidle', {
  timeout: 1500 // Current: 1.5 seconds
})
```

Change to:

```javascript
await page.waitForLoadState('networkidle', {
  timeout: 5000 // 5 seconds for slow networks
})
```
When to increase timeouts:
- Channels with many videos (500+)
- Slow internet connections
- YouTube pages that load slowly
- Getting timeout errors in logs
- Processing channels with heavy content
Recommended timeout values for different scenarios:
| Scenario | requestHandlerTimeoutSecs | waitForSelector | waitForTimeout |
|---|---|---|---|
| Normal (default) | 300 (5 min) | 10000 (10s) | 3000 (3s) |
| Large channels (500+ videos) | 600 (10 min) | 30000 (30s) | 5000 (5s) |
| Slow connection | 900 (15 min) | 60000 (60s) | 8000 (8s) |
| Very large channels (1000+ videos) | 1200 (20 min) | 60000 (60s) | 10000 (10s) |
Limitations
- Rate limiting may apply based on YouTube's policies
- Some channels may have restricted access
- Video count may be limited by YouTube's pagination
- Videos without metadata (views/date) will be filtered out when using filters
- Processing time increases with the number of channels (sequential processing)
- Memory usage scales with the number of channels (optimized but not unlimited)
Support
For questions, issues, or feature requests, please open an issue.
⚠️ Legal Disclaimer
This project is intended for educational and research purposes only. Use of this Actor must comply with YouTube's Terms of Service and robots.txt policies.
- Compliance: Ensure your scraping activities do not violate YouTube's policies.
- Ethical Considerations: Avoid aggressive scraping practices that might harm YouTube's infrastructure.
- Intended Use: For commercial or production use, consider exploring YouTube's official API solutions.
