Youtube Scraper

Pricing: $19.99/month + usage
Rating: 0.0 (0 reviews)
Developer: ScrapePilot

Actor stats
Bookmarked: 0
Total users: 2
Monthly active users: 0
Last modified: 22 days ago
Enter one or more YouTube search keywords (for example "Crawlee", "fitness workout"). The actor will run a full scrape for each term and collect matching videos, shorts, and streams.
💬 For custom solutions or feature requests, contact us at dev.scraperengine@gmail.com
Set how many regular (non‑Shorts, non‑live) videos to scrape for each search term. Use 0 to skip long‑form videos completely and focus only on Shorts or streams.
Control how many YouTube Shorts (vertical clips) to collect per keyword. Use 0 if you do not want to include Shorts in your dataset.
Limit how many live or upcoming streams are scraped for each search term. Use 0 to ignore live content entirely.
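As a sketch, a minimal input covering the search fields above might look like this. The key maxStreams is referenced later in this listing; the other key names are assumptions, so check the actor's input schema for the exact spelling:

```json
{
  "searchTerms": ["Crawlee", "fitness workout"],
  "maxVideos": 50,
  "maxShorts": 0,
  "maxStreams": 10
}
```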
Provide direct YouTube video, channel, playlist, or results page URLs to scrape without using search terms. This is ideal for monitoring specific assets.
Download video subtitles/transcripts when available. When enabled, the actor will try to fetch caption tracks and optionally full transcripts for each scraped video.
When enabled, every downloaded transcript is stored in the default Apify key‑value store under its own key (e.g. "transcript-VIDEO_ID") so you can download large subtitle files separately from the main dataset.
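Based on the key naming described above ("transcript-VIDEO_ID"), retrieving a stored transcript later might look like this sketch. The store ID and token are placeholders, and the helper function is purely illustrative:

```python
def transcript_key(video_id: str) -> str:
    # Key naming per the description above: "transcript-VIDEO_ID".
    return f"transcript-{video_id}"

# Fetching the stored subtitle file from the key-value store might then
# look like this (requires `pip install apify-client` and an API token):
#
# from apify_client import ApifyClient
# client = ApifyClient("MY_APIFY_TOKEN")
# record = client.key_value_store("STORE_ID").get_record(transcript_key("dQw4w9WgXcQ"))

print(transcript_key("dQw4w9WgXcQ"))  # transcript-dQw4w9WgXcQ
```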
Choose the primary language for subtitles/transcripts (e.g. en, es, fr, de). The actor will look for this language first and fall back to available tracks where possible.
If turned on, the actor will prefer auto‑generated subtitles over manually uploaded caption tracks. This can increase coverage for less localized videos at the cost of some accuracy.
Decide how transcripts should look in the output: classic SRT (with timestamps), simple plain text, or structured timestamped JSON that is easy to post‑process programmatically.
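As a post-processing sketch, assuming the timestamped JSON output is a list of segments with start, duration, and text fields (the actual schema may differ; adjust to the real output), converting it to plain text or a minimal SRT could look like:

```python
# Hypothetical segment schema -- field names are assumptions, not the
# actor's documented output format.
segments = [
    {"start": 0.0, "duration": 2.5, "text": "Welcome to the channel"},
    {"start": 2.5, "duration": 3.0, "text": "today we look at scraping"},
]

def to_plain_text(segments):
    """Join timestamped segments into a single plain-text transcript."""
    return " ".join(seg["text"].strip() for seg in segments)

def to_srt(segments):
    """Render segments as a minimal SRT string."""
    def fmt(t):
        # Convert seconds to SRT's HH:MM:SS,mmm timestamp format.
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"
    blocks = []
    for i, seg in enumerate(segments, 1):
        end = seg["start"] + seg["duration"]
        blocks.append(f"{i}\n{fmt(seg['start'])} --> {fmt(end)}\n{seg['text']}")
    return "\n\n".join(blocks)

print(to_plain_text(segments))
```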
Sort the final dataset by relevance (original order), upload date, view count, or rating. Sorting is applied as a post-processing step for reliable results.
Apply YouTube’s built‑in "Upload date" filter: last hour, today, this week, this month, or this year — just like clicking the filter in the YouTube interface.
Filter to standard videos only (excluding Shorts). Select 'video' to keep only long-form videos. The channel, playlist, and movie types apply where supported.
Use YouTube’s length presets to keep only short clips, medium‑length videos, or long‑form content over 20 minutes.
Only include HD videos (720p or higher). The actor inspects YouTube's streaming formats to verify resolution before including the video.
Only include videos that have at least one proper closed‑caption track (not just auto‑generated). Great for accessibility‑critical workflows.
Filter for videos marked by YouTube as Creative Commons licensed. This can help discover content that is more remix‑friendly (always check final license conditions yourself).
Keep only stereoscopic 3D videos that YouTube flags as special 3D content.
Restrict results to live or live‑style content. Combine this with maxStreams to build focused dashboards of live events or streams.
Best-effort filter for purchased/paid content. YouTube rarely exposes this in scraped data, so results may be limited. Use for niche use cases only.
Keep only videos that offer at least one 2160p (4K) stream in their available formats.
Filter results down to immersive 360° videos (spherical / equirectangular projection) that can be explored in all directions.
Only keep videos where YouTube exposes explicit location metadata in the player response (for example city/country information).
Limit the dataset to High Dynamic Range (HDR) videos, detected from color information and HDR‑specific flags in the available formats.
Filter for VR180 immersive content suitable for VR headsets when YouTube marks the video as VR180.
Only include videos published after this date. Pick a date in the calendar (absolute format YYYY-MM-DD). Leave empty to include all dates.
After scraping, optionally sort the final dataset by a chosen field (date, viewCount, or likes) so that the default dataset view is ordered exactly how you like it.
Select the starting proxy setup for this actor. By default it uses no proxy and, if YouTube blocks the traffic, the actor automatically escalates to Apify datacenter proxy and then to residential proxy with up to 3 retries, locking onto residential for the rest of the run.
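If you want to skip the escalation and start on residential proxies directly, a standard Apify proxy configuration object looks like the sketch below (the input field name proxyConfiguration is an assumption; check the actor's input schema):

```json
{
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```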