
YouTube Video Scraper

Pricing

from $5.00 / 1,000 video scrapes

Scrape any YouTube link to get full metadata, transcript with timestamps, comments with replies, and channel data - all in one call. HTTP-only.


Rating: 0.0 (0 ratings)

Developer: Tubelens (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 3
  • Monthly active users: 1
  • Last modified: 7 days ago

What does YouTube Video Scraper do?

YouTube Video Scraper extracts metadata, transcript, comments, and channel data from any YouTube URL in one call. It runs on Apify, hits YouTube's public InnerTube API directly (no headless browser), and returns JSON, CSV, or Excel. Accepts 5 URL formats: watch, youtu.be, Shorts, embed, and bare 11-character video ID.

  • One actor returns all four data types in a single dataset record. No stitching together separate scrapers.
  • Toggle off transcript, comments, or channel to cut cost. You only pay for what you enable.
  • HTTP-only. 3 to 5 seconds per video bundle, roughly 60 videos per minute on default memory.
  • Unlisted, age-gated, and unplayable videos return an error field and aren't billed.
  • Datacenter proxies by default, auto-escalates to residential after 3 consecutive 429s mid-run.
  • Works with the Apify API, Python and Node SDKs, and MCP clients including Claude Desktop and Cursor.

Quick start

Run from the terminal with curl:

curl -X POST "https://api.apify.com/v2/acts/tubelens~youtube-video-scraper/run-sync-get-dataset-items?token=APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"videoUrls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"]}'

Run from Python (pip install apify-client):

from apify_client import ApifyClient

client = ApifyClient("APIFY_TOKEN")
run = client.actor("tubelens/youtube-video-scraper").call(run_input={
    "videoUrls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"],
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], item["viewCount"])

Run from Node (npm i apify-client):

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'APIFY_TOKEN' });
const run = await client.actor('tubelens/youtube-video-scraper').call({
    videoUrls: ['https://www.youtube.com/watch?v=dQw4w9WgXcQ'],
});
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0].title, items[0].viewCount);

What data can you extract from a YouTube video?

Field | Type | Description
videoId, url, title, description | string | Core identifiers and text
viewCount, likeCount, commentCount | number | Public engagement counters
durationSeconds, publishedAt, uploadDate | number, ISO date | Duration and publish timestamps
category, tags, thumbnails | string, string[], object[] | YouTube category, creator tags, full-resolution thumbnail URLs
isLive, isAgeRestricted, isPrivate | boolean | Video status flags
transcript.text, transcript.segments | string, object[] | Full transcript plus per-segment startMs, durationMs, text
transcript.language, transcript.isAutoGenerated | string, boolean | Caption language and auto vs manual captions
comments[].text, comments[].author, comments[].likeCount | string, string, number | Comment body, author handle, like count
comments[].replies[] | object[] | Nested replies with the same fields as top-level comments
channel.name, channel.handle, channel.subscriberCount | string, string, number | Channel name, @handle, subscriber count
channel.country, channel.joinedDate, channel.totalViews | string, string, number | Country, join date, lifetime views
channel.externalLinks[] | object[] | Creator's external URLs, resolved from YouTube redirect wrappers

Extract YouTube transcripts with timestamps

Set includeTranscript: true to return the full transcript plus per-segment timestamps in milliseconds. Each segment has startMs, durationMs, and text, which makes the output ready for video chaptering, subtitle rebuilds, LLM training datasets, or full-text search inside video content.

The scraper first tries your transcriptLanguage (for example en, es, pt), then auto-generated English, then the first available track. Both manual and auto-generated captions are supported. Videos with captions disabled, live streams mid-broadcast, and unaired premieres return transcript: null at no charge. Around 10 to 15% of random videos have no available transcript.

Transcript is a separate billable event at $0.005 per video, so disable it on runs that only need metadata.
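Because each segment carries startMs and durationMs, rebuilding subtitles is a few lines of post-processing. A sketch assuming the transcript.segments shape shown in the output example below; segments_to_srt and ms_to_srt are our own helpers, not actor output fields:

```python
def ms_to_srt(ms: int) -> str:
    """Format milliseconds as an SRT timestamp (HH:MM:SS,mmm)."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, milli = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{milli:03d}"

def segments_to_srt(segments: list[dict]) -> str:
    """Rebuild an .srt subtitle file from transcript.segments."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        start = ms_to_srt(seg["startMs"])
        end = ms_to_srt(seg["startMs"] + seg["durationMs"])
        blocks.append(f"{i}\n{start} --> {end}\n{seg['text']}")
    return "\n\n".join(blocks)
```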

Scrape YouTube comments and replies

Set includeComments: true to pull top-level comments and nested replies. Control volume with maxCommentsPerVideo (default 100, no hard ceiling) and order with commentSort: "top" or commentSort: "newest".

Each comment returns text, author, authorHandle, authorChannelId, likeCount, publishedTimeText, replyCount, isPinned, isHearted, isCreator, plus a replies[] array with the same shape. No YouTube Data API quota applies, so you can pull unlimited comments per video within your Apify compute budget.

Comments are billed at $0.0005 each, including replies. A video with 100 comments costs $0.05 in comment fees.
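Since replies bill the same as top-level comments, it helps to estimate comment fees from the nested replies[] arrays before a large run. A sketch assuming the documented comment shape; count_billable_comments and comment_fee are our own helpers:

```python
COMMENT_PRICE = 0.0005  # per comment or reply, per the pricing table

def count_billable_comments(comments: list[dict]) -> int:
    """Count top-level comments plus their nested replies."""
    return sum(1 + len(c.get("replies", [])) for c in comments)

def comment_fee(comments: list[dict]) -> float:
    return count_billable_comments(comments) * COMMENT_PRICE
```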

Get YouTube channel data

Set includeChannel: true to attach channel metadata to every video record. Returned fields: id, name, handle, url, subscriberCount, videoCount, description, country, joinedDate, totalViews, keywords, and an externalLinks[] array with title and the real URL (resolved from YouTube's redirect wrapper).

Country, join date, total views, and external links come from the channel's /about page. Subscriber count and video count come from the channel header. Channel is a separate billable event at $0.003, charged once per video where the channel block is returned.

How to scrape YouTube videos, transcripts, comments, and channel data

  1. Create a free Apify account. Takes 30 seconds, no card needed.
  2. Open YouTube Video Scraper in the Apify Console.
  3. Paste the video URLs you want to scrape into the input. Any format, one per line.
  4. Toggle off any part you don't need (transcript, comments, channel) to cut cost.
  5. Click Start. Runs finish in 3 to 5 seconds per video bundle.
  6. Export as JSON, CSV, or Excel, or fetch the dataset via the Apify API.

How much does YouTube Video Scraper cost?

Pricing is pay-per-event, so you're billed for the pieces you actually pulled.

Event | Price
Video (metadata) | $0.005
Transcript | $0.005
Channel | $0.003
Comment or reply | $0.0005

The base rate is $5 per 1,000 videos when you want metadata only. The Apify Free plan gives you $5 in monthly credits, which covers about 1,000 metadata-only scrapes per month, or roughly 79 full bundles with 100 comments each. The $29/month Starter plan covers about 5,800 metadata-only scrapes or 460 full bundles per month.

Sample run costs:

  • Metadata only: $0.005/video
  • Metadata + channel: $0.008/video
  • Metadata + transcript + channel: $0.013/video
  • Full bundle with 100 comments: $0.063/video
  • Full bundle with 500 comments: $0.263/video
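These sample costs follow directly from the event table. A small estimator mirroring the published rates; the function and PRICES constant are illustrative, not an official calculator, and the actor's billing is authoritative:

```python
PRICES = {"video": 0.005, "transcript": 0.005, "channel": 0.003, "comment": 0.0005}

def estimate_cost_per_video(transcript=False, channel=False, comments=0) -> float:
    """Sum the per-event prices for one video under the given toggles."""
    cost = PRICES["video"]  # metadata is always billed
    if transcript:
        cost += PRICES["transcript"]
    if channel:
        cost += PRICES["channel"]
    cost += comments * PRICES["comment"]
    return round(cost, 4)
```

Multiplying by the URL count gives a pre-run budget, e.g. 1,000 full bundles with 100 comments each is 1,000 × $0.063 = $63.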

Private, deleted, or members-only videos return an error field and aren't billed. No subscription lock-in.

YouTube Video Scraper vs YouTube Data API

The official YouTube Data API v3 is free but capped at 10,000 quota units per day and has no transcript endpoint. YouTube Video Scraper has no quota and returns transcripts, comments, and channel data in one call.

Feature | YouTube Data API v3 | YouTube Video Scraper
Transcripts with timestamps | Not available | Returned, with startMs per segment
Comments | Up to 100 per call, paginated | Unlimited, maxCommentsPerVideo configurable
Comment replies | Separate endpoint, extra quota | Nested replies[] in same response
Channel metadata | Separate call | Included per video
Daily quota | 10,000 units (~100 full video fetches) | None
Auth | OAuth or API key required | Apify token, no Google account needed
Cost for 1,000 full bundles | Requires multiple projects to lift quota | $13 (metadata + transcript + channel)
Rate limiting | 429 on quota exhaustion | Auto proxy escalation, automatic backoff

Input

{
  "videoUrls": [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://youtu.be/jNQXAC9IVRw",
    "https://www.youtube.com/shorts/abc123xyz45"
  ],
  "includeMetadata": true,
  "includeChannel": true,
  "includeTranscript": true,
  "transcriptLanguage": "en",
  "includeComments": true,
  "maxCommentsPerVideo": 100,
  "commentSort": "top"
}

Output

One dataset record per video with four blocks: top-level video metadata, a channel object, a transcript object, and a comments array. Each block only appears when its corresponding include* flag is true.

Example record:

{
  "videoId": "dQw4w9WgXcQ",
  "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "title": "Rick Astley - Never Gonna Give You Up (Official Video) (4K Remaster)",
  "description": "The official video for 'Never Gonna Give You Up' by Rick Astley...",
  "publishedAt": "2009-10-24T23:57:33-07:00",
  "uploadDate": "2009-10-25",
  "durationSeconds": 213,
  "viewCount": 1764677070,
  "likeCount": 18951890,
  "category": "Music",
  "tags": ["rick astley", "never gonna give you up", "official video"],
  "thumbnails": [
    {"url": "https://i.ytimg.com/vi/dQw4w9WgXcQ/maxresdefault.jpg", "width": 1280, "height": 720}
  ],
  "isLive": false,
  "isAgeRestricted": false,
  "isPrivate": false,
  "channel": {
    "id": "UCuAXFkgsw1L7xaCfnd5JJOw",
    "name": "Rick Astley",
    "handle": "@RickAstleyYT",
    "url": "https://www.youtube.com/@RickAstleyYT",
    "subscriberCount": 4480000,
    "subscribers": "4.48M subscribers",
    "videoCount": 408,
    "description": "Official YouTube home of Rick Astley.",
    "country": "United Kingdom",
    "joinedDate": "Joined Feb 1, 2015",
    "totalViews": 2459513931,
    "keywords": "rick astley, 80s, pop",
    "externalLinks": [
      {"title": "2026 Reflection Tour", "url": "https://lnk.to/RickAstley2026"},
      {"title": "Website", "url": "https://www.rickastley.co.uk/"}
    ]
  },
  "transcript": {
    "language": "English",
    "isAutoGenerated": false,
    "text": "We're no strangers to love. You know the rules and so do I...",
    "segments": [
      {"startMs": 18640, "durationMs": 4000, "text": "We're no strangers to love"},
      {"startMs": 22640, "durationMs": 4000, "text": "You know the rules and so do I"}
    ]
  },
  "comments": [
    {
      "id": "UgxXXX",
      "text": "can confirm: he never gave us up",
      "author": "@YouTube",
      "authorHandle": "@YouTube",
      "authorChannelId": "UCvC4D8onUfXzvjTOM-dBfEA",
      "likeCount": 227,
      "publishedTimeText": "2 weeks ago",
      "replyCount": 962,
      "isPinned": false,
      "isHearted": true,
      "isCreator": false,
      "replies": []
    }
  ],
  "commentCount": 100,
  "scrapedAt": "2026-04-22T03:45:00.000Z",
  "error": null
}

Other scrapers you might like

  • Apple Podcast transcript and episode scraper
  • Spotify podcast and track scraper
  • Threads profile and post scraper
  • Google Play app and review scraper
  • Telegram channel and message scraper
  • SimilarWeb traffic and competitor scraper

Frequently asked questions

Is it legal to scrape YouTube?

Scraping public data is generally allowed in the US and most of the EU, as long as you don't collect personal data that falls under GDPR or CCPA without a lawful basis. This actor touches public watch pages and YouTube's public InnerTube endpoints, the same ones a signed-out browser hits. You're responsible for how you use the output.

Apify published a detailed breakdown: Is web scraping legal?

Does YouTube Video Scraper replace the YouTube Data API?

For most scraping use cases, yes. The official YouTube Data API v3 does not return transcripts, caps comments at 100 per call, and applies a 10,000-unit daily quota that runs out fast. YouTube Video Scraper returns transcripts with millisecond timestamps, unlimited comments with nested replies, and channel metadata in one call, with no quota and no Google OAuth.

Can I scrape YouTube transcripts without an API key?

Yes. You do not need a YouTube API key or a Google account. Pass one or more video URLs with includeTranscript: true and the scraper returns the transcript text plus per-segment startMs, durationMs, and text. Around 10 to 15% of videos have captions disabled by the creator, in which case transcript: null is returned at no charge.

How do I bulk-extract YouTube channel data?

Pass a list of video URLs from the channel (or videos that reference the channel) with includeChannel: true. Each record attaches subscriber count, video count, country, join date, total views, and external links. For pulling every video from a channel by handle or channel ID, use a dedicated YouTube channel scraper or feed the output of YouTube search into this actor.

What is the YouTube comments limit per video?

There is no hard comments limit. maxCommentsPerVideo defaults to 100 and accepts any positive integer. The scraper paginates YouTube's internal comments endpoint and returns top-level comments plus nested replies in the same response. Pulling 10,000 comments per video is supported if your Apify memory and run time allow it.

How much does YouTube Video Scraper cost?

Pay-per-event at $0.005 per video (metadata), $0.005 per transcript, $0.003 per channel, and $0.0005 per comment or reply. The Apify Free plan's $5 monthly credit covers about 1,000 metadata-only scrapes or 79 full bundles with 100 comments each. No subscription required.

Can I integrate YouTube Video Scraper with other tools?

Results push to Make, Zapier, Slack, Airbyte, GitHub, Google Sheets, and Google Drive. The Apify platform treats every actor as a webhook source, so any webhook or API consumer can read the output.

See Apify integrations for the full list.

Can I use YouTube Video Scraper with the Apify API?

Yes. Every run is available via the Apify REST API. A minimal start-run call looks like this:

curl -X POST "https://api.apify.com/v2/acts/tubelens~youtube-video-scraper/runs?token=APIFY_TOKEN" \
-H "Content-Type: application/json" \
-d '{"videoUrls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"]}'

Full docs: Apify API reference.

Can I use YouTube Video Scraper through an MCP Server?

Yes. Apify ships an MCP server that exposes every actor as a tool, so Claude Desktop, Cursor, and any other MCP-capable client can call YouTube Video Scraper directly. Setup: Apify MCP docs.

Why does my video sometimes come back with no transcript?

Roughly 10 to 15% of videos have captions disabled by the creator or no captions in any language. When that happens, transcript returns null and no transcript charge is applied. The scraper first tries your preferred language (via transcriptLanguage), then auto-generated English, then the first available track. Live streams and unaired premieres also return no transcript, since captions appear only after the stream ends.

Can I scrape Shorts, live streams, and premieres?

Yes for Shorts. A Shorts URL works the same as a normal watch URL and returns the same fields. Live streams return full metadata but usually have no transcript until the broadcast ends. Scheduled premieres that haven't aired yet return partial metadata with the isLive flag set so you can filter them out.

How many videos can I scrape per run?

No hard cap. Throughput is around 60 videos per minute on default memory, so a 3,000-URL run finishes in roughly 50 to 60 minutes. Videos process in parallel with built-in rate-limit backoff. If a run times out, restart with the remaining URLs.
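If a run does time out, you can diff the dataset against your input list to build the restart batch. A sketch assuming each URL embeds its 11-character video ID; the remaining_urls helper is our own, not part of the actor:

```python
def remaining_urls(all_urls: list[str], dataset_items: list[dict]) -> list[str]:
    """Return input URLs whose videoId has not yet appeared in the
    dataset, so a timed-out run can be restarted with only the
    unscraped URLs."""
    done = {item["videoId"] for item in dataset_items if item.get("videoId")}
    return [url for url in all_urls if not any(vid in url for vid in done)]
```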

Your feedback

Ran into a bug or want a new field? Drop a note in the Issues tab. Reports are triaged within 7 days.

Last updated: April 2026