
YouTube Transcript Scraper

Pricing: from $7.00 / 1,000 transcripts extracted


YouTube transcript API for bulk video-to-text extraction. Export timestamped JSON, SRT, VTT, Markdown, or plain text with metadata. Built for AI, RAG, subtitles, research, and content repurposing.


Rating: 0.0 (0)

Developer: Tugelbay Konabayev (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 14
  • Monthly active users: 7
  • Last modified: 3 days ago


YouTube Transcript API — Bulk SRT, VTT, Markdown & JSON

Try it free — Apify's free plan includes $5 of compute, enough for ~500 video transcripts on this actor. No credit card, no subscription. Bulk URL input via the YouTube transcript API — run one video or a full URL list and keep all results in one dataset. Pay per use after that — $0.01 per successful transcript. Feed it YouTube URLs or video IDs and get structured JSON, SRT, VTT, Markdown, or plain text.

YouTube Transcript Scraper overview: bulk transcript extraction with timestamps and multiple output formats


Extract transcripts from YouTube videos with timestamps, metadata, and multi-format output. Use it as a YouTube transcript API for AI agents, RAG pipelines, subtitle export, content repurposing, research, SEO workflows, and bulk video-to-text jobs.


Extract YouTube Video Transcripts in Bulk

Process one URL or a large batch of YouTube video URLs in a single run. Extract transcripts with timestamps in five output formats.

YouTube Subtitle Downloader — SRT, VTT, Markdown

Download video captions as SRT (for video editors), VTT (for web players), Markdown (for documentation), plain text, or structured JSON.

YouTube Transcript for AI Training Data

Extract video transcripts at scale for LLM fine-tuning, RAG datasets, content analysis, and accessibility compliance.

YouTube Transcript API for ChatGPT, Claude, RAG, and Agents

Turn YouTube videos into structured text that AI systems can actually use:

  • Feed transcripts into ChatGPT, Claude, Gemini, or custom LLM workflows
  • Build RAG datasets from webinars, podcasts, tutorials, interviews, and lectures
  • Store each video as timestamped JSON for search, citations, and retrieval
  • Export Markdown for Notion, docs, blogs, or long-form summaries
  • Use SRT/VTT when the final output is subtitles rather than pure text
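
As an illustrative sketch of the RAG workflow above (the segment fields `text`/`start`/`duration` match this actor's JSON output, but the `chunk_segments` helper and the 60-second window are assumptions for illustration), timestamped segments can be grouped into retrieval-sized chunks that keep a citable start time:

```python
def chunk_segments(segments, window_seconds=60.0):
    """Group transcript segments into ~window_seconds chunks for retrieval.

    Each chunk keeps the start time of its first segment so search results
    can link back to the exact moment in the video.
    """
    chunks, current = [], []
    chunk_start = None
    for seg in segments:
        if chunk_start is None:
            chunk_start = seg["start"]
        if seg["start"] - chunk_start >= window_seconds:
            # Window elapsed: flush the current chunk and start a new one
            chunks.append({"start": chunk_start, "text": " ".join(current)})
            current, chunk_start = [], seg["start"]
        current.append(seg["text"])
    if current:
        chunks.append({"start": chunk_start, "text": " ".join(current)})
    return chunks

segments = [
    {"text": "Welcome", "start": 0.5, "duration": 2.1},
    {"text": "to the workflow", "start": 61.0, "duration": 2.0},
]
print(chunk_segments(segments))
```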

YouTube Video to Text, Captions, and Subtitles

This actor is optimized for high-intent transcript jobs:

  • YouTube transcript API — call from Python, JavaScript, CLI, HTTP, Zapier, Make, n8n, or Apify MCP
  • YouTube subtitle downloader — export SRT or VTT with timecodes
  • YouTube video to text — return clean plain text for summaries and notes
  • YouTube Shorts transcript extractor — supports Shorts URLs when captions are available
  • Bulk YouTube transcript scraper — process URL lists and keep one structured Apify dataset

What Does It Do?

This actor downloads transcripts from YouTube videos and converts them into five different formats:

  1. JSON — segments array with timestamps (start time, duration, text) — ideal for programmatic processing and AI/LLM integration
  2. SRT — SubRip subtitle format — compatible with all video editors and subtitle tools
  3. VTT — WebVTT subtitle format — for web players and modern subtitle systems
  4. Markdown — human-readable with inline timestamps — perfect for documentation and blogs
  5. Plain text — transcript text without timestamps — for simple text-based workflows

Each output includes video metadata: title, channel name, thumbnail URL, language, segment count, and extraction timestamp.

Key advantage: one Apify actor for bulk transcript extraction, subtitle files, timestamped JSON, Markdown, metadata, API access, and clean per-video error rows.


Apify Competitor Snapshot

YouTube transcript extraction is a competitive Apify Store category. The current market includes high-volume actors such as pintostudio/youtube-transcript-scraper, starvibe/youtube-video-transcript, karamelo/youtube-transcripts, topaz_sharingan/youtube-transcript-scraper-1, and newer low-price bulk actors.

This actor is positioned for users who want:

| Need | This actor |
| --- | --- |
| Bulk URL input | Yes |
| JSON with timestamped segments | Yes |
| SRT subtitle export | Yes |
| VTT subtitle export | Yes |
| Markdown with inline timestamps | Yes |
| Plain text transcript | Yes |
| Metadata in the same row | Title, channel, thumbnail |
| Manual + auto-generated captions | Yes |
| Apify API / MCP compatibility | Yes |
| Per-video error rows | Yes |

Features

  • Bulk processing — Handle one video or a large URL list in a single run. No local scripts, no manual loops, one dataset.
  • Five output formats — JSON (programmatic), SRT (video editors), VTT (web players), Markdown (readable docs), plain text (simplicity).
  • Full timestamp precision — Every segment includes start time and duration (in seconds). Perfect for timestamped links and video navigation.
  • Smart language fallback — Request English; get auto-generated captions if manual transcripts are unavailable. Or accept any available language.
  • Video metadata extraction — Title, channel name, thumbnail URL, and video ID — all in one payload. No separate oEmbed API call needed.
  • Transcript detection — Automatically detects whether captions are manual or auto-generated and reports in output.
  • Graceful error handling — Video unavailable, transcripts disabled, no transcript in requested language? Detailed error message per video. Run continues.
  • Proxy-ready — Uses Apify Proxy by default. YouTube blocks cloud IPs; proxy configuration is pre-integrated.
  • Fast enough for batch workflows — No browser rendering or video download. Runtime depends on caption availability, proxy latency, and batch size.
  • Cost-effective — PPE pricing ($0.01 per transcript) means bulk runs scale down your per-video cost.

Input Parameters

Required

| Parameter | Type | Description |
| --- | --- | --- |
| urls | Array of strings | YouTube video URLs or IDs. Accepts standard URLs (https://www.youtube.com/watch?v=dQw4w9WgXcQ), short URLs (https://youtu.be/dQw4w9WgXcQ), Shorts URLs, embed URLs, and raw video IDs (dQw4w9WgXcQ). |

Optional

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| outputFormat | string | json | Output format. Options: json (segments with timestamps), text (plain text, no timestamps), srt (SubRip format), vtt (WebVTT format), markdown (readable with inline timestamps). |
| language | string | en | Language code for transcript (e.g., en, es, fr, ja, zh, de). If not available, falls back to auto-generated or any available language. |
| includeAutoGenerated | boolean | true | If a manual transcript is not available, also try auto-generated captions. |
| includeMetadata | boolean | true | Extract and include video metadata (title, channel, thumbnail, duration). Disabling may speed up processing slightly. |
| maxItems | integer | 10 (max 10,000) | Maximum number of videos to process in this run. Useful for controlling costs on large URL lists. |
| proxyConfiguration | object | { "useApifyProxy": true } | Proxy settings. YouTube blocks cloud IPs. Default uses Apify Proxy. Can override with a custom proxy URL. |

Output Fields

Per-Video Result

| Field | Type | Description |
| --- | --- | --- |
| videoId | string | 11-character YouTube video ID (extracted from URL). |
| videoUrl | string | Full YouTube video URL (https://www.youtube.com/watch?v={videoId}). |
| title | string \| null | Video title (from oEmbed API). null if metadata extraction failed. |
| channel | string \| null | Channel/author name (from oEmbed API). null if metadata extraction failed. |
| thumbnailUrl | string \| null | High-resolution thumbnail URL. null if metadata extraction failed. |
| language | string \| null | Language code of the transcript found (e.g., en, es). null if no transcript available. |
| isAutoGenerated | boolean \| null | true if transcript is auto-generated captions; false if manual captions. null if no transcript available. |
| segmentCount | integer | Number of segments/lines in transcript. 0 on error. |
| segments | array \| null | JSON format only. Array of segment objects: [{ "text": "...", "start": 12.5, "duration": 3.2 }, ...]. Start time and duration in seconds. null for other formats. |
| transcriptText | string | Plain text transcript (segments joined with spaces). Always populated when a transcript is available. |
| transcriptSrt | string \| null | SRT format only. Complete SRT subtitle file (numbered segments with HH:MM:SS,mmm timecodes). null for other formats. |
| transcriptVtt | string \| null | VTT format only. Complete WebVTT subtitle file (HH:MM:SS.mmm format). null for other formats. |
| transcriptMarkdown | string \| null | Markdown format only. Markdown text with inline **[MM:SS]** timestamps before each segment. null for other formats. |
| error | string \| null | Error message if transcript extraction failed. Examples: "No transcript available for video {id}", "Transcripts are disabled for this video", "Video is unavailable or private". null on success. |
| extractedAt | string | ISO 8601 timestamp (UTC) when the transcript was extracted. |

Input Examples

Example 1: Single Video → JSON with Metadata (Simplest)

```json
{
  "urls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"]
}
```

Output: JSON segments with title, channel, thumbnail.

Example 2: Bulk URLs → SRT Subtitles (Multiple Videos)

```json
{
  "urls": [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://youtu.be/jNQXAC9IVRw",
    "LCpyWYAcJRM"
  ],
  "outputFormat": "srt",
  "maxItems": 10
}
```

Output: SRT subtitle files for up to 10 videos. Ready to import into DaVinci Resolve, Premiere, or any video editor.

Example 3: Spanish Transcripts with Auto-Generated Fallback

```json
{
  "urls": [
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://www.youtube.com/watch?v=kJQP7kiw9Fk"
  ],
  "language": "es",
  "includeAutoGenerated": true,
  "outputFormat": "markdown"
}
```

Output: Markdown transcripts in Spanish. If Spanish manual captions not available, tries auto-generated Spanish. Falls back to any available language.

Example 4: Bulk Transcripts → JSON, No Metadata (Fast Mode)

```json
{
  "urls": [
    "https://www.youtube.com/watch?v=video1",
    "https://www.youtube.com/watch?v=video2",
    "https://www.youtube.com/watch?v=video3"
  ],
  "outputFormat": "json",
  "includeMetadata": false,
  "maxItems": 50
}
```

Output: Pure JSON segments (no oEmbed calls). Faster processing, lower latency.

Example 5: Custom Proxy Configuration

```json
{
  "urls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"],
  "proxyConfiguration": {
    "proxyUrls": ["http://proxy.example.com:8080"]
  }
}
```

Output: Uses custom proxy instead of Apify Proxy. Useful for on-premise or private proxy setups.


Example Output

JSON Format (with segments)

```json
{
  "videoId": "dQw4w9WgXcQ",
  "videoUrl": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
  "title": "Example Tutorial Video",
  "channel": "Example Channel",
  "thumbnailUrl": "https://i.ytimg.com/vi/dQw4w9WgXcQ/maxresdefault.jpg",
  "language": "en",
  "isAutoGenerated": false,
  "segmentCount": 61,
  "segments": [
    {
      "text": "Welcome to this tutorial",
      "start": 0.5,
      "duration": 2.1
    },
    {
      "text": "Today we will cover the main workflow",
      "start": 2.6,
      "duration": 2.0
    },
    {
      "text": "Then we will review the results",
      "start": 4.6,
      "duration": 2.8
    }
  ],
  "transcriptText": "Welcome to this tutorial Today we will cover the main workflow Then we will review the results...",
  "extractedAt": "2024-01-15T10:23:45.123456+00:00",
  "error": null
}
```

SRT Format (subtitles)

```
1
00:00:00,500 --> 00:00:02,600
Welcome to this tutorial

2
00:00:02,600 --> 00:00:04,600
Today we will cover the main workflow

3
00:00:04,600 --> 00:00:07,400
Then we will review the results
```

Markdown Format (with timestamps)

```markdown
**[00:00]** Welcome to this tutorial
**[00:02]** Today we will cover the main workflow
**[00:04]** Then we will review the results
```
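
A minimal sketch of how Markdown like this can be produced from JSON segments; the `to_markdown` helper is hypothetical, but the **[MM:SS]** layout matches the example above:

```python
def to_markdown(segments):
    """Render segments as Markdown lines with inline **[MM:SS]** timestamps."""
    lines = []
    for seg in segments:
        minutes, seconds = divmod(int(seg["start"]), 60)
        lines.append(f"**[{minutes:02d}:{seconds:02d}]** {seg['text']}")
    return "\n".join(lines)

segments = [
    {"text": "Welcome to this tutorial", "start": 0.5},
    {"text": "Today we will cover the main workflow", "start": 2.6},
]
print(to_markdown(segments))
```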

Error Case

```json
{
  "videoId": "invalidID12",
  "videoUrl": "https://www.youtube.com/watch?v=invalidID12",
  "title": null,
  "channel": null,
  "thumbnailUrl": null,
  "language": null,
  "isAutoGenerated": null,
  "segmentCount": 0,
  "segments": null,
  "transcriptText": null,
  "error": "Video is unavailable or private",
  "extractedAt": "2024-01-15T10:23:50.234567+00:00"
}
```

Integrations

Python SDK

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Run the actor
run = client.actor("tugelbay/youtube-transcript").call(
    run_input={
        "urls": [
            "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
            "https://www.youtube.com/watch?v=jNQXAC9IVRw"
        ],
        "outputFormat": "json",
        "language": "en"
    }
)

# Get the dataset items
dataset_items = client.dataset(run["defaultDatasetId"]).list_items().items
for item in dataset_items:
    print(f"Title: {item['title']}")
    print(f"Segments: {item['segmentCount']}")
    print(f"Text: {item['transcriptText'][:100]}...")
```

JavaScript/Node.js SDK

```javascript
const { ApifyClient } = require("apify-client");

const client = new ApifyClient({ token: "YOUR_APIFY_TOKEN" });

// Run the actor
const run = await client.actor("tugelbay/youtube-transcript").call({
    urls: [
        "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
        "https://www.youtube.com/watch?v=jNQXAC9IVRw",
    ],
    outputFormat: "json",
    language: "en",
});

// Get the dataset items
const datasetItems = await client.dataset(run.defaultDatasetId).listItems();
datasetItems.items.forEach((item) => {
    console.log(`Title: ${item.title}`);
    console.log(`Segments: ${item.segmentCount}`);
    console.log(`Text: ${item.transcriptText.substring(0, 100)}...`);
});
```

LangChain Integration (LLM + Transcripts)

```python
from langchain.schema import Document
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from apify_client import ApifyClient

# Get transcripts via Apify
client = ApifyClient("YOUR_APIFY_TOKEN")
run = client.actor("tugelbay/youtube-transcript").call(
    run_input={
        "urls": ["https://www.youtube.com/watch?v=dQw4w9WgXcQ"],
        "outputFormat": "json"
    }
)

# Convert to LangChain documents
documents = []
for item in client.dataset(run["defaultDatasetId"]).list_items().items:
    doc = Document(
        page_content=item["transcriptText"],
        metadata={
            "source": item["videoUrl"],
            "title": item["title"],
            "channel": item["channel"],
            "language": item["language"]
        }
    )
    documents.append(doc)

# Create a vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)

# Query transcripts with an LLM workflow
results = vectorstore.similarity_search("main topics discussed", k=3)
for doc in results:
    print(f"From: {doc.metadata['title']}")
    print(f"Content: {doc.page_content[:200]}...")
```

MCP (Model Context Protocol) for Claude / LLM Agents

```json
{
  "name": "apify_youtube_transcript",
  "description": "Extract transcripts from YouTube videos via Apify",
  "url": "https://api.apify.com/v2/actor-tasks/{TASK_ID}/runs",
  "params": {
    "urls": "array of YouTube URLs",
    "outputFormat": "json|srt|vtt|markdown|text",
    "language": "language code",
    "maxItems": "max videos to process"
  }
}
```

Export to File

Export as JSONL (one video per line):

```shell
# After running the actor, export the dataset as JSONL
curl "https://api.apify.com/v2/datasets/{DATASET_ID}/items?format=jsonl" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  > transcripts.jsonl
```
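
Once exported, the JSONL can be post-processed locally. A sketch (the `jsonl_to_text_files` helper is hypothetical; the field names `videoId`, `transcriptText`, and `error` follow the output schema):

```python
import json
from pathlib import Path

def jsonl_to_text_files(jsonl_path, out_dir="transcripts"):
    """Write one .txt file per successful video row in an exported JSONL."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            if item.get("error"):  # skip per-video error rows
                continue
            path = out / f"{item['videoId']}.txt"
            path.write_text(item["transcriptText"] or "", encoding="utf-8")
            written.append(path)
    return written

# Usage (assumes transcripts.jsonl from the curl command above):
# jsonl_to_text_files("transcripts.jsonl")
```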

Export as CSV:

```shell
curl "https://api.apify.com/v2/datasets/{DATASET_ID}/items?format=csv" \
  -H "Authorization: Bearer YOUR_APIFY_TOKEN" \
  > transcripts.csv
```

Export as ZIP (all formats):

```shell
# Use the Apify CLI
apify dataset download {DATASET_ID}
```

Use Cases

  1. Content Creator Archiving — Extract transcripts from your own YouTube videos for documentation, blog posts, and searchable archives. Bulk process URL lists from a channel in one run.

  2. Research & Literature Review — Transcribe educational videos, conference talks, and webinars. Convert to plain text for NLP analysis, topic modeling, or citation tracking.

  3. SEO & Content Repurposing — Convert video transcripts to blog posts, articles, and social media snippets. Bulk processing means you can refresh your content library in hours instead of weeks.

  4. Accessibility & Subtitle Creation — Generate SRT/VTT subtitles for your video library. For creators with no manual captions, auto-generated fallback ensures every video has a transcript.

  5. Video Search & Indexing — Index YouTube transcripts full-text for internal video search. Extract metadata (title, channel, thumbnail) and segment timestamps for clickable search results.

  6. LLM Fine-Tuning & Training Data — Use video transcripts as training data for AI models. Timestamps allow you to correlate text with video segments for multimodal training.

  7. Podcast & Audio Content Analysis — Transcribe YouTube uploads of podcasts, interviews, and audio documentaries. Markdown format with timestamps works as a readable episode guide.

  8. Educational Curriculum Building — Compile transcripts from course videos. Organize by topic, language, or creator. Convert to Markdown for e-books or learning materials.

  9. Market Research & Competitor Analysis — Extract competitors' video content. Monitor what's being discussed, analyze sentiment, track topic trends.

  10. Subtitling for Non-English Speakers — Request Spanish, French, German, or any language. Auto-generated fallback ensures coverage even for videos with limited captions.


Cost Estimation

YouTube Transcript Scraper uses Pay-Per-Event (PPE) pricing: $0.01 per transcript extracted.

Pricing Examples

| Scenario | Videos | Cost | Notes |
| --- | --- | --- | --- |
| Single video | 1 | $0.01 | Minimal cost for testing |
| Small batch | 10 | $0.10 | Daily content review |
| Medium batch | 100 | $1.00 | Weekly channel archive |
| Large batch | 1,000 | $10.00 | Monthly bulk project |
| Bulk processing | 10,000 | $100.00 | Entire channel or research dataset |
| Failed videos | Any | See run output | Error rows explain unavailable videos, disabled transcripts, or language misses |

Cost Breakdown

  • Transcript extraction: $0.01 per video
  • Metadata (oEmbed): Included in PPE (no additional cost)
  • Proxy usage: Included in PPE (Apify Proxy overhead absorbed)
  • Format conversion: Included in PPE (JSON, SRT, VTT, Markdown all same price)
  • Failed videos: Returned as error rows so you can inspect unavailable videos or disabled transcripts

When this is the right fit

Use this actor when you need transcript extraction as an API or dataset workflow rather than a manual web app:

  • Bulk URL lists
  • Repeatable Apify tasks and schedules
  • JSON/CSV/JSONL export
  • SRT/VTT subtitle files
  • Markdown for summaries and documentation
  • Downstream automation through Apify API, MCP, Make, Zapier, n8n, Google Sheets, or your own backend

FAQ

Q: Do I need a proxy?

A: Yes. YouTube detects and blocks cloud hosting IPs (where Apify runs). The actor uses Apify Proxy by default. If you disable it, you'll get 403 errors. Custom proxies are supported via the proxyConfiguration parameter.

Q: What if a video doesn't have a transcript?

A: The result includes an error field explaining why: "Video is unavailable or private", "Transcripts are disabled for this video", or "No transcript in requested language". The run continues and keeps detailed error info per video.

Q: How many videos can I process in one run?

A: Up to 10,000 videos per run (configurable via maxItems). For operational reliability, recommended batches are 500–1,000 videos when processing very large lists.

Q: Can I get transcripts in multiple languages?

A: Not in a single run. Run the actor once per language. For example, to get both English and Spanish transcripts, run with language: "en" once, then language: "es" on the same URLs. Both results will be in your dataset (use filters to separate them).

Q: What timestamp format does it use?

A: JSON/Markdown: Seconds as decimal (e.g., 12.5 = 12.5 seconds). SRT: HH:MM:SS,mmm (e.g., 00:00:12,500). VTT: HH:MM:SS.mmm (e.g., 00:00:12.500). All formats preserve full millisecond precision, so subtitles stay frame-accurate.
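
A sketch of converting between these representations (the helper names are illustrative; the timecode layouts follow the answer above):

```python
def to_srt_timecode(seconds):
    """Convert a start time in seconds to SRT's HH:MM:SS,mmm format."""
    ms = round(seconds * 1000)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def to_vtt_timecode(seconds):
    """WebVTT uses the same layout with a dot before the milliseconds."""
    return to_srt_timecode(seconds).replace(",", ".")

print(to_srt_timecode(12.5))   # 00:00:12,500
print(to_vtt_timecode(12.5))   # 00:00:12.500
```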

Q: Does it handle YouTube Shorts?

A: Yes. Shorts with captions/transcripts are supported. Just pass the Shorts URL (e.g., https://www.youtube.com/shorts/dQw4w9WgXcQ). Note: Most Shorts don't have manual captions, so includeAutoGenerated: true is recommended.

Q: Can I use this with LangChain or other AI frameworks?

A: Yes. Use the Apify SDK or REST API to fetch transcripts, convert them to LangChain Document objects, and feed into vector stores, LLMs, or RAG pipelines. See the Integrations section for example code.

Q: What's the difference between "auto-generated" and "manual" captions?

A: Manual: Creator or translator wrote captions, usually more accurate. Auto-generated: YouTube's speech-to-text algorithm, may have errors but covers almost all videos. The isAutoGenerated field tells you which you got. Set includeAutoGenerated: false if you want manual captions only (may result in "no transcript" errors).

Q: Can I filter or transform the output?

A: The actor outputs raw results to the dataset. Use Apify's Data Extraction or post-process with a downstream actor. Or download the dataset (JSON/CSV/JSONL) and transform locally. Example: filter for videos >1,000 segments, extract only transcriptText, convert to Markdown.
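
A local sketch of the filtering example from this answer (the `long_transcripts` helper and the 1,000-segment threshold are illustrative; field names follow the output schema):

```python
def long_transcripts(items, min_segments=1000):
    """Keep only successful rows with more than min_segments segments,
    reduced to video ID and plain transcript text."""
    return [
        {"videoId": it["videoId"], "text": it["transcriptText"]}
        for it in items
        if it.get("segmentCount", 0) > min_segments and not it.get("error")
    ]

items = [
    {"videoId": "a", "segmentCount": 1500, "transcriptText": "long talk", "error": None},
    {"videoId": "b", "segmentCount": 40, "transcriptText": "short clip", "error": None},
]
print(long_transcripts(items))  # [{'videoId': 'a', 'text': 'long talk'}]
```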

Q: How long does it take to process a batch?

A: Runtime depends on caption availability, proxy latency, metadata fetching, and batch size. For faster runs, keep includeMetadata enabled only when you need title/channel/thumbnail and split very large lists into 500–1,000 video batches.


Troubleshooting

Issue: "403 Forbidden" or "Video unavailable"

Cause: YouTube is blocking your request. Usually a cloud IP issue.

Solution:

  1. Ensure proxyConfiguration is enabled (default: Apify Proxy).
  2. Check your Apify account has available proxy credits.
  3. Verify the video URL is public (not private/unlisted).
  4. Try a different proxy or contact Apify support.

Issue: "No transcript available for video {id}"

Cause: Video has no captions (manual or auto-generated) in the requested language.

Solution:

  1. Check the video on YouTube manually — does it have captions?
  2. If yes but in a different language, set language to that language code.
  3. If no captions exist, this video can't be transcribed (no workaround).
  4. Ensure includeAutoGenerated: true (default) to use auto-generated as fallback.

Issue: "Transcripts are disabled for this video"

Cause: The video creator has explicitly disabled transcripts for this video.

Solution: None. Creator must enable transcripts in YouTube Studio. You cannot transcribe disabled videos.

Issue: "Request timeout" or "Connection reset"

Cause: Proxy or network latency. Rare but possible with very large batches or slow proxies.

Solution:

  1. Reduce maxItems and rerun (e.g., 500 instead of 5,000).
  2. Try again; transient network errors usually resolve on retry.
  3. Check Apify's proxy status page.
  4. Use custom proxy if available.

Issue: Language fallback gave me wrong language

Cause: Requested language not found; actor fell back to available language.

Explanation: If you request language: "fr" but the video only has English and Spanish captions, you'll get the first available language instead. Set includeAutoGenerated: false if you want the run to fail cleanly for that video rather than fall back.

Solution: Check the language field in the result. If it doesn't match your request, manually re-request with explicit language or skip that video.


Limitations

  1. Requires Proxy — YouTube blocks cloud IPs. All runs require a proxy (Apify Proxy or custom). Cost is absorbed in the PPE price.

  2. Manual Captions Only (Optional) — If you set includeAutoGenerated: false, videos without manual captions will fail. Roughly 70% of YouTube videos rely on auto-generated captions.

  3. No Multilingual Output — Can't extract English and Spanish in one run. Must run twice (once per language). Results go to the same dataset; use filters to separate.

  4. oEmbed Metadata Limitations — Title, channel, and thumbnail come from YouTube's oEmbed API, not direct video pages. Occasionally missing or outdated. Disable with includeMetadata: false to speed up.

  5. Rate Limiting — YouTube and Apify Proxy both rate-limit. Very large batches (>10k) may hit limits. Recommended: split into 1k–2k batches if processing 100k+ videos.

  6. No Video Download — This actor extracts transcripts only, not video audio or metadata like resolution, frame rate, or duration. Use YouTube-DL actors for that.

  7. No Translation — Transcripts are in the video's original language. Can't translate on the fly. Use Google Translate API as a downstream step if needed.

  8. Segment Duration Estimates — Segment duration is calculated from the next segment's start time. Last segment duration may be imprecise.
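
A sketch of that estimation (the `estimate_durations` helper and the 2-second fallback for the final segment are assumptions):

```python
def estimate_durations(starts, last_duration=2.0):
    """Derive segment durations from consecutive start times (in seconds).

    Each duration is the gap to the next segment's start; the last segment
    has no successor, so a fallback guess is used, which is why its
    duration may be imprecise.
    """
    durations = [round(b - a, 3) for a, b in zip(starts, starts[1:])]
    durations.append(last_duration)  # no next segment: fall back to a guess
    return durations

print(estimate_durations([0.5, 2.6, 4.6]))  # [2.1, 2.0, 2.0]
```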


Changelog

v1.2.0 (Latest)

  • Added: Support for YouTube Shorts URLs
  • Improved: Metadata extraction now handles edge cases (private videos, deleted channels)
  • Fixed: SRT timestamp formatting for videos >1 hour
  • Positioning: README now focuses on API, AI/RAG, SRT/VTT, Markdown, and bulk dataset workflows

v1.1.5

  • Added: Markdown output format with inline timestamps
  • Added: includeMetadata toggle to skip oEmbed API calls for faster processing
  • Fixed: Language fallback now respects includeAutoGenerated flag
  • Fixed: Error handling for videos with no segments

v1.1.0

  • Added: VTT subtitle format output
  • Added: Automatic fallback to auto-generated captions
  • Improved: Error messages now include video ID and language
  • Changed: Default maxItems reduced to 10 for safer first runs

v1.0.5

  • Fixed: Proxy configuration parsing for custom proxies
  • Fixed: Timestamp precision for segments <1 second
  • Improved: Logging now shows segment count per video

v1.0.0 (Initial Release)

  • Bulk YouTube transcript extraction
  • JSON and SRT output formats
  • Language selection with fallback
  • Video metadata (title, channel, thumbnail)
  • Apify Proxy integration
  • PPE pricing ($0.01/transcript)

Support & Documentation


Questions? Issues? Feedback? Post on the Apify actor discussion page or contact the developer directly.

See all actors: apify.com/tugelbay