
YouTube Channel Search Scraper

Pricing

from $4.99 / 1,000 results

Search videos inside a YouTube channel by keyword and export matching items with metadata and timestamps.

Rating: 0.0 (0)

Developer: PowerAI (Maintained by Community)

Actor stats

  • Bookmarked: 1
  • Total users: 3
  • Monthly active users: 2
  • Last modified: 9 days ago

Search videos within a specific YouTube channel and export one row per matching item.

Key Features

  • Requires a channel ID (id) and a search query (q)
  • Collects up to your chosen maximum number of matching items, following cursor pagination until the limit is reached or no more pages remain
  • Typical fields include type, video.videoId, video.title, video.author, video.descriptionSnippet, video.thumbnails, video.movingThumbnails, video.lengthSeconds, video.publishedTimeText, and video.stats
  • Supports optional localization hints: hl (language) and gl (country)
  • Appends a scrapedAt timestamp (ISO 8601) to each record

Why Use It?

  • Practical: Quickly find channel videos on a specific topic
  • Structured: Clean JSON rows for analysis, filtering, and automation
  • Efficient: Cursor pagination until cap or feed end
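The cursor-paging behavior described above can be sketched as a short loop. Note that fetch_page below is a hypothetical stand-in for the actor's internal page request, not part of its public interface:

```python
def collect_items(fetch_page, max_results=48):
    """Collect up to max_results items by following cursor pagination.

    fetch_page is a hypothetical callable: given a cursor (None for the
    first page), it returns (items, next_cursor); next_cursor is None
    once the feed has no more pages.
    """
    items, cursor = [], None
    while len(items) < max_results:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:  # feed exhausted before reaching the cap
            break
    return items[:max_results]
```

This mirrors the documented stop conditions: the loop ends either at the maxResults cap or when the feed runs out of pages, whichever comes first.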

Great For

  • Topic-focused channel research
  • Building filtered datasets from creator archives
  • Tracking coverage of keywords by a channel

Input Parameters

Parameter   Required  Description
id          Yes       YouTube channel ID.
q           Yes       Keyword to search within the channel.
maxResults  No        Maximum number of rows to collect (default: 48).
hl          No        Language hint, e.g. en.
gl          No        Country hint, e.g. US.
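A run input matching these parameters might look like the dictionary below; the build_run_input helper is illustrative, not part of the actor, and only applies the documented default for maxResults:

```python
def build_run_input(channel_id, query, max_results=None, hl=None, gl=None):
    """Assemble the actor's run input from the documented parameters.

    Only id and q are required; maxResults defaults to 48 per the table,
    and hl/gl are optional localization hints.
    """
    run_input = {
        "id": channel_id,
        "q": query,
        "maxResults": max_results if max_results is not None else 48,
    }
    if hl:
        run_input["hl"] = hl
    if gl:
        run_input["gl"] = gl
    return run_input

example = build_run_input("UC_x5XG1OV2P6uZZ5FSM9Ttw", "javascript", hl="en", gl="US")
```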

Output

Each dataset row is one item from contents[], plus a scrapedAt field appended by the actor.

Area   Field               Description
Root   type                Item type returned by the feed.
Root   video               Video object payload.
Root   scrapedAt           Collection timestamp (ISO 8601).
video  videoId             YouTube video ID.
video  title               Video title.
video  author              Author metadata, including channel identity and badges.
video  descriptionSnippet  Short description preview text.
video  badges              Video-level badges (for example CC).
video  thumbnails          Thumbnail URL variants.
video  movingThumbnails    Animated thumbnail variants (when available).
video  lengthSeconds       Duration in seconds (when available).
video  publishedTimeText   Relative publish time text.
video  stats.views         View count (when available).
video  isLiveNow           Whether the item is currently live (when available).

Sample record

{
  "type": "video",
  "video": {
    "author": {
      "avatar": [
        {
          "height": 68,
          "url": "https://yt3.googleusercontent.com/WZ_63J_-745xyW_DGxGi3VUyTZAe0Jvhw2ZCg7fdz-tv9esTbNPZTFR9X79QzA0ArIrMjYJCDA=s68-c-k-c0x00ffffff-no-rj",
          "width": 68
        }
      ],
      "badges": [
        {
          "text": "Verified",
          "type": "VERIFIED_CHANNEL"
        }
      ],
      "canonicalBaseUrl": "/@GoogleDevelopers",
      "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
      "title": "Google for Developers"
    },
    "badges": ["CC"],
    "descriptionSnippet": "Oh, Javascript. What are your thoughts on this programming language?\\n\\nSubscribe to Google for Developers → https://goo.gle/developers \\n\\nSpeakers: Meg Bauman, M.E Francis",
    "isLiveNow": false,
    "lengthSeconds": 27,
    "movingThumbnails": [
      {
        "height": 180,
        "url": "https://i.ytimg.com/an_webp/gPoz3Lt5gv8/mqdefault_6s.webp?du=3000&sqp=CPK-iM4G&rs=AOn4CLDOlI2WRYpYNvBa9nTp5x43ysYWbw",
        "width": 320
      }
    ],
    "publishedTimeText": "8 months ago",
    "stats": {
      "views": 34211
    },
    "thumbnails": [
      {
        "height": 94,
        "url": "https://i.ytimg.com/vi/gPoz3Lt5gv8/hqdefault.jpg?sqp=-oaymwE1CKgBEF5IVfKriqkDKAgBFQAAiEIYAXABwAEG8AEB-AG2CIACgA-KAgwIABABGDwgTShyMA8=&rs=AOn4CLBawq5xXnSQBVJzoLF-DrVnL-z7Xw",
        "width": 168
      },
      {
        "height": 110,
        "url": "https://i.ytimg.com/vi/gPoz3Lt5gv8/hqdefault.jpg?sqp=-oaymwE1CMQBEG5IVfKriqkDKAgBFQAAiEIYAXABwAEG8AEB-AG2CIACgA-KAgwIABABGDwgTShyMA8=&rs=AOn4CLCLWSqC_ws26ZhC2WwHQPgQ8MfFQQ",
        "width": 196
      }
    ],
    "title": "Just JavaScript things. 🙃",
    "videoId": "gPoz3Lt5gv8"
  },
  "scrapedAt": "2026-03-24T06:50:43.710Z"
}
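For downstream filtering or CSV export, a row like the sample above can be flattened to a few scalar columns. This sketch assumes the field layout shown in the output table; optional fields that are absent simply become None:

```python
def flatten_row(row):
    """Extract commonly used scalar fields from one dataset row."""
    video = row.get("video", {})
    return {
        "videoId": video.get("videoId"),
        "title": video.get("title"),
        "channel": video.get("author", {}).get("title"),
        "lengthSeconds": video.get("lengthSeconds"),
        "views": video.get("stats", {}).get("views"),
        "publishedTimeText": video.get("publishedTimeText"),
        "scrapedAt": row.get("scrapedAt"),
    }
```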

Notes

  • Returned fields vary by result type and channel content.
  • Respect YouTube terms and applicable laws when using scraped data.