YouTube Channel Community Scraper
Pricing: from $4.99 / 1,000 results
Export posts from a YouTube channel Community tab, including author, text, stats, attachments, and timestamps.
Developer: PowerAI (maintained by Community)
Last modified: 4 days ago
Collect posts from a YouTube channel Community tab and export one row per item.
Key Features
- Requires a channel ID (`id`)
- Collects up to your chosen maximum number of posts, continuing with cursor paging until that limit is reached or there are no more pages
- Typical fields include `type`, `post.postId`, `post.text`, `post.publishedTimeText`, `post.stats`, `post.author`, and `post.attachment` (e.g. image sets)
- Each record includes `scrapedAt` (ISO 8601)
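The cursor-paging behavior described above can be sketched as a simple loop. This is an illustrative model, not the actor's actual source; `fetch_page` is a hypothetical stand-in for the actor's internal request to the Community tab feed, assumed to return a dict with a `contents` list and an optional `cursor` for the next page.

```python
def collect_posts(fetch_page, max_results=48):
    """Accumulate feed items until max_results is reached or the feed ends.

    fetch_page(cursor) is a hypothetical callable returning
    {"contents": [...], "cursor": <next cursor or None>}.
    """
    items, cursor = [], None
    while len(items) < max_results:
        page = fetch_page(cursor)      # cursor=None requests the first page
        items.extend(page["contents"])
        cursor = page.get("cursor")
        if not cursor:                 # no more pages in the feed
            break
    return items[:max_results]         # trim overshoot from the last page
```

The cap is applied after the final page is fetched, so a page boundary never causes the result to exceed `max_results`.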
Why Use It?
- Practical: Snapshot a channel's community activity in structured form
- Structured: JSON rows ready for analysis, monitoring, and automation
- Reliable: Cursor-based pagination until cap or feed end
Great For
- Community engagement monitoring
- Creator posting cadence tracking
- Competitor community content audits
Input Parameters
| Parameter | Required | Description |
|---|---|---|
| id | Yes | YouTube channel ID. |
| maxResults | No | Maximum rows to collect (default: 48). |
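A minimal input might look like this (the channel ID shown is the Google for Developers channel from the sample record; substitute your own target and limit):

```json
{
  "id": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
  "maxResults": 100
}
```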
Output
Each dataset row is one item from contents[] plus scrapedAt appended by the actor.
| Area | Field | Description |
|---|---|---|
| Root | type | Item type returned by the feed. |
| Root | post | Community post object payload. |
| Root | scrapedAt | Collection timestamp (ISO 8601). |
| post | postId | Community post ID. |
| post | text | Post text (when available). |
| post | publishedTimeText | Relative publish time text. |
| post | stats.likes | Like count when available. |
| post | stats.comments | Comment count when available. |
| post | author.title | Author display name. |
| post | author.channelId | Author channel ID. |
| post | author.canonicalBaseUrl | Author channel handle/base URL path when available. |
| post | author.avatar | Author avatar image variants with different sizes. |
| post | attachment.type | Attachment kind (e.g. images). |
| post | attachment.images | Image groups when the post includes photos. |
| attachment.images[] | source[] | Image size variants (url, width, height). |
Sample record

```json
{
  "post": {
    "attachment": {
      "images": [
        {
          "source": [
            {
              "height": 288,
              "url": "https://yt3.ggpht.com/g2ZkpUd6U73HcEswaIjNM9Zhk33fFnjWBslqJRQbEB95Zr4dCFFumw8UAkC2BpWiv65bOVCG5Wi_0g=s288-c-fcrop64=1,00000000ffffffff-rw-nd-v1",
              "width": 288
            },
            {
              "height": 400,
              "url": "https://yt3.ggpht.com/g2ZkpUd6U73HcEswaIjNM9Zhk33fFnjWBslqJRQbEB95Zr4dCFFumw8UAkC2BpWiv65bOVCG5Wi_0g=s400-c-fcrop64=1,00000000ffffffff-rw-nd-v1",
              "width": 400
            }
          ]
        },
        {
          "source": [
            {
              "height": 288,
              "url": "https://yt3.ggpht.com/NDDNxUJveCrBE5vxITenRK3FB78__FO68iwabAEn7FiG31VAvz8DwleLrJYkCdGsLWSNThYaZO_zQg=s288-c-fcrop64=1,00000000ffffffff-rw-nd-v1",
              "width": 288
            },
            {
              "height": 400,
              "url": "https://yt3.ggpht.com/NDDNxUJveCrBE5vxITenRK3FB78__FO68iwabAEn7FiG31VAvz8DwleLrJYkCdGsLWSNThYaZO_zQg=s400-c-fcrop64=1,00000000ffffffff-rw-nd-v1",
              "width": 400
            }
          ]
        }
      ],
      "type": "images"
    },
    "author": {
      "avatar": [
        {
          "height": 32,
          "url": "https://yt3.googleusercontent.com/WZ_63J_-745xyW_DGxGi3VUyTZAe0Jvhw2ZCg7fdz-tv9esTbNPZTFR9X79QzA0ArIrMjYJCDA=s32-c-k-c0x00ffffff-no-rj-mo",
          "width": 32
        }
      ],
      "canonicalBaseUrl": "/@GoogleDevelopers",
      "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw",
      "title": "Google for Developers"
    },
    "postId": "UgkxPZZsModS3W475sorobWJ4finx06wvGtF",
    "publishedTimeText": "14 hours ago",
    "stats": {
      "comments": 9,
      "likes": 277
    },
    "text": "A quick JavaScript brain teaser. We start with an array created using new Array(3). Then we run a small piece of code that looks completely reasonable.\n\n At the end, we log a value, which may not be what you expected. What number appears, and why? → https://youtube.com/shorts/kXIgLG9INQU \n\nWatch the video and share your answer in the comments."
  },
  "type": "post",
  "scrapedAt": "2026-03-24T06:43:14.286Z"
}
```
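Downstream processing of such records is straightforward. The sketch below, assuming the record shape shown in the sample, pulls out a few commonly wanted values and picks the widest variant from each image group; the record here is a trimmed copy of the sample with the image URLs replaced by placeholder values.

```python
# Trimmed record in the shape of the sample above; image URLs are placeholders.
record = {
    "type": "post",
    "scrapedAt": "2026-03-24T06:43:14.286Z",
    "post": {
        "postId": "UgkxPZZsModS3W475sorobWJ4finx06wvGtF",
        "stats": {"likes": 277, "comments": 9},
        "author": {"title": "Google for Developers", "channelId": "UC_x5XG1OV2P6uZZ5FSM9Ttw"},
        "attachment": {
            "type": "images",
            "images": [
                {"source": [
                    {"url": "https://example.invalid/img=s288", "width": 288, "height": 288},
                    {"url": "https://example.invalid/img=s400", "width": 400, "height": 400},
                ]}
            ],
        },
    },
}

def largest_image_urls(post):
    """Return the URL of the widest size variant from each image group.

    Handles posts without attachments, since fields vary by post type.
    """
    attachment = post.get("attachment") or {}
    urls = []
    for group in attachment.get("images", []):
        best = max(group.get("source", []), key=lambda s: s.get("width", 0), default=None)
        if best:
            urls.append(best["url"])
    return urls

print(record["post"]["author"]["title"])   # author display name
print(record["post"]["stats"]["likes"])    # like count
print(largest_image_urls(record["post"]))  # widest variant per image group
```

Because fields vary by post type (see Notes below), the helper tolerates missing `attachment`, `images`, and `source` keys rather than assuming they are present.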
Notes
- Returned fields vary by post type and attachment availability.
- Respect YouTube terms and applicable laws when using scraped data.