Skool Community Scraper — Posts, Comments & Engagement
Pricing
from $5.00 / 1,000 posts
Scrape Skool community posts, comments & engagement data in seconds. 10x faster than browser scrapers — uses Next.js data API with cached auth. Track user replies, filter unanswered posts. Perfect for community managers & engagement automation.
Skool Scraper — Extract Posts, Comments & Member Engagement from Any Skool Community
The fastest Skool scraper on Apify. Extract community posts, comments, engagement metrics, and member activity from any Skool.com group in seconds — not minutes.
Perfect for community managers, growth marketers, and anyone who needs Skool data at scale.
Why Choose This Skool Scraper?
Most Skool scrapers use full browser automation — slow, expensive, and fragile. This scraper uses Skool's internal Next.js data API, making it:
- ⚡ 10x faster — 35 posts in 6 seconds (vs 5-7 minutes with browser scrapers)
- 💰 90% cheaper — $0.004 compute per run (vs $0.05+ with browser-based tools)
- 🔒 More reliable — No DOM parsing that breaks on UI changes
- 🧠 Smart auth caching — Logs in once, caches for 3.5 days. No repeated logins.
Speed Comparison
| Metric | This Scraper | Browser Scrapers |
|---|---|---|
| 35 posts | 6 seconds | 5-7 minutes |
| Speed | ~6 posts/sec | ~1 post/min |
| Memory | 256 MB | 1-4 GB |
| Cost/run | ~$0.004 | ~$0.05+ |
What Data Do You Get?
Every scraped post is returned as clean, structured JSON:

```json
{
  "post_id": "c/abc123",
  "title": "How I automated my community engagement",
  "content": "Here's my exact workflow for responding to every post...",
  "author": "John Doe",
  "authorSlug": "john-doe-1234",
  "timestamp": "2026-02-20T15:30:00.000Z",
  "url": "https://www.skool.com/my-community/how-i-automated-abc123",
  "likes": 12,
  "comment_count": 5,
  "comments": [
    {
      "author": "Jane Smith",
      "content": "This is exactly what I needed!",
      "likes": 3,
      "timestamp": "2026-02-20T16:00:00.000Z"
    }
  ],
  "labels": ["Tips & Tricks"],
  "pinned": false,
  "userReplied": true
}
```
Full Data Fields
| Field | Type | Description |
|---|---|---|
| post_id | string | Unique post identifier |
| title | string | Post title |
| content | string | Full post body (text) |
| author | string | Author display name |
| authorSlug | string | Author profile slug |
| timestamp | string | ISO 8601 date |
| url | string | Direct link to post |
| likes | number | Upvote count |
| comment_count | number | Total comments |
| comments | array | Full comment threads with replies, authors, likes |
| labels | array | Post category labels |
| pinned | boolean | Pinned post flag |
| userReplied | boolean | Whether your tracked user replied |
Use Cases
🎯 Community Management
Scrape your Skool community to find unanswered posts, track response rates, and ensure every member gets engagement. Set onlyUnanswered: true to see only posts without replies.
📈 Growth & Lead Generation
Extract member activity patterns, identify top contributors, and analyze what content drives the most engagement. Feed the data into your CRM or spreadsheet.
🤖 Engagement Automation
Connect this scraper to n8n, Make, or Zapier to build automated workflows: scrape → analyze → respond. Run it on a schedule to never miss a new post.
🔍 Competitor Research
Monitor competitor Skool communities. See what topics get traction, how active their community is, and what their members are asking about.
📊 Content Strategy
Analyze which post types (questions, tutorials, wins, introductions) get the most engagement. Use data to plan your content calendar.
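That analysis can be run directly on the scraper's output. A minimal sketch, assuming the dataset fields documented above (labels, likes, comment_count) and using hypothetical sample data in place of a real run:

```python
from collections import defaultdict

# Sample items in the scraper's output shape (hypothetical data for illustration)
items = [
    {"title": "Intro post", "labels": ["Introductions"], "likes": 2, "comment_count": 1},
    {"title": "How-to thread", "labels": ["Tips & Tricks"], "likes": 12, "comment_count": 5},
    {"title": "Another tip", "labels": ["Tips & Tricks"], "likes": 8, "comment_count": 3},
]

# Sum likes + comments per label, then average to rank post types by engagement
totals = defaultdict(lambda: {"engagement": 0, "posts": 0})
for post in items:
    for label in post.get("labels", []):
        totals[label]["engagement"] += post["likes"] + post["comment_count"]
        totals[label]["posts"] += 1

ranking = sorted(
    ((label, t["engagement"] / t["posts"]) for label, t in totals.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for label, avg in ranking:
    print(f"{label}: {avg:.1f} avg engagement per post")
```

Swap in the items from your dataset and the top of the ranking tells you which labels to double down on in your content calendar.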
💾 Data Backup & Export
Export your entire Skool community to JSON, CSV, or Excel. Back up your content before it's gone.
Input Parameters
Required
| Parameter | Type | Description |
|---|---|---|
| groupSlug | string | Community slug from the URL. For https://www.skool.com/my-community → my-community |
Authentication
| Parameter | Type | Description |
|---|---|---|
| email | string | Your Skool account email |
| password | string | Your Skool account password (encrypted in transit) |
No credentials? The scraper runs in demo mode and returns basic public group info — great for testing before you buy.
Optional
| Parameter | Type | Default | Description |
|---|---|---|---|
| maxPosts | integer | 100 | Maximum posts to scrape (1–1,000) |
| userToCheck | string | — | Your Skool username slug — marks which posts you've already replied to |
| onlyUnanswered | boolean | false | Only return posts with zero comments |
Quick Start Examples
Scrape 50 posts from any community
```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 50
}
```
Find unanswered posts (community management)
```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 200,
  "userToCheck": "your-username-1234",
  "onlyUnanswered": true
}
```
Track your engagement across all posts
```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 500,
  "userToCheck": "your-username-1234"
}
```
Each post will have userReplied: true/false so you can instantly see where you've engaged and where you haven't.
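Once the run finishes, splitting the dataset into "replied" and "needs a reply" is a one-line filter. A minimal sketch over hypothetical sample items (a real run returns these fields per post):

```python
# Sample dataset items (hypothetical); real runs return these fields per post
items = [
    {"title": "Welcome thread", "userReplied": True, "comment_count": 4},
    {"title": "Question about pricing", "userReplied": False, "comment_count": 0},
    {"title": "Weekly wins", "userReplied": False, "comment_count": 7},
]

# Posts your tracked user hasn't touched yet — your reply queue
to_reply = [p for p in items if not p.get("userReplied")]
for post in to_reply:
    print(f"Needs a reply: {post['title']} ({post['comment_count']} comments)")

print(f"{len(to_reply)} of {len(items)} posts still need your reply")
```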
Integration Examples
Apify API (cURL)
```bash
curl -X POST "https://api.apify.com/v2/acts/cristiantala~skool-community-scraper/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "groupSlug": "my-community",
    "email": "you@email.com",
    "password": "your_password",
    "maxPosts": 50
  }'
```
JavaScript / Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('cristiantala/skool-community-scraper').call({
  groupSlug: 'my-community',
  email: 'you@email.com',
  password: 'your_password',
  maxPosts: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} posts`);

// Find unanswered posts
const unanswered = items.filter(p => p.comment_count === 0);
console.log(`${unanswered.length} posts need a reply!`);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_TOKEN")

run = client.actor("cristiantala/skool-community-scraper").call(run_input={
    "groupSlug": "my-community",
    "email": "you@email.com",
    "password": "your_password",
    "maxPosts": 50,
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
for post in items:
    status = "✅" if post.get("userReplied") else "📌"
    print(f"{status} {post['title']} — {post['likes']} likes, {post['comment_count']} comments")
```
n8n / Make / Zapier
Use the Apify integration node to run this actor on a schedule. Combine with Slack, email, or Google Sheets to build a complete community management dashboard.
Pricing
Simple, transparent, pay-per-result pricing:
| Event | Price |
|---|---|
| Per post scraped | $0.005 |
| Actor start (per run) | $0.10 |
Cost Examples
| Posts | Cost | Per post |
|---|---|---|
| 50 posts | $0.35 | $0.007 |
| 200 posts | $1.10 | $0.006 |
| 500 posts | $2.60 | $0.005 |
| 1,000 posts | $5.10 | $0.005 |
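The table follows directly from the two pricing events above: a flat $0.10 actor start plus $0.005 per scraped post. A quick sanity check of that formula:

```python
START_FEE = 0.10   # flat "actor start" event, charged once per run
PER_POST = 0.005   # per-result price for each scraped post

def run_cost(posts: int) -> float:
    """Total cost in USD for a single run that scrapes `posts` posts."""
    return round(START_FEE + PER_POST * posts, 2)

for n in (50, 200, 500, 1000):
    print(f"{n} posts -> ${run_cost(n):.2f} (${run_cost(n) / n:.3f}/post)")
```

The per-post column shrinks toward $0.005 as runs get larger, because the start fee is amortized over more posts.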
Platform compute costs are included — no hidden fees. What you see is what you pay.
How Authentication Works
- First run: The scraper opens a browser, logs into Skool with your credentials, and extracts auth tokens
- Token caching: Auth tokens are encrypted and cached for 3.5 days in Apify's secure KeyValueStore
- Subsequent runs: Skip the browser entirely — pure HTTP requests for maximum speed
- Auto-refresh: When tokens expire, the scraper automatically re-authenticates
Your credentials are never stored externally — only the encrypted session tokens are cached, and only within your Apify account.
⚠️ You must be a member of the Skool community you want to scrape.
FAQ
How do I find my community slug?
It's the part after skool.com/ in your community URL. For https://www.skool.com/my-awesome-group the slug is my-awesome-group.
How do I find my user slug?
Go to your Skool profile. Your URL will be skool.com/@your-slug. Use your-slug as the userToCheck parameter.
Can I scrape private/paid communities?
Yes — as long as your account is a member of that community. The scraper uses your account's access level.
Can I scrape multiple communities?
Yes. Run the scraper once per community. Auth caching works across runs for the same Skool account.
What if Skool updates their platform?
The scraper auto-recovers from stale build IDs (common after Skool deploys). For breaking changes, updates are pushed within 24 hours.
Is there a limit on posts?
Up to 1,000 posts per run. For larger communities, run multiple times with pagination.
Can I export to CSV/Excel?
Yes — Apify's default dataset supports JSON, CSV, Excel, XML, and RSS exports. Just click "Export" in the dataset view.
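If you'd rather build the CSV yourself (for example inside a script that already post-processes the JSON), the standard library is enough. A minimal sketch over hypothetical sample posts; the nested comments array is serialized to a JSON string, since CSV rows are flat:

```python
import csv
import io
import json

# Sample posts in the scraper's output shape (hypothetical data)
items = [
    {"post_id": "c/abc123", "title": "How I automated my community engagement",
     "likes": 12, "comment_count": 5, "comments": [{"author": "Jane Smith", "likes": 3}]},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["post_id", "title", "likes", "comment_count", "comments"])
writer.writeheader()
for post in items:
    row = dict(post)
    row["comments"] = json.dumps(row["comments"])  # nested list -> JSON string, since CSV is flat
    writer.writerow(row)

csv_text = buffer.getvalue()
print(csv_text)
```

Replace `buffer` with an open file handle to write `posts.csv` to disk.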
Support
Found a bug? Need a feature? Open an issue on the Issues tab.
Built by Cristian Tala — entrepreneur, angel investor, and automation enthusiast. Creator of El Ecosistema Startup.