Skool Community Scraper — Posts, Comments & Engagement
Scrape Skool community posts, comments & engagement data in seconds. 10x faster than browser scrapers — uses Next.js data API with cached auth. Track user replies, filter unanswered posts. Perfect for community managers & engagement automation.

Pricing

from $5.00 / 1,000 posts


Developer

Cristian Tala S.

Maintained by Community

Actor stats

0 bookmarks · 2 total users · 1 monthly active user · last modified a day ago

Skool Scraper — Extract Posts, Comments & Member Engagement from Any Skool Community

The fastest Skool scraper on Apify. Extract community posts, comments, engagement metrics, and member activity from any Skool.com group in seconds — not minutes.

Perfect for community managers, growth marketers, and anyone who needs Skool data at scale.

Why Choose This Skool Scraper?

Most Skool scrapers use full browser automation — slow, expensive, and fragile. This scraper uses Skool's internal Next.js data API, making it:

  • ⚡ 10x faster — 35 posts in 6 seconds (vs 5-7 minutes with browser scrapers)
  • 💰 90% cheaper — $0.004 compute per run (vs $0.05+ with browser-based tools)
  • 🔒 More reliable — No DOM parsing that breaks on UI changes
  • 🧠 Smart auth caching — Logs in once, caches for 3.5 days. No repeated logins.

Speed Comparison

| Metric | This Scraper | Browser Scrapers |
| --- | --- | --- |
| 35 posts | 6 seconds | 5-7 minutes |
| Speed | ~10 posts/sec | ~1 post/min |
| Memory | 256 MB | 1-4 GB |
| Cost/run | ~$0.004 | ~$0.05+ |

What Data Do You Get?

Every scraped post includes structured, clean JSON:

```json
{
  "post_id": "c/abc123",
  "title": "How I automated my community engagement",
  "content": "Here's my exact workflow for responding to every post...",
  "author": "John Doe",
  "authorSlug": "john-doe-1234",
  "timestamp": "2026-02-20T15:30:00.000Z",
  "url": "https://www.skool.com/my-community/how-i-automated-abc123",
  "likes": 12,
  "comment_count": 5,
  "comments": [
    {
      "author": "Jane Smith",
      "content": "This is exactly what I needed!",
      "likes": 3,
      "timestamp": "2026-02-20T16:00:00.000Z"
    }
  ],
  "labels": ["Tips & Tricks"],
  "pinned": false,
  "userReplied": true
}
```

Full Data Fields

| Field | Type | Description |
| --- | --- | --- |
| post_id | string | Unique post identifier |
| title | string | Post title |
| content | string | Full post body (text) |
| author | string | Author display name |
| authorSlug | string | Author profile slug |
| timestamp | string | ISO 8601 date |
| url | string | Direct link to post |
| likes | number | Upvotes count |
| comment_count | number | Total comments |
| comments | array | Full comment threads with replies, authors, likes |
| labels | array | Post category labels |
| pinned | boolean | Pinned post flag |
| userReplied | boolean | Whether your tracked user replied |

Use Cases

🎯 Community Management

Scrape your Skool community to find unanswered posts, track response rates, and ensure every member gets engagement. Set onlyUnanswered: true to see only posts without replies.

📈 Growth & Lead Generation

Extract member activity patterns, identify top contributors, and analyze what content drives the most engagement. Feed the data into your CRM or spreadsheet.

🤖 Engagement Automation

Connect this scraper to n8n, Make, or Zapier to build automated workflows: scrape → analyze → respond. Run it on a schedule to never miss a new post.

🔍 Competitor Research

Monitor competitor Skool communities. See what topics get traction, how active their community is, and what their members are asking about.

📊 Content Strategy

Analyze which post types (questions, tutorials, wins, introductions) get the most engagement. Use data to plan your content calendar.

💾 Data Backup & Export

Export your entire Skool community to JSON, CSV, or Excel. Back up your content before it's gone.

Input Parameters

Required

| Parameter | Type | Description |
| --- | --- | --- |
| groupSlug | string | Community slug from the URL. For https://www.skool.com/my-community, use my-community |

Authentication

| Parameter | Type | Description |
| --- | --- | --- |
| email | string | Your Skool account email |
| password | string | Your Skool account password (encrypted in transit) |

No credentials? The scraper runs in demo mode and returns basic public group info — great for testing before you buy.

Optional

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| maxPosts | integer | 100 | Maximum posts to scrape (1–1,000) |
| userToCheck | string | (none) | Your Skool username slug — marks which posts you've already replied to |
| onlyUnanswered | boolean | false | Only return posts with zero comments |

Quick Start Examples

Scrape 50 posts from any community

```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 50
}
```

Find unanswered posts (community management)

```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 200,
  "userToCheck": "your-username-1234",
  "onlyUnanswered": true
}
```

Track your engagement across all posts

```json
{
  "groupSlug": "my-community",
  "email": "your@email.com",
  "password": "your_password",
  "maxPosts": 500,
  "userToCheck": "your-username-1234"
}
```

Each post will have userReplied: true/false so you can instantly see where you've engaged and where you haven't.
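Once the dataset is downloaded, splitting posts on that flag is a one-liner. The `items` list below is illustrative sample data shaped like the actor's output, not real results:

```python
# Sample dataset items shaped like the actor's output (illustrative values).
items = [
    {"title": "Welcome thread", "userReplied": True, "comment_count": 4},
    {"title": "Need help with funnels", "userReplied": False, "comment_count": 0},
    {"title": "Weekly wins", "userReplied": False, "comment_count": 2},
]

# Posts your tracked user hasn't engaged with yet.
to_reply = [p for p in items if not p.get("userReplied")]
for post in to_reply:
    print(f"Reply needed: {post['title']} ({post['comment_count']} comments)")
```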

Integration Examples

Apify API (cURL)

```bash
curl -X POST "https://api.apify.com/v2/acts/cristiantala~skool-community-scraper/runs?token=YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "groupSlug": "my-community",
    "email": "you@email.com",
    "password": "your_password",
    "maxPosts": 50
  }'
```

JavaScript / Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_TOKEN' });

const run = await client.actor('cristiantala/skool-community-scraper').call({
  groupSlug: 'my-community',
  email: 'you@email.com',
  password: 'your_password',
  maxPosts: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Found ${items.length} posts`);

// Find unanswered posts
const unanswered = items.filter((p) => p.comment_count === 0);
console.log(`${unanswered.length} posts need a reply!`);
```

Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_TOKEN")

run = client.actor("cristiantala/skool-community-scraper").call(run_input={
    "groupSlug": "my-community",
    "email": "you@email.com",
    "password": "your_password",
    "maxPosts": 50,
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
for post in items:
    status = "✅" if post.get("userReplied") else "📌"
    print(f"{status} {post['title']}: {post['likes']} likes, {post['comment_count']} comments")
```

n8n / Make / Zapier

Use the Apify integration node to run this actor on a schedule. Combine with Slack, email, or Google Sheets to build a complete community management dashboard.

Pricing

Simple, transparent, pay-per-result pricing:

| Event | Price |
| --- | --- |
| Per post scraped | $0.005 |
| Actor start (per run) | $0.10 |

Cost Examples

| Posts | Cost | Per post |
| --- | --- | --- |
| 50 posts | $0.35 | $0.007 |
| 200 posts | $1.10 | $0.006 |
| 500 posts | $2.60 | $0.005 |
| 1,000 posts | $5.10 | $0.005 |

Platform compute costs are included — no hidden fees. What you see is what you pay.
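Those totals follow directly from the two pricing events: a flat $0.10 actor start plus $0.005 per post. A quick sanity check, using the figures from the pricing table:

```python
START_FEE = 0.10   # flat actor-start charge per run
PER_POST = 0.005   # charge per scraped post

def run_cost(posts: int) -> float:
    """Total cost of one run that scrapes `posts` posts."""
    return round(START_FEE + PER_POST * posts, 2)

for n in (50, 200, 500, 1000):
    print(f"{n} posts -> ${run_cost(n):.2f}")
# Matches the cost table: $0.35, $1.10, $2.60, $5.10
```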

How Authentication Works

  1. First run: The scraper opens a browser, logs into Skool with your credentials, and extracts auth tokens
  2. Token caching: Auth tokens are encrypted and cached for 3.5 days in Apify's secure KeyValueStore
  3. Subsequent runs: Skip the browser entirely — pure HTTP requests for maximum speed
  4. Auto-refresh: When tokens expire, the scraper automatically re-authenticates

Your credentials are never stored externally — only the encrypted session tokens are cached, and only within your Apify account.
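The caching flow above can be sketched roughly like this. This is a simplified illustration, not the actor's actual code: `login_with_browser` and the `_store` dict are hypothetical stand-ins for the real browser login and Apify's KeyValueStore:

```python
import time

TOKEN_TTL = 3.5 * 24 * 3600  # 3.5 days, per the caching policy above

# Stand-in for Apify's KeyValueStore (the real actor persists tokens there).
_store: dict = {}

def login_with_browser(email: str, password: str) -> dict:
    """Hypothetical browser login; returns fresh session tokens."""
    return {"session": "tok_abc123", "created_at": time.time()}

def get_auth(email: str, password: str) -> dict:
    """Return cached tokens if still fresh, else re-authenticate."""
    cached = _store.get("skool_auth")
    if cached and time.time() - cached["created_at"] < TOKEN_TTL:
        return cached                  # fast path: pure HTTP, no browser
    tokens = login_with_browser(email, password)
    _store["skool_auth"] = tokens      # cache for subsequent runs
    return tokens
```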

⚠️ You must be a member of the Skool community you want to scrape.

FAQ

How do I find my community slug? It's the part after skool.com/ in your community URL. For https://www.skool.com/my-awesome-group, the slug is my-awesome-group.

How do I find my user slug? Go to your Skool profile. Your URL will be skool.com/@your-slug. Use your-slug as the userToCheck parameter.

Can I scrape private/paid communities? Yes — as long as your account is a member of that community. The scraper uses your account's access level.

Can I scrape multiple communities? Yes. Run the scraper once per community. Auth caching works across runs for the same Skool account.
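A multi-community sweep can reuse one input template and swap only the slug; each resulting dict is what you would pass as `run_input` to the actor's `.call()`. The slugs here are placeholders:

```python
base_input = {"email": "you@email.com", "password": "your_password", "maxPosts": 100}
communities = ["community-a", "community-b", "community-c"]  # placeholder slugs

# One run input per community.
run_inputs = [{**base_input, "groupSlug": slug} for slug in communities]
for inp in run_inputs:
    print(f"Would scrape {inp['groupSlug']} (up to {inp['maxPosts']} posts)")
```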

What if Skool updates their platform? The scraper auto-recovers from stale build IDs (common after Skool deploys). For breaking changes, updates are pushed within 24 hours.

Is there a limit on posts? Up to 1,000 posts per run. For larger communities, run multiple times with pagination.

Can I export to CSV/Excel? Yes — Apify's default dataset supports JSON, CSV, Excel, XML, and RSS exports. Just click "Export" in the dataset view.
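The same exports are also reachable over the Apify API via the dataset items endpoint's `format` query parameter. This snippet only builds the URL; `DATASET_ID` and the token are placeholders:

```python
from urllib.parse import urlencode

def export_url(dataset_id: str, fmt: str = "csv", token: str = "YOUR_TOKEN") -> str:
    """Build an Apify dataset export URL for the given format (json, csv, xlsx, xml, rss)."""
    base = f"https://api.apify.com/v2/datasets/{dataset_id}/items"
    return f"{base}?{urlencode({'format': fmt, 'token': token})}"

print(export_url("DATASET_ID", "csv"))
# https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv&token=YOUR_TOKEN
```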

Support

Found a bug? Need a feature? Open an issue on the Issues tab.


Built by Cristian Tala — entrepreneur, angel investor, and automation enthusiast. Creator of El Ecosistema Startup.