
Skool Followers Scraper

Pricing

from $5.99 / 1,000 results


📊 Skool Followers Scraper extracts followers from Skool communities and profiles—names, usernames, profile links, join dates, activity. 🚀 Export CSV/JSON for audience insights, growth marketing, lead generation, competitor tracking, and targeted outreach.


Developer

Scrapier (Maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 2
  • Monthly active users: 1
  • Last modified: 3 days ago


Skool Followers Scraper

Skool Followers Scraper is an Apify actor that turns public Skool profile pages into a clean, structured follower list, fast. It fetches the followers tab, parses the embedded __NEXT_DATA__ JSON, and streams flat records you can export as CSV or JSON. Built for marketers, developers, analysts, and researchers, it helps you build a Skool followers list export for growth, outreach, and audience analysis at scale.

What data / output can you get?

Below are the exact fields this Skool audience scraper outputs to the dataset, one row per follower:

| Data field | Description | Example value |
| --- | --- | --- |
| id | Unique user ID from Skool | "1048292" |
| name | Skool username/handle | "john_doe" |
| linkToProfile | Direct link to the follower’s profile | "https://www.skool.com/@john_doe" |
| pictureProfile | Profile photo URL (full-size) | "https://cdn.skoolcdn.com/profiles/john_doe.jpg" |
| lastName | Last name (if available) | "Doe" |
| firstName | First name (if available) | "John" |
| bio | Bio text from profile metadata | "Growth marketer. Building communities." |
| online | Online status placeholder | "N/A" |
| lastOffline | Last seen (formatted from epoch nanoseconds) | "14:32:10 - 21/04/2026" |
| pictureBubble | Avatar bubble URL (if available) | "https://cdn.skoolcdn.com/avatars/john_doe.png" |
| createdAt | Account creation timestamp (raw) | "2024-03-18T12:05:27.000Z" |
| updatedAt | Last profile update timestamp (raw) | "2025-07-02T09:11:43.000Z" |

Notes:

  • Results stream live (one follower at a time) as the run progresses.
  • Export results from the Output tab as JSON or CSV for your Skool community members export CSV workflows.
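To illustrate how such flat records can be produced, here is a minimal sketch of pulling follower data out of a Next.js page's embedded `__NEXT_DATA__` script tag. The sample HTML, the `props.pageProps.users` path, and the `metadata.bio` field are assumptions for illustration, not the actor's actual internals:

```python
import json
import re

# Hypothetical raw HTML of a followers page; Next.js sites embed their
# page props as JSON inside a script tag with id "__NEXT_DATA__".
html = (
    '<script id="__NEXT_DATA__" type="application/json">'
    '{"props":{"pageProps":{"users":[{"id":"1048292","name":"john_doe",'
    '"metadata":{"bio":"Growth marketer."}}]}}}</script>'
)

def extract_followers(html: str) -> list[dict]:
    """Pull the embedded JSON blob and map each user to a flat record."""
    match = re.search(
        r'<script id="__NEXT_DATA__" type="application/json">(.*?)</script>',
        html, re.DOTALL,
    )
    if not match:
        return []
    data = json.loads(match.group(1))
    users = data["props"]["pageProps"].get("users", [])  # path is an assumption
    return [
        {
            "id": u.get("id", ""),
            "name": u.get("name", ""),
            "linkToProfile": f"https://www.skool.com/@{u.get('name', '')}",
            "bio": u.get("metadata", {}).get("bio", ""),
        }
        for u in users
    ]

records = extract_followers(html)
print(records[0]["linkToProfile"])  # https://www.skool.com/@john_doe
```

Each flattened dict corresponds to one dataset row, which is why exports map cleanly to CSV.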

Key features

  • 🔎 Precise JSON extraction from Skool pages
    Reads the embedded __NEXT_DATA__ JSON and maps each follower to a flat record, a reliable approach for clean, structured data.

  • 📥 Row-by-row streaming to the dataset
    Uses Actor.push_data per follower so you see live rows in Output while the run continues — perfect for Skool followers automation pipelines.

  • 🧳 Bulk input for multiple profiles
    Paste many profile URLs at once in urls and build a Skool followers database builder workflow at scale.

  • 🎛️ Per‑profile limits with sensible defaults
    Control output depth with maxItems (1–500). The UI defaults to 100; internally it falls back to 30 if unset or 0 to match the original script behavior.

  • 🌐 Smart connection routing
    Tries a direct request first, then datacenter proxy (if available), then rotates residential proxies (sticky after the first residential success) for resilient scraping.

  • 🔐 No login or cookies
    Scrapes only public Skool profile pages — no account or browser automation needed.

  • 🐍 Developer‑friendly foundation
    Built with apify>=2.0.0 and httpx, making it a lightweight, production‑ready Skool members scraper for Python-based workflows.

  • ⚡ Lightweight and fast
    Pure HTTP fetching with conservative delays between profiles for stability and throughput.
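The connection routing described above can be sketched as a simple fallback chain. The proxy URLs below are placeholders (real runs would build them from Apify Proxy credentials), and the injectable `fetch` callable stands in for an httpx request; this is a sketch of the strategy, not the actor's actual code:

```python
DATACENTER_PROXY = "http://dc.proxy.example:8000"   # hypothetical endpoint
RESIDENTIAL_PROXIES = [
    "http://res1.proxy.example:8000",               # hypothetical endpoints
    "http://res2.proxy.example:8000",
]

def connection_attempts(sticky: dict) -> list:
    """Direct first, then datacenter, then residential proxies; once a
    residential proxy has succeeded, reuse only that one (sticky mode)."""
    if sticky.get("proxy"):
        return [sticky["proxy"]]
    return [None, DATACENTER_PROXY, *RESIDENTIAL_PROXIES]

def fetch_with_fallback(url: str, sticky: dict, fetch):
    """`fetch(url, proxy)` performs one request (e.g. via httpx) and returns
    the HTTP status code, or raises on connection failure."""
    for proxy in connection_attempts(sticky):
        try:
            status = fetch(url, proxy)
        except Exception:
            continue
        if status == 200:
            if proxy in RESIDENTIAL_PROXIES:
                sticky["proxy"] = proxy  # stick with the proxy that worked
            return status
    return None

# Simulate a run where direct and datacenter fail but residential succeeds:
def flaky_fetch(url, proxy):
    if proxy in RESIDENTIAL_PROXIES:
        return 200
    raise ConnectionError("blocked")

sticky = {}
fetch_with_fallback("https://www.skool.com/@example", sticky, flaky_fetch)
print(sticky["proxy"])  # the first residential proxy, now sticky
```

Keeping the winning residential proxy sticky avoids re-burning the direct and datacenter attempts on every subsequent profile.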

How to use Skool Followers Scraper - step by step

  1. Create or log in to your Apify account.
  2. Open the Skool Followers Scraper actor in your Apify workspace.
  3. Add input URLs:
    • Paste one or more public Skool profile URLs (e.g., https://www.skool.com/@username) into the urls field; bulk input is supported.
  4. Set maxItems (optional):
    • Choose how many followers to save per profile (1–500). The UI default is 100; if you omit or set 0, the runtime falls back to 30.
  5. Advanced (proxy) — optional:
    • Leave proxyConfiguration off for typical runs. If your workspace requires Apify Proxy, expand Advanced and set useApifyProxy accordingly.
  6. Start the run:
    • Click Start. The log will show connection mode (direct/datacenter/residential), and each saved follower appears in Output immediately.
  7. Monitor progress:
    • The actor adds a short randomized delay between profiles and will switch to sticky residential if needed for stability.
  8. Export your results:
    • Go to the Output tab and export as JSON or CSV for your CRM, analytics, or enrichment workflows.

Pro tip: Prefer local runs? Use the commands below to iterate quickly and version your inputs.

```bash
cd Skool-Followers-Scraper
pip install -r requirements.txt
apify run
# or specify your own input file:
apify run --input-file path/to/input.json
```

Use cases

| Use case | Description |
| --- | --- |
| Growth marketing – audience building | Export followers to CSV/JSON to build a Skool followers database for segmented campaigns and lookalike targeting. |
| Lead generation – targeted outreach | Enrich CRM records with follower names, profile links, and bios to prioritize outreach. |
| Competitor tracking – follower trends | Compare follower snapshots across creator profiles to inform Skool followers tracker workflows. |
| Audience research – persona insights | Analyze bios and timestamps (createdAt/updatedAt) to understand audience backgrounds and recency. |
| Content strategy – community alignment | Map profiles and bios to content themes and measure alignment with target communities. |
| Data enrichment – automated pipelines | Use exported JSON in ETL to power internal dashboards and downstream automations. |
| Academic & market research | Build datasets for longitudinal studies of creator ecosystems and engagement patterns. |
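As a small downstream example, an exported JSON file can be filtered and re-shaped for CRM import with only the standard library. The sample rows below mimic the actor's export shape with illustrative values; the `recent_followers` helper and its cutoff are this sketch's own invention:

```python
import csv
import io
import json
from datetime import datetime

# Sample rows in the actor's export shape (values are illustrative).
exported = json.loads("""[
  {"id": "1048292", "name": "john_doe", "bio": "Growth marketer.",
   "createdAt": "2024-03-18T12:05:27.000Z"},
  {"id": "993011", "name": "jane_roe", "bio": "",
   "createdAt": "2021-06-01T08:00:00.000Z"}
]""")

def recent_followers(rows, since_year: int):
    """Keep followers whose account was created in or after `since_year`."""
    out = []
    for row in rows:
        created = datetime.fromisoformat(row["createdAt"].replace("Z", "+00:00"))
        if created.year >= since_year:
            out.append(row)
    return out

recent = recent_followers(exported, 2024)

# Write the filtered segment as CSV, ready for a CRM or spreadsheet import.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "bio", "createdAt"])
writer.writeheader()
writer.writerows(recent)
print(buf.getvalue())
```

The same pattern extends to any segmentation rule you can express over the exported fields.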

Why choose Skool Followers Scraper?

A precision-built Skool followers extractor focused on reliability, automation, and structured output.

  • 🎯 Accurate by design: Parses __NEXT_DATA__ directly for consistent, structured records.
  • 🚀 Scales with your list: Paste many profile URLs and limit rows per profile for predictable output sizes.
  • 💾 Easy exports: Download clean JSON or CSV for pipelines and analysis.
  • 🐍 Built for developers: Python-based actor using apify and httpx with clear, flat output fields.
  • 🛡️ Safe & public-only: Accesses public profile pages without login or cookies.
  • 🌐 Robust connectivity: Automatic routing from direct to datacenter to residential (with sticky mode) to reduce failures.
  • 🧰 Better than extensions: No flaky browser automation — server-side HTTP for stable runs and streaming datasets.

Bottom line: if you need a dependable Skool community members scraper focused on public follower data and clean exports, this actor delivers.

Is it legal to scrape Skool followers?

Yes — when used responsibly. This actor reads public Skool profile pages only. It does not access private or authenticated data.

Guidelines for compliant use:

  • Scrape only publicly available pages you’re allowed to access.
  • Respect Skool’s terms of service and applicable data protection laws (e.g., GDPR, CCPA).
  • Avoid collecting sensitive personal information and do not use data for spam.
  • Consult your legal team for edge cases or jurisdiction-specific requirements.

You are responsible for ensuring your use complies with Skool’s rules and local regulations.

Input parameters & output format

Example JSON input

```json
{
  "urls": [
    { "url": "https://www.skool.com/@liamottley" },
    "https://www.skool.com/@yourcreator"
  ],
  "maxItems": 100,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
```
| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| urls | array | Yes | – | List of Skool profile URLs to process. Accepts plain strings or objects with a "url" field. Supports bulk input. |
| maxItems | integer (1–500) | No | 100 (UI); 30 if unset/0 at runtime | Maximum follower rows to keep per profile, in on-page order. Use a small value for tests; raise it for full exports. |
| proxyConfiguration | object | No | { "useApifyProxy": false } | Optional Apify Proxy settings. Leave off for typical runs. If enabled, requests may route via the datacenter pool; the actor also has an internal residential fallback. |

Notes:

  • If maxItems is omitted or set to 0 in a raw JSON run, the actor falls back to 30 internally for compatibility with the original script.
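The documented default behavior can be mirrored in a couple of lines. The clamp to the 1–500 range is an assumption based on the UI limits; the 30-row fallback is taken from the description above:

```python
def resolve_max_items(actor_input: dict) -> int:
    """Unset or 0 falls back to 30, per the documented runtime behavior.
    Clamping to 1-500 mirrors the UI range (the clamp is an assumption)."""
    value = actor_input.get("maxItems") or 30  # None and 0 both trigger fallback
    return max(1, min(int(value), 500))
```

For example, `resolve_max_items({})` and `resolve_max_items({"maxItems": 0})` both yield 30, while an explicit value passes through unchanged.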

Example JSON output

```json
{
  "id": "1048292",
  "name": "john_doe",
  "linkToProfile": "https://www.skool.com/@john_doe",
  "pictureProfile": "https://cdn.skoolcdn.com/profiles/john_doe.jpg",
  "lastName": "Doe",
  "firstName": "John",
  "bio": "Growth marketer. Building communities.",
  "online": "N/A",
  "lastOffline": "14:32:10 - 21/04/2026",
  "pictureBubble": "https://cdn.skoolcdn.com/avatars/john_doe.png",
  "createdAt": "2024-03-18T12:05:27.000Z",
  "updatedAt": "2025-07-02T09:11:43.000Z"
}
```

Field notes:

  • online is a placeholder set to "N/A".
  • lastOffline is a formatted timestamp; if the source value is missing/invalid it becomes "N/A".
  • Some fields may be empty strings ("") when not present in the source JSON.
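The lastOffline conversion from epoch nanoseconds can be sketched like this. The actor's exact formatting code isn't published, so the UTC timezone choice is an assumption; the "HH:MM:SS - DD/MM/YYYY" pattern matches the example value shown above:

```python
from datetime import datetime, timezone

def format_last_offline(ns) -> str:
    """Convert epoch nanoseconds to 'HH:MM:SS - DD/MM/YYYY', or 'N/A' when
    the source value is missing or invalid. UTC is an assumption here."""
    try:
        dt = datetime.fromtimestamp(int(ns) / 1_000_000_000, tz=timezone.utc)
        return dt.strftime("%H:%M:%S - %d/%m/%Y")
    except (TypeError, ValueError, OSError):
        return "N/A"

print(format_last_offline(1776781930000000000))  # 14:32:10 - 21/04/2026
print(format_last_offline(None))                 # N/A
```

The broad except clause is what produces the "N/A" fallback mentioned in the field notes.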

FAQ

Do I need a Skool login or cookies to use this?

✅ No. The scraper reads public Skool profile pages via HTTP requests and does not require login or cookies.

Which fields are included in the export?

✅ The dataset contains id, name, linkToProfile, pictureProfile, lastName, firstName, bio, online, lastOffline, pictureBubble, createdAt, and updatedAt — one row per follower.

Can I scrape multiple profiles at once?

✅ Yes. Add as many profile URLs as your plan allows in the urls array. The actor processes them sequentially with a short randomized delay between profiles.

How many followers can I collect per profile?

✅ You control this with maxItems (1–500). The UI defaults to 100; if you leave it unset or 0 in raw JSON, the runtime falls back to 30.

How does the proxy/connection logic work?

✅ The actor tries a direct connection first. If the page doesn’t load cleanly, it falls back to an Apify datacenter proxy (if configured/available), and then to residential proxies with up to 3 attempts. After a successful residential fetch, it switches to sticky residential for the rest of the run.

What formats can I export?

✅ You can export results from the Output tab as JSON or CSV for analysis, enrichment, and automation.

Can I run it locally?

✅ Yes. Install dependencies, then run apify run. You can also pass a custom input file with apify run --input-file path/to/input.json for repeatable local workflows.

What does this scrape exactly on Skool?

✅ It loads the followers tab for each provided profile URL (e.g., https://www.skool.com/@username) and extracts public follower records embedded in the page’s __NEXT_DATA__ JSON.

Closing CTA / Final thoughts

Skool Followers Scraper is built to turn public Skool profile followers into clean, structured datasets you can use immediately. With precise JSON parsing, resilient connection handling, bulk input support, and live streaming to the dataset, it’s ideal for marketers, analysts, researchers, and developers who need reliable Skool followers data scraping at scale. Export JSON/CSV from the Output tab or run locally with Python for automation-ready flows. Start extracting smarter audience insights and build your next Skool followers list export with confidence.