Twitter Followers Scraper

This Actor allows you to extract all followers of any public Twitter account quickly, reliably, and without needing API keys. Perfect for lead generation, competitor analysis, audience research, marketing, automation tools, or building detailed datasets of real Twitter users.

A professional Apify Actor for scraping Twitter followers and following lists using the twikit library. This Actor efficiently fetches user data in batches, respects rate limits, and provides pagination support for large-scale data collection.

Features

  • Batch Processing: Fetches users in batches of 20 (Twitter's default) for optimal performance
  • Pagination Support: Uses cursor-based pagination to continue scraping from where you left off
  • Rate Limiting: Configurable delays between batch requests to respect Twitter's rate limits
  • Comprehensive Data: Extracts complete user profiles matching Twitter API response format
  • Resume Capability: Save and resume scraping using cursor values
  • Efficient Output: Collects all users in a single array and pushes them at once

Input Parameters

Required Fields

  • cookies (array, required): Twitter authentication cookies as JSON array. Must include valid session cookies from a logged-in Twitter account.
    [
      {
        "name": "auth_token",
        "value": "your_auth_token_here"
      },
      {
        "name": "ct0",
        "value": "your_csrf_token_here"
      }
    ]

How to Obtain Your Twitter Cookies

To run this Actor, you need to provide your own Twitter authentication cookies. Follow these steps to safely export your cookies:

  1. Install the Cookie-Editor Chrome extension:

    • Open the Cookie-Editor page in the Chrome Web Store and click "Add to Chrome" to install it.
  2. Log in to your Twitter account in the same browser.

  3. Open Cookie-Editor on the Twitter tab:

    • Click the Cookie-Editor icon in your browser toolbar while the Twitter tab is active.
  4. Export your cookies:

    • In the extension, click "Export" to copy all your Twitter cookies in JSON format.
    • Ensure the exported data contains at least your auth_token and ct0 cookies.
  5. Paste your cookies into the input:

    • Go to the Actor input form and paste the entire cookie array into the cookies field.

Tip: For best results, export cookies while logged in and recently active on Twitter. A quick way to sanity-check the exported array is shown after the Required Fields list below.
Security Note: Never share your cookies publicly; they grant access to your Twitter account. Use a dedicated or disposable account for scraping if possible.

  • profileUrl (string, required): Twitter profile URL to scrape followers/following from.
    • Format: https://twitter.com/username or https://x.com/username
    • Example: https://twitter.com/elonmusk
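
Before pasting the export into the Actor input, you can sanity-check it locally. The snippet below is a small, hypothetical helper (not part of the Actor); the file name twitter_cookies.json is only an example:

import json

# Hypothetical helper: verify a Cookie-Editor export before using it as Actor input.
REQUIRED = {"auth_token", "ct0"}

def check_cookie_export(raw_json: str) -> dict:
    """Return a {name: value} mapping, raising if required cookies are missing."""
    cookies = json.loads(raw_json)
    mapping = {c["name"]: c["value"] for c in cookies if "name" in c and "value" in c}
    missing = REQUIRED - set(mapping)
    if missing:
        raise ValueError(f"Export is missing required cookies: {', '.join(sorted(missing))}")
    return mapping

if __name__ == "__main__":
    with open("twitter_cookies.json", "r", encoding="utf-8") as f:  # example path, not fixed
        mapping = check_cookie_export(f.read())
    print(f"Found {len(mapping)} cookies; auth_token and ct0 are present.")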

Optional Fields

  • friendshipType (string, default: "followers"): Type of relationships to scrape.

    • Options: "followers" or "following"
    • "followers": Scrapes users who follow the target account
    • "following": Scrapes users that the target account follows
  • count (integer, default: 100, min: 1, max: 10000): Maximum number of users to scrape.

  • minDelay (integer, default: 3, min: 1, max: 60): Minimum delay in seconds between batch requests.

  • maxDelay (integer, default: 15, min: 1, max: 300): Maximum delay in seconds between batch requests.

    • A random delay between minDelay and maxDelay is applied between each batch request.
  • cursor (integer, default: 0): Cursor for pagination. Use this to continue scraping from where you left off.

    • Use 0 or leave empty to start from the beginning
    • Use the saved cursor value from previous runs to resume

Output

The Actor outputs an array of user objects, each containing comprehensive profile information:

{
  "id": "1151281581769859073",
  "rest_id": "1151281581769859073",
  "screen_name": "username",
  "name": "Display Name",
  "description": "User bio",
  "created_at": "Wed Jul 17 00:04:53 +0000 2019",
  "location": "Location",
  "url": "https://example.com",
  "profile_image_url_https": "https://pbs.twimg.com/...",
  "profile_banner_url": "https://pbs.twimg.com/...",
  "followers_count": 3596,
  "friends_count": 2894,
  "statuses_count": 12840,
  "favourites_count": 36576,
  "verified": false,
  "is_blue_verified": true,
  "protected": false,
  "can_dm": true,
  "can_media_tag": false,
  "entities": {
    "description": {"urls": []},
    "url": {"urls": [...]}
  },
  "__typename": "User"
}

How It Works

  1. Authentication: The Actor uses provided cookies to authenticate with Twitter via the twikit library.

  2. User Resolution: Extracts the username from the profile URL and resolves it to a user ID.

  3. Batch Fetching:

    • Requests users in batches of 20 (Twitter's default batch size)
    • Processes all users in a batch without delays
    • Applies random delay between batch requests (between minDelay and maxDelay)
  4. Pagination:

    • Uses cursor-based pagination to fetch subsequent batches
    • Saves the cursor after each run for resumption
    • Stops when the target count is reached or no more users are available
  5. Data Collection:

    • Collects all users in memory
    • Limits results to the exact requested count
    • Pushes all users to the dataset at once for efficiency (see the sketch after this list)
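
The flow above can be condensed into a short sketch. This is an illustrative outline only, assuming twikit's async client API (Client, get_user_by_screen_name, get_user_followers / get_user_following); the helper name fetch_users is made up for this example, and exact twikit signatures can differ between library versions:

import asyncio
import random
from urllib.parse import urlparse

from twikit import Client

BATCH_SIZE = 20  # Twitter's default page size

async def fetch_users(cookies: dict, profile_url: str, friendship_type: str,
                      count: int, min_delay: int, max_delay: int, cursor=None):
    client = Client("en-US")
    client.set_cookies(cookies)  # authenticate with the exported auth_token/ct0 cookies

    # User resolution: take the username from the profile URL and resolve it to a user ID.
    username = urlparse(profile_url).path.strip("/").split("/")[0]
    user = await client.get_user_by_screen_name(username)

    fetch = client.get_user_followers if friendship_type == "followers" else client.get_user_following
    users = []
    while len(users) < count:
        batch = await fetch(user.id, count=BATCH_SIZE, cursor=cursor)
        new_users = list(batch)
        if not new_users:
            break  # no more users available
        users.extend(new_users)
        cursor = batch.next_cursor  # saving this allows a later run to resume
        if len(users) < count:
            # Rate limiting: random pause between batch requests, never inside a batch.
            await asyncio.sleep(random.uniform(min_delay, max_delay))

    return users[:count], cursor

In the Actor itself, the returned list is then pushed to the dataset in a single call and the final cursor is kept for resumption, as described under Data Collection.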

Example Usage

Basic Usage

{
  "cookies": [
    {
      "name": "auth_token",
      "value": "your_auth_token"
    },
    {
      "name": "ct0",
      "value": "your_csrf_token"
    }
  ],
  "profileUrl": "https://twitter.com/elonmusk",
  "friendshipType": "followers",
  "count": 100,
  "minDelay": 3,
  "maxDelay": 15
}

Resume from Cursor

{
  "cookies": [...],
  "profileUrl": "https://twitter.com/elonmusk",
  "friendshipType": "followers",
  "count": 200,
  "cursor": 1814161939997035050
}

Performance Considerations

  • Batch Size: Fixed at 20 users per request (Twitter's default)
  • Delays: Applied only between batches, not between individual users
  • Memory: All users are collected in memory before pushing to dataset
  • Rate Limiting: Random delays between minDelay and maxDelay help avoid rate limits

Error Handling

  • 404 Errors: Usually indicate invalid user ID or authentication issues
  • Rate Limits: The Actor automatically retries with delays
  • Invalid Cookies: The Actor validates cookies and provides clear error messages
  • Network Errors: Retries with exponential backoff (see the sketch below)
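
As a rough illustration of the retry-with-backoff behavior, a helper like the one below could wrap each batch request. It is a sketch only; the attempt count and base delay are assumptions, not the Actor's actual settings:

import asyncio
import random

async def with_retries(make_request, max_attempts: int = 5, base_delay: float = 2.0):
    """Retry an async call with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await make_request()
        except Exception as exc:  # in practice, catch rate-limit and network errors specifically
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            await asyncio.sleep(delay)

# Example: retry a single follower batch request from the earlier sketch
# batch = await with_retries(lambda: fetch(user.id, count=BATCH_SIZE, cursor=cursor))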

Limitations

  • Requires valid Twitter authentication cookies
  • Subject to Twitter's rate limits and Terms of Service
  • Maximum 10,000 users per run (configurable via count parameter)
  • Batch size is fixed at 20 (Twitter's default page size)

Best Practices

  1. Cookie Management:

    • Extract cookies from a logged-in browser session
    • Keep cookies secure and rotate them regularly
    • Use a browser extension such as Cookie-Editor (see the steps above) to export cookies
  2. Rate Limiting:

    • Start with conservative delays (minDelay: 5, maxDelay: 20)
    • Monitor for rate limit errors and adjust accordingly
    • Don't scrape too aggressively to avoid account restrictions
  3. Resume Strategy:

    • Save cursor values from successful runs
    • Use cursors to resume large scraping jobs (a persistence sketch follows this list)
    • Test with small counts first before large-scale scraping
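
One way to persist the cursor between runs is the Actor's key-value store. The sketch below assumes the Apify Python SDK (Actor.get_input, Actor.get_value, Actor.set_value, Actor.push_data) and reuses the hypothetical fetch_users helper from the How It Works section; the key name RESUME_CURSOR is an arbitrary choice:

from apify import Actor

async def main():
    async with Actor:
        actor_input = await Actor.get_input() or {}
        # Prefer an explicit cursor from the input, otherwise fall back to the last saved one.
        cursor = actor_input.get("cursor") or await Actor.get_value("RESUME_CURSOR")

        users, next_cursor = await fetch_users(
            cookies={c["name"]: c["value"] for c in actor_input["cookies"]},
            profile_url=actor_input["profileUrl"],
            friendship_type=actor_input.get("friendshipType", "followers"),
            count=actor_input.get("count", 100),
            min_delay=actor_input.get("minDelay", 3),
            max_delay=actor_input.get("maxDelay", 15),
            cursor=cursor,
        )

        # Trimmed example records; the real Actor outputs the full profile shown under Output.
        records = [
            {"id": u.id, "screen_name": u.screen_name, "name": u.name,
             "followers_count": u.followers_count}
            for u in users
        ]
        await Actor.push_data(records)                        # push all users at once
        await Actor.set_value("RESUME_CURSOR", next_cursor)   # save for the next run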

Rate Limit Errors

  • Increase minDelay and maxDelay values
  • Reduce the count parameter
  • Wait before retrying

License

This project is provided as-is for educational and research purposes. Users are responsible for complying with Twitter's Terms of Service and applicable laws.

Support