Twitter Followers Scraper
Pricing
$19.00/month + usage
This Actor allows you to extract all followers of any public Twitter account quickly, reliably, and without needing API keys. Perfect for lead generation, competitor analysis, audience research, marketing, automation tools, or building detailed datasets of real Twitter users.
Developer: Mahmoud Alhamdo
A professional Apify Actor for scraping Twitter followers and following lists using the twikit library. This Actor efficiently fetches user data in batches, respects rate limits, and provides pagination support for large-scale data collection.
Features
- Batch Processing: Fetches users in batches of 20 (Twitter's default) for optimal performance
- Pagination Support: Uses cursor-based pagination to continue scraping from where you left off
- Rate Limiting: Configurable delays between batch requests to respect Twitter's rate limits
- Comprehensive Data: Extracts complete user profiles matching Twitter API response format
- Resume Capability: Save and resume scraping using cursor values
- Efficient Output: Collects all users in a single array and pushes them at once
Input Parameters
Required Fields
- `cookies` (array, required): Twitter authentication cookies as a JSON array. Must include valid session cookies from a logged-in Twitter account.

```json
[
  { "name": "auth_token", "value": "your_auth_token_here" },
  { "name": "ct0", "value": "your_csrf_token_here" }
]
```
How to Obtain Your Twitter Cookies
To run this Actor, you need to provide your own Twitter authentication cookies. Follow these steps to safely export your cookies:
1. Install the Cookie-Editor Chrome Extension:
   - Click the link to open the Chrome Web Store, then click "Add to Chrome" to install it.
2. Log in to your Twitter account:
   - Open twitter.com or x.com and log in with the account you want to use for scraping.
3. Open Cookie-Editor on the Twitter tab:
   - Click the Cookie-Editor icon in your browser toolbar while on the Twitter tab.
4. Export Cookies:
   - In the extension, click "Export" to copy all your Twitter cookies in JSON format.
   - Ensure the exported data contains at least your `auth_token` and `ct0` cookies.
5. Paste Your Cookies into the Input:
   - Go to the Actor input form and paste the entire cookie array into the `cookies` input field.

Tip: For best results, export cookies while logged in and active on Twitter.
Security Note: Never share your cookies publicly; they grant access to your Twitter account. Use a dedicated or disposable account for scraping if possible.
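If you prefer to trim the export before pasting it, the following sketch (plain Python, independent of the Actor itself) keeps only the `auth_token` and `ct0` entries. The helper name and the idea of filtering down to those two cookies are illustrative assumptions; the Actor also accepts the full export.

```python
import json

# Illustrative helper: trim a full Cookie-Editor export down to the
# name/value pairs the Actor's `cookies` input needs at minimum.
def to_actor_cookies(cookie_editor_export: str) -> list[dict]:
    exported = json.loads(cookie_editor_export)
    wanted = {"auth_token", "ct0"}
    cookies = [
        {"name": c["name"], "value": c["value"]}
        for c in exported
        if c.get("name") in wanted
    ]
    missing = wanted - {c["name"] for c in cookies}
    if missing:
        raise ValueError(f"Export is missing required cookies: {missing}")
    return cookies

# Example: print the trimmed array and paste it into the `cookies` input.
# print(json.dumps(to_actor_cookies(raw_export), indent=2))
```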
- `profileUrl` (string, required): Twitter profile URL to scrape followers/following from.
  - Format: `https://twitter.com/username` or `https://x.com/username`
  - Example: `https://twitter.com/elonmusk`
Optional Fields
- `friendshipType` (string, default: "followers"): Type of relationships to scrape.
  - Options: "followers" or "following"
  - "followers": Scrapes users who follow the target account
  - "following": Scrapes users that the target account follows
- `count` (integer, default: 100, min: 1, max: 10000): Maximum number of users to scrape.
- `minDelay` (integer, default: 3, min: 1, max: 60): Minimum delay in seconds between batch requests.
- `maxDelay` (integer, default: 15, min: 1, max: 300): Maximum delay in seconds between batch requests.
  - A random delay between `minDelay` and `maxDelay` is applied between each batch request.
- `cursor` (integer, default: 0): Cursor for pagination. Use this to continue scraping from where you left off.
  - Use `0` or leave it empty to start from the beginning
  - Use the saved cursor value from a previous run to resume
Output
The Actor outputs an array of user objects, each containing comprehensive profile information:
{"id": "1151281581769859073","rest_id": "1151281581769859073","screen_name": "username","name": "Display Name","description": "User bio","created_at": "Wed Jul 17 00:04:53 +0000 2019","location": "Location","url": "https://example.com","profile_image_url_https": "https://pbs.twimg.com/...","profile_banner_url": "https://pbs.twimg.com/...","followers_count": 3596,"friends_count": 2894,"statuses_count": 12840,"favourites_count": 36576,"verified": false,"is_blue_verified": true,"protected": false,"can_dm": true,"can_media_tag": false,"entities": {"description": {"urls": []},"url": {"urls": [...]}},"__typename": "User"}
How It Works
1. Authentication: The Actor uses the provided cookies to authenticate with Twitter via the `twikit` library.
2. User Resolution: Extracts the username from the profile URL and resolves it to a user ID.
3. Batch Fetching:
   - Requests users in batches of 20 (Twitter's default batch size)
   - Processes all users in a batch without delays
   - Applies a random delay between batch requests (between `minDelay` and `maxDelay`)
4. Pagination:
   - Uses cursor-based pagination to fetch subsequent batches
   - Saves the cursor after each run for resumption
   - Stops when the target count is reached or no more users are available
5. Data Collection:
   - Collects all users in memory
   - Limits results to the exact requested count
   - Pushes all users to the dataset at once for efficiency
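For orientation, the batching-and-cursor flow above could be expressed roughly as the sketch below. It assumes twikit's async `Client` with `set_cookies`, `get_user_by_screen_name`, and `get_user_followers`, and that each result page exposes a `next()` coroutine; method names and pagination details vary between twikit versions, so treat this as an illustration of the pattern rather than the Actor's actual source.

```python
import asyncio
import random

from twikit import Client  # import path assumed; check your twikit version

async def fetch_followers(cookies: dict, screen_name: str, count: int,
                          min_delay: int, max_delay: int) -> list[dict]:
    client = Client("en-US")
    client.set_cookies(cookies)  # authenticate with the exported session cookies

    user = await client.get_user_by_screen_name(screen_name)  # resolve handle -> user
    collected: list[dict] = []

    page = await client.get_user_followers(user.id, count=20)  # first batch
    while page and len(collected) < count:
        for follower in page:
            collected.append({"screen_name": follower.screen_name,
                              "name": follower.name})
        if len(collected) >= count:
            break
        # Random pause between batches, mirroring minDelay/maxDelay.
        await asyncio.sleep(random.uniform(min_delay, max_delay))
        page = await page.next()  # cursor-based pagination (assumed Result API)

    return collected[:count]
```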
Example Usage
Basic Usage
{"cookies": [{"name": "auth_token","value": "your_auth_token"},{"name": "ct0","value": "your_csrf_token"}],"profileUrl": "https://twitter.com/elonmusk","friendshipType": "followers","count": 100,"minDelay": 3,"maxDelay": 15}
Resume from Cursor
{"cookies": [...],"profileUrl": "https://twitter.com/elonmusk","friendshipType": "followers","count": 200,"cursor": 1814161939997035050}
Performance Considerations
- Batch Size: Fixed at 50 users per request (Twitter's default)
- Delays: Applied only between batches, not between individual users
- Memory: All users are collected in memory before pushing to dataset
- Rate Limiting: Random delays between `minDelay` and `maxDelay` help avoid rate limits
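A back-of-the-envelope way to budget run time from these numbers (50-user batches, delays applied only between batches). The helper below is just an estimate of delay overhead and ignores request latency:

```python
def estimate_delay_overhead(count: int, min_delay: int, max_delay: int,
                            batch_size: int = 50) -> float:
    """Rough seconds spent sleeping between batches (ignores request time)."""
    batches = -(-count // batch_size)        # ceiling division
    pauses = max(batches - 1, 0)             # delays happen between batches only
    avg_delay = (min_delay + max_delay) / 2  # expected value of a uniform delay
    return pauses * avg_delay

# Example: 1000 users with default delays (3-15 s) -> 19 pauses * 9 s = 171 s.
print(estimate_delay_overhead(1000, 3, 15))
```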
Error Handling
- 404 Errors: Usually indicate invalid user ID or authentication issues
- Rate Limits: The Actor automatically retries with delays
- Invalid Cookies: The Actor validates cookies and provides clear error messages
- Network Errors: Retries with exponential backoff
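The retry behavior described above follows the common exponential-backoff-with-jitter pattern; a generic sketch of that pattern (not the Actor's internal code) might look like:

```python
import asyncio
import random

async def with_retries(fetch, max_attempts: int = 5, base_delay: float = 2.0):
    """Retry an async call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return await fetch()
        except Exception:  # in practice, catch only rate-limit / network errors
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
            await asyncio.sleep(delay)
```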
Limitations
- Requires valid Twitter authentication cookies
- Subject to Twitter's rate limits and Terms of Service
- Maximum 10,000 users per run (configurable via the `count` parameter)
- Batch size is fixed at 50 (Twitter API limitation)
Best Practices
1. Cookie Management:
   - Extract cookies from a logged-in browser session
   - Keep cookies secure and rotate them regularly
   - Use browser extensions like Cookie-Editor or EditThisCookie to export cookies
2. Rate Limiting:
   - Start with conservative delays (minDelay: 5, maxDelay: 20)
   - Monitor for rate limit errors and adjust accordingly
   - Don't scrape too aggressively, to avoid account restrictions
3. Resume Strategy:
   - Save cursor values from successful runs
   - Use cursors to resume large scraping jobs
   - Test with small counts before large-scale scraping
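One way to apply the resume strategy from a wrapper script is to keep the cursor in a named key-value store between runs via the Apify Python client. The store name, key name, and Actor ID below are placeholders, and where the Actor reports its final cursor is an assumption noted in the comments.

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Arbitrary store/key names used for illustration.
store = client.key_value_stores().get_or_create(name="follower-scrape-state")
kv = client.key_value_store(store["id"])

# Read the cursor saved by a previous run (0 means start from the beginning).
record = kv.get_record("last-cursor")
cursor = record["value"] if record else 0

run_input = {
    "cookies": [...],  # paste your exported cookies here
    "profileUrl": "https://twitter.com/elonmusk",
    "friendshipType": "followers",
    "count": 500,
    "cursor": cursor,
}
run = client.actor("username/twitter-followers-scraper").call(run_input=run_input)

# Persist the cursor for the next run. Where the Actor reports its final
# cursor (dataset, key-value store, or log) is an assumption to verify.
# kv.set_record("last-cursor", new_cursor)
```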
Rate Limit Errors
- Increase the `minDelay` and `maxDelay` values
- Reduce the `count` parameter
- Wait before retrying
License
This project is provided as-is for educational and research purposes. Users are responsible for complying with Twitter's Terms of Service and applicable laws.