All-in-One X/Twitter Scraper
Pricing: from $0.20 / 1,000 tweet results

X/Twitter scraper — 10 modes: tweets, profiles, followers, comments, timelines, lists, search & more. From $0.09/1K — up to 90% cheaper than alternatives. Premium residential proxy (~95% success rate). apidojo-compatible output. MCP-ready for AI agents.
Developer: Japi Cricket
What does All-in-One X/Twitter Scraper do?
Scrape tweets, profiles, followers, timelines, comments, lists & more — no proxy needed, 128MB memory. 10 modes, pay-per-result from $0.09/1K. Works with AI agents (Claude, GPT, Cursor) via MCP.
Unlike most X/Twitter scrapers that charge $0.40+/1K results, this scraper delivers the same data at $0.24/1K or less — 40-50% cheaper. Three modes work without any login cookies (posts, profiles, data extractor).
Why choose this over 5 separate X scrapers?
- Cheapest on Apify — $0.24/1K tweets vs $0.40/1K (apidojo) = 40% savings
- 10 modes in one actor — tweets, profiles, search, timelines, followers, comments, lists, user search, all-in-one, data extractor
- No proxy setup needed — residential proxy included when required
- 128MB memory — lowest possible compute unit cost
- AI-ready — works with Claude, GPT, and Cursor via MCP protocol
Getting Started
- Click "Try for free" at the top of this page
- Choose a scraping mode (Post, Profile, Tweet Search, Timeline, Follower, Comment, User Search, List, All-in-One, or Data Extractor)
- Paste tweet URLs, usernames, or enter a search query
- Click Start — results appear in the Dataset tab within seconds
- Download as JSON, CSV, or Excel — or connect via API, n8n, Make, or Zapier
No proxy needed. 3 modes work without login cookies.
Easiest Way to Start: Paste a URL
Just paste any X/Twitter URL into the "Start URLs" field and hit Start. The scraper auto-detects the type:
| URL Pattern | Auto-Detected Mode |
|---|---|
| `x.com/user/status/1234567890` | Post Scraper (tweet by URL) |
| `x.com/username` | Profile Scraper |
| `x.com/i/lists/1234567890` | List Scraper |
For Tweet Search and User Search, enter keywords in the "Search Queries" field.
10 Scraping Modes
| Mode | Description | Auth Required | Best For |
|---|---|---|---|
| x-post-scraper | Scrape specific tweets by URL | None | Tweet archiving, engagement tracking |
| x-profile-scraper | Full user profiles with stats | None | Lead enrichment, influencer research |
| x-data-extractor | Lightweight tweet URL scraper | None | Quick tweet data extraction |
| x-tweet-scraper | Search tweets by keywords/hashtags | Login cookies | Social listening, trend monitoring |
| x-timeline-scraper | Scrape a user's tweet timeline | Login cookies | Content analysis, user monitoring |
| x-follower-scraper | Get followers/following lists | Login cookies | Audience analysis, lead discovery |
| x-comment-scraper | Get replies/conversation threads | Login cookies | Sentiment analysis, community insights |
| x-user-search-scraper | Search for users by keywords | Login cookies | Prospecting, influencer discovery |
| x-list-scraper | Scrape tweets from X/Twitter lists | Login cookies | Curated feed monitoring |
| x-scraper | All-in-one (combines all above) | Depends | Mixed workflows, automation |
Standard vs Authenticated Mode
3 modes work without any login cookies. 7 modes require your X/Twitter cookies for full functionality.
What Works Without Cookies (Standard Mode)
No login, no risk. Just paste and scrape:
- Post Scraper: Full tweet data with engagement metrics, media, author info, quoted tweets
- Profile Scraper: Full user profile with followers, bio, verification, account stats
- Data Extractor: Same as Post Scraper (lightweight billing event)
What Requires Cookies (Authenticated Mode)
Provide your auth_token and ct0 cookies to unlock search and social graph features:
| Mode | What It Unlocks |
|---|---|
| Tweet Search | Search tweets by keywords, hashtags, and advanced operators |
| Timeline | Scrape a user's full tweet history |
| Followers | Get follower/following lists |
| Comments | Get replies and conversation threads |
| User Search | Search for users by keywords |
| List Scraper | Scrape tweets from curated lists |
| All-in-One | Combines all modes (auth needed for search/followers/comments) |
When to Use Which Mode
| Your Goal | Recommended Mode | Cookies Needed? |
|---|---|---|
| Save a specific tweet with engagement data | Post Scraper | No |
| Get a user's profile info and follower count | Profile Scraper | No |
| Monitor brand mentions or trending topics | Tweet Search | Yes |
| Analyze what someone has been tweeting | Timeline Scraper | Yes |
| Build a list of an account's followers | Follower Scraper | Yes |
| Analyze public reactions to a tweet | Comment Scraper | Yes |
| Find influencers or experts by keyword | User Search | Yes |
| Monitor a curated Twitter list | List Scraper | Yes |
Pricing — Pay Per Result, No Monthly Fee
| Mode | Event | Starter/1K | Scale/1K | Business/1K | vs Best Competitor |
|---|---|---|---|---|---|
| Tweet Search | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo ($0.40) |
| Post Scraper | post-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Profile Scraper | profile-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Timeline | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Comments | comment-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| User Search | search-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| List Scraper | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Data Extractor | extractor-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Followers | follower-result | $0.09 | $0.08 | $0.07 | 40-53% cheaper than kaitoeasyapi ($0.15) |
Apify Subscription Discounts
Higher Apify subscription plans get automatic discounts on all modes:
| Apify Plan | Discount Tier | Discount |
|---|---|---|
| Free / Starter | Standard | — |
| Scale | Bronze | 5% off |
| Business | Silver | 10% off |
| Enterprise | Gold | 15% off |
Cost examples:
- 1,000 tweets from search: $0.24
- 500 user profiles: $0.12
- 10,000 followers: $0.90
- 100 tweet URLs: $0.024
You only pay for results delivered. Platform compute costs are included.
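These examples are plain multiplication: billed results divided by 1,000, times the per-1K rate for your tier. A tiny budgeting helper (rates copied from the Starter column above; `estimate_cost` is my own helper, not part of the SDK):

```python
# Per-1K Starter-tier rates from the pricing table above.
STARTER_RATES = {
    "tweet-result": 0.24,
    "profile-result": 0.24,
    "follower-result": 0.09,
}

def estimate_cost(results: int, event: str, rates: dict = STARTER_RATES) -> float:
    """Cost in USD for a given number of billed results."""
    return round(results / 1000 * rates[event], 4)

print(estimate_cost(1000, "tweet-result"))      # 0.24
print(estimate_cost(500, "profile-result"))     # 0.12
print(estimate_cost(10000, "follower-result"))  # 0.9
```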
Why This X/Twitter Scraper?
- Cheapest on Apify — $0.24/1K tweets vs $0.40/1K (apidojo) = 40% savings
- 10 modes in one actor — tweets, profiles, search, timelines, followers, comments, lists, user search, all-in-one, data extractor — one integration to maintain
- No proxy setup required — residential proxy included when needed
- HTTP-only architecture — Impit with Chrome TLS fingerprint impersonation (no bloated browser)
- 128 MB memory — runs on minimal resources, keeping your compute costs low
- Cookie rotation — support multiple login cookies for large-scale runs
- Resume capability — pick up where failed runs left off via `resumeFromDataset`
- Human-like behavior — randomized delays with Box-Muller distribution jitter
- MCP-compatible — works with AI agents (Claude, GPT, Cursor) out of the box
How We Compare
| Feature | This Scraper | API Dojo | KaitoEasyAPI | Automation Lab | XT Data |
|---|---|---|---|---|---|
| Tweets / 1K | $0.24 | $0.40 | $0.25 | $0.30 | $0.50 |
| Profiles / 1K | $0.24 | $0.40 | $0.15 | $0.30 | $0.50 |
| Followers / 1K | $0.09 | — | $0.10 | — | — |
| Comments / 1K | $0.24 | — | $0.25 | — | — |
| Cookies required | 7 of 10 modes | No | No | Optional | No |
| Modes in one actor | 10 | 1 (+ 4 companion) | 1 (4 separate) | 1 | 1 |
| Cookie rotation | Yes | No | No | No | No |
| Resume capability | Yes | No | No | No | No |
| No proxy needed | Yes | Yes | Yes | Yes | Yes |
| Memory | 128 MB | Unknown | Unknown | Unknown | Unknown |
| MCP integration | Yes | No | No | No | No |
Key advantages:
- One actor, 10 modes — competitors split into 4-15 separate actors, each requiring its own integration. We give you everything in one.
- Lowest price per tweet — $0.24/1K vs $0.40/1K (apidojo, market leader with 40K users)
- Cheapest followers — $0.09/1K vs $0.10-$0.15/1K elsewhere
- Lightweight — 128 MB HTTP-only means lower compute costs vs browser-based scrapers
How to Get Login Cookies
7 of 10 modes require X/Twitter login cookies. Here's how to get them:
- Open x.com in your browser and log in
- Open Developer Tools (F12 or Cmd+Opt+I)
- Go to Application > Cookies > x.com
- Copy the values of `auth_token` and `ct0`
- Paste as: `auth_token=YOUR_TOKEN; ct0=YOUR_CT0`
Modes that do NOT require cookies: x-post-scraper, x-profile-scraper, x-data-extractor.
Cookie rotation: For large runs, provide multiple cookies separated by `|||`:

```
auth_token=aaa; ct0=bbb|||auth_token=ccc; ct0=ddd
```
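If you manage several accounts, you can assemble this string programmatically. A small sketch (the `build_login_cookies` helper is mine; only the final string is what the actor expects):

```python
def build_login_cookies(sessions: list[dict]) -> str:
    """Join multiple {auth_token, ct0} pairs into the |||-separated format."""
    for s in sessions:
        if not (s.get("auth_token") and s.get("ct0")):
            raise ValueError("each session needs both auth_token and ct0")
    return "|||".join(f"auth_token={s['auth_token']}; ct0={s['ct0']}" for s in sessions)

cookies = build_login_cookies([
    {"auth_token": "aaa", "ct0": "bbb"},
    {"auth_token": "ccc", "ct0": "ddd"},
])
print(cookies)  # auth_token=aaa; ct0=bbb|||auth_token=ccc; ct0=ddd
```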
Residential Proxy (Optional)
By default, the scraper uses standard datacenter IPs — which work fine for most use cases. If you're hitting rate limits or seeing cookies expire faster than expected on authenticated modes, enable the built-in residential proxy.
How to Enable
Set `proxyTier` to `"residential"` in your input:

```json
{
  "scrapeMode": "x-tweet-scraper",
  "searchQueries": ["web scraping"],
  "loginCookies": "auth_token=xxx; ct0=yyy",
  "proxyTier": "residential"
}
```
When to Use Residential Proxy
- Cookies expiring fast — residential IPs look more like real users, reducing session invalidation
- Rate limits on authenticated modes — tweet search, timeline, followers, comments
- Large-scale runs — scraping thousands of results per session with login cookies
Residential proxy is included; no separate proxy subscription is required.
Alternative: Bring Your Own Proxy
If you already have a residential proxy subscription, set `proxyTier` to `"custom"` and paste your proxy URL into the Proxy Configuration field.
MCP Integration for AI Agents
This scraper works with AI agents via the Model Context Protocol (MCP). Connect it to Claude Desktop, Cursor, GPT, or any MCP-compatible client.
Setup:
- Go to mcp.apify.com
- Add "All-in-One X/Twitter Scraper" to your MCP server
- Ask your AI: "Find the latest tweets about AI from NASA"
Example prompts for your AI agent:
- "Scrape the profile of @elonmusk on X"
- "Search X for tweets about web scraping from the last week"
- "Get the followers of @apify on Twitter"
- "Find all replies to this tweet: https://x.com/..."
Works with Claude Desktop, Cursor, GPT via MCP, and any other MCP-compatible AI client.
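For MCP clients configured via a JSON file (Claude Desktop, for example), registration typically looks like the snippet below. The package name `@apify/actors-mcp-server` and the exact config shape are assumptions on my part; check Apify's MCP documentation for your client:

```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@apify/actors-mcp-server"],
      "env": { "APIFY_TOKEN": "YOUR_APIFY_TOKEN" }
    }
  }
}
```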
Integrations
n8n
- Add the Apify node in your n8n workflow
- Select "All-in-One X/Twitter Scraper" as the actor
- Configure the mode and input parameters
- Connect the output to your CRM, Google Sheets, or database
Make.com (Integromat)
- Add the Apify module to your scenario
- Select "Run Actor" and choose this scraper
- Map the JSON output fields to your downstream modules
- Use for automated tweet monitoring, lead enrichment, or CRM syncing
Zapier
- Create a new Zap with Apify as the trigger or action
- Select "Run Actor" and configure with this scraper's actor ID
- Map output fields to Google Sheets, HubSpot, Salesforce, or Slack
- Trigger on schedule or from a webhook
REST API & SDKs
Use the Apify API, JavaScript SDK, or Python SDK for programmatic access. See the Python examples in each mode section below.
Mode 1: Post Scraper (x-post-scraper)
Scrape specific tweets by URL with full engagement data, media, author info, and quoted tweets. No login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-post-scraper"` |
| `tweetURLs` | string[] | Yes | Tweet URLs or tweet IDs |
| `maxResults` | integer | No | Max results (default: 100) |
Input Example
{"scrapeMode": "x-post-scraper","tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825","https://x.com/NASA/status/1630332507265589248"]}
Output Fields
| Field | Type | Description | Example |
|---|---|---|---|
| `type` | string | Always "tweet" | "tweet" |
| `tweetId` | string | Unique tweet ID | "1728108619189874825" |
| `url` | string | Full X.com URL | "https://x.com/elonmusk/status/..." |
| `twitterUrl` | string | Legacy twitter.com URL | "https://twitter.com/elonmusk/status/..." |
| `text` | string | Tweet content | "More than 10 per human on average" |
| `retweetCount` | integer | Retweet count | 8880 |
| `replyCount` | integer | Reply count | 5931 |
| `likeCount` | integer | Like count | 92939 |
| `quoteCount` | integer | Quote tweet count | 2862 |
| `bookmarkCount` | integer | Bookmark count | 605 |
| `viewCount` | integer | View count | 37422895 |
| `createdAt` | string | Tweet timestamp | "Fri Nov 24 17:49:36 +0000 2023" |
| `lang` | string | Language code (ISO 639-1) | "en" |
| `source` | string | Posting app/device | "Twitter for iPhone" |
| `isReply` | boolean | Is a reply | false |
| `isRetweet` | boolean | Is a retweet | false |
| `isQuote` | boolean | Is a quote tweet | true |
| `conversationId` | string | Thread root ID | "1728108619189874825" |
| `inReplyToTweetId` | string | Parent tweet ID (if reply) | null |
| `inReplyToUserId` | string | Parent user ID (if reply) | null |
| `inReplyToUserName` | string | Parent username (if reply) | null |
| `hashtags` | array | Hashtag strings | ["AI", "space"] |
| `urls` | array | URL objects | [{url, expandedUrl, displayUrl}] |
| `userMentions` | array | Mention objects | [{userName, name, twitterId}] |
| `media` | array | Media objects (photos, videos, GIFs) | [{type, url, videoUrl, width, height}] |
| `card` | object | Link preview card | {type, title, description, url} |
| `place` | object | Geolocation data | null |
| `author` | object | Author profile | {userName, name, twitterId, followers, ...} |
| `quotedTweet` | object | Quoted tweet (full object) | {tweetId, text, author, ...} |
Output Example
{"type": "tweet","tweetId": "1728108619189874825","url": "https://x.com/elonmusk/status/1728108619189874825","twitterUrl": "https://twitter.com/elonmusk/status/1728108619189874825","text": "More than 10 per human on average","retweetCount": 8880,"replyCount": 5931,"likeCount": 92939,"quoteCount": 2862,"bookmarkCount": 605,"viewCount": 37422895,"createdAt": "Fri Nov 24 17:49:36 +0000 2023","lang": "en","source": "Twitter for iPhone","isReply": false,"isRetweet": false,"isQuote": true,"conversationId": "1728108619189874825","hashtags": [],"urls": [],"userMentions": [],"media": [],"author": {"type": "user","userName": "elonmusk","name": "Elon Musk","twitterId": "44196397","isBlueVerified": true,"profilePicture": "https://pbs.twimg.com/profile_images/.../image_400x400.jpg","followers": 237889308,"following": 1308},"quotedTweet": {"type": "tweet","tweetId": "1728107610631729415","text": "The posts on X gets ~ 100 billion impressions every day.","likeCount": 3396,"viewCount": 38475358,"author": { "userName": "cb_doge", "name": "DogeDesigner" }}}
Use Cases
- Tweet archiving: Save important tweets with full engagement data before they're deleted
- Engagement tracking: Monitor how specific tweets perform over time
- Content research: Analyze viral tweets to understand what resonates
- Fact-checking: Capture tweet content with metadata for verification
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-post-scraper",
    "tweetURLs": [
        "https://x.com/elonmusk/status/1728108619189874825",
        "https://x.com/NASA/status/1630332507265589248",
    ],
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes, {item['viewCount']} views)")
```
Mode 2: Profile Scraper (x-profile-scraper)
Scrape full X/Twitter user profiles with follower counts, bio, verification status, and account metadata. No login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-profile-scraper"` |
| `profiles` | string[] | Yes | Usernames, @handles, or profile URLs |
| `maxResults` | integer | No | Max results (default: 100) |
Input Example
{"scrapeMode": "x-profile-scraper","profiles": ["elonmusk", "NASA", "OpenAI"]}
Output Fields
| Field | Type | Description | Example |
|---|---|---|---|
| `type` | string | Always "user" | "user" |
| `twitterId` | string | User REST ID | "11348282" |
| `userName` | string | @handle | "NASA" |
| `name` | string | Display name | "NASA" |
| `url` | string | X.com URL | "https://x.com/NASA" |
| `twitterUrl` | string | Legacy twitter.com URL | "https://twitter.com/NASA" |
| `description` | string | Bio text | "The Moon is the mission..." |
| `location` | string | Self-reported location | "Moonbound" |
| `website` | string | Profile website URL | "https://t.co/9NkQJKAVks" |
| `isVerified` | boolean | Legacy verified (pre-2023) | false |
| `isBlueVerified` | boolean | X Premium subscriber | true |
| `verifiedType` | string | Verification type | "Government", "Business", or null |
| `profilePicture` | string | Avatar URL (400x400) | "https://pbs.twimg.com/..." |
| `coverPicture` | string | Banner image URL | "https://pbs.twimg.com/..." |
| `followers` | integer | Follower count | 90633649 |
| `following` | integer | Following count | 117 |
| `statusesCount` | integer | Total tweets posted | 73634 |
| `favouritesCount` | integer | Total likes given | 16673 |
| `listedCount` | integer | Lists the user is on | 96785 |
| `mediaCount` | integer | Total media posted | 27827 |
| `createdAt` | string | Account creation date | "Wed Dec 19 20:20:32 +0000 2007" |
| `isProtected` | boolean | Private account | false |
| `canDm` | boolean | DMs open | false |
| `professional` | object | Professional account info | {type: "Creator", category: []} |
| `pinnedTweetId` | string | Pinned tweet ID | "2040213736883892403" |
Output Example
{"type": "user","twitterId": "44196397","userName": "elonmusk","name": "Elon Musk","url": "https://x.com/elonmusk","twitterUrl": "https://twitter.com/elonmusk","description": "https://t.co/dDtDyVssfm","location": null,"website": null,"isVerified": false,"isBlueVerified": true,"verifiedType": null,"profilePicture": "https://pbs.twimg.com/profile_images/.../image_400x400.jpg","coverPicture": "https://pbs.twimg.com/profile_banners/44196397/...","followers": 237890854,"following": 1308,"statusesCount": 100639,"favouritesCount": 221162,"listedCount": 168145,"mediaCount": 4434,"createdAt": "Tue Jun 02 20:12:29 +0000 2009","isProtected": false,"canDm": false,"professional": { "type": "Creator", "category": [] },"pinnedTweetId": null}
Use Cases
- Lead enrichment: Enrich CRM contacts with X profile data (bio, followers, verification)
- Influencer research: Identify key accounts by follower count and verification status
- Competitor tracking: Monitor competitor accounts for follower growth
- Account verification: Check if accounts are verified, professional, or protected
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-profile-scraper",
    "profiles": ["elonmusk", "NASA", "OpenAI"],
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['userName']} — {item['followers']} followers — {item['description'][:60]}")
```
Mode 3: Tweet Search (x-tweet-scraper)
Search tweets by keywords, hashtags, and advanced operators with the Query Wizard. Supports date ranges, engagement filters, media filters, and geo-targeting.
Login cookies required. Provide `auth_token` and `ct0` cookies from x.com.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-tweet-scraper"` |
| `searchQueries` | string[] | Yes | Search queries with advanced syntax |
| `maxResults` | integer | No | Max results per query (default: 100) |
| `sort` | string | No | "Latest", "Top", or "Latest + Top" |
| `loginCookies` | string | Yes | `"auth_token=xxx; ct0=yyy"` |
| `languageFilter` | string | No | ISO 639-1 language code (e.g., "en") |
| `start` | string | No | Start date ("YYYY-MM-DD") |
| `end` | string | No | End date ("YYYY-MM-DD") |
| `minimumFavorites` | integer | No | Min likes filter |
| `author` | string | No | From user (Query Wizard) |
Input Example
{"scrapeMode": "x-tweet-scraper","searchQueries": ["web scraping", "from:NASA #space"],"maxResults": 100,"sort": "Latest","languageFilter": "en","start": "2026-01-01","minimumFavorites": 10,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 29 tweet fields as Mode 1 (Post Scraper). Plus optional searchTerm field when includeSearchTerms is enabled.
Output Example
Same format as Mode 1. Each tweet includes full engagement data, author profile, media, entities, and metadata.
Use Cases
- Social listening: Monitor mentions of your brand, product, or industry
- Trend monitoring: Track emerging topics and hashtags in real-time
- Competitive analysis: Monitor what competitors' audiences are saying
- Market research: Analyze public sentiment around events, launches, or announcements
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-tweet-scraper",
    "searchQueries": ["web scraping"],
    "maxResults": 100,
    "sort": "Latest",
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes)")
```
Mode 4: Timeline Scraper (x-timeline-scraper)
Scrape a user's tweet timeline including original tweets, retweets, and optionally replies.
Login cookies required for best results. Guest token works for basic access.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-timeline-scraper"` |
| `profiles` | string[] | Yes | Usernames or profile URLs |
| `maxResults` | integer | No | Max tweets per profile (default: 100) |
| `includeReplies` | boolean | No | Include replies (default: false) |
| `loginCookies` | string | Recommended | `"auth_token=xxx; ct0=yyy"` |
Input Example
{"scrapeMode": "x-timeline-scraper","profiles": ["NASA"],"maxResults": 50,"includeReplies": false,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 29 tweet fields as Mode 1 (Post Scraper).
Use Cases
- Content analysis: Analyze a user's posting patterns, topics, and engagement
- User monitoring: Track what key accounts are tweeting about
- Research: Collect a user's tweet history for academic or business research
- Archiving: Save a user's tweet timeline for record-keeping
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-timeline-scraper",
    "profiles": ["NASA"],
    "maxResults": 100,
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['createdAt']}: {item['text'][:80]} ({item['likeCount']} likes)")
```
Mode 5: Follower Scraper (x-follower-scraper)
Get followers or following lists for any X/Twitter account.
Login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-follower-scraper"` |
| `profiles` | string[] | Yes | Usernames or profile URLs |
| `maxResults` | integer | No | Max followers per profile (default: 100) |
| `followerMode` | string | No | "followers" or "following" (default: "followers") |
| `loginCookies` | string | Yes | `"auth_token=xxx; ct0=yyy"` |
Input Example
{"scrapeMode": "x-follower-scraper","profiles": ["apify"],"maxResults": 500,"followerMode": "followers","loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 25 user profile fields as Mode 2 (Profile Scraper), plus:
| Field | Type | Description | Example |
|---|---|---|---|
| `followerOf` | string | Source profile (followers mode) | "apify" |
| `followingOf` | string | Source profile (following mode) | "apify" |
Use Cases
- Audience analysis: Understand who follows a competitor or influencer
- Lead discovery: Find potential customers from a competitor's follower list
- Influencer marketing: Identify relevant followers for partnership outreach
- Network mapping: Map connections between accounts
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-follower-scraper",
    "profiles": ["apify"],
    "maxResults": 500,
    "followerMode": "followers",
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    bio = item['description'][:50] if item.get('description') else ''
    print(f"@{item['userName']} — {item['followers']} followers — {bio}")
```
Mode 6: Comment Scraper (x-comment-scraper)
Get replies and conversation threads for specific tweets.
Login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-comment-scraper"` |
| `tweetURLs` | string[] | Yes | Tweet URLs to get replies for |
| `maxResults` | integer | No | Max replies per tweet (default: 100) |
| `loginCookies` | string | Yes | `"auth_token=xxx; ct0=yyy"` |
Input Example
{"scrapeMode": "x-comment-scraper","tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825"],"maxResults": 50,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 29 tweet fields as Mode 1, but with type: "comment" instead of type: "tweet".
Use Cases
- Sentiment analysis: Analyze public reactions to announcements or events
- Community insights: Understand what topics drive conversation
- Customer feedback: Monitor replies to brand tweets for support issues
- Trend analysis: Identify emerging opinions in reply threads
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-comment-scraper",
    "tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825"],
    "maxResults": 50,
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes)")
```
Mode 7: User Search (x-user-search-scraper)
Search for X/Twitter users by keywords, job titles, or interests.
Login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-user-search-scraper"` |
| `searchQueries` | string[] | Yes | Search terms |
| `maxResults` | integer | No | Max users per query (default: 100) |
| `loginCookies` | string | Yes | `"auth_token=xxx; ct0=yyy"` |
Input Example
{"scrapeMode": "x-user-search-scraper","searchQueries": ["AI researcher", "data scientist"],"maxResults": 50,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 25 user profile fields as Mode 2 (Profile Scraper).
Use Cases
- Prospecting: Find potential customers or partners by keyword
- Influencer discovery: Identify key accounts in your industry
- Recruiting: Find candidates with specific expertise on X
- Market research: Map thought leaders in a given space
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-user-search-scraper",
    "searchQueries": ["AI researcher"],
    "maxResults": 50,
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['userName']} — {item['name']} — {item['followers']} followers")
```
Mode 8: List Scraper (x-list-scraper)
Scrape tweets from X/Twitter lists (curated collections of accounts).
Login cookies required.
Input Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `scrapeMode` | string | Yes | `"x-list-scraper"` |
| `listURLs` | string[] | Yes | List URLs or list IDs |
| `maxResults` | integer | No | Max tweets per list (default: 100) |
| `loginCookies` | string | Yes | `"auth_token=xxx; ct0=yyy"` |
Input Example
{"scrapeMode": "x-list-scraper","listURLs": ["https://x.com/i/lists/1234567890"],"maxResults": 100,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Same 29 tweet fields as Mode 1 (Post Scraper).
Use Cases
- Curated feed monitoring: Track tweets from industry-specific lists
- News aggregation: Collect tweets from journalist or media lists
- Research: Monitor topic-specific lists for academic or market research
- Competitive intelligence: Track curated competitor lists
How to Run
Python:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-list-scraper",
    "listURLs": ["https://x.com/i/lists/1234567890"],
    "maxResults": 100,
    "loginCookies": "auth_token=xxx; ct0=yyy",
})

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]}")
```
Mode 9: All-in-One (x-scraper)
Combines all modes in one run. Auto-detects URLs (tweets, profiles, lists) and processes search queries. Ideal for automation workflows.
Input Example
{"scrapeMode": "x-scraper","startUrls": ["https://x.com/elonmusk/status/1728108619189874825","https://x.com/NASA"],"searchQueries": ["from:apify"],"maxResults": 20,"loginCookies": "auth_token=xxx; ct0=yyy"}
Output Fields
Mix of tweet and user objects depending on input. Tweet URLs produce tweet objects, profile URLs produce user objects, search queries produce tweet objects.
Mode 10: Data Extractor (x-data-extractor)
Lightweight tweet scraper for extracting data from tweet URLs. Same output as Mode 1 but billed as extractor-result. No login cookies required.
Input Example
{"scrapeMode": "x-data-extractor","tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825"]}
Output Fields
Same 29 tweet fields as Mode 1 (Post Scraper).
Query Wizard
Build complex search queries without learning advanced syntax. These fields auto-append to your search queries:
| Field | What it does | Equivalent syntax |
|---|---|---|
| From User | Tweets from specific user | from:username |
| In Reply To | Replies to specific user | to:username |
| Mentioning User | Mentions of user | @username |
| Start Date | Tweets after date | since:YYYY-MM-DD |
| End Date | Tweets before date | until:YYYY-MM-DD |
| Minimum Likes | Min like threshold | min_faves:N |
| Minimum Retweets | Min RT threshold | min_retweets:N |
| Minimum Replies | Min reply threshold | min_replies:N |
| Language Filter | Filter by language | lang:xx |
| Only Verified | Verified users only | filter:verified |
| Only Images | Tweets with images | filter:images |
| Only Videos | Tweets with videos | filter:videos |
| Only Quotes | Quote tweets only | filter:quote |
| Geotagged Near | Location filter | near:"city" |
| Within Radius | Radius from location | within:10km |
Advanced Search Syntax
Combine operators: `from:NASA #space since:2024-01-01 min_faves:100 lang:en -filter:retweets`
| Operator | Example | Description |
|---|---|---|
| `from:user` | `from:NASA` | Tweets from specific user |
| `to:user` | `to:support` | Replies to specific user |
| `@user` | `@OpenAI` | Mentions of specific user |
| `#hashtag` | `#AI` | Tweets with hashtag |
| `$cashtag` | `$TSLA` | Tweets with cashtag |
| `since:date` | `since:2024-01-01` | Tweets after date |
| `until:date` | `until:2024-12-31` | Tweets before date |
| `min_faves:N` | `min_faves:100` | Minimum likes |
| `min_retweets:N` | `min_retweets:50` | Minimum retweets |
| `lang:xx` | `lang:en` | Filter by language |
| `filter:media` | `filter:media` | Only tweets with media |
| `filter:images` | `filter:images` | Only tweets with images |
| `filter:videos` | `filter:videos` | Only tweets with videos |
| `filter:verified` | `filter:verified` | Only verified users |
| `filter:blue_verified` | `filter:blue_verified` | Only X Premium users |
| `-filter:retweets` | `-filter:retweets` | Exclude retweets |
| `-filter:replies` | `-filter:replies` | Exclude replies |
| `filter:quote` | `filter:quote` | Only quote tweets |
| `conversation_id:ID` | `conversation_id:123` | Thread replies |
| `near:"city"` | `near:"San Francisco"` | Geotagged tweets |
| `within:radius` | `within:10km` | Within radius |
Technical Details
- Stack: Node.js 20, Impit (Chrome TLS fingerprinting), Apify SDK 3
- Memory: 128MB (HTTP-only, no browser)
- Speed: 2-8 seconds per run, ~50MB peak memory
- No proxy required for any mode
- Cookie rotation for large-scale runs (multiple cookies via `|||`)
- Resume capability via `resumeFromDataset`
- Custom output via `customMapFunction`
Troubleshooting
Getting no results?
- Check that your search query isn't too restrictive
- Try `sort: "Top"` instead of `"Latest"` — X sometimes returns fewer results with Latest
- Remove `until:` date filters if you're getting low counts
Authentication errors?
- Your cookies may have expired. Get fresh `auth_token` and `ct0` from x.com
- Both `auth_token` and `ct0` are required
- Format: `auth_token=YOUR_TOKEN; ct0=YOUR_CT0`
Missing tweets?
- Some tweets are shadow-banned or filtered by X. This is outside our control.
- Try different date ranges or search terms
Want to resume a failed run?
- Copy the dataset ID from the previous run
- Add it to `resumeFromDataset` — already-scraped items will be skipped
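In practice, resuming means re-running the same input with the previous run's dataset ID added. A sketch of the input (the dataset ID below is a placeholder, not a real value):

```python
# Same input as the failed run, plus resumeFromDataset pointing at its dataset.
run_input = {
    "scrapeMode": "x-tweet-scraper",
    "searchQueries": ["web scraping"],
    "loginCookies": "auth_token=xxx; ct0=yyy",
    "resumeFromDataset": "YOUR_PREVIOUS_DATASET_ID",  # placeholder dataset ID
}
print(run_input["resumeFromDataset"])
```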
Support
Found a bug or need help? Open an issue on the Issues tab.