All-in-One X/Twitter Scraper

X/Twitter scraper — 10 modes: tweets, profiles, followers, comments, timelines, lists, search & more. From $0.09/1K — up to 90% cheaper than alternatives. Premium residential proxy (~95% success rate). apidojo-compatible output. MCP-ready for AI agents.

Pricing: from $0.20 / 1,000 tweet results
Developer: Japi Cricket (Maintained by Community)

What does All-in-One X/Twitter Scraper do?

Scrape tweets, profiles, followers, timelines, comments, lists & more — no proxy needed, 128MB memory. 10 modes, pay-per-result from $0.09/1K. Works with AI agents (Claude, GPT, Cursor) via MCP.

Unlike most X/Twitter scrapers that charge $0.40+/1K results, this scraper delivers the same data at $0.24/1K or less — 40-50% cheaper. Three modes work without any login cookies (posts, profiles, data extractor).

Why choose this over 5 separate X scrapers?

  • Cheapest on Apify — $0.24/1K tweets vs $0.40/1K (apidojo) = 40% savings
  • 10 modes in one actor — tweets, profiles, search, timelines, followers, comments, lists, user search, all-in-one, data extractor
  • No proxy needed — residential proxy included when required
  • 128MB memory — lowest possible compute unit cost
  • AI-ready — works with Claude, GPT, and Cursor via MCP protocol

Getting Started

  1. Click "Try for free" at the top of this page
  2. Choose a scraping mode (Post, Profile, Tweet Search, Timeline, Follower, Comment, User Search, List, All-in-One, or Data Extractor)
  3. Paste tweet URLs, usernames, or enter a search query
  4. Click Start — results appear in the Dataset tab within seconds
  5. Download as JSON, CSV, or Excel — or connect via API, n8n, Make, or Zapier

No proxy needed. 3 modes work without login cookies.

Easiest Way to Start: Paste a URL

Just paste any X/Twitter URL into the "Start URLs" field and hit Start. The scraper auto-detects the type:

| URL Pattern | Auto-Detected Mode |
|---|---|
| x.com/user/status/1234567890 | Post Scraper (tweet by URL) |
| x.com/username | Profile Scraper |
| x.com/i/lists/1234567890 | List Scraper |

For Tweet Search and User Search, enter keywords in the "Search Queries" field.
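The auto-detection above can be sketched with simple pattern matching (illustrative only; the actor's internal logic may differ):

```python
import re

def detect_mode(url: str) -> str:
    """Classify an X/Twitter URL the way the Start URLs field does (sketch)."""
    if re.search(r"(x|twitter)\.com/\w+/status/\d+", url):
        return "x-post-scraper"      # tweet permalink
    if re.search(r"(x|twitter)\.com/i/lists/\d+", url):
        return "x-list-scraper"      # list URL
    if re.search(r"(x|twitter)\.com/\w+/?$", url):
        return "x-profile-scraper"   # bare profile URL
    return "unknown"

print(detect_mode("https://x.com/user/status/1234567890"))  # x-post-scraper
print(detect_mode("https://x.com/i/lists/1234567890"))      # x-list-scraper
print(detect_mode("https://x.com/username"))                # x-profile-scraper
```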

10 Scraping Modes

| Mode | Description | Auth Required | Best For |
|---|---|---|---|
| x-post-scraper | Scrape specific tweets by URL | None | Tweet archiving, engagement tracking |
| x-profile-scraper | Full user profiles with stats | None | Lead enrichment, influencer research |
| x-data-extractor | Lightweight tweet URL scraper | None | Quick tweet data extraction |
| x-tweet-scraper | Search tweets by keywords/hashtags | Login cookies | Social listening, trend monitoring |
| x-timeline-scraper | Scrape a user's tweet timeline | Login cookies | Content analysis, user monitoring |
| x-follower-scraper | Get followers/following lists | Login cookies | Audience analysis, lead discovery |
| x-comment-scraper | Get replies/conversation threads | Login cookies | Sentiment analysis, community insights |
| x-user-search-scraper | Search for users by keywords | Login cookies | Prospecting, influencer discovery |
| x-list-scraper | Scrape tweets from X/Twitter lists | Login cookies | Curated feed monitoring |
| x-scraper | All-in-one (combines all above) | Depends | Mixed workflows, automation |

Standard vs Authenticated Mode

3 modes work without any login cookies. 7 modes require your X/Twitter cookies for full functionality.

What Works Without Cookies (Standard Mode)

No login, no risk. Just paste and scrape:

  • Post Scraper: Full tweet data with engagement metrics, media, author info, quoted tweets
  • Profile Scraper: Full user profile with followers, bio, verification, account stats
  • Data Extractor: Same as Post Scraper (lightweight billing event)

What Requires Cookies (Authenticated Mode)

Provide your auth_token and ct0 cookies to unlock search and social graph features:

| Mode | What It Unlocks |
|---|---|
| Tweet Search | Search tweets by keywords, hashtags, and advanced operators |
| Timeline | Scrape a user's full tweet history |
| Followers | Get follower/following lists |
| Comments | Get replies and conversation threads |
| User Search | Search for users by keywords |
| List Scraper | Scrape tweets from curated lists |
| All-in-One | Combines all modes (auth needed for search/followers/comments) |

When to Use Which Mode

| Your Goal | Recommended Mode | Cookies Needed? |
|---|---|---|
| Save a specific tweet with engagement data | Post Scraper | No |
| Get a user's profile info and follower count | Profile Scraper | No |
| Monitor brand mentions or trending topics | Tweet Search | Yes |
| Analyze what someone has been tweeting | Timeline Scraper | Yes |
| Build a list of an account's followers | Follower Scraper | Yes |
| Analyze public reactions to a tweet | Comment Scraper | Yes |
| Find influencers or experts by keyword | User Search | Yes |
| Monitor a curated Twitter list | List Scraper | Yes |

Pricing — Pay Per Result, No Monthly Fee

| Mode | Event | Starter/1K | Scale/1K | Business/1K | vs Best Competitor |
|---|---|---|---|---|---|
| Tweet Search | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo ($0.40) |
| Post Scraper | post-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Profile Scraper | profile-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Timeline | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Comments | comment-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| User Search | search-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| List Scraper | tweet-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Data Extractor | extractor-result | $0.24 | $0.22 | $0.20 | 40-50% cheaper than apidojo |
| Followers | follower-result | $0.09 | $0.08 | $0.07 | 40-53% cheaper than kaitoeasyapi ($0.15) |

Apify Subscription Discounts

Higher Apify subscription plans get automatic discounts on all modes:

| Apify Plan | Discount Tier | Discount |
|---|---|---|
| Free / Starter | Standard | 0% |
| Scale | Bronze | 5% off |
| Business | Silver | 10% off |
| Enterprise | Gold | 15% off |

Cost examples:

  • 1,000 tweets from search: $0.24
  • 500 user profiles: $0.12
  • 10,000 followers: $0.90
  • 100 tweet URLs: $0.024

You only pay for results delivered. Platform compute costs are included.
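The arithmetic behind these examples is simply results ÷ 1,000 × rate; a quick sketch at Starter-tier rates:

```python
def run_cost(results: int, rate_per_1k: float) -> float:
    """Pay-per-result cost: result count divided by 1,000, times the $/1K rate."""
    return round(results / 1000 * rate_per_1k, 4)

# The cost examples above, at Starter-tier rates
print(run_cost(1_000, 0.24))   # 1,000 tweets from search -> 0.24
print(run_cost(500, 0.24))     # 500 user profiles -> 0.12
print(run_cost(10_000, 0.09))  # 10,000 followers -> 0.9
print(run_cost(100, 0.24))     # 100 tweet URLs -> 0.024
```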

Why This X/Twitter Scraper?

  • Cheapest on Apify — $0.24/1K tweets vs $0.40/1K (apidojo) = 40% savings
  • 10 modes in one actor — tweets, profiles, search, timelines, followers, comments, lists, user search, all-in-one, data extractor — one integration to maintain
  • No proxy required — residential proxy included when needed
  • HTTP-only architecture — Impit with Chrome TLS fingerprint impersonation (no bloated browser)
  • 128 MB memory — runs on minimal resources, keeping your compute costs low
  • Cookie rotation — support multiple login cookies for large-scale runs
  • Resume capability — pick up where failed runs left off via resumeFromDataset
  • Human-like behavior — randomized delays with Box-Muller distribution jitter
  • MCP-compatible — works with AI agents (Claude, GPT, Cursor) out of the box

How We Compare

| Feature | This Scraper | API Dojo | KaitoEasyAPI | Automation Lab | XT Data |
|---|---|---|---|---|---|
| Tweets / 1K | $0.24 | $0.40 | $0.25 | $0.30 | $0.50 |
| Profiles / 1K | $0.24 | $0.40 | $0.15 | $0.30 | $0.50 |
| Followers / 1K | $0.09 | $0.10 | N/A | N/A | N/A |
| Comments / 1K | $0.24 | $0.25 | N/A | N/A | N/A |
| Cookies required | 7 of 10 modes | No | No | Optional | No |
| Modes in one actor | 10 | 1 (+ 4 companion) | 1 (4 separate) | 1 | 1 |
| Cookie rotation | Yes | No | No | No | No |
| Resume capability | Yes | No | No | No | No |
| No proxy needed | Yes | Yes | Yes | Yes | Yes |
| Memory | 128 MB | Unknown | Unknown | Unknown | Unknown |
| MCP integration | Yes | No | No | No | No |

Key advantages:

  • One actor, 10 modes — competitors split into 4-15 separate actors, each requiring its own integration. We give you everything in one.
  • Lowest price per tweet — $0.24/1K vs $0.40/1K (apidojo, market leader with 40K users)
  • Cheapest followers — $0.09/1K vs $0.10-$0.15/1K elsewhere
  • Lightweight — 128 MB HTTP-only means lower compute costs vs browser-based scrapers

How to Get Login Cookies

7 of 10 modes require X/Twitter login cookies. Here's how to get them:

  1. Open x.com in your browser and log in
  2. Open Developer Tools (F12 or Cmd+Opt+I)
  3. Go to Application > Cookies > x.com
  4. Copy the values of auth_token and ct0
  5. Paste as: auth_token=YOUR_TOKEN; ct0=YOUR_CT0

Modes that do NOT require cookies: x-post-scraper, x-profile-scraper, x-data-extractor.

Cookie rotation: For large runs, provide multiple cookies separated by |||:

auth_token=aaa; ct0=bbb|||auth_token=ccc; ct0=ddd
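A short sketch (not part of the actor) of how such a |||-separated pool splits into per-account cookie sets:

```python
def split_cookie_pool(login_cookies: str) -> list:
    """Split a '|||'-separated cookie string into one dict per account."""
    sessions = []
    for chunk in login_cookies.split("|||"):
        cookies = {}
        for pair in chunk.split(";"):
            name, _, value = pair.strip().partition("=")
            cookies[name] = value
        sessions.append(cookies)
    return sessions

pool = split_cookie_pool("auth_token=aaa; ct0=bbb|||auth_token=ccc; ct0=ddd")
print(pool[0])  # {'auth_token': 'aaa', 'ct0': 'bbb'}
print(pool[1])  # {'auth_token': 'ccc', 'ct0': 'ddd'}
```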

Residential Proxy (Optional)

By default, the scraper uses standard datacenter IPs — which work fine for most use cases. If you're hitting rate limits or seeing cookies expire faster than expected on authenticated modes, enable the built-in residential proxy.

How to Enable

Set proxyTier to "residential" in your input:

```json
{
  "scrapeMode": "x-tweet-scraper",
  "searchQueries": ["web scraping"],
  "loginCookies": "auth_token=xxx; ct0=yyy",
  "proxyTier": "residential"
}
```

When to Use Residential Proxy

  • Cookies expiring fast — residential IPs look more like real users, reducing session invalidation
  • Rate limits on authenticated modes — tweet search, timeline, followers, comments
  • Large-scale runs — scraping thousands of results per session with login cookies

Residential proxy is included.

Alternative: Bring Your Own Proxy

If you already have a residential proxy subscription, set proxyTier to "custom" and paste your proxy URL into the Proxy Configuration field.

MCP Integration for AI Agents

This scraper works with AI agents via the Model Context Protocol (MCP). Connect it to Claude Desktop, Cursor, GPT, or any MCP-compatible client.

Setup:

  1. Go to mcp.apify.com
  2. Add "All-in-One X/Twitter Scraper" to your MCP server
  3. Ask your AI: "Find the latest tweets about AI from NASA"

Example prompts for your AI agent:

  • "Scrape the profile of @elonmusk on X"
  • "Search X for tweets about web scraping from the last week"
  • "Get the followers of @apify on Twitter"
  • "Find all replies to this tweet: https://x.com/..."

Works with Claude Desktop, Cursor, GPT via MCP, and any other MCP-compatible AI client.

Integrations

n8n

  1. Add the Apify node in your n8n workflow
  2. Select "All-in-One X/Twitter Scraper" as the actor
  3. Configure the mode and input parameters
  4. Connect the output to your CRM, Google Sheets, or database

Make.com (Integromat)

  1. Add the Apify module to your scenario
  2. Select "Run Actor" and choose this scraper
  3. Map the JSON output fields to your downstream modules
  4. Use for automated tweet monitoring, lead enrichment, or CRM syncing

Zapier

  1. Create a new Zap with Apify as the trigger or action
  2. Select "Run Actor" and configure with this scraper's actor ID
  3. Map output fields to Google Sheets, HubSpot, Salesforce, or Slack
  4. Trigger on schedule or from a webhook

REST API & SDKs

Use the Apify API, JavaScript SDK, or Python SDK for programmatic access. See the Python examples in each mode section below.


Mode 1: Post Scraper (x-post-scraper)

Scrape specific tweets by URL with full engagement data, media, author info, and quoted tweets. No login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-post-scraper" |
| tweetURLs | string[] | Yes | Tweet URLs or tweet IDs |
| maxResults | integer | No | Max results (default: 100) |

Input Example

```json
{
  "scrapeMode": "x-post-scraper",
  "tweetURLs": [
    "https://x.com/elonmusk/status/1728108619189874825",
    "https://x.com/NASA/status/1630332507265589248"
  ]
}
```

Output Fields

| Field | Type | Description | Example |
|---|---|---|---|
| type | string | Always "tweet" | "tweet" |
| tweetId | string | Unique tweet ID | "1728108619189874825" |
| url | string | Full X.com URL | "https://x.com/elonmusk/status/..." |
| twitterUrl | string | Legacy twitter.com URL | "https://twitter.com/elonmusk/status/..." |
| text | string | Tweet content | "More than 10 per human on average" |
| retweetCount | integer | Retweet count | 8880 |
| replyCount | integer | Reply count | 5931 |
| likeCount | integer | Like count | 92939 |
| quoteCount | integer | Quote tweet count | 2862 |
| bookmarkCount | integer | Bookmark count | 605 |
| viewCount | integer | View count | 37422895 |
| createdAt | string | Tweet timestamp | "Fri Nov 24 17:49:36 +0000 2023" |
| lang | string | Language code (ISO 639-1) | "en" |
| source | string | Posting app/device | "Twitter for iPhone" |
| isReply | boolean | Is a reply | false |
| isRetweet | boolean | Is a retweet | false |
| isQuote | boolean | Is a quote tweet | true |
| conversationId | string | Thread root ID | "1728108619189874825" |
| inReplyToTweetId | string | Parent tweet ID (if reply) | null |
| inReplyToUserId | string | Parent user ID (if reply) | null |
| inReplyToUserName | string | Parent username (if reply) | null |
| hashtags | array | Hashtag strings | ["AI", "space"] |
| urls | array | URL objects | [{url, expandedUrl, displayUrl}] |
| userMentions | array | Mention objects | [{userName, name, twitterId}] |
| media | array | Media objects (photos, videos, GIFs) | [{type, url, videoUrl, width, height}] |
| card | object | Link preview card | {type, title, description, url} |
| place | object | Geolocation data | null |
| author | object | Author profile | {userName, name, twitterId, followers, ...} |
| quotedTweet | object | Quoted tweet (full object) | {tweetId, text, author, ...} |

Output Example

```json
{
  "type": "tweet",
  "tweetId": "1728108619189874825",
  "url": "https://x.com/elonmusk/status/1728108619189874825",
  "twitterUrl": "https://twitter.com/elonmusk/status/1728108619189874825",
  "text": "More than 10 per human on average",
  "retweetCount": 8880,
  "replyCount": 5931,
  "likeCount": 92939,
  "quoteCount": 2862,
  "bookmarkCount": 605,
  "viewCount": 37422895,
  "createdAt": "Fri Nov 24 17:49:36 +0000 2023",
  "lang": "en",
  "source": "Twitter for iPhone",
  "isReply": false,
  "isRetweet": false,
  "isQuote": true,
  "conversationId": "1728108619189874825",
  "hashtags": [],
  "urls": [],
  "userMentions": [],
  "media": [],
  "author": {
    "type": "user",
    "userName": "elonmusk",
    "name": "Elon Musk",
    "twitterId": "44196397",
    "isBlueVerified": true,
    "profilePicture": "https://pbs.twimg.com/profile_images/.../image_400x400.jpg",
    "followers": 237889308,
    "following": 1308
  },
  "quotedTweet": {
    "type": "tweet",
    "tweetId": "1728107610631729415",
    "text": "The posts on X gets ~ 100 billion impressions every day.",
    "likeCount": 3396,
    "viewCount": 38475358,
    "author": { "userName": "cb_doge", "name": "DogeDesigner" }
  }
}
```

Use Cases

  • Tweet archiving: Save important tweets with full engagement data before they're deleted
  • Engagement tracking: Monitor how specific tweets perform over time
  • Content research: Analyze viral tweets to understand what resonates
  • Fact-checking: Capture tweet content with metadata for verification

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-post-scraper",
    "tweetURLs": [
        "https://x.com/elonmusk/status/1728108619189874825",
        "https://x.com/NASA/status/1630332507265589248"
    ]
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes, {item['viewCount']} views)")
```

Mode 2: Profile Scraper (x-profile-scraper)

Scrape full X/Twitter user profiles with follower counts, bio, verification status, and account metadata. No login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-profile-scraper" |
| profiles | string[] | Yes | Usernames, @handles, or profile URLs |
| maxResults | integer | No | Max results (default: 100) |

Input Example

```json
{
  "scrapeMode": "x-profile-scraper",
  "profiles": ["elonmusk", "NASA", "OpenAI"]
}
```

Output Fields

| Field | Type | Description | Example |
|---|---|---|---|
| type | string | Always "user" | "user" |
| twitterId | string | User REST ID | "11348282" |
| userName | string | @handle | "NASA" |
| name | string | Display name | "NASA" |
| url | string | X.com URL | "https://x.com/NASA" |
| twitterUrl | string | Legacy twitter.com URL | "https://twitter.com/NASA" |
| description | string | Bio text | "The Moon is the mission..." |
| location | string | Self-reported location | "Moonbound" |
| website | string | Profile website URL | "https://t.co/9NkQJKAVks" |
| isVerified | boolean | Legacy verified (pre-2023) | false |
| isBlueVerified | boolean | X Premium subscriber | true |
| verifiedType | string | Verification type | "Government", "Business", or null |
| profilePicture | string | Avatar URL (400x400) | "https://pbs.twimg.com/..." |
| coverPicture | string | Banner image URL | "https://pbs.twimg.com/..." |
| followers | integer | Follower count | 90633649 |
| following | integer | Following count | 117 |
| statusesCount | integer | Total tweets posted | 73634 |
| favouritesCount | integer | Total likes given | 16673 |
| listedCount | integer | Lists the user is on | 96785 |
| mediaCount | integer | Total media posted | 27827 |
| createdAt | string | Account creation date | "Wed Dec 19 20:20:32 +0000 2007" |
| isProtected | boolean | Private account | false |
| canDm | boolean | DMs open | false |
| professional | object | Professional account info | {type: "Creator", category: []} |
| pinnedTweetId | string | Pinned tweet ID | "2040213736883892403" |

Output Example

```json
{
  "type": "user",
  "twitterId": "44196397",
  "userName": "elonmusk",
  "name": "Elon Musk",
  "url": "https://x.com/elonmusk",
  "twitterUrl": "https://twitter.com/elonmusk",
  "description": "https://t.co/dDtDyVssfm",
  "location": null,
  "website": null,
  "isVerified": false,
  "isBlueVerified": true,
  "verifiedType": null,
  "profilePicture": "https://pbs.twimg.com/profile_images/.../image_400x400.jpg",
  "coverPicture": "https://pbs.twimg.com/profile_banners/44196397/...",
  "followers": 237890854,
  "following": 1308,
  "statusesCount": 100639,
  "favouritesCount": 221162,
  "listedCount": 168145,
  "mediaCount": 4434,
  "createdAt": "Tue Jun 02 20:12:29 +0000 2009",
  "isProtected": false,
  "canDm": false,
  "professional": { "type": "Creator", "category": [] },
  "pinnedTweetId": null
}
```

Use Cases

  • Lead enrichment: Enrich CRM contacts with X profile data (bio, followers, verification)
  • Influencer research: Identify key accounts by follower count and verification status
  • Competitor tracking: Monitor competitor accounts for follower growth
  • Account verification: Check if accounts are verified, professional, or protected

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-profile-scraper",
    "profiles": ["elonmusk", "NASA", "OpenAI"]
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['userName']}: {item['followers']} followers — {item['description'][:60]}")
```

Mode 3: Tweet Search (x-tweet-scraper)

Search tweets by keywords, hashtags, and advanced operators with the Query Wizard. Supports date ranges, engagement filters, media filters, and geo-targeting.

Login cookies required. Provide auth_token and ct0 cookies from x.com.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-tweet-scraper" |
| searchQueries | string[] | Yes | Search queries with advanced syntax |
| maxResults | integer | No | Max results per query (default: 100) |
| sort | string | No | "Latest", "Top", or "Latest + Top" |
| loginCookies | string | Yes | "auth_token=xxx; ct0=yyy" |
| languageFilter | string | No | ISO 639-1 language code (e.g., "en") |
| start | string | No | Start date ("YYYY-MM-DD") |
| end | string | No | End date ("YYYY-MM-DD") |
| minimumFavorites | integer | No | Min likes filter |
| author | string | No | From user (Query Wizard) |

Input Example

```json
{
  "scrapeMode": "x-tweet-scraper",
  "searchQueries": ["web scraping", "from:NASA #space"],
  "maxResults": 100,
  "sort": "Latest",
  "languageFilter": "en",
  "start": "2026-01-01",
  "minimumFavorites": 10,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 29 tweet fields as Mode 1 (Post Scraper), plus an optional searchTerm field when includeSearchTerms is enabled.

Output Example

Same format as Mode 1. Each tweet includes full engagement data, author profile, media, entities, and metadata.

Use Cases

  • Social listening: Monitor mentions of your brand, product, or industry
  • Trend monitoring: Track emerging topics and hashtags in real-time
  • Competitive analysis: Monitor what competitors' audiences are saying
  • Market research: Analyze public sentiment around events, launches, or announcements

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-tweet-scraper",
    "searchQueries": ["web scraping"],
    "maxResults": 100,
    "sort": "Latest",
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes)")
```

Mode 4: Timeline Scraper (x-timeline-scraper)

Scrape a user's tweet timeline including original tweets, retweets, and optionally replies.

Login cookies required for best results. Guest token works for basic access.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-timeline-scraper" |
| profiles | string[] | Yes | Usernames or profile URLs |
| maxResults | integer | No | Max tweets per profile (default: 100) |
| includeReplies | boolean | No | Include replies (default: false) |
| loginCookies | string | Recommended | "auth_token=xxx; ct0=yyy" |

Input Example

```json
{
  "scrapeMode": "x-timeline-scraper",
  "profiles": ["NASA"],
  "maxResults": 50,
  "includeReplies": false,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 29 tweet fields as Mode 1 (Post Scraper).

Use Cases

  • Content analysis: Analyze a user's posting patterns, topics, and engagement
  • User monitoring: Track what key accounts are tweeting about
  • Research: Collect a user's tweet history for academic or business research
  • Archiving: Save a user's tweet timeline for record-keeping

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-timeline-scraper",
    "profiles": ["NASA"],
    "maxResults": 100,
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['createdAt']}: {item['text'][:80]} ({item['likeCount']} likes)")
```

Mode 5: Follower Scraper (x-follower-scraper)

Get followers or following lists for any X/Twitter account.

Login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-follower-scraper" |
| profiles | string[] | Yes | Usernames or profile URLs |
| maxResults | integer | No | Max followers per profile (default: 100) |
| followerMode | string | No | "followers" or "following" (default: "followers") |
| loginCookies | string | Yes | "auth_token=xxx; ct0=yyy" |

Input Example

```json
{
  "scrapeMode": "x-follower-scraper",
  "profiles": ["apify"],
  "maxResults": 500,
  "followerMode": "followers",
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 25 user profile fields as Mode 2 (Profile Scraper), plus:

| Field | Type | Description | Example |
|---|---|---|---|
| followerOf | string | Source profile (followers mode) | "apify" |
| followingOf | string | Source profile (following mode) | "apify" |

Use Cases

  • Audience analysis: Understand who follows a competitor or influencer
  • Lead discovery: Find potential customers from a competitor's follower list
  • Influencer marketing: Identify relevant followers for partnership outreach
  • Network mapping: Map connections between accounts

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-follower-scraper",
    "profiles": ["apify"],
    "maxResults": 500,
    "followerMode": "followers",
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['userName']}: {item['followers']} followers — {item['description'][:50] if item.get('description') else ''}")
```

Mode 6: Comment Scraper (x-comment-scraper)

Get replies and conversation threads for specific tweets.

Login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-comment-scraper" |
| tweetURLs | string[] | Yes | Tweet URLs to get replies for |
| maxResults | integer | No | Max replies per tweet (default: 100) |
| loginCookies | string | Yes | "auth_token=xxx; ct0=yyy" |

Input Example

```json
{
  "scrapeMode": "x-comment-scraper",
  "tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825"],
  "maxResults": 50,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 29 tweet fields as Mode 1, but with type: "comment" instead of type: "tweet".

Use Cases

  • Sentiment analysis: Analyze public reactions to announcements or events
  • Community insights: Understand what topics drive conversation
  • Customer feedback: Monitor replies to brand tweets for support issues
  • Trend analysis: Identify emerging opinions in reply threads

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-comment-scraper",
    "tweetURLs": ["https://x.com/elonmusk/status/1728108619189874825"],
    "maxResults": 50,
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]} ({item['likeCount']} likes)")
```

Mode 7: User Search (x-user-search-scraper)

Search for X/Twitter users by keywords, job titles, or interests.

Login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-user-search-scraper" |
| searchQueries | string[] | Yes | Search terms |
| maxResults | integer | No | Max users per query (default: 100) |
| loginCookies | string | Yes | "auth_token=xxx; ct0=yyy" |

Input Example

```json
{
  "scrapeMode": "x-user-search-scraper",
  "searchQueries": ["AI researcher", "data scientist"],
  "maxResults": 50,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 25 user profile fields as Mode 2 (Profile Scraper).

Use Cases

  • Prospecting: Find potential customers or partners by keyword
  • Influencer discovery: Identify key accounts in your industry
  • Recruiting: Find candidates with specific expertise on X
  • Market research: Map thought leaders in a given space

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-user-search-scraper",
    "searchQueries": ["AI researcher"],
    "maxResults": 50,
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['userName']} ({item['name']}): {item['followers']} followers")
```

Mode 8: List Scraper (x-list-scraper)

Scrape tweets from X/Twitter lists (curated collections of accounts).

Login cookies required.

Input Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| scrapeMode | string | Yes | "x-list-scraper" |
| listURLs | string[] | Yes | List URLs or list IDs |
| maxResults | integer | No | Max tweets per list (default: 100) |
| loginCookies | string | Yes | "auth_token=xxx; ct0=yyy" |

Input Example

```json
{
  "scrapeMode": "x-list-scraper",
  "listURLs": ["https://x.com/i/lists/1234567890"],
  "maxResults": 100,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Same 29 tweet fields as Mode 1 (Post Scraper).

Use Cases

  • Curated feed monitoring: Track tweets from industry-specific lists
  • News aggregation: Collect tweets from journalist or media lists
  • Research: Monitor topic-specific lists for academic or market research
  • Competitive intelligence: Track curated competitor lists

How to Run

Python:

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")
run = client.actor("get-leads/all-in-one-x-scraper").call(run_input={
    "scrapeMode": "x-list-scraper",
    "listURLs": ["https://x.com/i/lists/1234567890"],
    "maxResults": 100,
    "loginCookies": "auth_token=xxx; ct0=yyy"
})
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{item['author']['userName']}: {item['text'][:80]}")
```

Mode 9: All-in-One (x-scraper)

Combines all modes in one run. Auto-detects URLs (tweets, profiles, lists) and processes search queries. Ideal for automation workflows.

Input Example

```json
{
  "scrapeMode": "x-scraper",
  "startUrls": [
    "https://x.com/elonmusk/status/1728108619189874825",
    "https://x.com/NASA"
  ],
  "searchQueries": ["from:apify"],
  "maxResults": 20,
  "loginCookies": "auth_token=xxx; ct0=yyy"
}
```

Output Fields

Mix of tweet and user objects depending on input. Tweet URLs produce tweet objects, profile URLs produce user objects, search queries produce tweet objects.


Mode 10: Data Extractor (x-data-extractor)

Lightweight tweet scraper for extracting data from tweet URLs. Same output as Mode 1 but billed as extractor-result. No login cookies required.

Input Example

```json
{
  "scrapeMode": "x-data-extractor",
  "tweetURLs": [
    "https://x.com/elonmusk/status/1728108619189874825"
  ]
}
```

Output Fields

Same 29 tweet fields as Mode 1 (Post Scraper).


Query Wizard

Build complex search queries without learning advanced syntax. These fields auto-append to your search queries:

| Field | What it does | Equivalent syntax |
|---|---|---|
| From User | Tweets from specific user | from:username |
| In Reply To | Replies to specific user | to:username |
| Mentioning User | Mentions of user | @username |
| Start Date | Tweets after date | since:YYYY-MM-DD |
| End Date | Tweets before date | until:YYYY-MM-DD |
| Minimum Likes | Min like threshold | min_faves:N |
| Minimum Retweets | Min RT threshold | min_retweets:N |
| Minimum Replies | Min reply threshold | min_replies:N |
| Language Filter | Filter by language | lang:xx |
| Only Verified | Verified users only | filter:verified |
| Only Images | Tweets with images | filter:images |
| Only Videos | Tweets with videos | filter:videos |
| Only Quotes | Quote tweets only | filter:quote |
| Geotagged Near | Location filter | near:"city" |
| Within Radius | Radius from location | within:10km |

Advanced Search Syntax

Combine operators: from:NASA #space since:2024-01-01 min_faves:100 lang:en -filter:retweets

| Operator | Example | Description |
|---|---|---|
| from:user | from:NASA | Tweets from specific user |
| to:user | to:support | Replies to specific user |
| @user | @OpenAI | Mentions of specific user |
| #hashtag | #AI | Tweets with hashtag |
| $cashtag | $TSLA | Tweets with cashtag |
| since:date | since:2024-01-01 | Tweets after date |
| until:date | until:2024-12-31 | Tweets before date |
| min_faves:N | min_faves:100 | Minimum likes |
| min_retweets:N | min_retweets:50 | Minimum retweets |
| lang:xx | lang:en | Filter by language |
| filter:media | filter:media | Only tweets with media |
| filter:images | filter:images | Only tweets with images |
| filter:videos | filter:videos | Only tweets with videos |
| filter:verified | filter:verified | Only verified users |
| filter:blue_verified | filter:blue_verified | Only X Premium users |
| -filter:retweets | -filter:retweets | Exclude retweets |
| -filter:replies | -filter:replies | Exclude replies |
| filter:quote | filter:quote | Only quote tweets |
| conversation_id:ID | conversation_id:123 | Thread replies |
| near:"city" | near:"San Francisco" | Geotagged tweets |
| within:radius | within:10km | Within radius |
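When building queries programmatically, the operators above can be assembled from plain keyword arguments; an illustrative helper (not part of the actor's API):

```python
def build_query(text="", from_user=None, since=None, until=None,
                min_faves=None, lang=None, exclude_retweets=False):
    """Assemble an X advanced-search query string from individual filters."""
    parts = [text] if text else []
    if from_user:
        parts.append(f"from:{from_user}")
    if since:
        parts.append(f"since:{since}")
    if until:
        parts.append(f"until:{until}")
    if min_faves is not None:
        parts.append(f"min_faves:{min_faves}")
    if lang:
        parts.append(f"lang:{lang}")
    if exclude_retweets:
        parts.append("-filter:retweets")
    return " ".join(parts)

q = build_query("#space", from_user="NASA", since="2024-01-01",
                min_faves=100, lang="en", exclude_retweets=True)
print(q)  # #space from:NASA since:2024-01-01 min_faves:100 lang:en -filter:retweets
```

The resulting string goes straight into the searchQueries field of the Tweet Search mode.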

Technical Details

  • Stack: Node.js 20, Impit (Chrome TLS fingerprinting), Apify SDK 3
  • Memory: 128MB (HTTP-only, no browser)
  • Speed: 2-8 seconds per run, ~50MB peak memory
  • No proxy required for any mode
  • Cookie rotation for large-scale runs (multiple cookies via |||)
  • Resume capability via resumeFromDataset
  • Custom output via customMapFunction

Troubleshooting

Getting no results?

  • Check that your search query isn't too restrictive
  • Try sort: Top instead of Latest — X sometimes returns fewer results with Latest
  • Remove until: date filters if getting low counts

Authentication errors?

  • Your cookies may have expired. Get fresh auth_token and ct0 from x.com
  • Both auth_token and ct0 are required
  • Format: auth_token=YOUR_TOKEN; ct0=YOUR_CT0

Missing tweets?

  • Some tweets are shadow-banned or filtered by X. This is outside our control.
  • Try different date ranges or search terms

Want to resume a failed run?

  • Copy the dataset ID from the previous run
  • Add it to resumeFromDataset — already-scraped items will be skipped
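For example, re-running a tweet search with resume enabled might look like this (the dataset ID is a placeholder, and this sketch assumes resumeFromDataset sits alongside the other input fields):

```json
{
  "scrapeMode": "x-tweet-scraper",
  "searchQueries": ["web scraping"],
  "loginCookies": "auth_token=xxx; ct0=yyy",
  "resumeFromDataset": "YOUR_PREVIOUS_DATASET_ID"
}
```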

Support

Found a bug or need help? Open an issue on the Issues tab.