Twitter/X Scraper
Developer: Stas Persiianenko
Scrape Twitter/X profiles, tweets, user timelines, follower lists, following lists, and search results. Extract follower counts, engagement stats, media URLs, hashtags, and more — all in structured JSON.
What does Twitter/X Scraper do?
This actor extracts public data from Twitter/X:
- Profiles — follower counts, bio, join date, verification status, and more
- User timelines — latest tweets from any user's timeline with pagination
- Tweets by URL — full tweet data from specific URLs including likes, retweets, media
- Search — find tweets by keyword, hashtag, or advanced query with full Twitter search syntax
- Followers — extract a user's complete follower list with profile details
- Following — extract who a user follows with full profile data
What works without login?
| Mode | Cookies needed? | Notes |
|---|---|---|
| Profiles | No | Guest authentication, ~2 sec per profile |
| User timelines | No | Guest auth returns popular tweets (not chronological) |
| Tweets by URL | Recommended | Works without cookies for some tweets, but many require auth |
| Search | Yes | Twitter requires login for search — no way around it |
| Followers | Yes | Requires cookies — Twitter locks down follower lists |
| Following | Yes | Requires cookies — same as followers |
Why do some modes need cookies? Twitter has progressively locked down its API. Profiles and timelines still work with guest tokens, but search requires an authenticated session, and many individual tweet lookups do too. Competitors either require cookies as well (they just don't always say so upfront) or use browser automation, which costs 5-10x more per result. We use direct HTTP requests — no browser overhead — which keeps costs low.
Getting cookies takes 30 seconds: open x.com in your browser, grab two values from DevTools, paste them in. Instructions below.
Why use this scraper?
- $0.30 per 1,000 tweets — cheaper than most competitors ($0.40/1K for the market leader)
- No API keys needed — profiles and timelines work without any login
- No browser overhead — pure HTTP at 256 MB memory, fast and cheap
- Self-healing endpoints — dynamically resolves GraphQL endpoint IDs from X's frontend JS, survives API rotations automatically
- Rich data — likes, retweets, replies, views, bookmarks, quotes, media URLs, hashtags, mentions
- Full search syntax — `from:user`, `since:date`, `min_faves:N`, `filter:media`, `lang:code`, and more
Use cases
Marketing and PR teams
- Social media monitoring — track brand mentions, competitor activity, or industry trends in real time
- Campaign analysis — measure tweet performance, hashtag reach, and engagement during campaigns
- Crisis monitoring — detect negative sentiment spikes and respond quickly
Sales and growth teams
- Lead generation — find potential customers through profile analysis and relevant conversations
- Audience research — understand what your target market talks about and who they follow
Research and analysis
- Academic research — collect tweets on specific topics for sentiment analysis, NLP, or social network studies
- Content analysis — study viral tweets, media usage, and hashtag trends
- Influencer research — analyze follower counts, engagement rates, and posting patterns
Developers and data teams
- Archiving — save specific tweets by URL for permanent record
- Dataset building — create structured Twitter datasets for ML training or analytics dashboards
How much does it cost to scrape Twitter/X?
Pay-per-event pricing. You only pay for results you get.
| Event | Price | Notes |
|---|---|---|
| Actor start | $0.005 | One-time per run |
| Tweet scraped | $0.0003 | Per tweet (all modes) |
| Profile scraped | $0.002 | Per profile |
| Follower/following scraped | $0.001 | Per user in follower/following lists |
Cost examples:
| Task | Cost | Comparison |
|---|---|---|
| 10 profiles | $0.03 | — |
| 100 tweets (user timeline) | $0.04 | Competitors: ~$0.04-0.05 |
| 1,000 tweets (search) | $0.31 | Competitors: $0.25-0.40 |
| 10,000 tweets | $3.01 | Competitors: $2.50-4.00 |
The Apify free tier ($5/month) gets you ~16,000 tweets or ~2,400 profiles per month.
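As a rough sketch, the event prices above can be turned into a quick cost estimator before you launch a run. The `estimate_cost` helper below is illustrative, not part of the actor:

```python
# Event prices from the pricing table above (USD).
PRICES = {
    "actor_start": 0.005,
    "tweet": 0.0003,
    "profile": 0.002,
    "follower": 0.001,
}

def estimate_cost(tweets=0, profiles=0, followers=0, runs=1):
    """Estimate the USD cost of a run from per-event prices."""
    return round(
        runs * PRICES["actor_start"]
        + tweets * PRICES["tweet"]
        + profiles * PRICES["profile"]
        + followers * PRICES["follower"],
        4,
    )

print(estimate_cost(tweets=1000))   # 0.305, which lines up with the $0.31 row above
print(estimate_cost(profiles=10))   # 0.025, which lines up with the $0.03 row above
```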
How to scrape Twitter/X
- Open Twitter/X Scraper in Apify Console
- Pick a mode: Profiles, User timeline, Tweets by URL, or Search
- Enter usernames, tweet URLs, or search queries
- For search or tweets-by-URL: paste your Twitter cookies (see below)
- Click Start and download results as JSON, CSV, or Excel
How to get Twitter cookies
Required for search mode, recommended for tweets-by-URL.
- Log in to x.com in your browser
- Open DevTools (F12) → Application tab → Cookies → `https://x.com`
- Find and copy the values of `auth_token` and `ct0`
- Paste them in the input field as: `auth_token=YOUR_TOKEN; ct0=YOUR_CT0`
Cookies typically last several weeks before you need to refresh them.
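If you script your runs, the cookie string can be assembled programmatically. A minimal sketch — the `build_twitter_cookie` helper is hypothetical; pasting the string by hand works just as well:

```python
def build_twitter_cookie(auth_token: str, ct0: str) -> str:
    """Assemble the cookie string in the auth_token=...; ct0=... format above."""
    auth_token, ct0 = auth_token.strip(), ct0.strip()
    if not auth_token or not ct0:
        raise ValueError("Both auth_token and ct0 are required")
    return f"auth_token={auth_token}; ct0={ct0}"

print(build_twitter_cookie("YOUR_TOKEN", "YOUR_CT0"))
# auth_token=YOUR_TOKEN; ct0=YOUR_CT0
```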
What data can I extract from Twitter/X?
Tweet fields
| Field | Type | Description |
|---|---|---|
| `text` | string | Full tweet text (including long-form notes) |
| `url` | string | Permanent tweet URL |
| `likeCount` | number | Number of likes |
| `retweetCount` | number | Number of retweets |
| `replyCount` | number | Number of replies |
| `viewCount` | number | Number of views (timeline/search modes) |
| `bookmarkCount` | number | Number of bookmarks |
| `quoteCount` | number | Number of quote tweets |
| `createdAt` | string | Tweet creation date |
| `authorUsername` | string | Author's username |
| `authorFollowers` | number | Author's follower count |
| `hashtags` | array | Hashtags used |
| `mentions` | array | Mentioned users |
| `mediaUrls` | array | Photo and video URLs |
| `mediaType` | string | `photo`, `video`, `gif`, or null |
| `isReply` / `isRetweet` / `isQuote` | boolean | Tweet type flags |
| `conversationId` | string | Thread root tweet ID |
Profile fields
| Field | Type | Description |
|---|---|---|
| `username` | string | Twitter handle |
| `name` | string | Display name |
| `bio` | string | Profile biography |
| `followers` | number | Follower count |
| `following` | number | Following count |
| `tweetsCount` | number | Total tweets posted |
| `likesCount` | number | Total likes given |
| `isBlueVerified` | boolean | Blue verification status |
| `joinedAt` | string | Account creation date |
| `profilePicture` | string | High-res profile image URL |
| `coverPicture` | string | Banner image URL |
| `website` | string | Website from profile |
| `location` | string | Location from profile |
How do I scrape tweets and Twitter profiles?
| Parameter | Description | Default |
|---|---|---|
| `mode` | `profiles`, `tweets`, `user-tweets`, `search`, `followers`, or `following` | `profiles` |
| `usernames` | Twitter usernames (for `profiles` and `user-tweets` modes) | — |
| `tweetUrls` | Tweet URLs or numeric IDs (for `tweets` mode) | — |
| `searchTerms` | Search queries (for `search` mode) | — |
| `searchMode` | Sort order for search: `Top` or `Latest` | `Top` |
| `maxResults` | Max results per query/user | 50 |
| `twitterCookie` | Twitter cookies (format: `auth_token=...; ct0=...`) | — |
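A sketch of a complete input object combining these parameters — the values are illustrative:

```json
{
  "mode": "search",
  "searchTerms": ["from:NASA since:2024-01-01 filter:images"],
  "searchMode": "Latest",
  "maxResults": 100,
  "twitterCookie": "auth_token=YOUR_TOKEN; ct0=YOUR_CT0"
}
```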
Search operators
Search mode supports Twitter's full advanced search syntax:
| Operator | Example | Description |
|---|---|---|
| `from:user` | `from:elonmusk` | Tweets from a specific user |
| `to:user` | `to:NASA` | Tweets replying to a user |
| `since:date` | `since:2024-01-01` | Tweets after a date |
| `until:date` | `until:2024-12-31` | Tweets before a date |
| `min_faves:N` | `min_faves:100` | Minimum likes |
| `min_retweets:N` | `min_retweets:50` | Minimum retweets |
| `filter:media` | `web scraping filter:media` | Only tweets with media |
| `filter:images` | `apify filter:images` | Only tweets with images |
| `filter:videos` | `AI filter:videos` | Only tweets with videos |
| `lang:code` | `scraping lang:en` | Filter by language |
| `-filter:replies` | `from:NASA -filter:replies` | Exclude replies |
Combine operators freely: `from:elonmusk since:2024-01-01 min_faves:1000 lang:en`
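Because operators combine as plain space-separated tokens, queries are easy to build programmatically. A minimal sketch — the `build_search_query` helper and its keyword names are assumptions, not actor features; the actor just takes the final query string:

```python
def build_search_query(text: str = "", **operators) -> str:
    """Combine free text with Twitter advanced-search operators."""
    # Map friendly keyword names to operator prefixes.
    mapping = {
        "from_user": "from",
        "to_user": "to",
        "since": "since",
        "until": "until",
        "min_faves": "min_faves",
        "min_retweets": "min_retweets",
        "lang": "lang",
    }
    parts = [text] if text else []
    for key, value in operators.items():
        parts.append(f"{mapping[key]}:{value}")
    return " ".join(parts)

print(build_search_query(from_user="elonmusk", since="2024-01-01",
                         min_faves=1000, lang="en"))
# from:elonmusk since:2024-01-01 min_faves:1000 lang:en
```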
Output examples
Tweet
```json
{
  "type": "tweet",
  "id": "1519480761749016577",
  "url": "https://x.com/elonmusk/status/1519480761749016577",
  "text": "Next I'm buying Coca-Cola to put the cocaine back in",
  "createdAt": "Thu Apr 28 00:56:58 +0000 2022",
  "likeCount": 4253891,
  "retweetCount": 588370,
  "replyCount": 168937,
  "quoteCount": 167450,
  "bookmarkCount": 21524,
  "authorUsername": "elonmusk",
  "authorName": "Elon Musk",
  "authorFollowers": 236257999,
  "hashtags": [],
  "mentions": [],
  "mediaUrls": []
}
```
Profile
```json
{
  "type": "profile",
  "username": "elonmusk",
  "name": "Elon Musk",
  "followers": 236258375,
  "following": 1292,
  "tweetsCount": 98776,
  "likesCount": 215478,
  "isBlueVerified": true,
  "joinedAt": "Tue Jun 02 20:12:29 +0000 2009",
  "profilePicture": "https://pbs.twimg.com/profile_images/..._400x400.jpg"
}
```
Follower/Following
```json
{
  "type": "follower",
  "id": "44196397",
  "username": "elonmusk",
  "url": "https://x.com/elonmusk",
  "name": "Elon Musk",
  "bio": "",
  "location": "",
  "followers": 237387109,
  "following": 1301,
  "tweetsCount": 100007,
  "isVerified": false,
  "isBlueVerified": true,
  "joinedAt": "Tue Jun 02 20:12:29 +0000 2009",
  "profilePicture": "https://pbs.twimg.com/profile_images/..._400x400.jpg",
  "sourceUsername": "NASA",
  "scrapedAt": "2026-03-25T04:30:00.000Z"
}
```
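Fields like `likeCount` and `authorFollowers` make quick derived metrics easy to compute from the output. A sketch computing an engagement rate for a scraped tweet — the `engagement_rate` helper and its formula are illustrative, not something the actor returns:

```python
import json

# A trimmed-down tweet record, as it might appear in the dataset output.
sample = (
    '{"likeCount": 4253891, "retweetCount": 588370,'
    ' "replyCount": 168937, "authorFollowers": 236257999}'
)

def engagement_rate(tweet: dict) -> float:
    """(likes + retweets + replies) / author followers, as a percentage."""
    interactions = tweet["likeCount"] + tweet["retweetCount"] + tweet["replyCount"]
    return round(100 * interactions / tweet["authorFollowers"], 2)

tweet = json.loads(sample)
print(engagement_rate(tweet))  # 2.12
```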
How do I get the best results from Twitter Scraper?
- Start small — test with `maxResults: 10` before scaling up
- User timeline order — guest auth returns popular tweets, not chronological. Use search mode with `from:username` + cookies for chronological order
- Endpoint auto-resolution — the actor extracts GraphQL endpoint IDs from X's frontend JS on each run, so it survives API rotations
- Rate limits — Twitter throttles requests; the actor adds delays automatically. Large runs take longer but succeed reliably
- Combine modes — use Search to discover relevant accounts, then User Timeline to deep-dive into specific users' full posting history
- Use advanced search operators — narrow results with `min_faves:100`, `filter:media`, or `lang:en` to get higher-signal data and reduce costs
- Schedule recurring runs — set up daily or weekly scrapes in Apify Console to monitor accounts or keywords over time automatically
How to scrape Twitter without getting blocked
Twitter actively limits access to its data — here's how this scraper stays reliable:
- Direct HTTP, not browser automation — the actor sends raw HTTP requests, mimicking what the X frontend does. This avoids browser fingerprinting and keeps memory usage at 256 MB instead of 1-2 GB.
- Self-healing GraphQL endpoints — Twitter periodically rotates its internal GraphQL endpoint IDs. This scraper re-extracts those IDs from X's frontend JavaScript on every run, so it automatically adapts without manual updates.
- Guest token authentication — profiles and user timelines use guest tokens (no login required), which are less aggressively rate-limited than API keys.
- Automatic rate-limit handling — when Twitter throttles requests, the actor backs off and retries. You don't need to worry about hitting limits — it handles pacing internally.
- Authenticated sessions for search — search requires valid browser cookies (`auth_token` + `ct0`). Cookies last several weeks. If you get auth errors, refresh them from x.com in under a minute.
For large-scale runs (10K+ tweets), spread queries across multiple runs and use the `since:` / `until:` operators to batch by date range.
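The date-batching advice above can be sketched as a small helper that splits one query into `since:`/`until:` windows, one per run. The `batch_by_date` helper is illustrative:

```python
from datetime import date, timedelta

def batch_by_date(query: str, start: date, end: date, days: int = 7):
    """Split one search query into since:/until: windows of `days` length."""
    batches = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + timedelta(days=days), end)
        batches.append(
            f"{query} since:{cursor:%Y-%m-%d} until:{window_end:%Y-%m-%d}"
        )
        cursor = window_end
    return batches

for q in batch_by_date("web scraping", date(2024, 1, 1), date(2024, 1, 15)):
    print(q)
# web scraping since:2024-01-01 until:2024-01-08
# web scraping since:2024-01-08 until:2024-01-15
```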
Twitter data export: how to download tweets
Once a run completes, you can download your scraped tweets in multiple formats:
- JSON — full structured data, best for developers and data pipelines
- CSV — opens directly in Excel or Google Sheets, ideal for analysts
- Excel (.xlsx) — pre-formatted spreadsheet, ready to share
- XML — compatible with legacy enterprise systems
Via Apify Console: Click Export on the dataset page and choose your format.
Via API: Use the dataset export endpoint:
```shell
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv&token=YOUR_TOKEN" \
  --output tweets.csv
```
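The export URL in the cURL command above can also be assembled programmatically. A minimal Python sketch — the `dataset_export_url` helper is hypothetical; only the URL shape comes from the example above:

```python
from urllib.parse import urlencode

def dataset_export_url(dataset_id: str, token: str, fmt: str = "csv") -> str:
    """Build the dataset-items export URL used in the cURL example above."""
    query = urlencode({"format": fmt, "token": token})
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?{query}"

print(dataset_export_url("DATASET_ID", "YOUR_TOKEN"))
# https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv&token=YOUR_TOKEN
```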
Automatic export integrations: Connect the scraper to Google Sheets, Airtable, or a webhook endpoint so data is exported automatically after each run — no manual downloads needed.
API usage
Node.js
```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/twitter-scraper').call({
    mode: 'profiles',
    usernames: ['elonmusk', 'OpenAI'],
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```
Python
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("automation-lab/twitter-scraper").call(run_input={
    "mode": "user-tweets",
    "usernames": ["NASA"],
    "maxResults": 50,
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
print(items)
```
cURL
```shell
curl "https://api.apify.com/v2/acts/automation-lab~twitter-scraper/runs" \
  -X POST \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mode": "profiles", "usernames": ["elonmusk"]}'
```
Integrations
Connect Twitter/X Scraper to your workflow using Apify integrations:
- Google Sheets — export tweets and profiles to a spreadsheet automatically. Useful for tracking follower growth or building engagement reports
- Slack / Discord — get notifications when scraping finishes or when tweets match specific criteria
- Zapier / Make — automate workflows triggered by scraping results, e.g., save viral tweets to Airtable or trigger alerts for brand mentions
- Webhooks — send data to your API endpoint automatically for custom processing
- Scheduled runs — set up recurring scrapes (hourly, daily, weekly) to monitor accounts or search queries over time
- Data warehouses — pipe results to BigQuery, Snowflake, or PostgreSQL for large-scale analytics
- AI/LLM pipelines — feed tweets into sentiment analysis, topic classification, or trend detection models
Use with AI agents via MCP
Twitter/X Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP). This lets you use natural language to scrape data — just ask your AI assistant and it will configure and run the scraper for you.
Setup for Claude Code
```shell
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/twitter-scraper"
```
Setup for Claude Desktop, Cursor, or VS Code
Add this to your MCP config file:
```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/twitter-scraper"
    }
  }
}
```
Your AI assistant will use OAuth to authenticate with your Apify account on first use.
Example prompts
Once connected, try asking your AI assistant:
- "Get the latest 200 tweets from @elonmusk"
- "Search Twitter for tweets about 'climate change' from the past week"
- "Get the first 500 followers of @NASA with their profile details"
Learn more in the Apify MCP documentation.
Legality
This actor scrapes publicly available data from Twitter/X. It does not access private accounts, direct messages, or any data behind login walls.
Web scraping of public data is generally legal in most jurisdictions. Always respect Twitter's Terms of Service and applicable data protection laws (GDPR, CCPA) when using scraped data.
FAQ
Does this need a Twitter API key? No. Profiles and user timelines use guest authentication. Search and tweets-by-URL use your browser cookies (not API keys).
Why do some modes need cookies? Twitter has locked down its API over the past year. Guest tokens still work for profiles and timelines, but search and individual tweet lookup require authenticated sessions. This is a Twitter limitation, not ours. We use direct HTTP (no browser) to keep costs low — $0.30/1K tweets vs $0.40/1K for the market leader.
Can I scrape private accounts? No. Only publicly available data.
How many tweets can I scrape? Up to 5,000 per user in user-tweets mode. Search mode supports pagination up to 5,000 results per query.
What if Twitter rotates their API? The actor dynamically extracts endpoint IDs from X's frontend JavaScript on each run. It automatically adapts to API changes.
Why are some tweets-by-URL empty? Some tweets return empty data even with valid cookies. This happens with restricted, age-gated, or visibility-limited tweets. This is a Twitter API limitation.
Can I scrape a user's complete follower list?
Yes. Use `followers` or `following` mode with the target username. Set `maxResults` to control how many to extract. Both modes require cookies. Large accounts (1M+ followers) may take several minutes due to rate limiting — the actor handles pacing automatically.
How long do cookies last? Typically several weeks. If you get authentication errors, refresh your cookies from x.com.
The scraper returns 0 results for user-tweets mode — what's wrong?
Guest authentication for user-tweets mode returns popular tweets, not all tweets. If the user hasn't recently posted content that ranks as popular, the result set may be small. Try search mode with `from:username` and cookies for more comprehensive results.
I'm getting "Could not authenticate" errors — how do I fix this?
Your cookies have likely expired. Log in to x.com in your browser again, open DevTools (F12) > Application > Cookies > x.com, and copy fresh values for `auth_token` and `ct0`. Paste them in the format: `auth_token=YOUR_TOKEN; ct0=YOUR_CT0`.
Related scrapers
- Instagram Scraper — Scrape Instagram posts, profiles, comments, and hashtags
- Threads Scraper — Extract posts and profiles from Meta's Threads
- TikTok Scraper — Scrape TikTok videos, profiles, and trending hashtag feeds
- Reddit Scraper — Scrape posts, comments, and communities from Reddit
- Bluesky Scraper — Scrape Bluesky posts, profiles, and search results
- Pinterest Scraper — Scrape Pinterest pins, boards, and profile data