Twitter/X Scraper

Pricing: Pay per event · Developer: Stas Persiianenko · Maintained by Community

Scrape Twitter/X profiles, tweets, user timelines, follower lists, following lists, and search results. Extract follower counts, engagement stats, media URLs, hashtags, and more — all in structured JSON.

What does Twitter/X Scraper do?

This actor extracts public data from Twitter/X:

  • Profiles — follower counts, bio, join date, verification status, and more
  • User timelines — latest tweets from any user's timeline with pagination
  • Tweets by URL — full tweet data from specific URLs including likes, retweets, media
  • Search — find tweets by keyword, hashtag, or advanced query with full Twitter search syntax
  • Followers — extract a user's complete follower list with profile details
  • Following — extract who a user follows with full profile data

What works without login?

| Mode | Cookies needed? | Notes |
|------|-----------------|-------|
| Profiles | No | Guest authentication, ~2 sec per profile |
| User timelines | No | Guest auth returns popular tweets (not chronological) |
| Tweets by URL | Recommended | Works without cookies for some tweets, but many require auth |
| Search | Yes | Twitter requires login for search — no way around it |
| Followers | Yes | Requires cookies — Twitter locks down follower lists |
| Following | Yes | Requires cookies — same as followers |

Why do some modes need cookies? Twitter has progressively locked down their API. Profiles and timelines still work with guest tokens, but individual tweet lookup and search require authenticated sessions. Competitors either require cookies too (they just don't always tell you upfront) or use browser automation which costs 5-10x more per result. We use direct HTTP requests — no browser overhead — which keeps costs low.

Getting cookies takes 30 seconds: open x.com in your browser, grab two values from DevTools, paste them in. Instructions below.

Why use this scraper?

  • $0.30 per 1,000 tweets — cheaper than most competitors ($0.40/1K for the market leader)
  • No API keys needed — profiles and timelines work without any login
  • No browser overhead — pure HTTP at 256 MB memory, fast and cheap
  • Self-healing endpoints — dynamically resolves GraphQL endpoint IDs from X's frontend JS, survives API rotations automatically
  • Rich data — likes, retweets, replies, views, bookmarks, quotes, media URLs, hashtags, mentions
  • Full search syntax — from:user, since:date, min_faves:N, filter:media, lang:code, and more

Use cases

Marketing and PR teams

  • Social media monitoring — track brand mentions, competitor activity, or industry trends in real time
  • Campaign analysis — measure tweet performance, hashtag reach, and engagement during campaigns
  • Crisis monitoring — detect negative sentiment spikes and respond quickly

Sales and growth teams

  • Lead generation — find potential customers through profile analysis and relevant conversations
  • Audience research — understand what your target market talks about and who they follow

Research and analysis

  • Academic research — collect tweets on specific topics for sentiment analysis, NLP, or social network studies
  • Content analysis — study viral tweets, media usage, and hashtag trends
  • Influencer research — analyze follower counts, engagement rates, and posting patterns

Developers and data teams

  • Archiving — save specific tweets by URL for permanent record
  • Dataset building — create structured Twitter datasets for ML training or analytics dashboards

How much does it cost to scrape Twitter/X?

Pay-per-event pricing. You only pay for results you get.

| Event | Price | Notes |
|-------|-------|-------|
| Actor start | $0.005 | One-time per run |
| Tweet scraped | $0.0003 | Per tweet (all modes) |
| Profile scraped | $0.002 | Per profile |
| Follower/following scraped | $0.001 | Per user in follower/following lists |

Cost examples:

| Task | Cost | Comparison |
|------|------|------------|
| 10 profiles | $0.03 | |
| 100 tweets (user timeline) | $0.04 | Competitors: ~$0.04-0.05 |
| 1,000 tweets (search) | $0.31 | Competitors: $0.25-0.40 |
| 10,000 tweets | $3.01 | Competitors: $2.50-4.00 |

The Apify free tier ($5/month) gets you ~16,000 tweets or ~2,400 profiles per month.
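These figures follow directly from the per-event prices; as a sanity check, here is the arithmetic as a small sketch (the helper name is ours, not part of the actor):

```python
def estimate_cost(tweets=0, profiles=0, followers=0, runs=1):
    """Estimate run cost in USD from the per-event prices in the table above."""
    return (runs * 0.005          # actor start, one-time per run
            + tweets * 0.0003     # per tweet scraped
            + profiles * 0.002    # per profile scraped
            + followers * 0.001)  # per follower/following scraped

# 1,000 tweets in one run come to about $0.31; 10 profiles to about $0.03.
cost_search = estimate_cost(tweets=1_000)
cost_profiles = estimate_cost(profiles=10)
```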

How to scrape Twitter/X

  1. Open Twitter/X Scraper in Apify Console
  2. Pick a mode: Profiles, User timeline, Tweets by URL, or Search
  3. Enter usernames, tweet URLs, or search queries
  4. For search or tweets-by-URL: paste your Twitter cookies (see below)
  5. Click Start and download results as JSON, CSV, or Excel

How to get Twitter cookies

Required for search mode, recommended for tweets-by-URL.

  1. Log in to x.com in your browser
  2. Open DevTools (F12) → Application tab → Cookies → https://x.com
  3. Find and copy the values of auth_token and ct0
  4. Paste in the input field as: auth_token=YOUR_TOKEN; ct0=YOUR_CT0

Cookies typically last several weeks before you need to refresh them.
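If you set the cookie input programmatically rather than pasting it in Console, the expected string can be built with a one-line helper (a sketch; the placeholder values stand in for what you copy from DevTools):

```python
def format_twitter_cookie(auth_token: str, ct0: str) -> str:
    """Build the twitterCookie input value in the format the actor expects."""
    return f"auth_token={auth_token}; ct0={ct0}"

# Example with placeholder values; pass as {"twitterCookie": cookie} in the input.
cookie = format_twitter_cookie("YOUR_TOKEN", "YOUR_CT0")
```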

What data can I extract from Twitter/X?

Tweet fields

| Field | Type | Description |
|-------|------|-------------|
| text | string | Full tweet text (including long-form notes) |
| url | string | Permanent tweet URL |
| likeCount | number | Number of likes |
| retweetCount | number | Number of retweets |
| replyCount | number | Number of replies |
| viewCount | number | Number of views (timeline/search modes) |
| bookmarkCount | number | Number of bookmarks |
| quoteCount | number | Number of quote tweets |
| createdAt | string | Tweet creation date |
| authorUsername | string | Author's username |
| authorFollowers | number | Author's follower count |
| hashtags | array | Hashtags used |
| mentions | array | Mentioned users |
| mediaUrls | array | Photo and video URLs |
| mediaType | string | photo, video, gif, or null |
| isReply / isRetweet / isQuote | boolean | Tweet type flags |
| conversationId | string | Thread root tweet ID |

Profile fields

| Field | Type | Description |
|-------|------|-------------|
| username | string | Twitter handle |
| name | string | Display name |
| bio | string | Profile biography |
| followers | number | Follower count |
| following | number | Following count |
| tweetsCount | number | Total tweets posted |
| likesCount | number | Total likes given |
| isBlueVerified | boolean | Blue verification status |
| joinedAt | string | Account creation date |
| profilePicture | string | High-res profile image URL |
| coverPicture | string | Banner image URL |
| website | string | Website from profile |
| location | string | Location from profile |

How do I scrape tweets and Twitter profiles?

| Parameter | Description | Default |
|-----------|-------------|---------|
| mode | profiles, tweets, user-tweets, search, followers, or following | profiles |
| usernames | Twitter usernames (for profiles and user-tweets modes) | |
| tweetUrls | Tweet URLs or numeric IDs (for tweets mode) | |
| searchTerms | Search queries (for search mode) | |
| searchMode | Sort order for search: Top or Latest | Top |
| maxResults | Max results per query/user | 50 |
| twitterCookie | Twitter cookies (format: auth_token=...; ct0=...) | |

Search operators

Search mode supports Twitter's full advanced search syntax:

| Operator | Example | Description |
|----------|---------|-------------|
| from:user | from:elonmusk | Tweets from a specific user |
| to:user | to:NASA | Tweets replying to a user |
| since:date | since:2024-01-01 | Tweets after a date |
| until:date | until:2024-12-31 | Tweets before a date |
| min_faves:N | min_faves:100 | Minimum likes |
| min_retweets:N | min_retweets:50 | Minimum retweets |
| filter:media | web scraping filter:media | Only tweets with media |
| filter:images | apify filter:images | Only tweets with images |
| filter:videos | AI filter:videos | Only tweets with videos |
| lang:code | scraping lang:en | Filter by language |
| -filter:replies | from:NASA -filter:replies | Exclude replies |

Combine operators freely: from:elonmusk since:2024-01-01 min_faves:1000 lang:en
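Because operators are plain strings joined with spaces, queries can also be composed programmatically; a minimal sketch (the helper is ours, not part of the actor):

```python
def build_query(*terms, **operators):
    """Join free-text terms with operator:value pairs into one search query."""
    parts = list(terms)
    parts += [f"{name}:{value}" for name, value in operators.items()]
    return " ".join(parts)

# "from" is a Python keyword, so operator names are passed via dict unpacking.
query = build_query("web scraping", **{"from": "elonmusk", "since": "2024-01-01",
                                       "min_faves": "1000", "lang": "en"})
```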

Output examples

Tweet

```json
{
  "type": "tweet",
  "id": "1519480761749016577",
  "url": "https://x.com/elonmusk/status/1519480761749016577",
  "text": "Next I'm buying Coca-Cola to put the cocaine back in",
  "createdAt": "Thu Apr 28 00:56:58 +0000 2022",
  "likeCount": 4253891,
  "retweetCount": 588370,
  "replyCount": 168937,
  "quoteCount": 167450,
  "bookmarkCount": 21524,
  "authorUsername": "elonmusk",
  "authorName": "Elon Musk",
  "authorFollowers": 236257999,
  "hashtags": [],
  "mentions": [],
  "mediaUrls": []
}
```

Profile

```json
{
  "type": "profile",
  "username": "elonmusk",
  "name": "Elon Musk",
  "followers": 236258375,
  "following": 1292,
  "tweetsCount": 98776,
  "likesCount": 215478,
  "isBlueVerified": true,
  "joinedAt": "Tue Jun 02 20:12:29 +0000 2009",
  "profilePicture": "https://pbs.twimg.com/profile_images/..._400x400.jpg"
}
```

Follower/Following

```json
{
  "type": "follower",
  "id": "44196397",
  "username": "elonmusk",
  "url": "https://x.com/elonmusk",
  "name": "Elon Musk",
  "bio": "",
  "location": "",
  "followers": 237387109,
  "following": 1301,
  "tweetsCount": 100007,
  "isVerified": false,
  "isBlueVerified": true,
  "joinedAt": "Tue Jun 02 20:12:29 +0000 2009",
  "profilePicture": "https://pbs.twimg.com/profile_images/..._400x400.jpg",
  "sourceUsername": "NASA",
  "scrapedAt": "2026-03-25T04:30:00.000Z"
}
```

How do I get the best results from Twitter Scraper?

  • Start small — test with maxResults: 10 before scaling up
  • User timeline order — guest auth returns popular tweets, not chronological. Use search mode with from:username + cookies for chronological order
  • Endpoint auto-resolution — the actor extracts GraphQL endpoint IDs from X's frontend JS on each run, so it survives API rotations
  • Rate limits — Twitter throttles requests; the actor adds delays automatically. Large runs take longer but succeed reliably
  • Combine modes — use Search to discover relevant accounts, then User Timeline to deep-dive into specific users' full posting history
  • Use advanced search operators — narrow results with min_faves:100, filter:media, or lang:en to get higher-signal data and reduce costs
  • Schedule recurring runs — set up daily or weekly scrapes in Apify Console to monitor accounts or keywords over time automatically
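The combine-modes tip can be sketched as a two-step pipeline: run a search, collect the unique authors, then feed them into user-tweets mode. The actor calls are elided here; the author-extraction step is pure, and the field name follows the output schema above:

```python
def authors_from_search(items, limit=20):
    """Collect unique authorUsername values from search results, in order seen."""
    seen, authors = set(), []
    for item in items:
        username = item.get("authorUsername")
        if username and username not in seen:
            seen.add(username)
            authors.append(username)
        if len(authors) >= limit:
            break
    return authors

# search_items = <dataset items from a search-mode run>
# next_input = {"mode": "user-tweets", "usernames": authors_from_search(search_items)}
```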

How to scrape Twitter without getting blocked

Twitter actively limits access to its data — here's how this scraper stays reliable:

  • Direct HTTP, not browser automation — the actor sends raw HTTP requests, mimicking what the X frontend does. This avoids browser fingerprinting and keeps memory usage at 256 MB instead of 1-2 GB.
  • Self-healing GraphQL endpoints — Twitter periodically rotates its internal GraphQL endpoint IDs. This scraper re-extracts those IDs from X's frontend JavaScript on every run, so it automatically adapts without manual updates.
  • Guest token authentication — profiles and user timelines use guest tokens (no login required), which are less aggressively rate-limited than API keys.
  • Automatic rate-limit handling — when Twitter throttles requests, the actor backs off and retries. You don't need to worry about hitting limits — it handles pacing internally.
  • Authenticated sessions for search — search requires valid browser cookies (auth_token + ct0). Cookies last several weeks. If you get auth errors, refresh them from x.com in under a minute.

For large-scale runs (10K+ tweets), spread queries across multiple runs and use the since: / until: operators to batch by date range.
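Generating those date-range batches is mechanical; a minimal sketch using the standard library (the helper name and batch size are ours):

```python
from datetime import date, timedelta

def date_batches(start, end, days=7):
    """Split [start, end) into since:/until: operator pairs for batched runs."""
    batches, cursor = [], start
    while cursor < end:
        nxt = min(cursor + timedelta(days=days), end)
        batches.append(f"since:{cursor.isoformat()} until:{nxt.isoformat()}")
        cursor = nxt
    return batches

# One search query per week of January 2024:
queries = ["web scraping " + b for b in date_batches(date(2024, 1, 1), date(2024, 2, 1))]
```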

Twitter data export: how to download tweets

Once a run completes, you can download your scraped tweets in multiple formats:

  • JSON — full structured data, best for developers and data pipelines
  • CSV — opens directly in Excel or Google Sheets, ideal for analysts
  • Excel (.xlsx) — pre-formatted spreadsheet, ready to share
  • XML — compatible with legacy enterprise systems

Via Apify Console: Click Export on the dataset page and choose your format.

Via API: Use the dataset export endpoint:

```shell
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv&token=YOUR_TOKEN" \
  --output tweets.csv
```

Automatic export integrations: Connect the scraper to Google Sheets, Airtable, or a webhook endpoint so data is exported automatically after each run — no manual downloads needed.
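The same export endpoint can be called from Python; a sketch using only the standard library (DATASET_ID and the token are placeholders, and the helper name is ours):

```python
from urllib.parse import urlencode

def export_url(dataset_id, token, fmt="csv"):
    """Build the dataset-items export URL used in the curl example above."""
    query = urlencode({"format": fmt, "token": token})
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?{query}"

# Download with urllib.request.urlopen(export_url("DATASET_ID", "YOUR_TOKEN"))
# and write the response body to tweets.csv.
```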

API usage

Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('automation-lab/twitter-scraper').call({
    mode: 'profiles',
    usernames: ['elonmusk', 'OpenAI'],
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items);
```

Python

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("automation-lab/twitter-scraper").call(run_input={
    "mode": "user-tweets",
    "usernames": ["NASA"],
    "maxResults": 50,
})

items = client.dataset(run["defaultDatasetId"]).list_items().items
print(items)
```

cURL

```shell
curl "https://api.apify.com/v2/acts/automation-lab~twitter-scraper/runs" \
  -X POST \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mode": "profiles", "usernames": ["elonmusk"]}'
```

Integrations

Connect Twitter/X Scraper to your workflow using Apify integrations:

  • Google Sheets — export tweets and profiles to a spreadsheet automatically. Useful for tracking follower growth or building engagement reports
  • Slack / Discord — get notifications when scraping finishes or when tweets match specific criteria
  • Zapier / Make — automate workflows triggered by scraping results, e.g., save viral tweets to Airtable or trigger alerts for brand mentions
  • Webhooks — send data to your API endpoint automatically for custom processing
  • Scheduled runs — set up recurring scrapes (hourly, daily, weekly) to monitor accounts or search queries over time
  • Data warehouses — pipe results to BigQuery, Snowflake, or PostgreSQL for large-scale analytics
  • AI/LLM pipelines — feed tweets into sentiment analysis, topic classification, or trend detection models

Use with AI agents via MCP

Twitter/X Scraper is available as a tool for AI assistants that support the Model Context Protocol (MCP). This lets you use natural language to scrape data — just ask your AI assistant and it will configure and run the scraper for you.

Setup for Claude Code

```shell
claude mcp add --transport http apify "https://mcp.apify.com?tools=automation-lab/twitter-scraper"
```

Setup for Claude Desktop, Cursor, or VS Code

Add this to your MCP config file:

```json
{
  "mcpServers": {
    "apify": {
      "url": "https://mcp.apify.com?tools=automation-lab/twitter-scraper"
    }
  }
}
```

Your AI assistant will use OAuth to authenticate with your Apify account on first use.

Example prompts

Once connected, try asking your AI assistant:

  • "Get the latest 200 tweets from @elonmusk"
  • "Search Twitter for tweets about 'climate change' from the past week"
  • "Get the first 500 followers of @NASA with their profile details"

Learn more in the Apify MCP documentation.

Legality

This actor scrapes publicly available data from Twitter/X. It does not access private accounts, direct messages, or any data behind login walls.

Web scraping of public data is generally legal in most jurisdictions. Always respect Twitter's Terms of Service and applicable data protection laws (GDPR, CCPA) when using scraped data.

FAQ

Does this need a Twitter API key? No. Profiles and user timelines use guest authentication. Search and tweets-by-URL use your browser cookies (not API keys).

Why do some modes need cookies? Twitter has locked down their API over the past year. Guest tokens still work for profiles and timelines, but search and individual tweet lookup require authenticated sessions. This is a Twitter limitation, not ours. We use direct HTTP (no browser) to keep costs low — $0.30/1K tweets vs $0.40/1K for the market leader.

Can I scrape private accounts? No. Only publicly available data.

How many tweets can I scrape? Up to 5,000 per user in user-tweets mode. Search mode supports pagination up to 5,000 results per query.

What if Twitter rotates their API? The actor dynamically extracts endpoint IDs from X's frontend JavaScript on each run. It automatically adapts to API changes.

Why are some tweets-by-URL empty? Some tweets return empty data even with valid cookies. This happens with restricted, age-gated, or visibility-limited tweets. This is a Twitter API limitation.

Can I scrape a user's complete follower list? Yes. Use followers or following mode with the target username. Set maxResults to control how many to extract. Both modes require cookies. Large accounts (1M+ followers) may take several minutes due to rate limiting — the actor handles pacing automatically.
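A minimal input for that follower scrape, with parameter names taken from the input table above (values are placeholders):

```python
# Minimal followers-mode input; cookies are required for this mode.
run_input = {
    "mode": "followers",
    "usernames": ["NASA"],
    "maxResults": 500,
    "twitterCookie": "auth_token=YOUR_TOKEN; ct0=YOUR_CT0",
}
```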

How long do cookies last? Typically several weeks. If you get authentication errors, refresh your cookies from x.com.

The scraper returns 0 results for user-tweets mode — what's wrong? Guest authentication for user-tweets mode returns popular tweets, not all tweets. If the user hasn't recently posted popular content, the result set may be small. Try search mode with from:username and cookies for more comprehensive results.

I'm getting "Could not authenticate" errors — how do I fix this? Your cookies have likely expired. Log in to x.com in your browser again, open DevTools (F12) > Application > Cookies > x.com, and copy fresh values for auth_token and ct0. Paste them in the format: auth_token=YOUR_TOKEN; ct0=YOUR_CT0.