Twitter X Posts Scraper
Pricing: $19.99/month + usage
🐦 Twitter X Posts Scraper pulls tweets at scale from public profiles — text, timestamps, author, likes, retweets, replies, views, hashtags, mentions & media URLs. 🔎 Ideal for social listening, sentiment & competitive research. 🚀 Export clean JSON/CSV for analytics.
Rating: 0.0 (0 reviews)
Developer: ScrapeFlow
Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified 2 days ago
Twitter X Posts Scraper
Twitter X Posts Scraper is an Apify actor that extracts tweets at scale from Twitter/X by username, profile URL, or numeric user ID. It solves the hassle of manual copying by delivering clean, structured tweet data — including text, engagement metrics, author metadata, hashtags, mentions, and media URLs — ready for analytics and automation. Built for marketers, developers, data analysts, and researchers, this scraper enables social listening, sentiment analysis, and competitive research workflows at scale.
What data / output can you get?
Below are real fields produced by the actor, as pushed to the Apify dataset during a run.
| Data field | Description | Example value |
|---|---|---|
| id | Tweet identifier (rest_id) | "1988877569597260072" |
| url | Canonical Tweet URL | "https://x.com/elonmusk/status/1988877569597260072" |
| user_posted | Author handle (screen_name) | "elonmusk" |
| name | Author display name | "Elon Musk" |
| description | Full tweet text | "@tetsuoai Long press on any image to turn it into a video..." |
| date_posted | ISO timestamp (UTC) | "2025-11-13T07:52:18.000Z" |
| likes | Like count | 1729 |
| replies | Reply count | 554 |
| reposts | Retweet/Repost count | 368 |
| quotes | Quote count | 38 |
| views | View/impression count | "1399060" |
| bookmarks | Bookmark count | 213 |
| is_verified | Blue verification flag | true |
| followers | Author follower count | 229031060 |
| following | Author following count | 1226 |
| posts_count | Author total posts | 89153 |
| profile_image_link | Profile image URL | "https://pbs.twimg.com/profile_images/…/normal.jpg" |
| biography | Author bio text | "" |
| hashtags | Hashtag texts from tweet | ["AI","video"] |
| tagged_users | Mentioned usernames | ["tetsuoai"] |
| photos | First photo URL if present | null |
| videos | Sorted MP4 video URLs (highest bitrate first) | ["https://video.twimg.com/.../file.mp4"] |
| quoted_post | JSON object with quoted post details | {"post_id": "123...", "description": "..."} |
| external_url | Author external URL from profile | null |
| input | Object with the tweet’s input link | {"url": "https://x.com/elonmusk/status/1988877569597260072/"} |
Notes:
- Export results from the Apify dataset in JSON, CSV, or Excel.
- Some fields may be null when not present (e.g., photos, videos, hashtags, quoted_post).
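As a sketch of how a downstream script might consume records with the shape above, note that `views` arrives as a string while the other counters are integers. The `engagement` helper below is purely illustrative (not part of the actor); field names are taken from the table:

```python
def engagement(record: dict) -> dict:
    """Summarize one dataset record (field names per the table above)."""
    views = int(record.get("views") or 0)  # views is a string in the output
    interactions = sum(
        record.get(k) or 0
        for k in ("likes", "replies", "reposts", "quotes", "bookmarks")
    )
    rate = interactions / views if views else 0.0
    return {"views": views, "interactions": interactions, "rate": round(rate, 4)}


sample = {"likes": 1729, "replies": 554, "reposts": 368, "quotes": 38,
          "bookmarks": 213, "views": "1399060"}
print(engagement(sample))  # {'views': 1399060, 'interactions': 2902, 'rate': 0.0021}
```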
Key features
- 🚦 **Dynamic authorization capture:** automatically captures the required authorization header from X network requests before API calls. If blocked, the run stops with a clear message.
- 🔁 **Proxy fallback logic:** starts without a proxy and automatically falls back to Apify datacenter and then residential proxies (with retries) if requests fail, improving stability on X.
- 🧮 **Flexible sorting:** choose how results are ordered before saving: recent (newest first), oldest (oldest first), or popular (most liked first).
- 🎯 **Per-profile tweet limits:** control results with Max Tweets per User (1–100) to balance depth and speed.
- 🧵 **Rich tweet + author metadata:** collects tweet text, timestamps, likes, replies, reposts, quotes, views, bookmarks, hashtags, mentions, media URLs, and author info (followers, following, posts count, profile image, bio).
- 📦 **Clean, structured outputs:** pushes a consistent JSON object per tweet to the Apify dataset, ready for BI tools, notebooks, and automation.
- 🧰 **Developer-friendly:** integrate runs via the Apify API, schedule jobs, and pipe data to downstream systems.
- 🔌 **Workflow-ready:** export to JSON/CSV/Excel and connect the dataset to your analytics stack or automation platforms.
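The three sort modes described above can be reproduced on exported records with ordinary sorting. This is a minimal sketch, assuming each record carries `date_posted` (ISO 8601 timestamps, so lexicographic order matches chronological order) and `likes`; the actor's internal implementation may differ:

```python
def sort_tweets(tweets: list, order: str = "recent") -> list:
    """Order records the way the actor does before saving them."""
    if order == "recent":   # newest first; ISO timestamps sort lexicographically
        return sorted(tweets, key=lambda t: t["date_posted"], reverse=True)
    if order == "oldest":   # oldest first
        return sorted(tweets, key=lambda t: t["date_posted"])
    if order == "popular":  # most liked first
        return sorted(tweets, key=lambda t: t.get("likes") or 0, reverse=True)
    raise ValueError(f"unknown sortOrder: {order}")


tweets = [
    {"id": "1", "date_posted": "2025-11-13T07:52:18.000Z", "likes": 1729},
    {"id": "2", "date_posted": "2025-11-12T10:00:00.000Z", "likes": 5000},
]
print([t["id"] for t in sort_tweets(tweets, "popular")])  # ['2', '1']
```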
How to use Twitter X Posts Scraper - step by step
1. **Create or log in to your Apify account.** You'll run the actor from the Apify platform UI or via API.
2. **Open the Twitter X Posts Scraper actor.** Find "Twitter X Posts Scraper" in the Apify Store and click "Try for free".
3. **Add your inputs to startUrls.** Accepts:
   - Profile URLs: https://x.com/username or https://twitter.com/username
   - Usernames: username or @username
   - Numeric user IDs: "1234567890"
   Note: full tweet URLs (…/status/ID) are detected and skipped. Provide profiles/usernames instead.
4. **Choose Sort Order.** Set sortOrder to one of: recent, oldest, popular.
5. **Set Max Tweets per User.** Limit how many tweets to collect per input with maxTweets (1–100).
6. **(Optional) Configure Apify proxies.** By default, no Apify proxy is used. If requests fail, the actor can automatically switch to datacenter and then residential proxies for retries.
7. **Run the actor.** The actor resolves user IDs, fetches tweets via X APIs, sorts according to your preference, and writes each result to the dataset.
8. **Export your data.** Open the run's Dataset and export to JSON, CSV, or Excel for analysis or pipelines.
Pro Tip: Avoid adding tweet URLs in startUrls. Supply usernames or profile URLs for best results and continuous pagination.
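The input rules above (accept profile URLs, handles, and numeric IDs; skip tweet URLs) can be sketched as a small normalizer. This is an illustration of the documented behavior, not the actor's actual code:

```python
import re

# Tweet permalinks (…/status/ID) are detected and skipped by the actor.
STATUS_RE = re.compile(r"(?:twitter|x)\.com/[^/]+/status/\d+")
PROFILE_RE = re.compile(r"(?:twitter|x)\.com/([A-Za-z0-9_]+)$")


def classify_start_url(value: str):
    """Return (kind, normalized value), or None for skipped tweet URLs."""
    v = value.strip().rstrip("/")
    if STATUS_RE.search(v):
        return None                        # tweet URL: skipped
    m = PROFILE_RE.search(v)
    if m:
        return ("username", m.group(1))    # profile URL -> handle
    if v.isdigit():
        return ("user_id", v)              # numeric user ID
    return ("username", v.lstrip("@"))     # bare handle or @handle


print(classify_start_url("https://x.com/BarackObama"))   # ('username', 'BarackObama')
print(classify_start_url("@username"))                   # ('username', 'username')
print(classify_start_url("https://x.com/a/status/123"))  # None
```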
Use cases
| Use case | Description |
|---|---|
| Social media analytics & reporting | Track engagement (likes, replies, quotes, reposts, views) and content performance across key profiles to inform strategy. |
| Sentiment & NLP datasets | Collect clean tweet text with hashtags, mentions, and media links to power labeling, training, and inference. |
| Competitive intelligence | Monitor competitor posting habits, content themes, and engagement over time. |
| Brand monitoring & social listening | Capture responses and mentions via tagged users and quotes to understand customer feedback. |
| Research & journalism | Build auditable datasets of public posts with timestamps and author context for studies or stories. |
| API pipelines & automation | Trigger runs via the Apify API and export structured records directly to downstream tools. |
Why choose Twitter X Posts Scraper?
This actor focuses on precision, structured outputs, and resilient collection from public Twitter/X profiles.
- 🧠 Accurate, structured fields
- 🌍 Scales across multiple profiles with per-profile limits
- 🧰 Developer access via the Apify API
- 🛡️ Proxy fallback for higher resiliency on X
- 💸 Efficient, clean data with minimal post-processing
- 🔌 Integrations-ready exports (JSON/CSV/Excel)
Compared to ad-hoc scripts or unstable browser extensions, this production-ready Apify actor delivers consistent datasets and proxy-aware reliability for ongoing workflows.
Is it legal / ethical to use Twitter X Posts Scraper?
Yes — when done responsibly. This actor collects publicly available Twitter/X data and does not access private profiles. Users should:
- Scrape public information only and respect Twitter/X terms.
- Comply with applicable data protection laws (e.g., GDPR, CCPA).
- Avoid misuse of data and excessive request volumes.
- Consult legal counsel for edge cases or special jurisdictions.
Input parameters & output format
Example JSON input
```json
{
  "startUrls": ["elonmusk", "@username", "https://x.com/BarackObama"],
  "sortOrder": "recent",
  "maxTweets": 10,
  "proxyConfiguration": { "useApifyProxy": false }
}
```
Parameter reference:
- startUrls (array, required)
- Description: Twitter/X profile URLs, usernames, or numeric user IDs. Add one value per line (e.g., https://x.com/username, username, or @username).
- Default: none
- sortOrder (string, optional)
- Description: Ordering of tweets before saving. Options: recent, oldest, popular.
- Default: "recent"
- maxTweets (integer, optional)
- Description: Max tweets to collect per profile/input. Allowed range 1–100.
- Default: 10
- proxyConfiguration (object, optional)
- Description: Apify proxy settings. Default uses no Apify proxy. Actor can fall back to datacenter then residential proxies if requests fail.
- Default: {"useApifyProxy": false}
Notes:
- Tweet URLs (…/status/ID) in startUrls are detected and skipped; use profile URLs, usernames, or numeric user IDs.
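A hedged sketch of validating a run input against the parameter reference above before starting a run. Clamping `maxTweets` into range is an assumption here; the actor may instead reject out-of-range values:

```python
VALID_SORT = {"recent", "oldest", "popular"}


def validate_input(run_input: dict) -> dict:
    """Fill documented defaults and sanity-check the documented parameters."""
    if not run_input.get("startUrls"):
        raise ValueError("startUrls is required")
    sort_order = run_input.get("sortOrder", "recent")
    if sort_order not in VALID_SORT:
        raise ValueError(f"sortOrder must be one of {sorted(VALID_SORT)}")
    max_tweets = int(run_input.get("maxTweets", 10))
    max_tweets = max(1, min(100, max_tweets))  # allowed range 1-100 (clamp: assumption)
    return {
        "startUrls": run_input["startUrls"],
        "sortOrder": sort_order,
        "maxTweets": max_tweets,
        "proxyConfiguration": run_input.get("proxyConfiguration",
                                            {"useApifyProxy": False}),
    }
```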
Example JSON output
```json
[
  {
    "id": "1988877569597260072",
    "url": "https://x.com/elonmusk/status/1988877569597260072",
    "user_posted": "elonmusk",
    "name": "Elon Musk",
    "description": "@tetsuoai Long press on any image to turn it into a video in less than 30 seconds https://t.co/Nsp7Ba0flp",
    "date_posted": "2025-11-13T07:52:18.000Z",
    "likes": 1729,
    "replies": 554,
    "reposts": 368,
    "quotes": 38,
    "views": "1399060",
    "bookmarks": 213,
    "is_verified": true,
    "followers": 229031060,
    "following": 1226,
    "posts_count": 89153,
    "profile_image_link": "https://pbs.twimg.com/profile_images/1983681414370619392/oTT3nm5Z_normal.jpg",
    "biography": "",
    "hashtags": null,
    "tagged_users": ["tetsuoai"],
    "photos": null,
    "videos": ["https://video.twimg.com/amplify_video/1988877511368019968/vid/avc1/576x856/34pcJSQSXqqM4JRQ.mp4?tag=23"],
    "quoted_post": {
      "data_posted": null,
      "description": null,
      "post_id": null,
      "profile_id": null,
      "profile_name": null,
      "url": null,
      "videos": null
    },
    "external_url": null,
    "input": { "url": "https://x.com/elonmusk/status/1988877569597260072/" }
  }
]
```
Field notes:
- photos is a single URL string when a photo is present; null otherwise.
- videos is an array of mp4 URLs sorted by bitrate (highest first) when present; null otherwise.
- quoted_post contains nested details for quoted tweets when available; values may be null if not present.
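Because list-valued fields (`hashtags`, `videos`, `tagged_users`) and nested objects don't map directly onto CSV cells, a flattening pass like the one below can help when building your own exports. Apify's built-in CSV/Excel export already handles this for you; this sketch just shows one possible convention (semicolon-joined lists, stringified objects):

```python
import csv
import io


def flatten(record: dict) -> dict:
    """Make a record CSV-friendly: join lists, stringify objects, blank out nulls."""
    out = {}
    for key, value in record.items():
        if isinstance(value, list):
            out[key] = ";".join(map(str, value))
        elif isinstance(value, dict):
            out[key] = str(value)  # nested objects (quoted_post, input) as text
        else:
            out[key] = "" if value is None else value
    return out


record = {"id": "1", "hashtags": ["AI", "video"], "photos": None,
          "tagged_users": ["tetsuoai"]}
row = flatten(record)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```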
FAQ
Do I need to input tweet URLs or usernames?
Use usernames, @handles, profile URLs, or numeric user IDs. Tweet URLs (…/status/ID) are detected and skipped, so provide profiles/usernames for data collection.
How many tweets per user can it scrape?
You can set maxTweets between 1 and 100 per input. The actor paginates through the user timeline until it reaches your limit or no more pages are available.
What sort options are supported?
Set sortOrder to one of: recent (newest first), oldest (oldest first), or popular (most liked first). Sorting is applied before saving records.
What data fields are included in the output?
Each record includes tweet text, timestamps, likes, replies, reposts, quotes, views, bookmarks, hashtags, mentions, media URLs (photos/videos), and author metadata (followers, following, posts_count, profile_image_link, biography), plus quoted_post details when available.
Do I need to use a proxy?
By default, the actor starts without an Apify proxy. If requests fail or you encounter blocks, it can automatically fall back to datacenter and then residential proxies with retries to improve stability.
Does it require being logged in to X?
The actor dynamically captures the authorization header from X’s network traffic. If it cannot capture the header, the run stops with the message “Failed to capture authorization header. Make sure you're logged in to Twitter/X.”
Can I export data to CSV or Excel?
Yes. Open the run’s Dataset in Apify and export to JSON, CSV, or Excel for downstream analytics and reporting.
Can I run it via API or automate it?
Yes. As an Apify actor, it can be triggered via the Apify API, scheduled, and integrated into pipelines for automated social analytics.
Closing CTA / Final thoughts
Twitter X Posts Scraper is built to deliver clean, reliable tweet and author data from public Twitter/X profiles at scale. With flexible sorting, proxy-aware resiliency, and structured outputs, it’s ideal for marketers, developers, analysts, and researchers who need efficient social data pipelines. Trigger runs via the Apify API, export to JSON/CSV/Excel, and connect the results to your BI or automation stack — start extracting smarter social insights today.