X / Twitter Scraper
Pricing: from $2.00 / 1,000 results
Scrape X (Twitter) search results, profiles, posts, and lists into clean structured JSON. Supports bulk URLs, pagination, normalized output, and easy exports through Apify for monitoring, research, lead generation, and automation.
Developer: Always Prime
Paste the X links, queries, handles, or list URLs you want to scrape. The deployment handles the rest.
X / Twitter Scraper is a no-code Apify Actor for extracting public X (Twitter) data from search results, profiles, single posts, and lists. It is built for people who need a practical Twitter scraper, X scraper, or lightweight Twitter API alternative for monitoring, research, lead generation, content analysis, and workflow automation.
Paste one target or many targets, choose how many posts you want per target, and export structured results to JSON, CSV, Excel, HTML, RSS, or via API integrations. The Actor stores normalized post data in the default dataset and keeps run metadata in the default key-value store.
Why this X / Twitter scraper is useful
- Scrape search results, profiles, single posts, and lists from one Actor.
- Paste multiple queries, URLs, handles, IDs, or list links in one run.
- Collect up to your requested per-target limit with automatic pagination for search, profile, and list runs.
- Export clean, normalized results with author info, engagement counters, media, hashtags, URLs, and timestamps.
- Use it in Apify Console, API calls, schedules, webhooks, Make, Zapier, or custom pipelines.
- Start with a simple target-based workflow instead of dealing with technical scraping setup.
Common use cases
- Brand monitoring for keywords, product names, or competitor mentions
- Tracking posts from creators, founders, journalists, or company accounts
- Collecting post datasets for AI enrichment, classification, or sentiment analysis
- Monitoring list feeds for niche communities, vertical experts, or local news
- Building lead lists from keyword or profile-based discovery workflows
- Saving tweet/post datasets to BI tools, spreadsheets, or internal dashboards
What this Twitter scraper can extract
The Actor supports these target types:
| Mode | What you provide | What it returns |
|---|---|---|
| search | Search queries or full search URLs | Matching posts from X search |
| profile | @handle or profile URL | Posts, replies, or media from that profile |
| tweet | Post URL or post ID | A normalized record for that single post |
| list | List URL or list ID | Recent posts from the list timeline |
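Each mode pairs with one input field. A minimal helper for building a run input programmatically; the field names (`whatToScrape`, `searchQuery`, `profile`, `postUrlOrId`, `listUrlOrId`, `maxItems`) are taken from the bulk input examples in this README, so confirm them against the Actor's input schema before relying on them:

```python
# Map each scraping mode to the input field that holds its targets.
# Field names come from this README's bulk input examples; verify them
# against the Actor's actual input schema.
MODE_FIELDS = {
    "search": "searchQuery",
    "profile": "profile",
    "tweet": "postUrlOrId",
    "list": "listUrlOrId",
}

def build_input(mode: str, targets: list[str], max_items: int = 20) -> dict:
    """Build a run input dict with one target per line, as bulk mode expects."""
    if mode not in MODE_FIELDS:
        raise ValueError(f"Unknown mode: {mode!r}")
    run_input = {
        "whatToScrape": mode,
        MODE_FIELDS[mode]: "\n".join(t.strip() for t in targets if t.strip()),
    }
    if mode != "tweet":  # maxItems is ignored for single-post lookups
        run_input["maxItems"] = max_items
    return run_input
```

The resulting dict can be passed as the run input when starting the Actor through the Apify Console, API, or client libraries.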
Key features
- Bulk input support: paste several targets one per line and process them in a single run
- Client-friendly input: designed for non-developers, with minimal technical fields
- Simple operator experience: users focus on targets and limits, not on scraping internals
- Normalized output: results are standardized into a predictable JSON shape
- Dataset-ready output: download data or feed it directly into Apify integrations
- Helpful run records: `RUN_INFO`, `INPUT_USED`, and `OUTPUT` records make debugging and automation easier
How to scrape X / Twitter data on Apify
- Choose what you want to scrape: search results, a profile, posts, or a list.
- Paste one target or many targets into the relevant field.
- Set how many posts you want per target for paginated modes.
- Run the Actor.
- Open the dataset in the Output tab, or export the results through API and integrations.
Simple user input
The public input is intentionally minimal. End users only need to:
- choose the target type
- paste one or many links, handles, IDs, or search queries
- choose the per-target result limit when relevant
Any deployment-specific live-access setup stays outside the user input, so the experience remains clean and client-friendly.
Bulk input examples
Search multiple queries
{"whatToScrape": "search","searchQuery": "from:NASA\nfrom:SpaceX\nhttps://x.com/search?q=OpenAI&src=typed_query&f=live","searchSort": "Latest","maxItems": 20}
Scrape multiple profiles
{"whatToScrape": "profile","profile": "NASA\nhttps://x.com/NASAHubble\nOpenAI","profileContent": "tweets","maxItems": 30}
Scrape multiple posts
{"whatToScrape": "tweet","postUrlOrId": "https://x.com/XDevelopers/status/1346889436626259968\nhttps://x.com/NASA/status/1872487026621776029"}
Scrape multiple lists
{"whatToScrape": "list","listUrlOrId": "https://x.com/i/lists/1265545834667610117\nhttps://x.com/i/lists/1441717848968364036","maxItems": 50}
Output example
Dataset items contain normalized X / Twitter post data such as:
{"type": "tweet","id": "1346889436626259968","url": "https://x.com/XDevelopers/status/1346889436626259968","text": "Hello from X.","createdAt": "Tue Apr 02 12:00:00 +0000 2026","lang": "en","likeCount": 120,"retweetCount": 10,"replyCount": 5,"quoteCount": 2,"viewCount": 4300,"mode": "search","searchTerm": "from:NASA","inputUrl": "https://x.com/search?q=from%3ANASA","fetchedAt": "2026-04-03T10:00:00Z","author": {"userName": "XDevelopers","name": "Developers","url": "https://x.com/XDevelopers"},"hashtags": [],"media": []}
The Output tab includes:
- Posts dataset: the main extracted records
- Output summary: a concise run summary in the `OUTPUT` record
- Run summary: detailed metadata in `RUN_INFO`
- Parsed input: the sanitized input in `INPUT_USED`
What fields you get
Important output fields include:
`id`, `url`, `twitterUrl`, `text`, `createdAt`, `lang`, `likeCount`, `retweetCount`, `replyCount`, `quoteCount`, `viewCount`, `author`, `hashtags`, `media`, `mode`, `searchTerm`, `sourceType`, `inputUrl`, `fetchedAt`
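For spreadsheet or BI exports it is often convenient to flatten the nested `author` object into top-level columns. A small sketch over the fields listed above (field access is defensive, since optional fields may be absent):

```python
def flatten_post(item: dict) -> dict:
    """Flatten a dataset item into a flat row suitable for CSV/BI export."""
    author = item.get("author") or {}
    return {
        "id": item.get("id"),
        "url": item.get("url"),
        "text": item.get("text"),
        "createdAt": item.get("createdAt"),
        "likeCount": item.get("likeCount", 0),
        "retweetCount": item.get("retweetCount", 0),
        "replyCount": item.get("replyCount", 0),
        "authorUserName": author.get("userName"),
        "authorName": author.get("name"),
    }
```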
Why teams use this Actor instead of building from scratch
- No need to build and maintain your own X web request client
- No need to normalize raw GraphQL responses yourself
- No need to hand-roll pagination logic for common X timelines
- No need to build a separate export layer for dataset downloads and integrations
- Easier onboarding for operators or clients who prefer forms over code
Automation and integrations
This Actor works well when you want to:
- trigger X data collection on a schedule
- push results to webhooks after each run
- connect the output to Make, Zapier, n8n, or custom APIs
- enrich posts with AI or internal business logic in downstream Actors
- use Apify dataset endpoints as a simple Twitter data API for your apps
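The "dataset as API" pattern works because Apify serves dataset items over a plain HTTP endpoint (`GET /v2/datasets/{datasetId}/items`). A helper that builds the export URL; the dataset ID and token values in the example are placeholders:

```python
from typing import Optional
from urllib.parse import urlencode

def dataset_items_url(dataset_id: str, token: str,
                      fmt: str = "json", limit: Optional[int] = None) -> str:
    """Build the Apify dataset-items export URL.

    fmt can be json, csv, xlsx, html, or rss, matching the export
    formats mentioned above.
    """
    params = {"format": fmt, "token": token}
    if limit is not None:
        params["limit"] = limit
    return f"https://api.apify.com/v2/datasets/{dataset_id}/items?{urlencode(params)}"
```

Any HTTP client (or a spreadsheet's "import from URL" feature) can then fetch the results directly.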
Cost and performance notes
- The Actor is lightweight and does not depend on a browser for normal runs.
- Search, profile, and list modes auto-paginate only until they hit your requested `maxItems`.
- Bulk runs let you scrape many targets in one run, but each target still consumes requests on X.
- For the best balance of cost and speed, start with a smaller `maxItems` and scale up after you validate your workflow.
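At the listed rate of $2.00 per 1,000 results, you can put an upper bound on a bulk run's cost before launching it (assuming every target returns its full `maxItems` quota, which is the worst case):

```python
def estimate_cost(num_targets: int, max_items: int, price_per_1000: float = 2.00) -> float:
    """Upper-bound cost estimate: assumes every target returns its full maxItems quota."""
    max_results = num_targets * max_items
    return round(max_results / 1000 * price_per_1000, 2)

# e.g. 10 search queries at maxItems=500 each cost at most $10.00
print(estimate_cost(10, 500))  # 10.0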
FAQ
Is this the official X API?
No. This Actor mirrors the web requests used by x.com and normalizes the response into structured output. If you need an official, contract-based API with long-term compatibility guarantees, use the official X API instead.
Why do I see demo or sample data?
Some deployments may use sample preview mode for onboarding, testing, or schema validation. In production-style deployments, live access can be enabled behind the scenes without changing the public input UX.
Can I scrape multiple URLs in one run?
Yes. Search, profile, post, and list inputs all support bulk mode. Paste one target per line and the Actor will process them in a single run.
Does maxItems apply to the whole run or each target?
For search, profile, and list, maxItems applies per target. In tweet mode, it is ignored because each target is a single post.
What if one target fails in a bulk run?
The Actor keeps processing the remaining targets. Per-target errors are stored in RUN_INFO, so one bad target does not ruin the whole batch.
Can I use this as a Twitter API alternative?
For many public-page scraping use cases, yes. Teams often use this Actor as a practical API layer on top of Apify datasets and webhooks. It is still based on observed web behavior, so it can change when X changes its frontend.
Troubleshooting
- If a deployment is configured for live collection and a run fails, review the private live-access configuration for that deployment.
- If a specific X endpoint changes, the Actor may need updated query IDs or request parameters.
- If you only want to validate the workflow, sample preview mode can still be used for schema and integration testing.
- If you paste many search queries, use one query per line for the cleanest parsing.
Local development
python -m venv .venv.venv\Scripts\activatepip install -r requirements.txtcopy .env.example .env
Local CLI examples:
python cli.py search --query "from:NASA"python cli.py profile --screen-name NASA --tab tweetspython cli.py tweet --tweet-id 1346889436626259968python cli.py list --list-id 1265545834667610117
To run the Actor locally, save your input to storage/key_value_stores/default/INPUT.json and start:
$python -m src
Support
If you run into an edge case, use the Actor's Issues tab on Apify or update the repository with a reproducible example input. Clear issue reports with target type, sample input, and error details make fixes much faster.
Technical notes
- Query IDs and feature flags on X can rotate over time
- Live collection depends on deployment-level access configuration
- Search, profile, and list modes auto-paginate; single-post mode performs one lookup per post
- The Actor is designed for public-page data workflows and normalized export rather than official API parity