Scweet Twitter/X Scraper
Pricing
from $0.25 / 1,000 tweets
Scrape Twitter (X) tweets from search + profiles. Filter keywords/hashtags/users/dates. Export JSON/CSV/XLSX. Fast. $0.30/1k. Free plan.
Rating: 5.0 (4) · Developer: JEB
Actor stats: 18 bookmarked · 1K total users · 109 monthly active users · last modified 4 days ago
Scweet — Twitter/X Scraper
Extract tweets from search results and profile timelines into JSON, CSV, and XLSX. No API key, no cookies, no account setup — just configure your query and Scweet handles the rest.
Run on Apify | Open-Source Library
What Scweet Does
- Search and profile scraping — query X/Twitter by keywords, hashtags, users, engagement, date range, location, and more, or scrape profile timelines. Run both in a single job.
- Zero configuration — no cookies, no proxies, nothing to manage. Just set your query and Scweet handles the rest.
- Deduplicated billing — you only pay for unique tweets. Every item is deduplicated before it reaches your dataset.
- Production-grade reliability — automatic retries, adaptive rate limiting, and built-in resilience keep runs stable at scale.
Use cases
Brand monitoring, lead generation, market research, academic datasets, OSINT, content strategy.
Pricing
| Plan | Per 1,000 tweets | Run-start fee |
|---|---|---|
| Free | $3.00 | $0.006 |
| Starter | $0.30 | $0.0006 |
| Scale | $0.28 | $0.0006 |
| Business | $0.25 | $0.0006 |
Apify provides monthly free platform credit (commonly $5/month). You only pay for unique, deduplicated tweets.
The free tier is for evaluation, not production. Its higher per-tweet price discourages abuse and keeps paid tiers low.
Quick Start
- Open Scweet on Apify.
- Paste one of the inputs below.
- Run the Actor.
- Export your dataset (JSON, CSV, or XLSX).
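Beyond the Console, runs can be started programmatically with Apify's official Python client (`pip install apify-client`). A minimal sketch; the Actor ID and token placeholders below are assumptions you must fill in from your Apify Console:

```python
def build_run_input(query: str, max_items: int = 1000) -> dict:
    """Assemble a minimal search-mode input payload."""
    return {
        "source_mode": "search",
        "search_query": query,
        "max_items": max_items,
    }

def run_scrape() -> None:
    # Requires `pip install apify-client` and a valid Apify API token.
    from apify_client import ApifyClient

    client = ApifyClient("<YOUR_APIFY_TOKEN>")
    run = client.actor("<SCWEET_ACTOR_ID>").call(
        run_input=build_run_input("bitcoin lang:en min_faves:100")
    )
    # Stream the deduplicated dataset items.
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["id"], item["text"])

# run_scrape()  # uncomment to execute (needs network access and a token)
```

The same dataset can still be exported as JSON, CSV, or XLSX from the Apify UI afterwards.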
Search
```json
{
  "source_mode": "search",
  "search_query": "bitcoin lang:en from:elonmusk -filter:replies min_faves:100",
  "since": "2025-02-17",
  "until": "2026-02-19",
  "max_items": 1000
}
```
Profile timeline
```json
{
  "source_mode": "profiles",
  "profile_urls": ["https://x.com/elonmusk", "@apify"],
  "max_items": 1000
}
```
Combined (search + profiles)
```json
{
  "source_mode": "auto",
  "all_words": ["ai", "agent"],
  "since": "2025-02-17",
  "profile_urls": ["https://x.com/apify"],
  "search_sort": "Top",
  "max_items": 500
}
```
Input can be partial — omitted fields use defaults. The Apify Console opens with sample prefill values (`search_query="from:elonmusk"`, `profile_urls=["@elonmusk"]`). Replace or remove them before running your own job.
Input Reference
Source modes
| Mode | Behavior |
|---|---|
| `auto` (default) | Runs whichever paths have input. If both search and profile inputs are present, both run. |
| `search` | Search path only. Profile input is ignored. |
| `profiles` | Profile timeline path only. Search input is ignored. |
Core fields
| Field | Type | Description |
|---|---|---|
| `source_mode` | string | `auto`, `search`, or `profiles` (default: `auto`) |
| `search_query` | string | Raw advanced query string. Operator reference |
| `profile_urls` | array | Handles or profile URLs (`@handle`, `x.com/<handle>`, `twitter.com/<handle>`) |
| `max_items` | integer | Global tweet target per run (default: 1000) |
| `since` | string | Start date or UTC timestamp |
| `until` | string | End date or UTC timestamp |
| `search_sort` | string | `Top` or `Latest` (default: `Top`) |
Search builder fields
Instead of writing a raw search_query, you can use structured fields that Scweet combines into a query automatically:
| Category | Fields |
|---|---|
| Keywords | all_words, any_words, exact_phrases, exclude_words |
| Users | from_users, to_users, mentioning_users |
| Hashtags | hashtags_any, hashtags_exclude |
| Language | lang (e.g. en, fr, ar) |
| Tweet type | tweet_type: all, originals_only, replies_only, retweets_only, exclude_replies, exclude_retweets |
| Filters | verified_only, blue_verified_only, has_images, has_videos, has_links, has_mentions, has_hashtags |
| Engagement | min_likes, min_replies, min_retweets |
| Location | place, geocode (lat,lon,radius), near, within |
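Scweet's actual merging logic is internal; as an illustration only, the mapping from builder fields to X advanced-search operators can be sketched like this (a subset of fields, not the Actor's code):

```python
def build_query(
    all_words=None, exact_phrases=None, from_users=None,
    hashtags_any=None, lang=None, min_likes=None,
):
    """Illustrative mapping of builder fields to X search operators."""
    parts = []
    if all_words:
        parts.extend(all_words)                       # plain AND-ed terms
    if exact_phrases:
        parts.extend(f'"{p}"' for p in exact_phrases)
    if from_users:
        parts.append("(" + " OR ".join(f"from:{u.lstrip('@')}" for u in from_users) + ")")
    if hashtags_any:
        parts.append("(" + " OR ".join(f"#{h.lstrip('#')}" for h in hashtags_any) + ")")
    if lang:
        parts.append(f"lang:{lang}")
    if min_likes:
        parts.append(f"min_faves:{min_likes}")        # X's operator for minimum likes
    return " ".join(parts)
```

For example, `build_query(all_words=["ai", "agent"], from_users=["@apify"], lang="en", min_likes=10)` yields `ai agent (from:apify) lang:en min_faves:10`.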
Defaults and limits
- `max_items` is global per run, not per profile.
- Minimum run size: if `max_items < 100`, it is auto-adjusted to `100`.
- Unknown input keys are rejected.
- If both `since` and `until` are missing, lookback defaults to 4 years (Top) or 180 days (Latest).
- Location filters depend on X/Twitter metadata and can be approximate.
- Free plan guardrails: `1000` tweets/day, `10` runs/day, minimum `60s` between runs.
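These rules can be checked client-side before submitting a run. A sketch under the documented behavior; the allowed-key set shown is a subset (the real Actor also accepts the builder fields):

```python
ALLOWED_KEYS = {
    "source_mode", "search_query", "profile_urls",
    "max_items", "since", "until", "search_sort",
}
MIN_RUN_SIZE = 100

def normalize_input(raw: dict) -> dict:
    """Pre-validate an input payload the way the Actor is documented to."""
    unknown = set(raw) - ALLOWED_KEYS
    if unknown:
        # Unknown input keys are rejected rather than silently dropped.
        raise ValueError(f"unknown input keys: {sorted(unknown)}")
    out = dict(raw)
    # Runs requesting fewer than 100 tweets are auto-adjusted to the minimum.
    if out.get("max_items", 1000) < MIN_RUN_SIZE:
        out["max_items"] = MIN_RUN_SIZE
    return out
```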
Output
Results are stored in the Apify dataset, deduplicated by tweet ID. Export as JSON, CSV, or XLSX.
Fields
| Field | Description |
|---|---|
| `id` | Tweet ID |
| `text` | Tweet text |
| `handle` | Author handle |
| `tweet_url` | Direct link to tweet |
| `favorite_count`, `retweet_count`, `reply_count`, `quote_count`, `bookmark_count`, `view_count` | Engagement metrics |
| `created_at` | Tweet creation time |
| `collected_at_utc` | Collection timestamp (UTC) |
| `lang` | Language |
| `conversation_id` | Thread/conversation ID |
| `in_reply_to_status_id`, `in_reply_to_user_id`, `in_reply_to_screen_name` | Reply references |
| `quoted_status_id` | Quoted tweet ID |
| `is_quote`, `is_reply` | Convenience flags |
| `source_root` | `search` or `profile_url` |
| `source_value` | Effective query or normalized profile URL |
| `user` | Nested author object (handle, name, followers, bio, etc.) |
| `tweet` | Nested tweet details (media, entities, edit history, etc.) |
Example (top-level fields)
```json
{
  "id": "1996300676012376299",
  "handle": "FTB_Team",
  "text": "We dug through the first month of StoneBlock 4...",
  "favorite_count": 71,
  "retweet_count": 5,
  "reply_count": 10,
  "view_count": "9695",
  "tweet_url": "https://x.com/FTB_Team/status/1996300676012376299",
  "created_at": "Wed Dec 03 19:29:05 +0000 2025",
  "collected_at_utc": "2026-04-07T15:58:33.539740+00:00",
  "lang": "en",
  "source_root": "search",
  "source_value": "(sample OR query) lang:en",
  "is_quote": false,
  "is_reply": false,
  "user": { "handle": "FTB_Team", "name": "Feed The Beast", "followers_count": 43367, "...": "..." },
  "tweet": { "media": ["..."], "entities": { "...": "..." }, "...": "..." }
}
```
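Deduplication by tweet ID happens server-side before billing, but if you merge datasets from several runs yourself, the same ID-based pass applies. A minimal sketch:

```python
def dedupe_tweets(items: list[dict]) -> list[dict]:
    """Keep the first occurrence of each tweet ID, preserving order."""
    seen: set[str] = set()
    unique = []
    for item in items:
        if item["id"] not in seen:
            seen.add(item["id"])
            unique.append(item)
    return unique
```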
Tips
- Use `Top` sort (default). `Top` consistently returns more tweets than `Latest` for the same query and time range. `Latest` is useful when you need strict reverse-chronological order, but tends to return fewer total results because X's search index is less exhaustive in that mode.
- Use wider time ranges. The wider the `since`/`until` window, the more tweets X will surface. Scweet automatically splits wide ranges into parallel sub-intervals, so a large window does not slow down the run.
- Start broad, then narrow. If you get fewer tweets than expected, try removing restrictive filters (`min_likes`, `tweet_type`, `lang`) one at a time to see which one is limiting results.
- For large volumes from a single profile, use `search` instead of `profiles` mode. Profile mode (`source_mode=profiles`) is best for recent activity. For thousands of tweets from a specific user, use `source_mode=search` with `from_users: ["handle"]` (or `search_query: "from:handle"`), a wide `since`/`until` range, and `search_sort: "Latest"` for more complete results.
- Date filters are handled internally. Scweet converts `since` and `until` to precise Unix-timestamp operators under the hood. You can keep using human-readable dates (e.g. `"since": "2025-01-01"`).
- Combine structured fields with `search_query`. You can use `search_query` for advanced operators not covered by the builder fields (e.g. `filter:media`, `-filter:replies`) and add structured fields like `from_users` or `min_likes` on top — Scweet merges them into a single query.
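On the date-conversion tip: X's advanced search accepts `since_time:`/`until_time:` operators in Unix seconds, so the conversion presumably resembles the following. This is an assumption about Scweet's internals for illustration, not its actual code:

```python
from datetime import datetime, timezone

def date_to_unix(date_str: str) -> int:
    """Parse a YYYY-MM-DD date as midnight UTC and return Unix seconds."""
    dt = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

def window_operators(since: str, until: str) -> str:
    """Render a date window as X search-operator text."""
    return f"since_time:{date_to_unix(since)} until_time:{date_to_unix(until)}"
```

For example, `window_operators("2025-01-01", "2025-01-02")` yields `since_time:1735689600 until_time:1735776000`.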
How It Works
Scweet uses X's internal GraphQL API — the same endpoints the X website uses. No official Twitter API key or developer account is needed.
You configure a query and a target. Scweet handles everything else — authentication, proxies, pacing, retries, and deduplication. If something fails mid-run, work is automatically retried so you get complete results without manual intervention.
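The Actor's retry internals are not exposed, but the pattern it describes (retrying transient failures with backoff) is the standard one. A generic sketch, not Scweet's implementation:

```python
import time

def with_retries(fn, attempts=4, base_delay=1.0):
    """Call fn(), retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```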
FAQ
Do I need a Twitter API key? No. Scweet uses X's internal GraphQL API — no developer account or API key required.
Do I need to provide cookies or an account? No. Account management, proxies, and rate limiting are all handled automatically.
Can I scrape private accounts? No. Only publicly visible content is accessible.
What export formats are supported? JSON, CSV, and XLSX — download directly from the Apify dataset tab.
Is there a free tier? Yes — Apify provides monthly free platform credit. The actor's free plan allows up to 1,000 tweets/day and 10 runs/day.
Power users: the full open-source Python library is at github.com/Altimis/Scweet — `pip install Scweet` for programmatic access, async support, resume, and full control over accounts and proxies.
Support
For help with query tuning, limits, or workflow design, contact us on the Actor page or open an issue in the open-source repository.
Responsible usage: Use this Actor lawfully and ethically. Comply with applicable platform terms and local regulations. Scweet applies adaptive rate limiting — repeatedly running queries that return zero results will trigger progressively longer cooldowns.
Privacy: Run metadata (user ID, timestamps, input payload, counters) may be stored for rate limiting, support, and stability. This data is used internally and is not shared with third parties.