Scweet
Pay $0.30 for 1,000 tweets
Scweet is a scalable tweet-scraping tool built on the open-source Scweet library. Just specify dates, keywords, hashtags, and a tweet count, and the Actor automatically scales to fetch data at up to 1,000 tweets per minute for only $0.30 per 1,000 tweets. All results come in JSON or CSV format.
Scweet on Apify
1. Introduction & Background
Scweet began as a simple Python tool to scrape tweets based on keywords, hashtags, dates, and more. Over time, X’s (formerly Twitter’s) policies, rate limits, and internal structures changed significantly, making the original Scweet library less reliable or scalable for larger or production-level scraping needs.
That’s why we created Scweet on Apify: it supercharges the original Scweet concept in a cloud-based environment, so you can easily gather large volumes of tweets with minimal setup.
Responsible Usage: This Actor is intended for lawful and ethical purposes only (research, analytics, journalism, etc.). We do not condone malicious or harmful use of this tool.
2. What Makes Scweet Different?
- Built for Scale: Where the stand-alone library struggles with big data pulls, the Actor seamlessly handles thousands (or more) tweets by distributing work in the cloud.
- Flexible Date & Volume: You control how many tweets to fetch (maxItems) and how broad the date range is (since to until). The Actor automatically scales its parallel fetching under the hood.
- Performance: It can scrape up to 1000 tweets per minute in typical scenarios. If more tweets exist in your query window, simply ask for more (increase maxItems, broaden the date range, etc.).
- Easy Data Export: Your results land in Apify’s dataset, easily downloadable as JSON, CSV, or XLSX.
3. Usage Overview
- Open the Actor on Apify.
- Define Your Input with the search parameters (detailed below).
- Run the Actor and monitor progress.
- Retrieve Your Data from the Apify dataset once the run is complete.
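You can also drive these steps programmatically instead of through the Apify console. The sketch below uses the official Apify Python client (apify-client); the API token and Actor ID are placeholders, and the input values are purely illustrative, so substitute your own token and the Actor ID shown on this page.

```python
# Minimal sketch: run the Actor via the Apify API and read its dataset.
# Assumptions: "YOUR_APIFY_TOKEN" and "username/scweet" are placeholders --
# use your own API token and the Actor ID from the Actor's page.
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run_input = {
    "hashtag": ["opensource"],
    "since": "2024-01-01",
    "until": "2024-06-30",
    "lang": "en",
    "type": "Latest",
    "maxItems": 500,
}

# Start the run and wait for it to finish.
run = client.actor("username/scweet").call(run_input=run_input)

# Iterate over the scraped tweets stored in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["tweet_url"], item["tweet_text"])
```

Because call() waits for the run to finish, the loop only starts once the dataset is complete.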
4. Configuration & Inputs
You can provide the following fields. All fields are optional—if omitted, defaults apply.
Field | Type | Default | Description |
---|---|---|---|
words_and | list[string] | [] (empty) | Each term must appear in the tweet (terms must not contain spaces). |
words_or | list[string] | [] (empty) | At least one term must appear in the tweet. |
hashtag | list[string] | [] (empty) | Hashtags to search for. |
from_user | string or None | None | Scrape tweets from this user (exclude @). |
to_user | string or None | None | Scrape tweets directed to this user. |
min_likes | string or None | None | Minimum number of likes a tweet must have. |
min_replies | string or None | None | Minimum number of replies a tweet must have. |
min_retweets | string or None | None | Minimum number of retweets a tweet must have. |
lang | string or None | None | Restrict tweets to a particular language (e.g., "en"). |
since | string (YYYY-MM-DD) | 2 years ago | Start date of tweets to scrape. |
until | string (YYYY-MM-DD) | today’s date | End date of tweets to scrape. |
type | string | "Top" | Either "Top" or "Latest". |
maxItems | number | 10000 | The maximum number of tweets the Actor attempts to get. |
geocode | string or None | None | Geolocation-based search, e.g. "39.8283,-98.5795,2500km". |
place | string or None | None | Place ID for area-based search, e.g. "96683cc9126741d1" (USA). |
near | string or None | None | Name of a city or location to narrow results, e.g. "Paris". |
Note: To scrape tweets from a specific user profile, leave the search parameters (words and hashtags) empty and enter the user's handle (without @) in the from_user parameter.
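As a quick illustration of how these fields combine, here is a sample input JSON (the values are arbitrary placeholders, not recommendations): tweets must contain "climate" plus at least one of "wildfire" or "heatwave", meet minimum engagement thresholds, and fall within July 2024.

```json
{
  "words_and": ["climate"],
  "words_or": ["wildfire", "heatwave"],
  "min_likes": "50",
  "min_retweets": "10",
  "lang": "en",
  "since": "2024-07-01",
  "until": "2024-07-31",
  "type": "Top",
  "maxItems": 2000
}
```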
Scaling & Parallelization
- Date Range: The wider your since → until range, the more potential tweets exist.
- maxItems: The higher this value, the more tweets the Actor tries to retrieve.
- The Actor scales automatically in the background, so just set these two factors according to your needs, and the system will handle concurrency and data fetching.
5. Output Format
JSON format
[
  {
    "id": "tweet-1877796743036743891",
    "user_is_blue_verified": true,
    "user_created_at": "Tue Jun 02 20:12:29 +0000 2009",
    "user_description": "",
    "user_urls": [],
    "user_favourites_count": 113767,
    "user_followers_count": 212302178,
    "user_friends_count": 931,
    "user_location": "",
    "user_media_count": 3086,
    "user_handle": "elonmusk",
    "user_profile_image_url_https": "...",
    "tweet_source": "<a href=\"http://twitter.com/download/iphone\" ...>",
    "tweet_created_at": "Fri Jan 10 19:16:45 +0000 2025",
    "tweet_mentions": [],
    "tweet_url": "https://x.com/elonmusk/status/1877796743036743891",
    "tweet_view_count": "28738465",
    "tweet_text": "Tyrannical behavior",
    "tweet_hashtags": [],
    "tweet_favorite_count": 218062,
    "tweet_quote_count": 1518,
    "tweet_reply_count": 10558,
    "tweet_retweet_count": 51030,
    "tweet_lang": "en",
    "tweet_media_urls": [],
    "tweet_media_expanded_urls": []
  },
  ...
]
Alternatively, download the results as CSV with the same column names as the JSON fields.
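Because the export is plain JSON (or CSV), it is easy to post-process with standard tooling. The minimal sketch below assumes you saved the JSON export as dataset.json (the file name is an assumption) and simply ranks the scraped tweets by like count; adapt the file name and fields to your own run.

```python
# Minimal sketch: inspect an exported dataset locally.
# Assumption: "dataset.json" is the JSON file downloaded from the Apify dataset.
import json

with open("dataset.json", encoding="utf-8") as f:
    tweets = json.load(f)

# Sort tweets by like count and print the top five.
top = sorted(tweets, key=lambda t: t["tweet_favorite_count"], reverse=True)[:5]
for t in top:
    print(t["tweet_favorite_count"], t["tweet_url"], t["tweet_text"][:80])
```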
6. Speed & Performance
Benchmark: Scweet can often scrape up to 1000 tweets per minute. Actual performance may vary depending on network conditions, query complexity, or how many tweets exist in the specified date range.
Pro Tip: If you need more tweets, raise maxItems and expand since → until. The Actor automatically fetches as many tweets as it can find.
7. Future Growth & Community
Scweet is currently in beta. As the Scweet community grows, we’ll add more features and enhancements. We welcome feedback from users, whether you’re a data scientist, a researcher, or a casual explorer—tell us what you need, and we’ll keep improving.
Enjoy Scweet—your easy path to structured tweet data at scale!