Scweet Twitter/X Scraper

Scrape Twitter (X) tweets from search + profiles. Filter keywords/hashtags/users/dates. Export JSON/CSV/XLSX. Fast. $0.30/1k. Free plan.

Pricing: from $0.25 / 1,000 tweets
Rating: 5.0 (4 reviews)
Developer: JEB (Maintained by Community)
Actor stats: 15 bookmarked · 654 total users · 54 monthly active users · 48-day issues response time · last modified 3 days ago

Scweet is a Twitter/X scraper for search results and profile timelines. Extract tweets into JSON, CSV, and XLSX with a simple input model.

Run Actor on Apify | Open-Source Scweet Library

💰 Pricing (Effective March 4, 2026)

You are billed for:

  • Tweets collected
  • A one-time run-start fee per run
| Plan | Tweet Price | Run-start |
|---|---|---|
| Free (No discount) | $3.00 / 1,000 tweets | $0.006 |
| Starter (Bronze) | $0.30 / 1,000 tweets | $0.0006 |
| Scale (Silver) | $0.28 / 1,000 tweets | $0.0006 |
| Business (Gold) | $0.25 / 1,000 tweets | $0.0006 |
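
As a sketch of the pricing model above (an illustration, not a billing tool), the cost of one run is the per-tweet charge plus the one-time run-start fee:

```python
def run_cost(tweets: int, price_per_1k: float, run_start_fee: float) -> float:
    """Estimate the billed cost of one run: per-tweet charge plus the
    one-time run-start fee. Plug in the rates from the plan table."""
    return tweets / 1000 * price_per_1k + run_start_fee

# Example: 10,000 tweets on the Starter plan
starter = run_cost(10_000, price_per_1k=0.30, run_start_fee=0.0006)  # 3.0006
```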

Apify currently advertises monthly free platform credit (commonly $5/month, subject to change).

Why free pricing is higher:

  • Free tier is for evaluation, not high-volume automation.
  • Higher free pricing helps reduce abuse and repeated empty runs.
  • It helps keep paid-tier pricing low for production users.

⚡ Quick Start

  1. Open Scweet on Apify.
  2. Paste one of the inputs below.
  3. Run the Actor.
  4. Export dataset results.

Input can be partial. Omitted fields use defaults.

🔎 Minimal Twitter Search Input

```json
{
  "source_mode": "search",
  "search_query": "bitcoin lang:en from:elonmusk -filter:replies min_faves:100",
  "since": "2025-02-17",
  "until": "2026-02-19",
  "search_sort": "Latest",
  "max_items": 1000
}
```

👤 Minimal Twitter Profile Input

```json
{
  "source_mode": "profiles",
  "profile_urls": [
    "https://x.com/elonmusk",
    "@apify"
  ],
  "max_items": 1000
}
```

🔀 Combined Search + Profiles Input

```json
{
  "source_mode": "auto",
  "all_words": ["ai", "agent"],
  "since": "2025-02-17",
  "profile_urls": ["https://x.com/apify"],
  "search_sort": "Top",
  "max_items": 500
}
```
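
The three input shapes above can also be assembled programmatically. A minimal sketch (the helper name `build_input` is our own; submitting the payload via the Apify API or client is not shown):

```python
import json

def build_input(source_mode: str = "auto", max_items: int = 1000, **fields) -> str:
    """Assemble a run input as JSON. Omitted fields fall back to the
    Actor's defaults, since partial input is allowed."""
    payload = {"source_mode": source_mode, "max_items": max_items, **fields}
    return json.dumps(payload, indent=2)

search_input = build_input(
    source_mode="search",
    search_query="bitcoin lang:en min_faves:100",
    since="2025-02-17",
    search_sort="Latest",
)
```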

✅ Why Choose Scweet

  • Actively maintained scraping logic for changing X/Twitter behavior.
  • Reliability features: retries, account failover, task requeue, and deduplication.
  • Flexible modes: search, profiles, or both in one run.
  • Production-friendly export workflow in Apify datasets.

🧩 Inputs

Core fields

| Field | Type | Description |
|---|---|---|
| source_mode | string | auto, search, or profiles (default: auto) |
| search_query | string | Optional raw query using Twitter advanced search operators |
| profile_urls | array[string] | Handles or profile URLs (@handle, x.com/<handle>, twitter.com/<handle>) |
| max_items | integer | Global run target (default: 1000) |
| since, until | string | Date or UTC timestamp window |
| search_sort | string | Top or Latest (default: Latest) |
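
The accepted profile_urls forms can be illustrated with a normalization sketch (an assumption about how inputs are canonicalized, not the Actor's actual code):

```python
import re

def normalize_profile(value: str) -> str:
    """Map @handle, x.com/<handle>, or twitter.com/<handle>
    to a canonical https://x.com/<handle> URL."""
    value = value.strip()
    m = re.match(r"^(?:https?://)?(?:www\.)?(?:x|twitter)\.com/([A-Za-z0-9_]+)", value)
    handle = m.group(1) if m else value.lstrip("@")
    return f"https://x.com/{handle}"
```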

Search builder fields (optional)

  • Keywords: all_words, any_words, exact_phrases, exclude_words
  • Hashtags/users: hashtags_any, hashtags_exclude, from_users, to_users, mentioning_users
  • Filters: tweet_type, verified_only, blue_verified_only, has_images, has_videos, has_links, has_mentions, has_hashtags, min_likes, min_replies, min_retweets
  • Location: place, geocode, near, within
  • Advanced operator reference: Twitter advanced search operators (the same syntax accepted by search_query)

Important behavior

  • Minimum requested run size is enforced: if max_items < 100, it is auto-adjusted to 100.
  • max_items is global per run, not per profile.
  • source_mode="search" ignores profile input.
  • source_mode="profiles" ignores search input.
  • If both since and until are missing, lookback defaults to:
    • Top: 4 years
    • Latest: 180 days
  • Free plan guardrails (current): 1000 tweets/day, 10 runs/day, minimum 60s between runs.
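
The run-size and lookback rules above can be sketched as follows (function names are ours; this mirrors the documented behavior, not the Actor's internals):

```python
from datetime import date, timedelta

def clamp_max_items(max_items: int) -> int:
    # Requested run sizes below 100 are auto-adjusted up to 100.
    return max(max_items, 100)

def effective_window(since, until, search_sort, today=None):
    # If both bounds are missing, apply the documented default lookback:
    # 4 years for "Top", 180 days for "Latest".
    if since is None and until is None:
        today = today or date.today()
        days = 4 * 365 if search_sort == "Top" else 180
        since = (today - timedelta(days=days)).isoformat()
        until = today.isoformat()
    return since, until
```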

📤 Output

Results are stored in the Apify dataset for the run.

  • Dataset output is deduplicated by tweet ID.
  • Export formats: JSON, CSV, XLSX.
  • Output includes source labeling fields (source_root, source_value).
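
Deduplication by tweet ID, as applied to the dataset, can be sketched as a first-occurrence-wins pass (an illustration of the concept, not the Actor's implementation):

```python
def dedupe_by_id(items: list) -> list:
    """Keep the first occurrence of each tweet ID, preserving order."""
    seen = set()
    out = []
    for item in items:
        if item["id"] not in seen:
            seen.add(item["id"])
            out.append(item)
    return out
```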

What each item contains

| Field | Description |
|---|---|
| id | Tweet ID |
| text | Tweet text |
| handle | Author handle |
| tweet_url | Tweet URL |
| favorite_count, retweet_count, reply_count | Core engagement metrics |
| source_root | search or profile_url |
| source_value | Effective query or normalized profile URL |
| user | Nested author object |
| tweet | Nested tweet details object |

Full example output (generated sample values, not real user data)

```json
[
  {
    "id": "tweet-1234567890123456789",
    "sort_index": "1999999999999999999",
    "entry_type": "TimelineTimelineItem",
    "tweet_display_type": "Tweet",
    "collected_at_utc": "2026-02-08T05:08:42.531721+00:00",
    "source_root": "search",
    "source_value": "(sample OR query) (#tag1 OR #tag2) lang:en since:2025-01-01 until:2025-01-31",
    "user": {
      "id": "VXNlcjoxMjM0NTY3OA==",
      "rest_id": "12345678",
      "name": "Sample Account",
      "verified": false,
      "verified_type": "None",
      "is_blue_verified": true,
      "created_at": "Mon Jan 01 00:00:00 +0000 2020",
      "description": "Sample profile description.",
      "url": "https://t.co/example",
      "urls": [
        {
          "url": "https://t.co/example",
          "expanded_url": "https://example.com",
          "display_url": "example.com"
        }
      ],
      "favourites_count": 1200,
      "followers_count": 98000,
      "friends_count": 350,
      "listed_count": 45,
      "statuses_count": 15000,
      "location": "Sample City",
      "media_count": 420,
      "handle": "sample_handle",
      "profile_banner_url": "https://pbs.twimg.com/profile_banners/12345678/sample",
      "profile_image_url_https": "https://pbs.twimg.com/profile_images/sample_normal.jpg"
    },
    "tweet": {
      "rest_id": "1234567890123456789",
      "conversation_id": "1234567890123456789",
      "in_reply_to_status_id": null,
      "in_reply_to_user_id": null,
      "quoted_status_id": null,
      "source": "<a href=\"https://example.com\" rel=\"nofollow\">Example App</a>",
      "created_at": "Wed Jan 15 10:00:00 +0000 2025",
      "mentions": ["example_user"],
      "tweet_url": "https://x.com/sample_handle/status/1234567890123456789",
      "view_count": "150000",
      "text": "This is a sample tweet text for documentation.",
      "hashtags": ["tag1", "tag2"],
      "favorite_count": 1234,
      "quote_count": 56,
      "reply_count": 78,
      "retweet_count": 90,
      "bookmark_count": 12,
      "is_quote_status": false,
      "possibly_sensitive": false,
      "is_translatable": false,
      "edit_control": {
        "edit_tweet_ids": ["1234567890123456789"],
        "editable_until_msecs": "1762993800000",
        "is_edit_eligible": true,
        "edits_remaining": "5"
      },
      "entities": {
        "hashtags": ["tag1", "tag2"],
        "mentions": ["example_user"],
        "urls": [
          {
            "url": "https://t.co/example",
            "expanded_url": "https://example.com/page",
            "display_url": "example.com/page"
          }
        ],
        "symbols": [],
        "timestamps": []
      },
      "lang": "en",
      "media": [
        {
          "id_str": "5555555555555555555",
          "media_key": "3_5555555555555555555",
          "type": "photo",
          "media_url": "https://pbs.twimg.com/media/sample.jpg",
          "expanded_url": "https://x.com/sample_handle/status/1234567890123456789/photo/1",
          "display_url": "pic.x.com/sample",
          "width": 1200,
          "height": 800
        }
      ]
    }
  }
]
```
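
For CSV-style post-processing, a dataset item with nested user and tweet objects can be flattened into one row. A sketch under the assumption that the core fields live in the nested objects as in the sample above (the helper name is ours):

```python
def flatten_item(item: dict) -> dict:
    """Pull the core fields from a dataset item (including its nested
    user/tweet objects) into one flat row for CSV-style export."""
    tweet = item.get("tweet", {})
    user = item.get("user", {})
    return {
        "id": item.get("id"),
        "text": tweet.get("text"),
        "handle": user.get("handle"),
        "tweet_url": tweet.get("tweet_url"),
        "favorite_count": tweet.get("favorite_count"),
        "retweet_count": tweet.get("retweet_count"),
        "reply_count": tweet.get("reply_count"),
        "source_root": item.get("source_root"),
        "source_value": item.get("source_value"),
    }
```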

โš–๏ธ Responsible Usage

Use this Actor lawfully and ethically, and comply with applicable platform terms and local regulations.

🔒 Data and Privacy Note

Run metadata (such as user ID, timestamps, input payload, and counters) may be stored for rate-limiting, support, and stability operations. This data is used internally and is not shared with third parties.

๐Ÿค Support

For help with query tuning, limits, or workflow design, contact us on the Actor page or open an issue in the open-source repository.