
Twitter / X Scraper 2026 — No API Key, No Rate Limits

Pricing

$5.00 / 1,000 results scraped


Scrape Twitter (X) tweets, profiles, hashtags and search results without the $5,000/month X API. Bypass rate limits, extract engagement metrics, media, replies. Free tier available. Updated for 2026 X layout changes.


Rating: 0.0 (0 reviews)

Developer: Web Data Labs (maintained by Community)

Actor stats

  • Bookmarked: 0
  • Total users: 48
  • Monthly active users: 32
  • Last modified: 12 hours ago


Twitter/X Scraper — Extract Tweets, Profiles & Search Results Without API Keys

Scrape public data from Twitter/X at scale — no official API key, no developer account, no authentication tokens required. Just enter a username or search term and get structured JSON data with full tweet text, engagement metrics, author details, and media URLs.

This actor uses multiple fallback methods (Twitter's public endpoints, syndication API, and Nitter mirrors) to ensure reliable data extraction even when individual methods hit rate limits.

Why Use This Instead of the Official Twitter API?

| Feature | Official Twitter API | This Scraper |
| --- | --- | --- |
| Cost | $100/mo (Basic) or $5,000/mo (Pro) | Pay-per-event from $0.005/tweet |
| Setup time | Apply for developer account (days/weeks) | Start scraping in 30 seconds |
| Rate limits | Strict monthly caps | Automatic rate limit handling with retries |
| Authentication | OAuth tokens required | No tokens needed |
| Data format | Complex nested JSON | Clean, flat JSON ready for analysis |
| Historical data | Limited on Basic tier | Available via search |

Twitter's official API pricing changed dramatically in 2023, making it inaccessible for many researchers, marketers, and small businesses. This scraper provides the same public data at a fraction of the cost.


What Data Can You Extract?

Tweet Data

Every scraped tweet includes:

  • Full text — complete tweet content including long tweets
  • Engagement metrics — likes, retweets, replies, quotes, views, bookmarks
  • Author info — username, display name, user ID
  • Timestamps — exact creation date and time
  • Media — URLs for images, videos, and GIFs (highest quality)
  • Tweet metadata — language, whether it's a retweet/reply/quote tweet
  • Direct URL — link back to the original tweet on x.com

Profile Data

When scraping a user profile, you also get:

  • Display name and bio
  • Follower and following counts
  • Tweet count
  • Profile and banner images
  • Account creation date
  • Verification status

Use Cases

1. Brand Monitoring & Reputation Management

Track what people are saying about your brand, products, or executives on Twitter/X. Set up scheduled runs to get daily or hourly alerts on brand mentions. Combine with sentiment analysis tools to gauge public perception over time.

Example: A SaaS company monitors mentions of their product name and competitor names to track share-of-voice and respond to customer complaints within hours.

2. Competitor Research & Market Intelligence

Analyze your competitors' Twitter presence — what content gets the most engagement, what topics they cover, and how their audience responds. Extract competitor tweet data to benchmark your social media performance.

Example: An e-commerce brand scrapes competitor profiles weekly to identify trending product categories and successful promotional strategies.

3. Trend Tracking & News Monitoring

Monitor hashtags, keywords, or specific accounts to stay on top of industry trends, breaking news, or emerging topics. Useful for journalists, analysts, and content creators who need to react quickly.

Example: A crypto research firm tracks tweets from key opinion leaders and trending hashtags to identify market-moving narratives before they go mainstream.

4. Sentiment Analysis & Opinion Mining

Collect tweets about a topic, product launch, or event and feed them into NLP/sentiment analysis pipelines. The structured JSON output integrates directly with Python data science tools (pandas, TextBlob, VADER, OpenAI).

Example: A political consultancy scrapes tweets mentioning candidate names during a debate to measure real-time public sentiment shifts.
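The analysis step can be sketched with a toy lexicon scorer over the actor's output (illustrative only; a real pipeline would swap in VADER, TextBlob, or an LLM, and the lexicons here are made up):

```python
# Toy sentiment scorer for scraped tweets. The tweet_text field name
# matches this actor's output; the word lists are hypothetical.
POSITIVE = {"love", "great", "amazing", "win", "good"}
NEGATIVE = {"hate", "broken", "terrible", "scam", "bad"}

def score(text: str) -> int:
    """+1 per positive lexicon word, -1 per negative word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def average_sentiment(tweets: list[dict]) -> float:
    """Mean lexicon score across a batch of scraped tweet objects."""
    if not tweets:
        return 0.0
    return sum(score(t["tweet_text"]) for t in tweets) / len(tweets)
```

Because the actor emits flat JSON, the same dicts can be fed straight into pandas or a proper NLP library without any unnesting.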

5. Influencer Discovery & Outreach

Find influential accounts in any niche by scraping tweets with specific keywords and sorting by engagement metrics. Identify micro-influencers with high engagement rates for marketing campaigns.

Example: A fitness brand searches for tweets about "home workout" and "protein powder" to find creators with engaged audiences for partnership opportunities.
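A minimal ranking pass over scraped tweets might look like this (the engagement weighting is a hypothetical choice for illustration, not part of the actor):

```python
# Rank scraped tweets by a weighted engagement score to surface
# candidate influencers. Field names match this actor's output;
# the weights are an assumption to tune per campaign.
def engagement_score(tweet: dict) -> int:
    """Replies and quotes are weighted higher as conversation signals."""
    return (tweet.get("likes", 0)
            + 2 * tweet.get("retweets", 0)
            + 3 * (tweet.get("replies", 0) + tweet.get("quotes", 0)))

def top_authors(tweets: list[dict], n: int = 5) -> list[str]:
    """Handles of the n highest-scoring tweets, deduplicated in rank order."""
    ranked = sorted(tweets, key=engagement_score, reverse=True)
    seen, handles = set(), []
    for t in ranked:
        handle = t["user_handle"]
        if handle not in seen:
            seen.add(handle)
            handles.append(handle)
        if len(handles) == n:
            break
    return handles
```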

6. Academic & Social Science Research

Researchers studying online discourse, misinformation, political polarization, or cultural trends need large tweet datasets. This scraper provides structured, exportable data suitable for academic analysis without the cost barriers of the official API.

Example: A university research team collects tweets about climate change over a 6-month period to study how framing differs across political groups.

7. Lead Generation

Extract tweets from people expressing purchase intent, asking for recommendations, or complaining about competitor products. These high-intent signals make excellent sales leads.

Example: A B2B software company scrapes tweets containing "looking for CRM" or "hate my current CRM" to find warm leads for outreach.
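A simple intent filter over the actor's output could look like this (the phrase list is illustrative; extend it for your product category):

```python
# Filter scraped tweets for purchase-intent phrases. The tweet_text
# field matches this actor's output; the phrases are examples only.
INTENT_PHRASES = ["looking for crm", "hate my current crm", "any recommendations"]

def find_leads(tweets: list[dict]) -> list[dict]:
    """Return tweets whose text contains any intent phrase (case-insensitive)."""
    leads = []
    for tweet in tweets:
        text = tweet["tweet_text"].lower()
        if any(phrase in text for phrase in INTENT_PHRASES):
            leads.append(tweet)
    return leads
```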

8. Content Curation & Inspiration

Content creators and social media managers scrape top-performing tweets in their niche to understand what resonates with audiences, find shareable content, and generate ideas for their own posts.

Example: A marketing agency scrapes the top 50 most-liked tweets each week about "AI tools" to curate a weekly newsletter for their audience.


Input Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| username | String | Yes | (none) | Twitter/X username to scrape (without the @ symbol). Example: elonmusk |
| action | String | No | profile | What to scrape. Options: profile (profile info + tweets), tweets (tweets only) |
| maxTweets | Integer | No | 5 | Maximum number of tweets to retrieve. Range: 1–100 |

Input Example

{
  "username": "elonmusk",
  "action": "profile",
  "maxTweets": 20
}

Advanced Input — Scraping Multiple Users

To scrape multiple accounts, run the actor in a loop or use the Apify scheduler to run separate configurations for each username.
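A minimal sketch of that loop, assuming the same actor ID used in the code examples below (the client call is commented out so the input-building logic stands on its own):

```python
# Build one run input per account, matching the actor's input schema
# (username, action, maxTweets), then call the actor once per input.
def build_run_inputs(usernames: list[str], max_tweets: int = 50) -> list[dict]:
    """One run input dict per username; a leading @ is stripped."""
    return [
        {"username": u.lstrip("@"), "action": "tweets", "maxTweets": max_tweets}
        for u in usernames
    ]

# from apify_client import ApifyClient
# client = ApifyClient("YOUR_API_TOKEN")
# for run_input in build_run_inputs(["elonmusk", "OpenAI"]):
#     run = client.actor("cryptosignals/twitter-scraper").call(run_input=run_input)
#     items = list(client.dataset(run["defaultDatasetId"]).iterate_items())
```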


Sample Output

Tweet Object

{
  "tweet_id": "1908234567890123456",
  "user_id": "44196397",
  "user_name": "Elon Musk",
  "user_handle": "elonmusk",
  "tweet_text": "The mass of a mass-produced vehicle is the best proxy for its cost. Reducing mass is the key design challenge for making electric vehicles affordable for everyone.",
  "likes": 142850,
  "retweets": 18230,
  "replies": 12450,
  "quotes": 3280,
  "views": 48500000,
  "bookmarks": 9840,
  "tweet_url": "https://x.com/elonmusk/status/1908234567890123456",
  "created_at": "Wed Mar 05 14:23:01 +0000 2026",
  "is_retweet": false,
  "is_reply": false,
  "is_quote": false,
  "language": "en",
  "media_urls": [
    "https://pbs.twimg.com/media/example_image.jpg"
  ],
  "source": "twitter_api"
}

Profile Object (when action = "profile")

The actor returns profile metadata alongside tweets when using the profile action:

{
  "user_id": "44196397",
  "user_name": "Elon Musk",
  "user_handle": "elonmusk",
  "bio": "Read @TheBoringCompany, @Tesla, @SpaceX, & @xAI",
  "followers": 210500000,
  "following": 860,
  "tweet_count": 52400,
  "verified": true,
  "profile_image": "https://pbs.twimg.com/profile_images/example.jpg",
  "banner_image": "https://pbs.twimg.com/profile_banners/44196397/example.jpg",
  "created_at": "Tue Jun 02 20:12:29 +0000 2009",
  "location": "Mars & Austin, TX"
}

Code Examples

Python

from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run_input = {
    "username": "elonmusk",
    "action": "tweets",
    "maxTweets": 50,
}

run = client.actor("cryptosignals/twitter-scraper").call(run_input=run_input)

# Fetch results from the dataset
for tweet in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"@{tweet['user_handle']}: {tweet['tweet_text'][:80]}...")
    print(f"  Likes: {tweet['likes']}  Retweets: {tweet['retweets']}  Views: {tweet['views']}")
    print()

Export to CSV with Pandas

from apify_client import ApifyClient
import pandas as pd

client = ApifyClient("YOUR_API_TOKEN")

run = client.actor("cryptosignals/twitter-scraper").call(
    run_input={"username": "OpenAI", "action": "tweets", "maxTweets": 100}
)

tweets = list(client.dataset(run["defaultDatasetId"]).iterate_items())
df = pd.DataFrame(tweets)

# Analyze engagement
print(f"Average likes: {df['likes'].mean():.0f}")
print(f"Average views: {df['views'].mean():.0f}")
print(f"Top tweet: {df.loc[df['likes'].idxmax()]['tweet_text'][:100]}")

df.to_csv("tweets.csv", index=False)

Node.js / JavaScript

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const run = await client.actor('cryptosignals/twitter-scraper').call({
    username: 'elonmusk',
    action: 'tweets',
    maxTweets: 50,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
for (const tweet of items) {
    console.log(`@${tweet.user_handle}: ${tweet.tweet_text.slice(0, 80)}...`);
    console.log(`  Likes: ${tweet.likes}  Retweets: ${tweet.retweets}  Views: ${tweet.views}`);
    console.log();
}

cURL (Direct API Call)

# Start the actor run
curl -X POST \
  "https://api.apify.com/v2/acts/cryptosignals~twitter-scraper/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username": "elonmusk", "action": "tweets", "maxTweets": 20}'

# Fetch results (replace DATASET_ID with the ID from the run response)
curl "https://api.apify.com/v2/datasets/DATASET_ID/items?token=YOUR_API_TOKEN&format=json"

Pricing

This actor uses a pay-per-event pricing model — you only pay for the data you actually receive.

| Event | Price |
| --- | --- |
| Tweet scraped | $0.005 per tweet |
| Profile scraped | $0.005 per profile |

Cost Examples

| Scenario | Tweets | Cost |
| --- | --- | --- |
| Quick check on one account | 10 tweets | $0.05 |
| Daily brand monitoring | 100 tweets/day | $0.50/day (~$15/month) |
| Weekly competitor analysis (5 accounts) | 500 tweets/week | $2.50/week (~$10/month) |
| Research dataset | 1,000 tweets | $5.00 |
| Large-scale analysis | 10,000 tweets | $50.00 |
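These figures follow directly from the per-event price; a small helper makes the arithmetic explicit:

```python
# Cost estimator for the pay-per-event model described above:
# $0.005 per tweet or profile scraped, no minimums.
PRICE_PER_EVENT = 0.005

def estimate_cost(events: int) -> float:
    """Dollar cost for a given number of scraped tweets/profiles."""
    return round(events * PRICE_PER_EVENT, 2)

def monthly_cost(events_per_day: int, days: int = 30) -> float:
    """Projected cost of a daily scraping schedule over a month."""
    return estimate_cost(events_per_day * days)
```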

Compare with Twitter API: The Basic plan costs $100/month for 10,000 tweets. This scraper delivers the same 10,000 tweets for $50 — with no monthly commitment, no application process, and pay-as-you-go flexibility.

You can also use the Apify Free plan (which includes $5/month in platform credits) to try the scraper at no additional cost.


Scheduling & Automation

Set up automatic recurring runs to collect tweet data on a schedule:

  1. Go to the actor's page on Apify Console
  2. Click "Schedule" in the top menu
  3. Set your desired frequency (hourly, daily, weekly)
  4. Configure the input parameters
  5. Results are saved to a named dataset you can access via API

Common Scheduling Patterns

| Use Case | Frequency | maxTweets | Estimated Monthly Cost |
| --- | --- | --- | --- |
| Brand alerts | Every 4 hours | 20 | ~$18/month |
| Daily digest | Once daily | 100 | ~$15/month |
| Weekly report | Once weekly | 500 | ~$10/month |

Webhook Integration

Set up a webhook URL in the actor's run configuration to get notified (or trigger downstream processes) whenever a run completes. This works great with:

  • Zapier / Make — pipe tweet data into Google Sheets, Slack, or email
  • Custom endpoints — send data to your own API for processing
  • Apify integrations — chain with other actors for enrichment

Output & Export Options

Results are stored in an Apify dataset and can be exported in multiple formats:

  • JSON — structured data, ideal for programmatic access
  • CSV — open in Excel, Google Sheets, or import into databases
  • XLSX — native Excel format
  • XML — for legacy system integration
  • RSS — subscribe to results as a feed

Access your data through the Apify Console, API, or client libraries.
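For programmatic exports, the dataset-items endpoint shown in the cURL example accepts a format query parameter; a small URL builder might look like this (the endpoint path and format values are taken from the Apify API, the helper itself is illustrative):

```python
# Build export URLs for GET /v2/datasets/{id}/items, where the
# format query parameter selects the export type.
from urllib.parse import urlencode

BASE = "https://api.apify.com/v2/datasets"

def export_url(dataset_id: str, token: str, fmt: str = "json") -> str:
    """URL that downloads a run's dataset in the requested format."""
    allowed = {"json", "csv", "xlsx", "xml", "rss"}
    if fmt not in allowed:
        raise ValueError(f"unsupported format: {fmt}")
    query = urlencode({"token": token, "format": fmt})
    return f"{BASE}/{dataset_id}/items?{query}"
```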


How It Works

The scraper uses a multi-method approach for maximum reliability:

  1. Twitter GraphQL API — Uses publicly available endpoints (the same ones twitter.com uses) with guest tokens for unauthenticated access
  2. Twitter Syndication API — Falls back to Twitter's embed/syndication endpoints for timeline data
  3. Nitter Mirrors — If Twitter endpoints are rate-limited, the scraper tries multiple Nitter instances as a last resort

This layered approach means the scraper keeps working even when individual methods temporarily fail. Rate limits are handled automatically with exponential backoff and retries.
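In generic form, the layered strategy looks roughly like this (a sketch of the pattern, not the actor's actual implementation):

```python
# Try each data-source method in order; retry transient failures with
# exponential backoff before falling through to the next method.
import time

def fetch_with_fallback(methods, retries=3, base_delay=1.0, sleep=time.sleep):
    """methods: ordered list of zero-arg callables; first success wins."""
    last_error = None
    for method in methods:
        for attempt in range(retries):
            try:
                return method()
            except Exception as err:  # e.g. rate-limited or blocked
                last_error = err
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("all methods failed") from last_error
```

The sleep parameter is injectable so the backoff can be skipped in tests; in production the defaults apply.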


Limitations

  • Public data only — Cannot access private/protected accounts or DMs
  • Rate limits — Twitter enforces rate limits on public endpoints; the actor handles this with retries and fallbacks, but very large scraping jobs may take longer
  • Maximum 100 tweets per run — For larger datasets, run the actor multiple times or use scheduling
  • No search by keyword — Currently supports username-based scraping only (search feature coming soon)
  • Data freshness — Scrapes live data from Twitter; results reflect what is publicly visible at the time of scraping

Frequently Asked Questions

Do I need a Twitter/X API key or developer account?

No. This scraper works without any Twitter API credentials. You do not need to apply for or maintain a Twitter developer account. Just provide a username and run the actor.

This scraper only accesses publicly available data through public endpoints — the same data any visitor to twitter.com can see. It does not bypass any authentication, access private data, or violate Twitter's technical access controls. Users are responsible for complying with applicable laws and regulations in their jurisdiction regarding data collection and use.

How fresh is the data?

The scraper fetches live data directly from Twitter's servers each time it runs. Data is as fresh as the moment of scraping — typically within seconds of the current state of an account's public timeline.

Can I scrape tweets from private/protected accounts?

No. This scraper only accesses publicly available data. Protected accounts' tweets are not visible to unauthenticated users and cannot be scraped.

What happens if Twitter changes their API or blocks scraping?

The actor uses multiple fallback methods. If one method stops working, it automatically switches to alternatives. We actively monitor and update the scraper to adapt to changes in Twitter's infrastructure.

Can I search tweets by keyword or hashtag?

The current version supports username-based scraping only. Keyword and hashtag search functionality is on our roadmap and will be available in a future update.

How do I scrape multiple accounts?

Run the actor separately for each username, or use Apify's scheduling feature to set up recurring runs for multiple accounts. You can also use the Apify API to programmatically start runs for a list of usernames.

What is pay-per-event pricing?

Instead of paying a flat monthly fee, you pay only for the data you actually receive. Each tweet or profile scraped counts as one event at $0.005. If you scrape 200 tweets, you pay $1.00. No minimums, no commitments, no wasted budget.

Can I integrate this with my existing tools?

Yes. Apify provides native integrations with Zapier, Make (Integromat), Google Sheets, Slack, and more. You can also use webhooks or the REST API to connect with any custom tool or pipeline.

How does this compare to other Twitter scrapers on Apify?

This scraper is designed for simplicity and reliability. It uses multiple fallback methods for consistent data extraction, provides clean flat JSON output (no nested objects to parse), and offers competitive pay-per-event pricing at $0.005 per tweet.


Support & Updates

If you encounter any issues or have feature requests, please open an issue on the actor's Apify page. We actively maintain this scraper and release updates to handle changes in Twitter's infrastructure.

Recent updates:

  • Multi-method fallback system (GraphQL -> Syndication -> Nitter)
  • Media extraction for images, videos, and GIFs
  • View count and bookmark count included in output
  • Automatic rate limit handling with exponential backoff