Twitter Scraper

quacker/twitter-scraper

Scrape tweets from any Twitter user profile. Top Twitter API alternative to scrape Twitter hashtags, threads, replies, followers, images, videos, statistics, and Twitter history. Download your data in any format, including JSON and Excel. Seamless integration with apps, reports, and databases.

What data can Twitter Scraper extract?

Twitter Scraper crawls specified Twitter profiles and URLs, and extracts:

➡️ User information, such as name, Twitter handle (username), location, follower/following counts, profile URL/image/banner, and account creation date.

➡️ List of tweets, retweets, and replies from profiles.

➡️ Statistics for each tweet: the number of favorites, replies, and retweets.

➡️ Hashtag search results: top, latest, people, picture, or video tweets.

Our Twitter Scraper enables you to extract large amounts of data from Twitter. It lets you do much more than the Twitter API, because it has no rate limits, and you don't even need a Twitter account, a registered app, or a Twitter API key.

You can crawl based on a list of Twitter handles or just by using a Twitter URL, such as a search, a trending topic, or a hashtag.

Why use Twitter Scraper?

Scraping Twitter will give you access to the more than 500 million tweets posted every day. You can use that data in lots of different ways:

👉 Track discussions about your brand, products, country, or city.

👉 Monitor your competitors, see how popular they really are, and find out how to get a competitive edge.

👉 Keep an eye on new trends, attitudes, and fashions as they emerge.

👉 Use the data to train AI models or for academic research.

👉 Track sentiment to make sure your investments are protected.

👉 Fight fake news by understanding the pattern of how misinformation spreads.

👉 Explore discussions about travel destinations, services, and amenities, and take advantage of local knowledge.

👉 Analyze consumer habits and develop new products or target underdeveloped niches.

If you would like more inspiration on how scraping social media can help your business or organization, check out our industry pages.

How to use Twitter Scraper

You can read our step-by-step tutorial on how to scrape Twitter if you need some guidance on how to run the scraper.

It is legal to scrape Twitter to extract publicly available information, but you should be aware that the data extracted might contain personal data. Personal data is protected by GDPR in the European Union and by other regulations around the world. You should not scrape personal data unless you have a legitimate reason to do so. If you're unsure whether your reason is legitimate, consult your lawyers. You can also read our blog post on the legality of web scraping.

Want more Twitter scraping options?

If you want to keep your scraping tasks as quick and easy as possible, try one of these specialized Twitter scrapers for simpler, more targeted scraping ⬇️

👉 Twitter URL Scraper

👉 Easy Twitter Search Scraper

👉 Twitter Image Scraper

👉 Twitter History Scraper

👉 Twitter Latest Scraper

👉 Twitter Video Scraper

👉 Twitter Info Scraper

👉 Twitter History Hashtag Scraper

👉 Twitter Profile Scraper

Tips and tricks

Using the URL option

The default option is to scrape using search terms, but you can also scrape by Twitter handles or Twitter URLs. If you want to use the URL option, the scraper supports several Twitter URL types ⬇️
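
The original list of supported URL types is a screenshot that isn't reproduced here. Based on the rest of this document, the accepted URLs look like the following examples (treat the list as indicative rather than exhaustive):

  • Profile: https://twitter.com/elonmusk
  • Search: https://twitter.com/search?q=web%20scraping&src=typed_query
  • Hashtag: https://twitter.com/hashtag/apify
  • Thread/status: https://twitter.com/elonmusk/status/1338857124508684289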

Using cookies to log in

This option lets you log in using the already initialized cookies of a logged-in user. If you use it, the scraper will do as much as possible to prevent the account from being banned: it slows down to just one open page at a time and introduces delays between actions.

It's highly recommended that you don't use your personal account (unless you really have to). You should instead create a new Twitter account to use with this solution. Using your personal account could result in the account being banned by Twitter.

To log in using cookies, you can use a Chrome browser extension such as EditThisCookie. Once you have installed it, open Twitter in your browser, log in with the account you want to use, and export cookies with the extension. This should give you an array of cookies that you can paste as a value for the loginCookies input field.
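
For reference, the exported cookies are a JSON array of cookie objects. Here is a trimmed, hypothetical sketch (auth_token is just an example cookie name and the value is a placeholder; paste the full exported array into loginCookies):

[
  {
    "name": "auth_token",
    "value": "<your-cookie-value>",
    "domain": ".twitter.com",
    "path": "/",
    "secure": true,
    "httpOnly": true
  }
]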

If you log out of the Twitter account connected to the cookies, it will invalidate them, and your solution will stop working.

Input parameters

Twitter Scraper has the following input options:

(Screenshot: Apify - Twitter Scraper input)
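
Since the screenshot isn't reproduced here, this is a minimal, hypothetical input sketch. The handles, startUrls, and loginCookies fields are referenced elsewhere in this document, while tweetsDesired is a placeholder name; check the actor's input schema for the exact fields:

{
  "handles": ["elonmusk"],
  "startUrls": [
    { "url": "https://twitter.com/search?q=cool%20until%3A2020-01-01&src=typed_query" }
  ],
  "tweetsDesired": 100
}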

Twitter data output

You can download the resulting dataset in various formats, such as JSON, HTML, CSV, or Excel. Each item in the dataset is a separate tweet in the following format:

{
  "user": {
    "id_str": "44196397",
    "name": "Elon Musk",
    "screen_name": "elonmusk",
    "location": "",
    "description": "",
    "followers_count": 42583621,
    "fast_followers_count": 0,
    "normal_followers_count": 42583621,
    "friends_count": 104,
    "listed_count": 59150,
    "created_at": "2009-06-02T20:12:29.000Z",
    "favourites_count": 7840,
    "verified": true,
    "statuses_count": 13360,
    "media_count": 801,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1295975423654977537/dHw9JcrK_normal.jpg",
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/44196397/1576183471",
    "has_custom_timelines": true,
    "advertiser_account_type": "promotable_user",
    "business_profile_state": "none",
    "translator_type": "none"
  },
  "id": "1338857124508684289",
  "conversation_id": "1338390123373801472",
  "full_text": "@CyberpunkGame The objective reality is that it is impossible to run an advanced game well on old hardware. This is a much more serious issue: https://t.co/OMNCTa9hJY",
  "reply_count": 792,
  "retweet_count": 669,
  "favorite_count": 17739,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [
    {
      "screen_name": "CyberpunkGame",
      "name": "Cyberpunk 2077",
      "id_str": "821102114"
    }
  ],
  "urls": [
    {
      "url": "https://t.co/OMNCTa9hJY",
      "expanded_url": "https://www.pcgamer.com/the-more-time-i-spend-in-cyberpunk-2077s-world-the-less-i-believe-in-it/",
      "display_url": "pcgamer.com/the-more-time-…"
    }
  ],
  "url": "https://twitter.com/elonmusk/status/1338857124508684289",
  "created_at": "2020-12-15T14:43:07.000Z"
}
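
If you just need a download link, the Apify dataset items endpoint accepts a format parameter; for example, this sketch downloads a run's dataset as CSV (DATASET_ID is a placeholder for the actual dataset ID):

https://api.apify.com/v2/datasets/DATASET_ID/items?format=csv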

You can use a predefined search created with Twitter's Advanced Search as a startUrl, e.g. https://twitter.com/search?q=cool%20until%3A2020-01-01&src=typed_query

This returns only tweets containing "cool" that were posted before 2020-01-01.

Workaround for max tweets limit

By default, Twitter returns at most 3,200 tweets per profile or search. If you need more than that, you can split your start URLs into time slices, like this:

  • https://twitter.com/search?q=(from%3Aelonmusk)%20since%3A2020-03-01%20until%3A2020-04-01&src=typed_query&f=live
  • https://twitter.com/search?q=(from%3Aelonmusk)%20since%3A2020-02-01%20until%3A2020-03-01&src=typed_query&f=live
  • https://twitter.com/search?q=(from%3Aelonmusk)%20since%3A2020-01-01%20until%3A2020-02-01&src=typed_query&f=live

All three URLs target the same profile (elonmusk), but the search is split by month (January, February, and March 2020). You can build such URLs with Twitter's "Advanced Search" at https://twitter.com/search.

You can use bigger intervals for profiles that don't post very often.
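
If you need many slices, you can also generate the URLs programmatically. A minimal sketch, assuming the URL format shown above (monthlySearchUrls is just an illustrative helper, not part of the scraper):

const monthlySearchUrls = (handle, year) => {
    const pad = (n) => String(n).padStart(2, '0');
    const urls = [];
    for (let month = 1; month <= 12; month++) {
        const since = `${year}-${pad(month)}-01`;
        const until = month === 12 ? `${year + 1}-01-01` : `${year}-${pad(month + 1)}-01`;
        const query = `(from:${handle}) since:${since} until:${until}`;
        urls.push(`https://twitter.com/search?q=${encodeURIComponent(query)}&src=typed_query&f=live`);
    }
    return urls;
};

// e.g. monthlySearchUrls('elonmusk', 2020) includes the three example URLs above
console.log(monthlySearchUrls('elonmusk', 2020));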

Other limitations include:

  • Live tweets are capped at roughly 1 day in the past (use the search filters above to get around this)
  • Most search modes (Top, Videos, Pictures) are capped at around 150 tweets

Extend output function

This parameter allows you to change the shape of your dataset output, split arrays into separate dataset items, or filter the output:

async ({ data, item, request }) => {
    item.user = undefined; // removes this field from the output
    delete item.user; // this works as well

    const raw = data.tweets[item['#sort_index']]; // allows you to access the raw data

    item.source = raw.source; // adds "Twitter for ..." to the output

    if (request.userData.search) {
        item.search = request.userData.search; // add the search term to the output
        item.searchUrl = request.loadedUrl; // add the raw search URL to the output
    }

    return item;
}

Filtering items:

async ({ item }) => {
    if (!item.full_text.includes('lovely')) {
        return null; // omit the output if the tweet body doesn't contain the text
    }

    return item;
}

Splitting into multiple dataset items and changing the output completely:

async ({ item }) => {
    // dataset will be full of items like { hashtag: '#somehashtag' }
    // returning an array here will split the result into multiple dataset items
    return item.hashtags.map((hashtag) => {
        return { hashtag: `#${hashtag}` };
    });
}

Extend scraper function

This parameter lets you customize how the scraper works, making it easy to build on the default functionality without creating your own custom version. For example, you can include a search of the trending topics on each page visit:

async ({ page, request, addSearch, addProfile, addThread, customData }) => {
    await page.waitForSelector('[aria-label="Timeline: Trending now"] [data-testid="trend"]');

    const trending = await page.evaluate(() => {
        const trendingEls = $('[aria-label="Timeline: Trending now"] [data-testid="trend"]');

        return trendingEls.map((_, el) => {
            return {
                term: $(el).find('> div > div:nth-child(2)').text().trim(),
                profiles: $(el).find('> div > div:nth-child(3) [role="link"]').map((_, el) => $(el).text()).get()
            }
        }).get();
    });

    for (const { term, profiles } of trending) {
        await addSearch(term); // adds a search using the trending term text

        for (const profile of profiles) {
            await addProfile(profile); // adds a profile using link
        }
    }

    // adds a thread and gets its replies; accepts an id (e.g. from conversation_id) or a URL
    // you can call this multiple times, but each thread will only be added once
    await addThread("1351044768030142464");
}

Additional variables are available inside extendScraperFunction:

async ({ label, response, url }) => {
    if (label === 'response' && response) {
        // inside the page.on('response') callback
        if (url.includes('live_pipeline')) {
            // read the plain text body of the response
            const text = await response.text();
        }
    } else if (label === 'before') {
        // executes before page.on('response') is attached; can be used to intercept requests/responses
    } else if (label === 'after') {
        // executes after the scraping process has finished, even on crash
    }
}

Integrations and Twitter Scraper

Last but not least, Twitter Scraper can be connected with almost any cloud service or web app thanks to integrations on the Apify platform. You can integrate with Make, Zapier, Slack, Airbyte, GitHub, Google Sheets, Google Drive, and more. Or you can use webhooks to carry out an action whenever an event occurs, e.g. get a notification whenever Twitter Scraper successfully finishes a run.
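
As a sketch of the webhook option (ACTOR.RUN.SUCCEEDED is Apify's standard run-success event type; the request URL is a placeholder for your own endpoint), a webhook definition can look roughly like this:

{
  "eventTypes": ["ACTOR.RUN.SUCCEEDED"],
  "requestUrl": "https://example.com/twitter-scraper-finished"
}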

Using Twitter Scraper with the Apify API

The Apify API gives you programmatic access to the Apify platform. The API is organized around RESTful HTTP endpoints that enable you to manage, schedule, and run Apify actors. The API also lets you access any datasets, monitor actor performance, fetch results, create and update versions, and more.

To access the API using Node.js, use the apify-client NPM package. To access the API using Python, use the apify-client PyPI package.
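
For instance, here is a minimal sketch of starting the actor and fetching its results with the apify-client NPM package (the input field is illustrative, as noted above; replace MY-APIFY-TOKEN with your own API token):

const { ApifyClient } = require('apify-client');

const client = new ApifyClient({ token: 'MY-APIFY-TOKEN' });

(async () => {
    // Start the actor and wait for the run to finish
    const run = await client.actor('quacker/twitter-scraper').call({
        handles: ['elonmusk'], // illustrative input; see the Input tab for the exact fields
    });

    // Fetch the scraped tweets from the run's default dataset
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(`Scraped ${items.length} tweets`);
})();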

Check out the Apify API reference docs for full details or click on the API tab for code examples.

Industries

See how Twitter Scraper is used in industries around the world