🏯 Tweet Scraper V2 ($0.4 / 1K tweets) - X / Twitter Scraper

Pricing

from $0.40 / 1,000 tweets


Developed by

API Dojo

Maintained by Community

⚡️ Lightning-fast search, URL, list, and profile scraping, with customizable filters. At $0.40 per 1,000 tweets and 30-80 tweets per second, it is ideal for researchers, entrepreneurs, and businesses! Get comprehensive insights from Twitter (X) now!

2.6 (72)


719

Total users: 17K
Monthly users: 2.1K
Runs succeeded: >99%
Issues response: 5.3 hours
Last modified: 13 hours ago


Downloading datasets

Closed

yoy48kes opened this issue
8 days ago

Hi,

I have a few questions on how to use the API.

  1. How can I download the dataset from a run? I did it, but it didn't give me the dataset showing "all items", just the base one, and I need all items.

  2. If I want to do multiple runs, can I do it with a script that keeps changing search terms and starts a new run after the previous one has finished?

apidojo

Hello,

1- You can use the "Export results" button on the run details page. I am attaching a screenshot for reference. Please refer to the Apify docs to learn more about datasets: https://docs.apify.com/platform/storage/dataset

2- Yes, you can, however you can only have 1 concurrent run. You need to wait for the active one to finish and then run a new one. You can read more about the terms at https://apify.com/apidojo/tweet-scraper#important-note

Cheers
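
The sequential pattern described in point 2 can be sketched as follows. This is a hypothetical sketch, not official actor documentation: `searchTerms` and `maxItems` are assumed input field names (verify them against the actor's input schema), and the token is a placeholder.

```python
# Hypothetical sketch of sequential runs: ApifyClient's .call() blocks until
# the run finishes, so a plain loop never exceeds the 1-concurrent-run limit.

search_terms = ["apify", "web scraping", "twitter api"]

def build_run_input(term):
    # "searchTerms" and "maxItems" are assumed field names; check the
    # actor's input schema before relying on them.
    return {"searchTerms": [term], "maxItems": 100}

# Network part, commented out so the sketch stays self-contained:
# from apify_client import ApifyClient
# client = ApifyClient("<APIFY_TOKEN>")
# for term in search_terms:
#     run = client.actor("61RPP7dywgiy0JPD0").call(run_input=build_run_input(term))
#     print(term, "->", run["defaultDatasetId"])

print(build_run_input("apify"))
```

Because `.call()` only returns after the run completes, no extra polling or concurrency handling is needed for sequential runs.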


yoy48kes

7 days ago

Thanks, but what I wanted to know is if there is a way to download the dataset showing "all fields" also from the API.

my code is:

```python
import requests

# `client` (an ApifyClient instance) and `run_input` are defined earlier

# Run the Actor and wait for it to finish
run = client.actor("61RPP7dywgiy0JPD0").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
# for item in client.dataset(run["defaultDatasetId"]).iterate_items():
#     print(item)

# Get dataset ID
dataset_id = run["defaultDatasetId"]

# Create CSV download URL
csv_url = f"https://api.apify.com/v2/datasets/{dataset_id}/items?format=csv&clean=true"

# Download CSV content and save to file
response = requests.get(csv_url)
if response.status_code == 200:
    with open("tweets_output.csv", "wb") as f:
        f.write(response.content)
    print("✅ CSV file saved as 'tweets_output.csv'")
else:
    print(f"Failed to download CSV: HTTP {response.status_code}")
```

This works but only gives me the standard dataset, not all the hidden fields that appear when (on the console) I click on "All fields".

what should I change to get that dataset (with all the fields)?

apidojo

Hello,

Unfortunately, we cannot give code-level support, as these features belong to Apify, not to the scraper.

Have you checked the documentation? You can also ask Apify support or ask directly in the Discord channel.

I hope this helps.

Cheers
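
One hedged note on the "All fields" question above: per Apify's dataset documentation, the `clean` query parameter skips hidden fields (keys beginning with `#`), so the `clean=true` in the URL from the earlier snippet is likely what strips them. A minimal sketch of building the items URL without it (the dataset ID and output filename are placeholders; verify the parameter's behavior against the current Apify API docs):

```python
# Sketch: build the dataset-items URL with clean=false so hidden fields
# (keys beginning with "#") are not stripped. The `clean` semantics are
# per Apify's dataset API docs; check the current documentation.

def items_url(dataset_id: str, fmt: str = "csv", clean: bool = False) -> str:
    clean_flag = "true" if clean else "false"
    return (
        f"https://api.apify.com/v2/datasets/{dataset_id}/items"
        f"?format={fmt}&clean={clean_flag}"
    )

# Network part, commented out so the sketch stays self-contained:
# import requests
# response = requests.get(items_url(run["defaultDatasetId"]))
# response.raise_for_status()
# with open("tweets_output.csv", "wb") as f:
#     f.write(response.content)

print(items_url("EXAMPLE_DATASET_ID"))
```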