
Twitter (X.com) Scraper Unlimited: No Rate-Limits
This Actor is paid per event

Introducing Twitter Scraper Unlimited, the most comprehensive Twitter data extraction solution available. Our enterprise-grade scraper offers unmatched capabilities with a transparent event-based pricing model, making it perfect for both small-scale and large-scale data extraction needs.
Actor Metrics
629 monthly users
4.5 / 5 (6)
93 bookmarks
71% runs succeeded
2.6 hours response time
Created in May 2024
Modified 6 hours ago
You can access Twitter (X.com) Scraper Unlimited: No Rate-Limits programmatically from your own applications using the Apify API. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations settings in Apify Console.
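For illustration, here is a minimal sketch of running the Actor synchronously and reading its dataset items, assuming the Python requests library is installed and your token is available in an APIFY_TOKEN environment variable. The endpoint path and the input fields come from the OpenAPI definition shown below; the search terms and other input values are placeholders only.

# Minimal sketch: run the Actor and fetch its dataset items in one call.
# Assumes the `requests` library and an APIFY_TOKEN environment variable.
import os
import requests

APIFY_TOKEN = os.environ["APIFY_TOKEN"]  # your Apify API token

url = (
    "https://api.apify.com/v2/acts/apidojo~twitter-scraper-lite"
    "/run-sync-get-dataset-items"
)

# Input object conforming to the Actor's inputSchema (see the definition below).
run_input = {
    "searchTerms": ["web scraping"],
    "sort": "Latest",
    "maxItems": 100,
}

response = requests.post(url, params={"token": APIFY_TOKEN}, json=run_input)
response.raise_for_status()

tweets = response.json()  # dataset items returned once the run finishes
print(f"Fetched {len(tweets)} items")

The same input object works for the /runs and /run-sync endpoints defined below; only the shape of the response differs.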
{
  "openapi": "3.0.1",
  "info": {
    "version": "0.0",
    "x-build-id": "WSQ2IgPNgCWPtaJrR"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/apidojo~twitter-scraper-lite/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-apidojo-twitter-scraper-lite",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/apidojo~twitter-scraper-lite/runs": {
      "post": {
        "operationId": "runs-sync-apidojo-twitter-scraper-lite",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/apidojo~twitter-scraper-lite/run-sync": {
      "post": {
        "operationId": "run-sync-apidojo-twitter-scraper-lite",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "properties": {
          "searchTerms": {
            "title": "What search terms do you want to scrape?",
            "type": "array",
            "description": "If you add search terms, the scraper will find and extract tweets that mention those terms. Alternatively, see further down to scrape by Twitter URL.",
            "items": {
              "type": "string"
            }
          },
          "sort": {
            "title": "Do you want to filter by content?",
            "enum": [
              "Top",
              "Latest"
            ],
            "type": "string",
            "description": "This setting will change how the data is received by the scraper. Setting it to latest yields more results."
          },
          "maxItems": {
            "title": "Maximum number of tweets",
            "type": "integer",
            "description": "This value lets you set the maximum number of tweets to retrieve. Twitter has a default limit of around 800 tweets per query. Check the README for workarounds."
          },
          "start": {
            "title": "Tweets from this date",
            "type": "string",
            "description": "Scrape tweets starting from this date"
          },
          "end": {
            "title": "Tweets until this date",
            "type": "string",
            "description": "Scrape tweets until this date"
          },
          "twitterHandles": {
            "title": "Do you want to scrape by Twitter handle?",
            "type": "array",
            "description": "You can add the twitter handles of specific profiles you want to scrape. This is a shortcut so that you don't have to add full username URLs like https://twitter.com/apify",
            "items": {
              "type": "string"
            }
          },
          "startUrls": {
            "title": "Do you want to scrape by Twitter URL?",
            "type": "array",
            "description": "This lets you tell the scraper where to start. You can enter Twitter URLs one by one. You can also link to or upload a text file with a list of URLs",
            "items": {
              "type": "string"
            }
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Twitter (X.com) Scraper Unlimited: No Rate-Limits OpenAPI definition
OpenAPI is a standard for designing and describing RESTful APIs, allowing developers to define API structure, endpoints, and data formats in a machine-readable way. It simplifies API development, integration, and documentation.
OpenAPI also works well with AI agents and GPTs because it standardizes how these systems interact with APIs, enabling reliable integrations and efficient communication.
By providing machine-readable API specifications, OpenAPI lets AI models like GPTs understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and supports context-aware responses, making OpenAPI a core component for AI applications.
You can download the OpenAPI definition for Twitter (X.com) Scraper Unlimited: No Rate-Limits from the download options on this page.
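Once downloaded, the definition can be inspected with any standard JSON tooling. Below is a small sketch that lists the endpoints the Actor exposes, assuming you saved the definition locally as openapi.json (the filename is only an example).

# Minimal sketch: list the Actor's API endpoints from the downloaded definition.
import json

with open("openapi.json", "r", encoding="utf-8") as f:
    spec = json.load(f)

# Each path entry maps HTTP methods to operations, as in the definition above.
for path, methods in spec["paths"].items():
    for method, operation in methods.items():
        print(f"{method.upper()} {path} -> {operation['operationId']}")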
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
You can also check out our other API clients.