
Twitter Scraper
Pay $3.50 for 1,000 posts
Note: This Actor is currently under maintenance and may be unreliable.
Scrape tweets from any Twitter user profile. Top Twitter API alternative to scrape Twitter hashtags, threads, replies, followers, images, videos, statistics, and Twitter history. Export scraped data, run the scraper via API, schedule and monitor runs or integrate with other tools.
You can access the Twitter Scraper programmatically from your own applications by using the Apify API. You can choose the language preference from below. To use the Apify API, you’ll need an Apify account and your API token, found in Integrations settings in Apify Console.
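As a minimal sketch of such a programmatic call, the snippet below builds a POST request against the Actor's synchronous run endpoint using only the Python standard library. The token, handles, and input values are placeholders for illustration; replace them with your own before running.

```python
import json
import os
import urllib.parse
import urllib.request

# Placeholder token -- set the APIFY_TOKEN environment variable to use a real one.
API_TOKEN = os.environ.get("APIFY_TOKEN", "<YOUR_API_TOKEN>")

# Endpoint from the Actor's OpenAPI definition: runs the Actor synchronously
# and returns the scraped dataset items in the response body.
BASE_URL = "https://api.apify.com/v2"
ENDPOINT = "/acts/quacker~twitter-scraper/run-sync-get-dataset-items"
url = f"{BASE_URL}{ENDPOINT}?{urllib.parse.urlencode({'token': API_TOKEN})}"

# Input matching the Actor's inputSchema; proxyConfig is the only required field.
run_input = {
    "handles": ["apify"],          # example handle, not a recommendation
    "tweetsDesired": 100,
    "addUserInfo": True,
    "proxyConfig": {"useApifyProxy": True},
}

request = urllib.request.Request(
    url,
    data=json.dumps(run_input).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Only fire the request when a real token is configured.
if API_TOKEN != "<YOUR_API_TOKEN>":
    with urllib.request.urlopen(request) as response:
        items = json.loads(response.read())
        print(f"Scraped {len(items)} items")
```

The same pattern works for the other two endpoints in the definition below; only the path and the shape of the response differ.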
{
  "openapi": "3.0.1",
  "info": {
    "version": "1.0",
    "x-build-id": "A0eEOUt0YnymJefmS"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/quacker~twitter-scraper/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-quacker-twitter-scraper",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns the Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/quacker~twitter-scraper/runs": {
      "post": {
        "operationId": "runs-sync-quacker-twitter-scraper",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/quacker~twitter-scraper/run-sync": {
      "post": {
        "operationId": "run-sync-quacker-twitter-scraper",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from the Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "required": [
          "proxyConfig"
        ],
        "properties": {
          "handles": {
            "title": "Twitter profile(s)",
            "type": "array",
            "description": "You can add the Twitter handles of the specific profiles you want to scrape. This is a shortcut so that you don't have to enter full profile URLs such as <code>https://twitter.com/username</code>",
            "items": {
              "type": "string"
            }
          },
          "tweetsDesired": {
            "title": "Number of tweets per profile",
            "type": "integer",
            "description": "Note that due to Twitter's limitations, you can only scrape up to 100 tweets per profile. The tweets are ordered by the number of likes they received.",
            "default": 100
          },
          "addUserInfo": {
            "title": "Add user information",
            "type": "boolean",
            "description": "Extends the tweets with user information. You can decrease the size of your dataset by turning this off.",
            "default": true
          },
          "startUrls": {
            "title": "Or use direct tweet URL(s)",
            "type": "array",
            "description": "This lets you tell the scraper where to start. You can enter tweet URLs one by one, or link to or upload a text file with a list of URLs.",
            "default": [],
            "items": {
              "type": "object",
              "required": [
                "url"
              ],
              "properties": {
                "url": {
                  "type": "string",
                  "title": "URL of a web page",
                  "format": "uri"
                }
              }
            }
          },
          "proxyConfig": {
            "title": "Proxy configuration",
            "type": "object",
            "description": "",
            "default": {
              "useApifyProxy": true
            }
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Twitter Scraper OpenAPI definition
OpenAPI is a standard for designing and describing RESTful APIs, allowing developers to define API structure, endpoints, and data formats in a machine-readable way. It simplifies API development, integration, and documentation.
OpenAPI works well with AI agents and GPTs because it standardizes how these systems interact with APIs, enabling reliable integrations and efficient communication.
By defining machine-readable API specifications, OpenAPI allows AI models like GPTs to understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and provides context-aware responses, making OpenAPI a core component for AI applications.
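As a small illustration of that machine-readability, the sketch below parses a trimmed copy of the definition above and enumerates its operations, which is essentially what an AI agent or client generator does with a full spec. The spec excerpt is abbreviated from the JSON shown earlier.

```python
import json

# A trimmed excerpt of the Twitter Scraper definition above -- enough to show
# how a client (or an AI agent) can discover endpoints mechanically.
spec_text = """
{
  "openapi": "3.0.1",
  "servers": [{"url": "https://api.apify.com/v2"}],
  "paths": {
    "/acts/quacker~twitter-scraper/run-sync-get-dataset-items": {
      "post": {"operationId": "run-sync-get-dataset-items-quacker-twitter-scraper"}
    },
    "/acts/quacker~twitter-scraper/runs": {
      "post": {"operationId": "runs-sync-quacker-twitter-scraper"}
    }
  }
}
"""

spec = json.loads(spec_text)
base_url = spec["servers"][0]["url"]

# Walk paths -> HTTP methods and collect one entry per operation.
operations = []
for path, methods in spec["paths"].items():
    for method, op in methods.items():
        operations.append((method.upper(), base_url + path, op["operationId"]))

for method, full_url, op_id in operations:
    print(f"{method} {full_url}  ({op_id})")
```

Because the structure is standardized, the same loop works unchanged on any OpenAPI 3.x document, not just this Actor's.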
You can download the OpenAPI definitions for Twitter Scraper from the options below:
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
You can also check out our other API clients:
Actor Metrics
783 monthly users
126 bookmarks
75% runs succeeded
Created in Jun 2019
Modified 3 months ago