
Audio and Video Transcript (OpenAI Whisper)
This Actor transcribes audio or video files from publicly accessible URLs using OpenAI's Whisper API. To use it, you'll need to provide your own OpenAI API key. It supports multiple languages and offers highly customizable parameters, giving you precise control over the transcription process.
Rating: 5.0 (1)
Pricing: $4.99/month + usage
Monthly users: 7
Runs succeeded: >99%
Response time: 1.3 hours
Last modified: a month ago
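Based on the input schema in the OpenAPI definition below, a typical input object might look like the following sketch. The file URL and API key are placeholders, and the optional fields show their documented defaults or illustrative values:

# Illustrative Actor input, following the inputSchema from the OpenAPI
# definition below. Only "url" and "openai_api_key" are required.
run_input = {
    "url": [  # one or more publicly accessible audio/video file URLs
        {"url": "https://example.com/interview.mp3"},  # placeholder URL
    ],
    "openai_api_key": "<YOUR_OPENAI_API_KEY>",  # your secret OpenAI key
    "language": "Auto-detect",   # or any language from the enum, e.g. "English"
    "response_format": "text",   # text | srt | vtt | json | verbose_json
    "temperature": "0.0",        # note: a string matching ^[0-9]*\.?[0-9]+$
    "word_timestamps": False,    # only honored with "verbose_json"
}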
You can access the Audio and Video Transcript (OpenAI Whisper) Actor programmatically from your own applications using the Apify API. To use the Apify API, you'll need an Apify account and your API token, which you can find under Integrations settings in Apify Console.
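As a minimal sketch in Python (assuming the requests library is installed; the endpoint path and token query parameter come from the OpenAPI definition below, while the file URL and keys are placeholders):

import requests

APIFY_TOKEN = "<YOUR_APIFY_TOKEN>"  # from Integrations settings in Apify Console

# Minimal required input per the inputSchema (placeholder values).
run_input = {
    "url": [{"url": "https://example.com/interview.mp3"}],
    "openai_api_key": "<YOUR_OPENAI_API_KEY>",
}

# Run the Actor synchronously and fetch its dataset items in one call.
response = requests.post(
    "https://api.apify.com/v2/acts/vittuhy~audio-and-video-transcript"
    "/run-sync-get-dataset-items",
    params={"token": APIFY_TOKEN},
    json=run_input,
    timeout=600,  # transcribing long files can take a while
)
response.raise_for_status()
print(response.json())  # dataset items; exact shape depends on the Actor's output

The Actor's full OpenAPI definition, which the sketch above is based on, follows: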
{
  "openapi": "3.0.1",
  "info": {
    "version": "0.0",
    "x-build-id": "EUElNUgj1i2Mx1t7C"
  },
  "servers": [
    {
      "url": "https://api.apify.com/v2"
    }
  ],
  "paths": {
    "/acts/vittuhy~audio-and-video-transcript/run-sync-get-dataset-items": {
      "post": {
        "operationId": "run-sync-get-dataset-items-vittuhy-audio-and-video-transcript",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    },
    "/acts/vittuhy~audio-and-video-transcript/runs": {
      "post": {
        "operationId": "runs-sync-vittuhy-audio-and-video-transcript",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor and returns information about the initiated run in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/runsResponseSchema"
                }
              }
            }
          }
        }
      }
    },
    "/acts/vittuhy~audio-and-video-transcript/run-sync": {
      "post": {
        "operationId": "run-sync-vittuhy-audio-and-video-transcript",
        "x-openai-isConsequential": false,
        "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
        "tags": [
          "Run Actor"
        ],
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/inputSchema"
              }
            }
          }
        },
        "parameters": [
          {
            "name": "token",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            },
            "description": "Enter your Apify token here"
          }
        ],
        "responses": {
          "200": {
            "description": "OK"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "inputSchema": {
        "type": "object",
        "required": [
          "url",
          "openai_api_key"
        ],
        "properties": {
          "url": {
            "title": "File URL",
            "type": "array",
            "description": "Publicly accessible URL(s) of the audio/video file(s).",
            "items": {
              "type": "object",
              "required": [
                "url"
              ],
              "properties": {
                "url": {
                  "type": "string",
                  "title": "URL of a web page",
                  "format": "uri"
                }
              }
            }
          },
          "language": {
            "title": "Language",
            "enum": [
              "Auto-detect",
              "Afrikaans",
              "Albanian",
              "Amharic",
              "Arabic",
              "Armenian",
              "Assamese",
              "Azerbaijani",
              "Bashkir",
              "Basque",
              "Belarusian",
              "Bengali",
              "Bosnian",
              "Breton",
              "Bulgarian",
              "Burmese",
              "Catalan",
              "Chinese",
              "Croatian",
              "Czech",
              "Danish",
              "Dutch",
              "English",
              "Esperanto",
              "Estonian",
              "Faroese",
              "Finnish",
              "French",
              "Galician",
              "Georgian",
              "German",
              "Greek",
              "Gujarati",
              "Haitian Creole",
              "Hausa",
              "Hawaiian",
              "Hebrew",
              "Hindi",
              "Hungarian",
              "Icelandic",
              "Indonesian",
              "Italian",
              "Japanese",
              "Javanese",
              "Kannada",
              "Kazakh",
              "Khmer",
              "Korean",
              "Lao",
              "Latin",
              "Latvian",
              "Lithuanian",
              "Luxembourgish",
              "Macedonian",
              "Malagasy",
              "Malay",
              "Malayalam",
              "Maltese",
              "Maori",
              "Marathi",
              "Mongolian",
              "Nepali",
              "Norwegian",
              "Norwegian Nynorsk",
              "Occitan",
              "Pashto",
              "Persian",
              "Polish",
              "Portuguese",
              "Punjabi",
              "Romanian",
              "Russian",
              "Sanskrit",
              "Serbian",
              "Shona",
              "Sindhi",
              "Sinhala",
              "Slovak",
              "Slovenian",
              "Somali",
              "Spanish",
              "Sundanese",
              "Swahili",
              "Swedish",
              "Tagalog",
              "Tajik",
              "Tamil",
              "Tatar",
              "Telugu",
              "Thai",
              "Turkish",
              "Turkmen",
              "Ukrainian",
              "Urdu",
              "Uzbek",
              "Vietnamese",
              "Welsh",
              "Yiddish",
              "Yoruba"
            ],
            "type": "string",
            "description": "Select the language of the audio. Choose 'Auto-detect' to let the system determine the language automatically.",
            "default": "Auto-detect"
          },
          "temperature": {
            "title": "Temperature",
            "pattern": "^[0-9]*\\.?[0-9]+$",
            "type": "string",
            "description": "Set the temperature value as a floating-point number.",
            "default": "0.0"
          },
          "response_format": {
            "title": "Response Format",
            "enum": [
              "text",
              "srt",
              "vtt",
              "json",
              "verbose_json"
            ],
            "type": "string",
            "description": "Choose how the transcript should be formatted.",
            "default": "text"
          },
          "word_timestamps": {
            "title": "Word Timestamps",
            "type": "boolean",
            "description": "Include timestamps for each word (only valid for 'verbose_json' format).",
            "default": false
          },
          "prompt": {
            "title": "Prompt",
            "type": "string",
            "description": "Provide additional context to improve transcription accuracy.",
            "default": ""
          },
          "temperature_increment_on_fallback": {
            "title": "Temperature Increment on Fallback",
            "minimum": 0,
            "maximum": 1,
            "type": "integer",
            "description": "Amount to increase the temperature if the initial decoding fails.",
            "default": 0
          },
          "compression_ratio_threshold": {
            "title": "Compression Ratio Threshold",
            "minimum": 1,
            "maximum": 10,
            "type": "integer",
            "description": "Threshold for the compression ratio before rejecting a transcript.",
            "default": 2
          },
          "logprob_threshold": {
            "title": "Log Probability Threshold",
            "type": "integer",
            "description": "Minimum average log probability for a segment to be included.",
            "default": -1
          },
          "no_speech_threshold": {
            "title": "No Speech Threshold",
            "minimum": 0,
            "maximum": 100,
            "type": "integer",
            "description": "Probability of no speech required to consider audio as silent.",
            "default": 1
          },
          "openai_api_key": {
            "title": "OpenAI API Key",
            "type": "string",
            "description": "Your secret OpenAI API key."
          }
        }
      },
      "runsResponseSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "properties": {
              "id": {
                "type": "string"
              },
              "actId": {
                "type": "string"
              },
              "userId": {
                "type": "string"
              },
              "startedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "finishedAt": {
                "type": "string",
                "format": "date-time",
                "example": "2025-01-08T00:00:00.000Z"
              },
              "status": {
                "type": "string",
                "example": "READY"
              },
              "meta": {
                "type": "object",
                "properties": {
                  "origin": {
                    "type": "string",
                    "example": "API"
                  },
                  "userAgent": {
                    "type": "string"
                  }
                }
              },
              "stats": {
                "type": "object",
                "properties": {
                  "inputBodyLen": {
                    "type": "integer",
                    "example": 2000
                  },
                  "rebootCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "restartCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "resurrectCount": {
                    "type": "integer",
                    "example": 0
                  },
                  "computeUnits": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "options": {
                "type": "object",
                "properties": {
                  "build": {
                    "type": "string",
                    "example": "latest"
                  },
                  "timeoutSecs": {
                    "type": "integer",
                    "example": 300
                  },
                  "memoryMbytes": {
                    "type": "integer",
                    "example": 1024
                  },
                  "diskMbytes": {
                    "type": "integer",
                    "example": 2048
                  }
                }
              },
              "buildId": {
                "type": "string"
              },
              "defaultKeyValueStoreId": {
                "type": "string"
              },
              "defaultDatasetId": {
                "type": "string"
              },
              "defaultRequestQueueId": {
                "type": "string"
              },
              "buildNumber": {
                "type": "string",
                "example": "1.0.0"
              },
              "containerUrl": {
                "type": "string"
              },
              "usage": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "integer",
                    "example": 1
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              },
              "usageTotalUsd": {
                "type": "number",
                "example": 0.00005
              },
              "usageUsd": {
                "type": "object",
                "properties": {
                  "ACTOR_COMPUTE_UNITS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATASET_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "KEY_VALUE_STORE_WRITES": {
                    "type": "number",
                    "example": 0.00005
                  },
                  "KEY_VALUE_STORE_LISTS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_READS": {
                    "type": "integer",
                    "example": 0
                  },
                  "REQUEST_QUEUE_WRITES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_INTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "DATA_TRANSFER_EXTERNAL_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                    "type": "integer",
                    "example": 0
                  },
                  "PROXY_SERPS": {
                    "type": "integer",
                    "example": 0
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Audio and Video Transcript (OpenAI Whisper) OpenAPI definition
OpenAPI is a standard for designing and describing RESTful APIs, allowing developers to define API structure, endpoints, and data formats in a machine-readable way. It simplifies API development, integration, and documentation.
OpenAPI works well with AI agents and GPTs because it standardizes how these systems interact with external APIs, enabling reliable integrations and efficient communication.
By defining machine-readable API specifications, OpenAPI allows AI models like GPTs to understand and use varied data sources, improving accuracy. This accelerates development, reduces errors, and enables context-aware responses, making OpenAPI a core component for AI applications.
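As a minimal sketch of that idea, the snippet below loads the definition above (assuming it has been saved locally as openapi.json, a hypothetical filename) and lists the operations a tool-using agent could discover from it:

import json

# Load the OpenAPI definition and enumerate the operations it describes.
with open("openapi.json") as f:
    spec = json.load(f)

for path, methods in spec["paths"].items():
    for method, operation in methods.items():
        # e.g. POST /acts/vittuhy~audio-and-video-transcript/run-sync-get-dataset-items
        print(method.upper(), path, "->", operation["operationId"])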
You can download the OpenAPI definitions for Audio and Video Transcript (OpenAI Whisper) from the options below.
If you’d like to learn more about how OpenAPI powers GPTs, read our blog post.
You can also check out our other API clients.
Pricing
Pricing model: Rental
To use this Actor, you have to pay a monthly rental fee to the developer. The rent is subtracted from your prepaid usage every month after the free trial period. You also pay for the Apify platform usage.
Free trial: 30 minutes
Price: $4.99