# Transcribe Voice Memo to Text — Speaker Labels & Timestamps (`sian.agency/transcribe-voice-memo-to-text`) Actor

Transcribe iPhone and Android voice memos to text. Speaker labels, word-level timestamps, SRT/VTT. Bulk upload, 99+ languages. Try free.

- **URL**: https://apify.com/sian.agency/transcribe-voice-memo-to-text.md
- **Developed by:** [SIÁN OÜ](https://apify.com/sian.agency) (community)
- **Categories:** Automation, Developer tools, Other
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded, 1 bookmark
- **User rating**: No ratings yet

## Pricing

from $0.15 / 1,000 audio seconds processed

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.
Since this Actor supports Apify Store discounts, the price decreases the higher your subscription plan.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Transcribe Voice Memo to Text — Speaker Labels & Timestamps

[![SIÁN Agency Store](https://img.shields.io/badge/Store-SI%C3%81N%20Agency-97cc64)](https://apify.com/sian.agency?fpr=sian)
[![Telegram Support](https://img.shields.io/badge/Telegram-Support%20Group-0088cc?logo=telegram)](https://t.me/+vyh1sRE08sAxMGRi)
[![Instagram AI Transcript Extractor](https://img.shields.io/badge/Platform-Instagram-E4405F)](https://apify.com/sian.agency/instagram-ai-transcript-extractor?fpr=sian)
[![Best TikTok AI Transcript Extractor](https://img.shields.io/badge/Platform-TikTok-25F4EE)](https://apify.com/sian.agency/best-tiktok-ai-transcript-extractor?fpr=sian)
[![YouTube Shorts AI Transcript Extractor](https://img.shields.io/badge/Platform-YouTube%20Shorts-FF0000)](https://apify.com/sian.agency/youtube-shorts-ai-transcript-and-metadata-extractor?fpr=sian)
[![Facebook AI Transcript Extractor](https://img.shields.io/badge/Platform-Facebook-1877F2)](https://apify.com/sian.agency/facebook-ai-transcript-extractor?fpr=sian)

> **Transcribe iPhone and Android voice memos to text.** Drop your `.m4a` files into the upload field, get clean transcripts with speaker labels, word-level timestamps, and SRT/VTT subtitles ready for video repurposing. 99+ languages, bulk processing, free tier available.

---

### How to transcribe a voice memo in 4 steps

1. **Upload your voice memo files** — drop `.m4a` (iPhone Voice Memos), `.mp3`, `.wav`, or any common audio format into the **Upload Voice Memo Files** field. You can drag in many at once for bulk transcription.
2. **Pick your options** — auto-detect language or pick from 99+, toggle speaker diarization for multi-speaker recordings, and optionally translate non-English audio to English.
3. **Run the Actor** — files process 10 at a time in parallel on the paid tier; bulk batches usually finish in minutes.
4. **Download results** — every file lands in the dataset with the transcript, segment-level + word-level timestamps, speaker labels, and ready-to-use SRT/VTT subtitle strings.

Supported formats: M4A, MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM. Max 1 GB per file on the paid tier.

---

### Example output — voice memo transcript with speaker labels

A typical successful transcription returns:

```json
{
  "transcript": "Quick note for the team — the client wants the launch pushed to Q3...",
  "detected_language": "en",
  "duration": 42.18,
  "segments": [
    {
      "id": 0,
      "text": "Quick note for the team — the client wants the launch pushed to Q3.",
      "start": 0.20,
      "end": 4.86,
      "speaker": "SPEAKER_00",
      "language": "en",
      "words": [
        { "word": "Quick",  "start": 0.20, "end": 0.42, "speaker": "SPEAKER_00" },
        { "word": "note",   "start": 0.42, "end": 0.68, "speaker": "SPEAKER_00" },
        { "word": "for",    "start": 0.68, "end": 0.84, "speaker": "SPEAKER_00" }
      ]
    }
  ],
  "srt": "1\n00:00:00,200 --> 00:00:04,860\nQuick note for the team — the client wants the launch pushed to Q3.",
  "vtt": "WEBVTT\n\n00:00:00.200 --> 00:00:04.860\nQuick note for the team — the client wants the launch pushed to Q3.",
  "speakers": ["SPEAKER_00"],
  "languages": ["en"],
  "fileSizeMB": 0.31,
  "success": true
}
```

Every result includes the full transcript, segment-level timestamps, word-level timestamps, language detection, voice memo duration in seconds, file size, ready-to-use `srt` and `vtt` subtitle strings, and (when speaker diarization is enabled) speaker labels per segment and per word.
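Those word-level timestamps make it easy to locate where a phrase was spoken. A minimal sketch in Python (the `result` dict below mirrors the example output above; field names are taken from that sample):

```python
# Find the start time of a phrase using the word-level timestamps.
# `result` mirrors the example output above (field names assumed from that sample).

def find_phrase_start(result, phrase):
    """Return the start time (seconds) of the first occurrence of `phrase`, or None."""
    target = phrase.lower().split()
    for segment in result["segments"]:
        words = [w["word"].lower().strip(".,!?") for w in segment["words"]]
        for i in range(len(words) - len(target) + 1):
            if words[i:i + len(target)] == target:
                return segment["words"][i]["start"]
    return None

result = {
    "segments": [{
        "words": [
            {"word": "Quick", "start": 0.20, "end": 0.42},
            {"word": "note",  "start": 0.42, "end": 0.68},
            {"word": "for",   "start": 0.68, "end": 0.84},
        ]
    }]
}

print(find_phrase_start(result, "note for"))  # → 0.42
```

The same lookup works for clip extraction: once you have the `start` of a quote, feed it to your video editor or `ffmpeg` seek offset.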

***

### Speaker diarization

Toggle the **Speaker Diarization** input to identify who's speaking in multi-person voice memos — interview-style recordings, family conversations, group brainstorms. Each segment and each word receives a `speaker` label (`SPEAKER_00`, `SPEAKER_01`, …) so you can keep one person's quotes separate from another's. Powered by pyannote-audio, the same model used in production speech-to-text pipelines. Charged per audio second; only billed when enabled.
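With diarization enabled, the per-segment `speaker` labels can be folded into an attributed transcript. A sketch, assuming the `segments` field shape from the example output above:

```python
# Group consecutive segments by speaker label to produce an attributed transcript.
# Field names (`speaker`, `text`) follow the example output shown earlier.

def by_speaker(segments):
    lines = []
    for seg in segments:
        label = seg.get("speaker", "UNKNOWN")
        if lines and lines[-1][0] == label:
            # Same speaker continues: merge into the previous line.
            lines[-1] = (label, lines[-1][1] + " " + seg["text"])
        else:
            lines.append((label, seg["text"]))
    return [f"{label}: {text}" for label, text in lines]

segments = [
    {"speaker": "SPEAKER_00", "text": "How did the launch go?"},
    {"speaker": "SPEAKER_01", "text": "Better than expected."},
    {"speaker": "SPEAKER_01", "text": "We hit the Q3 target."},
]

print("\n".join(by_speaker(segments)))
```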

***

### SRT / VTT export for video repurposing

Every transcription returns ready-to-use `srt` and `vtt` subtitle strings. Save the field value as a `.srt` or `.vtt` file and:

- Drop it into a video editor to caption a video version of your voice memo (great for short-form social posts)
- Use it as a starter caption track for YouTube uploads
- Add HTML5 `<track>` accessibility captions to web embeds

Set **Timestamp Granularities** to `word` for cue precision down to individual words.
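Saving the subtitle fields is a one-liner per file. A sketch, where `item` stands in for a dataset record (field names from the example output above):

```python
# Save the `srt` and `vtt` fields from one dataset item to subtitle files.
# `item` is a stand-in for a dataset record; in practice you would fetch it
# via the Apify client as shown in the integration examples.
from pathlib import Path

item = {
    "srt": "1\n00:00:00,200 --> 00:00:04,860\nQuick note for the team.",
    "vtt": "WEBVTT\n\n00:00:00.200 --> 00:00:04.860\nQuick note for the team.",
    "success": True,
}

if item.get("success"):
    Path("memo.srt").write_text(item["srt"], encoding="utf-8")
    Path("memo.vtt").write_text(item["vtt"], encoding="utf-8")
```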

***

### Why voice-memo users choose this Actor

- ✅ **99+ languages** — automatic detection across English, Spanish, French, Mandarin, Arabic, Portuguese, and 90+ more
- 📤 **Direct file upload** — drop `.m4a`, `.mp3`, `.wav` straight from your phone or Mac, no need to host them anywhere first
- 🎤 **Speaker diarization** powered by pyannote-audio — separate the recorder from interview guests automatically
- ⏱️ **Word-level timestamps** on every transcription — `{word, start, end, speaker}` per word, ready for quote search and clip extraction
- 🎬 **SRT *and* VTT subtitles included** on every successful run — perfect for turning a voice memo into a captioned video
- 🚀 **Bulk processing** — drop in 10, 50, or 200 files at once; 10× parallel on the paid tier
- 💰 **Pay per audio second** — no subscriptions, no minimums; only pay for the audio you actually transcribe

***

### Use cases

- 🎓 **Students** — turn lecture and seminar voice memos into searchable study notes; never lose a key concept buried in an hour of audio
- 📰 **Journalists** — transcribe phone-recorded interviews on the go; pull attributed quotes with word-level timestamps
- 💼 **Professionals dictating notes** — convert post-meeting voice recaps, brainstorms, and quick ideas into shareable text
- 🧪 **Qualitative researchers** — preserve participant voice memos with speaker separation for thematic analysis
- ✍️ **Writers and creators** — capture voice notes in the wild, edit them as text drafts later
- 🎙️ **Interview-style podcasters recording on phones** — get clean, attributed transcripts ready for show notes
- 📱 **Anyone using iPhone Voice Memos or Android voice recorders** — the lowest-friction path from spoken word to text

***

### Pricing & tiers

Pay only for the audio seconds you actually transcribe. No subscriptions, no minimums.

| FREE tier | PAID tier |
|---|---|
| Perfect for testing and small jobs | Built for production volume |
| Up to 5 files per run | Unlimited files per run |
| 50 MB max per file | 1 GB max per file |
| 200 MB / 20 minutes monthly | Unlimited monthly volume |
| 3 concurrent files | 10 concurrent files (10× parallel) |
| No credit card required | $0.0005 per audio second |

**Optional add-ons** (only billed when enabled):

| Feature | Price |
|---|---|
| Speaker diarization | $0.0001 per audio second |
| Translate to English | $0.0003 per audio second |
| EU-region processing | $0.0007 per audio second (replaces base $0.0005) |

A 5-minute voice memo on the paid tier costs approximately **$0.15** (transcription only).
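The arithmetic behind that figure is straightforward. A minimal cost estimator using the paid-tier rates from the tables above (non-EU base rate):

```python
# Rough cost estimate for a paid-tier run, using the rates from the tables above.
BASE = 0.0005          # $ per audio second (non-EU base rate)
DIARIZATION = 0.0001   # optional add-on, $ per audio second
TRANSLATE = 0.0003     # optional add-on, $ per audio second

def estimate_cost(seconds, diarization=False, translate=False):
    rate = BASE
    if diarization:
        rate += DIARIZATION
    if translate:
        rate += TRANSLATE
    return round(seconds * rate, 4)

print(estimate_cost(300))                    # 5-minute memo → 0.15
print(estimate_cost(300, diarization=True))  # with diarization → 0.18
```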

***

### Integration examples

#### JavaScript / Node.js

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' });

const run = await client.actor('sian.agency/transcribe-voice-memo-to-text').call({
    audioUrls: ['https://example.com/voice-memo.m4a'],
    speakerDiarization: true,
});

const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(items[0].transcript);
console.log(items[0].srt);
```

#### Python

```python
from apify_client import ApifyClient

client = ApifyClient('YOUR_APIFY_TOKEN')

run = client.actor('sian.agency/transcribe-voice-memo-to-text').call(run_input={
    'audioUrls': ['https://example.com/voice-memo.m4a'],
    'speakerDiarization': True,
})

items = client.dataset(run['defaultDatasetId']).list_items().items
print(items[0]['transcript'])
print(items[0]['vtt'])
```

#### cURL

```bash
curl -X POST 'https://api.apify.com/v2/acts/sian.agency~transcribe-voice-memo-to-text/run-sync-get-dataset-items?token=YOUR_APIFY_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{
    "audioUrls": ["https://example.com/voice-memo.m4a"],
    "speakerDiarization": true
  }'
```

#### n8n / Zapier / Make

Wire this Actor as a downstream step on any "new voice memo synced" trigger (Dropbox, iCloud, Google Drive). The dataset record returned per item includes `transcript`, `segments[].words[]`, `srt`, and `vtt` — drop them into Notion, Slack, Google Sheets, Obsidian, or your CRM with no transformation step.

***

### FAQ

**How accurate is voice memo transcription?**
Powered by an industrial speech-to-text pipeline tuned for natural conversation. Accuracy is typically 95–99% on clean iPhone or Android voice recordings, and lower in noisy environments or with strong accents. Word-level timestamps are returned even when accuracy is imperfect, so you can verify and correct faster than transcribing from scratch.

**What audio formats are supported?**
M4A (iPhone Voice Memos default), MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM. Max 50 MB per file on the free tier, 1 GB per file on the paid tier.

**Can I transcribe non-English voice memos?**
Yes — auto-detection across 99+ languages including Spanish, French, German, Mandarin, Japanese, Portuguese, Arabic, Hindi. Toggle **Translate to English** to receive an English transcript alongside the timestamps.

**Is speaker diarization included?**
Yes, opt-in via the **Speaker Diarization** toggle. Each segment and word gets labeled `SPEAKER_00`, `SPEAKER_01`, etc. Powered by pyannote-audio. Billed at $0.0001 per audio second only when enabled.

**How does pricing work?**
Pay-per-audio-second. The free tier covers small jobs and testing without a credit card. The paid tier is $0.0005 per second of audio, plus optional add-ons for diarization, translation, and EU processing. No subscriptions.

**Can I use this in n8n, Zapier, or Make?**
Yes. The Actor exposes a standard Apify run/dataset API. Use any "trigger → run Actor → use dataset items" pattern. The dataset record includes `transcript`, `segments[].words[]`, `srt`, and `vtt` ready to feed into downstream tools.

**Do I need to host my voice memos somewhere first?**
No. Use the **Upload Voice Memo Files** field to upload directly from your computer or phone. Apify stores the upload in your key-value store and the Actor processes it from there.

**How long does a transcription take?**
A 5-minute voice memo usually finishes in 10–30 seconds. A 60-minute recording takes 1–3 minutes on the paid tier. Bulk batches process 10 files in parallel.

***

### Legal disclaimer

Use this Actor only on voice memos and audio you have rights to transcribe — your own recordings, content with consent, or properly licensed media. The Actor does not retain audio or transcripts beyond the run's lifetime. **EU-region processing** is available via the EU Processing toggle for GDPR-aligned workflows. SIÁN Agency provides this Actor as-is; users are responsible for the legal use of transcribed content.

***

### Support

[![Telegram Support](https://img.shields.io/badge/Telegram-Support%20Group-0088cc?logo=telegram)](https://t.me/+vyh1sRE08sAxMGRi)
[![Email](https://img.shields.io/badge/Email-support%40sian--agency.online-EA4335)](mailto:support@sian-agency.online)
[![SIÁN Agency](https://img.shields.io/badge/Store-SI%C3%81N%20Agency-97cc64)](https://apify.com/sian.agency?fpr=sian)

Join the Telegram support group, email **support@sian-agency.online**, or open an issue on the [SIÁN Agency Apify Store](https://apify.com/sian.agency?fpr=sian) page.

***

### More from SIÁN Agency

Platform-specific scrapers + transcribers:

- [Instagram AI Transcript Extractor](https://apify.com/sian.agency/instagram-ai-transcript-extractor?fpr=sian)
- [Best TikTok AI Transcript Extractor](https://apify.com/sian.agency/best-tiktok-ai-transcript-extractor?fpr=sian)
- [YouTube Shorts AI Transcript Extractor](https://apify.com/sian.agency/youtube-shorts-ai-transcript-and-metadata-extractor?fpr=sian)
- [Facebook AI Transcript Extractor](https://apify.com/sian.agency/facebook-ai-transcript-extractor?fpr=sian)

Browse the full [SIÁN Agency Apify Store](https://apify.com/sian.agency?fpr=sian) for all available Actors.

***

# Actor input schema

## `audioFiles` (type: `array`):

Upload your voice memo files directly from your phone or computer. iPhone Voice Memos export as `.m4a`; Android voice recorders typically export `.m4a`, `.mp3`, or `.wav`. Just drop them in.

**Supported Formats:** M4A, MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM
**Max file size:** 1GB per file

You can also paste URLs in the optional **Voice Memo URLs** field below — both lists are processed together.

## `audioUrls` (type: `array`):

Optional: add direct audio file URLs (one per line) if your voice memos are already hosted somewhere (Dropbox, Google Drive direct link, your own CDN, etc.). Most users only need the **Upload Voice Memo Files** field above.

🚫 **Social media links (Instagram, TikTok, YouTube Shorts, Facebook) are rejected.** Use the specialized actors linked in the description above.

✅ **Valid:** `https://example.com/voice-memo.m4a`, `https://cdn.example.com/recording.wav`

**Supported Formats:** M4A, MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM

## `language` (type: `string`):

Language spoken in the audio. Leave as 'Auto-detect' for automatic language detection.

## `translateToEnglish` (type: `boolean`):

Translate non-English audio into English. Additional charges apply for translation service.

## `useEuServers` (type: `boolean`):

Process data within the EU for GDPR compliance. Higher processing rates apply for EU servers.

## `speakerDiarization` (type: `boolean`):

Identify and label different speakers in the audio (e.g., SPEAKER\_00, SPEAKER\_01). Perfect for meetings, interviews, and multi-person conversations. Additional charges apply.

## Actor input object example

```json
{
  "audioUrls": [
    "https://raw.githubusercontent.com/rara-cyber/podcast-test-file/main/KchSfOdwCJE-56eb7ccd6dc030fa3cc280ab26925c4e234.opus"
  ],
  "language": "auto",
  "translateToEnglish": false,
  "useEuServers": false,
  "speakerDiarization": false
}
```

# Actor output schema

## `audioTranscripts` (type: `string`):

Processed transcription data with timestamps, speaker diarization, detected languages, and file metadata

## `scrapingSummary` (type: `string`):

HTML summary showing successful and failed transcriptions with key metrics

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "audioUrls": [
        "https://raw.githubusercontent.com/rara-cyber/podcast-test-file/main/KchSfOdwCJE-56eb7ccd6dc030fa3cc280ab26925c4e234.opus"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("sian.agency/transcribe-voice-memo-to-text").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "audioUrls": ["https://raw.githubusercontent.com/rara-cyber/podcast-test-file/main/KchSfOdwCJE-56eb7ccd6dc030fa3cc280ab26925c4e234.opus"] }

# Run the Actor and wait for it to finish
run = client.actor("sian.agency/transcribe-voice-memo-to-text").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "audioUrls": [
    "https://raw.githubusercontent.com/rara-cyber/podcast-test-file/main/KchSfOdwCJE-56eb7ccd6dc030fa3cc280ab26925c4e234.opus"
  ]
}' |
apify call sian.agency/transcribe-voice-memo-to-text --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=sian.agency/transcribe-voice-memo-to-text",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Transcribe Voice Memo to Text — Speaker Labels & Timestamps",
        "description": "Transcribe iPhone and Android voice memos to text. Speaker labels, word-level timestamps, SRT/VTT. Bulk upload, 99+ languages. Try free.",
        "version": "1.0",
        "x-build-id": "v0mP6x3tWM5sbrCaB"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/sian.agency~transcribe-voice-memo-to-text/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-sian.agency-transcribe-voice-memo-to-text",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/sian.agency~transcribe-voice-memo-to-text/runs": {
            "post": {
                "operationId": "runs-sync-sian.agency-transcribe-voice-memo-to-text",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/sian.agency~transcribe-voice-memo-to-text/run-sync": {
            "post": {
                "operationId": "run-sync-sian.agency-transcribe-voice-memo-to-text",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "audioFiles": {
                        "title": "Upload Voice Memo Files",
                        "type": "array",
                        "description": "Upload your voice memo files directly from your phone or computer. iPhone Voice Memos export as `.m4a`; Android voice recorders typically export `.m4a`, `.mp3`, or `.wav`. Just drop them in.\n\n**Supported Formats:** M4A, MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM\n**Max file size:** 1GB per file\n\nYou can also paste URLs in the optional **Voice Memo URLs** field below — both lists are processed together."
                    },
                    "audioUrls": {
                        "title": "Voice Memo URLs (optional)",
                        "type": "array",
                        "description": "Optional: add direct audio file URLs (one per line) if your voice memos are already hosted somewhere (Dropbox, Google Drive direct link, your own CDN, etc.). Most users only need the **Upload Voice Memo Files** field above.\n\n🚫 **Social media links (Instagram, TikTok, YouTube Shorts, Facebook) are rejected.** Use the specialized actors linked in the description above.\n\n✅ **Valid:** `https://example.com/voice-memo.m4a`, `https://cdn.example.com/recording.wav`\n\n**Supported Formats:** M4A, MP3, WAV, FLAC, AAC, OPUS, OGG, MP4, MOV, WebM",
                        "items": {
                            "type": "string"
                        }
                    },
                    "language": {
                        "title": "Language (Optional)",
                        "enum": [
                            "auto",
                            "english",
                            "chinese",
                            "german",
                            "spanish",
                            "russian",
                            "korean",
                            "french",
                            "japanese",
                            "portuguese",
                            "turkish",
                            "polish",
                            "catalan",
                            "dutch",
                            "arabic",
                            "swedish",
                            "italian",
                            "indonesian",
                            "hindi",
                            "finnish",
                            "vietnamese",
                            "hebrew",
                            "ukrainian",
                            "greek",
                            "malay",
                            "czech",
                            "romanian",
                            "danish",
                            "hungarian",
                            "tamil",
                            "norwegian",
                            "thai",
                            "urdu",
                            "croatian",
                            "bulgarian",
                            "lithuanian",
                            "latin",
                            "maori",
                            "malayalam",
                            "welsh",
                            "slovak",
                            "telugu",
                            "persian",
                            "latvian",
                            "bengali",
                            "serbian",
                            "azerbaijani",
                            "slovenian",
                            "kannada",
                            "estonian",
                            "macedonian",
                            "breton",
                            "basque",
                            "icelandic",
                            "armenian",
                            "nepali",
                            "mongolian",
                            "bosnian",
                            "kazakh",
                            "albanian",
                            "swahili",
                            "galician",
                            "marathi",
                            "punjabi",
                            "sinhala",
                            "khmer",
                            "shona",
                            "yoruba",
                            "somali",
                            "afrikaans",
                            "occitan",
                            "georgian",
                            "belarusian",
                            "tajik",
                            "sindhi",
                            "gujarati",
                            "amharic",
                            "yiddish",
                            "lao",
                            "uzbek",
                            "faroese",
                            "haitian creole",
                            "pashto",
                            "turkmen",
                            "nynorsk",
                            "maltese",
                            "sanskrit",
                            "luxembourgish",
                            "myanmar",
                            "tibetan",
                            "tagalog",
                            "malagasy",
                            "assamese",
                            "tatar",
                            "hawaiian",
                            "lingala",
                            "hausa",
                            "bashkir",
                            "javanese",
                            "sundanese",
                            "cantonese"
                        ],
                        "type": "string",
                        "description": "Language spoken in the audio. Leave as 'Auto-detect' for automatic language detection.",
                        "default": "auto"
                    },
                    "translateToEnglish": {
                        "title": "🔄 Translate to English (PAID)",
                        "type": "boolean",
                        "description": "Translate non-English audio into English. Additional charges apply for translation service.",
                        "default": false
                    },
                    "useEuServers": {
                        "title": "🇪🇺 EU-Based Processing (PAID)",
                        "type": "boolean",
                        "description": "Process data within the EU for GDPR compliance. Higher processing rates apply for EU servers.",
                        "default": false
                    },
                    "speakerDiarization": {
                        "title": "🎤 Speaker Diarization (PAID)",
                        "type": "boolean",
                        "description": "Identify and label different speakers in the audio (e.g., SPEAKER_00, SPEAKER_01). Perfect for meetings, interviews, and multi-person conversations. Additional charges apply.",
                        "default": false
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
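The input options and run-response fields defined in the schema above can be exercised without touching the API. The sketch below builds an input payload using the schema's documented defaults (`language: "auto"`, paid add-ons off) and then extracts the fields a caller typically acts on from a run object shaped like `runsResponseSchema`. The field names and defaults come from the schema; the helper functions, the sample values, and the commented-out `apify-client` call are illustrative assumptions — note that the raw API response nests the run under a `data` key, while the official client's `.call()` conventionally returns the run object directly.

```python
# Sketch: assemble an input payload for the Actor per its input schema,
# then read the key fields out of a run object shaped like runsResponseSchema.

# Defaults taken from the input schema above ("auto" language, paid toggles off).
INPUT_DEFAULTS = {
    "language": "auto",
    "translateToEnglish": False,
    "useEuServers": False,
    "speakerDiarization": False,
}

def build_input(**overrides):
    """Merge caller overrides onto the schema defaults, rejecting unknown keys."""
    unknown = set(overrides) - set(INPUT_DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown input fields: {sorted(unknown)}")
    return {**INPUT_DEFAULTS, **overrides}

run_input = build_input(language="tagalog", speakerDiarization=True)

# With the official client, this payload would be passed as run_input, e.g.:
#   from apify_client import ApifyClient
#   client = ApifyClient("<APIFY_TOKEN>")  # assumption: token supplied by the caller
#   run = client.actor("sian.agency/transcribe-voice-memo-to-text").call(run_input=run_input)

# A minimal run object carrying fields from runsResponseSchema (sample values):
sample_run = {
    "id": "RUN_ID",
    "status": "SUCCEEDED",
    "defaultDatasetId": "DATASET_ID",
    "usageTotalUsd": 0.00005,
}

def summarize_run(run):
    """Pull out the fields most callers need once a run finishes."""
    return {
        "status": run["status"],
        "dataset_id": run.get("defaultDatasetId"),
        "cost_usd": run.get("usageTotalUsd", 0.0),
    }

summary = summarize_run(sample_run)
```

Once `summary["status"]` reads `SUCCEEDED`, the transcription results can be fetched from the dataset identified by `summary["dataset_id"]`.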
