# Reddit Post Media Downloader and Metadata Scraper⚡ (`apiharvest/reddit-post-media-downloader-and-metadata-scraper`) Actor

📥 Extract full metadata + media download URLs from any Reddit post — videos, images, galleries, external embeds, promoted ads & more. 🎬 Video MP4 with separate audio streams, 🖼️ high-res images, 📸 full carousel/gallery sets, 📢 promoted ad data, complete post details and comments

- **URL**: https://apify.com/apiharvest/reddit-post-media-downloader-and-metadata-scraper.md
- **Developed by:** [APIHarvest](https://apify.com/apiharvest) (community)
- **Categories:** Automation, Videos, Social media
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $2.50 / 1,000 posts scraped

This Actor is paid per event and usage: you are charged a fixed price for specific events plus Apify platform usage.
Since this Actor supports Apify Store discounts, the price decreases on higher subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Reddit Post Media Downloader and Metadata Scraper

The **Reddit Post Media Downloader and Metadata Scraper** is the most comprehensive Apify Actor for extracting complete metadata, media download links, and optional comments from any Reddit post. Whether you need to scrape Reddit post data, download Reddit videos, download Reddit images, or extract Reddit gallery content — the **Reddit Post Media Downloader and Metadata Scraper** handles every post type automatically.

---

### What Is the Reddit Post Media Downloader and Metadata Scraper?

The **Reddit Post Media Downloader and Metadata Scraper** is a production-grade Apify Actor that takes one or more individual Reddit post URLs and returns:

- **Complete post metadata** — title, author, score, comments count, flair, timestamps, and 60+ data fields
- **Direct media download links** — video (MP4), audio (separate DASH stream), images (full resolution + all sizes), gallery items
- **Optional post comments** — toggle ON/OFF via a checkbox in the Actor input
- **External embed data** — YouTube, Vimeo, and other embedded media with provider info, thumbnails, and embed HTML
- **Post type auto-classification** — every post is tagged as `video`, `image`, `gallery`, `text`, `link`, `rich_video`, or `ad`
- **Promoted post metadata** — call-to-action, destination URL, ad-specific fields

The **Reddit Post Media Downloader and Metadata Scraper** processes each URL through multiple extraction channels in parallel, ensuring maximum reliability and data completeness.

---

### Key Features of the Reddit Post Media Downloader and Metadata Scraper

- **All post types supported** — video, image, gallery/carousel, text, external link, rich video, promoted/ad posts
- **Individual post URLs only** — paste any Reddit post URL (subreddit or profile) and get structured data back
- **Comments toggle** — include or exclude comments from output with a single checkbox
- **Video download data** — fallback MP4, DASH manifest, HLS manifest, separate audio stream, all quality formats
- **Gallery extraction** — every image/video/GIF in a carousel extracted as a separate media item
- **External embed support** — YouTube, Vimeo, and other oembed providers with full metadata
- **Residential US proxy** — automatic proxy rotation for reliable access
- **No subreddit/profile scraping** — this actor processes individual post URLs only, not entire subreddits or profiles

---

### What This Actor Does NOT Do

The **Reddit Post Media Downloader and Metadata Scraper** does **not**:

- Scrape entire subreddits (e.g., all posts from r/python)
- Scrape entire user profiles (e.g., all posts from u/username)
- Search Reddit by keyword
- Download the actual media files — it provides direct download URLs you can use

It only processes **individual post URLs** that you provide.

---

### Supported Post Types

The **Reddit Post Media Downloader and Metadata Scraper** automatically detects and handles:

| Post Type | What You Get |
|---|---|
| **Video Posts** | Direct MP4 download URL, separate audio stream, DASH/HLS manifests, all quality formats, dimensions, duration |
| **Image Posts** | Full-resolution image URL, all resolution variants, dimensions |
| **Gallery/Carousel Posts** | Individual download link for every image/video/GIF, per-item dimensions, captions, media type |
| **Text Posts** | Full selftext (plain + HTML), complete post metadata |
| **External Link Posts** | External URL, oembed data (YouTube/Vimeo), embed HTML, thumbnail, provider info |
| **Rich Video Posts** | External video embed URL, provider name, iframe HTML, oembed metadata |
| **Promoted/Ad Posts** | Call-to-action text, destination URL, outbound link data, ad classification |
| **Mixed Carousels** | Each gallery item separately with correct media type (image/gif/video) |
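
The table above maps directly onto the `post_type` field in the output. As a purely illustrative sketch (field names follow the output tables in this README; this is not part of the Actor itself), a consumer might dispatch on the detected type like this:

```python
def summarize(post):
    """Return a one-line summary based on the auto-detected post_type."""
    t = post["post_type"]
    if t == "video":
        v = post.get("video_info") or {}
        return f"video {v.get('width')}x{v.get('height')}, {v.get('duration')}s"
    if t == "gallery":
        return f"gallery with {len(post.get('gallery_items') or [])} items"
    if t in ("link", "rich_video"):
        o = post.get("oembed") or {}
        return f"{t} via {o.get('provider_name', 'unknown provider')}"
    if t == "ad":
        return f"ad -> {post.get('href_url')}"
    return t  # "text", "image", or anything else

print(summarize({"post_type": "gallery", "gallery_items": [{}, {}, {}]}))  # -> gallery with 3 items
```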

---

### How to Use the Reddit Post Media Downloader and Metadata Scraper

#### Input

Provide one or more Reddit post URLs:

```
https://www.reddit.com/r/subreddit/comments/postid/title/
https://www.reddit.com/user/username/comments/postid/title/
https://v.redd.it/videoid
```

#### Input Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `post_urls` | Array | *required* | One or more Reddit post URLs to process |
| `include_comments` | Boolean | `false` | Toggle ON to include comments in output, OFF to exclude |
| `max_results` | Integer | `0` | Maximum items per bucket (posts, media, comments). `0` = unlimited |

#### Example Input

```json
{
  "post_urls": [
    { "url": "https://www.reddit.com/r/Damnthatsinteresting/comments/1sa33fi/" },
    { "url": "https://www.reddit.com/r/Wellthatsucks/comments/1s9q97r/" },
    { "url": "https://www.reddit.com/r/AITAH/comments/1s9vzep/" },
    { "url": "https://www.reddit.com/user/DavidFromNeo/comments/1rj3crp/" }
  ],
  "include_comments": false,
  "max_results": 0
}
```

---

### Output Structure

The **Reddit Post Media Downloader and Metadata Scraper** produces records tagged with `_item_type` for easy filtering.

#### Post Record (`_item_type: "post"`)

Every URL produces a post record with 60+ metadata fields:

| Field | Type | Description |
|---|---|---|
| `post_id` | string | Reddit post ID |
| `title` | string | Post title |
| `author` | string | Username |
| `subreddit` | string | Subreddit name |
| `post_type` | string | Auto-detected: `video`, `image`, `gallery`, `text`, `link`, `rich_video`, `ad` |
| `url` | string | Target URL |
| `permalink` | string | Full Reddit permalink |
| `selftext` | string | Post body text |
| `score` | integer | Net upvotes |
| `upvote_ratio` | float | Upvote ratio (0.0–1.0) |
| `num_comments` | integer | Comment count |
| `created_at` | string | ISO-8601 creation time |
| `domain` | string | Content domain |
| `media_url` | string | Media destination URL |
| `is_video` | boolean | Video post flag |
| `is_gallery` | boolean | Gallery post flag |
| `over_18` | boolean | NSFW flag |
| `flair` | string | Post flair text |
| `is_promoted` | boolean | Promoted post flag |
| `is_ad` | boolean | Ad post flag |
| `video_info` | object | Video: `fallback_url`, `dash_url`, `hls_url`, `width`, `height`, `duration`, `has_audio` |
| `preview_images` | array | All resolution preview images |
| `gallery_items` | array | Gallery items: `url`, `width`, `height`, `caption`, `media_type` |
| `oembed` | object | External embed: `provider_name`, `title`, `html`, `thumbnail_url` |
| `call_to_action` | string | Ad CTA text |
| `href_url` | string | Ad destination URL |

#### Media Record (`_item_type: "media"`)

For every extractable media element, a separate media record:

| Field | Type | Description |
|---|---|---|
| `download_url` | string | **Best quality direct download URL** |
| `audio_url` | string | Separate audio stream (Reddit videos) |
| `media_type` | string | `video`, `image`, `gif`, `gallery`, `rich_video` |
| `source_url` | string | Original source URL |
| `width` | integer | Width in pixels |
| `height` | integer | Height in pixels |
| `duration` | float | Duration in seconds (video) |
| `ext` | string | File extension |
| `thumbnail_url` | string | Thumbnail URL |
| `title` | string | Media title |
| `uploader` | string | Post author |
| `subreddit` | string | Subreddit name |
| `reddit_post_url` | string | Source Reddit post |
| `embed_url` | string | External embed URL |
| `provider_name` | string | External provider (YouTube, Vimeo) |
| `is_reddit_hosted` | boolean | True = Reddit-hosted, False = external |
| `all_formats` | array | All available quality options |

#### Comment Record (`_item_type: "comment"`) — Only When Toggle Is ON

| Field | Type | Description |
|---|---|---|
| `comment_id` | string | Comment ID |
| `author` | string | Comment author |
| `body` | string | Comment text |
| `score` | integer | Comment score |
| `created_at` | string | ISO-8601 time |
| `depth` | integer | Nesting depth |
| `is_submitter` | boolean | True if comment by post author |
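
Because every record carries `_item_type`, splitting a downloaded dataset into the three buckets is a simple filter. A minimal sketch (the sample items only mimic the record shapes documented above):

```python
# Sample dataset items shaped like the records documented above.
items = [
    {"_item_type": "post", "post_id": "abc", "post_type": "video"},
    {"_item_type": "media", "media_type": "video", "download_url": "https://v.redd.it/abc/CMAF_720.mp4"},
    {"_item_type": "media", "media_type": "image", "download_url": "https://i.redd.it/def.jpg"},
    {"_item_type": "comment", "comment_id": "k1", "body": "Nice!"},
]

posts = [i for i in items if i["_item_type"] == "post"]
media = [i for i in items if i["_item_type"] == "media"]
comments = [i for i in items if i["_item_type"] == "comment"]

# Collect every direct download URL from the media records.
download_urls = [m["download_url"] for m in media]
print(download_urls)
```

The same filter works unchanged on items fetched via the API clients shown in the [API](#api) section.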

---

### How the Comments Toggle Works

The **Reddit Post Media Downloader and Metadata Scraper** includes a **💬 Include Comments** checkbox:

- **OFF (default)** — No comments in output. Only posts and media records are returned.
- **ON** — Comments are included as separate records with `_item_type: "comment"`.

This lets you control output size. A popular post can have thousands of comments — toggle OFF to keep results focused on media data.

---

### How `max_results` Works

The `max_results` filter limits the number of items returned **per bucket** (posts, media, comments). It does NOT specifically control comment count — it applies equally to all output types.

- `max_results: 0` — Return everything (default)
- `max_results: 10` — Return at most 10 posts, 10 media items, and 10 comments

If you don't need comments at all, use the `include_comments` toggle instead.
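
The per-bucket behavior can be pictured with a short sketch (illustrative only, not the Actor's actual implementation):

```python
def apply_max_results(items, max_results):
    """Keep at most max_results items per _item_type bucket; 0 means unlimited."""
    if max_results == 0:
        return list(items)
    counts = {}
    kept = []
    for item in items:
        bucket = item.get("_item_type")
        if counts.get(bucket, 0) < max_results:
            kept.append(item)
            counts[bucket] = counts.get(bucket, 0) + 1
    return kept

sample = [{"_item_type": "comment"}] * 5 + [{"_item_type": "media"}] * 3
print(len(apply_max_results(sample, 2)))  # -> 4 (2 comments + 2 media)
```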

---

### Video Download Example

```json
{
  "_item_type": "media",
  "media_type": "video",
  "download_url": "https://v.redd.it/93jz90kreosg1/CMAF_720.mp4?source=fallback",
  "audio_url": "https://v.redd.it/93jz90kreosg1/DASH_audio.mp4",
  "width": 1280,
  "height": 720,
  "duration": 123.0,
  "all_formats": [
    { "label": "fallback", "url": "https://v.redd.it/.../CMAF_720.mp4?source=fallback", "type": "mp4/video" },
    { "label": "dash", "url": "https://v.redd.it/.../DASHPlaylist.mpd", "type": "dash" },
    { "label": "hls", "url": "https://v.redd.it/.../HLSPlaylist.m3u8", "type": "hls" },
    { "label": "audio", "url": "https://v.redd.it/.../DASH_audio.mp4", "type": "mp4/audio" }
  ]
}
```
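
Because Reddit serves video and audio as separate streams, you typically download both URLs and mux them yourself, for example with ffmpeg's stream copy (`-c copy`, no re-encoding). A hedged sketch that only builds the command (it assumes ffmpeg is installed when you actually run it):

```python
def build_mux_command(media, out_path="output.mp4"):
    """Build an ffmpeg command that muxes the video stream with its separate audio stream."""
    cmd = ["ffmpeg", "-i", media["download_url"]]
    if media.get("audio_url"):
        cmd += ["-i", media["audio_url"], "-c", "copy"]
    cmd.append(out_path)
    return cmd

media = {
    "download_url": "https://v.redd.it/93jz90kreosg1/CMAF_720.mp4?source=fallback",
    "audio_url": "https://v.redd.it/93jz90kreosg1/DASH_audio.mp4",
}
print(" ".join(build_mux_command(media)))
```

Run the resulting command with `subprocess.run(cmd, check=True)`; if a media record has no `audio_url`, the video carries no separate audio track and can be saved as-is.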

---

### Gallery/Carousel Example

For a post with 5 images, the **Reddit Post Media Downloader and Metadata Scraper** returns 5 separate media records — one per gallery item:

```json
{
  "_item_type": "media",
  "media_type": "image",
  "download_url": "https://i.redd.it/abc123.jpg",
  "width": 1920,
  "height": 1080,
  "ext": "jpeg",
  "title": "Gallery item caption"
}
```

---

### 🖼️ Gallery / Carousel: How It Works

For gallery and carousel posts, the **Reddit Post Media Downloader** creates a **separate media record for every item** in the carousel. This includes:

- Multi-image carousels — each image gets its own download URL
- Mixed media carousels — images and videos in the same post are extracted with the correct media type
- Captions and outbound links per item

#### Gallery Output Example

```json
[
  {
    "media_id": "abc123",
    "title": "Gallery image 1",
    "media_type": "image",
    "download_url": "https://preview.redd.it/abc123.jpeg?...",
    "width": 4284,
    "height": 5712
  },
  {
    "media_id": "def456",
    "title": "Gallery video",
    "media_type": "video",
    "download_url": "https://preview.redd.it/def456.mp4?...",
    "width": 1920,
    "height": 1080
  }
]
```
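
When saving a multi-item gallery, each record needs a distinct local filename. One illustrative way to derive ordered names from the download URLs (the records above are the assumed input shape; this helper is not part of the Actor):

```python
import os
from urllib.parse import urlparse

def local_filename(item, index):
    """Derive an ordered local filename for a gallery item from its download URL."""
    name = os.path.basename(urlparse(item["download_url"]).path)
    return f"{index:02d}_{name}" if name else f"{index:02d}_item"

gallery = [
    {"download_url": "https://preview.redd.it/abc123.jpeg?width=4284"},
    {"download_url": "https://preview.redd.it/def456.mp4?source=fallback"},
]
print([local_filename(g, i) for i, g in enumerate(gallery, 1)])  # -> ['01_abc123.jpeg', '02_def456.mp4']
```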

---

### 🔗 External Link Posts: How It Works

When the **Reddit Post Media Downloader** finds a post linking to YouTube, Vimeo, or other external sites, it extracts the full oembed metadata including:

- The external video URL
- Embed iframe HTML (for embedding on your own site)
- Provider info (name, author, channel URL)
- Thumbnail image URL and dimensions

---

### ⚡ Performance & Reliability

The **Reddit Post Media Downloader** uses multiple parallel extraction channels running simultaneously to ensure:

- **Maximum data completeness** — if one channel is throttled, others compensate
- **Fast extraction** — parallel processing means results in seconds, not minutes
- **Automatic retry** — built-in retry logic with fresh request signatures
- **Anti-detection** — randomized request patterns to avoid blocks
- **Proxy support** — US residential proxy enabled automatically on Apify

---

### 📋 Output Metadata Fields

Every record pushed to the dataset includes these system fields:

| Field | Type | Description |
|---|---|---|
| `_item_type` | string | Record type: `"post"`, `"media"`, `"comment"`, `"trophy"`, `"subreddit_rule"` |
| `_scraper_type` | string | The extraction mode used |
| `_reddit_input` | string | The original input URL |
| `_scraped_at` | string | ISO-8601 timestamp of when the data was collected |

---

### 💡 Use Cases for Reddit Post Media Downloader

- **Content archival** — Save complete post data and media before deletion
- **Media collection** — Download Reddit videos, images, galleries in bulk
- **Research & analytics** — Collect structured post metadata for analysis
- **Social monitoring** — Track post performance (score, comments, awards)
- **Content curation** — Filter and collect media by subreddit, type, or engagement
- **Ad intelligence** — Extract promoted post details and targeting data

---

### 🔒 Privacy & Compliance

The **Reddit Post Media Downloader** only accesses publicly available data through Reddit's public endpoints. No authentication or login credentials are required. Please ensure your use of this tool complies with Reddit's Terms of Service and applicable data privacy regulations.

---

### 📄 License

This project is provided as-is. Use responsibly and in accordance with applicable terms of service.

# Actor input Schema

## `post_urls` (type: `array`):

One or more Reddit post URLs to extract metadata and media from.

Supported URL formats:

- Subreddit posts: `https://www.reddit.com/r/subreddit/comments/postid/`
- Profile posts: `https://www.reddit.com/user/username/comments/postid/`
- Direct videos: `https://v.redd.it/videoid`

Works with both subreddit (r/) and user profile (u/) posts. You can add multiple URLs — each is processed individually and results are merged into one dataset.

## `include_comments` (type: `boolean`):

Toggle ON to include post comments in the output.
Toggle OFF to exclude comments entirely.

Default: OFF (comments not included).

## `max_results` (type: `integer`):

Maximum number of items to return per bucket (posts, media, comments).

0 = no limit — return all collected results (default).

## Actor input object example

```json
{
  "post_urls": [
    {
      "url": "https://www.reddit.com/r/interesting/comments/1sau1xh/first_time_he_ever_saw_a_female/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button"
    },
    {
      "url": "https://www.reddit.com/r/interestingasfuck/comments/1sb0wun/"
    }
  ],
  "include_comments": false,
  "max_results": 0
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "post_urls": [
        {
            "url": "https://www.reddit.com/r/interesting/comments/1sau1xh/first_time_he_ever_saw_a_female/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button"
        },
        {
            "url": "https://www.reddit.com/r/interestingasfuck/comments/1sb0wun/"
        }
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("apiharvest/reddit-post-media-downloader-and-metadata-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "post_urls": [
        { "url": "https://www.reddit.com/r/interesting/comments/1sau1xh/first_time_he_ever_saw_a_female/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button" },
        { "url": "https://www.reddit.com/r/interestingasfuck/comments/1sb0wun/" },
    ] }

# Run the Actor and wait for it to finish
run = client.actor("apiharvest/reddit-post-media-downloader-and-metadata-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "post_urls": [
    {
      "url": "https://www.reddit.com/r/interesting/comments/1sau1xh/first_time_he_ever_saw_a_female/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button"
    },
    {
      "url": "https://www.reddit.com/r/interestingasfuck/comments/1sb0wun/"
    }
  ]
}' |
apify call apiharvest/reddit-post-media-downloader-and-metadata-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=apiharvest/reddit-post-media-downloader-and-metadata-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Reddit Post Media Downloader and Metadata Scraper⚡",
        "description": "📥 Extract full metadata + media download URLs from any Reddit post — videos, images, galleries, external embeds, promoted ads & more. 🎬 Video MP4 with separate audio streams, 🖼️ high-res images, 📸 full carousel/gallery sets, 📢 promoted ad data, complete post details and comments",
        "version": "0.0",
        "x-build-id": "Qxpd1bPViP5CfSqzq"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/apiharvest~reddit-post-media-downloader-and-metadata-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-apiharvest-reddit-post-media-downloader-and-metadata-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/apiharvest~reddit-post-media-downloader-and-metadata-scraper/runs": {
            "post": {
                "operationId": "runs-sync-apiharvest-reddit-post-media-downloader-and-metadata-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/apiharvest~reddit-post-media-downloader-and-metadata-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-apiharvest-reddit-post-media-downloader-and-metadata-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "post_urls"
                ],
                "properties": {
                    "post_urls": {
                        "title": "📝 Post URLs",
                        "type": "array",
                        "description": "One or more Reddit post URLs to extract metadata and media from.\n\nSupported URL formats:\n  Subreddit posts:  https://www.reddit.com/r/subreddit/comments/postid/\n  Profile posts:    https://www.reddit.com/user/username/comments/postid/\n  Direct videos:    https://v.redd.it/videoid\n\nWorks with both subreddit (r/) and user profile (u/) posts. You can add multiple URLs — each is processed individually and results are merged into one dataset.",
                        "items": {
                            "type": "object",
                            "properties": {
                                "url": {
                                    "title": "URL",
                                    "type": "string",
                                    "description": "Reddit post URL (subreddit or user profile)"
                                }
                            },
                            "required": [
                                "url"
                            ]
                        }
                    },
                    "include_comments": {
                        "title": "💬 Include Comments",
                        "type": "boolean",
                        "description": "Toggle ON to include post comments in the output.\nToggle OFF to exclude comments entirely.\n\nDefault: OFF (comments not included).",
                        "default": false
                    },
                    "max_results": {
                        "title": "🔢 Max Results",
                        "minimum": 0,
                        "type": "integer",
                        "description": "Maximum number of items to return per bucket (posts, media, comments).\n\n0 = no limit — return all collected results (default).",
                        "default": 0
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
