# Bluesky User Posts Scraper (`ecomscrape/bluesky-user-posts-scraper`) Actor

Bluesky User Posts Scraper automates extraction of post data from Bluesky profiles. Efficiently collect post content, engagement metrics, embedded media, and reply threads for content analysis, brand monitoring, and social media research.

- **URL**: https://apify.com/ecomscrape/bluesky-user-posts-scraper.md
- **Developed by:** [ecomscrape](https://apify.com/ecomscrape) (community)
- **Categories:** Automation, Developer tools, Social media
- **Stats:** 1 total user, 0 monthly users, 0.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $1.50 / 1,000 results

This Actor is paid per event. You are not charged for Apify platform usage; you pay only a fixed price for specific events.
Since this Actor supports Apify Store discounts, the higher your subscription plan, the lower the price.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
The word "Actor" is written with a capital "A".

## How to integrate an Actor?

To integrate an Actor into your project, adapt the approach below to your stack so the integration is safe, well-documented, and production-ready.
The recommended ways to integrate Actors are as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

## Contact

If you encounter any issues or need support, please feel free to contact us through the following link:
[My profile](https://apify.com/ecomscrape)

## Bluesky User Posts Scraper: Extract Social Media Data for Analysis & Research

### Introduction

Bluesky is a rapidly growing decentralized social media platform that has emerged as a significant player in the social networking landscape. As an alternative to traditional platforms, Bluesky offers unique features for open discourse and content sharing, attracting journalists, influencers, brands, and active communities across diverse topics.

For businesses conducting social media monitoring, brand analysis, competitive intelligence, or academic research, accessing structured Bluesky post data is essential. However, manually collecting posts and engagement metrics from user profiles is impractical when analyzing content trends, tracking brand mentions, or studying social media behavior patterns.

The Bluesky User Posts Scraper automates extraction of comprehensive post data from user profiles, enabling systematic collection of content, engagement metrics, and conversation threads for analysis and research purposes.

### Scraper Overview

The Bluesky User Posts Scraper is a specialized tool designed to extract post-level data from Bluesky user profiles. It captures complete post content, engagement metrics, author information, embedded media, and reply threads.

Key capabilities include filtering by post type (with/without replies, media posts, video posts, author threads), configurable extraction limits, and bulk profile processing. The scraper serves social media analysts, brand managers, researchers, journalists, and marketing professionals who need structured access to Bluesky content data.

### Input Configuration

Example url 1: https://bsky.app/profile/nbcnews.com

Example url 2: https://bsky.app/profile/iuculano.bsky.social

Example url 3: https://bsky.app/profile/mirror.co.uk
    
Example screenshot of a scraped profile page:
    
![](https://i.ibb.co/v4sNFjBm/Screenshot-from-2026-04-08-17-16-38.png)

#### Input Format
```json
{
  "urls": [
    "https://bsky.app/profile/politico.com"
  ],
  "max_items_per_url": 100,
  "filter": "posts_with_replies",
  "ignore_url_failures": true,
  "max_retries_per_url": 2
}
````

**The `urls` parameter**: Add Bluesky profile URLs you want to scrape. Paste URLs individually or use bulk edit to add a prepared list. These should be direct profile links (e.g., `https://bsky.app/profile/username`).

**The `filter` parameter**: Filter posts by type:

- `"posts_with_replies"` - All posts including those with reply conversations
- `"posts_no_replies"` - Original posts without replies
- `"posts_with_media"` - Posts containing images, videos, or other media
- `"posts_and_author_threads"` - Posts and threaded content from the author
- `"posts_with_video"` - Video posts only

**The `max_items_per_url` parameter**: Limit posts extracted per profile (default: 20). Increase for comprehensive historical analysis or decrease for recent activity monitoring.

**The `ignore_url_failures` parameter**: If `true`, scraper continues even if some profiles fail after retries. Essential for batch processing multiple accounts.

**The `max_retries_per_url` parameter**: Number of retry attempts for failed requests (default: 2). Adjust based on connection stability needs.
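Taken together, these parameters form the Actor's run input. Below is a minimal sketch of building and sanity-checking the payload in Python before submitting it; the `build_input` helper and its validation rules are illustrative, not part of the Actor.

```python
# Build and sanity-check an Actor input payload before submission.
# Field names follow the input schema documented above.
VALID_FILTERS = {
    "posts_with_replies", "posts_no_replies", "posts_with_media",
    "posts_and_author_threads", "posts_with_video",
}

def build_input(urls, filter_="posts_no_replies", max_items=20,
                ignore_failures=True, max_retries=2):
    """Return a validated input dict for the scraper."""
    if filter_ not in VALID_FILTERS:
        raise ValueError(f"unknown filter: {filter_}")
    bad = [u for u in urls if not u.startswith("https://bsky.app/profile/")]
    if bad:
        raise ValueError(f"not Bluesky profile URLs: {bad}")
    return {
        "urls": list(urls),
        "filter": filter_,
        "max_items_per_url": max_items,
        "ignore_url_failures": ignore_failures,
        "max_retries_per_url": max_retries,
    }

payload = build_input(["https://bsky.app/profile/nbcnews.com"],
                      filter_="posts_with_media", max_items=100)
```

Catching a malformed URL or filter value locally is cheaper than discovering it after a paid run has started.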

#### Output Format

```json
[
  {
  "uri": "at://did:plc:wmho6q2uiyktkam3jsvrms3s/app.bsky.feed.post/3miy5b4uq6z2i",
  "cid": "bafyreief5zp5376at2rybdyhyyas2tx7qm7arrbldo4xt6o7ud5w6davzy",
  "author": {
    "did": "did:plc:wmho6q2uiyktkam3jsvrms3s",
    "handle": "nbcnews.com",
    "display_name": "NBC News",
    "avatar": "https://cdn.bsky.app/img/avatar/plain/did:plc:wmho6q2uiyktkam3jsvrms3s/bafkreifjfoaox34dlcdm4dxje7x7awyyzqcw4jiv4x4i3lrxtyx63qdzru",
    "associated": {
      "activity_subscription": {
        "allow_subscriptions": "followers"
      }
    },
    "labels": [],
    "created_at": "2023-05-31T01:09:28.017Z",
    "verification": {
      "verifications": [
        {
          "issuer": "did:plc:z72i7hdynmk6r22z27h6tvur",
          "uri": "at://did:plc:z72i7hdynmk6r22z27h6tvur/app.bsky.graph.verification/3lndpvtksqg2l",
          "is_valid": true,
          "created_at": "2025-04-21T10:47:52.030Z"
        }
      ],
      "verified_status": "valid",
      "trusted_verifier_status": "valid"
    }
  },
  "record": {
    "$type": "app.bsky.feed.post",
    "created_at": "2026-04-08T11:00:18Z",
    "embed": {
      "$type": "app.bsky.embed.external",
      "external": {
        "description": "Police said the parents had walked away from the boy and appeared to be on their phones at the time of the attack.",
        "thumb": {
          "$type": "blob",
          "ref": {
            "$link": "bafkreidp3cuuvk3gzlv2xlurftmu6p53fvko3tacw4lqusfxmsuehfgu5a"
          },
          "mime_type": "image/jpeg",
          "size": 520465
        },
        "title": "Parents of toddler hurt by wolf at Zoo America charged with child endangerment",
        "uri": "https://nbcnews.to/4bYOP67"
      }
    },
    "text": "The parents of a toddler who suffered a minor injury at a Pennsylvania theme park zoo after squeezing through a fence near a wolf enclosure and making contact with one of the animals are charged with endangering the welfare of children, police say."
  },
  "embed": {
    "external": {
      "uri": "https://nbcnews.to/4bYOP67",
      "title": "Parents of toddler hurt by wolf at Zoo America charged with child endangerment",
      "description": "Police said the parents had walked away from the boy and appeared to be on their phones at the time of the attack.",
      "thumb": "https://cdn.bsky.app/img/feed_thumbnail/plain/did:plc:wmho6q2uiyktkam3jsvrms3s/bafkreidp3cuuvk3gzlv2xlurftmu6p53fvko3tacw4lqusfxmsuehfgu5a"
    },
    "$type": "app.bsky.embed.external#view"
  },
  "bookmark_count": 0,
  "reply_count": 2,
  "repost_count": 0,
  "like_count": 9,
  "quote_count": 0,
  "indexed_at": "2026-04-08T11:00:19.048Z",
  "labels": [],
  "replies": null,
  "from_url": "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed?actor=nbcnews.com&limit=50"
}
]
```

The scraper returns structured post data with each field providing specific analytical value:

- **URI**: Unique resource identifier for the post. *Essential for tracking specific posts, building post databases, and referencing content.*

- **CID**: Content identifier hash. *Technical identifier for content verification and deduplication across scraping runs.*

- **Author**: Post author information including username and profile details. *Critical for author attribution, influence analysis, and building user networks.*

- **Record**: Complete post record including text content, timestamps, and metadata. *Core data containing the actual post content for text analysis, sentiment mining, and content categorization.*

- **Embed**: Embedded content including images, videos, links, and quoted posts. *Provides rich media context, enables visual content analysis, and tracks content sharing patterns.*

- **Bookmark Count**: Number of times post was bookmarked. *Indicates content value and user intent to revisit, useful for identifying high-value content.*

- **Reply Count**: Number of replies to the post. *Measures conversation engagement, topic resonance, and community interaction levels.*

- **Repost Count**: Number of times post was shared/reposted. *Key virality metric showing content amplification and reach extension.*

- **Like Count**: Number of likes received. *Primary engagement indicator measuring content approval and audience resonance.*

- **Quote Count**: Times the post was quoted by others. *Shows deeper engagement where users add commentary, indicating thought leadership.*

- **Indexed At**: Timestamp when post was indexed by Bluesky. *Enables temporal analysis, trending detection, and time-series studies.*

- **Labels**: Content labels and moderation tags. *Important for content classification, filtering sensitive material, and compliance monitoring.*

- **Replies**: Full reply thread data. *Provides conversation context, enables sentiment analysis of discussions, and reveals community dynamics.*

Each field supports social listening, competitive analysis, influencer identification, trend detection, and audience research across Bluesky's growing user base.
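For downstream analysis it often helps to flatten each dataset item into a single row. A sketch assuming the item shape shown in the output example above; `flatten_post` is a hypothetical helper, not part of the Actor's output.

```python
def flatten_post(item):
    """Flatten one dataset item into an analysis-ready row."""
    author = item.get("author", {})
    record = item.get("record", {})
    # Total engagement across the four count fields documented above.
    engagement = (item.get("like_count", 0) + item.get("repost_count", 0)
                  + item.get("reply_count", 0) + item.get("quote_count", 0))
    return {
        "uri": item.get("uri"),
        "handle": author.get("handle"),
        "text": record.get("text", ""),
        "created_at": record.get("created_at"),
        "likes": item.get("like_count", 0),
        "reposts": item.get("repost_count", 0),
        "engagement": engagement,
    }

sample = {"uri": "at://did:plc:x/app.bsky.feed.post/1",
          "author": {"handle": "nbcnews.com"},
          "record": {"text": "hello", "created_at": "2026-04-08T11:00:18Z"},
          "like_count": 9, "reply_count": 2, "repost_count": 0, "quote_count": 0}
row = flatten_post(sample)
```

Rows in this shape load directly into a spreadsheet or a pandas DataFrame for the analyses described below.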

### Usage Guide

#### Setting Up Profile Extraction

**Step 1: Identify Target Profiles**

Navigate to Bluesky and identify profiles to monitor:

- Brand accounts for competitive intelligence
- Influencers for partnership evaluation
- News outlets for media monitoring
- Industry leaders for trend analysis
- Community accounts for audience research

Copy profile URLs from the address bar.

**Step 2: Configure Filtering**

Select appropriate filter based on research objectives:

- `posts_with_replies` - Comprehensive activity including conversations
- `posts_no_replies` - Clean content stream without noise
- `posts_with_media` - Visual content analysis
- `posts_with_video` - Video marketing research
- `posts_and_author_threads` - Complete narrative threads

**Step 3: Set Extraction Volume**

Configure `max_items_per_url`:

- 20-50 posts for recent activity snapshots
- 100-500 posts for monthly trend analysis
- 1000+ posts for historical research

#### Best Practices

**Profile Selection:**

- Organize URLs by category (competitors, influencers, partners)
- Verify profiles are active before large extractions
- Track profile handle changes over time
- Document profile selection criteria for reproducible research

**Filter Strategy:**

- Use `posts_no_replies` for clean content analysis
- Use `posts_with_replies` for engagement and sentiment analysis
- Use `posts_with_media` for visual content strategies
- Combine filters across multiple runs for comprehensive datasets
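When combining filters across multiple runs, the `cid` field can serve as the deduplication key, since the same post may appear under more than one filter. A minimal sketch; the `dedupe_by_cid` helper is illustrative.

```python
def dedupe_by_cid(*runs):
    """Merge post lists from multiple filter runs, keeping the first copy of each CID."""
    seen, merged = set(), []
    for run in runs:
        for post in run:
            cid = post.get("cid")
            if cid and cid in seen:
                continue  # duplicate of a post already collected
            seen.add(cid)
            merged.append(post)
    return merged

run_a = [{"cid": "bafy1"}, {"cid": "bafy2"}]
run_b = [{"cid": "bafy2"}, {"cid": "bafy3"}]
merged = dedupe_by_cid(run_a, run_b)
```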

**Data Quality:**

- Verify extracted post counts match expectations
- Check timestamp ranges cover desired periods
- Validate engagement metrics for reasonableness
- Handle deleted or private posts gracefully
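Several of these checks can be automated. Below is a sketch that flags negative metrics and posts indexed before the desired window; `quality_report` is a hypothetical helper that assumes the `indexed_at` and count fields shown in the output example.

```python
from datetime import datetime, timezone

def quality_report(posts, since=None):
    """Return a list of (uri, issue) tuples for posts failing basic sanity checks."""
    issues = []
    for p in posts:
        # Engagement counts should never be negative.
        for field in ("like_count", "reply_count", "repost_count", "quote_count"):
            if p.get(field, 0) < 0:
                issues.append((p.get("uri"), f"negative {field}"))
        # Timestamps should fall inside the desired analysis window.
        ts = p.get("indexed_at")
        if ts and since:
            when = datetime.fromisoformat(ts.replace("Z", "+00:00"))
            if when < since:
                issues.append((p.get("uri"), "older than window"))
    return issues

posts = [{"uri": "at://x/1", "indexed_at": "2026-04-08T11:00:19.048Z",
          "like_count": 9, "reply_count": 2, "repost_count": 0, "quote_count": 0}]
report = quality_report(posts, since=datetime(2026, 1, 1, tzinfo=timezone.utc))
```

An empty report means the batch passed; a non-empty one tells you which URIs to re-scrape or exclude.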

**Use Case Specific Approaches:**

**Brand Monitoring:**

- Extract competitor posts with media filter
- Track engagement patterns over time
- Identify content strategies that drive engagement
- Monitor posting frequency and timing

**Influencer Research:**

- Use `posts_with_replies` to assess community engagement
- Analyze reply sentiment and conversation quality
- Track quote counts for thought leadership indicators
- Measure reach through repost metrics

**Content Strategy:**

- Compare posts with/without replies for engagement drivers
- Analyze media posts for visual content performance
- Study thread structures for narrative techniques
- Identify optimal posting patterns

**Trend Analysis:**

- Track `indexed_at` timestamps for viral content timing
- Monitor reply/repost velocity for trending detection
- Analyze label patterns for topic classification
- Study embed types for content format trends
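Reply/repost velocity can be approximated from `indexed_at` and the engagement counts. A sketch under that assumption; `engagement_velocity` is an illustrative helper, and `as_of` is the observation time you compare against.

```python
from datetime import datetime, timezone

def engagement_velocity(post, as_of):
    """Total engagement per hour since the post was indexed."""
    indexed = datetime.fromisoformat(post["indexed_at"].replace("Z", "+00:00"))
    hours = max((as_of - indexed).total_seconds() / 3600, 1e-9)
    total = (post.get("like_count", 0) + post.get("repost_count", 0)
             + post.get("reply_count", 0) + post.get("quote_count", 0))
    return total / hours

post = {"indexed_at": "2026-04-08T11:00:00Z",
        "like_count": 8, "repost_count": 2, "reply_count": 0, "quote_count": 0}
velocity = engagement_velocity(post, datetime(2026, 4, 8, 13, 0, tzinfo=timezone.utc))
```

Comparing velocities across recent posts from the same profiles is a simple way to surface content that is trending faster than the account's baseline.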

#### Common Troubleshooting

**URL Issues:**

- Verify profile URLs are correctly formatted
- Check for suspended or deleted accounts
- Ensure handles haven't changed
- Test URLs manually before bulk processing

**Filtering Problems:**

- Confirm filter selection matches data needs
- Verify filter returns expected post types
- Test different filters on sample profiles
- Check for empty results on restricted accounts

**Volume Management:**

- Adjust `max_items_per_url` for profile activity levels
- Handle rate limits with appropriate delays
- Process large profiles in batches
- Monitor extraction completion rates
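Batching can be as simple as splitting the URL list and launching one Actor run per chunk. A sketch; `batch_urls` is a hypothetical helper, and the right batch size depends on your plan's limits and the profiles' activity.

```python
def batch_urls(urls, batch_size=10):
    """Split a long URL list into batches for separate Actor runs."""
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]

# 25 hypothetical profile URLs split into runs of 10.
profiles = [f"https://bsky.app/profile/user{i}.bsky.social" for i in range(25)]
batches = batch_urls(profiles, batch_size=10)
```

Each batch then becomes the `urls` field of one run input, which keeps individual runs short and makes partial failures easier to retry.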

### Benefits and Applications

**Social Media Monitoring:** Track brand mentions, sentiment, and engagement across Bluesky. Monitor competitor activity, content strategies, and audience response patterns. Identify emerging trends and conversation topics in real-time.

**Influencer Analysis:** Evaluate potential partners through engagement metrics, content quality, and audience interaction patterns. Identify micro-influencers in niche communities. Track influencer performance over time.

**Content Research:** Analyze successful content formats, posting strategies, and engagement drivers. Study viral content patterns and conversation triggers. Benchmark content performance against competitors.

**Audience Intelligence:** Understand community dynamics through reply analysis. Identify active participants and conversation leaders. Map network connections and information flow patterns.

**Trend Detection:** Track emerging topics through hashtag and keyword analysis. Monitor conversation velocity and virality indicators. Identify early signals of trending discussions.

**Academic Research:** Study social media behavior, discourse patterns, and community formation. Analyze information diffusion and network effects. Research platform-specific communication dynamics.

**The scraper provides advantages through:**

- Flexible filtering for targeted data collection
- Complete engagement metrics for performance analysis
- Reply thread capture for conversation analysis
- Media embed extraction for visual content research
- Temporal data for time-series studies
- Scalable bulk processing for multiple profiles

Output integrates with analytics platforms, sentiment analysis tools, social listening dashboards, and research databases for immediate activation in monitoring, analysis, and strategic planning.

### Conclusion

The Bluesky User Posts Scraper transforms manual social media monitoring into efficient automated data collection. By providing structured access to Bluesky post data with flexible filtering and comprehensive metrics, it enables data-driven social media strategies and research.

Whether monitoring brand presence, analyzing influencer performance, researching content strategies, or studying social media behavior, this scraper delivers the systematic extraction capabilities needed for effective Bluesky analysis.

Start extracting Bluesky insights today and enhance your social media intelligence capabilities.

## Your feedback

We are always working to improve Actors' performance. If you have any technical feedback about Bluesky User Posts Scraper or have found a bug, please create an issue on the Actor's Issues tab in Apify Console.

# Actor input Schema

## `urls` (type: `array`):

Add the Bluesky profile URLs whose posts you want to scrape. You can paste URLs one by one, or use the Bulk edit section to add a prepared list.

## `filter` (type: `string`):

Filter items by Type

## `ignore_url_failures` (type: `boolean`):

If true, the scraper will continue running even if some URLs fail to be scraped after the maximum number of retries is reached.

## `max_items_per_url` (type: `integer`):

Limit the number of items per URL or search filters you want to scrape

## `max_retries_per_url` (type: `integer`):

Limit the number of retries for each URL or search filters if the scrape is detected as a bot or the page fails to load

## Actor input object example

```json
{
  "urls": [
    "https://bsky.app/profile/nbcnews.com"
  ],
  "ignore_url_failures": true,
  "max_items_per_url": 20,
  "max_retries_per_url": 2
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "urls": [
        "https://bsky.app/profile/nbcnews.com"
    ],
    "ignore_url_failures": true,
    "max_items_per_url": 20,
    "max_retries_per_url": 2
};

// Run the Actor and wait for it to finish
const run = await client.actor("ecomscrape/bluesky-user-posts-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "urls": ["https://bsky.app/profile/nbcnews.com"],
    "ignore_url_failures": True,
    "max_items_per_url": 20,
    "max_retries_per_url": 2,
}

# Run the Actor and wait for it to finish
run = client.actor("ecomscrape/bluesky-user-posts-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "urls": [
    "https://bsky.app/profile/nbcnews.com"
  ],
  "ignore_url_failures": true,
  "max_items_per_url": 20,
  "max_retries_per_url": 2
}' |
apify call ecomscrape/bluesky-user-posts-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=ecomscrape/bluesky-user-posts-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Bluesky User Posts Scraper",
        "description": "Bluesky User Posts Scraper automates extraction of post data from Bluesky profiles. Efficiently collect post content, engagement metrics, embedded media, and reply threads for content analysis, brand monitoring, and social media research.",
        "version": "0.0",
        "x-build-id": "aJIWHMpATWGiD7afu"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/ecomscrape~bluesky-user-posts-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-ecomscrape-bluesky-user-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/ecomscrape~bluesky-user-posts-scraper/runs": {
            "post": {
                "operationId": "runs-sync-ecomscrape-bluesky-user-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/ecomscrape~bluesky-user-posts-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-ecomscrape-bluesky-user-posts-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "urls": {
                        "title": "URLs of the User Posts list urls to scrape",
                        "type": "array",
                        "description": "Add the URLs of the User Posts list urls you want to scrape. You can paste URLs one by one, or use the Bulk edit section to add a prepared list.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "filter": {
                        "title": "Filter by Type",
                        "enum": [
                            "posts_with_replies",
                            "posts_no_replies",
                            "posts_with_media",
                            "posts_and_author_threads",
                            "posts_with_video"
                        ],
                        "type": "string",
                        "description": "Filter items by Type"
                    },
                    "ignore_url_failures": {
                        "title": "Ignore URL failures",
                        "type": "boolean",
                        "description": "If true, the scraper will continue running even if some URLs fail to be scraped after the maximum number of retries is reached."
                    },
                    "max_items_per_url": {
                        "title": "Limit the number of items per URL or search filters",
                        "type": "integer",
                        "description": "Limit the number of items per URL or search filters you want to scrape"
                    },
                    "max_retries_per_url": {
                        "title": "Limit the number of retries",
                        "type": "integer",
                        "description": "Limit the number of retries for each URL or search filters if the scrape is detected as a bot or the page fails to load"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
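Per the schema above, `usageTotalUsd` is the sum of the per-event dollar amounts in `usageUsd`. A minimal sketch of cross-checking a run's billing fields, assuming a run object shaped like the response schema (the `sumUsageUsd` helper is hypothetical, not part of the Apify client):

```javascript
// Hypothetical helper: sum the per-event USD charges from a run's
// `usageUsd` object so it can be compared with `usageTotalUsd`.
function sumUsageUsd(run) {
  return Object.values(run.usageUsd ?? {}).reduce((total, usd) => total + usd, 0);
}

// Example run fragment matching the schema above.
const run = {
  usageTotalUsd: 0.00005,
  usageUsd: {
    ACTOR_COMPUTE_UNITS: 0,
    DATASET_WRITES: 0,
    KEY_VALUE_STORE_WRITES: 0.00005,
  },
};

// Compare with a small tolerance, since the values are floating-point.
const total = sumUsageUsd(run);
console.log(Math.abs(total - run.usageTotalUsd) < 1e-9); // true
```

Using a tolerance rather than `===` avoids false mismatches from floating-point rounding when many small event charges are summed.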
