# Goodreads Book Scraper (`scraperforge/goodreads-book-scraper`) Actor

- **URL**: https://apify.com/scraperforge/goodreads-book-scraper
- **Developed by:** [ScraperForge](https://apify.com/scraperforge) (community)
- **Categories:** Automation, Lead generation, Social media
- **Stats:** 2 total users, 1 monthly user, 100% of runs succeeded
- **User rating**: No ratings yet

## Pricing

from $3.99 / 1,000 results

This Actor is paid per event plus platform usage: you are charged a fixed price for specific events as well as for standard Apify platform usage.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools that run on the Apify platform and cover all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action that can take anywhere from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in a key-value store.
In Standby mode, an Actor provides a web server that can be used as a website, an API, or an MCP server.
Note that "Actor" is always written with a capital "A".

## How to integrate an Actor?

This section helps developers integrate Actors into their projects.
Adapt it to your stack to deliver integrations that are safe, well-documented, and production-ready.
The recommended ways to integrate Actors are as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows (PowerShell)
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

### What is Goodreads Book Scraper?

A Goodreads Book Scraper is a tool that automatically extracts book data from Goodreads, the world’s largest reading platform. Instead of manually searching for titles, authors, and reviews, the scraper collects this information in bulk and organizes it for easy use. It’s often seen as an alternative to the limited Goodreads API, making it valuable for researchers, publishers, or developers. Variations include open-source options like goodreads scraper, goodreads book scraper github, or even browser-based goodreads book scraper online. Compared to hours of manual research, automated scraping saves time, reduces errors, and delivers structured datasets in minutes.

### What Goodreads Data Can You Extract?

A Goodreads Book Scraper lets you collect structured data that would otherwise take hours to gather manually. Below is a table showing the key fields you can extract when running a scraper:

| 📚 Data Point | 📌 Description |
| --- | --- |
| Book Title | Full title of the book, including series info if available |
| Author | Author’s name with a link to their Goodreads profile |
| Book URL | Direct Goodreads link to the book detail page |
| Cover Image | URL of the book’s cover image |
| ISBN | Standard identifier for the book edition |
| Publication Year | Year the book was first published or republished |
| Average Rating | Goodreads average rating (0–5 scale) |
| Ratings Count | Number of users who rated the book |
| Reviews | Short or detailed user reviews, depending on settings |
| Editions | Number of editions listed on Goodreads |
| Shelves & Genres | Genres and community shelves tagging the book |
| Author Bio & Links | Author description, external links, and official websites |

  

#### Key Features of Goodreads Book Scraper

##### 1\. Batch Scraping

*   Extract data from multiple books, lists, or shelves in one run.
*   Scale up from a few titles to thousands of entries seamlessly.

##### 2\. Goodreads API Alternative

*   Works as a replacement for the limited Goodreads API.
*   Collects richer details like reviews, editions, and genres without API restrictions.

##### 3\. Multiple Export Formats

*   Export results in CSV, Excel, or JSON.
*   Easily import into CRMs, research tools, or analytics dashboards.

##### 4\. Developer-Friendly Integration

*   Compatible with python goodreads libraries and GitHub projects.
*   Fits into automation workflows with tools like Apify or Make.com.

##### 5\. Proxy Support for Reliability

*   Built-in proxy configuration prevents IP blocks.
*   Ensures smooth and consistent scraping, even at large scale.

### How to Use Goodreads Book Scraper

Using a Goodreads Book Scraper is simple, even if you’re not highly technical. In just a few steps, you can collect structured book data without relying on the limited Goodreads API. Here’s how to get started:

**Step-by-Step Guide**

#### 1\. Log in to Apify

 Create a free account or sign in to access the scraper.

#### 2\. Select the Actor

Search for Goodreads Book Scraper in the Apify store. Some users also try goodreads book scraper online tools or goodreads book scraper free download from GitHub, but hosted versions are easier to run.

#### 3\. Add Input Data

Enter your keywords, book URLs, lists, shelves, or author pages into the input field. You can scrape a single title or batch process thousands.

#### 4\. Configure Options

Decide whether you want reviews included, cap the number of results per search with `resultsPerQuery`, and choose proxy options for reliable scraping.
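As a sketch, a configured input under these options might look like the following (field names come from the Actor's input schema; the search URL and values are illustrative):

```json
{
  "urls": ["https://www.goodreads.com/search?q=mystery+novels"],
  "resultsPerQuery": 100,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```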

#### 5\. Run the Actor

Click start, and the scraper will begin extracting data like title, author, ratings, and editions automatically.

#### 6\. Export Results

Download your dataset in JSON, CSV, Excel, or JSONL, ready for analytics, catalog building, or app integration.
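If you prefer to post-process a JSON download locally rather than use the built-in CSV export, a minimal sketch with the Python standard library looks like this (the rows are illustrative; the field names mirror the scraper's output):

```python
import csv
import io

# Illustrative rows shaped like the scraper's output fields
items = [
    {"title": "Programming Python", "author": "Mark Lutz", "rating": "4.00"},
    {"title": "Core Python Programming", "author": "R. Nageswara Rao", "rating": "4.20"},
]

# Write the rows to an in-memory CSV; swap io.StringIO for open("books.csv", "w")
# to produce a file on disk.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "author", "rating"])
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```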

#### Input

```json
{
    "urls": [
        "python programming"
    ]
}
```

#### Output

```json
[
  {
    "title": "Automate the Boring Stuff with Python: Practical Programming for Total Beginners",
    "author": "Al Sweigart",
    "rating": "4.28",
    "ratingsCount": "3,105",
    "published": "2014",
    "editions": "21",
    "url": "https://www.goodreads.com/book/show/22514127-automate-the-boring-stuff-with-python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=1",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1418768948i/22514127._SX50_.jpg"
  },
  {
    "title": "PYTHON: PROGRAMMING: A BEGINNER’S GUIDE TO LEARN PYTHON IN 7 DAYS",
    "author": "Ramsey Hamilton",
    "rating": "4.01",
    "ratingsCount": "672",
    "published": "",
    "editions": "1 edition",
    "url": "https://www.goodreads.com/book/show/30526086-python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=2",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1465726794i/30526086._SY75_.jpg"
  },
  {
    "title": "Black Hat Python: Python Programming for Hackers and Pentesters",
    "author": "Justin Seitz",
    "rating": "4.11",
    "ratingsCount": "602",
    "published": "2014",
    "editions": "23",
    "url": "https://www.goodreads.com/book/show/22299369-black-hat-python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=3",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1418765234i/22299369._SX50_.jpg"
  },
  {
    "title": "Core Python Programming",
    "author": "R. Nageswara Rao",
    "rating": "4.20",
    "ratingsCount": "307",
    "published": "",
    "editions": "5",
    "url": "https://www.goodreads.com/book/show/35844353-core-python-programming?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=4",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1501256614i/35844353._SX50_.jpg"
  },
  {
    "title": "Python: 3 Manuscripts in 1 book: - Python Programming For Beginners - Python Programming For Intermediates - Python Programming for Advanced (Your place to learn Python with ease Book 4)",
    "author": "Maurice J. Thompson",
    "rating": "3.94",
    "ratingsCount": "222",
    "published": "",
    "editions": "3",
    "url": "https://www.goodreads.com/book/show/39985055-python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=5",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1524932726i/39985055._SX50_.jpg"
  },
  {
    "title": "Programming Python",
    "author": "Mark Lutz",
    "rating": "really liked it4.00",
    "ratingsCount": "1,078",
    "published": "1996",
    "editions": "41",
    "url": "https://www.goodreads.com/book/show/80436.Programming_Python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=6",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1388622021i/80436._SX50_.jpg"
  },
  {
    "title": "Python Programming: Using Problem Solving Approach",
    "author": "Reema Thareja",
    "rating": "4.06",
    "ratingsCount": "157",
    "published": "",
    "editions": "2",
    "url": "https://www.goodreads.com/book/show/40672518-python-programming?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=7",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1530323493i/40672518._SX50_.jpg"
  },
  {
    "title": "Python Programming for Beginners: An Introduction to the Python Computer Language and Computer Programming (Python, Python 3, Python Tutorial)",
    "author": "Jason Cannon",
    "rating": "3.92",
    "ratingsCount": "279",
    "published": "2014",
    "editions": "6",
    "url": "https://www.goodreads.com/book/show/23265431-python-programming-for-beginners?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=8",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1432663461i/23265431._SY75_.jpg"
  },
  {
    "title": "Python: Programming: Your Step By Step Guide To Easily Learn Python in 7 Days (Python for Beginners, Python Programming for Beginners, Learn Python, Python Language) (Programming Languages Book 6)",
    "author": "iCode Academy",
    "rating": "3.77",
    "ratingsCount": "206",
    "published": "",
    "editions": "2",
    "url": "https://www.goodreads.com/book/show/33814161-python?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=9",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1483818988i/33814161._SY75_.jpg"
  },
  {
    "title": "Python Programming: An Introduction to Computer Science",
    "author": "John Zelle",
    "rating": "4.01",
    "ratingsCount": "480",
    "published": "2003",
    "editions": "21",
    "url": "https://www.goodreads.com/book/show/80440.Python_Programming?from_search=true&from_srp=true&qid=JFZ1bQjFIx&rank=10",
    "coverUrl": "https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1386921436i/80440._SX50_.jpg"
  }
]
```
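Note that numeric fields in the output arrive as strings (e.g. `"4.28"`, `"3,105"`), and the rating occasionally carries stray label text such as `"really liked it4.00"`. A small, hypothetical post-processing helper to normalize these for analysis:

```python
def parse_item(item: dict) -> dict:
    """Convert the scraper's string fields into numeric types."""
    rating = item.get("rating", "")
    # Keep only digits and the decimal point, dropping stray label text
    digits = "".join(ch for ch in rating if ch.isdigit() or ch == ".")
    # Ratings counts use thousands separators, e.g. "3,105"
    count = item.get("ratingsCount", "").replace(",", "")
    return {
        "title": item["title"],
        "rating": float(digits) if digits else None,
        "ratingsCount": int(count) if count.isdigit() else 0,
    }

sample = {"title": "Programming Python", "rating": "really liked it4.00", "ratingsCount": "1,078"}
print(parse_item(sample))
```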

### 🎯 Use Cases of Goodreads Book Scraper

A Goodreads Book Scraper isn’t just for casual data collection — it has real, practical applications across publishing, research, and content industries. Here are some of the most valuable use cases:

#### 1. Market Research for Publishers & Authors

Publishers and independent authors can track ratings, reviews, and reader sentiment across genres. This helps identify trends, competitor performance, and gaps in the market.

#### 2. Building a Goodreads Dataset for Analytics

Researchers and data analysts use scrapers to create Goodreads datasets that power dashboards, trend reports, or academic studies. By collecting structured data in bulk, you can analyze thousands of books at once.

#### 3. Content Curation & Recommendation Engines

Scraping titles, genres, and reviews makes it possible to build book catalogs or AI-powered recommendation systems. Libraries, apps, and book blogs benefit from automated, updated lists.

#### 4. Social Media & Book Blogs Automation

A Goodreads integration can fuel content workflows — automatically pulling top-rated books, trending reviews, or author bios to share on Twitter, Instagram, or blogs.

#### 5. CRM & Business Applications

Businesses can connect scraped book data directly into CRMs, apps, or e-commerce platforms. For example, integrating book details into a customer loyalty program or a reading-focused mobile app.

### How Many Results Can You Scrape with Goodreads Book Scraper?

The Goodreads Book Scraper is built to handle everything from a single title to massive bulk collections. If you only need details from one book, you can run the scraper with a single Goodreads URL. On the other end of the spectrum, you can process 10,000+ entries in one project, making it highly scalable.

#### Scale of Results

- Small Runs: Perfect for quick lookups — scrape 1 to 10 books with minimal setup.

- Medium Runs: Collect 500 to 1,000 books, ideal for academic projects, book clubs, or niche catalog building.

- Large Runs: With the right proxy configuration, you can scrape thousands of books, lists, or shelves without interruptions.

#### Proxy Setup & Limits

Your scraping scale depends heavily on proxy support and the plan you’re running. Basic setups may face rate limits, while premium proxy setups can handle thousands of requests reliably.

#### Example Case

A single run can extract data from 1,000 Goodreads books within minutes, generating a clean dataset including titles, authors, ratings, editions, and genres.

In short, whether you’re building a personal list or a full Goodreads dataset, the scraper adapts to your project’s size.

### How Much Will Scraping Goodreads Cost You?

The cost of using a Goodreads Book Scraper depends on whether you choose a free, DIY approach or a paid, fully managed service.

#### Free Options

If you’re comfortable with coding, there are open-source solutions like github goodreads repositories or python goodreads scripts. These options are free to use but require technical skills, proxy setup, and regular maintenance to keep up with site changes. They’re great for developers who want flexibility, but they can be time-consuming and prone to break when Goodreads updates its structure.

#### Paid SaaS Options

On the other side, SaaS platforms like Apify's Goodreads Scraper provide a plug-and-play solution. Pricing starts from $3.99 per 1,000 results, depending on your usage. Paid scrapers are faster, maintained by professionals, and include support, proxy handling, and export features (CSV, Excel, JSON). They save hours of troubleshooting and make large-scale scraping more reliable.

#### Comparison

- Goodreads Book Scraper GitHub: Free, customizable, but DIY and requires coding knowledge.

- Apify Scraper: Paid, user-friendly, scalable, and continuously updated.

### Is it Legal to Scrape Goodreads?

#### ✅ Public Data (Allowed)

- Titles, authors, ISBNs, ratings, and book details are publicly available.

- Collecting this type of information is generally safe and similar to manual research.

#### ❌ Private or Sensitive Data (Not Ethical)

- Bulk scraping of user profiles, reviews, or private interactions crosses into restricted use.

- Even if visible, using this data commercially may violate Goodreads’ terms of service.

#### Open Library vs Goodreads

- Open Library: Fully open-source, API-friendly, designed for free public access to book datasets.

- Goodreads: Has a limited API; scraping is often used as a workaround but comes with restrictions.

#### Scraping Limitations

- Large-scale scraping may trigger rate limits or IP blocks.

- Always use proxies and respect request delays.

- For compliance, avoid redistributing or reselling Goodreads data directly.

### What Are Other Goodreads Scraping Tools?

If you’re looking beyond a Goodreads Book Scraper, there are several other specialized tools designed for specific types of data collection. Each scraper focuses on different use cases, whether it’s reviews, author insights, or curated book lists. Here’s a breakdown:

| Tool / Scraper | Purpose & Data Collected | Best For |
| --- | --- | --- |
| Goodreads Review Scraper | Extracts full reviews, ratings, reviewer profiles, likes, and timestamps | Sentiment analysis, reader insights, review aggregation |
| Goodreads Author Scraper | Scrapes author profiles, bios, social links, and published book lists | Author research, publisher databases, literary studies |
| Goodreads List Scraper | Collects books from themed lists (e.g., “Best Sci-Fi of 2024”) | Market trends, content curation, book discovery |
| Goodreads Shelf Scraper | Extracts books under shelves/genres like “romance,” “fantasy,” or “history” | Genre research, catalog building, library automation |
| Open Library API | Free, open-source API for book metadata (ISBN, editions, authors, subjects, publishing) | Alternative to Goodreads scraping, open datasets for research and apps |

### FAQ – Goodreads Book Scraper

#### Is there a free Goodreads Book Scraper GitHub project?

Yes, you can find open-source Goodreads scraper projects on GitHub. They’re free but require coding knowledge and setup.

#### How to get a Goodreads dataset for research?

Run a Goodreads Book Scraper or use the Open Library API for free, structured book datasets suited for research.

#### Can I scrape Goodreads reviews?

Yes, with a Goodreads Review Scraper, but avoid bulk or private user data to stay compliant.

#### Is there an official Goodreads API?

Yes, but the Goodreads API is limited and doesn’t provide full access to book, list, or review data.

#### How safe is Goodreads integration for apps?

It’s safe if you only use public book data and respect Goodreads’ terms. Use proxies for large-scale scraping.

#### What’s the difference between Goodreads and Open Library?

Goodreads is community-driven but limited for scraping, while Open Library is open-source, API-friendly, and free for public use.

# Actor input Schema

## `urls` (type: `array`):

📝 Add search phrases (e.g. *mystery novels*) or full search URLs. One per line. At least one entry is required.

## `resultsPerQuery` (type: `integer`):

🎯 Target number of books to collect for each search. The actor keeps paginating until it reaches this count or runs out of results.

## `proxyConfiguration` (type: `object`):

⚡ Proxy is off by default. If the site blocks the request, the actor turns on Apify Proxy (e.g. RESIDENTIAL) and retries. Configure groups and country here if you want to customize fallback behaviour.
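For example, a fallback configuration using the properties defined in the input schema might look like this (the RESIDENTIAL group and US country filter are illustrative choices):

```json
{
  "useApifyProxy": true,
  "apifyProxyGroups": ["RESIDENTIAL"],
  "apifyProxyCountry": "US"
}
```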

## Actor input object example

```json
{
  "urls": [
    "python programming"
  ],
  "resultsPerQuery": 10
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "urls": [
        "python programming"
    ]
};

// Run the Actor and wait for it to finish
const run = await client.actor("scraperforge/goodreads-book-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = { "urls": ["python programming"] }

# Run the Actor and wait for it to finish
run = client.actor("scraperforge/goodreads-book-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "urls": [
    "python programming"
  ]
}' |
apify call scraperforge/goodreads-book-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=scraperforge/goodreads-book-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Goodreads Book Scraper",
        "version": "1.0",
        "x-build-id": "B6oePKq9msXMNWMf2"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/scraperforge~goodreads-book-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-scraperforge-goodreads-book-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/scraperforge~goodreads-book-scraper/runs": {
            "post": {
                "operationId": "runs-sync-scraperforge-goodreads-book-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/scraperforge~goodreads-book-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-scraperforge-goodreads-book-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "required": [
                    "urls"
                ],
                "properties": {
                    "urls": {
                        "title": "🔗 Search terms or Goodreads URLs",
                        "type": "array",
                        "description": "📝 Add search phrases (e.g. *mystery novels*) or full search URLs. One per line. At least one entry is required.",
                        "items": {
                            "type": "string"
                        }
                    },
                    "resultsPerQuery": {
                        "title": "📊 How many books per search?",
                        "minimum": 1,
                        "maximum": 10000,
                        "type": "integer",
                        "description": "🎯 Target number of books to collect for each search. The actor keeps paginating until it reaches this count or runs out of results.",
                        "default": 10
                    },
                    "proxyConfiguration": {
                        "title": "🛡️ Proxy (optional)",
                        "type": "object",
                        "description": "⚡ Proxy is off by default. If the site blocks the request, the actor turns on Apify Proxy (e.g. RESIDENTIAL) and retries. Configure groups and country here if you want to customize fallback behaviour.",
                        "properties": {
                            "useApifyProxy": {
                                "title": "✅ Use Apify Proxy",
                                "type": "boolean",
                                "description": "Turn on to allow proxy fallback when a block is detected. The run still starts without proxy for faster first requests."
                            },
                            "apifyProxyGroups": {
                                "title": "📌 Proxy groups",
                                "type": "array",
                                "items": {
                                    "type": "string"
                                },
                                "description": "Choose proxy groups (e.g. RESIDENTIAL). Used when the actor switches to proxy after a block."
                            },
                            "apifyProxyCountry": {
                                "title": "🌍 Proxy country",
                                "type": "string",
                                "description": "Optional: ISO-2 country code (e.g. US, GB). Leave blank for no country filter."
                            }
                        }
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
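The schema above describes the run object's cost accounting: `usageUsd` breaks the run's cost down per resource type, and `usageTotalUsd` is the sum of those amounts. Below is a minimal sketch of inspecting those fields. The `run` dict is a hypothetical example conforming to the schema (in a real integration you would fetch the run object via the `apify-client` library, e.g. `ApifyClient(token).run(run_id).get()`):

```python
# Hypothetical run object matching the schema above (not real billing data).
run = {
    "status": "READY",
    "options": {"build": "latest", "timeoutSecs": 300, "memoryMbytes": 1024},
    "usageTotalUsd": 0.00005,
    "usageUsd": {
        "ACTOR_COMPUTE_UNITS": 0,
        "DATASET_WRITES": 0,
        "KEY_VALUE_STORE_WRITES": 0.00005,
    },
}

# The per-resource USD amounts should sum to usageTotalUsd
# (use a tolerance when comparing floats).
total = sum(run["usageUsd"].values())
assert abs(total - run["usageTotalUsd"]) < 1e-9

print(f"Run cost ${run['usageTotalUsd']:.5f} USD")  # prints "Run cost $0.00005 USD"
```

This kind of check is useful for monitoring per-run cost in a pipeline, e.g. alerting when `usageTotalUsd` exceeds a budget threshold.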
