# Wikipedia Pageviews Scraper (`parseforge/wikipedia-pageviews-scraper`) Actor

Pull Wikipedia pageview metrics for any article in any language edition. Daily or monthly granularity, filter by access type (desktop, mobile, app) and agent type (user, spider, automated). Pick a date range. Export to JSON, CSV, or Excel for SEO research and content benchmarking.

- **URL**: https://apify.com/parseforge/wikipedia-pageviews-scraper.md
- **Developed by:** [ParseForge](https://apify.com/parseforge) (community)
- **Categories:** Education, News, Other
- **Stats:** 2 total users, 1 monthly user, 100.0% runs succeeded
- **User rating**: No ratings yet

## Pricing

from $8.25 / 1,000 items

This Actor is paid per event: you pay a fixed price for specific events rather than for Apify platform usage.
Since this Actor supports Apify Store discounts, the price decreases with higher subscription plans.

Learn more: https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event

## What's an Apify Actor?

Actors are software tools running on the Apify platform, built for all kinds of web data extraction and automation use cases.
In Batch mode, an Actor accepts a well-defined JSON input, performs an action which can take anything from a few seconds to a few hours,
and optionally produces a well-defined JSON output, datasets with results, or files in key-value store.
In Standby mode, an Actor provides a web server which can be used as a website, API, or an MCP server.
Actors are written with capital "A".

## How to integrate an Actor?

If asked about integration, you help developers integrate Actors into their projects.
You adapt to their stack and deliver integrations that are safe, well-documented, and production-ready.
The best way to integrate Actors is as follows.

In JavaScript/TypeScript projects, use official [JavaScript/TypeScript client](https://docs.apify.com/api/client/js.md):

```bash
npm install apify-client
```

In Python projects, use official [Python client library](https://docs.apify.com/api/client/python.md):

```bash
pip install apify-client
```

In shell scripts, use [Apify CLI](https://docs.apify.com/cli/docs.md):

```bash
# macOS / Linux
curl -fsSL https://apify.com/install-cli.sh | bash
# Windows
irm https://apify.com/install-cli.ps1 | iex
```

In AI frameworks, you might use the [Apify MCP server](https://docs.apify.com/platform/integrations/mcp.md).

If your project is in a different language, use the [REST API](https://docs.apify.com/api/v2.md).

For usage examples, see the [API](#api) section below.

For more details, see Apify documentation as [Markdown index](https://docs.apify.com/llms.txt) and [Markdown full-text](https://docs.apify.com/llms-full.txt).


# README

![ParseForge Banner](https://github.com/ParseForge/apify-assets/blob/ad35ccc13ddd068b9d6cba33f323962e39aed5b2/banner.jpg?raw=true)

## 📚 Wikipedia Pageviews Scraper

> 🚀 **Pull daily and monthly Wikipedia pageviews for any article in any language.** Filter by date range, access type, and agent type. No API key, no registration, no quota negotiation.

> 🕒 **Last updated:** 2026-05-01 · **📊 8 fields** per row · **📚 300+ language editions** · **📅 daily and monthly granularity** · **🗓️ data from July 2015 onward**

The **Wikipedia Pageviews Scraper** queries the official Wikimedia REST API and returns the number of times any Wikipedia article was viewed during a date range. Each row reports the language project, article title, timestamp, access type, agent type, and view count. The endpoint covers every Wikipedia language edition, and the underlying dataset goes back to **July 2015**, giving you nearly a decade of continuous traffic history per article.

Wikipedia is the **eighth most visited website in the world** with billions of pageviews per month. Pageview trends are a leading indicator for cultural moments, search demand, breaking news, and product launches. Building your own pipeline against the Wikimedia API means handling URL encoding, paginated date ranges, and per-language hosts. This Actor handles all of that and lets you focus on the analysis.
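For reference, this is the kind of request the Actor handles for you. A minimal Python sketch of building a raw per-article URL against the documented Wikimedia REST API (the `pageviews_url` helper is ours for illustration, not part of the Actor):

```python
from urllib.parse import quote

def pageviews_url(project, article, start, end,
                  access="all-access", agent="user", granularity="daily"):
    """Build a Wikimedia REST API per-article pageviews URL.

    Titles must be URL-encoded with safe="" so that slashes inside a
    title (e.g. "AC/DC") do not break the URL path.
    """
    title = quote(article.replace(" ", "_"), safe="")
    return (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"{project}/{access}/{agent}/{title}/{granularity}/{start}/{end}"
    )

# Dates use the API's compact YYYYMMDD format.
print(pageviews_url("en.wikipedia", "Albert Einstein", "20260401", "20260430"))
```

Multiply this by per-language hosts, date-range chunking, and error handling, and the appeal of a prebuilt Actor becomes clear.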

| 🎯 Target Audience | 💡 Primary Use Cases |
|---|---|
| SEO teams, journalists, trend researchers, market analysts, academics, dashboard builders | Search demand forecasting, cultural research, content benchmarking, trend tracking, comparative analysis |

---

### 📋 What the Wikipedia Pageviews Scraper does

Five filtering workflows in a single run:

- 📚 **Per-article views.** Submit any Wikipedia article title and pull its full traffic history for the date range you choose.
- 🌍 **Any language edition.** Pick from 20 supported language projects including English, Spanish, German, French, Japanese, Russian, and Chinese Wikipedia.
- 📅 **Daily or monthly granularity.** Daily rollups give you weekday seasonality. Monthly rollups give you long-term trend lines.
- 📱 **Access type filter.** Slice traffic by desktop, mobile web, mobile app, or all-access combined.
- 🤖 **Agent type filter.** Separate human (`user`) traffic from spiders and automated agents to clean up trend lines.

Each row in the dataset reports the project (e.g. `en.wikipedia`), URL-encoded article title, granularity, timestamp in `YYYYMMDD00` format, access slice, agent slice, and view count. Dataset entries go back to July 2015.

> 💡 **Why it matters:** pageview data is one of the cleanest free signals for tracking real-world attention. When a celebrity dies, a film trailer drops, or a country votes, the matching Wikipedia article spikes within hours. SEO teams use the pageview series as a free proxy for search demand. Researchers cite it in studies of collective attention. Dashboard builders embed it as a public-interest gauge.
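As a quick sketch of what downstream analysis of these rows can look like, here is a plain-Python aggregation over dataset items (field names follow the output schema; the numbers are illustrative, not real pulls):

```python
from collections import defaultdict

def total_views(rows):
    """Sum the 'views' field per article across dataset rows."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["article"]] += row["views"]
    return dict(totals)

# Illustrative rows shaped like the Actor's dataset output.
sample = [
    {"article": "ChatGPT", "views": 8742},
    {"article": "ChatGPT", "views": 9010},
    {"article": "Taylor_Swift", "views": 31204},
]
print(total_views(sample))  # → {'ChatGPT': 17752, 'Taylor_Swift': 31204}
```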

---

### 🎬 Full Demo

_🚧 Coming soon: a 3-minute walkthrough showing how to go from sign-up to a downloaded dataset._

---

### ⚙️ Input

<table>
<thead>
<tr><th>Input</th><th>Type</th><th>Default</th><th>Behavior</th></tr>
</thead>
<tbody>
<tr><td><code>maxItems</code></td><td>integer</td><td><code>10</code></td><td>Rows to return. Free plan caps at 10, paid plan at 1,000,000.</td></tr>
<tr><td><code>articles</code></td><td>array of strings</td><td><code>["Albert_Einstein"]</code></td><td>Article titles with underscores in place of spaces. One title per array entry.</td></tr>
<tr><td><code>project</code></td><td>string</td><td><code>"en.wikipedia.org"</code></td><td>Wikipedia language project. Pick from the enum of 20 supported language editions.</td></tr>
<tr><td><code>granularity</code></td><td>string</td><td><code>"daily"</code></td><td>Either <code>daily</code> or <code>monthly</code>.</td></tr>
<tr><td><code>startDate</code></td><td>string</td><td>30 days ago</td><td>ISO date <code>YYYY-MM-DD</code>. Earliest supported is 2015-07-01.</td></tr>
<tr><td><code>endDate</code></td><td>string</td><td>yesterday</td><td>ISO date <code>YYYY-MM-DD</code>.</td></tr>
<tr><td><code>access</code></td><td>string</td><td><code>"all-access"</code></td><td><code>all-access</code>, <code>desktop</code>, <code>mobile-app</code>, or <code>mobile-web</code>.</td></tr>
<tr><td><code>agent</code></td><td>string</td><td><code>"all-agents"</code></td><td><code>all-agents</code>, <code>user</code>, <code>spider</code>, or <code>automated</code>.</td></tr>
</tbody>
</table>

**Example: daily English-Wikipedia views for three articles in April 2026.**

```json
{
    "maxItems": 100,
    "articles": ["Albert_Einstein", "ChatGPT", "Taylor_Swift"],
    "project": "en.wikipedia.org",
    "granularity": "daily",
    "startDate": "2026-04-01",
    "endDate": "2026-04-30",
    "access": "all-access",
    "agent": "user"
}
```

**Example: monthly Spanish-Wikipedia views since 2020.**

```json
{
    "maxItems": 1000,
    "articles": ["Lionel_Messi", "Real_Madrid_CF"],
    "project": "es.wikipedia.org",
    "granularity": "monthly",
    "startDate": "2020-01-01",
    "endDate": "2026-04-01"
}
```

> ⚠️ **Good to Know:** Wikipedia article titles are case sensitive and use underscores, not spaces. Submit `Albert_Einstein`, not `albert einstein`. Articles that have been moved or deleted return zero rows. The Wikimedia API is unauthenticated but expects a descriptive User-Agent string, which the Actor sends automatically.
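If your titles come from user input, a small normalization helper (ours for illustration, not part of the Actor) avoids the most common mistakes. Note it can only fix spaces and the leading letter; interior capitalization still has to match the article's actual slug:

```python
def to_wiki_slug(title: str) -> str:
    """Normalize a human-readable title toward a Wikipedia slug.

    Wikipedia replaces spaces with underscores and capitalizes the
    first character of a title; the rest of the title is
    case-sensitive and must already match the article.
    """
    slug = title.strip().replace(" ", "_")
    return slug[:1].upper() + slug[1:]

print(to_wiki_slug("chatGPT"))  # → ChatGPT
```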

***

### 📊 Output

Each row contains **8 fields**. Download the dataset as CSV, Excel, JSON, or XML.

#### 🧾 Schema

| Field | Type | Example |
|---|---|---|
| 🌐 `project` | string | `"en.wikipedia"` |
| 📄 `article` | string | `"Albert_Einstein"` |
| ⏱️ `granularity` | string | `"daily"` |
| 📅 `timestamp` | string | `"2026040100"` |
| 📱 `access` | string | `"all-access"` |
| 🤖 `agent` | string | `"all-agents"` |
| 👁️ `views` | integer | `15626` |
| 🕒 `scrapedAt` | ISO 8601 | `"2026-05-01T02:00:11.931Z"` |
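The `timestamp` field is a compact `YYYYMMDD00` string rather than ISO 8601; a short helper using the Python standard library converts it to a date:

```python
from datetime import datetime

def parse_timestamp(ts: str) -> datetime:
    # Timestamps are padded to ten digits (YYYYMMDDHH); the hour is
    # always "00" for daily rollups, and the day is "01" for monthly.
    return datetime.strptime(ts, "%Y%m%d%H")

print(parse_timestamp("2026040100"))  # → 2026-04-01 00:00:00
```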

#### 📦 Sample records

<details>
<summary><strong>📅 Daily view of a long-stable article: Albert Einstein, April 1 2026</strong></summary>

```json
{
    "project": "en.wikipedia",
    "article": "Albert_Einstein",
    "granularity": "daily",
    "timestamp": "2026040100",
    "access": "all-access",
    "agent": "all-agents",
    "views": 15626,
    "scrapedAt": "2026-05-01T02:00:11.931Z"
}
```

</details>

<details>
<summary><strong>📈 Daily view of a trending article: ChatGPT</strong></summary>

```json
{
    "project": "en.wikipedia",
    "article": "ChatGPT",
    "granularity": "daily",
    "timestamp": "2026040100",
    "access": "all-access",
    "agent": "user",
    "views": 8742,
    "scrapedAt": "2026-05-01T02:00:12.840Z"
}
```

</details>

<details>
<summary><strong>📊 Monthly view of a celebrity article: Taylor Swift</strong></summary>

```json
{
    "project": "en.wikipedia",
    "article": "Taylor_Swift",
    "granularity": "monthly",
    "timestamp": "2026040100",
    "access": "all-access",
    "agent": "all-agents",
    "views": 942103,
    "scrapedAt": "2026-05-01T02:00:13.220Z"
}
```

</details>

***

### ✨ Why choose this Actor

| | Capability |
|---|---|
| 🆓 | **Free official source.** Pulls directly from the public Wikimedia REST API, no scraping of HTML pages. |
| 🌍 | **All major Wikipedia languages.** Pick from 20 enum-listed projects, request more if you need them. |
| 📅 | **Decade of history.** Data goes back to July 2015, with daily and monthly rollups. |
| 🧪 | **Clean filter slices.** Separate desktop from mobile, separate human traffic from spiders. |
| 🚀 | **Sub-10-second runs.** A typical 100-row pull finishes in under 10 seconds. |
| 🛠️ | **Bulk article support.** Submit dozens of articles in a single run, results pushed in order. |
| 🔄 | **Export anywhere.** Output ships as CSV, Excel, JSON, or XML through the Apify dataset endpoints. |

> 📊 The Wikimedia Foundation reports more than 18 billion pageviews per month across all editions.

***

### 📈 How it compares to alternatives

| Approach | Cost | Coverage | Refresh | Filters | Setup |
|---|---|---|---|---|---|
| Manual queries to the Wikimedia REST API | Free | Full | Live | Manual | Engineer hours |
| Third-party paid SEO suites | $$$ subscription | Partial | Daily | Built-in | Account setup |
| Generic web traffic estimators | $$ subscription | Estimated | Weekly | Limited | Account setup |
| **⭐ Wikipedia Pageviews Scraper** *(this Actor)* | Pay-per-event | Full | Live | Granularity, access, agent | None |

The same data the Wikimedia Foundation publishes, exposed as clean structured records you can pipe into anything.

***

### 🚀 How to use

1. 🆓 **Create a free Apify account.** [Sign up here](https://console.apify.com/sign-up?fpr=vmoqkp) and get $5 in free credit.
2. 🔍 **Open the Actor.** Search for "Wikipedia Pageviews" in the Apify Store.
3. ⚙️ **Set your inputs.** Pick articles, project, date range, granularity, and any filters.
4. ▶️ **Click Start.** Most runs finish in under 10 seconds.
5. 📥 **Download.** Export as CSV, Excel, JSON, or XML, or wire it into a Make / Zapier flow.

> ⏱️ Total time from sign-up to first dataset: under five minutes.

***

### 💼 Business use cases

<table>
<tr>
<td width="50%">

#### 📈 SEO & content teams

- Forecast search demand by tracking Wikipedia traffic on related terms
- Find rising topics before they show up in keyword tools
- Benchmark content performance against the canonical Wikipedia article
- Justify content investments with neutral third-party numbers

</td>
<td width="50%">

#### 📰 Journalists & research

- Quantify public attention to figures and events
- Study attention bursts around news cycles
- Compare regional interest across language editions
- Cite a free, reproducible, primary source in stories

</td>
</tr>
<tr>
<td width="50%">

#### 💰 Finance & market research

- Use article traffic as an alt-data signal for consumer interest
- Track brand and product awareness over time
- Benchmark IPO or product-launch attention
- Build leading indicators for niche markets

</td>
<td width="50%">

#### 🧠 Data science & ML

- Generate time-series features for downstream models
- Train demand-forecasting models on cultural attention
- Build dashboards for trend monitoring
- Cross-reference pageviews with social and search data

</td>
</tr>
</table>

***

### 🌟 Beyond business use cases

Data like this powers more than commercial workflows. The same structured records support research, education, civic projects, and personal initiatives.

<table>
<tr>
<td width="50%">

#### 🎓 Research and academia

- Empirical datasets for papers, thesis work, and coursework
- Longitudinal studies tracking changes across snapshots
- Reproducible research with cited, versioned data pulls
- Classroom exercises on data analysis and ethical scraping

</td>
<td width="50%">

#### 🎨 Personal and creative

- Side projects, portfolio demos, and indie app launches
- Data visualizations, dashboards, and infographics
- Content research for bloggers, YouTubers, and podcasters
- Hobbyist collections and personal trackers

</td>
</tr>
<tr>
<td width="50%">

#### 🤝 Non-profit and civic

- Transparency reporting and accountability projects
- Advocacy campaigns backed by public-interest data
- Community-run databases for local issues
- Investigative journalism on public records

</td>
<td width="50%">

#### 🧪 Experimentation

- Prototype AI and machine-learning pipelines with real data
- Validate product-market hypotheses before engineering spend
- Train small domain-specific models on niche corpora
- Test dashboard concepts with live input

</td>
</tr>
</table>

***

### 🔌 Automating Wikipedia Pageviews Scraper

Run this Actor on a schedule, from your codebase, or inside another tool:

- **Node.js** SDK: see [Apify JavaScript client](https://docs.apify.com/api/client/js/) for programmatic runs and dataset exports.
- **Python** SDK: see [Apify Python client](https://docs.apify.com/api/client/python/) for the same flow in Python.
- **HTTP API**: see [Apify API docs](https://docs.apify.com/api/v2) for raw REST integration.

Schedule daily, weekly, or monthly runs from the Apify Console. Export results to Google Sheets, S3, or your own webhook with the built-in [integrations](https://docs.apify.com/platform/integrations).

***

### ❓ Frequently Asked Questions

<details>
<summary><strong>📅 How far back does the data go?</strong></summary>

The Wikimedia REST API serves pageview data from July 1, 2015 onward. Earlier data is not available through this endpoint.

</details>

<details>
<summary><strong>🌍 Which language editions are supported?</strong></summary>

The input schema lists 20 widely-used Wikipedia language projects: English, Spanish, German, French, Italian, Portuguese, Russian, Chinese, Japanese, Korean, Arabic, Hebrew, Turkish, Polish, Dutch, Indonesian, Vietnamese, Swedish, Persian, and Ukrainian. Open a request via our contact form if you need a different project.

</details>

<details>
<summary><strong>⏱️ What granularity does the API support?</strong></summary>

Daily and monthly. Hourly granularity is available through a different endpoint and is not part of this Actor.

</details>

<details>
<summary><strong>🔠 How do I format article titles?</strong></summary>

Use underscores in place of spaces and match the exact capitalization of the Wikipedia URL. `Albert_Einstein` works, `albert einstein` does not. Special characters and non-Latin scripts are URL-encoded automatically.

</details>

<details>
<summary><strong>📦 Can I pass dozens of articles at once?</strong></summary>

Yes. The `articles` input is a string array. The Actor processes each title sequentially and pushes results in order until `maxItems` is reached.

</details>

<details>
<summary><strong>🤖 What is the difference between agent types?</strong></summary>

`user` traffic is human visits. `spider` traffic is search engine crawlers. `automated` traffic is known automation tools. `all-agents` sums everything. For trend analysis, filter by `user` to remove crawler noise.

</details>

<details>
<summary><strong>📱 What is the difference between access types?</strong></summary>

`desktop` covers traffic from desktop browsers, `mobile-web` from mobile browsers, `mobile-app` from the official Wikipedia app. `all-access` sums everything.

</details>

<details>
<summary><strong>💼 Can I use the data for commercial work?</strong></summary>

Yes. Wikipedia pageview data is published under the [Creative Commons CC0 license](https://wikitech.wikimedia.org/wiki/Analytics/Data_Lake/Traffic/Pageviews) and can be used commercially. Always cite Wikimedia as the data source.

</details>

<details>
<summary><strong>💳 Do I need a paid plan to use this?</strong></summary>

The free plan returns up to 10 rows per run, which is enough for testing. Paid plans return up to 1,000,000 rows.

</details>

<details>
<summary><strong>⚠️ What if a run fails or returns empty?</strong></summary>

The most common cause is a misspelled article title. Confirm the exact slug on Wikipedia, then retry. If the issue persists, [open a contact form](https://tally.so/r/BzdKgA) and include the run URL.

</details>

<details>
<summary><strong>📊 Can I trust the numbers?</strong></summary>

Yes. The data comes directly from the Wikimedia Foundation's published pageview metrics, the same numbers used in their public dashboards.

</details>

<details>
<summary><strong>⚖️ Is scraping Wikipedia legal?</strong></summary>

This Actor uses the official Wikimedia REST API, not scraping. The API is publicly documented and explicitly built for programmatic access.

</details>

***

### 🔌 Integrate with any app

- [**Make**](https://apify.com/integrations/make) - drop run results into 1,800+ apps with a no-code visual builder.
- [**Zapier**](https://apify.com/integrations/zapier) - trigger automations off completed runs.
- [**Slack**](https://apify.com/integrations/slack) - post run summaries to a channel.
- [**Google Sheets**](https://apify.com/integrations/google-sheets) - sync each run into a spreadsheet.
- [**Webhooks**](https://docs.apify.com/platform/integrations/webhooks) - notify your own services on run finish.
- [**Airbyte**](https://apify.com/integrations/airbyte) - load runs into Snowflake, BigQuery, or Postgres.

***

### 🔗 Recommended Actors

- [**🅱️ Bing Search Scraper**](https://apify.com/parseforge/bing-search-scraper) - track organic search demand alongside Wikipedia traffic.
- [**🦆 DuckDuckGo Search Scraper**](https://apify.com/parseforge/duckduckgo-search-scraper) - alternative SERP signal for the same topic.
- [**📰 Substack Publication Scraper**](https://apify.com/parseforge/substack-publication-scraper) - pair Wikipedia trends with newsletter cadence.
- [**🐙 GitHub Trending Repos Scraper**](https://apify.com/parseforge/github-trending-scraper) - capture developer attention next to public attention.
- [**🌐 Common Crawl Index Scraper**](https://apify.com/parseforge/common-crawl-index-scraper) - cross-reference web archive captures with traffic data.

> 💡 **Pro Tip:** browse the complete [ParseForge collection](https://apify.com/parseforge) for more pre-built scrapers and data tools.

***

**🆘 Need Help?** [**Open our contact form**](https://tally.so/r/BzdKgA) and we'll route the question to the right person.

***

> Wikipedia is a registered trademark of the Wikimedia Foundation. This Actor is not affiliated with or endorsed by the Wikimedia Foundation. It is built on the public Wikimedia REST API and respects all published rate limits.

# Actor input Schema

## `maxItems` (type: `integer`):

Free users: Limited to 10 items (preview). Paid users: Optional, max 1,000,000.

## `articles` (type: `array`):

Wikipedia article titles to query (use the slug or human-readable title).

## `project` (type: `string`):

Wikipedia language project (e.g. en.wikipedia.org).

## `granularity` (type: `string`):

How to bucket pageviews.

## `startDate` (type: `string`):

Start date (YYYY-MM-DD). Defaults to 30 days ago.

## `endDate` (type: `string`):

End date (YYYY-MM-DD). Defaults to yesterday.

## `access` (type: `string`):

Device type slice.

## `agent` (type: `string`):

Traffic agent slice.

## Actor input object example

```json
{
  "maxItems": 10,
  "articles": [
    "Albert_Einstein",
    "ChatGPT",
    "Taylor_Swift"
  ],
  "project": "en.wikipedia.org",
  "granularity": "daily",
  "access": "all-access",
  "agent": "all-agents"
}
```

# API

You can run this Actor programmatically using our API. Below are code examples in JavaScript, Python, and CLI, as well as the OpenAPI specification and MCP server setup.

## JavaScript example

```javascript
import { ApifyClient } from 'apify-client';

// Initialize the ApifyClient with your Apify API token
// Replace the '<YOUR_API_TOKEN>' with your token
const client = new ApifyClient({
    token: '<YOUR_API_TOKEN>',
});

// Prepare Actor input
const input = {
    "maxItems": 10,
    "articles": [
        "Albert_Einstein",
        "ChatGPT",
        "Taylor_Swift"
    ],
    "project": "en.wikipedia.org",
    "granularity": "daily",
    "access": "all-access",
    "agent": "all-agents"
};

// Run the Actor and wait for it to finish
const run = await client.actor("parseforge/wikipedia-pageviews-scraper").call(input);

// Fetch and print Actor results from the run's dataset (if any)
console.log('Results from dataset');
console.log(`💾 Check your data here: https://console.apify.com/storage/datasets/${run.defaultDatasetId}`);
const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach((item) => {
    console.dir(item);
});

// 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/js/docs

```

## Python example

```python
from apify_client import ApifyClient

# Initialize the ApifyClient with your Apify API token
# Replace '<YOUR_API_TOKEN>' with your token.
client = ApifyClient("<YOUR_API_TOKEN>")

# Prepare the Actor input
run_input = {
    "maxItems": 10,
    "articles": [
        "Albert_Einstein",
        "ChatGPT",
        "Taylor_Swift",
    ],
    "project": "en.wikipedia.org",
    "granularity": "daily",
    "access": "all-access",
    "agent": "all-agents",
}

# Run the Actor and wait for it to finish
run = client.actor("parseforge/wikipedia-pageviews-scraper").call(run_input=run_input)

# Fetch and print Actor results from the run's dataset (if there are any)
print("💾 Check your data here: https://console.apify.com/storage/datasets/" + run["defaultDatasetId"])
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)

# 📚 Want to learn more 📖? Go to → https://docs.apify.com/api/client/python/docs/quick-start

```

## CLI example

```bash
echo '{
  "maxItems": 10,
  "articles": [
    "Albert_Einstein",
    "ChatGPT",
    "Taylor_Swift"
  ],
  "project": "en.wikipedia.org",
  "granularity": "daily",
  "access": "all-access",
  "agent": "all-agents"
}' |
apify call parseforge/wikipedia-pageviews-scraper --silent --output-dataset

```

## MCP server setup

```json
{
    "mcpServers": {
        "apify": {
            "command": "npx",
            "args": [
                "mcp-remote",
                "https://mcp.apify.com/?tools=parseforge/wikipedia-pageviews-scraper",
                "--header",
                "Authorization: Bearer <YOUR_API_TOKEN>"
            ]
        }
    }
}

```

## OpenAPI specification

```json
{
    "openapi": "3.0.1",
    "info": {
        "title": "Wikipedia Pageviews Scraper",
        "description": "Pull Wikipedia pageview metrics for any article in any language edition. Daily or monthly granularity, filter by access type (desktop, mobile, app) and agent type (user, spider, automated). Pick a date range. Export to JSON, CSV, or Excel for SEO research and content benchmarking.",
        "version": "1.0",
        "x-build-id": "vLE7JqMZvMAi2ecGx"
    },
    "servers": [
        {
            "url": "https://api.apify.com/v2"
        }
    ],
    "paths": {
        "/acts/parseforge~wikipedia-pageviews-scraper/run-sync-get-dataset-items": {
            "post": {
                "operationId": "run-sync-get-dataset-items-parseforge-wikipedia-pageviews-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for its completion, and returns Actor's dataset items in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        },
        "/acts/parseforge~wikipedia-pageviews-scraper/runs": {
            "post": {
                "operationId": "runs-sync-parseforge-wikipedia-pageviews-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor and returns information about the initiated run in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "$ref": "#/components/schemas/runsResponseSchema"
                                }
                            }
                        }
                    }
                }
            }
        },
        "/acts/parseforge~wikipedia-pageviews-scraper/run-sync": {
            "post": {
                "operationId": "run-sync-parseforge-wikipedia-pageviews-scraper",
                "x-openai-isConsequential": false,
                "summary": "Executes an Actor, waits for completion, and returns the OUTPUT from Key-value store in response.",
                "tags": [
                    "Run Actor"
                ],
                "requestBody": {
                    "required": true,
                    "content": {
                        "application/json": {
                            "schema": {
                                "$ref": "#/components/schemas/inputSchema"
                            }
                        }
                    }
                },
                "parameters": [
                    {
                        "name": "token",
                        "in": "query",
                        "required": true,
                        "schema": {
                            "type": "string"
                        },
                        "description": "Enter your Apify token here"
                    }
                ],
                "responses": {
                    "200": {
                        "description": "OK"
                    }
                }
            }
        }
    },
    "components": {
        "schemas": {
            "inputSchema": {
                "type": "object",
                "properties": {
                    "maxItems": {
                        "title": "Max Items",
                        "minimum": 1,
                        "maximum": 1000000,
                        "type": "integer",
                        "description": "Free users: Limited to 10 items (preview). Paid users: Optional, max 1,000,000."
                    },
                    "articles": {
                        "title": "Articles",
                        "type": "array",
                        "description": "Wikipedia article titles to query (use the slug or human-readable title).",
                        "items": {
                            "type": "string"
                        }
                    },
                    "project": {
                        "title": "Wikipedia language edition",
                        "enum": [
                            "en.wikipedia.org",
                            "es.wikipedia.org",
                            "de.wikipedia.org",
                            "fr.wikipedia.org",
                            "ja.wikipedia.org",
                            "ru.wikipedia.org",
                            "it.wikipedia.org",
                            "pt.wikipedia.org",
                            "zh.wikipedia.org",
                            "ar.wikipedia.org",
                            "pl.wikipedia.org",
                            "nl.wikipedia.org",
                            "tr.wikipedia.org",
                            "ko.wikipedia.org",
                            "id.wikipedia.org",
                            "vi.wikipedia.org",
                            "sv.wikipedia.org",
                            "fa.wikipedia.org",
                            "uk.wikipedia.org",
                            "he.wikipedia.org"
                        ],
                        "type": "string",
                        "description": "Wikipedia language project (e.g. en.wikipedia.org).",
                        "default": "en.wikipedia.org"
                    },
                    "granularity": {
                        "title": "Granularity",
                        "enum": [
                            "daily",
                            "monthly"
                        ],
                        "type": "string",
                        "description": "How to bucket pageviews.",
                        "default": "daily"
                    },
                    "startDate": {
                        "title": "Start date",
                        "type": "string",
                        "description": "Start date (YYYY-MM-DD). Defaults to 30 days ago."
                    },
                    "endDate": {
                        "title": "End date",
                        "type": "string",
                        "description": "End date (YYYY-MM-DD). Defaults to yesterday."
                    },
                    "access": {
                        "title": "Access type",
                        "enum": [
                            "all-access",
                            "desktop",
                            "mobile-app",
                            "mobile-web"
                        ],
                        "type": "string",
                        "description": "Device type slice.",
                        "default": "all-access"
                    },
                    "agent": {
                        "title": "Agent type",
                        "enum": [
                            "all-agents",
                            "user",
                            "spider",
                            "automated"
                        ],
                        "type": "string",
                        "description": "Traffic agent slice.",
                        "default": "all-agents"
                    }
                }
            },
            "runsResponseSchema": {
                "type": "object",
                "properties": {
                    "data": {
                        "type": "object",
                        "properties": {
                            "id": {
                                "type": "string"
                            },
                            "actId": {
                                "type": "string"
                            },
                            "userId": {
                                "type": "string"
                            },
                            "startedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "finishedAt": {
                                "type": "string",
                                "format": "date-time",
                                "example": "2025-01-08T00:00:00.000Z"
                            },
                            "status": {
                                "type": "string",
                                "example": "READY"
                            },
                            "meta": {
                                "type": "object",
                                "properties": {
                                    "origin": {
                                        "type": "string",
                                        "example": "API"
                                    },
                                    "userAgent": {
                                        "type": "string"
                                    }
                                }
                            },
                            "stats": {
                                "type": "object",
                                "properties": {
                                    "inputBodyLen": {
                                        "type": "integer",
                                        "example": 2000
                                    },
                                    "rebootCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "restartCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "resurrectCount": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "computeUnits": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "options": {
                                "type": "object",
                                "properties": {
                                    "build": {
                                        "type": "string",
                                        "example": "latest"
                                    },
                                    "timeoutSecs": {
                                        "type": "integer",
                                        "example": 300
                                    },
                                    "memoryMbytes": {
                                        "type": "integer",
                                        "example": 1024
                                    },
                                    "diskMbytes": {
                                        "type": "integer",
                                        "example": 2048
                                    }
                                }
                            },
                            "buildId": {
                                "type": "string"
                            },
                            "defaultKeyValueStoreId": {
                                "type": "string"
                            },
                            "defaultDatasetId": {
                                "type": "string"
                            },
                            "defaultRequestQueueId": {
                                "type": "string"
                            },
                            "buildNumber": {
                                "type": "string",
                                "example": "1.0.0"
                            },
                            "containerUrl": {
                                "type": "string"
                            },
                            "usage": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "integer",
                                        "example": 1
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            },
                            "usageTotalUsd": {
                                "type": "number",
                                "example": 0.00005
                            },
                            "usageUsd": {
                                "type": "object",
                                "properties": {
                                    "ACTOR_COMPUTE_UNITS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATASET_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "KEY_VALUE_STORE_WRITES": {
                                        "type": "number",
                                        "example": 0.00005
                                    },
                                    "KEY_VALUE_STORE_LISTS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_READS": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "REQUEST_QUEUE_WRITES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_INTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "DATA_TRANSFER_EXTERNAL_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_RESIDENTIAL_TRANSFER_GBYTES": {
                                        "type": "integer",
                                        "example": 0
                                    },
                                    "PROXY_SERPS": {
                                        "type": "integer",
                                        "example": 0
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
```
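The input schema above can be exercised directly from Python. Below is a minimal sketch that applies the schema's defaults, validates values against its enums, and calls the Actor through Apify's generic `run-sync-get-dataset-items` REST endpoint using only the standard library. The `token` handling and the choice of endpoint are assumptions here; in production you would typically use the official `apify-client` package instead.

```python
import json
import urllib.request

# Enum values copied from the Actor's input schema above.
PROJECTS = {
    "en.wikipedia.org", "es.wikipedia.org", "de.wikipedia.org", "fr.wikipedia.org",
    "ja.wikipedia.org", "ru.wikipedia.org", "it.wikipedia.org", "pt.wikipedia.org",
    "zh.wikipedia.org", "ar.wikipedia.org", "pl.wikipedia.org", "nl.wikipedia.org",
    "tr.wikipedia.org", "ko.wikipedia.org", "id.wikipedia.org", "vi.wikipedia.org",
    "sv.wikipedia.org", "fa.wikipedia.org", "uk.wikipedia.org", "he.wikipedia.org",
}
GRANULARITIES = {"daily", "monthly"}
ACCESS_TYPES = {"all-access", "desktop", "mobile-app", "mobile-web"}
AGENT_TYPES = {"all-agents", "user", "spider", "automated"}


def build_run_input(project="en.wikipedia.org", granularity="daily",
                    access="all-access", agent="all-agents",
                    start_date=None, end_date=None):
    """Apply the schema defaults and reject values outside the schema enums."""
    if project not in PROJECTS:
        raise ValueError(f"unsupported project: {project}")
    if granularity not in GRANULARITIES:
        raise ValueError(f"unsupported granularity: {granularity}")
    if access not in ACCESS_TYPES:
        raise ValueError(f"unsupported access type: {access}")
    if agent not in AGENT_TYPES:
        raise ValueError(f"unsupported agent type: {agent}")
    run_input = {"project": project, "granularity": granularity,
                 "access": access, "agent": agent}
    # startDate/endDate are optional; the Actor defaults to the last 30 days.
    if start_date:
        run_input["startDate"] = start_date  # YYYY-MM-DD
    if end_date:
        run_input["endDate"] = end_date  # YYYY-MM-DD
    return run_input


def run_actor(token, run_input):
    """Run the Actor synchronously and return its dataset items as a list."""
    url = ("https://api.apify.com/v2/acts/parseforge~wikipedia-pageviews-scraper"
           f"/run-sync-get-dataset-items?token={token}")
    req = urllib.request.Request(url, data=json.dumps(run_input).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `run_actor(token, build_run_input(granularity="monthly", agent="user"))` fetches monthly, human-only pageviews for the default English edition over the default date range.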
