Wikipedia Scraper | $5 / 1k | Fast & Reliable
Get full articles and detailed search results with the Wikipedia Scraper. Extract structured data including titles, summaries, citations, and full content. Ideal for market research, AI training, and competitive intelligence.
Slug: fatihtahta/wikipedia-scraper
Overview
Wikipedia is the world’s most comprehensive open encyclopedia, constantly updated across thousands of topics and languages. The Wikipedia Scraper automates the collection of article-level insights, transforming public encyclopedia pages into structured datasets ready for analysis.
The actor reliably gathers:
- Complete article content with metadata such as titles, summaries, and publication details.
- Reference counts, internal links, media assets, and infobox attributes for deeper context.
- Search and category results expanded into the full articles they reference.
Run it once and receive consistent, ready-to-use records—no manual browsing, copying, or formatting required.
Why Use This Actor
- Market researchers & analysts: Track company histories, industry timelines, and competitive narratives straight from a trusted knowledge base.
- Developers & data teams: Feed LLM training pipelines, knowledge graphs, or semantic search indices with normalized Wikipedia data.
- Content strategists & educators: Assemble curated reading lists, bibliographies, or citation-rich briefings without handcrafting each entry.
- Knowledge operations & directory builders: Populate internal wikis, catalogues, or monitoring dashboards with up-to-date encyclopedia coverage.
Use it for lead and partner research, market landscaping, product discovery, directory building, due diligence prep, and any workflow that benefits from detailed, cited background information.
Input Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| `articleInputs` | array of strings | Wikipedia article slugs or full URLs to fetch directly. | — |
| `searchInputs` | array of strings | Search queries or Wikipedia search result URLs used to discover matching articles before scraping them. | — |
| `language` | string (select) | Wikipedia language edition that pairs with the provided slugs and search targets. | `"en"` |
| `limit` | integer | Maximum number of articles saved per input. Useful for sampling or capping run size. | `50000` |
| `proxyConfiguration` | object | Connection settings. The default Apify datacenter proxy keeps runs stable. | Apify datacenter proxy |
Example Input
{"articleInputs": ["YouTube","https://en.wikipedia.org/wiki/OpenAI"],"searchInputs": ["generative AI","https://en.wikipedia.org/w/index.php?search=cloud%20computing&title=Special:Search&fulltext=1"],"language": "en","limit": 250,"proxyConfiguration": {"useApifyProxy": true}}
Example Output
{"title": "YouTube","pageId": 3524766,"language": "en","url": "https://en.wikipedia.org/wiki/YouTube","referencesCount": 409,"internalLinks": ["https://en.wikipedia.org/wiki/Online_video_platform","https://en.wikipedia.org/wiki/Alphabet_Inc.","https://en.wikipedia.org/wiki/Social_media_platform"],"imageUrls": ["https://upload.wikimedia.org/wikipedia/commons/thumb/2/20/YouTube_2024.svg/330px-YouTube_2024.svg.png"],"infobox": {"Type of business": "Subsidiary","Founded": "February 14, 2005","Headquarters": "San Bruno, California, United States","Owner": "Alphabet Inc."},"mainContent": "YouTube is an American online video sharing platform owned by Google...","fetchedAt": "2025-11-05T10:11:18.247Z"}
Field highlights
- `title`, `pageId`, `language`, and `url` identify the article.
- `referencesCount`, `internalLinks`, and `imageUrls` show sourcing depth and media assets.
- `infobox` compiles structured summary facts.
- `mainContent` delivers the full article body for text analysis or summarization.
- `fetchedAt` records when the data was collected.
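As a sketch of how these fields might be consumed downstream, the snippet below flattens a few of them into a CSV. The single inline record stands in for the items you would normally pull from a run's dataset, and the output filename is arbitrary.

```python
import csv

# In practice `items` would come from the run's dataset, e.g.
# list(client.dataset(run["defaultDatasetId"]).iterate_items()).
# One record from the example output is inlined so the sketch is self-contained.
items = [
    {
        "title": "YouTube",
        "pageId": 3524766,
        "language": "en",
        "url": "https://en.wikipedia.org/wiki/YouTube",
        "referencesCount": 409,
        "fetchedAt": "2025-11-05T10:11:18.247Z",
    }
]

# Keep only the identifying and sourcing-depth fields for a quick overview table.
fields = ["title", "pageId", "language", "url", "referencesCount", "fetchedAt"]

with open("wikipedia_articles.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for item in items:
        writer.writerow({key: item.get(key) for key in fields})
```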
Notes & Limitations
- Wikipedia content changes frequently; schedule recurring runs to keep datasets current (see the refresh sketch after this list).
- Always review and respect Wikipedia’s licensing terms and robots guidelines when redistributing or republishing material.
- Use the data responsibly, especially when combining it with other datasets or personal information.
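For the freshness point above, one simple approach is to re-run the actor on a schedule and merge each new snapshot into your existing records, keeping the most recent entry per `pageId`. The helper below is an illustrative sketch; the function name and the in-memory merge strategy are assumptions, not part of the actor.

```python
from datetime import datetime

def newest_per_page(existing, fresh):
    """Merge two lists of scraped records, keeping the latest fetchedAt per pageId."""
    latest = {}
    for item in list(existing) + list(fresh):
        # fetchedAt is an ISO-8601 UTC timestamp; swap the trailing "Z" so fromisoformat accepts it.
        ts = datetime.fromisoformat(item["fetchedAt"].replace("Z", "+00:00"))
        kept = latest.get(item["pageId"])
        if kept is None or ts > kept[0]:
            latest[item["pageId"]] = (ts, item)
    return [item for _, item in latest.values()]

# Example: merged = newest_per_page(previous_snapshot, new_snapshot)
```

Pairing a merge like this with a recurring run schedule in Apify Console keeps the combined dataset current without re-scraping everything manually.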
Support
Questions or custom needs? Open an issue on the Issues tab of the actor page in Apify Console; issues are monitored and resolved around the clock.
Happy Scraping,
- Fatih

