Fandom & Wikipedia Extractor
$30.00/month + usage
jupri/wiki-scraper
Scrape content from Fandom.com and Wikipedia.com
Rating: 0.0 (0 reviews)
Developer: cat

Actor stats
- Bookmarked: 1
- Total users: 138
- Monthly active users: 0
- Last modified: 3 years ago

Categories: Social media
Related Actors
kuaima/Fandom
Fandom is one of the biggest sources for all things TV, movies, and games, including Star Wars, Fallout, Marvel, DC, and more. This scraper helps you get data from Fandom topics like https://www.fandom.com/topics/movies.
kuai ma
91
tuningsearch/wikipedia-search-scraper
🔥 Only $0.50 per 1,000 results 🔥 **CHEAPEST** Wikipedia Search + Full Page Scraper! 🔍 Search 100 results per query across 70 languages 📄 Extract complete page content in Markdown format ⚡ Lightning-fast batch processing with zero failure charges!
tuningsearch
44
pluzgi/wikipedia-scraper
The scraper searches Wikipedia for a given term, extracts the titles and URLs of search results, and retrieves the last modification date from each page.
pluzgi
45
logie/shopify-products-scraper
Our Shopify website scraper is the perfect tool for anyone looking to get information from Shopify websites. Use it to get products from any competitor, including price, images, and variants.
Kinder Théo
877
2.4
easyapi/reddit-comments-search-scraper
Search and extract Reddit comments with advanced filtering options. Get detailed metadata including comment content, author info, post context, and engagement metrics. Perfect for sentiment analysis, trend research, and social media monitoring.
EasyApi
123
contacts-api/wikipedia-email-scraper-fast-advanced-and-cheapest
📚 Wikipedia Email Scraper allows you to collect publicly available editor and organization emails from Wikipedia pages 🔎 Great for research and academic outreach 📧
Lead Heaven
2
changeable_acacia/wikipedia-article-extractor-ai-ready
Extracts clean JSON from any Wikipedia article for AI/RAG use.
SABYASACHI TRIPATHY
agentify/wikipedia-mcp-server
MCP server for Wikipedia, providing LLMs and clients with real-time access to Wikipedia articles, summaries, sections, and related information via Apify Actor.
agentify
18
automation-lab/wikipedia-scraper
Search and extract Wikipedia articles — titles, summaries, full content, categories, and images. Uses the free MediaWiki API.
Stas Persiianenko
3
mstephen190/proxy-scraper
Free proxy scraper and checker. Search dozens of free proxy websites. Get list of 100% working public proxies in seconds. Automatically test proxies based on target URL and maximum timeout.
Matthias Stephens
2.9K
Input

- 🔥 Pages (`pages`) — Required. Fandom or Wikipedia pages.
- 🔥 Content (`content`) — Optional. Content format (default: mediawiki).
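Based on the two input fields above, a run input and a call to this Actor through Apify's generic `run-sync-get-dataset-items` API endpoint might look like the sketch below. The page URL and the `content` value are illustrative assumptions; substitute your own Apify API token via the `APIFY_TOKEN` environment variable.

```python
import json
import os
import urllib.request

# Hypothetical run input matching the Actor's schema:
# "pages" is required, "content" is optional (defaults to "mediawiki").
run_input = {
    "pages": ["https://en.wikipedia.org/wiki/Web_scraping"],
    "content": "mediawiki",
}

# In Apify API URLs the Actor ID uses "~" in place of "/".
actor_id = "jupri~wiki-scraper"
url = (
    f"https://api.apify.com/v2/acts/{actor_id}"
    f"/run-sync-get-dataset-items?token={os.environ.get('APIFY_TOKEN', '')}"
)

def call_actor():
    """Run the Actor synchronously and return its dataset items as a list."""
    req = urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if os.environ.get("APIFY_TOKEN"):
    items = call_actor()
    print(f"Got {len(items)} items")
```

The same call can be made more conveniently with the `apify-client` package, but the raw endpoint keeps this sketch dependency-free.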