trudax/kabum
$30.00/month + usage
Developed by Gustavo Rudiger
0.0 (0)
Last modified 8 months ago
E-commerce
yasmany.casanova/ifood-scraper
Extracts restaurant data from iFood Brazil—including profiles, menus, prices, and ratings—with location-based search and clean, structured JSON output.
Yasmany Grijalba Casanova
6
5.0
yasmany.casanova/vipcommerce-scraper
Extract public product, price, and stock data from VipCommerce supermarkets in Brazil. Clean, structured output in JSON with automated updates. 100% LGPD compliant.
3
brasil-scrapers/santander-imoveis-api
An API for searching Santander real estate listings, with a range of customizable filters.
Brasil Scrapers
16
brasil-scrapers/caixa-leiloes-api
A reliable API for Caixa real estate auctions. This web scraping tool searches the federal Caixa auction site with specific filters for state, city, and search mode. A great solution for building automations and analyzing the real estate auction market.
88
giovannibiancia/Doctoralia
Doctoralia Data Extractor extracts detailed information on healthcare professionals from Doctoralia. Ideal for market research, lead generation, and data analysis in the healthcare sector.
Giovanni Bianciardi
69
4.5
brasil-scrapers/busca-de-editais-compras-eletronicas-rs
Busca de Editais | Compras Eletrônicas RS specializes in the automated collection of procurement notices from the Rio Grande do Sul state purchasing portal (Portal de Compras). This crawler extracts structured data from public procurement notices.
1
muhammetakkurtt/doctoralia-brazil-scraper
This Apify Actor collects doctor reviews and ratings from the doctoralia.com.br website. Users can search by specific specialties and cities. The Actor pulls data such as doctor ID, review score, review text, and review date, and presents it in a structured format.
Muhammet Akkurt
44
4.9
makemakers/Viva-Real-Scraper
Extracts real estate listings from www.vivareal.com.br, a Brazilian real estate platform.
makemakers
53
brasil-scrapers/quinto-andar-api
Quinto Andar API: extract detailed data on real estate in Brazil with this Quinto Andar scraper, including property type, location, price, and description. Ideal for market analysis and property searches. Automate Quinto Andar data collection efficiently.
147
brasil-scrapers/consulta-cnpj-api
This API is a fast and reliable tool for extracting data from the Receita Federal website. It retrieves company information such as registration, registration status, ownership structure, and CNAE codes. The web scraper is kept up to date to return the most current information available.
70
1.0
Input
searches (array, optional)
Search terms to start with.

startUrls (array, optional)
URLs to start with.

maxItems (integer, optional)
Maximum number of products to be scraped. Leave it empty for an unlimited search.
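To show how these fields fit together, here is a minimal sketch of running the Actor with the apify-client package for JavaScript/TypeScript. The token, search term, start URL, and item limit are placeholder values, and whether startUrls expects plain strings or { url } objects depends on this Actor's schema, so treat the input below as an assumption to adjust.

```typescript
import { ApifyClient } from 'apify-client';

async function main() {
    // Authenticate with your Apify API token (placeholder value).
    const client = new ApifyClient({ token: '<YOUR_APIFY_TOKEN>' });

    // Run the Actor with the input fields described above.
    // All values are illustrative; the exact shape of startUrls entries
    // (plain strings vs. { url } objects) depends on the Actor's input schema.
    const run = await client.actor('trudax/kabum').call({
        searches: ['ssd nvme 1tb'],
        startUrls: [{ url: 'https://www.kabum.com.br/hardware' }],
        maxItems: 50,
    });

    // The scraped products end up in the run's default dataset.
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.log(`Scraped ${items.length} products`);
}

main().catch(console.error);
```

The same input object can also be supplied through the JSON editor on the Actor page or sent directly to the Apify REST API when starting a run.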