
Zillow Search Scraper

Pricing: $19.99/month + usage

🏠 Zillow Search Scraper turns Zillow search results into structured real estate data—prices, beds/baths, sqft, address, status, photos & links. 🔎 Ideal for comps, lead gen & market analysis. ⚡ Fast, reliable, anti-blocking. 🚀 For investors, agents & analysts.


Rating: 0.0 (0 reviews)

Developer: ScrapAPI (Maintained by Community)

Actor stats: 0 bookmarked · 2 total users · 1 monthly active user · last modified a day ago


Zillow Search Scraper

The Zillow Search Scraper is a fast, reliable Zillow web scraping tool that turns Zillow search results into structured real estate data for analysis and automation. It solves the pain of manually collecting listing details by programmatically extracting prices, beds/baths, square footage, addresses, statuses, photos, and links from search pages. Built for investors, agents, analysts, and developers, this Zillow listings scraper scales from one-off comps to large market studies—an effective Zillow API alternative for Zillow search export workflows.

What data / output can you get?

Below are example fields pushed to the Apify dataset for each listing. Values are illustrative and align with real output keys.

| Data field | Description | Example value |
| --- | --- | --- |
| `zpid` | Zillow Property ID (stringified) | `"55826232"` |
| `id` | Same as `zpid` (for convenience) | `"55826232"` |
| `price` | Formatted price string | `"$649,000"` |
| `unformattedPrice` | Numeric price value parsed from `price` | `649000` |
| `address` | Full address string | `"3810 Hawthorne Ave, Dallas, TX 75219"` |
| `addressStreet` | Street address | `"3810 Hawthorne Ave"` |
| `addressCity` | City | `"Dallas"` |
| `addressState` | State (abbrev.) | `"TX"` |
| `addressZipcode` | ZIP code | `"75219"` |
| `beds` | Number of bedrooms | `3` |
| `baths` | Number of bathrooms | `2.5` |
| `area` | Square footage | `2622` |
| `detailUrl` | Full URL to the listing detail page | `"https://www.zillow.com/homedetails/..."` |
| `statusType` | Status code (e.g., `FOR_SALE`) | `"FOR_SALE"` |
| `statusText` | Human-readable status text | `"House for sale"` |
| `imgSrc` | Main image URL | `"https://photos.zillowstatic.com/..."` |
| `latLong` | Geo coordinates object | `{"latitude": 32.822216, "longitude": -96.811325}` |
| `zestimate` | Zillow's estimated value (if present) | `994000` |

Bonus metadata includes fields like palsId, rawHomeStatusCd, marketingStatusSimplifiedCd, providerListingId, isUndisclosedAddress, streetViewURL, hasImage, has3DModel, hasVideo, hdpData, carouselPhotosComposable, foundFromSearchUrl, and zoomQuadrantSequence.

Results are saved to your Apify dataset in real time. You can export data to JSON or CSV directly from the dataset.

Key features

  • ⚡ Dynamic zoom API pagination
    Use “Pagination with dynamic zoom increase” to split large map areas into quadrants and go beyond normal Zillow pagination caps—great for big markets and complete coverage.

  • 🧭 HTML pagination fallback
    When the API method isn’t usable (e.g., keyword-only URLs without searchQueryState), the scraper switches to robust HTML parsing of the search pages.

  • 🛡️ Intelligent proxy management
    Automatic fallback from direct requests to datacenter and then residential proxies (with retries), helping avoid blocks and maintain uptime.

  • 💾 Live dataset saving
    Listings are pushed to the Apify dataset as they’re found—so you don’t lose progress if a run stops unexpectedly.

  • 📦 Bulk input support
    Feed multiple search URLs or keywords in one run for automated Zillow search export across cities, ZIPs, or regions.

  • 🎯 Configurable limits
    Control scale with maxItems (1–10,000). Map Markers mode safely caps per-URL output to ~500 results.

  • 🧑‍💻 Developer-friendly, Python-based
    Implemented in Python for easy integration into your pipelines. Advanced users can pass sortOrder via API (honored at runtime) for custom sorting.

  • 🔍 Detailed logging
    Clear, incremental progress logs with periodic “Saved X/Y” snapshots for monitoring.
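The dynamic zoom idea can be illustrated with a small sketch. This is an assumption about how map-splitting scrapers generally work, not the actor's actual source: when an area returns more results than pagination allows, split its map bounds into four quadrants and query each one separately.

```python
def split_into_quadrants(bounds):
    """Split a Zillow-style mapBounds dict into four equal quadrants.

    `bounds` uses the same keys as searchQueryState's mapBounds:
    north/south are latitudes, east/west are longitudes.
    """
    mid_lat = (bounds["north"] + bounds["south"]) / 2
    mid_lng = (bounds["east"] + bounds["west"]) / 2
    return [
        {"north": bounds["north"], "south": mid_lat, "east": mid_lng, "west": bounds["west"]},  # NW
        {"north": bounds["north"], "south": mid_lat, "east": bounds["east"], "west": mid_lng},  # NE
        {"north": mid_lat, "south": bounds["south"], "east": mid_lng, "west": bounds["west"]},  # SW
        {"north": mid_lat, "south": bounds["south"], "east": bounds["east"], "west": mid_lng},  # SE
    ]

# Rough Dallas, TX bounding box (illustrative values)
dallas = {"north": 33.1048, "south": 32.5301, "east": -96.3545, "west": -97.2004}
quadrants = split_into_quadrants(dallas)
```

Recursing into each quadrant until every cell holds fewer listings than the pagination cap is how map-splitting scrapers typically achieve complete coverage of large markets.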

How to use Zillow Search Scraper - step by step

  1. Sign in to your Apify account and open the Apify Console.
  2. Go to Actors and locate “zillow-search-scraper”.
  3. Add input:
    • Paste Zillow search URLs (recommended: full URLs with searchQueryState) or keywords like “dallas-tx”, “new-york-ny” into searchUrls.
  4. Choose the extractionMethod:
    • PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE for large result sets.
    • PAGINATION_WITHOUT_ZOOMING_IN for HTML-based pagination.
    • MAP_MARKERS for quick map-marker snapshots (~500 results cap per URL).
  5. Set maxItems to control how many listings you want per search URL.
  6. Configure proxyConfiguration if needed (by default, the actor starts without a proxy and falls back automatically on blocks).
  7. Run the actor and monitor logs. Results are saved live to the dataset.
  8. Download your results from the “Dataset” tab—export to JSON or CSV for analysis or upload.

Pro tip: Advanced users can provide a sortOrder (via API only) and the actor will apply it to eligible URLs. You can also control pacing via a delay parameter when invoking programmatically.
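For programmatic runs, the input can be assembled and validated in Python before calling the actor. The helper below is a hypothetical sketch, not part of the actor; only the field names and limits match the documented input schema.

```python
ALLOWED_METHODS = {
    "PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE",
    "PAGINATION_WITHOUT_ZOOMING_IN",
    "MAP_MARKERS",
}

def build_run_input(search_urls, max_items=20,
                    method="PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE",
                    use_apify_proxy=False):
    """Assemble and validate an input object for an actor run."""
    if not search_urls:
        raise ValueError("searchUrls is required and must be non-empty")
    if method not in ALLOWED_METHODS:
        raise ValueError(f"unknown extractionMethod: {method}")
    if not 1 <= max_items <= 10_000:
        raise ValueError("maxItems must be between 1 and 10,000")
    return {
        "searchUrls": list(search_urls),
        "extractionMethod": method,
        "maxItems": max_items,
        "proxyConfiguration": {"useApifyProxy": use_apify_proxy},
    }

run_input = build_run_input(["dallas-tx", "new-york-ny"], max_items=100)
```

With the official `apify-client` package you would then call something like `ApifyClient(token).actor("<actor-id>").call(run_input=run_input)` and read items from the run's default dataset; check the client documentation for the exact actor ID and method signatures.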

Use cases

| Use case | Description |
| --- | --- |
| Real estate market research | Aggregate for-sale and rental inventory to benchmark pricing, beds/baths, and sqft across neighborhoods for trend analysis. |
| Lead generation for agents | Build targeted lead lists by exporting structured listing data and links for outreach. |
| Investment screening | Rapidly evaluate comps and shortlist properties by price, area, and status to streamline underwriting. |
| Rental market monitoring | Track rental availability and pricing using `statusType`/`statusText` for a Zillow rental scraper workflow. |
| Competitor & broker tracking | Monitor listing volumes and changes to stay ahead of market shifts. |
| Data enrichment pipeline (API) | Feed normalized listing data into internal databases or analytics stacks for Zillow property data scraping workflows. |
| Academic & urban research | Collect longitudinal datasets for housing studies and policy analysis using consistent Zillow search export runs. |

Why choose Zillow Search Scraper?

This production-ready Zillow data scraper is built for precision, scale, and reliability—ideal for teams who value structured outputs and stable automation.

  • ✅ Accurate, normalized output with both formatted and numeric pricing, detailed addresses, and geolocation.
  • 🚀 Scales to large markets using dynamic zoom to bypass normal search caps.
  • 🧑‍💻 Developer access: Python-based actor, honors advanced parameters like sortOrder when passed via API.
  • 🔄 Bulk-friendly: process multiple search URLs or keywords in one automated Zillow search scraper workflow.
  • 🔌 Easy exports: fetch results from the Apify dataset and scrape Zillow listings to CSV or JSON for downstream use.
  • 🛡️ Anti-blocking built-in: automatic proxy fallback improves stability vs. extension-based or ad hoc tools.
  • 💼 Reliable infrastructure: live-saving to datasets and clear logging reduce operational risk.

In short, it’s a robust Zillow scraping service alternative to unstable browser extensions—built for repeatable Zillow search results scraper jobs.

Is it legal to scrape Zillow?

Yes, when used responsibly. This actor collects data from publicly available Zillow search pages and does not access private or password-protected content.

Guidelines for compliant use:

  • Only collect public listing data and respect Zillow’s terms of service.
  • Ensure your use complies with applicable laws (e.g., GDPR/CCPA).
  • Avoid spam and misuse; use data for analysis and research purposes.
  • Consult your legal team for edge cases or jurisdiction-specific requirements.

Input parameters & output format

Example JSON input

{
  "searchUrls": [
    "https://www.zillow.com/dallas-tx/?category=SEMANTIC&searchQueryState=%7B%22isMapVisible%22%3Atrue%2C%22mapBounds%22%3A%7B%22north%22%3A33.10483509834637%2C%22south%22%3A32.53008985410089%2C%22east%22%3A-96.3544578671875%2C%22west%22%3A-97.2004051328125%7D%2C%22filterState%22%3A%7B%22sort%22%3A%7B%22value%22%3A%22globalrelevanceex%22%7D%7D%2C%22isListVisible%22%3Atrue%2C%22usersSearchTerm%22%3A%22Dallas%2C%20TX%22%2C%22category%22%3A%22cat1%22%2C%22regionSelection%22%3A%5B%7B%22regionId%22%3A38128%2C%22regionType%22%3A6%7D%5D%7D"
  ],
  "extractionMethod": "PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE",
  "maxItems": 20,
  "proxyConfiguration": {
    "useApifyProxy": false
  }
}
| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `searchUrls` | array | Yes | – | List of Zillow search URLs (full URLs with `searchQueryState` recommended) or keywords/locations (e.g., `"dallas-tx"`). Supports bulk input. |
| `extractionMethod` | string | No | `PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE` | Scraping method: `PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE`, `PAGINATION_WITHOUT_ZOOMING_IN`, or `MAP_MARKERS`. |
| `maxItems` | integer | No | `20` | Maximum number of property listings to scrape (1–10,000). |
| `proxyConfiguration` | object | No | `{"useApifyProxy": false}` | Proxy settings. By default, runs without a proxy; on blocks, it can fall back to datacenter and then residential proxies with retries. |

Note for developers: At runtime, the actor also honors sortOrder (if provided via API) and uses a delay setting internally for pacing.

Example JSON output

Each item in the dataset is a normalized listing object:

{
  "zpid": "55826232",
  "palsId": null,
  "id": "55826232",
  "rawHomeStatusCd": "FOR_SALE",
  "marketingStatusSimplifiedCd": "FOR_SALE",
  "providerListingId": "*APID55826232*",
  "imgSrc": "https://photos.zillowstatic.com/...",
  "hasImage": true,
  "detailUrl": "https://www.zillow.com/homedetails/55826232_zpid/",
  "statusType": "FOR_SALE",
  "statusText": "House for sale",
  "countryCurrency": "$",
  "price": "$649,000",
  "unformattedPrice": 649000,
  "address": "3810 Hawthorne Ave, Dallas, TX 75219",
  "addressStreet": "3810 Hawthorne Ave",
  "addressCity": "Dallas",
  "addressState": "TX",
  "addressZipcode": "75219",
  "isUndisclosedAddress": false,
  "beds": 3,
  "baths": 2.5,
  "area": 2622,
  "latLong": { "latitude": 32.822216, "longitude": -96.811325 },
  "zestimate": 994000,
  "has3DModel": false,
  "hasVideo": false,
  "hdpData": {},
  "carouselPhotosComposable": {
    "baseUrl": "https://photos.zillowstatic.com/fp/{photoKey}-p_e.jpg",
    "communityBaseUrl": null,
    "photoData": [],
    "communityPhotoData": null,
    "isStaticUrls": false
  },
  "foundOnSearchPage": 1,
  "foundFromSearchUrl": "https://www.zillow.com/dallas-tx/?category=SEMANTIC&searchQueryState=...",
  "zoomQuadrantSequence": []
}

Some fields may be null or empty depending on availability in the source (e.g., palsId, certain media fields).
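Because each item carries both `unformattedPrice` and `area`, simple derived metrics are easy to compute downstream. A minimal sketch, assuming items shaped like the example above:

```python
def price_per_sqft(item):
    """Return price per square foot, or None when either value is missing."""
    price = item.get("unformattedPrice")
    area = item.get("area")
    if not price or not area:
        return None
    return round(price / area, 2)

listing = {"unformattedPrice": 649000, "area": 2622}
ppsf = price_per_sqft(listing)  # dollars per square foot
```

Guarding against missing values matters here, since fields like `zestimate` or `area` can be null depending on the source listing.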

FAQ

Do I need proxies to run this Zillow scraper?

No by default. The actor starts with no proxy. If Zillow blocks requests, it automatically falls back to datacenter and then residential proxies with retries. You can also explicitly set proxyConfiguration to use Apify Proxy from the start.

Which extraction method should I pick?

For large markets, use PAGINATION_WITH_DYNAMIC_ZOOM_INCREASE to go beyond normal caps by splitting the map into quadrants. Use PAGINATION_WITHOUT_ZOOMING_IN for HTML-based pagination when API-style calls are blocked. MAP_MARKERS is best for quick snapshots, limited to roughly 500 results per search URL.

Can I scrape multiple cities or ZIP codes at once?

Yes. Add multiple entries to searchUrls—either full Zillow search URLs (recommended) or keywords like “dallas-tx”. The actor processes them in bulk as an automated Zillow search scraper.

How many results can I extract per run?

You control this with maxItems (1–10,000). In MAP_MARKERS mode, results are limited to roughly 500 per search URL by design. Dynamic zoom helps collect more complete sets on large areas.

What data fields are included in the output?

Core fields include zpid, id, price, unformattedPrice, address components, beds, baths, area, detailUrl, statusType/statusText, imgSrc, latLong, zestimate, plus metadata like hdpData and carouselPhotosComposable. See the “What data / output can you get?” section for examples.

Can I use this as a Zillow rental scraper or for-sale scraper?

Yes. The scraper captures statusType and statusText from search results, so it works for both rentals and for-sale listings depending on your search URLs and filters.

How do I export results—can I scrape Zillow listings to CSV?

After the run, open the dataset and export your results to JSON or CSV. This supports seamless Zillow search export into analytics or CRMs.
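Beyond the console export buttons, a dataset's JSON items can also be flattened to CSV locally. A sketch using Python's standard library, where the field names assume the output schema shown above:

```python
import csv
import io

# Columns to keep; any extra keys on an item are ignored.
FIELDS = ["zpid", "address", "price", "unformattedPrice",
          "beds", "baths", "area", "detailUrl"]

def items_to_csv(items):
    """Write selected listing fields from dataset items to a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

sample = [{"zpid": "55826232",
           "address": "3810 Hawthorne Ave, Dallas, TX 75219",
           "price": "$649,000", "unformattedPrice": 649000,
           "beds": 3, "baths": 2.5, "area": 2622,
           "detailUrl": "https://www.zillow.com/homedetails/..."}]
csv_text = items_to_csv(sample)
```

`DictWriter` quotes values containing commas (like full addresses) automatically, so the output loads cleanly into spreadsheets and CRMs.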

Is there a trial or pricing?

A flat monthly price of $19.99 is available with 120 trial minutes to test the actor before committing to larger runs.

Does this support developers and Python integration?

Yes. The actor is implemented in Python and runs on the Apify platform. You can automate runs and consume dataset exports programmatically to build a Zillow scraper Python workflow or pipeline.

Closing thoughts

Zillow Search Scraper is built to turn Zillow search results into structured, analysis-ready real estate data at scale. With dynamic zoom API pagination, robust HTML fallback, live-saving, and proxy resilience, it’s ideal for investors, agents, analysts, and researchers. Developers can integrate it into automated pipelines, export Zillow data to CSV/JSON, and operate it as a dependable Zillow API alternative. Start extracting smarter Zillow property insights with a reliable, automated Zillow search results scraper today.