InvestorLift Scraper · Export Deals & Wholesaler Data

Export properties, wholesalers, pricing, locations, and deal data from InvestorLift. Get a clean dataset ready for CRM, outreach, comps, or deal sourcing. One run, one dataset, thousands of deals.

Pricing: from $2.99 / 1,000 results
Rating: 5.0 (1)
Developer: Corentin Robert
Actor stats
- Bookmarked: 2
- Total users: 14
- Monthly active users: 7
- Issues response: 1.8 days
- Last modified: 13 days ago
InvestorLift Marketplace Scraper
Turn InvestorLift — a US wholesale / off-market deal marketplace — into structured rows you can use: CRM, cold outreach, market maps, comps, or internal deal flow. No manual copy-paste from the marketplace UI.
Why use this Actor?
| You want to… | This helps you… |
|---|---|
| Fill a pipeline | Export hundreds or thousands of deals with location, price, and seller signals in one run. |
| Contact wholesalers | Optional enrichment: company, contact name, seller rating & review count, description, condition — so your team can personalize outreach. |
| See what already sold or went pending | Historical mode pulls sold and pending inventory for research and comp building. |
| Focus on fresh listings | Active mode + optional listed from / until dates to match your buy box timing. |
| Own a list of IDs | Specific mode: paste deal URLs or numeric IDs and enrich only those. |
| Survive long runs | Large historical crawls can resume after timeout or failure — already-saved deals are not duplicated. |
Technical note: Scraping uses HTTP + parsing (no headless browser farm), so runs stay predictable on Apify.
Modes
| Mode | Best for | Typical runtime |
|---|---|---|
| Active | Current listings | Minutes |
| Historical | Sold + pending (bulk history) | Hours — raise run timeout if needed |
| Specific | Your own URL/ID list | Depends on list size |
| Active + date filter | “Listed in this window” only | Minutes |
Use Listed from / until (YYYY-MM-DD) only with Active — they narrow listings by when they appeared on the marketplace.
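A minimal input for the "Active + date filter" mode, using the parameter names from the API input table below; the dates are placeholder values, so substitute your own window:

```json
{
  "scrapeMode": "active_only",
  "dateRangeStart": "2024-01-01",
  "dateRangeEnd": "2024-03-31",
  "enrichWithDetails": true
}
```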
Recommended input (first run)
- Choose Mode (usually Active to start).
- For full seller + description + ratings (best for outreach), set "enrichWithDetails": true in the API input or your INPUT.json. The Console hides this field and the schema default is false, so set it explicitly when you need enrichment.
- Start the run, then download the Dataset.
Output (what lands in your dataset)
| Use case | Fields (high level) |
|---|---|
| Routing & geography | city, state_code, county, zip, latitude, longitude |
| Deal economics | price, bedrooms, bathrooms, sq_footage, lot_size, ARV / margin fields when present |
| Seller & trust | wholesaler_company, wholesaler_name, account_title, wholesaler_rating, wholesaler_review_count, account_id |
| Timing & traction | days_on_il, page_views, published_at, expires_at |
| Narrative | description, condition, property_type, img_url, property_page_url |
| Lineage | source: available, historical, or bulk |
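Once the dataset is downloaded (e.g. as JSON), the fields above make it easy to cut the export down to a buy box. A minimal sketch — the sample rows here are made up, not real InvestorLift data:

```javascript
// Filter exported deals to a buy box.
// Assumes items shaped like the dataset fields above; sample data is invented.
const deals = [
  { city: "Atlanta", state_code: "GA", price: 185000, bedrooms: 3, source: "available" },
  { city: "Macon", state_code: "GA", price: 95000, bedrooms: 2, source: "historical" },
  { city: "Tampa", state_code: "FL", price: 240000, bedrooms: 4, source: "available" },
];

// Keep active listings in Georgia under $200k with at least 3 bedrooms.
const buyBox = deals.filter(
  (d) =>
    d.source === "available" &&
    d.state_code === "GA" &&
    d.price < 200000 &&
    d.bedrooms >= 3
);

console.log(buyBox.map((d) => d.city)); // [ 'Atlanta' ]
```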
Troubleshooting
| Issue | What to do |
|---|---|
| Timeout on Historical | Resurrect the run or relaunch; raise Timeout in run options (e.g. 8 h). |
| 403 / blocked | Site may rate-limit or block datacenter IPs — retry later or reduce concurrency via detailConcurrency. |
| Thin or empty rows | For Active, check date format YYYY-MM-DD; retry if the API returned nothing transiently. |
| No description / wholesaler columns | Set "enrichWithDetails": true in the input (see above). |
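For the "thin or empty rows" case, the date filters must be strict YYYY-MM-DD strings. A quick pre-flight check you could run before launching (this helper is illustrative, not part of the Actor):

```javascript
// Sanity-check a date filter string before passing it as
// dateRangeStart / dateRangeEnd: must be YYYY-MM-DD and a real date.
const isValidDateFilter = (s) =>
  /^\d{4}-\d{2}-\d{2}$/.test(s) && !Number.isNaN(Date.parse(s));

console.log(isValidDateFilter("2024-01-31")); // true
console.log(isValidDateFilter("01/31/2024")); // false (wrong format)
```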
Resume / checkpoint
On failure, the Actor stores state so a Resurrect or rerun with the same input skips deals already present in the dataset (and associated ID tracking). For long Historical jobs: resurrect → optionally increase timeout → continue without duplicating rows.
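The skip logic amounts to an ID set carried across runs. A simplified sketch of the idea — the Actor's real persisted state format is not documented here, so the `Set` below is an assumption:

```javascript
// Resume sketch: skip deal IDs already pushed in a previous run.
// seenIds stands in for the Actor's persisted ID tracking (assumption).
const seenIds = new Set(["il-101", "il-102"]);
const incoming = ["il-101", "il-103", "il-104", "il-102"];

// After a resurrect, only IDs absent from the checkpoint are processed again.
const toProcess = incoming.filter((id) => !seenIds.has(id));
toProcess.forEach((id) => seenIds.add(id));

console.log(toProcess); // [ 'il-103', 'il-104' ]
```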
API input (advanced)
All fields can be set via the Apify API. Console-visible fields are listed in the Actor input form; others are optional.
| Parameter | Notes |
|---|---|
| scrapeMode | active_only · historical · specific |
| dealIds | For specific — URLs or numeric IDs |
| dateRangeStart / dateRangeEnd | Active only — filter by listed date |
| enrichWithDetails | true recommended for outreach-quality rows; false for faster, lighter runs |
| maxItems | Optional cap on how many deals to push (API / hidden) |
| excludeIds | Skip known IDs (API) |
| detailConcurrency | Parallel detail fetches (API; default 64, max 128) |
Example:

```json
{
  "scrapeMode": "historical",
  "enrichWithDetails": true
}
```
Local development
```shell
$ npm install
```
Create storage/key_value_stores/default/INPUT.json:
{"scrapeMode": "active_only","enrichWithDetails": true}
Then:
```shell
$ apify run
```
Unit tests (no network): npm test · Integration (live site): npm run test:integration