Investorlift Scraper

Under maintenance

Extract comprehensive data from InvestorLift marketplace properties including property details, pricing, location data, and account information. Intelligently fetches the complete list of available properties first, then processes them in parallel batches for maximum efficiency.

Pricing: $10.00 / actor start
Rating: 0.0 (0)
Developer: Corentin Robert
Maintained by Community

Actor stats: 0 bookmarked, 2 total users, 1 monthly active user, last modified 15 days ago


InvestorLift Scraper - Extract Property Data

Extract comprehensive data from InvestorLift marketplace properties across the United States. This Actor scrapes the InvestorLift API to provide you with a complete database of real estate investment properties, including property details, pricing, location data, and account information.

What does InvestorLift Scraper do?

The InvestorLift Scraper extracts comprehensive data from the InvestorLift marketplace using the official API. The Actor intelligently fetches the complete list of available properties first, then processes them in parallel batches. It extracts complete property information including location, pricing, property characteristics, and account details, then exports structured data ready for analysis, CRM integration, or business intelligence.

The Actor uses a two-phase approach: it first retrieves all available property IDs and partial data from the main endpoint, then enriches each property with detailed information from the detailed endpoint. Data is aggregated progressively into a local CSV file for real-time monitoring.
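
To make the flow concrete, here is a minimal TypeScript sketch of the same two-phase pattern. The endpoint URLs and response shapes are illustrative placeholders, not the Actor's actual implementation:

// Minimal sketch of the two-phase pattern described above.
// LIST_URL and DETAIL_URL are hypothetical placeholders, not the Actor's real endpoints.
const LIST_URL = 'https://investorlift.com/api/marketplace';
const DETAIL_URL = 'https://investorlift.com/api/properties';

interface PartialProperty {
  id: string;
  [key: string]: unknown; // partial fields such as pricing and metrics
}

// Phase 1: fetch the complete list of available properties (IDs plus partial data).
async function fetchPropertyList(): Promise<PartialProperty[]> {
  const res = await fetch(LIST_URL);
  if (!res.ok) throw new Error(`List request failed: ${res.status}`);
  return (await res.json()) as PartialProperty[];
}

// Phase 2: enrich a single property from the detailed endpoint.
async function fetchPropertyDetail(id: string): Promise<Record<string, unknown>> {
  const res = await fetch(`${DETAIL_URL}/${id}`);
  if (!res.ok) throw new Error(`Detail request failed for ${id}: ${res.status}`);
  return (await res.json()) as Record<string, unknown>;
}

Phase 2 is what the Actor runs in parallel batches, as described in the features below.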

What can this InvestorLift Scraper do?

🚀 Key Features

  • Smart Two-Phase Approach: First fetches the complete list of available properties, then processes only existing properties
  • Efficient Data Extraction: Uses partial data from the main endpoint and enriches with detailed information
  • Complete Property Data: Extracts all property details including location, pricing, characteristics, and images
  • Account Information: Extracts wholesaler account details and dispositions manager information
  • Parallel Processing: Configurable batch size for parallel processing (default: 100 properties per batch); a minimal sketch of this batching pattern follows this list
  • Progressive Aggregation: Data is aggregated progressively into a local CSV file
  • High Success Rate: Accurate extraction using official JSON API
  • Clean CSV Output: Automatically removes line breaks and extra whitespace from text fields
  • Structured Output: Clean, normalized data ready for immediate use
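
The parallel-processing and progressive-aggregation points above can be pictured as a simple batching loop. The batch size and delay below are the defaults quoted elsewhere in this README; enrich and save stand in for the detail fetch and the CSV/dataset write, so this is a sketch rather than the Actor's actual code:

// Process items in parallel batches, pausing between batches.
// Promise.allSettled keeps one failed property from aborting the whole batch.
async function processInBatches<T, R>(
  items: T[],
  enrich: (item: T) => Promise<R>,
  save: (results: R[]) => Promise<void>,
  batchSize = 100, // default batch size quoted in this README
  delayMs = 500,   // inter-batch delay quoted in this README
): Promise<void> {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const settled = await Promise.allSettled(batch.map(enrich));
    const ok = settled
      .filter((s): s is PromiseFulfilledResult<R> => s.status === 'fulfilled')
      .map((s) => s.value);

    // Progressive aggregation: persist each batch as soon as it finishes.
    await save(ok);

    if (i + batchSize < items.length) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}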

🎯 Platform Advantages

This Actor and the Apify platform come as a package. The scraper benefits from:

  • Monitoring & Logs: Real-time execution monitoring with detailed logs
  • API Access: Access your data programmatically via Apify API
  • Scheduling: Set up automated runs on a schedule
  • Integrations: Connect to Make.com, Zapier, Google Sheets, and more
  • Proxy Rotation: Automatic proxy management for reliable scraping
  • Scalability: Handle large-scale scraping with cloud infrastructure
  • Data Storage: Secure dataset storage with multiple export formats (JSON, CSV, Excel, HTML)
  • CSV Export: Automatic CSV generation for easy data analysis

What data can InvestorLift Scraper extract?

The Actor extracts comprehensive data from InvestorLift properties. Here's what you can extract (a typed record sketch follows the detailed field list below):

| Data Category | Fields Extracted | Description |
| --- | --- | --- |
| Identification | id, account_id, title | Unique identifiers and property title |
| Location | city, county, zip, state_code, public_address, latitude, longitude | Complete address and GPS coordinates |
| Property Details | bedrooms, bathrooms, half_bathrooms, year_built, sq_footage, lot_size, lot_size_unit | Property characteristics |
| Pricing | price, buy_now_price, arv_estimate, min_emd, repair_estimate_min, repair_estimate_max, zestimate | All pricing information |
| Property Type | property_type, parking_type, condition, occupancy | Property classification |
| System Details | roof_age, heating_system_type, heating_system_age, air_conditioning_type, foundation_type, foundation_condition | Property system information |
| Account Info | account_title, account_slug, account_type, dispositions_manager_name | Wholesaler and manager information |
| Media | main_image_url, images_count | Image information |
| Metadata | status, published_at, expires_at, views, property_page_url | Property status and metadata |
| Additional Metrics | arv_percentage, gross_margin, hotness, tags, is_verified, score, entry_fee | Additional property metrics and scoring |

Detailed Field Description

  • id: Unique identifier of the property
  • account_id: ID of the wholesaler account
  • title: Full property address/title
  • city, county, zip, state_code: Location information
  • public_address: Public-facing address
  • latitude, longitude: GPS coordinates
  • bedrooms, bathrooms, half_bathrooms: Room counts
  • year_built: Year the property was built
  • sq_footage: Square footage of the property
  • lot_size, lot_size_unit: Lot size and unit (acres, sqft, etc.)
  • price: Current asking price
  • buy_now_price: Buy now price
  • arv_estimate: After Repair Value estimate
  • min_emd: Minimum earnest money deposit
  • repair_estimate_min/max: Repair cost estimates
  • zestimate: Zillow estimate (if available)
  • property_type: Type of property (Single-Family, Multi-Family, etc.)
  • parking_type: Parking type (Assigned, Unassigned, etc.)
  • condition: Property condition (Turn Key, Needs Work, etc.)
  • occupancy: Occupancy status (Vacant, Occupied, etc.)
  • property_page_url: URL to the property page on InvestorLift
  • account_title: Name of the wholesaler account
  • account_slug: Slug of the wholesaler account
  • account_type: Type of account (wholesaler, etc.)
  • dispositions_manager_name: Name of the dispositions manager
  • main_image_url: URL of the main property image
  • images_count: Number of images available
  • status: Property status (available, sold, etc.)
  • published_at, expires_at: Publication dates
  • views: Number of views
  • arv_percentage: ARV percentage (from main endpoint)
  • gross_margin: Gross margin calculation
  • hotness: Property hotness score
  • tags: Property tags (comma-separated)
  • is_verified: Whether the property is verified
  • score: Property score
  • entry_fee: Entry fee amount
  • show_entry_fee_instead_price: Whether to show entry fee instead of price
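
Put together, a single exported record can be modelled with a type along these lines. This is an illustrative subset of the fields listed above, not a schema published by the Actor; as noted under Data Quality, all values are exported as strings and empty fields come through as empty strings:

// Illustrative shape of one scraped property record (subset of the fields above).
interface InvestorLiftProperty {
  id: string;
  account_id: string;
  title: string;
  city: string;
  county: string;
  zip: string;
  state_code: string;
  latitude: string;
  longitude: string;
  bedrooms: string;
  bathrooms: string;
  year_built: string;
  sq_footage: string;
  price: string;
  buy_now_price: string;
  arv_estimate: string;
  property_type: string;
  condition: string;
  occupancy: string;
  status: string;
  property_page_url: string;
  account_title: string;
  dispositions_manager_name: string;
  main_image_url: string;
  images_count: string;
  published_at: string;
  expires_at: string;
  views: string;
}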

How to scrape InvestorLift properties?

Step-by-Step Tutorial

  1. Configure Batch Size (optional): Set batchSize in input.json to control parallel processing (default: 100)
  2. Run the Actor: Click "Start" to begin scraping all available properties
  3. Monitor Progress: Watch real-time logs showing properties extracted with batch progress
  4. Download Results: Once complete, download your data in JSON, CSV, Excel, or HTML format from the Dataset tab (or programmatically, as sketched below)
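
If you prefer to run the Actor from code rather than the Console, a minimal sketch with the apify-client package could look like this. The token and Actor ID are placeholders; pass input keys as described under Input Configuration below, or an empty object to use the defaults:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' }); // placeholder token

async function main() {
  // Start the Actor and wait for the run to finish.
  // Replace the placeholder Actor ID with the one shown on this page.
  const run = await client.actor('username/investorlift-scraper').call({});

  // Download the scraped properties from the run's default dataset.
  const { items } = await client.dataset(run.defaultDatasetId).listItems();
  console.log(`Scraped ${items.length} properties`);
}

main().catch(console.error);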

The Actor automatically:

  • Fetches the complete list of available properties from the main endpoint
  • Extracts partial data (IDs, location, pricing, metrics) from the main endpoint
  • Processes properties in parallel batches (configurable batch size)
  • Enriches each property with detailed information from the detailed endpoint
  • Merges partial and detailed data for complete property records
  • Normalizes and validates all data according to the schema
  • Cleans text fields (removes line breaks, extra whitespace), as sketched after this list
  • Aggregates data progressively into a local CSV file
  • Saves results to the Apify dataset and generates a CSV file
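
The merge and clean-up steps from this list can be pictured roughly like this (an illustration, not the Actor's actual code):

// Collapse line breaks and repeated whitespace so CSV rows stay on one line.
const cleanText = (value: string): string => value.replace(/\s+/g, ' ').trim();

// Merge partial data from the main endpoint with the detailed record
// (detailed fields win), then normalize every value to a clean string.
function mergeProperty(
  partial: Record<string, unknown>,
  detailed: Record<string, unknown>,
): Record<string, string> {
  const merged: Record<string, unknown> = { ...partial, ...detailed };
  const result: Record<string, string> = {};
  for (const [key, value] of Object.entries(merged)) {
    result[key] = value == null ? '' : cleanText(String(value));
  }
  return result;
}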

Performance Tips

  • Two-Phase Approach: First phase fetches all available properties in one request, second phase processes them in parallel
  • Configurable Parallelism: Adjust batchSize to balance speed and API load (recommended: 10-1000)
  • Progressive Aggregation: Data is saved to CSV as it's extracted, so you can monitor progress in real-time
  • Efficient Data Usage: Uses partial data from main endpoint to avoid unnecessary detailed requests
  • Monitor Resource Usage: Check compute unit consumption in the Actor run details

How much will it cost to scrape InvestorLift properties?

The InvestorLift Scraper uses consumption-based pricing (Compute Units). The cost depends on:

  • Number of properties: One request to fetch all property IDs, then one detailed request per property
  • API response time: Faster responses use fewer compute units
  • Batch size: Larger batches process faster but may use more memory

Estimated costs:

  • Free plan: Can scrape hundreds of properties
  • Starter plan: Ideal for regular monitoring and data collection
  • Professional plan: Perfect for automated workflows and integrations

The Actor is optimized for efficiency with a two-phase approach: it first fetches all available properties in one request, then processes them in parallel batches with a 500ms delay between batches to minimize API load while maintaining reasonable speed.
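
As a rough back-of-envelope estimate based on the defaults quoted in this README (not measured figures): 1,000 properties at 100 per batch means 10 batches, and the inter-batch delay adds only 10 × 0.5 s = 5 s, so nearly all of the 2-3 minute figure in the FAQ below comes from API response time and data processing rather than from waiting between batches.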

Input Configuration

InvestorLift Scraper has the following input options:

{
  "workerCount": 20,
  "requestDelay": 50,
  "startFromId": 143697,
  "blacklist": [123456, 789012],
  "urls": [
    "https://investorlift.com/marketplace/p/123456"
  ]
}
  • workerCount (optional): Number of parallel workers to process properties. Defaults to 20 if not specified. Recommended range: 10-50. Higher values process faster but may increase API load.
  • requestDelay (optional): Delay in milliseconds between each request. Defaults to 50ms if not specified. Helps avoid rate limiting.
  • startFromId (optional): Property ID to resume from. All properties with ID less than this value will be skipped. Useful when resuming after a timeout. Example: if you stopped at ID 143697, set "startFromId": 143697 to resume from that point.
  • blacklist (optional): Array of property IDs to exclude from processing. You can also provide a comma-separated string. These IDs will be skipped even if they haven't been processed yet. Useful for excluding specific properties you don't want to scrape (see the example after this list).
  • urls (optional): Array of specific property URLs to scrape. If provided, only these properties will be scraped instead of all available properties.
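
For example, a resume-after-timeout input that combines these options might look like the following (the IDs are placeholders; the blacklist here uses the comma-separated string form mentioned above):

{
  "workerCount": 20,
  "requestDelay": 50,
  "startFromId": 143697,
  "blacklist": "123456,789012"
}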

Automatic Duplicate Prevention

The Actor automatically prevents processing duplicate properties by:

  • Reading local CSV file: Checks for already processed IDs in the local output.csv file
  • Reading Apify Dataset: On Apify platform, checks the dataset for already processed IDs
  • Using blacklist: Excludes any IDs specified in the blacklist parameter
  • Checkpoint system: Saves progress periodically to allow resuming after timeouts

You don't need to manage processed IDs manually; the Actor handles this automatically!
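
For local runs, that check boils down to collecting the id column from output.csv before scraping starts. A rough sketch (an illustration, not the Actor's actual code; it assumes the semicolon-delimited layout described under CSV Export below):

import { existsSync, readFileSync } from 'node:fs';

// Collect IDs already present in the local semicolon-delimited CSV so that
// already-processed properties can be skipped on the next run.
// Naive parsing: assumes field values contain no literal semicolons.
function loadProcessedIds(csvPath = 'output.csv'): Set<string> {
  const processed = new Set<string>();
  if (!existsSync(csvPath)) return processed;

  const lines = readFileSync(csvPath, 'utf8')
    .split('\n')
    .filter((line) => line.trim() !== '');
  if (lines.length < 2) return processed; // header only or empty file

  // Locate the "id" column from the header row instead of assuming its position.
  const idIndex = lines[0].split(';').indexOf('id');
  if (idIndex === -1) return processed;

  for (const line of lines.slice(1)) {
    const id = line.split(';')[idIndex];
    if (id) processed.add(id.trim());
  }
  return processed;
}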

Output Format

You can download the dataset extracted by InvestorLift Scraper in various formats such as JSON, HTML, CSV, or Excel.

Output Example

{
  "id": "293110",
  "account_id": "357114",
  "title": "3916 S Youngs Pl, Oklahoma City, OK 73119",
  "city": "Oklahoma City",
  "county": "Oklahoma County",
  "zip": "73119",
  "state_code": "OK",
  "public_address": "Oklahoma County, Oklahoma City, OK 73119",
  "bedrooms": "3",
  "bathrooms": "1",
  "half_bathrooms": "",
  "year_built": "2014",
  "sq_footage": "3145",
  "lot_size": "0.337",
  "lot_size_unit": "acres",
  "latitude": "35.4262408",
  "longitude": "-97.5532446",
  "status": "available",
  "price": "160000",
  "buy_now_price": "160000",
  "arv_estimate": "160000",
  "description": "",
  "property_type": "Single-Family",
  "parking_type": "Unassigned",
  "condition": "Turn Key",
  "occupancy": "Vacant",
  "property_page_url": "https://investorlift.com/marketplace/p/293110",
  "account_title": "Amin Home Offers",
  "account_slug": "amin-home-offers",
  "account_type": "wholesaler",
  "dispositions_manager_name": "Rafael Ali",
  "main_image_url": "https://s3.us-east-2.amazonaws.com/sendlift/property-images/7894857.jpg",
  "images_count": "1",
  "published_at": "2025-12-24 02:33:54",
  "expires_at": "2025-12-31",
  "views": "1"
}

CSV Export

The Actor automatically generates a CSV file (OUTPUT.csv) in the Key-Value Store with all extracted data, using semicolons (;) as delimiters for easy import into Excel or other tools.

When running locally, the CSV file is saved as output.csv in the scraper directory and is updated progressively as properties are scraped.
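
On the platform, the same file can be pulled programmatically from the run's default Key-Value Store. A sketch with apify-client, reusing the run object returned by the call in the earlier example:

import { writeFileSync } from 'node:fs';
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_APIFY_TOKEN' }); // placeholder token

// `run` is the object returned by client.actor(...).call(...);
// its default key-value store holds the OUTPUT.csv record.
async function downloadCsv(run: { defaultKeyValueStoreId: string }): Promise<void> {
  const record = await client.keyValueStore(run.defaultKeyValueStoreId).getRecord('OUTPUT.csv');
  if (record) {
    writeFileSync('OUTPUT.csv', String(record.value));
  }
}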

Advanced Options

Performance Optimization

  • Two-Phase Approach: First fetches all available properties in one request, then processes in parallel batches
  • Configurable Parallelism: Adjust batch size to balance speed and API load
  • Progressive Aggregation: Data is saved to CSV as it's extracted for real-time monitoring
  • Error Handling: Automatic error handling with Promise.allSettled ensures reliable data extraction
  • Efficient Data Usage: Uses partial data from main endpoint to minimize detailed API calls

Data Quality

  • Complete Coverage: Extracts all available properties automatically (no need to specify ID ranges)
  • Accurate Extraction: Uses official JSON API for maximum accuracy
  • Normalized Format: All data is consistently formatted (all values as strings, null values handled)
  • Complete Property Data: Extracts all available fields including nested objects
  • Clean Text Fields: Automatically removes line breaks and extra whitespace from text fields for clean CSV output
  • Data Merging: Intelligently merges partial data from main endpoint with detailed data from property endpoint

Our scrapers are ethical and extract only publicly available property listing data from the InvestorLift marketplace API. We believe that our scrapers, when used for ethical purposes by Apify users, are safe.

However, you should be aware that your results could contain property and account information. You should not scrape data unless you have a legitimate reason to do so. If you're unsure whether your reason is legitimate, consult your lawyers.

You can also read our blog post on the legality of web scraping.

FAQ

How many properties can I scrape?

The Actor automatically fetches and processes all available properties from the InvestorLift marketplace. The exact number depends on what's currently available.

How fast is the scraping?

The Actor uses a two-phase approach: it first fetches all property IDs in one request, then processes them in parallel batches. With the default batch size of 100, processing 1,000 properties takes approximately 2-3 minutes.

Can I adjust the processing speed?

Yes! Set the batchSize parameter in the input configuration. Higher values (e.g., 500-1000) process faster but may increase API load. Lower values (e.g., 10-50) are more conservative.

What if the scraping fails?

The Actor includes automatic error handling. If an API call fails, it will continue to the next property. The Actor stops after 10 consecutive errors (typically indicating the end of available properties).

Can I schedule regular runs?

Yes! Use Apify's scheduling feature to automatically run the Actor on a schedule (daily, weekly, etc.) to keep your data up-to-date.

How do I access the data via API?

Once the Actor completes, you can access the dataset via the Apify API. Check the API tab on the Actor detail page for code examples in JavaScript and Python.

Can I integrate this with other tools?

Yes! Apify supports integrations with Make.com, Zapier, Google Sheets, and many other platforms. Check the Integrations section in your Apify account.

I need a custom solution

If you need additional features or custom modifications to this Actor, feel free to reach out. We're open to creating custom solutions based on the current one.

Support

For issues, questions, or feedback:

  • Issues Tab: Report bugs or request features in the Actor's Issues tab
  • Actor Support: Contact support through the Apify platform
  • Documentation: Check the Apify Academy for tutorials and guides

We're always open to feedback and suggestions to improve the Actor!


Ready to extract InvestorLift property data? Start the Actor and get comprehensive real estate investment property information!