Airbnb Pro Host Business Email Scraper

Under maintenance · Pricing: Pay per event

Developed by Corentin Robert · Maintained by Community
🚀 High-performance Airbnb scraper for B2B lead generation. Extracts professional host business information including company names, email addresses, phone numbers, and registration details. Perfect for real estate agencies, property managers, and business development teams.

Airbnb Professional Host Email Scraper

A specialized Apify scraper for extracting professional host business information from Airbnb.

🎯 Features

  • Complete extraction: Collects all properties in a city
  • Smart filtering: Automatically identifies professional hosts
  • Detailed information: Extracts company name, address, email, phone, registration number
  • Automatic navigation: Visits PROFESSIONAL_HOST_DETAILS modals
  • Robustness: Error handling and multiple fallbacks

📊 Extracted Data

The scraper extracts detailed professional information:

{
  "url": "https://www.airbnb.fr/rooms/41639499",
  "hostType": "Professional Host",
  "companyName": "Merle Benoit",
  "address": "11 Quai André Lassagne, 69001, Lyon, France",
  "email": "canopeelyon@gmail.com",
  "phone": "+33 6 72 52 38 02",
  "registrationNumber": "850593237",
  "extractedAt": "2025-10-26T05:15:00.000Z"
}

Detailed Fields

  • url: Airbnb property URL
  • hostType: Host type (e.g., "Professional Host")
  • companyName: Company name or business name
  • address: Business address
  • email: Professional email address
  • phone: Professional phone number
  • registrationNumber: Registration number (SIRET)
  • extractedAt: Extraction timestamp

🚀 Usage

Input Format

The scraper accepts two input formats:

1. City name only

{
  "input": "Verneuil-sur-Seine"
}

2. Full Airbnb search URL (with /homes at the end)

{
  "input": "https://www.airbnb.fr/s/verneuil-sur-seine/homes"
}

The scraper automatically detects the format and constructs the appropriate search URL.
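
For reference, a minimal sketch of this detection logic; the buildSearchUrl helper and its slug handling are illustrative assumptions, not the actor's actual code:

// Illustrative helper: turn the actor input into an Airbnb search URL.
// The actor's internal implementation may differ.
function buildSearchUrl(input) {
  const value = input.trim();
  // Full search URLs (already ending in /homes) are used as-is.
  if (value.startsWith('http://') || value.startsWith('https://')) {
    return value;
  }
  // City names are slugified into the /s/<city>/homes search path.
  const slug = encodeURIComponent(value.toLowerCase().replace(/\s+/g, '-'));
  return `https://www.airbnb.fr/s/${slug}/homes`;
}

console.log(buildSearchUrl('Verneuil-sur-Seine'));
// -> https://www.airbnb.fr/s/verneuil-sur-seine/homes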

Examples

Simple city name

{
  "input": "Lyon"
}

Full URL with /homes

{
  "input": "https://www.airbnb.fr/s/lyon/homes"
}

City with spaces and special characters

{
  "input": "Verneuil-sur-Seine"
}

Full URL for complex cities

{
  "input": "https://www.airbnb.fr/s/verneuil-sur-seine/homes"
}
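
To start a run programmatically, a minimal sketch using the apify-client package; the actor identifier below is a placeholder, use the one shown in your Apify console:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// '<username>/<actor-name>' is a placeholder for this actor's identifier.
const run = await client.actor('<username>/<actor-name>').call({ input: 'Lyon' });

// The extracted professional host records land in the run's default dataset.
const { items } = await client.dataset(run.defaultDatasetId).listItems();
console.log(`Extracted ${items.length} professional hosts`);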

⚡ Performance

  • 4 parallel workers: Balanced processing for optimal speed and stability
  • waitForSelector: Waits for the modal to appear instead of relying on a fixed setTimeout, which is more reliable
  • Automatic navigation: Browses through all available pages (max 15 pages)
  • Smart detection: Automatically stops when no more pages are available
  • Clean URLs: Removes search parameters from property URLs
  • Deduplication: Avoids duplicate URLs (see the sketch after this list)
  • Progressive saving: Each extraction is saved immediately
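
A minimal sketch of the URL cleaning and deduplication steps; the helpers are illustrative, not the actor's exact code:

// Strip search parameters from room URLs and deduplicate them.
// Illustrative helpers; the actor's internal implementation may differ.
function cleanRoomUrl(href) {
  const url = new URL(href, 'https://www.airbnb.fr');
  return `${url.origin}${url.pathname}`; // drops ?check_in=..., &adults=..., etc.
}

function dedupeUrls(hrefs) {
  return [...new Set(hrefs.map(cleanRoomUrl))];
}

const urls = dedupeUrls([
  'https://www.airbnb.fr/rooms/41639499?check_in=2025-11-01',
  'https://www.airbnb.fr/rooms/41639499?adults=2',
]);
console.log(urls); // [ 'https://www.airbnb.fr/rooms/41639499' ]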

🛠️ Technical Architecture

Technologies Used

  • Apify SDK: Automation framework
  • PuppeteerCrawler: Navigation and extraction
  • Puppeteer: Browser control
  • Node.js: JavaScript runtime

Extraction Process

Phase 1 - Property Collection:

  1. Navigation: Access to the Airbnb search page
  2. Cookie management: Automatic cookie acceptance
  3. Property extraction: Retrieval of lodging data (URL, title, price, host type)
  4. Automatic navigation: Moving to the next pages (max 15 pages)
  5. Smart filtering: Professional host identification

Phase 2 - Professional Host Analysis:

  6. Validation: Verification of identified professional hosts
  7. URL construction: Creation of URLs with modal=PROFESSIONAL_HOST_DETAILS

Phase 3 - Business Information Extraction (see the sketch after this list):

  8. Parallel processing: 4 simultaneous workers for optimal performance
  9. waitForSelector: Intelligent modal loading wait
  10. Detailed extraction: Company name, address, email, phone, SIRET
  11. Progressive saving: Each extraction saved immediately
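
A simplified sketch of how Phase 3 could be wired together with the Apify SDK and PuppeteerCrawler; the selectors and parsed fields are illustrative assumptions, not the actor's exact code:

import { Actor } from 'apify';
import { PuppeteerCrawler } from 'crawlee';

await Actor.init();

const crawler = new PuppeteerCrawler({
  maxConcurrency: 4, // the 4 parallel workers described above
  requestHandler: async ({ page, request }) => {
    // Wait for the professional host modal instead of a fixed timeout.
    // The selectors below are illustrative; the real modal markup may differ.
    await page.waitForSelector('[data-testid="modal-container"]', { timeout: 30000 });

    const details = await page.evaluate(() => ({
      companyName: document.querySelector('[data-testid="modal-container"] h2')?.textContent?.trim() ?? null,
      // address, email, phone and registrationNumber would be parsed the same way
    }));

    // Progressive saving: push each record to the dataset as soon as it is extracted.
    await Actor.pushData({
      url: request.url.split('?')[0],
      hostType: 'Professional Host',
      ...details,
      extractedAt: new Date().toISOString(),
    });
  },
});

// Phase 2 output: room URLs reopened with the professional host details modal.
await crawler.run(['https://www.airbnb.fr/rooms/41639499?modal=PROFESSIONAL_HOST_DETAILS']);

await Actor.exit();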

📈 Typical Results

  • 200-250 properties analyzed per city
  • 15-25 professional hosts identified on average
  • 95%+ success rate in business information extraction
  • Execution time: ~2-3 minutes per city (optimized with 4 workers)
  • Complete data: Company name, address, email, phone, SIRET
  • Progressive saving: Each extraction saved immediately

Optimal Settings for Apify:

MEMORY: 8 GB (recommended)
TIMEOUT: 7200s (2 hours)
RESTART ON ERROR: ON

Why 8GB Memory?

  • Puppeteer + Chrome: ~4-5GB base consumption
  • Airbnb pages: ~1-2GB for complex layouts
  • Navigation buffer: ~1GB for smooth transitions
  • Total recommended: 8GB for optimal performance

Performance Comparison:

  Memory | Speed        | Stability      | Recommendation
  4GB    | ❌ Slow      | ❌ Crashes     | ❌ Not recommended
  6GB    | ⚠️ Medium    | ⚠️ Unstable    | ⚠️ Minimum
  8GB    | ✅ Fast      | ✅ Stable      | ✅ Recommended
  16GB   | ✅ Very fast | ✅ Very stable | ✅ Optimal

Alternative (Budget Option):

MEMORY: 6 GB (minimum)
TIMEOUT: 5400s (1.5 hours)
RESTART ON ERROR: ON

⚠️ Important: Manual Configuration

If the default settings don't apply automatically, you can manually configure them in the Apify console:

  1. Go to your Actor in the Apify console
  2. Click on "Settings" tab
  3. Set the following values:
    • Memory: 8192 MB (8 GB)
    • Timeout: 7200 seconds (2 hours)
    • Restart on error: ON

This ensures optimal performance and prevents memory-related crashes.
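
If you start runs through the API instead of the console, the same limits can be passed as run options. A hedged sketch with apify-client (the actor identifier is a placeholder):

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// memory is in megabytes, timeout in seconds (matching the settings above).
const run = await client.actor('<username>/<actor-name>').call(
  { input: 'Lyon' },
  { memory: 8192, timeout: 7200 },
);
console.log(`Run ${run.id} finished with status ${run.status}`);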

🔍 Selectors Used

The scraper uses multiple selectors to maximize extraction:

  • [data-testid="card-container"] a[href*="/rooms/"]
  • a[href*="/rooms/"]
  • [data-testid="listing-card-link"]
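
A sketch of the fallback pattern, where each selector is tried in order until one returns links; this is illustrative, not the actor's exact code:

// Try each selector in order and return room links from the first that matches.
async function collectRoomLinks(page) {
  const selectors = [
    '[data-testid="card-container"] a[href*="/rooms/"]',
    'a[href*="/rooms/"]',
    '[data-testid="listing-card-link"]',
  ];
  for (const selector of selectors) {
    const links = await page.$$eval(selector, (els) => els.map((el) => el.href));
    if (links.length > 0) return links;
  }
  return [];
}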

🚀 Deployment

The scraper is ready for deployment on Apify Cloud:

  • Complete configuration: package.json, Dockerfile, actor.json
  • Optimized code: Based on Airbnb HTML analysis
  • Documentation: Complete README with examples

📝 Important Notes

  • Search URLs: Use Airbnb search URLs (e.g., /s/Paris--France)
  • Pagination: The scraper automatically detects next pages
  • Deduplication: Duplicate URLs are automatically removed
  • Cleaning: Search parameters are removed from URLs

📞 Support

For any questions or issues:

  • Check execution logs
  • Verify Airbnb HTML structure
  • Adapt selectors if necessary
  • Verify that the search URL is valid