Davos AI House Attendees
Extract comprehensive participant data from Davos AI House events. Automatically scrapes event attendees, extracts contact information, professional details, social media links, and enriches data with position types and keywords. Perfect for networking, CRM import, or event analysis.

Pricing: $50.00/month + usage

Developer: Corentin Robert (Maintained by Community)

Extract event participant data from Davos AI House events via the inevent.uk API. This Actor extracts comprehensive participant information from Davos AI House events, including contact details, professional information, social media links, and event-specific data.

What does Davos AI House Attendees do?

The Davos AI House Attendees scraper is designed to help you extract comprehensive participant data from Davos AI House events. The Actor automatically:

  • Extracts participant data including names, emails, roles, companies, and contact information
  • Extracts professional information including LinkedIn profiles (categorized as pro/company), websites, and headlines
  • Enriches data automatically by extracting websites from email domains, detecting position types, and extracting keywords
  • Extracts event-specific data including enrollment dates (with formatted dates), RSVP status, and approval information
  • Handles unlimited pagination with parallel processing (3 requests at once) for optimal speed
  • Manages API authentication with Bearer token support
  • Exports structured data ready for CRM import, networking, or event analysis

The Actor processes all available participants from an event, extracts complete participant profiles, and handles pagination automatically until all results are retrieved.

What can Davos AI House Attendees do?

🚀 Key Features

  • Complete Participant Profiles: Extracts all available fields from inevent.uk API with intelligent data enrichment
  • Parallel Processing: Processes 3 pages simultaneously for 3x faster scraping
  • Unlimited Pagination: Automatically scrapes all available pages until no more results (configurable limit)
  • Contact Information: Extracts emails, assistant emails, and automatically extracts websites from email domains
  • Professional Data: Extracts roles, companies, headlines, and LinkedIn profiles (categorized as pro/company)
  • Data Enrichment: Automatically detects position types (CEO, CTO, Managing Director, etc.) and extracts keywords (AI, fintech, etc.)
  • Social Media Links: Extracts LinkedIn profiles (with type detection) and website URLs (auto-extracted from emails when missing)
  • Event-Specific Data: Extracts enrollment dates (with formatted YYYY-MM-DD dates) and approval status
  • Authentication Handling: Bearer token authentication with automatic error detection
  • Error Recovery: Continues processing even if individual pages fail, with 3 consecutive error confirmation
  • Structured Output: Clean, normalized data ready for immediate use in CSV, JSON, Excel, or HTML formats

🎯 Platform Advantages

The Actor and the Apify platform come as a package. This scraper benefits from:

  • Monitoring & Logs: Real-time execution monitoring with detailed logs to track scraping progress
  • API Access: Access your data programmatically via Apify API for seamless integration
  • Scheduling: Set up automated runs on a schedule to keep your participant database up-to-date
  • Integrations: Connect to Make.com, Zapier, Google Sheets, and more for automated workflows
  • Scalability: Handle large-scale scraping with cloud infrastructure that scales automatically
  • Data Storage: Secure dataset storage with multiple export formats (JSON, CSV, Excel, HTML)
  • Progress Tracking: Automatic CSV export every 5 pages for data safety

What data can Davos AI House Attendees extract?

The Actor extracts comprehensive data from Davos AI House events. Here's what you can extract:

Data Category | Fields Extracted | Description
Basic Information | personID, eventID, username, email | Participant identification and contact
Personal Details | salutation, firstName, lastName, name | Personal information
Professional Information | role, company, summary, image, headline | Professional background
Social Media | linkedIn, linkedInType, website | LinkedIn profiles (categorized as pro/company) and websites (auto-extracted from emails)
Event Data | enrollmentDate, enrollmentDateFormatted, updatedDate, updatedDateFormatted, rsvp, approved | Event participation status with formatted dates (YYYY-MM-DD)
Contact | assistantEmail | Assistant email address
Enriched Data | positionType, positionKeywords | Automatically detected position types (CEO, CTO, etc.) and extracted keywords (AI, fintech, etc.)

Detailed Field Description

Basic Information

  • personID: Unique participant identifier
  • eventID: Event identifier
  • username: Username or email used for login (normalized to lowercase)
  • email: Primary email address (normalized to lowercase)

Personal Details

  • salutation: Title (Mr., Ms., etc.)
  • firstName: First name
  • lastName: Last name
  • name: Full name

Professional Information

  • role: Job title or role
  • company: Company name (normalized, spaces cleaned)
  • summary: Professional summary or bio
  • image: Profile image URL
  • headline: Professional headline

Social Media

  • linkedIn: LinkedIn profile URL (normalized with https://)
  • linkedInType: LinkedIn type - "pro" for personal profiles, "company" for company pages
  • website: Personal or company website (auto-extracted from email domain if not provided, normalized with https://)

Event Data

  • enrollmentDate: Timestamp of enrollment
  • enrollmentDateFormatted: Enrollment date in YYYY-MM-DD format
  • updatedDate: Timestamp of last update
  • updatedDateFormatted: Updated date in YYYY-MM-DD format
  • rsvp: RSVP status
  • approved: Approval status

Contact

  • assistantEmail: Assistant email address (normalized to lowercase)

Enriched Data

  • positionType: Automatically detected position type (CEO, CTO, CFO, CMO, COO, Managing Director, Director, VP, Partner, Investor, Head, Manager, Senior, Lead, Advisor, Executive, Specialist, Analyst, C-Level)
  • positionKeywords: Extracted keywords from headline, role, company, and summary (AI, machine learning, fintech, healthtech, edtech, SaaS, startup, VC, private equity, consulting, innovation, cybersecurity, cloud, etc.)
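
Taken together, each dataset record follows roughly the shape below. This is a TypeScript sketch derived from the field list above, not the Actor's own type definitions; the sample output suggests IDs, timestamps, and flags are returned as strings.

interface Participant {
  // Basic information
  personID: string;
  eventID: string;
  username: string;
  email: string;
  // Personal details
  salutation: string;
  firstName: string;
  lastName: string;
  name: string;
  // Professional information
  role: string;
  company: string;
  summary: string;
  image: string;
  headline: string;
  // Social media
  linkedIn: string;
  linkedInType: 'pro' | 'company' | '';
  website: string;
  // Event data
  enrollmentDate: string;           // Unix timestamp as string
  enrollmentDateFormatted: string;  // YYYY-MM-DD
  updatedDate: string;
  updatedDateFormatted: string;     // YYYY-MM-DD
  rsvp: string;
  approved: string;
  // Contact
  assistantEmail: string;
  // Enriched data
  positionType: string;     // e.g. "CEO", "Managing Director"
  positionKeywords: string; // e.g. "AI, fintech"
}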

How to scrape Davos AI House attendees?

Step-by-Step Tutorial

  1. Get Your Authorization Token: You need a Bearer token to access the API:

    • Open your browser and navigate to the Davos AI House event on inevent.uk
    • Open Developer Tools (F12 or Right-click → Inspect)
    • Go to the Network tab
    • Make a request to the API (e.g., load the participants page)
    • Find the request to api.inevent.uk and check the Headers tab
    • Copy the authorization header value (should start with Bearer )
    • Paste the full authorization token in the authorizationToken field in the Actor input
  2. Configure Input: Click on the Input tab and configure:

    • eventID: Event ID to scrape (e.g., "1988")
    • authorizationToken: REQUIRED - Bearer token from your browser session
    • limit: Number of results per page (default: 15)
    • maxPages: Maximum pages to scrape (set to 0 or null for unlimited scraping)
    • selection: Selection type (default: "public")
    • order: Sort order field (default: "enrollmentDate")
    • orderDirection: Sort direction (default: "asc")
    • timezone: Timezone (default: "Europe/Berlin")
    • lang: Language (default: "en")
  3. Run the Actor: Click Start to begin scraping

  4. Monitor Progress: Watch the logs to see real-time progress for each page

  5. Download Results: Once complete, download your data from the Dataset tab in JSON, CSV, Excel, or HTML format

The Actor automatically:

  • Processes 3 pages in parallel for optimal speed
  • Handles pagination automatically until all results are retrieved
  • Applies progressive delays (300ms → 500ms → 1s) to avoid rate limiting
  • Extracts websites from email domains when website field is empty
  • Categorizes LinkedIn profiles (pro/company)
  • Detects position types and extracts keywords
  • Normalizes all URLs and emails
  • Formats dates in readable YYYY-MM-DD format
  • Saves progress every 5 pages to prevent data loss
  • Continues processing even if individual pages fail (stops after 3 consecutive errors)

How much will it cost to scrape Davos AI House attendees?

Scraping Davos AI House attendees is priced based on Compute Units (CUs) consumed during the Actor run. The cost depends on:

  • Number of participants: More participants mean more pages to process
  • Total pages: The Actor processes all available pages
  • API response time: Faster responses mean lower costs

Estimated costs:

  • Free plan: Test with small events (a few hundred participants)
  • Starter plan: Scrape medium events efficiently
  • Professional plan: Handle large events with thousands of participants

The Actor is optimized to minimize CU consumption by using efficient API calls and configurable pagination limits.

Input

Davos AI House Attendees has the following input options. Click on the Input tab for more information:

  • eventID (string, optional): Event ID to scrape. Default: "1988"
  • authorizationToken (string, REQUIRED): Bearer token for API authentication. Format: Bearer $2a$08$.... Extract it from browser DevTools after loading an inevent.uk event page. See the "How to scrape Davos AI House attendees?" section above for detailed instructions. Note: Without a valid token, the API will return 401/403 authentication errors.
  • limit (integer, optional): Number of results per page. Default: 15
  • maxPages (integer, optional): Maximum number of pages to scrape. Set to 0 or null for unlimited scraping (scrapes until no more results). Default: 0 (unlimited)
  • selection (string, optional): Selection type. Default: "public"
  • order (string, optional): Sort order field. Default: "enrollmentDate"
  • orderDirection (string, optional): Sort direction ("asc" or "desc"). Default: "asc"
  • timezone (string, optional): Timezone for date formatting. Default: "Europe/Berlin"
  • lang (string, optional): Language code. Default: "en"
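
Putting these options together, a run started programmatically might look like the sketch below. It uses the apify-client npm package; the Actor ID, Apify token, and Bearer token are placeholders to replace with your own values.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

const input = {
  eventID: '1988',
  authorizationToken: 'Bearer <token copied from DevTools>',
  limit: 15,
  maxPages: 0,                 // 0 = unlimited
  selection: 'public',
  order: 'enrollmentDate',
  orderDirection: 'asc',
  timezone: 'Europe/Berlin',
  lang: 'en',
};

// '<username>/davos-ai-house-attendees' is a placeholder Actor ID
const run = await client.actor('<username>/davos-ai-house-attendees').call(input);
console.log(`Run ${run.id} finished, dataset: ${run.defaultDatasetId}`);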

Output

You can download the dataset extracted by Davos AI House Attendees in various formats such as JSON, HTML, CSV, or Excel.

Output Example

{
"personID": "901142",
"eventID": "1988",
"username": "office@mnty.ch",
"email": "office@mnty.ch",
"salutation": "Ms.",
"firstName": "Mara-Elsa",
"lastName": "Montoya",
"name": "Mara-Elsa Montoya",
"role": "CEO",
"company": "MNTY Technology Ventures",
"summary": "",
"image": "https://cdn.inevent.uk/250/1988/d11cf770c74a854b3649a0101bb16ffa996ba936.jpeg",
"headline": "CEO @ MNTY Technology Ventures",
"linkedIn": "https://www.linkedin.com/in/maraelsamontoya/",
"linkedInType": "pro",
"website": "https://mnty.ch",
"enrollmentDate": "1765547909",
"enrollmentDateFormatted": "2025-12-12",
"updatedDate": "1767802292",
"updatedDateFormatted": "2026-01-07",
"rsvp": "1",
"approved": "1",
"assistantEmail": "office@mnty.ch",
"positionType": "CEO",
"positionKeywords": "tech, technology"
}
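
As a usage example, once a run has finished you can pull the dataset via the Apify API and segment attendees on the enriched fields. This is a sketch using the apify-client package; the dataset ID placeholder comes from the finished run.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// '<defaultDatasetId>' comes from the finished Actor run
const { items } = await client.dataset('<defaultDatasetId>').listItems();

// Example: all CEOs whose extracted keywords mention AI
const aiCeos = items.filter(
  (p: any) => p.positionType === 'CEO' && String(p.positionKeywords ?? '').includes('AI'),
);
console.log(`${aiCeos.length} of ${items.length} attendees are CEOs with AI keywords`);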

Tips for Best Results

Performance Optimization

  • Event Size: Start with smaller events to test, then scale up
  • Pagination Limits: Use unlimited (maxPages: 0) for complete data, or set limits for faster testing
  • Token Management: Tokens may expire after some time. Update them if you start getting 401 errors
  • Monitor Progress: Check logs regularly to ensure smooth operation

Data Quality

  • Complete Extraction: The Actor extracts all available fields from inevent.uk API
  • Data Enrichment: Automatically enriches data by extracting websites from emails, detecting position types, and extracting keywords
  • Data Normalization: All URLs are normalized (https:// added), emails are lowercased, company names are cleaned
  • Date Formatting: Timestamps are converted to readable YYYY-MM-DD format
  • LinkedIn Categorization: LinkedIn URLs are automatically categorized as "pro" (personal) or "company" (company page)
  • CSV Export: Data is exported to CSV with a semicolon delimiter

Authentication

  • Token Refresh: If you get 401 errors, refresh your token from a new browser session
  • Progressive Delays: The Actor automatically increases delays as it progresses (500ms → 1s → 2s)
  • Error Handling: Stops after 3 consecutive errors to confirm end of data

Our scrapers are ethical and do not extract any private user data beyond what is publicly available through the event's participant list. They only extract participant information that is displayed through inevent.uk's API. We therefore believe that our scrapers, when used for ethical purposes by Apify users, are safe.

However, you should be aware that:

  • The inevent.uk platform requires valid authentication (Bearer token)
  • The scraper uses your own authentication token
  • You should comply with inevent.uk's terms of service
  • Data usage should respect privacy regulations (GDPR, etc.)
  • Personal data protection regulations may apply depending on your use case

If you're unsure whether your use case is legitimate, consult your lawyers. You can also read our blog post on the legality of web scraping.

FAQ

How many participants can I scrape?

The Actor can scrape all participants available in the specified event. The exact number depends on:

  • The total participants in the event
  • The event's access permissions
  • The API rate limits

The Actor will automatically process all pages until no more results are available or authentication errors occur.

How do I get the authorization token?

  1. Visit the Davos AI House event page on inevent.uk in your browser
  2. Open Developer Tools (F12) โ†’ Network tab
  3. Make a request that calls the API (e.g., load participants)
  4. Find the request to api.inevent.uk in the Network tab
  5. Click on it and go to the Headers tab
  6. Find the authorization header (should start with Bearer )
  7. Copy the full value and paste it in the authorizationToken field

Important: Tokens may expire after some time. If you start getting 401 errors, update your token by following the steps above again.

What if a page fails to load?

The Actor includes automatic error handling:

  • Retries failed requests (for server errors 500+)
  • Continues processing other pages if one fails
  • Stops after 3 consecutive errors to confirm end of data
  • Logs all errors for debugging

Can I scrape multiple events?

Currently, the Actor processes one event at a time. You can run multiple Actor instances with different eventID values to scrape multiple events in parallel.
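
For instance, several events can be launched in parallel from a short script. This is a sketch with the apify-client package; the event IDs, Actor ID, and tokens are placeholders.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });
const eventIDs = ['1988', '2001']; // hypothetical event IDs

// Start one run per event; .call() waits for each run to finish
const runs = await Promise.all(
  eventIDs.map((eventID) =>
    client.actor('<username>/davos-ai-house-attendees').call({
      eventID,
      authorizationToken: 'Bearer <token>',
    }),
  ),
);
console.log(runs.map((r) => r.defaultDatasetId));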

How often should I run this Actor?

It depends on your needs:

  • One-time extraction: Run once to get current participant list
  • Regular updates: Run periodically to track new enrollments
  • Event analysis: Run as needed for networking or CRM import

Does the Actor handle pagination?

Yes! The Actor automatically handles pagination:

  • Detects total count from the first page
  • Processes all pages sequentially using offset/limit
  • Stops when no more results are available
  • Confirms end of data with 3 consecutive error checks

What happens if I get blocked (401/403 error)?

If you get a 401 Unauthorized or 403 Forbidden error:

  1. Update your token: Your token may have expired - get a fresh one from your browser
  2. Check event access: Ensure you have access to the event
  3. Reduce concurrency: The Actor already uses progressive delays
  4. Wait and retry: Sometimes waiting a few minutes and retrying helps

The Actor will automatically stop if a 401/403 error is detected and log a clear message.

Can I get support or request features?

Yes! If you encounter issues or have feature requests, please use the Issues tab on the Actor page. We're open to feedback and continuously improving the Actor based on user needs.

Need a custom solution?

If you need a customized version of this Actor for specific requirements, feel free to contact us through the Actor page. We can create tailored solutions based on your needs.

Technical Details

Extraction Process

  1. API Request Construction: Builds GET requests to inevent.uk API endpoint (for Davos AI House events) with event ID, pagination parameters, and authentication
  2. Direct API Calls: Makes direct API calls with proper headers and Bearer token authentication
  3. Pagination Handling: Automatically handles pagination by incrementing offset until no more results
  4. Data Extraction: Parses JSON responses and extracts participant data from structured API responses
  5. Progressive Delays: Applies increasing delays (500ms → 1s → 2s) based on page number to avoid rate limiting
  6. Error Recovery: Implements retry mechanism for server errors and continues processing after individual page failures
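
A rough sketch of this request-and-pagination flow is shown below. The exact api.inevent.uk endpoint path and response shape are not documented here, so both are placeholders/assumptions; the query parameter names mirror the Actor's input options.

// Minimal sketch - not the Actor's actual source code
const BASE_URL = 'https://api.inevent.uk/<participants-endpoint>'; // placeholder path

async function fetchPage(token: string, eventID: string, offset: number, limit: number) {
  const params = new URLSearchParams({
    eventID,
    selection: 'public',
    order: 'enrollmentDate',
    orderDirection: 'asc',
    offset: String(offset),
    limit: String(limit),
  });
  const res = await fetch(`${BASE_URL}?${params}`, {
    headers: { authorization: token }, // full "Bearer ..." value from the browser
  });
  if (res.status === 401 || res.status === 403) {
    throw new Error('Authentication failed - refresh the Bearer token');
  }
  return res.json();
}

async function fetchAllParticipants(token: string, eventID: string, limit = 15) {
  const all: unknown[] = [];
  let offset = 0;
  let consecutiveErrors = 0;
  while (consecutiveErrors < 3) {             // mirror the "3 consecutive errors" rule
    try {
      const page = await fetchPage(token, eventID, offset, limit);
      const rows = (page as any)?.data ?? []; // response shape is an assumption
      if (rows.length === 0) break;           // no more results -> stop
      all.push(...rows);
      offset += limit;
      consecutiveErrors = 0;
    } catch {
      consecutiveErrors += 1;
    }
  }
  return all;
}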

Error Handling

  • Automatic retry mechanism for failed requests (server errors 500+)
  • 401/403 error detection with clear authentication/blocking messages
  • Consecutive error counting (stops after 3 consecutive errors)
  • Graceful error handling that continues processing other pages
  • Progress saving every 5 pages to prevent data loss

Rate Limiting Protection

  • Parallel processing: 3 pages processed simultaneously
  • Progressive delays: 300ms for pages 1-10, 500ms for pages 11-30, 1s for pages 31+
  • Automatic 401/403 detection and stopping
  • Bearer token authentication
  • User-Agent and header mimicking
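
Expressed as a small helper, the delay schedule above looks roughly like this (illustrative only, not the Actor's source):

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Delay per page, following the schedule quoted above
function delayForPage(page: number): number {
  if (page <= 10) return 300;  // pages 1-10
  if (page <= 30) return 500;  // pages 11-30
  return 1000;                 // pages 31+
}

// usage: await sleep(delayForPage(currentPage));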

Data Enrichment Features

  • Website Extraction: Automatically extracts website URLs from email domains (e.g., office@mnty.ch → https://mnty.ch)
  • LinkedIn Categorization: Detects if LinkedIn URL is a personal profile (pro) or company page (company)
  • Position Type Detection: Automatically detects position types from headline and role (CEO, CTO, Managing Director, etc.)
  • Keyword Extraction: Extracts relevant keywords from headline, role, company, and summary (AI, fintech, healthtech, etc.)
  • URL Normalization: Automatically adds https:// to incomplete URLs
  • Email Normalization: Converts all emails to lowercase for consistency
  • Date Formatting: Converts Unix timestamps to readable YYYY-MM-DD format
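
The snippet below sketches how this kind of enrichment can work. It is illustrative, not the Actor's actual implementation; a real version would, for example, need to exclude generic email providers when deriving websites.

// Derive a website from an email domain, e.g. office@mnty.ch -> https://mnty.ch
function websiteFromEmail(email: string): string | null {
  const domain = email.trim().toLowerCase().split('@')[1];
  return domain ? `https://${domain}` : null;
}

// Categorize a LinkedIn URL as a personal profile ("pro") or a company page
function linkedInType(url: string): 'pro' | 'company' | null {
  if (!url) return null;
  return url.includes('/company/') ? 'company' : 'pro';
}

// Convert a Unix timestamp (seconds) to YYYY-MM-DD (timezone handling omitted)
function formatDate(unixSeconds: string | number): string {
  return new Date(Number(unixSeconds) * 1000).toISOString().slice(0, 10);
}

// Detect a position type from headline/role text (keyword list abbreviated)
const POSITION_TYPES = ['CEO', 'CTO', 'CFO', 'Managing Director', 'Director', 'Partner'];
function detectPositionType(text: string): string | null {
  const lower = text.toLowerCase();
  return POSITION_TYPES.find((p) => lower.includes(p.toLowerCase())) ?? null;
}

websiteFromEmail('office@mnty.ch');                            // "https://mnty.ch"
linkedInType('https://www.linkedin.com/in/maraelsamontoya/');  // "pro"
formatDate('1765547909');                                      // "2025-12-12"
detectPositionType('CEO @ MNTY Technology Ventures');          // "CEO"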

Limitations

  • The scraper depends on the inevent.uk API structure (used for Davos AI House events). If the API changes significantly, the Actor may need updates
  • Requires valid Bearer token authentication
  • Rate limiting may occur with very large events; the Actor's progressive delays help mitigate this
  • Tokens expire after some time and need to be refreshed periodically
  • Some events may have limited data availability depending on access permissions

Need help? Check the Issues tab for common problems and solutions, or contact support through the Actor page.