Bayt.com Scraper — MENA Jobs with Salary & Skills Filter

Scrape bayt.com — the leading Middle East job board covering UAE, Saudi Arabia, Qatar, Egypt and more. Salary data, experience requirements, skill-based filtering, career level, and full job descriptions across 10,000+ active listings. Incremental mode detects new jobs.

Pricing

from $1.00 / 1,000 results

Rating

0.0 (0)

Developer

Black Falcon Data (Maintained by Community)

Actor stats

  • Bookmarked: 1
  • Total users: 17
  • Monthly active users: 7
  • Last modified: 4 hours ago


What does Bayt.com Scraper do?

Bayt.com Scraper extracts structured job data from bayt.com — including salary data, apply URLs, company metadata, full descriptions, remote-work indicators, and skill tags. It supports keyword search, location filters, and controllable result limits, so you can run the same query consistently over time. The actor also offers detail enrichment (full descriptions and company metadata) where the source provides them.

New to Apify? Sign up free and use the included $5 monthly platform credit to test this actor.

Key features

  • 🌎 13-market coverage — one actor, one codebase, and one input shape across 13 MENA markets on bayt.com. Pick a country per run, or schedule parallel runs per market for a unified hiring feed.
  • ♻️ Incremental mode — recurring runs emit only NEW / UPDATED / REAPPEARED records — UNCHANGED and EXPIRED are opt-in. The first run builds the baseline; subsequent runs emit and charge only for the diff. Pair with notifications for daily "new jobs" alerts to your hiring team. Typically saves 70–90% on daily monitoring, depending on churn.
  • 🔔 Notifications — Telegram, Slack, Discord, WhatsApp Cloud API, generic webhook — out of the box. Pair with incremental + notifyOnlyChanges for daily "new Bayt jobs" pings to your hiring channel.
  • 💰 Structured salary — two parallel salary blocks per listing — salaryCurrency / salaryMin / salaryMax / salaryPeriod in the native sidebar currency (AED, SAR, QAR, EGP, KWD, BHD, JOD, LBP, OMR, IQD, MAD, PKR, INR), and salaryUsdCurrency / salaryUsdMin / salaryUsdMax / salaryUsdPeriod with Bayt's JSON-LD USD-normalized values when available. Native parsing covers Latin tokens, Arabic-script currency words (درهم, ريال, دينار), and Arabic period markers (شهر, ساعة, يوم, أسبوع, سنة) with country-priority disambiguation for ambiguous symbols.
  • 🔗 Paste-mode — paste any bayt.com URL straight from your browser — single-job pages, search-results URLs, or category SEO URLs. Build the search you want in the UI, copy the URL, paste it here.
  • 📋 Detail enrichment — two-stage mode: list, then enrich each job with the full description + detail-page fields (apply counts, education, etc.). One toggle, no extra orchestration.
  • 📧 Email + phone extraction — every record carries extractedEmails[] and extractedPhones[] regex-pulled from the description — direct-outreach lists with no extra processing step.
  • 🔗 URL + social-profile extraction — every record carries extractedUrls[] and structured socialProfiles { linkedin, twitter, github, … } parsed from the description — useful when employers drop their careers page or recruiter LinkedIn in-line.
  • 📦 Compact mode — AI-agent and MCP-friendly compact payloads with core fields only — pipe straight into your ATS, salary-benchmarking tool, or LLM context without parsing extras.
  • 📤 Export anywhere — Download the dataset as JSON, CSV, or Excel from the Apify Console, or stream live via the Apify API and integrations (Make, Zapier, Google Sheets, n8n, …).
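For orientation, the dual salary output described above can be illustrated with a minimal parsing sketch. This is a hypothetical helper (`parse_salary_text` is not part of the actor) covering only the simple Latin-token case; the actor's real parser additionally handles Arabic-script currency words, period markers, and country-priority disambiguation:

```python
import re

# Currencies the actor documents for the native salary block.
CURRENCIES = ("AED", "SAR", "QAR", "EGP", "KWD", "BHD", "JOD",
              "LBP", "OMR", "IQD", "MAD", "PKR", "INR")

def parse_salary_text(salary_text):
    """Illustrative parser for Bayt-style salary strings such as
    'AED 3,704 - AED 5,556'. Sketch only — not the actor's code."""
    if not salary_text:
        return {"salaryCurrency": None, "salaryMin": None, "salaryMax": None}
    amounts = [int(a.replace(",", "")) for a in re.findall(r"[\d,]+", salary_text)]
    currency = re.search(r"\b(%s)\b" % "|".join(CURRENCIES), salary_text)
    return {
        "salaryCurrency": currency.group(1) if currency else None,
        "salaryMin": min(amounts) if amounts else None,
        "salaryMax": max(amounts) if amounts else None,
    }
```

A single-value string like "SAR 8,000" yields equal `salaryMin` and `salaryMax`, matching the record shape shown in the output example below.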

What data can you extract from bayt.com?

Each result includes core listing fields (jobId, title, location, city, country, salaryText, salaryCurrency, salaryMin, and more), detail fields when enrichment is enabled (description, descriptionHtml, and descriptionMarkdown), apply information (directApply and applyUrl), and company metadata (company, companyUrl, companyLogoUrl, and companySize). In standard mode, all fields are always present — unavailable data points are returned as null, never omitted. In compact mode, only core fields are returned.

Enable detail enrichment in the input to get richer fields such as full descriptions and company metadata where the source provides them.

Input

The main inputs are a search keyword, an optional location filter, and a result limit. Additional filters and options are available in the input schema.

Key parameters:

  • query — Job search keywords (e.g. software engineer, marketing manager, accountant). Bayt matches against job title and description. Use multiple words for narrower results — software engineer returns fewer (more relevant) jobs than software. Leave empty if you use Start URLs below.

  • startUrls — Optional alternative to Search Term(s) — paste Bayt URLs directly:

  • Search URLs (e.g. https://www.bayt.com/en/uae/jobs/?q=accountant) are crawled page-by-page like a normal search. Build the search in your browser with the filters you want, then copy the URL.

  • Job detail URLs (e.g. https://www.bayt.com/en/uae/jobs/it-support-engineer-5444606/) skip the search step entirely and fetch only those specific listings. Useful for refreshing a known set of jobs.

You can mix both types in one run. Filters/Country/Location below are ignored when Start URLs are set.

  • country — Country to search in. The default, INTERNATIONAL, searches all 13 supported Bayt markets at once. Pick a specific country to scope results — recommended for higher-quality matches and faster runs. Supported: UAE, Saudi Arabia, Egypt, Kuwait, Qatar, Bahrain, Jordan, Lebanon, Pakistan, India, Oman, Iraq, Morocco. (default: "INTERNATIONAL")
  • location — Optional city or region within the selected country (e.g. Dubai, Riyadh, Cairo, Doha). Narrows results geographically — leave empty to search the whole country. This maps to Bayt's free-text location field, so use spelling that matches what appears in listings.
  • employmentType — Filter by employment contract type. Full time is the default behavior on Bayt and usually the largest segment. Remote filters for jobs explicitly tagged remote-first by the employer (also surfaced via the isRemote output flag). Leave empty to include all types.
  • careerLevel — Filter by required seniority. Mid career and Senior typically have the most listings; Executive / C-level the fewest. Career level also appears in the output as both the raw careerLevel string and the structured yearsOfExperience field when Bayt provides it.
  • datePosted — Filter by how recently the job was posted. Past 24 hours is the tightest — useful when running on a daily cron to catch fresh jobs. Past 30 days gives the largest pool. Combine with incrementalMode for a near-zero-cost daily delta feed.
  • maxResults — Maximum number of job records to return per run. Set to 0 for unlimited (returns every matching job — can be thousands for broad queries). For ad-hoc exploration start with 10–50; for production monitoring 100–500 is typical. You pay $0.001 per result returned, so this is also your hard cost cap per run. (default: 50)
  • includeDetails — When enabled (default), fetches each job's full detail page to populate description, skills, salaryMin/Max, employmentTypeNormalized, hiringOrganizationId, isAiTranslated, nationality, gender, directApply, and the preferred-candidate criteria. Disable to skip detail fetches — runs ~3× faster but only SERP-card fields are populated. (default: true)
  • descriptionMaxLength — Truncate the description field to N characters. 0 keeps full descriptions. Useful for LLM pipelines where descriptions cost tokens — try 500–2000 to keep enough context while capping prompt size. Does not affect descriptionHtml or descriptionMarkdown. (default: 0)
  • compact — Returns only 11 core fields per record: jobId, title, company, location, salaryText, employmentType, careerLevel, url, postedDate, contentHash, changeType. Designed for AI-agent and MCP workflows where token budget matters. Mutually exclusive with removeEmptyFields (compact wins if both are enabled). (default: false)
  • removeEmptyFields — Recursively drops null, empty string "", and empty array [] values from each record before output. Empty nested objects (e.g. socialProfiles with all-null values) are also pruned. Preserves false and 0 since those are real signals. Recommended for AI agents, Make, Zapier, n8n, and any webhook flow where smaller payloads matter. Keep disabled for fixed-schema warehouse loads (Snowflake/BigQuery COPY INTO) where every row needs identical columns. (default: false)
  • ...and 17 more parameters
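The removeEmptyFields behavior described above can be sketched as a small recursive pruner. This is an illustrative stand-in (`prune_empty` is a hypothetical name, not the actor's implementation): it drops null, empty-string, and empty-array values, prunes nested objects that become empty, and preserves false and 0 as real signals:

```python
def prune_empty(value):
    """Recursively drop None, "", and [] (and nested objects that
    end up empty), preserving False and 0. Sketch only — mirrors the
    documented removeEmptyFields behavior, not the actor's code."""
    if isinstance(value, dict):
        cleaned = {k: prune_empty(v) for k, v in value.items()}
        # Keep a value only if it is not an "empty" marker; note that
        # False and 0 do not equal None, "", [], or {}, so they survive.
        return {k: v for k, v in cleaned.items()
                if v not in (None, "", []) and v != {}}
    if isinstance(value, list):
        return [prune_empty(v) for v in value if v not in (None, "", [])]
    return value
```

A socialProfiles object whose entries are all null prunes away entirely, which is why fixed-schema warehouse loads should keep this option disabled.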

Input examples

Basic search — Keyword-driven search with a result cap.

→ Full payload per result — all standard fields populated where the source provides them.

{
  "query": "software engineer",
  "maxResults": 50
}

Filtered search — Narrow results with advanced filters — only matching jobs are returned.

→ Same field set as basic search; fewer, more relevant rows.

{
  "query": "software engineer",
  "employmentType": "full-time",
  "careerLevel": "student",
  "datePosted": "past-24h",
  "maxResults": 100
}

Incremental tracking — Only emit jobs that changed since the previous run with this stateKey.

→ First run builds the baseline state. Subsequent runs emit only records that are new or whose tracked content changed. Set emitUnchanged: true to include unchanged records as well.

{
  "query": "software engineer",
  "maxResults": 200,
  "incrementalMode": true,
  "stateKey": "software-engineer-tracker"
}

Compact filtered output — Combine filters with compact mode for a lightweight AI-agent or MCP data source.

→ Core fields only — ideal for piping into LLMs or downstream tools without token overhead.

{
  "query": "software engineer",
  "employmentType": "full-time",
  "careerLevel": "student",
  "maxResults": 50,
  "compact": true
}
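Compact mode amounts to projecting each record down to the 11 core fields listed in the input parameters. A minimal sketch (`to_compact` is a hypothetical helper, not the actor's code):

```python
# The 11 compact-mode fields documented for the compact input option.
COMPACT_FIELDS = ["jobId", "title", "company", "location", "salaryText",
                  "employmentType", "careerLevel", "url", "postedDate",
                  "contentHash", "changeType"]

def to_compact(record):
    """Project a full job record down to the compact field set;
    fields absent from the record come back as None."""
    return {field: record.get(field) for field in COMPACT_FIELDS}
```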

Output

Each run produces a dataset of structured job records. Results can be downloaded as JSON, CSV, or Excel from the Dataset tab in Apify Console.

Example job record

{
  "jobId": "738553f78835618eeb0cff2ae19951cebbd5b36bdc3eaaeba69a6c17ac4afb2a",
  "title": "Hiring Now – Remote Data Entry Operator (Part-Time)",
  "company": "M.U.E LLC",
  "companyUrl": "https://www.bayt.com/en/company/m-u-e-llc-2301381/",
  "companyLogoUrl": "https://secure.b8cdn.com/58x58/images/logo/81/2301381_logo_264x120_1778660616_n.png",
  "location": "Dubai, UAE",
  "city": "Dubai",
  "country": "AE",
  "salaryText": "AED 3,704 - AED 5,556",
  "salaryCurrency": "AED",
  "salaryMin": 3704,
  "salaryMax": 5556,
  "salaryPeriod": null,
  "salaryUsdCurrency": "USD",
  "salaryUsdMin": 1000,
  "salaryUsdMax": 1500,
  "salaryUsdPeriod": "MONTH",
  "employmentType": "Part time",
  "employmentTypeNormalized": "PART_TIME",
  "careerLevel": "Entry level",
  "yearsOfExperience": null,
  "industry": null,
  "companySize": null,
  "hiringOrganizationId": "2382502",
  "description": "We are urgently looking for Data Entry Operators to join our remote team for part-time work.Position Details:* Job Type: Part-Time (Contract-Based)* Location: Remote / Work from Home* Working Hours: O...",
  "descriptionHtml": "We are urgently looking for Data Entry Operators to join our remote team for part-time work.Position Details:* Job Type: Part-Time (Contract-Based)* Location: Remote / Work from Home* Working Hours: O...",
  "descriptionMarkdown": "We are urgently looking for Data Entry Operators to join our remote team for part-time work.Position Details:* Job Type: Part-Time (Contract-Based)* Location: Remote / Work from Home* Working Hours: O...",
  "contactName": null,
  "contactEmail": null,
  "contactPhone": null,
  "extractedEmails": [],
  "extractedPhones": [],
  "extractedUrls": [
    "https://dataentryhiring●com/Only"
  ],
  "socialProfiles": {
    "linkedin": null,
    "twitter": null,
    "instagram": null,
    "facebook": null,
    "youtube": null,
    "tiktok": null,
    "github": null,
    "xing": null,
    "bluesky": null,
    "threads": null,
    "mastodon": null
  },
  "skills": "Fast and accurate typing skills* Basic computer knowledge* Attention to detail* Data entry accuracy* Time management skills* Ability to maintain confidentiality",
  "nationality": null,
  "gender": null,
  "directApply": true,
  "totalOpenings": 45,
  "isRemote": false,
  "isExternal": true,
  "isAggregated": false,
  "isAiTranslated": false,
  "url": "https://www.bayt.com/en/uae/jobs/hiring-now-remote-data-entry-operator-part-time-5455751/",
  "applyUrl": "https://www.bayt.com/en/job/apply/index/5455751/",
  "postedDate": "2026-05-14",
  "postedAt": "2026-05-14",
  "validThrough": "2026-06-13T00:00:00Z",
  "portalUrl": "https://www.bayt.com",
  "contentHash": "fd97f3ca0cb43624938e094d254bbbdaefe7f7ae07218c8552a0f95ed994483c",
  "scrapedAt": "2026-05-14T20:47:22.425Z",
  "source": "bayt.com",
  "changeType": "NEW",
  "firstSeenAt": "2026-05-14T20:47:22.425Z",
  "lastSeenAt": "2026-05-14T20:47:22.425Z",
  "previousSeenAt": null,
  "expiredAt": null,
  "isRepost": false,
  "repostOfId": null,
  "repostDetectedAt": null
}

Incremental fields

When incrementalMode: true, each record also carries:

  • changeType — one of NEW, UPDATED, UNCHANGED, REAPPEARED, EXPIRED. Default output covers NEW / UPDATED / REAPPEARED; set emitUnchanged: true or emitExpired: true to opt into the others.
  • firstSeenAt, lastSeenAt — ISO-8601 timestamps tracking the listing across runs.
  • isRepost, repostOfId, repostDetectedAt — populated when a new listing matches the tracked content of a previously expired one. Set skipReposts: true to drop detected reposts from the output.
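The change-detection semantics can be sketched as a hash diff between two runs. This hypothetical `classify_changes` helper mirrors the documented changeType values for one pair of runs; REAPPEARED and repost detection additionally require state about previously expired listings, which this sketch omits:

```python
def classify_changes(previous, current):
    """Classify jobs against the previous run's state.
    `previous` and `current` both map jobId -> contentHash.
    Sketch of the idea only — not the actor's implementation."""
    changes = {}
    for job_id, content_hash in current.items():
        if job_id not in previous:
            changes[job_id] = "NEW"
        elif previous[job_id] != content_hash:
            changes[job_id] = "UPDATED"
        else:
            changes[job_id] = "UNCHANGED"
    # Jobs present last run but missing now are candidates for EXPIRED.
    for job_id in previous:
        if job_id not in current:
            changes[job_id] = "EXPIRED"
    return changes
```

By default the actor emits only the NEW / UPDATED / REAPPEARED subset; UNCHANGED and EXPIRED records require emitUnchanged / emitExpired.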

How to scrape bayt.com

  1. Go to Bayt.com Scraper in Apify Console.
  2. Enter a search keyword and optional location filter.
  3. Set maxResults to control how many results you need.
  4. Keep includeDetails enabled (the default) if you need full descriptions and company data.
  5. Click Start and wait for the run to finish.
  6. Export the dataset as JSON, CSV, or Excel.

Use cases

  • Extract job data from bayt.com for market research and competitive analysis.
  • Track salary trends across regions and categories over time.
  • Monitor new and changed listings on scheduled runs without processing the full dataset every time.
  • Auto-apply or feed apply URLs into your ATS / hiring pipeline.
  • Research company hiring patterns, employer profiles, and industry distribution.
  • Feed structured data into AI agents, MCP tools, and automated pipelines using compact mode.
  • Export clean, structured data to dashboards, spreadsheets, or data warehouses.
  • Analyze skill demand across listings using structured skill tags.

How much does it cost to scrape bayt.com?

Bayt.com Scraper uses pay-per-event pricing. You pay a small fee when the run starts and then for each result that is actually produced.

  • Run start: $0.01 per run
  • Per result: $0.001 per job record

Example costs:

  • 10 results: $0.02
  • 100 results: $0.11
  • 500 results: $0.51
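These example costs follow directly from the pay-per-event formula: start fee plus a per-result fee. A one-line sketch using the documented prices:

```python
def run_cost(results, run_start_fee=0.01, per_result_fee=0.001):
    """Cost of one run under the documented event pricing:
    $0.01 per run start + $0.001 per result returned."""
    return round(run_start_fee + results * per_result_fee, 2)
```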

Example: recurring monitoring savings

These examples compare full re-scrapes with incremental runs at different churn rates. Churn is the share of listings that are new or whose tracked content changed since the previous run. Actual churn depends on your query breadth, source activity, and polling frequency — the scenarios below are examples, not predictions.

Example setup: 200 results per run, daily polling (30 runs/month). Event-pricing examples scale linearly with result count.

Churn rate                   | Full re-scrape run cost | Incremental run cost | Savings vs full re-scrape | Monthly cost after baseline
5% — stable niche query      | $0.21                   | $0.02                | $0.19 (90%)               | $0.60
15% — moderate broad query   | $0.21                   | $0.04                | $0.17 (81%)               | $1.20
30% — high-volume aggregator | $0.21                   | $0.07                | $0.14 (67%)               | $2.10

Full re-scrape monthly cost at daily polling: $6.30. First month with incremental costs $0.79 / $1.37 / $2.24 for the 5% / 15% / 30% scenarios because the first run builds baseline state at full cost before incremental savings apply.
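The table's numbers follow from the same event pricing. This sketch recomputes the full re-scrape and incremental monthly costs under the stated assumptions (200 results per run, 30 runs per month, churn as the fraction of results that changed):

```python
def monthly_costs(results_per_run, churn, runs_per_month=30,
                  run_start_fee=0.01, per_result_fee=0.001):
    """Return (full-re-scrape month, first incremental month,
    steady incremental month) in dollars. Illustrative only."""
    full_run = run_start_fee + results_per_run * per_result_fee
    # Incremental runs pay only for the changed slice of results.
    incr_run = run_start_fee + round(results_per_run * churn) * per_result_fee
    full_month = full_run * runs_per_month
    # First incremental month includes one baseline run at full cost.
    first_incr_month = full_run + incr_run * (runs_per_month - 1)
    steady_incr_month = incr_run * runs_per_month
    return (round(full_month, 2), round(first_incr_month, 2),
            round(steady_incr_month, 2))
```

For the 5% scenario this reproduces $6.30 full re-scrape, $0.79 first incremental month, and $0.60 per month after the baseline.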

FAQ

How many results can I get from bayt.com?

The number of results depends on the search query and available listings on bayt.com. Use the maxResults parameter to control how many results are returned per run.

Does Bayt.com Scraper support recurring monitoring?

Yes. Enable incremental mode to only receive new or changed listings on subsequent runs. This is ideal for scheduled monitoring where you want to track changes over time without re-processing the full dataset.

Can I integrate Bayt.com Scraper with other apps?

Yes. Bayt.com Scraper works with Apify's integrations to connect with tools like Zapier, Make, Google Sheets, Slack, and more. You can also use webhooks to trigger actions when a run completes.

Can I use Bayt.com Scraper with the Apify API?

Yes. You can start runs, manage inputs, and retrieve results programmatically through the Apify API. Client libraries are available for JavaScript, Python, and other languages.

Can I use Bayt.com Scraper through an MCP Server?

Yes. Apify provides an MCP Server that lets AI assistants and agents call this actor directly. Use compact mode and descriptionMaxLength to keep payloads manageable for LLM context windows.

This actor extracts publicly available data from bayt.com. Web scraping of public information is generally considered legal, but you should always review the target site's terms of service and ensure your use case complies with applicable laws and regulations, including GDPR where relevant.

Your feedback

If you have questions, need a feature, or found a bug, please open an issue on the actor's page in Apify Console. Your feedback helps us improve.

You might also like

Getting started with Apify

New to Apify? Create a free account with $5 credit — no credit card required.

  1. Sign up — $5 platform credit included
  2. Open this actor and configure your input
  3. Click Start — export results as JSON, CSV, or Excel

Need more later? See Apify pricing.